Linux Kernel Installation

David Bandel

Issue #43, November 1997

Here's a Kernel Korner geared for the new user. Mr. Bandel guides us through the steps of configuring and compiling the kernel.

Linux is many users' introduction to a truly powerful, configurable operating system. In the past, a Unix-like operating system was out of reach for most. If it wasn't the operating system's 4-digit price tag, it was the hardware. Even the now free-for-personal-use SCO Unixware requires a system with SCSI drives, and most of us are using IDE to keep costs down. Along with the power that Linux brings comes the need to perform a task users have not had to do on simpler operating systems: configure the kernel to your hardware and operations.

With the installation kernels of the 1.2.x series and before, rebuilding was merely suggested; with the new 2.0.x kernels, it has almost become a necessity. The kernel that comes with the installation packages from Red Hat, Caldera, Debian and most others is a generic, “almost everything is included” kernel. While rebuilding a kernel may seem like a daunting task and living with the installed kernel may not be too bad, rebuilding is a good introduction to your system.

Why Roll Your Own?

The standard installation kernels are an attempt to make as many systems as possible usable for the task of installing a workable Linux system. As such, the kernel is bloated and carries a lot of code that is unnecessary for the average machine. It also lacks some code that many users want.

Then, of course, there's always the need to upgrade the kernel because you've bought new hardware, etc. Upgrading within a series is usually very straightforward. Upgrading across series, say from 1.2.something to 2.0.something, is beyond the scope of this article and requires some savvy; better to get a new distribution CD and start fresh. The same is true for upgrading to the experimental 2.1.x kernels.

Kernel Version Numbering

All Linux kernel version numbers contain three numbers separated by periods (dots). The first number is the kernel version. We are now on the third kernel version, 2. Some of you may be running a version 1 kernel, and I am aware of at least one system still running a version 0 kernel.

The second number is the kernel major number. Major numbers which are even numbers (0 is considered an even number) are said to be stable. That is, these kernels should not have any crippling bugs, as they have been fairly heavily tested. While some contain small bugs, they are usually upgraded quickly for hardware compatibility or to armor the kernel against system crackers. For example, kernel 2.0.30, shunned by some in favor of 2.0.29 because of reported bugs, contains several patches including one to protect against SYN denial-of-service attacks. The kernels with odd major numbers are developmental kernels. These have not been tested and, as often as not, will break any software packages you may be running. Occasionally, one works well enough that it will be adopted by users needing the latest and greatest support before the next stable release. This is the exception rather than the rule, and it requires substantial changes to a system.

The last number is the minor number and is increased by one for each release. If you see kernel version 2.0.8, you know it's a version 2.0 stable kernel and the ninth release (we begin counting with 0).

Assumptions

I hate to make any assumptions; they always seem to come back to bite me. So I need to mention a few things so that we're working off the same sheet of music. In order to compile a kernel, you'll need a few things. First, I'll assume you've installed a distribution with a 2.0.x kernel, all the base packages and perhaps a few more. You'll also need to have installed gcc version 2.7 and all the supporting gcc libraries. You'll also need the libc-dev library and the binutils and bin86 packages (normally installed as part of a standard distribution install). If you download the source or copy it from a CD, you'll also need the tar and gunzip packages. Also, you'll need lots of disk real estate. Plan on 7MB to download, another 20MB to unpack this monster and a few more to compile it.
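
If you're not sure everything is in place, a quick check along these lines will tell you before you start. The exact package-query command varies by distribution; rpm -q is an assumption that applies only to RPM-based systems:

gcc -v                     # should report version 2.7.x
which make tar gunzip      # all three should be found
rpm -q binutils bin86      # RPM-based systems only
df /usr/src                # confirm you have the disk space mentioned above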

Needless to say, many of the things we will discuss require you to be logged in as root. If you've downloaded the kernel as a non-privileged user and you have write permission to the /usr/src subdirectory, you can still do much of this task without becoming root. For the newcomers to Linux, I highly recommend you do as much as possible as a non-privileged user and become root (type: su -) only for those jobs that require it. One day, you'll be glad you acquired this habit. Remember, there are two kinds of systems administrators: those who have totally wrecked a running setup inadvertently while logged in as root, and those who will.

Obtaining/Upgrading the Source

Kernel sources for Linux are available from a large number of ftp sites and on almost every Linux distribution CD-ROM. For starters, you can go to ftp.funet.fi, the primary site for the Linux kernel. This site has a list of mirror sites from which you can download the kernel. Choosing the site nearest you helps decrease overall Internet traffic.

Once you've obtained the source, put it in the /usr/src directory. Create a subdirectory to hold the source files once they are unpacked using tar. I recommend naming the directories something like linux-2.0.30 or kernel-2.0.30, substituting your version numbers. Create a link to this subdirectory called linux using the following command:

ln -sf linux-2.0.30 linux

I included the -f in the link command because if you already have a kernel source in /usr/src, the linux link will already exist, and we want to force it to point to our new subdirectory (see the note on ln below). The only time you may have a problem is if linux is a real subdirectory, not a link. If you have this problem, you'll have to rename the subdirectory before continuing:

mv linux linux-2.0.8

Now issue the command:

tar xzvf linux-kernel-source.tar.gz

I have a habit of always including w (wait for confirmation) in the tar option string, then when I see that the .tar.gz or .tgz file is going to unpack into its own subdirectory, I ctrl-C out and reissue the command without the w. This way I can prevent corrupted archives from unpacking into the current directory.

On some versions of ln (notably version 3.13), the force option (-f) does not work. You'll have to first remove the link then establish it again. This works correctly by version 3.16.
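
Putting the steps above together, and working around an ln whose -f option is broken, a session might look like this. The archive name is only an example, and the sketch assumes the archive unpacks into a linux-2.0.30 directory as recommended above; substitute whatever your tarball actually contains:

cd /usr/src
tar xzvf linux-kernel-source.tar.gz      # assuming this creates linux-2.0.30
rm -f linux                              # only where ln -f doesn't work (see above)
ln -sf linux-2.0.30 linux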

Once you have the kernel unpacked, if you have any patches you wish to apply, now is a good time. Let's say you don't wish to run kernel 2.0.30, but you do want the tcp-syn-cookies. Copy the patch (called tcp-syn-cookies-patch-1) into the /usr/src directory and issue the command:

patch < tcp-syn-cookies-patch-1

This command applies the patch to the kernel. Look for files with a .rej extension in the /usr/src directory. These files didn't patch properly. They may be unimportant, but peruse them anyway. If you installed a Red Hat system with some but not all of the kernel source (SPARC, PowerPC, etc.), you'll see some of these files. As long as they're not for your architecture, you're okay.
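
A quick way to hunt for leftover reject files is with find; this is just a convenience, not part of the patch procedure itself:

cd /usr/src
find . -name "*.rej" -print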

Preparation

As a final note, before we change (cd) into the kernel source directory and start building our new kernel, let's check some links that are needed. In your /usr/include directory, make sure you have the following soft links:

asm -> /usr/src/linux/include/asm
linux -> /usr/src/linux/include/linux
scsi -> /usr/src/linux/include/scsi

Now, you see another reason to standardize the location of the kernel. If you don't put the latest kernel you wish to install in /usr/src/linux (via a link), the above links will not reach their intended target (dangling links), and the kernel may fail to compile.
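
If any of these links are missing or dangling, they can be recreated by hand. This sketch assumes the kernel source really is reachable through /usr/src/linux as described above:

cd /usr/include
rm -f asm linux scsi          # remove any stale links first
ln -s /usr/src/linux/include/asm   asm
ln -s /usr/src/linux/include/linux linux
ln -s /usr/src/linux/include/scsi  scsi
ls -l asm linux scsi          # verify the links point where you expect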

How to Proceed

Once everything else is set up, change directories (cd) into /usr/src/linux, though you may want to stop off and peruse some of the documentation in the Documentation directory first, particularly if you have any special hardware needs. Also, several of the CD-ROM drivers need to be built with customized settings. While they usually work as is, these drivers may give warning messages when loaded. If this doesn't bother you and they work as they should, don't worry. Otherwise, read the appropriate .txt, .h (header) and .c (C code) files. For the most part, I have found them to be well commented and easy to configure. If you don't feel brave, you don't have to do it. Just remember you can always restore the original file by unpacking the gzipped tar file (or reinstalling the .rpm files) again.

Beginning to Compile

The first command I recommend you issue is:

make mrproper

While this command is not necessary when the kernel source is in pristine condition, it is a good habit to cultivate. This command ensures that old object files are not littering the source tree and are not used or in the way.

Configuring the Kernel

Now, you're ready to configure the kernel. Before starting, you'll need to understand a little about modules. Think of a module as something you can plug into the kernel for a special purpose. If you have a small network at home and sometimes want to use it (but not always), maybe you'll want to compile your Ethernet card as a module. To use a module, the machine must be running and have access to the /lib/modules directory. This means that the driver for the drive (IDE, SCSI, etc., but it could be an Ethernet card in the case of NFS), the file system (normally ext2, but it could be NFS) and the kernel type (hopefully ELF) must be compiled in and cannot be modules. Modules aren't available until the kernel is loaded, the drive (or network) accessed and the file system mounted. These drivers must be compiled into the kernel, or it will not be able to mount the root partition. If you're mounting the root partition over the network, you'll need the network file system support and your Ethernet card driver compiled in.

Why use modules? Modules make the kernel smaller, which reduces the amount of protected memory permanently claimed by the kernel; modules load and unload, and their memory can be reallocated. If you use a module more than about 90% of the time the machine is up, compile it in. Using a module in this case can be wasteful of memory, because while the module takes up the same amount of memory as if it were compiled in, the kernel needs a little extra code to provide a hook for the module. Remember, the kernel runs in protected space, but the modules don't. That said, I don't often follow my own advice. I compile in ext2, IDE and ELF support only. While I use an Ethernet card almost all the time, I compile everything else as modules: a.out, java, floppy, iso9660, msdos, minix, vfat, smb, nfs, smc-ultra (Ethernet card), serial, printer, sound, ppp, etc. Many of these run only for a few minutes at a time here and there.
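
Once the new kernel and modules are installed, you can see this load-and-unload behavior for yourself from the command line. The smc-ultra module is just the card used as an example in this article; substitute your own module name:

lsmod                     # list modules currently loaded
insmod smc-ultra          # load the Ethernet driver by hand
rmmod smc-ultra           # unload it again, freeing its memory
lsmod                     # confirm it is gone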

The next step is to configure the kernel. Here we have three choices—while all do the same thing, I recommend using one of the graphical methods. The old way was to simply type: make config. This begins a long series of questions. However, if you make a mistake, your only option is to press ctrl-C and begin again. You also can't go back in the sequence, and some questions depend on previous answers. If for some reason you absolutely can't use either of the graphical methods, be my guest.

I recommend using either make menuconfig or make xconfig. In order to use menuconfig, you must have installed the ncurses-dev library; the tk4-dev libraries are needed for xconfig. If you didn't install them and you don't want to use the next method, I highly recommend that you install them now. You can always uninstall them later.

To run make xconfig, you must install and configure X. Since X is such a memory hog, I install X, configure it and run startx only for this portion of the process, going back to a console while the kernel compiles so it can have all the memory it needs. The xconfig menu is, in my opinion, the best and easiest way to configure the kernel. Under menuconfig, if you disable an option, any subordinate options are not shown. Under xconfig, if you disable an option, subordinate options still show; they are just greyed out. I like this because I can see what's been added since the last kernel. I may want to enable an option to get one of the new sub-options in order to experiment with it.

I'm going to take some space here to describe the sections in the kernel configuration and tell you some of the things I've discovered—mostly the hard way.

The first section is the code-maturity-level option. The only question is whether you want to use developmental drivers and code. You may not have a choice if you have some bleeding edge hardware. If you choose “no”, the experimental code is greyed out or not shown. If you use this kernel for commercial production purposes, you'll probably want to choose “no”.

The second section concerns modules. If you want modules, choose “yes” for questions 1 and 3. If you want to use proprietary modules that come with certain distributions, such as Caldera's OpenLinux for their Netware support, also answer “yes” to the second question since you won't be able to recompile the module.

The third section is general setup. Do compile the kernel as ELF and compile support for ELF binaries; not compiling the proper support is a definite “gotcha”. You'll get more efficient code compiling the kernel for the machine's specific architecture (Pentium or 486), but a 386 kernel will run on any 32-bit Intel-compatible clone, while a Pentium kernel won't run on a 386. For that reason, an emergency boot disk intended for a large number of computers (as well as distribution install disks) is best compiled as a 386.

Next comes block devices—nothing special here. If your root device is on an IDE drive, just make sure you compile it.

Then comes networking. For a computer not connected to a network, you won't need much here, unless you plan to use one computer to dial out while others connect through it. In that case, you'll need to read up on such things as masquerading and follow the suggested guidelines.

SCSI support is next, though why it doesn't directly follow block devices I don't know. If your root partition is on a SCSI device, don't choose modules for SCSI support.

SCSI low-level drivers follow general SCSI support. Again, modules only for devices that don't contain the root partition.

The next section takes us back to networking again. Expect to do a lot of looking for your particular card here as well as some other support such as ppp, slip, etc. If you use nfs to mount your root device, compile in Ethernet support.

For those lucky enough to need ISDN support, the ISDN subsection will need to be completed.

Older CD-ROMs may require support from the next section. If you're using a SCSI or IDE CD-ROM, you can skip this one.

Next comes file systems. Again, compile what you need, in most cases ext2 and use modules for the rest.

Character devices are chosen next. Non-serial mice, like the PS/2 mouse, are supported. Look on the bottom of your mouse. Many two-button mice are PS/2 type, even though they look and connect like serial mice. You'll almost certainly want serial support (generic) as a minimum. Generic printer support is also listed here.

The penultimate section is often the most troubling: sound. Choose carefully from the list and read the available help. Make sure you've chosen the correct I/O base and IRQs for your card. The MPU I/O base for a SoundBlaster card is listed as 0. This is normally 330 and your sound module will complain if this value is incorrect. Don't worry. One of the nice things about modules is you can recompile and reinstall the modules as long as the kernel was compiled with the hook. (Aren't modules great?).

The final section, kernel hacking, contains one question that should probably be answered “no”.

Save your configuration and exit.

I have, on several occasions, had trouble editing the numbers in menuconfig or xconfig to values I knew were correct. For whatever reason, I couldn't change the number, or config wouldn't accept the number, telling me it was invalid: for example, changing the SoundBlaster IRQ from the config default of 7 to 5, or the MPU base I/O from 0 to 300. If you experience this problem, but everything else went well, don't despair. The file you just wrote when you did a “Save” and “Exit” is an editable text file. You may use your text editor of choice: Emacs, vi, CrispLite, joe, etc. Your configuration file is in the /usr/src/linux directory and is called .config. The leading dot causes the file to be hidden during a normal directory listing (ls), but it shows up when the -a option is specified. Just edit the numbers in this file that you had trouble with in the configuration process. Next, type make dep to propagate your configuration from the .config file to the proper subdirectories and to complete the setup. Finally, type make clean to prepare for the final kernel build.
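
In other words, the end of the configuration stage, including the hand edit if you needed it, boils down to something like this (the editor and the particular values you fix are only examples):

cd /usr/src/linux
vi .config        # correct any values the config tool wouldn't accept
make dep
make clean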

Building the Kernel

We're now ready to begin building the kernel. There are several options for accomplishing this task:

  • make zImage: makes the basic, compressed kernel and leaves it in the /usr/src/linux/arch/i386/boot directory as zImage.

  • make zlilo: Copies the zImage to the root directory (unless you edited the top-level Makefile) and runs LILO. If you choose to use this option, you'll have to ensure that /etc/lilo.conf is preconfigured.

  • make zdisk: Writes zImage to a floppy disk in /dev/fd0 (the first floppy drive—the a: drive in DOS). You'll need the disk in the drive before you start. You can accomplish the same thing by running make zImage and copying the image to a floppy disk with cp /usr/src/linux/arch/i386/boot/zImage /dev/fd0. Note that you'll need to use a high-density disk; the low-density 720K disks reportedly will not boot the kernel.

  • make boot: Works just the same as the zImage option.

  • make bzImage: Used for big kernels and operates the same as zImage. You will know if you need this option, because make will fail with a message that the image is too big.

  • make bzdisk: Used for big kernels and operates the same as zdisk. You will know if you need this option, because make will fail with a message that the image is too big.

Other make options are available, but are specialized, and are not covered here. Also, if you need specialized support, such as for a RAM disk or SMP, read the appropriate documentation and edit the Makefile in /usr/src/linux (also called the top-level Makefile) accordingly. Since all the options I discussed above are basically the same as the zImage option, the rest of this article deals with make zImage--it is the easiest way to build the kernel.

For those of you who wish to speed up the process and won't be doing other things (such as configuring other applications), I suggest you look at the man page for make and try out the -j option (perhaps with a limit like 5) and also the -l option.
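
For example, on a machine that can be left alone while it compiles, something along these lines may shorten the build considerably (the numbers are arbitrary; read the make man page before picking your own):

make -j 5 -l 5 zImage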

If you chose modules during the configuration process, you'll want to issue the commands:

make modules
make modules_install

to put the modules in their default location of /lib/modules/2.0.x/, x being the kernel minor number. If you already have this subdirectory and it has subdirectories such as block, net, scsi, cdrom, etc., you may want to remove 2.0.x and everything below it unless you have some proprietary modules installed, in which case don't remove it. When the modules are installed, the subdirectories are created and populated.
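
If you want to check for an old module tree first, as suggested above, something like this will do; 2.0.30 here merely stands in for your own 2.0.x, and be sure nothing in the tree (proprietary modules, for instance) needs to be kept before removing it:

ls /lib/modules/2.0.30        # see what's there (block, net, scsi, cdrom, ...)
rm -r /lib/modules/2.0.30     # only if nothing in it needs to be kept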

You could just as easily have combined the last three commands:

make zImage; make modules; make modules_install

then returned after all the disk churning finished. The ; (semicolon) character separates sequential commands on one line and performs each command in order so that you don't have to wait around just to issue the next command.

Once your kernel is built and your modules installed, we have a few more items to take care of. First, copy your kernel to the root (or /boot/ or /etc/, if you wish):

cp /usr/src/linux/arch/i386/boot/zImage /zImage

You should also copy the /usr/src/linux/System.map file to the same directory as the kernel image. Then change (cd) to the /etc directory to configure LILO. This is a very important step. If we don't install a pointer to the new kernel, it won't boot. Normally, an install kernel is called vmlinuz. Old-time Unix users will recognize the construction of this name. The trailing “z” means the image is compressed. The “v” and “m” also have significance and mean “virtual” and “sticky” respectively and pertain to memory and disk management. I suggest you leave the vmlinuz kernel in place, since you know it works.
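
The System.map copy is a one-liner; this assumes you put zImage in the root directory as shown above:

cp /usr/src/linux/System.map /System.map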

Edit the /etc/lilo.conf file to add your new kernel. Use the lines from the image=/vmlinuz line to the next image= line or the end of the file. Duplicate what you see, then change the first line of the copy to image=/zImage (assuming your kernel is in the root directory) and choose a different name for the label= (a sample entry appears below). The first image in the file is the default; any others will have to be specified at the boot prompt in order to boot them. Save the file and type:

lilo

You will now see the kernel labels, and the first one will have an asterisk. If you don't see the label that you gave your new kernel or LILO terminates with an error, you'll need to redo your work in /etc/lilo.conf (see LILO man pages).
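
For reference, a /etc/lilo.conf along the lines described above might look something like this. The boot device, root partition and labels are illustrative assumptions only; keep the values your file already has:

boot=/dev/hda
prompt
timeout=50

# the known-good installation kernel; the first image listed is the default
image=/vmlinuz
        label=linux
        root=/dev/hda1
        read-only

# the kernel we just built; type its label at the boot: prompt to test it
image=/zImage
        label=new
        root=/dev/hda1
        read-only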

We're almost ready to reboot. At this point, if you know your system will only require one reboot to run properly, you might want to issue the command:

depmod -a 2.0.x

where x is the minor number of the kernel you just built. This command creates the dependencies file some modules need. You'll also want to make sure you don't boot directly into xdm. For Red Hat type systems, this means ensuring the /etc/inittab file doesn't have a default run level of 5, or that you remember to pass LILO the run level at boot time. For Debian systems, you can just type:

mv /etc/init.d/xdm /etc/init.d/xdm.orig

for now and move it back later.

Rebooting the New Kernel

Reboot your machine using:

shutdown -r now

While typing reboot or pressing the ctrl+alt+del key combination usually works, I don't recommend either one. Under some circumstances, the file systems won't be properly unmounted and could corrupt open files. At the LILO prompt, if you need to boot the old kernel or pass some parameters for bootup and you don't see the boot: prompt, you can try pressing either the shift or ctrl key, and the boot: prompt should appear. Once you have it, press tab to see the available kernel labels. Type the label and optionally enter any parameters for bootup. Normally, however, the default kernel should boot automatically after the timeout interval specified in the /etc/lilo.conf file. During bootup, you may see a few error messages containing: SIOCADDR or the like. These usually indicate that a module (normally a network module) didn't load. We'll handle this shortly. If you got the error, “VFS, cannot mount root”, you didn't compile the proper disk or file-system support into the kernel.

Troubleshooting

Due to the different ways in which each distribution handles daemon startup from /etc/inittab, it is difficult to cover in this article all the possible reasons your bootup may not have gone smoothly. However, I can tell you where to start looking.

First, run depmod -a to ensure you have an up-to-date, module dependency file (it will be created in the appropriate subdirectory). If you get a string of errors about unresolved dependencies, old modules are present in the modules subdirectories, and you didn't configure the kernel with “Module Versions” enabled. This is not a fatal error. The modules you compiled and installed are good. Check the /etc/conf.modules file and make sure that any lines pointing to /lib/modules are complete:

/lib/modules/`uname -r`/xx

(Note: the grave quote on each side of uname -r is located on the key in the upper left corner of a U.S. keyboard, above the Tab key.)
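
Purely as an illustration (the module names, I/O addresses and IRQ here are assumptions based on the hardware mentioned earlier in this article; yours will differ), lines in /etc/conf.modules often look like this:

alias eth0 smc-ultra
options smc-ultra io=0x280
options sb io=0x220 irq=5 dma=1 mpu_io=0x330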

Make sure kerneld is running and that it is loaded early in the bootup process. If it is, the system doesn't need to load modules explicitly; kerneld will handle it. Be careful about calling kerneld too early in the first rc script: if it is called before the system knows its host name, kerneld will stop the bootup process, forcing a hard reboot via the reset button or power switch. If this happens to you, you can reboot passing LILO the -b argument, which prevents init from executing any rc scripts. Next, look in /etc/rc.d/ at the rc, rc.sysinit and rc.modules files. One or more may point to a directory such as /etc/modules/`uname -r`/`uname -v` where a list of bootup modules is located. You can just copy the old file over to the new directory:

mkdir /etc/modules/`uname -r`
cp "/etc/modules/2.0.xx/#1 Thu 3 Sep 1997.default" \
        "/etc/modules/`uname -r`/`uname -v`.default"

Your system will almost certainly have a different date for the modules file. Your system also may or may not use the default extension. Pay close attention to the use of grave quotes and double quotes in the above example, since both are needed in the proper places. Once you have found the keys to your system, you should be able to reboot into a properly functioning system. If you experience further problems, the best place to get quick, expert advice is on a mailing list dedicated to your particular distribution. Those successfully running a particular distribution usually delight in assisting novices with problems they may encounter. Why? Because they hit the same brick walls when they were novices and received help with many problems. Lurk a few days on a list, and if your question isn't asked by someone else, ask it yourself. Check the mail-list archives first, if any are present. These archives contain answers to frequently asked questions (FAQ).

Conclusion

While building a kernel tailored to your system may seem a daunting challenge for new administrators, the time spent is worth it. Your system will run more efficiently, and more importantly, you will have the satisfaction of building it yourself.

One of the few areas where you may encounter trouble is remembering to rerun LILO after installing the new kernel. Fortunately, you didn't overwrite your old kernel (or did you?), so you can always revert to one that worked from the LILO boot: prompt. Distribution-specific problems may also be encountered during the first reboot, but they are usually easily resolved. Help is normally only an e-mail away for those distributions that don't come with technical support.

David Bandel is a Computer Network Consultant specializing in Linux, but he begrudgingly works with Windows and those “real” Unix boxes like DEC 5000s and Suns. When he's not working, he can be found hacking his own system or enjoying the view of Seattle from 2,500 feet up in an airplane. He welcomes your comments, criticisms, witticisms, and will be happy to further obfuscate the issue. You may reach him via e-mail at dbandel@ix.netcom.com or snail mail c/o Linux Journal.