Introduction to Multi-Threaded Programming

Brian Masney

Issue #61, May 1999

A description of POSIX thread basics for C programmers.

The purpose of this article is to provide a solid foundation in the basics of threaded programming using POSIX threads; it is not meant to be a complete reference on the subject. It assumes the reader has a strong foundation in C programming.

A thread is sometimes referred to as a lightweight process. A thread shares all global variables and file descriptors of the parent process, which allows the programmer to separate multiple tasks easily within a process. For example, you could write a multi-threaded web server that spawns a thread for each incoming connection request. This keeps the network code inside each thread relatively simple. Using multiple threads also uses fewer system resources than forking a child process to handle each connection request. Another advantage of using threads is that they automatically take advantage of machines with multiple processors.

As I mentioned earlier, a thread shares most of its resources with the parent process, so a thread uses fewer resources than a process would. It shares everything, except that each thread has its own program counter, registers and stack. Since each thread has its own stack, and local (automatic) variables are stored on the stack, local variables are not shared between threads. Static variables, however, are stored in the process' data segment, so a static variable inside a thread function is shared between all of the threads. Functions like strtok, which keep state in a static variable, will not work properly inside threads without modification. Re-entrant versions are available for use with threads and follow the naming convention oldfunction_r; thus, strtok's re-entrant version is strtok_r.
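
As an illustration (the function and variable names here are my own, not from the article), here is a minimal sketch of tokenizing a string with strtok_r; the saveptr variable holds the parsing state that strtok would otherwise keep in a static variable. Depending on your system, you may need to define _REENTRANT, as described later in this article, for strtok_r to be declared:

#include <stdio.h>
#include <string.h>

void print_words(char *line)
{
   char *saveptr;   /* holds the state strtok would keep in a static variable */
   char *word;

   for (word = strtok_r(line, " ", &saveptr); word != NULL;
        word = strtok_r(NULL, " ", &saveptr))
      printf("%s\n", word);
}

int main(void)
{
   char line[] = "one two three";

   print_words(line);
   return 0;
}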

Since all threads of a process share the same global variables, a problem arises with synchronizing access to those variables. For example, let's assume you have a global variable X and two threads A and B, and that each thread merely increments the value of X. When thread A begins execution, it copies the value of X into a register and increments it. Before it gets a chance to write the value back to memory, this thread is suspended. Thread B starts, reads the same value of X that thread A read, increments it and writes it back to memory. Then thread A finishes execution and writes its value from the register back to memory. After these two threads finish, the value of X has been incremented by 1 instead of 2 as you would expect.

Errors like this will probably not occur every time the program runs, and so they can be very hard to track down. The problem becomes even worse on a machine equipped with multiple processors, since multiple threads can be running at the same time on different processors, each of them modifying the same variables. The solution is to use a mutex (mutual exclusion lock) to make sure only one thread at a time is executing a particular section of your code. When one thread locks the mutex, it has exclusive access to that section of code until it unlocks the mutex. If a second thread tries to lock the mutex while another thread has it locked, the second thread will block until the mutex is unlocked and once more available.

In the last example, you could lock a mutex before incrementing the variable X, then unlock the mutex after the increment. Let's go back to that example. Thread A locks the mutex, loads the value of X into a register and increments it. Again, before it gets a chance to write the value back to memory, thread B gets control of the CPU. Thread B tries to lock the mutex, but thread A already holds it, so thread B has to wait. Thread A gets the CPU again, writes the value of X from the register back to memory, then unlocks the mutex. The next time thread B runs and tries to lock the mutex, it succeeds, since the mutex is now free. Thread B reads the updated value of X, increments it and writes it back to memory. Now, after both threads have completed, the value of X has been incremented by 2, as you would expect.
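
Expressed in code, using the pthread_mutex_lock and pthread_mutex_unlock functions described later in this article, the protected increment might look like the following fragment (a sketch with names of my own choosing):

#include <pthread.h>

pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;
int x = 0;

void increment_x(void)
{
   pthread_mutex_lock(&x_lock);    /* only one thread at a time gets past this point */
   x = x + 1;                      /* the read, increment and write complete without
                                      interruption from any other thread that locks x_lock */
   pthread_mutex_unlock(&x_lock);  /* give other threads a chance to lock the mutex */
}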

Now let's look at how to actually write threaded applications. The first function you will need is pthread_create. It has the following prototype:

int pthread_create(pthread_t *tid,
   const pthread_attr_t *attr,
   void *(*func)(void *), void *arg)

The first argument is the variable in which the new thread's ID will be stored. Each thread has its own unique thread ID. The second argument contains attributes describing the thread; you can usually just pass a NULL pointer. The third argument is a pointer to the function you want to run as a thread. The final argument is a pointer to data you want to pass to that function. If you want to exit from a thread, you can use the pthread_exit function. It has the following syntax:

void pthread_exit(void *status)

The status pointer passed to pthread_exit can be retrieved later by another thread (see below). You cannot return a pointer to data that is local to the thread, since that data is destroyed when the thread exits.
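
As a sketch of that rule (the names below are mine, not the article's), a thread function should return a pointer to heap memory, not to one of its local variables:

#include <stdlib.h>
#include <pthread.h>

void *compute(void *arg)
{
   int local_result = 42;             /* lives on this thread's stack */
   int *result = malloc(sizeof(int)); /* heap memory outlives the thread */

   *result = local_result;
   /* pthread_exit(&local_result) would be wrong: the stack is destroyed
      when the thread exits, so the caller would read invalid memory */
   pthread_exit(result);
}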

The prototype of the thread function shows that it returns a void * pointer. Your application can use the pthread_join function to retrieve the value a thread returned. The pthread_join function has the following syntax:

int pthread_join(pthread_t tid, void **status)

The first argument is the ID of the thread to wait for. The second argument is the address of a pointer in which the thread's return value will be stored. The system keeps track of the return value of each thread until you retrieve it with pthread_join. If you do not care about the return value, you can call the pthread_detach function, with the thread ID as its only parameter, to tell the system to discard the return value. A thread can obtain its own thread ID by calling the pthread_self function, so if you don't want the return value kept around, you can call pthread_detach(pthread_self()) inside your thread function.
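
Putting pthread_create, pthread_exit and pthread_join together, a minimal complete sketch (the function and variable names are my own) might look like this:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *compute(void *arg)
{
   int *result = malloc(sizeof(int));

   *result = *(int *) arg * 2;      /* do some "work" on the argument */
   pthread_exit(result);            /* return a pointer to heap memory */
}

int main(void)
{
   pthread_t tid;
   int input = 21;
   void *status;

   pthread_create(&tid, NULL, compute, &input);
   pthread_join(tid, &status);      /* wait for the thread and collect its return value */
   printf("the thread returned %d\n", *(int *) status);
   free(status);
   return 0;
}

On most systems, something like cc example.c -lpthread (or gcc -pthread example.c) will compile it.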

Going back to mutexes, the following two functions are available to us: pthread_mutex_lock and pthread_mutex_unlock. They have the following prototypes:

int pthread_mutex_lock(pthread_mutex_t *mptr)
int pthread_mutex_unlock(pthread_mutex_t *mptr)

For a statically allocated mutex, you must initialize the mutex variable to the constant PTHREAD_MUTEX_INITIALIZER. For a dynamically allocated mutex, you can use the pthread_mutex_init function to initialize it. It has the following prototype:

int pthread_mutex_init(pthread_mutex_t *mutex,
   const pthread_mutexattr_t *mutexattr)
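
Here is a sketch of both initialization styles (the variable and function names are mine, not from the article):

#include <stdlib.h>
#include <pthread.h>

/* statically allocated: initialize with the constant */
pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

void dynamic_mutex_example(void)
{
   /* dynamically allocated: initialize with pthread_mutex_init */
   pthread_mutex_t *lock = malloc(sizeof(pthread_mutex_t));

   pthread_mutex_init(lock, NULL);  /* NULL requests the default mutex attributes */

   pthread_mutex_lock(lock);
   /* ... code that touches the shared data ... */
   pthread_mutex_unlock(lock);

   pthread_mutex_destroy(lock);     /* clean up when the mutex is no longer needed */
   free(lock);
}
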
Now we can look at actual code, shown in Listing 1. I have commented the code to help the reader follow what is being done, and I have kept the program very basic. It does nothing truly useful, but it should help illustrate the idea of threads. All the program does is start 10 threads, each of which increments X until X reaches 4,000. You can remove the pthread_mutex_lock and unlock calls to further illustrate the use of mutexes.

Listing 1. Example Program
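
Listing 1 is not reproduced here, but a program along the lines it describes (ten threads sharing a counter X that is incremented, under a mutex, until it reaches 4,000) might look roughly like the following sketch; this is my reconstruction, not the article's exact listing:

#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 10
#define MAX_COUNT   4000

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int x = 0;

void *increment(void *arg)
{
   int done = 0;

   while (!done) {
      pthread_mutex_lock(&lock);    /* protect the shared variable X */
      if (x < MAX_COUNT)
         x = x + 1;
      else
         done = 1;
      pthread_mutex_unlock(&lock);
   }
   return NULL;
}

int main(void)
{
   pthread_t tids[NUM_THREADS];
   int i;

   for (i = 0; i < NUM_THREADS; i++)
      pthread_create(&tids[i], NULL, increment, NULL);
   for (i = 0; i < NUM_THREADS; i++)
      pthread_join(tids[i], NULL);  /* wait for every thread to finish */

   printf("final value of X: %d\n", x);
   return 0;
}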

A few more items need to be explained about this program. The threads on your system may run in the order they were created, and they may run to completion before the next thread runs; there is no guarantee as to what order the threads run in, or that they will run to completion uninterrupted. If you put “real work” inside the thread function, you will see the scheduler switching between threads. You may also notice that if you take out the mutex lock and unlock calls, the final value of X may still be what you expected; it all depends on when threads are suspended and resumed. A threaded application may appear to run fine at first, but when it is run on a machine with many other things running at the same time, the program may crash. Finding these kinds of problems can be very difficult for the application programmer, which is why the programmer must make sure shared variables are protected with mutexes.

What about the value of the global variable errno? Suppose we have two threads, A and B, already running and at different points in their thread functions. Thread A calls a function that sets the value of errno. Then thread B wakes up and checks the value of errno; it sees the value just set by thread A's call rather than the value it was expecting. To get around this, we must define _REENTRANT. This changes errno to refer to a per-thread errno location, transparently to the application programmer. The _REENTRANT macro also changes the behavior of some of the standard C functions.
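
In practice, that means defining _REENTRANT before any header is included, or passing it on the compiler command line; here is a sketch (on modern toolchains, compiling with gcc -pthread typically takes care of this for you):

/* Define it at the top of every source file, before any #include ... */
#define _REENTRANT

#include <errno.h>
#include <pthread.h>

/* ... or define it when compiling instead, for example:
 *
 *    cc -D_REENTRANT example.c -lpthread
 *
 * With _REENTRANT defined, errno refers to a per-thread location, so one
 * thread's system calls do not disturb the errno seen by another thread. */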

To obtain more information about threads, visit the LinuxThreads home page at http://pauillac.inria.fr/~xleroy/linuxthreads/. This page contains links to many examples and tutorials. It also has a link where you can download the thread libraries if you do not already have them. Downloading is necessary only if you have a libc5-based machine; if your distribution is based on glibc (libc6), LinuxThreads should already be installed on your computer. The source code for a threaded application that I wrote, gFTP, can be downloaded from my web site at http://www.newwave.net/~masneyb/. This code makes use of all the concepts mentioned in this article.


Brian Masney is currently a student at Concord College in Athens, WV. He also works as a computer technician at a local computer store. In his spare time, he enjoys the outdoors and programming. He can be reached at masneyb@newwave.net.