This is the second article in a series about interprocess communication (IPC) in Linux. The first article focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a write end for writing bytes, and a read end for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.
Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at message queues, which have fallen out of fashion—but undeservedly so.
The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The obvious question is whether channel-based IPC, the topic of this article, is likewise safe under concurrency. The code examples for pipes and message queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.
Consider the man pages for the mq_open function, which belongs to the message queue API. These pages include a section on Attributes with this small table:

Interface    Attribute        Value
mq_open()    Thread safety    MT-Safe
The value MT-Safe (with MT for multi-threaded) means that the mq_open function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the same process, such a condition cannot arise among threads in different processes. The MT-Safe attribute assures that a race condition does not arise in invocations of mq_open. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar | represents an unnamed pipe at the command line. Assume % is the command line prompt, and consider this command:
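The command itself is reconstructed here from the description that follows; it pipes a five-second sleep into an echo of the greeting:

```shell
% sleep 5 | echo "Hello, world!"  ## writer to the left, reader to the right
```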
The sleep and echo utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting Hello, world! appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the sleep and echo processes have exited. What's going on?
In the vertical-bar syntax from the command line, the process to the left (sleep) is the writer, and the process to the right (echo) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.
In the contrived example, the sleep process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the echo process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the sleep and echo processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.
Here is a more useful example using two unnamed pipes. Suppose that the file test.dat contains a handful of text records, some of them duplicates. The following command pipes the output from the cat (concatenate) process into the sort process to produce sorted output, and then pipes the sorted output into the uniq process to eliminate duplicate records, reducing each run of duplicates to a single occurrence.
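The session below is a sketch; the contents of test.dat are hypothetical, chosen only to show the deduplication:

```shell
% cat test.dat        ## contents are hypothetical
this is the way
the world ends
this is the way
% cat test.dat | sort | uniq
the world ends
this is the way
```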
The scene now is set for a program with two processes that communicate through an unnamed pipe.
Example 1. Two processes communicating through an unnamed pipe.
The pipeUN program above uses the system function fork to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function fork works:
1) The fork function, called in the parent process, returns -1 to the parent in case of failure. In the pipeUN example, the call is: pid_t cpid = fork();
The returned value is stored in the variable cpid of integer type pid_t. (Every process has its own process ID, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full process table, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can fill the process table if they are not harvested.
2) If the fork call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the same code that follows the call to fork. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to fork returns:
Zero to the child process
The child's process ID to the parent
3) An if/else or equivalent construct typically is used after a successful fork call to segregate code meant for the parent from code meant for the child. In this example, the construct tests cpid: a value of zero selects the child's code, and a positive value (the child's process ID) selects the parent's.
If forking a child succeeds, the pipeUN program proceeds as follows. There is an integer array, pipeFDs, to hold two file descriptors: one for writing to the pipe and another for reading from it. (The array element pipeFDs[0] is the file descriptor for the read end, and the array element pipeFDs[1] is the file descriptor for the write end.) A successful call pipe(pipeFDs), made immediately before the call to fork, populates the array with the two file descriptors.
The parent and the child now have copies of both file descriptors, but the separation-of-concerns pattern means that each process requires exactly one of them. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child's if-clause code, therefore, closes the pipe's write end with close(pipeFDs[1]), and the first statement in the parent's else-clause code closes the pipe's read end with close(pipeFDs[0]).
The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.
One more aspect of the program needs clarification: the call to the wait function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short pipeUN program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.
What if the parent terminates before the child? In this case, unless precautions are taken, the child becomes and remains a zombie process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination, typically by setting the disposition of the SIGCHLD signal to SIG_IGN.
A second approach is to have the parent execute a wait on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the pipeUN program, where the parent code makes the call wait(NULL).
This call to wait means wait until the termination of any child occurs, and in the pipeUN program, there is only one child process. (The NULL argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible waitpid function for fine-grained control, e.g., for specifying a particular child process among several.
The pipeUN program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular exit function. By contrast, the child terminates with a call to the _exit variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.
If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes one string to the pipe while process P2 concurrently writes another, it seems that the pipe's contents might be an arbitrary interleaving of the two strings.
The POSIX standard ensures that writes are not interleaved so long as no write exceeds PIPE_BUF bytes. On Linux systems, PIPE_BUF is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.
Let's look at another command line example to get the gist of named pipes. Here are the steps:
Open two terminals. The working directory should be the same for both.
In one of the terminals, enter these two commands (the prompt again is %, and my comments start with ##):
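The commands are reconstructed below; the name tester for the pipe's backing file is an assumption:

```shell
% mkfifo tester  ## creates a backing file that implements the named pipe
% cat tester     ## blocks, waiting to read bytes from the pipe
```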
At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.
In the second terminal, enter the command:
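Again assuming the pipe's backing file is named tester, the command in the second terminal would be:

```shell
% cat > tester  ## redirects whatever is typed here into the named pipe
```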
Whatever is typed into this terminal is echoed in the other. Once Ctrl+C is entered, the regular command line prompt returns in both terminals: the pipe has been closed.
Clean up by removing the file that implements the named pipe:
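Assuming the backing file is named tester, the cleanup command is:

```shell
% unlink tester  ## removes the backing file, and the named pipe with it
```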
As the utility's name mkfifo implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named mkfifo that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.
Example 2. The fifoWriter program
The fifoWriter program above can be summarized as follows:
The program creates a named pipe for writing with a call such as mkfifo(pipeName, 0666), where pipeName is the name of the backing file passed to mkfifo as the first argument and the second argument gives the file's access permissions. The named pipe then is opened with the by-now familiar call to the open function, which returns a file descriptor.
For a touch of realism, the fifoWriter does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.
After closing the named pipe, the fifoWriter also unlinks the file with the call unlink(pipeName).
The system reclaims the backing file once it has been unlinked and every process connected to the pipe has closed its end. In this example, there are only two such processes, the fifoWriter and the fifoReader, each of which performs an unlink operation.
The two programs should be executed in different terminals with the same working directory. However, the fifoWriter should be started before the fifoReader, as the former creates the pipe. The fifoReader then accesses the already created named pipe.
Example 3. The fifoReader program
The fifoReader program above can be summarized as follows:
Because the fifoWriter creates the named pipe, the fifoReader needs only the standard call open(pipeName, O_RDONLY) to access the pipe through the backing file; the O_RDONLY flag opens the file as read-only.
The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The read call returns 0 to indicate end-of-stream, in which case the fifoReader breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.
After reading a 4-byte integer, the fifoReader checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
On repeated sample runs, the fifoReader successfully read all of the bytes that the fifoWriter wrote. This is not surprising. The two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and, therefore, in wide use.
In a sample run, the two programs were launched from separate terminals but with the same working directory, and each reported its results in its own terminal.
Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.
As the name suggests, a message queue is a sequence of messages, each of which has two parts:
The payload, which is an array of bytes (char in C)
A type, given as a positive integer value; types categorize messages for flexible retrieval
Consider the following depiction of a message queue, with each message labeled with an integer type; the front of the queue, closest to the receiver, is on the right:

    back  [3]  [2]  [2]  [1]  front ---> receiver
Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
The mqueue example consists of two programs, the sender that writes to the message queue and the receiver that reads from this queue. Both programs include the header file queue.h shown below:
Example 4. The header file queue.h
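The header is reconstructed below as a sketch. The description that follows confirms the structure's fields and the roles of ProjectId and PathName; the particular values of ProjectId, MsgLen, and MsgCount are assumptions, and the long type member is placed first because the System V message API requires that layout:

```c
/* queue.h: definitions shared by the sender and the receiver */
#define ProjectId 123         /* any positive integer value (assumed here) */
#define PathName  "queue.h"   /* any existing, accessible file */

#define MsgLen    4           /* payload length in bytes (assumed) */
#define MsgCount  6           /* number of messages sent and received */

typedef struct {
  long type;                  /* message type: must come first */
  char payload[MsgLen + 1];   /* byte array with room for a null terminator */
} queuedMessage;
```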
The header file defines a structure type named queuedMessage, with payload (byte array) and type (integer) fields. This file also defines symbolic constants (the #define statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The ProjectId can be any positive integer value, and the PathName must be an existing, accessible file—in this case, the file queue.h. The setup statements in both the sender and the receiver programs use these two constants to generate a key (with the standard ftok function) and then pass the key to msgget, which returns the message queue ID qid.
The ID qid is, in effect, the counterpart of a file descriptor for message queues.
Example 5. The message sender program
The sender program above sends out six messages, two each of a specified type: the first two messages are of type 1, the next two of type 2, and the last two of type 3. The sending statement, a call to msgsnd, is configured to be non-blocking (the flag IPC_NOWAIT) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure. The receiver program below also receives messages using the IPC_NOWAIT flag.
Example 6. The message receiver program
The receiver program does not create the message queue, although the API suggests as much. In the receiver, the call to msgget includes the IPC_CREAT flag, which is misleading: this flag really means create if needed, otherwise access. The sender program calls msgsnd to send messages, whereas the receiver calls msgrcv to retrieve them. In this example, the sender sends the messages in the order 1-1-2-2-3-3, but the receiver then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior.
The sender and the receiver can be launched from the same terminal. A sample run also shows that the message queue persists even after the sender process creates the queue, writes to it, and exits. The queue goes away only after the receiver process explicitly removes it with a call along the lines of msgctl(qid, IPC_RMID, NULL).
The pipes and message queue APIs are fundamentally unidirectional: one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen in popularity—but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.