Operating System Lab 06 Solution





Lab Manual

Contents

Objective
Inter-Process Communication (IPC) & its Methodologies
Pipe
Named Pipe
Message Queue
Shared Memory


Objective
The purpose of this lab is to introduce you to IPC (Inter-Process Communication) and its methodologies.

Inter-Process Communication (IPC) & its Methodologies
IPC is vital in any embedded system: one program may have to feed data to another process before that process can proceed. The following are commonly used IPC mechanisms.

Pipe
Named Pipe
Signals
Message Queue
Shared Memory
Semaphores
Remote procedure calls
Sockets
For this lab we will focus on pipes, named pipes, message queues and shared memory. Signals and semaphores will be covered in later labs.

Pipe
A pipe is a very simple way of communicating between two processes. A relevant real-world example is watering plants in a garden: one end of a tube is connected to a water tank and the other end is used to water the plants. The scenario here is the same: when process A has to transfer data to process B, it can use a pipe. The most important point is that a pipe is unidirectional, i.e. data can be sent in only one direction at a time. If two-way communication is needed, two pipes have to be used. Another thing to remember is that pipes can only be used between related processes; no two unrelated processes can share a pipe. Nobody waters the plants in a neighbor’s house, and that is the case here.

A pipe is created with the pipe() system call, which accepts an array of two integers as its argument and fills it with two file descriptors. One file descriptor (FD) is used as the read end and the other as the write end. A file descriptor is an integer allotted by the system for each file that is opened.

The most important thing to remember about pipes, as conveyed earlier, is that they can be used only between related processes (a process and the child it creates are related).

Keeping the above basic points in mind, one can easily walk through the code presented below:
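A minimal sketch of such a program is given below, assuming the parent writes the string "Hellow Mr.Linux" and the forked child reads and prints it; variable and buffer names are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) {            /* create the pipe before forking */
        perror("pipe");
        exit(EXIT_FAILURE);
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }

    if (pid > 0) {                   /* parent process: writer */
        const char *msg = "Hellow Mr.Linux";
        close(fd[0]);                /* parent does not read */
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL);                  /* wait for the child to finish */
    } else {                         /* child process: reader */
        close(fd[1]);                /* child does not write */
        read(fd[0], buf, sizeof(buf));
        printf("Child received: %s\n", buf);
        close(fd[0]);
    }
    return 0;
}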



The execution of the above code is shown below:







So ‘Hellow Mr.Linux’ is sent to the child process, and the user can see it on the screen. One beauty of this mechanism is that the child cannot read until the parent writes, so there is an inherent synchronization, which is very vital. The only disadvantage of a pipe is that it can be used only between related processes. This problem can be overcome by using a named pipe, or FIFO.

Named Pipe
To overcome this we can use ‘named pipes’, also known as FIFOs (First In First Out). Here the concept is slightly different. Taking a real-world example again: suppose a person has to pass a letter to someone but, due to some situation, it cannot be handed over in person. A simple solution is to find a third person who is familiar to both of them; that third person can deliver the letter to its destination. The same is the case with a named pipe: it can be used for communication between two different, unrelated processes. The sequence goes like this: process A writes the data into a common file which process B can also access; after the data has been written by A, B reads it from that common file; after reading, the file can be deleted. The term ‘file’ has to be refined here: in Linux it is called a FIFO, and it can be created with the available system calls. The system call mkfifo() can be used to create a FIFO. Two different processes can communicate through a FIFO, as the following C code reveals, where fifo_write.c is the FIFO write program and fifo_read.c is the read program. The write program has to be executed first and then the read program. Even if the user executes the read program first, it will wait for the writer to write the data. So there exists an automatic synchronization, which is a highly appreciable feature.

C code for both read and write is presented below. mkfifo() has to be given the access permissions for the FIFO. Recall from lab manual 02 that a file, when created, has permissions associated with it: there are basically three kinds of users in Linux and three kinds of permissions associated with a file. The next question that arises is whether the permissions can be changed. Yes, they can; ‘chmod’ is the command meant for that.
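A minimal sketch of fifo_write.c and fifo_read.c is given below, assuming the FIFO is created at /tmp/myfifo; the path, permissions and buffer sizes are illustrative.

/* fifo_write.c - writer side */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/myfifo";
    char msg[80];

    mkfifo(path, 0666);              /* create the FIFO with read/write permissions */

    int fd = open(path, O_WRONLY);   /* blocks until a reader opens the FIFO */
    printf("Enter message: ");
    fgets(msg, sizeof(msg), stdin);
    write(fd, msg, strlen(msg) + 1);
    close(fd);
    return 0;
}

/* fifo_read.c - reader side */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/myfifo";
    char buf[80];

    int fd = open(path, O_RDONLY);   /* blocks until the writer opens the FIFO */
    read(fd, buf, sizeof(buf));
    printf("Received: %s\n", buf);
    close(fd);
    unlink(path);                    /* remove the FIFO after use */
    return 0;
}

The blocking behaviour of open() on a FIFO is what gives the automatic synchronization described above.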

 

The execution of the above code is given below:



The execution of fifo_write and fifo_read is shown. If the read program is executed first, it waits until the write program is executed; the synchronization is automatic.

Message Queue
Two or more processes can exchange information via access to a common system message queue. The sending process places a message onto a queue via the operating system's message-passing module, and another process can read it from the queue. Each message is given an identification or type so that processes can select the appropriate message. Processes must share a common key in order to gain access to the queue in the first place.

Message queues make asynchronous communication possible, meaning that the sender and receiver of a message need not interact with the message queue at the same time. Message queues have a wide range of applications; a few very simple examples are listed below.

Taking input from the keyboard
Displaying output on the screen
Reading a voltage from a sensor, etc.
A task which has to send a message can put the message in the queue, and other tasks can read it from there. A message queue is a buffer-like object which can receive messages from ISRs (Interrupt Service Routines) and tasks, and transfer them to other recipients. In short, it is like a pipeline: it holds the messages sent by the sender until the receiver reads them. The biggest advantage of a queue is that the receiver and sender need not use it at the same time; the sender can post a message in the queue, and the receiver can read it whenever needed. A message queue is composed of a few components: it has a starting point, referred to as the head of the queue, and a terminating point, called the tail of the queue. The size of the queue has to be decided by the programmer while writing the code. A queue cannot be read if it is empty, and likewise it cannot be written into if it is already full; it can also have some empty elements.

The message queue can be implemented on a Linux machine with the available system calls. The basic operations to be carried out on a queue are:

Creation/Deletion of queue
Sending/Receiving of message
Two different files have to be written here: one for the sender and another for the receiver. The receiver will wait until the sender writes into the queue. One important advantage of a message queue is that it supports automatic synchronization between the sender and the receiver: the receiver waits until the sender writes. Another advantage is that the memory can be freed after usage, which is essential in any software system.

A few things have to be taken into consideration before writing code for a queue.

An identifier (key) has to be generated, e.g. with ftok().
msgget() - creates the queue or gains access to an existing one.
msgsnd() - sends a message to the queue.
msgrcv() - receives a message from the queue.
msgctl() - performs control actions on the queue, e.g. deletion with IPC_RMID.
The codes below demonstrate a message queue; you may face permission problems while running them if you do not have administrative privileges.
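A minimal sketch of the sender (here called message_send.c) and the receiver (message_rcv) is given below, assuming System V message queue calls and a key derived with ftok() from /tmp; the structure and message sizes are illustrative, and a full version would loop so that the receiver gets everything the sender types.

/* message_send.c - sender side */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msg_buffer {
    long mtype;
    char mtext[128];
};

int main(void)
{
    key_t key = ftok("/tmp", 'A');                 /* common key shared with the receiver */
    int msqid = msgget(key, 0666 | IPC_CREAT);     /* create (or open) the queue */
    struct msg_buffer msg;

    msg.mtype = 1;
    printf("Enter message: ");
    fgets(msg.mtext, sizeof(msg.mtext), stdin);
    msgsnd(msqid, &msg, strlen(msg.mtext) + 1, 0); /* place the message on the queue */
    return 0;
}

/* message_rcv.c - receiver side */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msg_buffer {
    long mtype;
    char mtext[128];
};

int main(void)
{
    key_t key = ftok("/tmp", 'A');                 /* same key as the sender */
    int msqid = msgget(key, 0666 | IPC_CREAT);
    struct msg_buffer msg;

    msgrcv(msqid, &msg, sizeof(msg.mtext), 1, 0);  /* blocks until a type-1 message arrives */
    printf("Received: %s\n", msg.mtext);
    msgctl(msqid, IPC_RMID, NULL);                 /* delete the queue when done */
    return 0;
}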

 

The execution is shown below.



So the code prompts the sender to type the data to be sent to the receiver. In parallel, from another terminal, message_rcv receives the information that the sender types. If the receiver is compiled and executed first, the program waits until the sender drops a message.

Shared Memory
In the discussion of the fork() system call, we mentioned that a parent and its children have separate address spaces. While this provides a more secure way of executing parent and child processes (because they cannot interfere with each other), they share nothing and have no way to communicate with each other. A shared memory segment is an extra piece of memory that is attached to the address spaces of its owners for them to use. As a result, all of these processes share the same memory segment and have access to it. Consequently, race conditions may occur if memory accesses are not handled properly. The following figure shows two processes and their address spaces: the yellow rectangle is a shared memory segment attached to both address spaces, and both process 1 and process 2 can access it as if it were part of their own address spaces. In some sense, each original address space is "extended" by attaching this shared memory.

This mechanism is very important and frequently used. Shared memory can even be used between unrelated processes. By default, one page of memory (4 KB) is allocated as the shared memory segment. Assume process 1 wants to access its shared memory area: it has to get attached to it first. Even though it is P1's memory area, it cannot gain access as such; only after attaching can it do so. A process creates a shared memory segment using shmget(). The original owner of a shared memory segment can assign ownership to another user with shmctl(); it can also revoke this assignment. Other processes with proper permission can perform various control functions on the shared memory segment using shmctl(). Once created, a shared memory segment can be attached to a process's address space using shmat(), and it can be detached using shmdt(). The attaching process must have appropriate permissions for shmat(). Once attached, the process can read from and write to the segment, as allowed by the permissions requested in the attach operation. A shared memory segment can be attached multiple times by the same process. A shared memory segment is described by a control structure with a unique ID that points to an area of physical memory; the identifier of the segment is called the shmid. The structure definition for the shared memory segment control structure and prototypes can be found in <sys/shm.h>. There are three steps:

Initialization
Attach
Detach
The client-server scenario is perfect for demonstrating shared memory. The general scheme of using shared memory is the following:

For Server

Ask for a shared memory with a memory key and memorize the returned shared memory ID. This is performed by system call shmget().
Attach this shared memory to the server’s address space with system call shmat().
Initialize the shared memory, if necessary.
Do something and wait for all clients’ completion.
Detach the shared memory with system call shmdt().
Remove the shared memory with system call shmctl().
For Client

Ask for a shared memory with the same memory key and memorize the returned shared memory ID.
Attach this shared memory to the client's address space.
Use the memory.
Detach all shared memory segments, if necessary.
Exit.
Two separate programs, one for write and one for read, are presented below.
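A minimal sketch of the write and read programs is given below, assuming System V shared memory, a key derived with ftok() from /tmp and a 4 KB segment; file names and sizes are illustrative. The write program should be run first, because shared memory itself provides no synchronization.

/* shm_write.c - writer (server) side */
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = ftok("/tmp", 'B');                    /* common key shared with the reader */
    int shmid = shmget(key, 4096, 0666 | IPC_CREAT);  /* step 1: create a 4 KB segment */
    char *shm = (char *)shmat(shmid, NULL, 0);        /* step 2: attach to our address space */

    strcpy(shm, "Hello from the writer");             /* use the memory */
    shmdt(shm);                                       /* step 3: detach */
    return 0;
}

/* shm_read.c - reader (client) side */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = ftok("/tmp", 'B');                    /* same key as the writer */
    int shmid = shmget(key, 4096, 0666 | IPC_CREAT);
    char *shm = (char *)shmat(shmid, NULL, 0);

    printf("Read from shared memory: %s\n", shm);
    shmdt(shm);                                       /* detach */
    shmctl(shmid, IPC_RMID, NULL);                    /* remove the segment */
    return 0;
}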

 

The execution of the above code is similar to the cases dealt with earlier.
