Inter-Process Communication (IPC)
Inter-process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of cooperation between them.
Processes can communicate with each other through either of two methods:
1. Shared Memory
2. Message passing
The Figure below shows a basic structure of communication between processes
via the shared memory method and via the message passing method.
i) Shared Memory Method
An operating system can implement both methods of communication. First, we
discuss the shared memory method and then message passing. Communication
between processes using shared memory requires the processes to share some
variables, and it depends entirely on how the programmer implements it. One
way of communicating through shared memory can be imagined as follows: suppose
process1 and process2 are executing simultaneously and share some resources, or
one uses information produced by the other. Process1 generates information
about certain computations or resources being used and keeps it as a record in
shared memory. When process2 needs to use the shared information, it checks the
record stored in shared memory, takes note of the information generated by
process1, and acts accordingly. Processes can use shared memory both for
extracting information recorded by another process and for delivering specific
information to other processes.
ii) Message Passing Method
In this method, processes communicate with each other without using any kind of
shared memory. If two processes p1 and p2 want to communicate with each other,
they proceed as follows:
∙ Establish a communication link (if a link already exists, no need to establish it
again.)
∙ Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
Precedence Graph
It is a directed acyclic graph used to show the execution order of several
processes in an operating system. It consists of nodes and edges: nodes
represent the processes (or statements) and edges represent the order of execution.
Properties of Precedence Graph :
Following are the properties of Precedence Graph:
∙ It is a directed graph.
∙ It is an acyclic graph.
∙ Nodes of the graph correspond to individual statements of program code.
∙ An edge between two nodes represents the execution order.
∙ A directed edge from node A to node B shows that statement A executes first
and then statement B executes.
Consider the following code:
S1 : a = x + y;
S2 : b = z + 1;
S3 : c = a - b;
S4 : w = c + 1;
If the above code is executed concurrently, the following precedence relations exist:
∙ c = a - b cannot be executed before both a and b have been assigned values.
∙ w = c + 1 cannot be executed before the new value of c has been computed.
∙ The statements a = x + y and b = z + 1 could be executed concurrently.
Producer-Consumer problem / Bounded Buffer Problem
There are two processes: Producer and Consumer. Producer produces some item
and Consumer consumes that item. The two processes share a common space or
memory location known as a buffer where the item produced by Producer is stored
and from which the Consumer consumes the item, if needed. There are two
versions of this problem: the first is the unbounded buffer problem, in which
the Producer can keep producing items and there is no limit on the size of the
buffer; the second is the bounded buffer problem, in which the Producer can
produce up to a certain number of items before it starts waiting for the
Consumer to consume them.
Producer process
item nextProduced;
while (1) {
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer process
item nextConsumed;
while (1) {
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}
The statements counter++ and counter-- must each be performed atomically. If
both the producer and the consumer attempt to update counter concurrently, the
interleaved machine instructions may produce an incorrect result.
Assume counter is initially 5. If both processes run concurrently, the final
value of counter may be either 4 or 6, where the correct result should be 5.
Race condition: the situation where several processes access and manipulate
shared data concurrently, and the final value of the shared data depends on
which process finishes last. To prevent race conditions, concurrent processes
must be synchronized.
Critical Section Problem
Critical section is a code segment that can be accessed by only one process at a
time. Critical section contains shared variables which need to be synchronized to
maintain consistency of data variables.
In the entry section, the process requests permission to enter its critical section.
Any solution to the critical section problem must satisfy three requirements:
∙ Mutual Exclusion: If a process is executing in its critical section, then no other
process is allowed to execute in the critical section.
∙ Progress : If no process is executing in the critical section and other processes are
waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
in the critical section next, and the selection cannot be postponed indefinitely.
∙ Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section
problem.
In Peterson’s solution, we have two shared variables:
∙ boolean flag[i]: initialized to FALSE; initially, no one is interested in
entering the critical section.
∙ int turn: the process whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions:
∙ Mutual Exclusion is assured as only one process can access the critical section at
any time.
∙ Progress is also assured, as a process outside the critical section does not
block other processes from entering the critical section.
∙ Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s Solution
∙ It involves Busy waiting
∙ It is limited to 2 processes.
Synchronization Hardware
∙ TestAndSet is a hardware solution to the synchronization problem. In
TestAndSet, we have a shared lock variable which can take either of the
two values, 0 or 1.
∙ 0: unlocked
∙ 1: locked
∙ Before entering into the critical section, a process inquires about the lock. If
it is locked, it keeps on waiting until it becomes free and if it is not
locked, it takes the lock and executes the critical section.
do {
    acquire lock
    // critical section
    release lock
    // remainder section
} while (true);
Semaphore:
Semaphores are integer variables that are used to solve the critical section problem
by using two atomic operations, wait and signal that are used for process
synchronization.
The definitions of wait and signal are as follows −
∙ Wait
The wait operation decrements the value of its argument S as soon as S is
positive. As long as S is zero or negative, the process keeps waiting (busy
waiting) and no decrement is performed.
wait(S)
    while (S <= 0)
        ; /* busy wait */
    S--;
∙ Signal
The signal operation increments the value of its argument S.
signal(S)
    S++;
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary
semaphores. Details about these are given as follows −
∙ Counting Semaphores
These are integer value semaphores and have an unrestricted value domain.
These semaphores are used to coordinate the resource access, where the
semaphore count is the number of available resources. If the resources are
added, semaphore count automatically incremented and if the resources are
removed, the count is decremented.
∙ Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted
to 0 and 1. A binary semaphore used for mutual exclusion is often called a
mutex lock. The wait operation proceeds only when the semaphore is 1 (setting
it to 0), and the signal operation sets it back to 1. Binary semaphores are
sometimes easier to implement than counting semaphores.
Classical Problems of Synchronization:
1. Bounded-Buffer Problem
2. Readers and Writers Problem
3. Dining-Philosophers Problem
Bounded-buffer (or Producer-Consumer) Problem:
The bounded buffer problem is also called the producer-consumer problem and is
the generalized form of the Producer-Consumer problem above. The solution is to
create two counting semaphores, “full” and “empty”, to keep track of the
current number of full and empty buffer slots respectively. The producer
produces an item and the consumer consumes it, but each uses one buffer slot
at a time.
Shared data
semaphore full, empty, mutex;
Initially:
full = 0, empty = n, mutex = 1
Code for producer process
do {
    …
    produce an item in nextp
    …
    wait(empty);
    wait(mutex);
    …
    add nextp to buffer
    …
    signal(mutex);
    signal(full);
} while (1);
Code for consumer process
do {
    wait(full);
    wait(mutex);
    …
    remove an item from buffer to nextc
    …
    signal(mutex);
    signal(empty);
    …
    consume the item in nextc
    …
} while (1);
Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a
circular table with one chopstick between each pair of philosophers. A
philosopher may eat only if he can pick up the two chopsticks adjacent to him.
Each chopstick may be picked up by either of its adjacent philosophers, but not
by both at once. This problem involves the allocation of limited resources to a
group of processes in a deadlock-free and starvation-free manner.
Shared data
semaphore chopstick[5];
Initially all values are 1
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    …
    eat
    …
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    …
    think
    …
} while (1);
Note that this simple scheme can deadlock: if all five philosophers pick up
their left chopsticks at the same moment, each waits forever for the right one.
Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes.
Some of these processes may want only to read the database, whereas others
may want to update (that is, to read and write) the database. We distinguish
between these two types of processes by referring to the former as readers and
to the latter as writers. In operating systems, this situation is known as the
readers-writers problem. Problem parameters:
∙ One set of data is shared among a number of processes.
∙ Once a writer is ready, it performs its write. Only one writer may write at a
time.
∙ If a process is writing, no other process can read it.
∙ If at least one reader is reading, no other process can write.
∙ Readers may only read; they may not write.
Shared data
semaphore mutex = 1, wrt = 1;
int readcount = 0;
Writer Process
    wait(wrt);
    …
    writing is performed
    …
    signal(wrt);
Reader Process
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    …
    reading is performed
    …
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);