Disabling Interrupts
The simplest solution is to have each process disable all interrupts just after entering its CS and re-enable them just before leaving it. With interrupts disabled, the processor cannot switch to another process. Thus, once a process has disabled interrupts, it can examine and update the shared memory without fear that any other process will intervene.
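As a sketch in the same style as the code fragments below (the primitives disable_interrupts() and enable_interrupts() are hypothetical names for whatever the kernel provides, for example wrappers around the CLI/STI instructions on x86), each process would be structured as:

P0: { while (TRUE) {
          disable_interrupts();   /* no clock interrupt, so no context switch     */
          CS();                   /* shared memory can be examined/updated safely */
          enable_interrupts();    /* the scheduler may run again                  */
          Non-CS();
      } }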
This approach is generally unattractive because it is unwise to give user processes the power to turn off interrupts. Suppose one of them did so and never turned them back on. That could be the end of the system. Furthermore, if the system has more than one processor, this method fails anyway: disabling interrupts affects only the processor on which the process is running, so processes on the other processors can still enter the CS.
Lock Variables
As a second attempt, let us look for a software solution. Consider having a single, shared lock variable initialized to 0. When a process wants to enter its CS, it first tests the lock. If the lock is 0, the process sets it to 1 and enters the CS. If the lock is already 1, the process just waits until it becomes 0.
#define FALSE 0
#define TRUE  1

int lock = FALSE;

P0: { while (TRUE) {
          while (lock) { };   /* wait until the lock is free */
          lock = TRUE;        /* take the lock               */
          CS();
          lock = FALSE;       /* release the lock            */
          Non-CS();
      } }

P1: { while (TRUE) {
          while (lock) { };   /* wait until the lock is free */
          lock = TRUE;
          CS();
          lock = FALSE;
          Non-CS();
      } }
Unfortunately, this idea contains a fatal flaw. Suppose one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and two processes will be in their CSs at the same time. This is the situation we saw in our printer spooler example.
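To make the window between the test and the set concrete, the following is a minimal, self-contained sketch (assuming POSIX threads and a GCC/Clang toolchain; compile with -pthread) that drives the flawed lock from two threads and counts how often both end up inside the CS at once. The counters themselves are updated without synchronization, so the numbers are only indicative, but a run will typically report a nonzero violation count.

#include <pthread.h>
#include <stdio.h>

#define FALSE 0
#define TRUE  1

volatile int lock = FALSE;   /* the naive lock variable                */
volatile int in_cs = 0;      /* number of threads currently in the CS  */
volatile int violations = 0; /* times both threads were in the CS      */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        while (lock) { }     /* test the lock ...                      */
        lock = TRUE;         /* ... then set it: the two steps are not */
                             /* atomic, so both threads may get past   */
        in_cs++;             /* ---- critical section ----             */
        if (in_cs > 1)
            violations++;    /* mutual exclusion has been violated     */
        in_cs--;
        lock = FALSE;        /* ---- end of critical section ----      */
        /* non-CS would go here */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, NULL);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("violations observed: %d\n", violations);
    return 0;
}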
Now, it might be thought that we could get around this problem by first reading the lock value and then checking it again just before storing into it. However, this really does not help. The race now occurs if the second process modifies the lock just after the first process has finished its second check but before it stores the new value.
int lock = FALSE;

P0: { while (TRUE) {
          while (lock) { };   /* first check                      */
          while (lock) { };   /* check again, just before storing */
          lock = TRUE;
          CS();
          lock = FALSE;
          Non-CS();
      } }

P1: { while (TRUE) {
          while (lock) { };   /* first check                      */
          while (lock) { };   /* check again, just before storing */
          lock = TRUE;
          CS();
          lock = FALSE;
          Non-CS();
      } }
Strict Alternation
A third approach to the mutual exclusion problem is given below:
int turn = 0;

P0: { while (TRUE) {
          while (turn != 0) { };   /* wait for our turn */
          CS();
          turn = 1;
          Non-CS();
      } }

P1: { while (TRUE) {
          while (turn != 1) { };   /* wait for our turn */
          CS();
          turn = 0;
          Non-CS();
      } }
Here, the integer variable turn, initially 0, keeps track of whose turn it is to enter the CS and examine or update the shared memory. Initially, process 0 inspects turn, finds it to be 0, and enters its CS. Process 1 also finds it to be 0 and therefore sits in a tight loop, continually testing turn to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called busy waiting. It should usually be avoided, since it wastes processor time.
When process 0 leaves the CS, it sets turn to 1 to allow process 1 to enter its CS. Suppose process 1 finishes its CS quickly, so that both processes are in their non-CS sections, with turn set to 0. Now process 0 executes its whole loop quickly, exiting its CS and setting turn to 1. At this point, process 0 finishes its non-CS and goes back to the top of its loop. Unfortunately, it is not permitted to enter its CS now, because turn is 1 and process 1 is still busy with its non-CS. This situation violates condition 3 set out earlier: process 0 is being blocked by a process that is not in its CS. Therefore, taking turns is not a good idea when one of the processes is much slower than the other.
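The following is a minimal sketch of this scenario (assuming POSIX threads; compile with -pthread; the sleep length and pass counts are arbitrary demo values). P1's non-CS is made artificially long, so P0's next entry into the CS is delayed for roughly that long even though P1 has no interest in the CS during that time.

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

volatile int turn = 0;       /* whose turn it is to enter the CS */
time_t start;

void *p0(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        while (turn != 0) { }                 /* busy-wait for our turn */
        printf("t=%lds  P0 in CS (pass %d)\n",
               (long)(time(NULL) - start), i);
        turn = 1;
        /* P0's non-CS is essentially empty, so it is back at the top at once */
    }
    return NULL;
}

void *p1(void *arg)
{
    (void)arg;
    for (int i = 0; i < 3; i++) {
        while (turn != 1) { }
        printf("t=%lds  P1 in CS (pass %d)\n",
               (long)(time(NULL) - start), i);
        turn = 0;
        sleep(2);            /* a long non-CS: P1 is not interested in the  */
                             /* CS, yet P0 cannot enter until P1 comes back */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    start = time(NULL);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}

In a typical run, P0's second and later passes are stamped about two seconds apart, showing that P0 spends that time spinning on turn while P1 sleeps in its non-CS.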