CO2017 — Week 2 — Sharing resources among concurrent processes
Dr Gilbert Laycock (gtl1)
2017–01–30
gtl1–R895 W2L2 — Concurrently shared resources 2017–01–30 1 / 28
Recap
Abstract notion of process (and thread)
Potential to take advantage of concurrent programming:
Fully utilise CPU while some processes are blocked
Fully utilise multi-CPU systems
Some problems easier to solve with parallel reasoning
Some problems inherently non-deterministic
Overview
1 Inter-process communication
2 Critical regions and mutual exclusion
3 Possible solutions
Disabling interrupts
Access in turns; Peterson’s algorithm
Semaphores
Note: in this discussion we use the terms process and thread interchangeably.
Shared resources Inter-process Communication
Inter-process Communication
Processes need to communicate, e.g. in a shell pipeline.
But, with concurrent operation, there are issues with
passing messages between processes;
safe sharing of resources;
correct sequencing in the presence of dependencies.
Shared resources Example: shared print spool
Hazards of using shared resources
Programs often use shared resources, e.g.:
Send/receive data to/from the same device (printer; etc.);
Use the same file or a block of RAM (variable; object attribute;
etc.) to store data.
The role of the OS is to avoid incorrect operation of programs due
to shared resource clashes.
A shared resource: printer spooler
E.g. we want to ensure that files are printed one after another and
not, say, one page of one file, the next page of a different file.
Printer spool: a place on a disk organised into a number of
consecutive slots
Each process that wants to print a file
detects the first unoccupied slot;
stores the file name in the slot.
The printer manager periodically examines the slots and sends
the files to print.
Scenario of a clash
Processes P1 and P2 are simultaneously running and both want
to print their files.
Process P1 detects the first unoccupied slot in the spooler
directory (slot 6, say); then a context switch occurs.
Process P2 runs, detects that slot 6 is free, writes its file
name there; then another context switch occurs.
Process P1 resumes: it writes its file name to slot 6, erasing
the record of P2.
P2’s file will never be printed.
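The lost-update interleaving above can be replayed deterministically, with no real threads, by performing the two processes' read and write steps in the stated order. A minimal sketch in Java (class and file names are illustrative, and slot 0 stands in for slot 6):

```java
// Deterministic replay of the spooler race: both processes read the
// free-slot index before either writes, so P1's write erases P2's.
public class SpoolerRace {
    static String[] slots = new String[8];   // the spool directory

    static int firstFreeSlot() {
        for (int i = 0; i < slots.length; i++)
            if (slots[i] == null) return i;
        return -1;
    }

    public static String run() {
        int p1Slot = firstFreeSlot();   // P1 reads the free slot... context switch
        int p2Slot = firstFreeSlot();   // P2 reads the SAME slot
        slots[p2Slot] = "p2.txt";       // P2 writes its file name, is switched out
        slots[p1Slot] = "p1.txt";       // P1 resumes and overwrites: p2.txt is lost
        return slots[p1Slot];
    }
}
```

run() returns "p1.txt", and "p2.txt" appears in no slot: P2's print request is silently lost, exactly as in the scenario above.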
Shared resources Race conditions
Race conditions
Race conditions are situations in which two or more processes
read or write shared data and the final result depends on the
order of execution.
A race condition occurred in the example because of combination
of two circumstances:
Process P1 was interrupted during the use of the shared resource.
Process P2 used the shared resource while P1 was interrupted.
To prevent the race condition, at least one of these
circumstances must be avoided.
First approaches to synchronisation
Avoiding race conditions
A critical region is part of the program where shared memory is
accessed. Successful synchronisation requires:
Mutual Exclusion: no two processes should be in their critical
region at the same time.
Progress: no process running outside the critical region may
block other processes.
Bounded wait: each process should enter its critical region
eventually.
First approaches to synchronisation Disabling interrupts
Disabling interrupts: the method
At the hardware level, processors have instructions that
effectively disable all interrupts
A process accessing a shared resource can act as follows
1 Disable all interrupts
2 Access the shared resource
3 Re-enable interrupts
The absence of interrupts ensures that the process is not
interrupted while it accesses the shared resource.
Disabling interrupts: has it helped?
Disabling interrupts achieves mutual exclusion.
But
Disabling interrupts prevents any other process from running,
even if it does not need the shared resource.
The strategy is non-preemptive.
So neither progress nor bounded wait is assured.
First approaches to synchronisation Access in turns
Access in turns: Overview
A preemptive scheduling policy is needed.
Assume for simplicity that there are only two processes (P0 and
P1) running
The resource is associated with a shared variable “turn” that can
take values 0 or 1.
P0 accesses the resources only if (turn == 0); P1 accesses it
only if (turn == 1).
Access in turns: The algorithm
Algorithm for P0:
    while (turn == 1) { }   // busy wait
    // access the resource
    turn = 1;

Algorithm for P1:
    while (turn == 0) { }   // busy wait
    // access the resource
    turn = 0;
How does the mechanism work?
Assume turn = 0
If P1 wants to access the resource it will be busy waiting (in the
empty while loop).
When P0 finishes using the resource it assigns turn = 1
enabling P1 to proceed.
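This handover can be sketched with two Java threads; turn is declared volatile so each thread sees the other's update (the class name and the logging list are illustrative, not part of the protocol):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the access-in-turns protocol with two Java threads.
// 'turn' is volatile so each thread observes the other's handover.
public class TurnTaking {
    static volatile int turn = 0;
    static final List<String> log = new ArrayList<>();  // written only inside critical sections

    static Thread worker(int id, int rounds) {
        return new Thread(() -> {
            for (int i = 0; i < rounds; i++) {
                while (turn != id) { }   // busy wait until it is our turn
                log.add("P" + id);       // critical section
                turn = 1 - id;           // hand the resource to the other process
            }
        });
    }

    public static List<String> run(int rounds) {
        Thread p0 = worker(0, rounds), p1 = worker(1, rounds);
        p0.start(); p1.start();
        try { p0.join(); p1.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return log;
    }
}
```

run(3) returns [P0, P1, P0, P1, P0, P1]: the critical sections strictly alternate regardless of how the threads are scheduled.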
Disadvantage: useless busy waiting
If it is not its turn, a process cannot start using the resource even
if the other process is not interested.
So performing the while loop wastes CPU time.
First approaches to synchronisation Peterson’s Algorithm
Towards Peterson’s algorithm
Keep track of which processes are interested in accessing the
critical region.
Assume we have only two processes, P0 and P1.
Each process uses a Boolean flag[pid] to express interest in
entering their critical section.
Both flag[0] and flag[1] are initialised to false.
Incorrect attempt
This does not work: if both processes set their flags before either tests the other’s, each waits forever in its while loop (deadlock).
Algorithm for P0:
    flag[0] = true;
    while (flag[1]) { }   // busy wait
    // access resource
    flag[0] = false;

Algorithm for P1:
    flag[1] = true;
    while (flag[0]) { }   // busy wait
    // access resource
    flag[1] = false;
Peterson’s algorithm
Keeps track of which processes are interested in accessing the
critical region.
Assume we have only two processes, P0 and P1.
Each process uses a Boolean flag[pid] to express interest in
entering the critical section.
Both flag[0] and flag[1] are initialised to false.
To avoid a deadlock, the process that starts the entry protocol
last is allowed to proceed to the critical section first.
Peterson’s algorithm
Shared: flag[0] = false; flag[1] = false; int turn;

Algorithm for P0:
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1) { }   // busy wait
    // access resource
    flag[0] = false;

Algorithm for P1:
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0) { }   // busy wait
    // access resource
    flag[1] = false;
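A runnable sketch of the algorithm in Java follows. The shared variables are declared volatile because Peterson's algorithm is only correct if the flag/turn accesses are not reordered; since Java array elements cannot themselves be volatile, two boolean fields stand in for flag[0] and flag[1] (class and method names are illustrative):

```java
// Sketch of Peterson's algorithm for two Java threads. Volatile reads and
// writes are sequentially consistent in Java, which the algorithm requires.
public class Peterson {
    static volatile boolean flag0 = false, flag1 = false;
    static volatile int turn = 0;
    static int count = 0;   // protected by the protocol, so deliberately not atomic

    static void enter(int id) {   // entry protocol
        if (id == 0) { flag0 = true; turn = 1; while (flag1 && turn == 1) { } }
        else         { flag1 = true; turn = 0; while (flag0 && turn == 0) { } }
    }

    static void leave(int id) {   // exit protocol
        if (id == 0) flag0 = false; else flag1 = false;
    }

    public static int run(int rounds) {
        Runnable body0 = () -> { for (int i = 0; i < rounds; i++) { enter(0); count++; leave(0); } };
        Runnable body1 = () -> { for (int i = 0; i < rounds; i++) { enter(1); count++; leave(1); } };
        Thread p0 = new Thread(body0), p1 = new Thread(body1);
        p0.start(); p1.start();
        try { p0.join(); p1.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return count;
    }
}
```

With mutual exclusion holding, run(50000) returns exactly 100000 even though count++ is not an atomic operation.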
First approaches to synchronisation Busy waiting
Busy waiting in Peterson’s algorithm
The while loop in Peterson’s algorithm performs busy waiting
However, waiting is performed only if the other process is
interested in accessing the resource
Compare with the busy waiting required by the access in turns
method
Informal Justification of correctness
1 If only P0 wants to access the resource, then P0 can proceed to
the critical section because flag[1] == false.
And vice versa if only P1 wants the resource.
2 If both processes want to access the resource, then both flag[0]
and flag[1] are true.
3 Suppose that P1 is the last to modify turn, i.e. turn == 0.
4 Then P1 will busy-wait while P0 will enter the critical region.
5 And vice versa if P0 happens to be the last to set turn.
First approaches to synchronisation Problems with Peterson’s solution
Problems with Peterson’s Solution
The entry/exit protocols are directly programmed using busy-waiting
Busy-waiting protocols are difficult to design, understand, generalise,
and analyse
No separation between variables used for synchronisation and
“normal” variables
So, they are often error-prone and inefficient
Modern compilers and processors may reorder operations, so naïve
assumptions about the order in which statements execute are unsafe
Semaphores
Motivation for Semaphores
It would be much simpler to write entry/exit protocols using single
statements, e.g.
GETSEMAPHORE;
critical section;
RELEASESEMAPHORE;
non-critical section
We want the scheduler to context-switch away from a blocked
process to another process (i.e. no busy waiting)
Semaphores Example of semaphores
Synchronisation in Railway Traffic
Railway traffic is synchronised as follows:
A railway semaphore is used to signal whether the track ahead is clear or
not
As a train proceeds, semaphores are set and cleared
Railway semaphores are mechanisms that ensure mutually exclusive
occupancy of critical sections of track
Semaphores in concurrent programs are similar:
They provide a basic signalling mechanism to implement mutual
exclusion and condition synchronisation
Semaphores Definition of semaphore
Definition of Semaphore
Semaphores were introduced by Dijkstra (1968)
A semaphore is an integer variable that takes only non-negative values
(representing the quantity of a resource available)
Only two operations are permitted on a semaphore s:
Wait(s) (or P(s) for Prolaag – to “try to reduce” s):
If s > 0 then decrement s
else block the calling process
Signal(s) (or V(s) for Verhogen – to increase s):
If processes are blocked on s then unblock one
else increment s
Semaphores are usually implemented efficiently, with hardware
support and block-waiting in the OS kernel
Remarks on Semaphores
Wait() and Signal() need to be implemented as synchronized methods
Wait() should wait until the semaphore value becomes > 0
Signal() should release other processes waiting on the semaphore
Wait() and Signal() use block-waiting rather than busy-waiting. A
blocked process uses no CPU time.
If a semaphore only takes 0 or 1, it is called binary; otherwise, called
general (or counting)
Need to ensure that the initial value is ≥ 0 and that
incrementing cannot cause overflow
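These remarks can be made concrete as a minimal monitor-based sketch in Java, using wait()/notify() for block-waiting (class and method names are illustrative; Java reserves the names wait and notify, hence waitS/signalS):

```java
// Minimal counting-semaphore sketch built on a Java monitor.
// waitS() block-waits: wait() releases the monitor and uses no
// CPU time until a signalS() calls notify().
public class SimpleSemaphore {
    private int value;   // invariant: value >= 0

    public SimpleSemaphore(int initial) {
        if (initial < 0) throw new IllegalArgumentException("initial value must be >= 0");
        value = initial;
    }

    public synchronized void waitS() throws InterruptedException {
        while (value == 0)   // re-test after waking: another process may have taken the resource
            wait();
        value--;
    }

    public synchronized void signalS() {
        value++;
        notify();            // unblock one process waiting in waitS(), if any
    }

    public synchronized int value() { return value; }
}
```

The while loop around wait() matters: a woken process must re-test the condition, since another process may have decremented the semaphore first.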
Simple semaphore example
Semaphore s = new Semaphore(1);
Algorithm for Process 0:
    Wait(s);
    // access resource
    Signal(s);

Algorithm for Process 1:
    Wait(s);
    // access resource
    Signal(s);
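In Java this example maps directly onto java.util.concurrent.Semaphore, whose acquire() and release() play the roles of Wait(s) and Signal(s); the shared counter standing in for the resource is illustrative:

```java
import java.util.concurrent.Semaphore;

// Two threads sharing one resource through a binary semaphore.
public class SemaphoreDemo {
    static final Semaphore s = new Semaphore(1);
    static int shared = 0;   // the "resource", protected by s

    public static int run(int rounds) {
        Runnable body = () -> {
            for (int i = 0; i < rounds; i++) {
                try {
                    s.acquire();   // Wait(s): blocks if the resource is taken
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                try {
                    shared++;      // critical section
                } finally {
                    s.release();   // Signal(s): always executed on exit
                }
            }
        };
        Thread p0 = new Thread(body), p1 = new Thread(body);
        p0.start(); p1.start();
        try { p0.join(); p1.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return shared;
    }
}
```

run(10000) returns 20000: every increment happens inside the critical section guarded by s, so none is lost.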
Summary
Processes (or threads) need to communicate/share resources
Concurrent processes can result in race conditions in access to
critical region
Various (low level) strategies to avoid the problems and achieve
mutual exclusion
progress
bounded wait
Considered: Disable interrupts; access in turns; Peterson’s
method; Semaphores