Lecture 8: Consistency; Critical Sections
Scribe: Jakub Graniczka
Synchronization
Coordinate actions in a shared address space.
maintain consistency of data (avoid screw-ups: race conditions)
efficiency (short waits; keep CPUs busy doing useful work)
clarity and simplicity
Thread A: action1  action2  action3  action4  …
Thread B: action10 action20 action30 action40 …
(the scheduler may interleave the two threads' actions in any order, e.g. action1, action10, action20, action2, …)
Types of Synchronization
Sequence Coordination (simple with threads)
action1(); action2(); action3(); …
Isolation (harder with threads)
two actions must not interfere (e.g. action1 and action10)
one way to implement isolation is atomicity → if you do X(), then any other action Y() either finishes before X() starts or starts after X() finishes
problem with threads → the default is no atomicity and no isolation
ex. Bank
2 locks → transfer(acct1, acct2, amt)
can be atomic → deposit(acct, d), withdraw(acct, w)
lock all accts → audit()
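For concreteness, a sketch of transfer() under this scheme. The struct fields, the id-based lock ordering, and the lock()/unlock() calls on the lock_t defined below are assumptions of the sketch, not from lecture; acquiring the locks in a fixed global order prevents deadlock when two opposing transfers run at once (it also assumes acct1 ≠ acct2):

struct account {
    lock_t l;        // assumed per-account lock (lock_t is defined below)
    int    id;       // unique account number, used to order lock acquisition
    long   balance;
};

void transfer(struct account *a1, struct account *a2, long amt) {
    // always lock the lower-numbered account first, so two opposing
    // transfers can't each hold one lock while waiting for the other
    struct account *first  = (a1->id < a2->id) ? a1 : a2;
    struct account *second = (first == a1) ? a2 : a1;
    lock(&first->l);
    lock(&second->l);
    a1->balance -= amt;
    a2->balance += amt;
    unlock(&second->l);
    unlock(&first->l);
}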
Synchronization Mechanisms
Read/Write Coherence
assumes loads and stores are atomic: a load occurs entirely before or entirely after any concurrent store, and vice versa
doesn't work for large objects (ex. struct { int a[10000]; })
doesn't work even for small multi-word objects (ex. struct { unsigned int flag; unsigned int val; })
objects must be aligned and at most word-sized for coherence to hold
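A sketch of why the two-word struct fails (names hypothetical; assumes a machine with 32-bit word stores):

struct small { unsigned int flag; unsigned int val; };

void publish(struct small *p, struct small v) {
    *p = v;   // compiles to (at least) two separate word stores:
              // a concurrent reader can load the new flag together
              // with the old val, so the pair as a whole has no
              // read/write coherence
}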
typedef int lock_t;

void lock(lock_t *p) {
    while (*p)      // spin until the lock looks free
        continue;
    *p = 1;         // precondition: we don't have the lock
                    // BUG: the check and the store are not atomic,
                    // so two threads can both see 0 and both "take" the lock
}

void unlock(lock_t *p) {
    *p = 0;         // precondition: we have the lock
}
xchg %eax, (%ebx) → atomically swaps %eax with the word in memory: the hardware support we need
test_and_set
int tas(int *p, int new) {
    int old = *p;   // the whole body must execute atomically,
    *p = new;       // implemented with inline asm (e.g. xchg), not plain C
    return old;
}
void lock(lock_t *p) {
    while (tas(p, 1) == 1)   // spin until we atomically flip 0 → 1
        continue;
    …
}
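A working version of the same spin lock, sketched with C11 atomics; atomic_flag_test_and_set and atomic_flag_clear are the standard library's test-and-set and clear, while the other names are ours:

#include <stdatomic.h>

typedef atomic_flag spinlock_t;          // name assumed; the flag type is standard

void spin_lock(spinlock_t *p) {
    while (atomic_flag_test_and_set(p))  // atomically: old = *p, *p = 1
        continue;                        // keep spinning until old was 0
}

void spin_unlock(spinlock_t *p) {
    atomic_flag_clear(p);                // atomically: *p = 0
}

// usage: spinlock_t l = ATOMIC_FLAG_INIT; spin_lock(&l); … spin_unlock(&l);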
Pipes
review: Hard Modularity
1. multiple machines / message passing
2. virtual machines
→ implement message passing atop syscall virtualization: this is what pipes do
problem ex. (A & B & C & D) | (W & X & Y)
A → |      | → W
B → | Pipe | → X
C → |      | → Y
D → |      |
simplified version:
1. read and write transfer only 1 byte at a time
2. no virtualization: no syscalls or traps, just function calls
enum { N = 512 };   // must be a power of 2, so % N indexing stays
                    // correct even when r and w overflow and wrap
struct pipe {
    char buf[N];
    size_t r, w;    // running counts of bytes read and written
    lock_t l;
};
void write(struct pipe *p, char c) {
    lock(&p->l);
    while (p->w - p->r == N)    // pipe full
        continue;               // BUG: spins while holding the lock
                                // (see the note and fix at the end)
    p->buf[p->w++ % N] = c;
    unlock(&p->l);
}

char read(struct pipe *p) {
    lock(&p->l);
    while (p->w == p->r)        // pipe empty
        continue;               // BUG: same spin-with-lock problem
    char c = p->buf[p->r++ % N];
    unlock(&p->l);
    return c;
}
Coarse-grain locks
+ simpler
+ easier to implement
- don’t scale → bottlenecks
Fine-grain locks
+/- the opposite trade-offs: scale better, but more complex and easier to get wrong
ex. give the pipe two locks, rl and wl (readers take rl, writers take wl)
write:
    lock(&p->wl);
    …               // critical section
    unlock(&p->wl);
    return
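A sketch of this fine-grained pipe (field and function names assumed; read2, not shown, mirrors write2 using rl; this leans on read/write coherence of r and w and ignores memory-ordering subtleties on modern CPUs):

struct pipe2 {
    char buf[N];
    size_t r, w;    // only the reader advances r; only the writer advances w
    lock_t rl, wl;  // readers serialize on rl, writers on wl
};

void write2(struct pipe2 *p, char c) {
    lock(&p->wl);               // excludes other writers only
    while (p->w - p->r == N)    // a reader can still advance r, since
        continue;               // read2 takes rl, not wl: no deadlock here
    p->buf[p->w % N] = c;
    p->w++;                     // publish the byte only after storing it
    unlock(&p->wl);
}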
Critical sections
+ get mutual exclusion (safety)
+ get bounded wait (performance)
lock(&l);
a();
b();
-----------------------
while (c())
    d();
e();
-----------------------
f();
unlock(&l);
(the dashed lines mark candidate boundaries for splitting one big critical section into smaller ones)
Goldilocks principle for critical sections:
too large → bad performance
too small → race conditions
make them just the right size:
1. look for shared state
2. look for writes to that shared state
3. look for dependent reads
4. put lock/unlock boundaries around them (see the sketch below)
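A hypothetical sketch of step 4, reusing the account struct from the bank sketch: the lock covers exactly the dependent read-modify-write of shared state, while thread-private work stays outside (log_line is a made-up helper):

#include <stdio.h>

void deposit(struct account *a, long d) {
    char msg[64];
    snprintf(msg, sizeof msg, "deposit %ld", d);  // thread-private buffer: no lock
    lock(&a->l);        // just right: covers only the shared balance
    a->balance += d;    // dependent read (old balance) + write (new one)
    unlock(&a->l);
    log_line(msg);      // hypothetical logging helper, also outside the lock
}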
ex. A | B: if A is blocked waiting for its own input, B grabs the pipe's lock and spins on the empty pipe; when A finally has a byte to write, it can never acquire the lock, so the pipeline hangs.
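One possible fix, as a sketch: re-check the condition with the lock held, but drop the lock while waiting so the other side can get in (still busy-waiting; a real OS would block the thread instead):

void write_fixed(struct pipe *p, char c) {
    for (;;) {
        lock(&p->l);
        if (p->w - p->r != N)   // not full: proceed, still holding the lock
            break;
        unlock(&p->l);          // full: release the lock so a reader can run
    }
    p->buf[p->w++ % N] = c;
    unlock(&p->l);
}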