COMPUTER SCIENCE 111 - WINTER 2012 - LECTURE 9 SCRIBE NOTES

Lecture 9: Synchronization Primitives - Deadlock

Table of Contents

  1. Synchronization with lock/unlock technique:
    1. Basic lock implementation:
    2. Read lock vs. Write lock:
    3. readc() vs. writec():
  2. Blocking - Mutexes:
  3. Condition Variables
  4. Deadlock
    1. Four conditions to have deadlock:
    2. How to avoid deadlock:



1/ Synchronization with lock/unlock technique:

* Issue: Critical sections alone cannot guarantee correct synchronization in today's complex programs. We need more powerful tools for this task.

a) Basic lock implementation:

Simple lock/unlock technique:


typedef bool lock_t;

void lock(lock_t *l) {
    // Note: the test and the set must execute atomically (inside a critical
    // section); otherwise another process can grab the lock in between.
    while (*l)
         continue;
    *l = true;
}

void unlock (lock_t *l) {
    *l = false;
}
*** In the x86 instruction set, for example, "lock incl x" tells the hardware to grab the bus, execute incl x, then release the bus, so the increment happens atomically.
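As a minimal sketch (my assumption; the lecture did not show this code), an atomic increment using the lock prefix could be written with GCC-style inline assembly:

static inline void atomic_incl(int *x) {
    asm volatile ("lock; incl %0"
                  : "+m" (*x)     // *x is read and written in place
                  : : "cc");      // the increment also updates the flags
}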

To understand how lock() can be built, let us examine another function named xchgl(x,r), which exchanges the contents of r and x atomically. (Restriction: r and x must be aligned.)


int xchgl(int *p, int new) {
    // This C version only describes the semantics; the real implementation is
    // the corresponding atomic assembly instruction: asm("xchgl %...)
    int old = *p;
    *p = new;
    return old;
}
=> This C implementation will not work properly: it creates a race condition when multiple threads run it, since another thread can slip in between the load and the store. The exchange must therefore be performed by the hardware as one atomic operation, with the bus locked while the data is exchanged, which is exactly what the real xchgl instruction does.
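A possible atomic replacement for the C draft above, as a minimal sketch in GCC-style inline assembly (the exact syntax is my assumption, not from the lecture; xchg with a memory operand is locked by the hardware automatically):

static inline int xchgl(int *p, int new) {
    asm volatile ("xchgl %0, %1"            // swap the register and the memory word
                  : "+r" (new), "+m" (*p)
                  : : "memory");            // also acts as a compiler barrier
    return new;                             // now holds the old value of *p
}

With an atomic xchgl available, the spinlock below simply retries the exchange until it sees that the old value was 0 (i.e., the lock was free).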

typedef int lock_t;

void lock(lock_t *l) {
    while (xchgl(l, 1) != 0)   // spin until the old value was 0 (lock was free)
         continue;
}

This implementation guarantees that no other thread can jump in and modify the protected data while the lock is held, provided that these two conditions hold: every thread touches the data only between lock() and unlock(), and no thread tries to re-acquire a lock it already holds.
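For completeness, a minimal sketch of the matching unlock for this int-based lock (on x86 an aligned word store is atomic, so a plain assignment is enough):

void unlock(lock_t *l) {
    *l = 0;   // release the lock; aligned int stores are atomic on x86
}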

*** What if we want to increase a shared variable v by two when the function is called? We protect the update with the lock:

....
lock_t vl;
int v;
...
lock (&vl);
v += 2;
unlock(&vl);
....
Here is another example of how the lock works. Suppose we need a function named cmpxchgl() that compares and swaps two values. First, draft the function this way:

bool cmpxchgl (int *p, int old, int new) {
     if (*p == old)  {
         *p = new;
         return true;
     }
     else return false;
}   
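As with xchgl, the C draft above only describes the semantics; the real compare-and-swap must execute as a single atomic operation. A minimal sketch of an atomic replacement using a GCC builtin (my choice for illustration; on x86 it compiles down to lock cmpxchgl):

#include <stdbool.h>

static inline bool cmpxchgl(int *p, int old, int new) {
    // Atomically: if (*p == old) { *p = new; return true; } else return false;
    return __sync_bool_compare_and_swap(p, old, new);
}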
then somewhere in the program the function is called like this: ... while (!cmpxchgl(&v, v, v+2)) ...

=> This call gives undefined results: while cmpxchgl compares the value of v, another thread may already have changed it, and the arguments v and v+2 themselves may be built from two different reads of v. This is a typical issue when accessing shared resources from multiple threads.

=> To correct the problem, take a single snapshot of v and retry the compare-and-swap until it succeeds:

int old;
do {
   old = v;   // an aligned load of v is atomic on an x86 machine; we assume that in general here
} while (!cmpxchgl(&v, old, old + 2));
* Question: is this code absolutely safe?


b) Read lock vs. Write lock:

* Issue: How do we lock large objects? We cannot afford an unlimited number of fine-grained locks, and a single exclusive lock needlessly serializes readers that could safely run in parallel.


struct rwlock {
    lock_t l;      // low-level lock protecting the fields below
    int readers;   // number of readers currently accessing the object
    bool writer;   // is anyone currently writing to the object?
};

void lock_for_read (struct rwlock *l) {
    lock(& l->l);
    l->readers++;   // reader count is updated under the low-level lock (its bounds/overflow are a minor concern)
    unlock(& l->l);
}

void unlock_for_read (struct rwlock *l) {
    lock(& l->l);
    l->readers--;
    unlock(& l->l);
}
So far, this implementation prevents any foreseeable race condition among readers. We move on to the write side:

void lock_for_write (struct rwlock *l) {
    lock(& l->l);
    l->writer = true; 
    unlock(& l->l);
}
It turns out that this implementation is problematic: before writing, we must check whether any readers currently hold the lock and wait until they are gone. The corrected lock_for_write performs that check:

void lock_for_write (struct rwlock *l) {
    while (l->readers)   // spin (without the lock) until there appear to be no readers
         continue;
    for ( ; ; ) {
         lock(& l->l);
         if (!l->readers)   // re-check under the lock; on success, exit still holding l->l
              break;
         unlock(& l->l);
    }
}

void unlock_for_write (struct rwlock *l) {
    unlock(& l->l);   // release the low-level lock held since lock_for_write returned
}
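A hypothetical usage sketch (the names are mine, not from the lecture): many readers may hold the read lock at once, while a writer waits until all readers are gone and then holds the low-level lock exclusively for the duration of the write.

struct rwlock rw;    // assume zero-initialized (lock free, no readers)
int shared_data;

int reader(void) {
    lock_for_read(&rw);
    int v = shared_data;    // may run concurrently with other readers
    unlock_for_read(&rw);
    return v;
}

void writer(int v) {
    lock_for_write(&rw);    // returns holding rw.l, so the write is exclusive
    shared_data = v;
    unlock_for_write(&rw);
}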

c) readc() vs. writec():


struct pipe {
    lock_t l;       // lock protecting the pipe
    char buf[N];    // circular buffer (N is some fixed buffer size)
    size_t r, w;    // total number of bytes read and written so far
};

void writec (struct pipe *p, char c) {
    for (;;) {
         lock(& p->l);
         if (p->w - p->r < N)   // is there room in the buffer?
              break;            // yes: exit the loop, still holding the lock
         unlock(& p->l);        // no: release the lock and try again
    }
    p->buf[p->w++ % N] = c;
    unlock(& p->l);
}

char readc(struct pipe *p) {
    for(;;) {
        lock(& p->l);
        if (p->w != p->r)      // is there anything to read?
            break;             // yes: exit the loop, still holding the lock
        unlock(& p->l);        // no: release the lock and try again
    }
    char c = p->buf[p->r++ % N];
    unlock(& p->l);
    return c;
}
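A hypothetical usage sketch (names are mine): one thread produces bytes into the pipe while another consumes them; each call spins until the pipe has room or data.

#include <stdio.h>

struct pipe p;   // assume zero-initialized: r == w == 0, lock free

void producer(void) {
    const char *msg = "hi";
    for (const char *s = msg; *s != '\0'; s++)
        writec(&p, *s);       // spins while the buffer is full
}

void consumer(void) {
    for (int i = 0; i < 2; i++)
        putchar(readc(&p));   // spins while the buffer is empty
}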

This implementation still wastes too much CPU time busy-waiting. We need another technique to improve efficiency.

2/ Blocking - Mutexes:

* Issue: The locks so far still "chew up" too much CPU time spinning.

=> To improve CPU usage, another technique called the blocking mutex is introduced. The idea is simple: take the lock when you can; otherwise, give up the CPU so others can run.


typedef struct {
   ...
   bool blocked;   // is this process blocked waiting for a mutex?
   ...
} proc_t;

typedef struct bmutex {
    lock_t l;                // low-level lock protecting the fields below
    bool locked;             // is the mutex currently held?
    proc_t *blocked_list;    // linked list of processes blocked on this mutex
} bmutex_t;

void acquire (bmutex_t *b) {
    proc_t *self;   // descriptor of the current process (assumed to be set up elsewhere)
    for ( ; ; ) {
        lock(& b->l);                      // grab the low-level lock
        add (&b->blocked_list, self);      // add self to b->blocked_list
        self->blocked = true;
        if (!b->locked)                    // nobody holds the mutex
              break;                       // exit the loop, still holding b->l
        unlock(& b->l);                    // let others have a chance at the resource
        schedule();                        // give up the CPU until we are run again
    }
    // at this point we still hold b->l from the iteration that broke out of the loop
    self->blocked = false;
    b->locked = true;                      // mark the mutex as held
    remove (&b->blocked_list, self);       // remove self from blocked_list
    unlock(& b->l);
}

void release (bmutex_t *b) {
    lock(& b->l);
    b->locked = false;                      // mark the mutex as free
    if (b->blocked_list)
          b->blocked_list->blocked = false; // make the first process in b->blocked_list runnable
    unlock(& b->l);
}
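A hypothetical usage sketch (names are mine): protect a shared counter with a blocking mutex so that waiting threads yield the CPU instead of spinning.

bmutex_t counter_mutex;   // assume initialized to the unlocked state
long counter;

void bump_counter(void) {
    acquire(&counter_mutex);   // may call schedule() while someone else holds the mutex
    counter++;                 // critical section
    release(&counter_mutex);
}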

3/ Condition Variables

* Idea: Have the system wake me up when the condition is met.
* Purpose: Do not waste CPU time on waiting and polling.
* Wanted API: wait_for(p->w - p->r < N), then wake me up... (a sketch of the wait/notify interface assumed here follows this list).
* Components of a condition variable: the condition itself (an expression over shared state), a blocking mutex protecting that state, and the condvar object used to wait and notify.
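A minimal sketch of the wait/notify interface (the exact signatures are inferred from how they are called below, not stated explicitly in the lecture; condvar_t itself is assumed to be provided by the system):

// Precondition: the caller holds b.  wait() atomically releases b and blocks the
// caller until someone notifies c, then re-acquires b before returning.
void wait (condvar_t *c, bmutex_t *b);

// Wake up a process waiting on c (call while holding the blocking mutex that
// protects the condition).
void notify (condvar_t *c);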


struct pipe {            // two new fields added to the earlier struct pipe
    bmutex_t b;          // blocking mutex instead of the spin lock
    char buf[N];
    size_t r, w;
    condvar_t nonempty;  // notified when the pipe becomes nonempty
    condvar_t nonfull;   // notified when the pipe becomes nonfull
};

With the condition variables in place, the complete implementations of readc() and writec() become:


void writec (struct pipe *p, char c) {
    acquire (& p->b);
    while (p->w - p->r == N)          // buffer full: wait until it is not
        wait (& p->nonfull, & p->b);
    p->buf[p->w++ % N] = c;
    notify (& p->nonempty);           // the pipe is now certainly nonempty
    release (& p->b);
}

char readc (struct pipe *p) {
    acquire (& p->b);
    while (p->w == p->r)              // buffer empty: wait until it is not
        wait (& p->nonempty, & p->b);
    char c = p->buf[p->r++ % N];
    notify (& p->nonfull);            // the pipe now certainly has room
    release (& p->b);
    return c;
}

4/ Deadlock

* Issue: Even when we use condition variables and check the conditions carefully, synchronization problems can remain. One of them is called deadlock.

Deadlock occurs when we synchronize things too much: over-synchronization.

For example, consider the code snippet below:


 for ( ; ; )  {
     if (read(ifd, buf, sizeof(buf)) > 0)
           write(ofd, buf, 1);
 }

We know for sure this code will not work: read() and write() operate on the same shared buffer, so write() waits for read() to finish, which in turn waits for write() to finish. This is called deadlock.

For a clearer example, consider the code below:


 n = copy (ifd, ofd, n);    // run on thread #1, say
 if (copy(ofd, ifd, n)) {   // run on thread #2, say
     ....
 }

Then both threads can wait for each other forever, and this creates a deadlock.

Deadlock is actually a race condition.

* Four conditions to have deadlock:

  1. Mutual exclusion: a resource can be held by only one thread at a time.
  2. Hold and wait: a thread holds some resources while waiting to acquire others.
  3. No preemption: a resource cannot be forcibly taken away from the thread holding it.
  4. Circular wait: there is a cycle of threads, each waiting for a resource held by the next.

* How to avoid deadlock:

*Method A: The system maintains a dependency graph and denies a lock request if granting it would create a cycle in that graph.

*Method B: Number all resources in order, from 1 to n (or simply use their memory addresses), and require threads to acquire resources in increasing order; a thread must release its currently held higher-numbered resources before acquiring a lower-numbered one. A sketch follows below.
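A minimal sketch of Method B using the blocking mutex from above (the function name is mine): two mutexes are always acquired in a fixed global order based on their addresses, so no circular wait can form between threads that use this helper.

#include <stdint.h>

void acquire_both(bmutex_t *a, bmutex_t *b) {
    if ((uintptr_t) a > (uintptr_t) b) {   // order the pair by address
        bmutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    acquire(a);   // always lock the lower-addressed mutex first
    acquire(b);
}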
