CS 111 10/29/2014
Written by:  Mohammed Junaid Ahmed, Robert Abbott

Lecture 8: Consistency; Critical Sections


From last Lecture:


Preemption & Scheduling:

Round Robin Scheduling:
  Starvation is possible: if new processes are continually added to the front of the queue, old jobs never get completed
    solution: give old jobs priority; this eliminates the starvation problem (one way to do it is sketched below)
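
  One simple way to give old jobs priority (a sketch of my own, not from the lecture): keep the ready queue FIFO, so new arrivals go at the tail and the scheduler always picks the oldest runnable job from the head.

    // Minimal FIFO ready queue; 'struct proc' and MAXPROCS are assumed names
    struct proc;
    enum { MAXPROCS = 64 };
    struct proc *ready[MAXPROCS];
    int head = 0, tail = 0, count = 0;

    void enqueue (struct proc *p)       // new or preempted job goes to the tail
    {
      if (count == MAXPROCS) return;    // queue full; ignored in this sketch
      ready[tail] = p;
      tail = (tail + 1) % MAXPROCS;
      count++;
    }

    struct proc *dequeue (void)         // scheduler runs the oldest job first
    {
      if (count == 0) return 0;
      struct proc *p = ready[head];
      head = (head + 1) % MAXPROCS;
      count--;
      return p;
    }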

Overflow Example:
  long long balance;   // shared account balance

  bool deposit (long long amt)
 {
    if (amt > LLONG_MAX - balance) { return false; }  // adding amt would overflow
    balance += amt;
    return true;
 }

  bool withdraw (long long amt)
 {
    if (balance < amt) { return false; }
    balance -= amt;
    return true;
 }
  // race conditions are still possible if deposit and
  // withdraw occur at roughly the same time
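
  A minimal runnable sketch (my own, not from the lecture) that makes the lost update visible: two threads repeatedly add 1 and subtract 1 from the shared balance, so the final value should be 0; but because each update is a separate load, add, and store, preemption between the load and the store typically loses some updates.

    #include <pthread.h>
    #include <stdio.h>

    long long balance = 0;

    void *hammer (void *arg)
    {
      long long delta = *(long long *)arg;
      for (int i = 0; i < 1000000; i++)
        balance += delta;               // load, add, store -- not atomic
      return 0;
    }

    int main (void)
    {
      long long plus = +1, minus = -1;
      pthread_t t1, t2;
      pthread_create(&t1, 0, hammer, &plus);
      pthread_create(&t2, 0, hammer, &minus);
      pthread_join(t1, 0);
      pthread_join(t2, 0);
      printf("final balance = %lld (should be 0)\n", balance);
      return 0;
    }

  Compile with -pthread; the printed balance is typically nonzero, which is the race.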

Synchronization: the goal is to prevent race conditions from occurring. We could eliminate preemption altogether, but a less drastic option is to add system calls that disable and re-enable preemption; while preemption is disabled, the scheduler cannot preempt the process.

  - Assume preemption will only be disabled for bounded (short) periods of time
   bool withdraw (long long amt)
  {
    disable_preemption();
    // CRITICAL SECTION
    if (balance < amt) {
      enable_preemption();   // must re-enable before the early return
      return false;
    }
    balance -= amt;
    enable_preemption();
    // END CRITICAL SECTION
    return true;
  }

Rules for finding critical sections:
    - Find variables shared by multiple threads
    - Look for accesses to the shared variables, especially writes
    - For each write, look for 'dependent reads': reads whose data
        is used to calculate the new value written to the shared
        variable (see the annotated example below)
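
  Applying the rules to withdraw (the annotations are mine):

    bool withdraw (long long amt)
    {
      if (balance < amt) {       // dependent read: this read of the shared
        return false;            // variable decides whether the write happens
      }
      balance -= amt;            // the write (itself a read-modify-write)
      return true;
    }
    // The dependent read and the write must fall in the same critical section.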

  Example: new bank API methods
  void transfer (acct1, acct2, amt)
  {
    // checks on both acct1 and acct2 must be one critical section
    // neither account can be altered while the other is being checked
  }
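
  A minimal sketch of transfer under the disable-preemption approach; the lecture only gives the signature, so the bool return type and the account representation (plain long long balances passed by pointer) are my assumptions:

  bool transfer (long long *acct1, long long *acct2, long long amt)
  {
    disable_preemption();
    // CRITICAL SECTION: check and update both accounts together
    if (*acct1 < amt || amt > LLONG_MAX - *acct2) {
      enable_preemption();
      return false;          // insufficient funds, or the deposit would overflow
    }
    *acct1 -= amt;
    *acct2 += amt;
    enable_preemption();
    // END CRITICAL SECTION
    return true;
  }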

  bool audit ()
  {
    // lots of reads, one write
    // return ok or failed
  }
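
  A minimal sketch of audit along the same lines; the globals accounts[], naccounts, expected_total, and last_audit_ok are my assumptions, not names from the lecture:

  bool audit (void)
  {
    disable_preemption();
    // CRITICAL SECTION: all of the reads must see one consistent snapshot
    long long sum = 0;
    for (int i = 0; i < naccounts; i++)
      sum += accounts[i];              // lots of reads
    bool ok = (sum == expected_total);
    last_audit_ok = ok;                // one write
    enable_preemption();
    // END CRITICAL SECTION
    return ok;                         // ok or failed
  }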
  These solutions only work if there is a single CPU. If there are
  multiple cores, disabling preemption doesn't stop the other CPUs
  from altering a shared resource.

  This approach is only used in embedded systems.


Keys to better synchronization:
  isolation: two processes operate in different address spaces
  atomicity: enforcing isolation even when threads share variables
    - two actions Ai, Bi
    - Ai will completely finish execution before Bi starts OR
    - Bi will completely finish execution before Ai starts

Serializability: an explanation for / justification of the CPU's
    observable behaviour. If we can explain the observed execution as some
    serial ordering of the threads (e.g., as if one thread ran to completion
    before the other started), the behaviour is consistent with a correct
    program.

Threaded Operating System:
    enum { BUFSIZE = 1024 };   // must be a power of two (explained below)

    struct pipe {
      char buf[BUFSIZE];
      unsigned r, w;           // read/write offsets; they only increase, so
                               // w - r is the number of unread bytes
    };

    // busy wait handles full and empty pipes but not synchronization
    void write1 (struct pipe *p, char c)
    {
      // busy wait until the pipe has room
      while (p->w - p->r == BUFSIZE) {
        continue;
      }
      p->buf[p->w++ % BUFSIZE] = c;
    }

    char read1 (struct pipe *p)
    {
      // busy wait until the pipe has a byte to read
      while (p->w == p->r) {
        continue;
      }
      return p->buf[p->r++ % BUFSIZE];
    }
Handling Synchronization with locking:
    // BUFSIZE must be a power of two (so w % BUFSIZE stays correct when the
    // unsigned offsets wrap around)
    enum { BUFSIZE = 1024 };
    void write1 (struct pipe *p, char c) {
      // busy wait
      // infinite loops in critical sections are a no-no,
      // so... lock and unlock each time through the loop
      while (1) {
        lock();
        if (p->w - p->r != BUFSIZE) break;   // not full: keep the lock and write
        unlock();
      }
      p->buf[p->w++ % BUFSIZE] = c;
      unlock();                              // release after the write
    }

    char read1 (struct pipe *p) {
      // busy wait
      // infinite loops in critical sections are a no-no,
      // so... lock and unlock each time through the loop
      while (1) {
        lock();
        if (p->r != p->w) break;   // not empty: keep the lock and read
        unlock();
      }
      char c = p->buf[p->r++ % BUFSIZE];
      unlock();
      return c;
    }
  Using this spin-lock method is not very efficient: a waiting thread burns
  CPU cycles spinning instead of doing useful work.


How to implement a lock:

    We will implement lock() by using lower-level atomic
    primitives which will prevent race conditions.

    Atomic Primitives:
    atomic load & store
      - on x86, 32-bit aligned loads and stores are atomic
    int locked;

    void lock (void) {
      // critical section
      while (locked) {
        continue;
      }
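      // flaw: another thread can also see locked == 0 here, before the store below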
      locked = 1;
      // end critical section
    }

    void unlock (void) {
      // critical section
      locked = 0;
      // end critical section
    }
This implementation is flawed and will be discussed further next lecture…