Prepared by Ryan Hansberry, James Baumgarten, and Kasey Carrothers, for a lecture given by
Paul Eggert at UCLA on March 25, 2013.
Leftovers from Last Lecture
Round Robin Scheduling:
Round Robin scheduling is a form of preemptive scheduling.
Each task gets a time slice, say 10 ms, and the scheduler cycles through the tasks, giving each one a turn.
An advantage of RR is a low wait time.
The non-preemptive schedulers can all work with preemption: just rerun the scheduler at the end of each time slice.
"Polite" Round Robin
New arrivals go to the front of the queue, so they get to use the CPU immediately.
Possible problem: a task may starve if new tasks keep showing up in front of it.
"Rude" Round Robin
New arrivals are put at the end of the existing queue.
Usually easier to implement, because you already have a primitive to put processes at the end of the queue.
Starvation is no longer an issue.
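A rough sketch of the rude variant (the task and queue types here are made up for illustration): new arrivals and preempted tasks both go to the tail of the ready queue.

struct task {
    struct task *next;
    // ... saved registers, stack pointer, etc. ...
};

struct queue {
    struct task *head, *tail;
};

// New arrivals and preempted tasks both go to the tail,
// so a task already in line cannot be starved.
void enqueue(struct queue *q, struct task *t)
{
    t->next = NULL;
    if (q->tail)
        q->tail->next = t;
    else
        q->head = t;
    q->tail = t;
}

// The scheduler takes the head task and runs it for one time slice.
struct task *dequeue(struct queue *q)
{
    struct task *t = q->head;
    if (t) {
        q->head = t->next;
        if (!q->head)
            q->tail = NULL;
    }
    return t;
}

The polite variant would instead put new arrivals at the head, which is what opens the door to starvation.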
Threads and Synchronization
The main software engineering problem with threads is synchronization, that is, coordinating access to shared memory.
The default behavior of threads is unsynchronized access; they can all stomp on each other's memory.
Bill Gates's Bank Account
It's easy to write buggy, incorrect code when working with threads. Let's examine code that works with a bank account:
#include <stdbool.h>

long long int balance = 0; // measures pennies

void deposit(long long int amt)
{
    balance += amt;
}

bool withdraw(long long int amt)   // returns bool, since it can fail
{
    if (amt <= balance) {
        balance -= amt;
        return true;
    }
    return false;
}
However, if this application is multithreaded, two threads might be able to interfere with each other.
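A self-contained sketch of the failure (using POSIX threads; the iteration count is arbitrary): two depositor threads run the same deposit() code, and some updates are lost because balance += amt is not atomic.

#include <pthread.h>
#include <stdio.h>

long long int balance = 0;   // pennies, shared by both threads

void deposit(long long int amt)
{
    // balance += amt compiles to a load, an add, and a store;
    // two threads can both load the old value, so one update is lost.
    balance += amt;
}

void *depositor(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        deposit(1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, depositor, NULL);
    pthread_create(&t2, NULL, depositor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %lld\n", balance);   // often less than 2000000
    return 0;
}

Compiled with gcc -pthread, this usually prints a total smaller than 2000000.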
Dealing with Threads
The volatile Keyword
Tells the compiler not to optimize accesses to the variable.
Never caches the variable's value in a register.
Still does not solve the synchronization problem: interrupts can still happen between the load and the store of a read-modify-write.
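For instance, declaring the balance volatile forces every access to go to memory, but the += in deposit() is still a separate load, add, and store that another thread can interleave with:

volatile long long int balance;   // every read and write goes to memory

void deposit(long long int amt)
{
    balance += amt;   // still load, add, store; not atomic
}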
Synchronization Primitives
Pretend there is a syntax like so: [[...]]
Means that everything in the brackets cannot be interrupted.
Adds two machine instructions: one to begin and one to end the critical section.
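As a sketch, with hypothetical begin/end calls standing in for the two instructions (the [[ and ]] of the pretend syntax), deposit() could be wrapped like this:

// Hypothetical stand-ins for the begin/end critical section instructions;
// these are not real library functions.
void begin_critical_section(void);
void end_critical_section(void);

extern long long int balance;   // the shared account balance from above

void deposit(long long int amt)
{
    begin_critical_section();
    balance += amt;             // no interrupt can occur in here
    end_critical_section();
}

Note that returning early between the two calls would leave the critical section open, the first of the problems listed below.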
Potential Problems:
Never closing a critical section
Forgetting to enclose code that is critical
Closing a critical section without opening one.
Nesting critical sections (assume this is not allowed).
Pipes and Critical Sections
How can we synchronize pipes? Let's consider the following code:
enum { N = 1024 };

// Assume this pipe will be used for 1-byte I/O.
struct pipe {
    char buf[N];       // circular buffer
    unsigned r, w;     // read and write indices (free-running counters)
};

void writec(struct pipe *p, char c)
{
    p->buf[p->w++ % N] = c;
}

char readc(struct pipe *p)
{
    return p->buf[p->r++ % N];
}
If the pipe is empty, readc() can either wait or return an error. Let's look at an implementation that waits:
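A plausible busy-waiting sketch (since r and w are free-running counters, r == w means the pipe is empty):

char readc(struct pipe *p)
{
    while (p->r == p->w)
        continue;               // empty: spin until a writer adds a byte
    return p->buf[p->r++ % N];
}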
Problem: two writers can run writec() at the same time and race on the w index. Let's consider locks to solve this...
Locks
Race conditions can be resolved using critical sections.
Locks are used to control thread access to critical sections.
Spinlocks
Primitives:
void lock(spinlock_t *l);
void unlock(spinlock_t *l);
Keep in mind:
Caller must ensure waits are bounded.
Do not lock something twice.
You cannot unlock something that has not been locked by you.
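One way these primitives might be implemented, as a minimal sketch assuming a C11 atomic test-and-set flag (the lecture's actual spinlock_t may be defined differently):

#include <stdatomic.h>

typedef atomic_flag spinlock_t;   // assumption: the lock is just an atomic flag

void lock(spinlock_t *l)
{
    // Atomically set the flag; if it was already set, someone else
    // holds the lock, so keep spinning until it is released.
    while (atomic_flag_test_and_set(l))
        continue;
}

void unlock(spinlock_t *l)
{
    atomic_flag_clear(l);   // only the current holder should do this
}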
Lock granularity must be chosen based on the Goldilocks Principle:
Too large a critical section hurts performance: threads wait for the lock even when they would not actually conflict.
Too small a critical section may not cover every access to shared data, so race conditions remain.
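For example, a sketch of writec() protected by a per-pipe spinlock (the lock field is an addition for this sketch, not part of the struct above); the critical section covers exactly the update of w and the store into buf:

struct pipe {
    char buf[N];
    unsigned r, w;
    spinlock_t l;      // added for this sketch: one lock per pipe
};

void writec(struct pipe *p, char c)
{
    lock(&p->l);                 // begin critical section
    p->buf[p->w++ % N] = c;      // only one thread touches w and buf at a time
    unlock(&p->l);               // end critical section
}

readc() would take the same lock, so the sections stay small but still cover every access to the shared indices.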
Spinlocks have an efficiency problem: a waiting thread keeps the CPU busy spinning until the lock becomes free.
How about a new primitive that blocks if the resource is not available?