Process | Arrival Time | Run Time | Wait Time
A | 0 | 5 | 0
B | 1 | 2 | t
C | 2 | 9 | 2t
D | 3 | 4 | 3t
(Here t is the cost of one context switch: under Round Robin with a quantum of 1, each newly arrived process first runs after paying one context switch per process already in the queue, so waits stay short.)
Unfortunately, there are downsides to Round Robin scheduling: it preempts constantly, so it performs far more context switches than a run-to-completion policy would, and each switch costs t of pure overhead.
There are two types of priorities that can be set: external and internal. External priorities are set by users, while internal priorities are computed by the operating system's scheduler itself to keep track of which jobs matter most. Here are some scheduling policies and the priorities they assign (a lower value means the process runs sooner):
Policy | Priority | Fairness
Shortest Job First (SJF) | length of job | Unfair: long jobs may never get to run
First Come First Serve (FCFS) | arrival time | Fair: processes can't starve
Longest Job First (LJF) | -(length of job) | Unfair: short jobs may never get to run
Eggert Scheduling (ES) | length of job + arrival time | Unfair: some processes may never get to run
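As a sketch of how these formulas might be used (the struct and function names here are illustrative, not from the notes), a scheduler could simply pick the runnable job with the smallest priority value:

#include <stddef.h>

struct job { long arrival; long length; };

// the priority formulas from the table; lower value = scheduled sooner
long priority_sjf(const struct job *j)  { return j->length; }
long priority_fcfs(const struct job *j) { return j->arrival; }
long priority_ljf(const struct job *j)  { return -j->length; }
long priority_es(const struct job *j)   { return j->length + j->arrival; }

// pick the runnable job with the smallest priority value
size_t pick_next(const struct job *jobs, size_t n,
                 long (*prio)(const struct job *)) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (prio(&jobs[i]) < prio(&jobs[best]))
            best = i;
    return best;
}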
Use the ps command to learn more about the processes running on a system; its flags control which processes are listed and which columns are shown. Generally the priority shown will be a combination of the external and internal priorities. We will look at a specific command used to modify a process's external priority: nice.
Some facts about nice:

$ nice make         # adds 5 to the niceness (the default increment)
$ nice -n9 make     # adds 9 to the niceness
$ nice -n-9 make    # subtracts 9 from the niceness; only root can make a process "meaner"
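A process can also adjust its own niceness programmatically through the nice() system call. A minimal sketch (the increment of 5 here is arbitrary):

#include <stdio.h>
#include <unistd.h>
#include <errno.h>

int main(void) {
    errno = 0;
    int n = nice(5);              // ask the kernel to add 5 to our niceness
    if (n == -1 && errno != 0) {  // -1 can be a legitimate niceness, so check errno too
        perror("nice");
        return 1;
    }
    printf("new niceness: %d\n", n);
    // as with the shell command, a negative increment requires root
    return 0;
}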
There are several problems that can potentially occur during scheduling. First, with preemptive scheduling, the running process can be thrown out at any point in favor of a different job, interrupting a crucial action the program is taking. The same problem arises when scheduling threads: a single thread could be executing an important action and suddenly lose the CPU. This means a programmer has to write the application under the assumption that the CPU is incredibly flaky.
Preventing race conditions removes a class of difficult-to-debug errors from your code. Several methods protect against race conditions; the one we examine here is the atomic action.
There are two important attributes of atomic actions: they are small, and thus easy to implement, and they are indivisible.
Small atomic actions: the smaller the action (ideally a single load or store of an aligned machine word), the easier it is for the hardware to perform it atomically.
Indivisible atomic actions: other threads can observe the state before an atomic action or after it, but never an intermediate state in the middle of one.
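As a quick illustration (using C11 atomics, which these notes do not otherwise assume), the language can provide small, indivisible actions directly:

#include <stdatomic.h>

atomic_long counter;

void increment(void) {
    // a single indivisible read-modify-write: no other thread can
    // ever observe a half-finished increment of counter
    atomic_fetch_add(&counter, 1);
}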
Let's look at an example explaining how to "cheat" while implementing atomic actions:
Suppose we write a program for a bank that allows users to deposit and withdraw money.
Now suppose that, with an initial balance of $100, we successfully execute processes A, B, and C almost simultaneously, and the ending balance is $105. (For the arithmetic below to work out, A must deposit $10 and B must withdraw $5; C can be any action that leaves the balance unchanged, such as an audit.)
What if we had looked inside the bank balance between process A and process B? We would have seen $95, which means that process B executed before process A. On the outside, however, the program's behavior looks correct, and this is allowed! In operating systems this is known as the principle of serializability. It is directly related to observability: if the user cannot see the bug, it does not exist.
Now let's write the code for the deposit and withdraw functions.
#include <stdbool.h>
#include <limits.h>

long long balance;  // prevent overflow by making balance a long long

bool deposit(long long pennies) {
    if (!(0 <= pennies && pennies <= LLONG_MAX - balance))  // checks for overflow
        return false;
    balance += pennies;  // what if another thread changes balance between the check and this update?
    return true;
}

bool withdraw(long long pennies) {
    if (!(0 <= pennies && pennies <= balance))
        return false;
    balance -= pennies;  // same race: balance may have changed since the check
    return true;
}
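To make the race concrete, here is a small hypothetical test harness using POSIX threads (it assumes the balance and deposit() definitions above are in the same file). With no synchronization, updates to balance are often lost and the final total comes up short:

#include <stdio.h>
#include <pthread.h>

#define N 1000000

void *depositor(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        deposit(1);          // two threads race on balance += pennies
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, depositor, NULL);
    pthread_create(&t2, NULL, depositor, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    // without synchronization this often prints less than 2*N
    printf("balance = %lld\n", balance);
    return 0;
}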
We need to fix the regions where two different threads running at the same time can corrupt the balance variable. Here is an attempt at fixing withdraw:
bool withdraw(long long pennies) {
    long long b = balance;   // take a snapshot of the balance
    if (!(0 <= pennies && pennies <= b))
        return false;
    // but a deposit that lands here is lost when we store below
    balance = b - pennies;
    return true;
}
We fixed one issue but created another in the process. Clearly we need a new tool to help coordinate parallel threads. The solution: critical sections.
A critical section is a set of instructions such that while your instruction pointer is executing that set, no other instruction pointer is pointing at a critical section for that same object. Less formally, a critical section is a section of code where other threads can't mess with a shared data structure.
There are two main properties a critical section must provide: mutual exclusion (at most one thread inside at a time) and bounded wait (a thread trying to enter eventually gets in).
There is a tension between mutual exclusion and bounded wait: the first pushes toward large critical sections (safety), while the second needs them small (so waiting threads get in quickly). The solution is the Goldilocks Principle: make the critical section large enough to be safe, but no larger than it has to be.
We want to run our pipes really fast, so we're going to give up memory safety to improve performance. Here is a first (very flawed) attempt at implementing pipes using threads:
#define PIPE_SIZE (1 << 12)

struct pipe {
    char buff[PIPE_SIZE];
    unsigned r, w;   // free-running read/write counters; indices are taken mod PIPE_SIZE
};

char readc(struct pipe *p) {
    while (p->r == p->w)   // protect against reading from an empty pipe
        continue;
    return p->buff[p->r++ % PIPE_SIZE];
}

void writec(struct pipe *p, char c) {
    while (p->w - p->r == PIPE_SIZE)   // protect against writing to a full pipe
        continue;
    p->buff[p->w++ % PIPE_SIZE] = c;
}
When two processes read and write the pipe at the same time, it is clear they can interfere with each other, corrupting r, w, or the buffer contents.
To fix this issue a lock (that defines a critical section) will have to be used. Specifically, we are going to use a spin lock to fix our program. We can lock and unlock the critical section using two functions, aptly named lock and unlock.
void lock(lock_t *l);     /* spin until we acquire the lock */
void unlock(lock_t *l);   /* release the lock; the caller must hold it */
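These notes do not give lock's implementation; a minimal spin-lock sketch, under the assumption that lock_t is an int and the GCC/Clang atomic builtins are available, might look like this:

typedef int lock_t;

void lock(lock_t *l) {
    // atomically set *l to 1 and get the old value; if the old value
    // was 1, someone else holds the lock, so spin and retry
    while (__sync_lock_test_and_set(l, 1))
        continue;
}

void unlock(lock_t *l) {
    __sync_lock_release(l);   // atomically store 0, releasing the lock
}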
Now we can try to fix our buggy code using locks:
lock_t l;   // one global lock shared by all pipes

char readc(struct pipe *p) {
    // critical section start
    lock(&l);
    while (p->r == p->w)   // problem: we spin while holding the lock, so no writer can ever fill the pipe
        continue;
    return p->buff[p->r++ % PIPE_SIZE];   // problem: returns without releasing the lock
    unlock(&l);
    // critical section end (never reached)
}

void writec(struct pipe *p, char c) {
    // critical section start
    lock(&l);
    while (p->w - p->r == PIPE_SIZE)   // same problem: spins while holding the lock
        continue;
    p->buff[p->w++ % PIPE_SIZE] = c;
    unlock(&l);
    // critical section end
}
This enforces mutual exclusion, but unfortunately the critical sections are too large: readc waits for data while holding the lock, so a writer can never enter its own critical section to supply that data, and both spin forever. We need to try again, this time with smaller critical sections.
lock_t l;

char readc(struct pipe *p) {
    lock(&l);
    while (p->r == p->w) {
        // release and immediately reacquire the lock so a writer gets a chance to run
        unlock(&l);
        lock(&l);
    }   // problem: we're still spinning too much!
    char c = p->buff[p->r++ % PIPE_SIZE];
    unlock(&l);
    return c;
}
There are two problems with the above code. The first, marked in the comment, is that we still burn CPU time spinning. The second is that our single global lock excludes too much. To visualize this, consider two different pipes P and Q: if one thread holds the lock while operating on pipe P, another thread cannot operate on the completely unrelated pipe Q, because the same lock guards every pipe.
To fix the second problem the lock will be moved into the pipe struct:
struct pipe {
    char buff[PIPE_SIZE];
    unsigned r, w;
    lock_t l;   // one lock per pipe
};

char readc(struct pipe *p) {
    lock(&p->l);
    while (p->r == p->w) {
        unlock(&p->l);   // let a writer into this pipe's critical section
        lock(&p->l);
    }
    char c = p->buff[p->r++ % PIPE_SIZE];
    unlock(&p->l);
    return c;
}
This fixes the problem of one pipe's lock restricting access to other pipes. The same per-pipe lock also lets us implement other operations safely, such as checking whether a pipe is empty:
bool isempty(struct pipe *p) {
    lock(&p->l);
    bool ise = p->r == p->w;
    unlock(&p->l);
    return ise;
}
We can make a finer-grained lock by putting two locks in the pipe struct:
struct pipe {
    char buff[PIPE_SIZE];
    unsigned r, w;
    lock_t rl;   // read lock: guards the read end (r)
    lock_t wl;   // write lock: guards the write end (w)
};
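With separate locks, a reader waiting for data no longer blocks writers at all, since each end of the pipe is guarded independently. A sketch of readc under this scheme (assuming an aligned unsigned load is itself atomic, so reading p->w without holding wl is benign):

char readc(struct pipe *p) {
    lock(&p->rl);            // serialize readers only
    while (p->r == p->w)     // a writer holding wl can still run and advance p->w
        continue;
    char c = p->buff[p->r++ % PIPE_SIZE];
    unlock(&p->rl);
    return c;
}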