The idea behind the FIRST COME FIRST SERVE (FCFS) scheduling policy is that the
request from the first process is completed first, then the request that
came after that, and so on.
Here is an example workload and how FCFS measures up:
Job   Arrival(t)   Workload
A     0            5
B     1            2
C     3            9
D     3            4
The problem with this scheduling policy is that short jobs can wait a long
time. Process B has a workload of only 2 and logically should complete first,
since it's relatively short. However, process A's request came first and takes
precedence over process B's. We need something fairer.
The assumption under SJF (SHORTEST JOB FIRST) is that we know the workload of
each job; we give shorter jobs priority so that the really fast jobs complete
first!
SJF Utilization = 20/(20+3d) (same as FCFS)
SJF Average Wait Time = (0 + (4+d) + (4+2d) + (8+3d))/4 = 4 + 1.5d
(FCFS, for comparison: (0 + (4+d) + (4+2d) + (13+3d))/4 = 5.25 + 1.5d)
SJF has the minimal average wait time, under the assumption that each job runs
to completion once started. However, this scheduling policy has a problem: it
can starve longer jobs if many more short jobs keep arriving.
What else is there?
For PREEMPTION, there is a clock interrupt every 1/100 second (a real-world
figure). The 1/100-second interval is practical because humans have a
relatively slow response rate.
Preemption breaks long jobs down into a series of smaller ones. A scheduling
policy that uses preemption is:
Round-Robin Scheduling (FCFS + Preemption)
Using the FCFS table, here is how Round-Robin works:
Process  ----------------------- time ----------------------->
A        A0    A1          A2       A3       A4
B           B0    B1 (finished!)
C                    C0       C1       C2     C3 ...
D                       D0       D1       D2     D3 ...
(Xn = the nth quantum of work on process X)
Process A has higher priority than B, B has higher priority than C, etc. This
priority could come from the user or could be a hardware property.
Round-Robin Utilization = 20/(20+20d) (worst of the three: one context switch
per quantum)
Round-Robin Average Wait Time = 1.5d (proportional to the clock-interrupt
interval; best of the three)
Turnaround time is also the worst of the three.
Priorities can be statically assigned or dynamically assigned (e.g.
because of devices or to avoid starvation).
Priority-based scheduling may be preemptive or non-preemptive.
If not careful, you may run into PRIORITY INVERSION:
Mars Pathfinder, 1997
        Tlow                Tmed               Thi
  t     (runnable)          (waiting)          (waiting)
  i     lock(&m);
  m     -------------context switch (interrupt)------------->
  e     (runnable)          (runnable)         lock(&m); (waiting on lock)
  |              <-------context switch--------
  v     (runnable)          running...         (waiting on lock)

Tlow acquires the lock and is then preempted by Thi, which blocks trying to
acquire the same lock. The scheduler then runs Tmed, which has higher priority
than Tlow, so Tlow never gets to release the lock and Thi -- the
highest-priority thread -- starves behind Tmed.
A common solution to the priority inversion problem is to temporarily give the
low-priority thread a high priority until it releases the lock, then reset its
priority. This is called PRIORITY LENDING/STEALING (also known as priority
inheritance).
HARD REALTIME scheduling must live within real-world timing constraints:
missing a deadline is a system failure (think of the brakes in a car).
SOFT REALTIME scheduling is relatively easy by comparison: a missed deadline
degrades service but is tolerable (think of a dropped video frame).
You will need mutexes in combination with scheduling in order to keep track of
each process's state. Here is an implementation of a mutex that blocks a
process while it waits for the lock:
typedef struct {
  mutex_t m;
  bool locked;
  proc_t *blocked_list;  // list of threads waiting for this mutex
} bmutex_t;

void acquire(bmutex_t *b) {
  for (;;) {
    lock(&b->m);
    // set our thread's state to BLOCKED
    // add self to b->blocked_list
    if (!b->locked)
      break;
    unlock(&b->m);
    schedule();  // causes some other thread to run on this CPU
  }
  b->locked = true;
  // set our thread's state to RUNNABLE
  // remove self from b->blocked_list
  unlock(&b->m);
}
void release(bmutex_t *b) {
  lock(&b->m);
  // set all processes in b->blocked_list to RUNNABLE
  b->locked = false;
  unlock(&b->m);
}
A SEMAPHORE is a blocking mutex that uses an int rather than a bool:
- locked when the int == 0
- unlocked when the int > 0
To implement a semaphore, you only need to change the few lines of the
previous code that touch b->locked. Semaphores are used to allow up to N
processes (N = the initial value of the counter) to "acquire" the lock.
Some odd syntax for semaphores: acquire is traditionally called P() (from the
Dutch "proberen", to try) and release is called V() ("verhogen", to increase).
A blocking mutex is called a binary semaphore.
Here's some example code using a blocking mutex:
void writec(struct pipe *p, char c) {
  for (;;) {
    acquire(&p->b);
    if (p->w - p->r != N)   // buffer not full: keep the lock and write
      break;
    release(&p->b);         // buffer full: release and retry
  }
  p->buf[p->w++ % N] = c;
  release(&p->b);
}

char readc(struct pipe *p) {
  for (;;) {
    acquire(&p->b);
    if (p->w - p->r != 0)   // buffer not empty: keep the lock and read
      break;
    release(&p->b);
  }
  char c = p->buf[p->r++ % N];
  release(&p->b);
  return c;
}
The technique for avoiding this polling is CONDITION VARIABLES. You will most
likely need (and implement) the following functions:
- wait(condvar_t *c, bmutex_t *b): atomically release b, block until c is
  notified, then reacquire b
- notify(condvar_t *c): wake up one thread waiting on c
- notifyAll(condvar_t *c): wake up all threads waiting on c
Here is an example using a condition variable for a pipe:
struct pipe {
  ...
  condvar_t nonfull, nonempty;
};

void writec(struct pipe *p, char c)
{
  acquire(&p->b);
  while (p->w - p->r == N)       // the condition: buffer is full
    wait(&p->nonfull, &p->b);    // releases b while asleep; reacquires it on wakeup
  p->buf[p->w++ % N] = c;
  notify(&p->nonempty);
  release(&p->b);
}