Scribes: Cameron Ketcham and Nick Graham

Scheduling

Scheduling Metrics

Scheduling metrics were studied long before the first CPU scheduler was written. Big companies such as Ford wanted an optimal way to put their cars together, making sure they had all the parts at the right times.

Metrics

Task timeline for factories

[Figure: timeline of a single factory order showing arrival, start of production, first output, and completion, which define wait time, response time, and latency. Credit: Spring 08 CS111 scribe notes.]

The definition of a task is critical.

When measuring these quantities we use statistics (e.g., averages and variance).

As programmers we want to try to keep the variance small, so that users wait about the same amount of time for the same types of tasks.

Side note: Scheduling can apply to CPU (time), disk, memory (RAM), network, battery, etc. Here we will be dealing with CPU time.

Statistics for multiple tasks

Ideally we want high utilization (keep the CPU busy doing real work) and low wait times for users, but these two goals conflict.


Really Simple Scheduler - FCFS: First Come First Served (FIFO)

Example

task    arrival time    runtime    first output
A       1               5          2
B       2               2          1
C       3               9          4
D       4               4          1

Schedule

AAAAAδBBδCCCCCCCCCδDDDD

Process    wait time    response time    latency
A          0            2                5
B          4+δ          5+δ              6+δ
C          5+2δ         9+2δ             14+2δ
D          13+3δ        14+3δ            17+3δ

latency = wait time + run time
response = wait time + time to first output

utilization = real work / total work = 20 / (20 + 3δ)
average wait = 5.5 + 1.5δ
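
As a sanity check, here is a small sketch (not part of the original notes) that replays the FCFS example numerically in C. The context-switch cost δ is modeled as a concrete number (delta = 0.1 is an arbitrary choice); with delta = 0 the printed values reduce to the closed forms above.

#include <stdio.h>

struct task { char name; double arrival, runtime, first_output; };

int main(void) {
    /* the example workload, in arrival (= FCFS) order */
    struct task t[] = {
        {'A', 1, 5, 2}, {'B', 2, 2, 1}, {'C', 3, 9, 4}, {'D', 4, 4, 1},
    };
    int n = sizeof t / sizeof t[0];
    double delta = 0.1;              /* context-switch cost (assumed value) */
    double clock = t[0].arrival;     /* CPU is idle until the first arrival */
    double total_wait = 0, busy = 0;

    for (int i = 0; i < n; i++) {
        if (i > 0)
            clock += delta;          /* pay for the context switch */
        if (clock < t[i].arrival)
            clock = t[i].arrival;    /* idle until the task shows up */
        double wait     = clock - t[i].arrival;
        double response = wait + t[i].first_output;
        double latency  = wait + t[i].runtime;
        printf("%c: wait %.1f  response %.1f  latency %.1f\n",
               t[i].name, wait, response, latency);
        total_wait += wait;
        busy       += t[i].runtime;
        clock      += t[i].runtime;
    }
    printf("average wait = %.2f\n", total_wait / n);
    printf("utilization  = %.3f\n", busy / (clock - t[0].arrival));
    return 0;
}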

FCFS - good properties

Simple, low overhead, and fair in arrival order: starvation is impossible.

FCFS - downside

One long job at the front of the queue makes every job behind it wait, so the average wait time can be poor (as the comparison with SJF below shows).


Shortest Job First

The scheduler always chooses the shortest runnable task (assuming it knows the lengths of the jobs).
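
A minimal sketch (not from the notes) of that decision rule in C; the struct and function names are invented for illustration:

struct job { double arrival, runtime, remaining; };

/* Among tasks that have arrived and still have work left, pick the one
 * with the shortest total runtime; return -1 if nothing is runnable. */
int pick_sjf(const struct job *jobs, int n, double now) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (jobs[i].arrival > now || jobs[i].remaining <= 0)
            continue;                 /* not runnable: not here yet, or done */
        if (best < 0 || jobs[i].runtime < jobs[best].runtime)
            best = i;                 /* shorter job wins */
    }
    return best;
}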

Schedule for SJF

AAAAAδBBδDDDDδCCCCCCCCC

Process    wait time    response time    latency
A          0            2                5
B          4+δ          5+δ              6+δ
C          9+3δ         13+3δ            18+3δ
D          4+2δ         5+2δ             8+2δ

Processes A and B are the same as before.
utilization = real work / total work = 20 / (20 + 3δ) (same as FCFS)
average wait = 4.25 + 1.5δ (better!)

SJF - good properties

Minimizes the average wait time (4.25 + 1.5δ here vs. 5.5 + 1.5δ for FCFS).

SJF - downside

The scheduler has to know (or estimate) job lengths in advance, and a steady stream of short jobs can starve a long one.


Round Robin (RR)

Preemption increases the power of the scheduler

The unit of time that each process is allowed to run before being preempted is called a quantum. For this example we will use Q = 2. (A sketch of the mechanics appears after the RR numbers below.)

Schedule for RR

AAδBBδCCδDDδAAδCCδDDδAδCCCCC

Process    wait time    response time    latency
A          0            2                15+7δ
B          1+δ          2+δ              3+δ
C          2+2δ         10+5δ            18+8δ
D          3+3δ         4+3δ             11+6δ

RR trades utilization for wait time: more context-switch overhead, but a much lower average wait.
utilization = real work / total work = 20 / (20 + 8δ)
average wait = 1.5 + 1.5δ
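
Here is a small, self-contained sketch (not from the notes) of the round-robin mechanics on the same workload, with the context-switch cost taken as zero to keep the arithmetic integral. Because ties between a preempted task and brand-new arrivals can be broken in more than one way, the exact interleaving (and individual finish times) may differ slightly from the hand-drawn schedule above, but the mechanism is the same: run for at most Q ticks, then go to the back of the line.

#include <stdio.h>

#define Q 2   /* quantum */
#define N 4   /* number of tasks */

int main(void) {
    char name[N]      = {'A', 'B', 'C', 'D'};
    int  arrival[N]   = {1, 2, 3, 4};
    int  remaining[N] = {5, 2, 9, 4};

    int queue[64], head = 0, tail = 0;    /* simple FIFO of task indices */
    int time = 1, arrived = 0, done = 0;

    while (done < N) {
        /* new arrivals join the TAIL; head insertion could starve everyone else */
        while (arrived < N && arrival[arrived] <= time)
            queue[tail++] = arrived++;
        if (head == tail) { time++; continue; }      /* nothing runnable: idle */

        int t = queue[head++];                       /* pop the next task */
        int slice = remaining[t] < Q ? remaining[t] : Q;
        time += slice;                               /* run one quantum (or less) */
        remaining[t] -= slice;

        while (arrived < N && arrival[arrived] <= time)
            queue[tail++] = arrived++;               /* arrivals during the slice */
        if (remaining[t] > 0)
            queue[tail++] = t;                       /* preempted: back of the line */
        else {
            printf("%c finishes at time %d\n", name[t], time);
            done++;
        }
    }
    return 0;
}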


FCFS: starvation is impossible
SJF: starvation is possible
RR: starvation? YES if new tasks are put at the beginning of the queue. NO if they are put at the end.

Families of Schedulers

Priority Scheduling

Always schedule the highest priority task first.
Priority 1 beats priority 2 (a smaller number means a more urgent task).
Priorities can be external (set by user) or internal (set by the OS)
Priority scheduling has starvation problems if any priority scheme is allowed.
One solution: subtract age of task from its priority.
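
A minimal sketch (not from the notes) of that aging idea in C, using the smaller-number-wins convention above; the names and fields are invented:

/* Aging under priority scheduling: smaller effective priority wins, and
 * waiting makes a task's effective priority smaller, so no task waits forever. */
struct ptask { int priority; long enqueued_at; int runnable; };

long effective_priority(const struct ptask *t, long now) {
    long age = now - t->enqueued_at;      /* how long it has been waiting */
    return (long)t->priority - age;       /* older => smaller => runs sooner */
}

int pick_highest_priority(const struct ptask *tasks, int n, long now) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!tasks[i].runnable)
            continue;
        if (best < 0 ||
            effective_priority(&tasks[i], now) <
            effective_priority(&tasks[best], now))
            best = i;
    }
    return best;                          /* -1: nothing runnable */
}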

Multi-level priority queues

E.g., a system-level queue and an interactive-tasks queue.
A different algorithm can be used for each queue.
Batch tasks: tasks can be moved from one queue to another.
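
A minimal sketch (not from the notes) of the two-level idea in C: the system/interactive queue is always served before the batch queue (both levels are plain FIFO arrays of task ids here, though each could use its own algorithm), and moving a task between levels is just moving its id between arrays. The names are invented:

#include <stddef.h>

int next_task(const int *system_q, size_t n_system,
              const int *batch_q,  size_t n_batch) {
    if (n_system > 0)
        return system_q[0];   /* system/interactive work always goes first */
    if (n_batch > 0)
        return batch_q[0];    /* batch runs only when the upper queue is empty */
    return -1;                /* nothing runnable */
}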

Real Time Schedules

The schedulers above have no real deadline: "do what you can, when you can." Real-time scheduling adds deadlines.

Hard real time

(e.g. Nuclear power plant)
Deadline must be met no matter what
Predictability trumps performance
Disable caches
Disable interrupts

Soft real time

(e.g. video player)
Some deadlines can be missed.
Earliest deadline first is one such scheduling algorithm: always run the ready task whose deadline is closest (see the sketch after this list).
Rate-monotonic scheduler: tasks that need to run more often (shorter periods) get higher priority.
Old tasks get killed: work that has already missed its deadline (e.g., a late video frame) is simply dropped.
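
A minimal sketch of earliest deadline first (not from the notes) in C, with invented names; tasks whose deadlines have already passed are dropped, matching the "old tasks get killed" rule:

#include <stddef.h>

struct rt_task { double deadline; int ready; };

/* Among the ready tasks, run the one whose deadline is closest;
 * return -1 if nothing is left to run. */
int pick_edf(struct rt_task *tasks, size_t n, double now) {
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (tasks[i].deadline < now) {   /* already missed: kill/skip it */
            tasks[i].ready = 0;
            continue;
        }
        if (best < 0 || tasks[i].deadline < tasks[best].deadline)
            best = (int)i;
    }
    return best;
}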

Synchronization

Coordinate actions (of multiple threads) in a shared address space

Goals: maintain data consistency; be efficient (in both utilization and waiting); keep it simple and clean.

Fix race condition

Race condition: bad behavior due to code being executed in the wrong order.

Example

(Parenthesized numbers represent order of execution)
shared data:
int balance = 10;
Dr. Eggert:
void deposit(unsigned int n) {   // Eggert tries to deposit $1
	balance += n;
}
(1)  ld   balance, r1
(5)  add  n, r1
(6)  st   r1, balance

Dr. Eggert's nephew Bill:
void withdraw(unsigned int n) {  // Nephew tries to withdraw $1
	if (n <= balance)
		balance -= n;
}
(2)  ld   balance, r1
(3)  sub  n, r1
(4)  st   r1, balance

(Each thread has its own copy of register r1.)

When we're done there is $11 in the account, even though a $1 deposit plus a $1 withdrawal from $10 should leave $10: the withdrawal's store at step (4) is overwritten at step (6) by a value computed from the stale load at step (1).
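
One standard fix (a sketch, not part of the lecture example) is to make each operation a critical section with a pthread mutex, so the load/modify/store sequences above can no longer interleave:

#include <pthread.h>

static long balance = 10;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(unsigned int n) {
    pthread_mutex_lock(&balance_lock);     /* enter critical section */
    balance += n;
    pthread_mutex_unlock(&balance_lock);
}

void withdraw(unsigned int n) {
    pthread_mutex_lock(&balance_lock);
    if (n <= balance)                      /* check and update atomically */
        balance -= n;
    pthread_mutex_unlock(&balance_lock);
}

With the lock held, withdraw's check and update happen atomically with respect to deposit, so starting from $10 a $1 deposit plus a $1 withdrawal always leaves $10.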


Concepts:
OBSERVABILITY: what can you see?
SERIALIZABILITY: can everything the observer sees be explained as if the operations ran one at a time, in some serial order?

_______interface_________
|                        |
|       SYSTEM           |
|                        |
| (stuff going on in //) |
|________________________|

Within the stuff going on in parallel there are transitions between states.
The transitions are things like deposit/withdrawal.
The behavior inside the system can be messed up, confused, complicated.
But no matter what happens at the low (internal) level, an observer at the interface can explain what they see as some serial sequence of these transitions; that is serializability.