CS111: Lecture 7 Scribe Notes
Scheduling Algorithms
October 21, 2013
Created By: Jose Vega
Scheduling
- We need
- Scheduling Mechanism
- Scheduling Policy
First, we focus on the scheduling mechanism.
- A simple implementation
- Use yield(); to give up CPU to somebody else.
- We can think of yield(); as a no-op system call.
Note that a yield() call is relatively expensive compared to a cheap
instruction like sum ^= i;. Yielding on every iteration of a loop lets
us schedule other processes the way we want, but it slows the loop down
badly. Instead, we will yield only every once in a while:
Polling Versus Busy Waiting
What we would like
- Yield, but tell the OS we are waiting for a particular device D to
become ready.
- For each process, the kernel scheduler records what it's waiting for.
(process table)
- Efficient way to wake up processes
- "Device" can be a disk controller, alarm clock, etc.
- Policy only worries about unblocked processes
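A hypothetical sketch of such a process-table entry and wake-up routine (field names are illustrative, not Linux's actual ones):

```c
enum proc_state { RUNNABLE, BLOCKED };

struct process {
    int pid;
    enum proc_state state;
    int waiting_on;            /* device the process is blocked on, or -1 */
};

/* When device d becomes ready, wake every process blocked on it.
   The scheduling policy then considers only RUNNABLE entries. */
void device_ready(struct process table[], int nproc, int d) {
    for (int i = 0; i < nproc; i++)
        if (table[i].state == BLOCKED && table[i].waiting_on == d) {
            table[i].state = RUNNABLE;
            table[i].waiting_on = -1;
        }
}
```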
Problem
- Uncooperative processes without yield();
- We are assuming cooperation
- Sometimes we can't assume this
(ex: SEASnet can't assume students will write programs that yield)
- Possible solution: have the compiler inject yields
- Compilers aren't very good at this - code ends up yielding too often.
We want an approach that doesn't assume cooperation
- Linux yield equivalent
- Any system call that doesn't do anything (context switch)
ex) sleep(0) or getpid()
- So assuming cooperation may be okay when programs have a few system calls.
- What about a loop without system calls?
- Compiler yield injection for loops, gotos, and functions
(yield in each function to catch recursion)
- Change the machine
- Every (say) 10ms, the machine inserts a system call yield();
- Motherboard has timer interrupts.
- MB sends signal periodically via the bus to CPU
- CPU traps
- Enters kernel
- Kernel can just return, or return to some other process.
- We call this "Preemptive" Scheduling
+ No runaway processes
+ No yields in our code
- Complicates critical sections (ex: file renaming)
- Any pair of instructions can have an arbitrary delay inserted
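On each timer tick the kernel can return to the interrupted process or pick another one. A toy round-robin chooser under that model (the runnable[] array is an assumption of this sketch, not a kernel structure from the lecture):

```c
/* Return the next runnable process after `current`, round-robin;
   fall back to `current` if nothing else is runnable. */
int schedule_next(int current, const int runnable[], int nproc) {
    for (int k = 1; k <= nproc; k++) {
        int p = (current + k) % nproc;
        if (runnable[p])
            return p;          /* preempt in favor of process p */
    }
    return current;
}
```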
Does preemptive scheduling solve infinitely looping code?
- Doesn't solve the issue, since the CPU will keep coming back to it.
- May add a time limit, after which we send a signal to kill the process.
- Could kill off processes that are actually making progress.
- Suppose you have 8 cores and 100 processes
What's the mechanism now?
- We assumed just one core
- We need to prevent overlap
- Otherwise, the mechanism is not too different
Scheduling Policy
Sample Issues
- Long Term
- Which processes will be admitted to the system?
- Admitting everything eventually swamps the CPU
- In Unix, this is asked when calling fork();
- Fork may be denied for lack of resources (fork() < 0)
- Medium Term
- Which processes' memory is in RAM versus on secondary storage (disk, etc.)?
- Short Term
- If we have more runnable processes than CPUs, which processes get CPUs?
Real Time Scheduling
- "Hard" real time scheduling
- Ex: car's braking system
- Hard deadlines cannot be missed
- Missing a deadline is considered an error as serious as a system crash
- Predictability is more important than performance
- Avoid timer interrupts
- Use polling or even busy waiting
- Since cache is not predictable, disable it
- "Soft" real time scheduling
- Ex: video playback
- Some deadlines can be missed
- There is a cost to missing a deadline, but it is much less
than missing a hard deadline
- Possible algorithm for multiple deadlines
- Earliest deadline first
- Simple, but needs preemption to prevent one deadline from holding
the CPU too long
- Problem: now an overloaded CPU will do everything at once, but poorly
We might implement priorities
- Rate-monotonic scheduling: assign priority to more frequent tasks
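A minimal sketch of the earliest-deadline-first choice (the task struct is illustrative; preemption and rate-monotonic priorities are not modeled here):

```c
struct task { int id; long deadline; };

/* Among n pending tasks, return the index of the one whose
   deadline is soonest. */
int edf_pick(const struct task tasks[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (tasks[i].deadline < tasks[best].deadline)
            best = i;          /* earlier deadline wins */
    return best;
}
```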
Scheduling Metrics
- Schedulers tend not to work well
- But how do we measure how well the scheduler works?
- Scheduling metrics
- We use the example of a car factory
- Aggregate Statistics
- Utilization (max 1.0): fraction of factory time spent doing useful work
- Throughput: # of jobs per second
- Fairness: every admitted job will eventually finish
(This is a simplified version of fairness)
- Sometimes, these are competing goals
Ex: a huge order for Model Ts comes in
- Now we can't take other orders if we want to maximize
our utilization, since context switching costs will drop it.
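The competition between these goals shows up in a one-line model, assuming each switch between jobs wastes time t (the formula is this sketch's, not the notes'):

```c
/* Utilization when n jobs whose useful work totals `work` are run
   with (n - 1) context switches costing t each. */
double utilization(double work, int njobs, double t) {
    return work / (work + (njobs - 1) * t);
}
```

One long job gives utilization 1.0; splitting the same amount of work across many small orders drives it down.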
Simple fair example
- First come, first served (FCFS): 't' represents context-switch time
- While fair, this policy has high wait times
- Shortest job first (SJF)
+ Average wait time is less
- Long jobs can starve (unfair)
- Run times need to be known in advance
- Using SJF on the previous example
- Avg wait time = 4.25 + 1.5t
- Throughput goes down
- FCFS + Preemption
Work order will be: A-B-C-D-A-B-C-D-A-C-D-A-C-D-A-C-C-C-C-C
- Run time/response time up, worse than FCFS, SJF
- Better for interactive systems
- Different job classes
- Sysadmins
- Faculty
- Students
Priority-Based Scheduling
- Two kinds
- External - set by users / admins
- Internal - set by scheduler
- "Niceness" in Linux
- Greater niceness means a process is willing to wait
- nice(3) increases niceness by 3 (for ordinary users, the increment must be positive)
- Only root can decrease niceness