CS 111 Scribe Notes: Lecture 7: Scheduling Algorithms (4/20/09)

by Achint Goel and Christopher Knight

(continued from last lecture)

Signals (in the operating system)

Introduction to UNIX Signals Programming

Pros

Cons

Alternatives to Signals

For example, let's examine the case of a power failure while a process is running. A dedicated file at a known location that applications can access (e.g., /dev/signal) would contain a single byte of information. This byte indicates whether the system has power (e.g., 1 for power, 0 for no power). Applications would then have to read this byte at set intervals (e.g., every 5 ms) to check for a power failure.
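A minimal sketch of this polling scheme in Python (the path /dev/signal and the 5 ms interval come from the example above; the file and function names are hypothetical):

```python
import time

POWER_FILE = "/dev/signal"  # hypothetical status file from the example

def power_is_on(path=POWER_FILE):
    """Read the single status byte: 1 means power, 0 means failure."""
    with open(path, "rb") as f:
        return f.read(1) == b"\x01"

def poll_for_power_failure(path=POWER_FILE, interval=0.005):
    """Busy-poll the status byte every 5 ms until power is lost."""
    while power_is_on(path):
        time.sleep(interval)   # the application burns time just checking
    print("power failure detected")
```

Note that every application must dedicate effort to this check, whether or not a failure ever occurs.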

Pros

Cons

It is possible to address the cons of this alternative method in two ways:

Blocking Read:

Multithreading:

But how will the signal checking thread tell the main thread that there is a power failure without signals? This all leads to the need for signals and the ability to cross virtualization barriers.
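For contrast, here is roughly what the signal mechanism buys us: the kernel asynchronously interrupts the process and runs a handler, with no polling loop and no checking thread. This sketch uses Python's signal module, with SIGUSR1 standing in for a hypothetical power-failure signal:

```python
import os
import signal

power_failed = False

def on_power_failure(signum, frame):
    """Runs asynchronously, whenever the kernel delivers the signal."""
    global power_failed
    power_failed = True

# Register the handler; the kernel now crosses the virtualization
# barrier for us instead of the process polling a status byte.
signal.signal(signal.SIGUSR1, on_power_failure)

# Simulate the kernel delivering the signal to this process.
os.kill(os.getpid(), signal.SIGUSR1)
```

The process did nothing between registering the handler and being interrupted, which is exactly what the polling alternatives could not achieve.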

(end of previous lecture)

Scheduling Overview

Scheduling Wiki

Scheduling exists in every aspect of life. It is the study of how to share limited resources across many users. In the world of computing, this used to be done manually. There would be many users seeking to run their applications on a single machine. They would have to hand their applications to the operator, who would then choose which program to run next.

Cons

Because of all these cons, we should automate!

A simple but automated approach to scheduling is to let each process decide which process runs next. This method is called cooperative scheduling and is often used in embedded applications.
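Cooperative scheduling can be sketched with Python generators: each task runs until it voluntarily yields, and a small dispatcher decides who runs next. This is an illustrative analogy, not an embedded-systems implementation:

```python
from collections import deque

def task(name, steps):
    """A task that does `steps` units of work, yielding between each."""
    for i in range(steps):
        print(f"{name} step {i}")
        yield                     # voluntarily give up the CPU

def run_cooperatively(tasks):
    """Dispatcher: run each task until it yields, round-robin style."""
    order = []
    queue = deque(tasks)
    while queue:
        name, gen = queue.popleft()
        try:
            next(gen)             # run until the task yields
            order.append(name)
            queue.append((name, gen))   # requeue behind the others
        except StopIteration:
            pass                  # task finished; drop it
    return order

schedule = run_cooperatively([("A", task("A", 2)), ("B", task("B", 1))])
```

If a task never yields, the dispatcher never runs again, which is the weakness that motivates preemption.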

Most system calls have a yield feature built in so that other processes can run while the kernel handles the syscall. This greatly increases performance and CPU utilization.

To handle processes that never yield on their own, we would like the ability to force a yield at runtime. This is done with a timer interrupt: in Linux, every 10 ms the hardware traps and the kernel takes control, as if the application had called yield(). This idea of forcing a process to yield is called preemptive scheduling.
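The timer-interrupt idea can be imitated in user space with an interval timer: the kernel delivers SIGALRM every quantum, and the handler plays the role of the forced yield(). A sketch (actual Linux preemption happens inside the kernel, not via SIGALRM):

```python
import signal

ticks = 0

def timer_interrupt(signum, frame):
    """Stands in for the kernel's timer interrupt: fires every quantum."""
    global ticks
    ticks += 1
    # A real scheduler would pick the next process to run here.

signal.signal(signal.SIGALRM, timer_interrupt)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # 10 ms quantum

# The "application": a compute loop that never calls yield() itself.
while ticks < 3:
    pass                          # interrupted by the timer regardless

signal.setitimer(signal.ITIMER_REAL, 0, 0)        # cancel the timer
```

The loop body never cooperates, yet control still returns to the handler every quantum.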

Cooperative vs. Preemptive

Cooperative Scheduling:

Preemptive Scheduling:

Better Managing the Infinite Loop Problem:

Scale of Scheduling

When scheduling is discussed in the context of computer science, it can be viewed at one of several levels: long-term scheduling (which jobs are admitted to the system), medium-term scheduling (which jobs are kept in memory), and short-term scheduling (which process runs on the CPU next).

We are primarily concerned with the short term. When examining algorithms at the short-term level, they can, for the most part, be abstracted and reapplied at the other levels. When comparing scheduling algorithms, it becomes necessary to determine the "better" choice. Initially, there is no clear way to do this, so some metrics must be defined.

Metrics for Schedulers:

When examining these metrics, it's common to look at values such as the average wait time, the worst-case response time, etc.

Scheduling Examples

(All units in seconds)

Example 1: First Come First Serve Scheduling

              Job A   Job B   Job C
Arrival Time      0       1       2
Runtime           3       2       7

Assuming that it takes ε seconds to switch between jobs, and that each letter corresponds to 1 second of that job running, the schedule looks as follows (each | marks an ε-second switch):

AAA | BB | CCCCCCC

The wait time for each job to start is:

- Job A: 0
- Job B: 2 + ε
- Job C: 3 + 2ε

This means the average wait time for a job to start is (5/3) + ε.

Assuming the same conditions as above, the turnaround time for each job is:

- Job A: 3
- Job B: 4 + ε
- Job C: 10 + 2ε

This means the average turnaround time for a job is (17/3) + ε.

The total time it takes to finish all the jobs is 12 + 2ε seconds, and the CPU does useful work for 12 of those seconds (3 for Job A, 2 for Job B, and 7 for Job C), so the utilization is 12/(12 + 2ε).
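The arithmetic above can be checked with a short simulation of non-preemptive first-come-first-serve; eps models the ε-second switch cost (the function name and job-tuple layout are my own):

```python
def fcfs_metrics(jobs, eps):
    """jobs: list of (name, arrival, runtime); non-preemptive FCFS.
    Returns (average wait time, average turnaround time)."""
    t = 0.0
    waits, turnarounds = [], []
    for i, (name, arrival, runtime) in enumerate(sorted(jobs, key=lambda j: j[1])):
        if i > 0:
            t += eps                      # context-switch cost between jobs
        t = max(t, arrival)               # CPU may sit idle until arrival
        waits.append(t - arrival)         # time from arrival to first run
        t += runtime
        turnarounds.append(t - arrival)   # time from arrival to completion
    n = len(jobs)
    return sum(waits) / n, sum(turnarounds) / n

# Example 1: A, B, C arrive at 0, 1, 2 with runtimes 3, 2, 7.
jobs = [("A", 0, 3), ("B", 1, 2), ("C", 2, 7)]
avg_wait, avg_turnaround = fcfs_metrics(jobs, eps=0.5)
```

With eps = 0.5, the results match the formulas above: average wait 5/3 + ε and average turnaround 17/3 + ε.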

Example 2: First Come First Serve Scheduling

              Job A   Job B   Job C
Arrival Time      2       0       1
Runtime           3       2       7

Assuming that it takes ε seconds to switch between jobs, and that each letter corresponds to 1 second of that job running, the schedule looks as follows (each | marks an ε-second switch):

BB | CCCCCCC | AAA

The wait time for each job to start is:

- Job B: 0
- Job C: 1 + ε
- Job A: 7 + 2ε

This means the average wait time for a job to start is (8/3) + ε.

Assuming the same conditions as above, the turnaround time for each job is:

- Job B: 2
- Job C: 8 + ε
- Job A: 10 + 2ε

This means the average turnaround time for a job is (20/3) + ε.

Example 3: Shortest Job First

If we assume the same arrival times, run times, and assumptions as Ex. 2 for Jobs A, B, and C, but change the scheduling to SJF (shortest job first), the schedule looks as follows (each | marks an ε-second switch):

BB | AAA | CCCCCCC

When Job B finishes at time 2, both Job C (runtime 7) and Job A (runtime 3) are waiting, so SJF runs the shorter Job A first.

The wait time for each job to start is:

- Job B: 0
- Job A: ε
- Job C: 4 + 2ε

This means the average wait time for a job to start is (4/3) + ε, and the average turnaround time for a job is (16/3) + ε.
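The SJF numbers can also be verified with a small non-preemptive simulation; eps models the ε-second switch cost, and the helper names are my own:

```python
def sjf_metrics(jobs, eps):
    """Non-preemptive shortest-job-first. jobs: (name, arrival, runtime).
    Returns (average wait time, average turnaround time)."""
    pending = sorted(jobs, key=lambda j: j[1])   # by arrival time
    t = 0.0
    waits, turnarounds = [], []
    first = True
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:                    # CPU idle until the next arrival
            t = min(j[1] for j in pending)
            continue
        job = min(ready, key=lambda j: j[2])     # shortest runtime wins
        pending.remove(job)
        if not first:
            t += eps                     # switch cost between jobs
        first = False
        waits.append(t - job[1])
        t += job[2]
        turnarounds.append(t - job[1])
    n = len(jobs)
    return sum(waits) / n, sum(turnarounds) / n

# Example 3 workload: B, C, A arrive at 0, 1, 2 with runtimes 2, 7, 3.
jobs = [("A", 2, 3), ("B", 0, 2), ("C", 1, 7)]
sjf_wait, sjf_turnaround = sjf_metrics(jobs, eps=0.5)
```

With eps = 0.5, this reproduces the averages above: 4/3 + ε wait and 16/3 + ε turnaround.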

Pros to SJF:

Cons to SJF:

Preemption and Scheduling

We can break each job into shorter time segments (e.g., 10 ms pieces) and then schedule the pieces sequentially (i.e., preemption + FCFS). This combination is known as round robin scheduling.

For example, if we assume the quantum is 1 second, the same arrival and run times as Ex. 2, an ε-second switch between jobs, and that each letter corresponds to 1 second of that job running, the schedule looks as follows (each | marks an ε-second switch):

B | C | A | B | C | A | C | A | CCCC

The wait time for each job to start is:

- Job B: 0
- Job C: ε
- Job A: 2ε

This means the average wait time for a job to start is ε seconds.  Although this average wait time is the best so far, the round robin approach increases the turnaround time for a job compared to FCFS or SJF.

The total time it takes to finish all the jobs is 12 + 8ε seconds, and the CPU does useful work for 12 of those seconds (3 for Job A, 2 for Job B, and 7 for Job C), which gives a utilization of 12/(12 + 8ε).
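The utilization formula makes the quantum trade-off concrete: the more switches a schedule needs, the more time goes to ε overhead. A quick check comparing this workload under FCFS (2 switches) and round robin (8 switches), with an assumed ε of 0.1 seconds:

```python
def utilization(work, switches, eps):
    """Fraction of total time the CPU spends on useful work."""
    return work / (work + switches * eps)

# The workload above: 12 s of work; FCFS needed 2 switches, RR needed 8.
fcfs_util = utilization(12, 2, eps=0.1)
rr_util = utilization(12, 8, eps=0.1)
```

Round robin pays for its low wait times with extra switches and therefore lower utilization.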

If you were to shrink the quantum for a job, then:

- jobs get to start (and respond) sooner, so wait times improve, but
- there are more context switches, so more total time is lost to ε and CPU utilization drops.

In Linux, the typical quantum for a job is around 10 ms. This works well primarily because human perception is not sensitive enough to notice a delay of this length, which gives user interaction a "snappy" feel.