Complete requests in the order they arrive. The distance traveled by the head (seek time) depends on the position of the head before the access.
The seek distance is the distance between the current location of the disk head and the location of the i-th block we are trying to access. Since that distance depends on the effectively random location of the head relative to the next block to be accessed, the average seek time is calculated as follows:
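The averaging formula itself is not reproduced in these notes. A standard back-of-envelope model (an assumption, not from the notes) treats the head position h and target block b as independent and uniform over the disk, which gives an expected seek distance of E|h − b| = 1/3 of the disk's span. A quick Monte Carlo sketch:

```python
import random

def avg_seek_distance(trials=200_000, seed=0):
    """Estimate E|h - b| for head h and target block b uniform on [0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += abs(rng.random() - rng.random())
    return total / trials

# Analytically E|h - b| = 1/3: on average the head travels a third of the disk.
print(avg_seek_distance())
```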
Complete requests depending upon which requests have the shortest seek time.
Suppose there is an elevator that travels in only one direction at a time, up or down, and stops to pick individuals up or let them off only while maintaining that direction. Apply this to disk scheduling by handling only requests for blocks that lie in the same direction of travel as the head. Like SSTF, but restricted to the head's current direction of travel.
Scheduling system where the head handles requests only while traveling in one direction (like an elevator that only goes up).
Applying "dallying" (waiting for multiple similar requests before acting) to disk arm movement. Motivating Example:
Given two processes A and B which request the following blocks:
A: 10, 11, 12, 13, 14
B: 500, 501, 502, 503, 504
Request | Head Position | Head Travel Distance |
---|---|---|
(start) | 0 | |
A: 10 | 10 | 10 |
B: 500 | 500 | 490 |
A: 11 | 11 | 489 |
B: 501 | 501 | 490 |
... | ... | ... |
Without dallying, the head incurs a large amount of seek time because of the travel required to service requests from A and B in alternating order. With dallying, we receive a request but then wait ~2 ms for another request from process A for a block near the first. This way we can service the two requests from A before sending the head off to process B's request, limiting travel time and I/O latency.
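The payoff can be sketched with a toy simulation of total head travel, using the block numbers from the example above (the batching is idealized; real anticipatory schedulers decide per-request whether to wait):

```python
def head_travel(requests, start=0):
    """Total distance the head moves servicing requests in the given order."""
    pos, total = start, 0
    for block in requests:
        total += abs(block - pos)
        pos = block
    return total

a = [10, 11, 12, 13, 14]
b = [500, 501, 502, 503, 504]

# Without dallying: requests from A and B arrive and are serviced alternately.
interleaved = [blk for pair in zip(a, b) for blk in pair]
# With dallying: wait briefly so each process's nearby requests are batched.
batched = a + b

print(head_travel(interleaved))  # → 4416
print(head_travel(batched))      # → 504
```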
In short: our file systems will probably be adapted to work on flash-based disks in the next 5 years.
An on-disk (or on-SSD) data structure that provides an abstraction of a set of files.
Idea: Imagine disk as a giant array:
Directory | File 1 | /// | File 2 | ///////// | File 3 | ///// | ... |  (/// = free space)
File name (16 bytes) | Index of sector (16 bits) | File Length (16 bits) |
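The entry layout in the table can be expressed concretely with `struct` (the little-endian byte order and the name/sector values are assumptions for illustration):

```python
import struct

# One directory entry: 16-byte name, 16-bit starting sector, 16-bit length,
# following the table above; '<' (little-endian) is an assumption.
DIR_ENTRY = struct.Struct("<16sHH")

entry = DIR_ENTRY.pack(b"notes.txt".ljust(16, b"\0"), 42, 3)
name, sector, length = DIR_ENTRY.unpack(entry)
print(DIR_ENTRY.size)  # → 20 bytes per entry
```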
The RT-11 file system by Digital Equipment Corporation (DEC) had almost the same implementation!
Boot Sector | Super Block | File Allocation Table (FAT) | Data |
Array of integers where each entry corresponds to a block number on the disk. Entries store the following integers:
Value | Meaning |
---|---|
-1 | Free Block |
0 | EOF has been reached |
N | Index of next block in file |
We use the FAT to find where successive parts of the file are, because each entry points to the next one. So we chain to get all the blocks of a file until we reach a block that is EOF. The FAT is cached in memory and is only 1/2000 of the file system's size.
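The chaining can be sketched in a few lines, using the sentinel values from the table above (note this is the notes' simplified convention; real FAT uses 0 for free clusters and an all-ones value for end-of-chain):

```python
# Sentinels from the table above: -1 = free block, 0 = EOF, N = next block.
EOF, FREE = 0, -1

def file_blocks(fat, first_block):
    """Chain through the FAT, returning all block numbers of a file."""
    blocks = []
    block = first_block
    while True:
        blocks.append(block)
        if fat[block] == EOF:
            return blocks
        block = fat[block]

# Toy FAT: a file starts at block 2, continues at 5, and ends at block 7.
fat = [FREE, FREE, 5, FREE, FREE, 7, FREE, EOF]
print(file_blocks(fat, 2))  # → [2, 5, 7]
```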
Handling Sub-directories and File Names
Directories are data and are stored in blocks as an array. Directory entries consist of:
File name (11 bytes) | File size in bytes (32 bits) | File's first block location (16 bits) |
Inode idea: data structure on disk that describes a file independent of any directory
Owner |
Date |
Block 0 |
Block 1 |
Block 2 |
Block 3 |
... |
Block 9 (indirect block) |
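Mapping a file's logical block number through this inode layout is a simple two-case lookup (a sketch assuming 9 direct pointers plus one singly indirect block, per the field list above; the helper name and block numbers are hypothetical):

```python
def lookup_block(direct, indirect, logical_index, n_direct=9):
    """Map a logical block number in a file to a disk block number.

    direct:   the direct block pointers stored in the inode (blocks 0-8).
    indirect: the pointers stored in the indirect block (block 9).
    """
    if logical_index < n_direct:
        return direct[logical_index]
    return indirect[logical_index - n_direct]

direct = [100, 101, 102, 103, 104, 105, 106, 107, 108]  # direct pointers
indirect = [5000, 5001, 5002]                           # via the indirect block
print(lookup_block(direct, indirect, 3))   # → 103
print(lookup_block(direct, indirect, 10))  # → 5001
```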
FFS used 8 KiB blocks and 32-bit (4-byte) block pointers, so one indirect block holds 8192 / 4 = 2048 pointers. Calculating the size of the largest file permitted:
(9 + 2048) × 8 KiB ≈ 2^24 bytes ≈ 16 MB file size limit using 9 direct blocks and one indirect block.
Solution to the file size limit:
Use further indirection by adding a doubly indirect pointer to the inode, which points to a block of pointers to indirect blocks (a tree of indirect blocks).
- This allows for 2048^2 × 8 KiB = 2^35 bytes = 32 GiB file sizes.
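The arithmetic from both limits can be checked directly (constants follow the text above):

```python
BLOCK = 8 * 1024               # 8 KiB blocks
PTR = 4                        # 32-bit block pointers
PTRS_PER_BLOCK = BLOCK // PTR  # 2048 pointers fit in one indirect block

# 9 direct blocks plus one singly indirect block:
single = (9 + PTRS_PER_BLOCK) * BLOCK
# Adding a doubly indirect block (a tree of indirect blocks):
double = (9 + PTRS_PER_BLOCK + PTRS_PER_BLOCK**2) * BLOCK

print(single // 2**20)  # → 16 (MiB)
print(double // 2**30)  # → 32 (GiB)
```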
Original Unix - No way to rename files; instead we have:
link(old, new) //adds an entry for the file in the destination directory
unlink(old) //removes the entry from the old directory
How does the OS know when to delete a file once unlink is called? A reference counter in the inode tracks the number of directory entries pointing to the file.
Once this counter is zero, the file can be deleted from the disk.
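Rename can then be emulated with the two calls, as early Unix programs did (a sketch using POSIX hard links; note it is not atomic, so a crash between the calls leaves the file with two names):

```python
import os
import tempfile

def rename_via_link(old, new):
    """Emulate rename the original-Unix way: link, then unlink."""
    os.link(old, new)   # new directory entry; inode refcount goes 1 -> 2
    os.unlink(old)      # drop the old entry; refcount back down to 1

d = tempfile.mkdtemp()
old, new = os.path.join(d, "a"), os.path.join(d, "b")
open(old, "w").close()
rename_via_link(old, new)
print(os.path.exists(new), os.path.exists(old))  # → True False
```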
Unix and BSD:
The fsck utility is run after reboot to mark files with a reference count of 0 as free.