Winter 2016, Lecture 14

File System Robustness

Written by Aditya Kotte and Ritam Banerjee


What's the Worst That Could Happen?

Gzip is very good at compressing files, but it has a bug…

ifd = open("foo", O_RDONLY);
ofd = open("foo.gz", O_WRONLY|O_CREAT|…);
read(ifd, buf, bufsize);    /* read (and compress) the input */
write(ofd, buf, …);         /* … = number of compressed bytes */
if (close(ifd) != 0) error();
if (close(ofd) != 0) error();
                        	<--- crash?
if (unlink("foo") != 0) error();
exit(0);

A system crash at that point could cause you to lose all your data. How?

We were being so careful about closing all the files.


So what’s the bug?
Let’s go on to file system layout....

BSD File System Layout

(Figure 1 below)

The free block bitmap keeps track of which blocks are available.
This is how we grow files, so it must be fast.
It costs 1 bit per block of data. With 8 KiB blocks, one bit stands for 8 × 8192 = 65,536 bits of data, a 65,536:1 ratio. Very good compression!

At that ratio the whole bitmap is plausibly cacheable in RAM.
This is a classic performance technique.

Performance Issues with File Systems


Overhead of reading from disk:
  seek time: time for the read head to move to the right track (~10 ms)
+ rotational latency: waiting for the sector to spin under the head, a physical constraint of the drive (~8 ms)
+ transfer from disk + copy to RAM from cache (~0.1 ms)
≅ 18 ms

18 ms is a long time for a computer… enough time to run 8 million instructions!

File systems try hard to avoid using secondary storage for these reasons.

I/O Metrics


The main metrics are latency (time from issuing a request to its completion) and throughput (requests completed per unit of time).
These often offset each other: an improvement in one comes at a cost to the other.
File systems that have good latency often have poor throughput, and vice versa.

How to improve File System performance