Last Time: Performance, Organization
This Time: Implementation
Next Time: Robustness
Operate transparently for most operations
Will behave as if operating directly on the target file
Dead symbolic links arise when the target file is deleted, since the symbolic link is not automatically updated or removed
Contain text that is interpreted by the OS as a path to the target file or directory
Has a different inode # than its target
Relative to its containing directory
Can cycle
One way we can detect cycles is by adding a max cycle limit before the lookup fails
ln -s a b
ln -s b a // a points to b and b points to a
ln -s x x // self loop
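The kernel's max-depth approach to cycle detection can be observed from user space: resolution fails with ELOOP once the limit is hit. A minimal Python sketch, assuming a POSIX system (the temporary directory and link names are illustrative):

```python
import errno, os, tempfile

d = tempfile.mkdtemp()
a, b = os.path.join(d, "a"), os.path.join(d, "b")
os.symlink(a, b)   # b -> a
os.symlink(b, a)   # a -> b: now a and b point at each other

try:
    os.stat(a)     # the kernel follows links only up to a fixed limit
    looped = False
except OSError as e:
    looped = (e.errno == errno.ELOOP)  # "Too many levels of symbolic links"
print(looped)  # True
```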
Text editors other than Emacs can change /etc/passwd. Because they won't create the symbolic link, Emacs can't use it to determine whether other editors are editing the file. Emacs solves this by calling stat('/etc/passwd') to determine its modification time. If the file was modified after Emacs opened it, another editor is editing the file as well, so Emacs will warn you before rewriting the buffer.
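The stat-based freshness check can be sketched as follows; the helper name is mine, and the simulated "other editor" just bumps the mtime with utime:

```python
import os, tempfile, time

def modified_since_open(path, mtime_at_open):
    # Another editor touched the file if its mtime moved past what we recorded.
    return os.stat(path).st_mtime > mtime_at_open

fd, path = tempfile.mkstemp()
os.close(fd)
opened = os.stat(path).st_mtime           # record mtime when the file is opened
os.utime(path, (time.time() + 10,) * 2)   # simulate another editor's later write
print(modified_since_open(path, opened))  # True: warn before rewriting the buffer
```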
Emacs can crash without deleting the symlink. Emacs solves this by checking the pid.
Emacs can go into an infinite loop while holding the lock, so the process still exists but Emacs will never delete the symlink.
Another application can remove the lock file or change what it points to.
The filename after the last slash is at least 254 characters, so Emacs can't prepend the '.#'.
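The lock scheme above can be approximated in Python. This is a sketch, not Emacs's actual code: the '.#' prefix and the user@host.pid link target follow the convention in the notes, but the helper names are mine, and the pid check uses signal 0:

```python
import getpass, os, tempfile

def lock_path(path):
    d, name = os.path.split(path)
    return os.path.join(d, ".#" + name)   # fails if name is near NAME_MAX

def acquire(path):
    # symlink creation is atomic; the link's *target text* records the holder
    target = "%s@%s.%d" % (getpass.getuser(), os.uname().nodename, os.getpid())
    os.symlink(target, lock_path(path))   # dangling on purpose

def holder_alive(path):
    pid = int(os.readlink(lock_path(path)).rsplit(".", 1)[1])
    try:
        os.kill(pid, 0)   # signal 0: existence check only, no signal sent
        return True
    except ProcessLookupError:
        return False      # stale lock left by a crash; safe to remove

fd, f = tempfile.mkstemp()
os.close(fd)
acquire(f)
print(holder_alive(f))    # our own pid is alive, so True
```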
Directory entry that associates a name with a file on the file system
Multiple hard links can be created for the same file
Hard links will not dangle (map to nonexistent file)
Hard links cannot loop
Complicate the user model. If files are hard linked, removing one of them does not reclaim the space allocated to the file. The user has to keep track of which files are linked together in order to free space.
No need for directory entries. If files could only have one name, that information could be stored directly in the inode
Create the possibility of cycles, since directories that link to each other circularly would be impossible to traverse.
Indicate how many files are hard linked to a particular inode, including the original file
Once the link count reaches 0, the inode and the corresponding data are freed so the disk can allocate new inodes and data to those regions.
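Link-count behavior is directly observable through stat; a small demonstration in a throwaway temporary directory:

```python
import os, tempfile

d = tempfile.mkdtemp()
f, g = os.path.join(d, "f"), os.path.join(d, "g")
open(f, "w").close()
print(os.stat(f).st_nlink)   # 1: one directory entry names this inode
os.link(f, g)                # hard link: same inode, a second name
print(os.stat(f).st_nlink)   # 2
os.unlink(f)                 # the inode survives: g still names it
print(os.stat(g).st_nlink)   # 1 again; data is freed only when it hits 0
```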
There can be a bug in the file system where the link is removed but the link count was not decremented. The FS will now leak blocks.
The link count can be decremented when a link was not removed. This creates a dangling pointer, which has undefined behavior.
Link count overflow occurs when more links are created than the integer holding the count can represent
Each directory contains at least two entries: . and .. (this is simpler)
Omit . and .. from disk since namei "knows" about them (this is more efficient)
$ ln / /usr/bin/oops // this creates a cycle
Fix 'find' so that it detects loops and skips them
Give 'find' a limit of 1000
Two-iterator solution (really slow!)
Don't allow cycles (POSIX/Unix/Linux)
Dynamic cycle detection (this is usually too slow - nodes are on disk, and cycles can be of unbounded length)
No hard links to directories (other than . and ..)
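The two-iterator idea is Floyd's "tortoise and hare": one pointer advances one step per iteration, the other two, and they meet iff there is a cycle. A sketch on a toy in-memory graph (on a real FS every `next_node` call is a disk read, which is why the notes call it really slow):

```python
def has_cycle(start, next_node):
    # Floyd's two-iterator check: O(1) memory, but it re-walks the
    # structure repeatedly - expensive when nodes live on disk.
    slow = fast = start
    while fast is not None and next_node(fast) is not None:
        slow = next_node(slow)
        fast = next_node(next_node(fast))
        if slow == fast:
            return True
    return False

# toy graphs: 0 -> 1 -> 2 -> 0 (cyclic) and 0 -> 1 -> 2 -> end (acyclic)
print(has_cycle(0, {0: 1, 1: 2, 2: 0}.get))     # True
print(has_cycle(0, {0: 1, 1: 2, 2: None}.get))  # False
```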
Store short symlink targets in the inode itself
Two block sizes (short blocks are 1/4 the size)
Boot Sector - Stores machine code to be loaded into RAM to boot the OS
Super Block - Contains the file system metadata and defines the file system type, size, status, and other information
Block Bitmap - Used to track allocated blocks. Usually a block of bits that indicates whether a particular disk block is free or in use
Inode Table - Contains a listing of all the inodes of a file system
Units in which reads and writes are performed on the file system.
Usually 512 bytes long
If it is too big, then bus time will be wasted grabbing sectors off the disk
Group of sectors
Usually 8192 bytes (16 sectors) long
Bigger block sizes increase internal fragmentation, causing smaller files to waste space
Smaller blocks allow flexibility
Most file systems pick a bigger block size than a sector size since it gives extra throughput
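The fragmentation side of this trade-off is simple arithmetic; a sketch using the sizes from the notes (512-byte sectors, 8192-byte blocks):

```python
SECTOR = 512
BLOCK = 8192          # 16 sectors, as in the notes

def blocks_needed(file_bytes, block_size):
    return -(-file_bytes // block_size)   # ceiling division

def wasted(file_bytes, block_size):
    # Internal fragmentation: space allocated minus actual content.
    return blocks_needed(file_bytes, block_size) * block_size - file_bytes

print(wasted(100, BLOCK))    # a 100-byte file strands 8092 bytes in its block
print(wasted(100, SECTOR))   # with sector-sized blocks, only 412 bytes
```

Larger blocks pay this per-small-file cost in exchange for fewer seeks and more data moved per request.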
Creates smaller virtual disks out of one larger physical disk
Take physical drives and divide them into an array of blocks
Each partition can be treated independently
Represents a file
Talks about a file regardless of how it is accessed
Size - in bytes
File type - Directory, regular file, symlink, etc.
Permissions - describes user/group/other access to the file
Link Count - number of hard links to the file
Timestamp - time last modified and time last accessed
Address of data blocks - pointers to the block that store the file's actual contents
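The fields above map naturally onto a struct; a simplified in-memory sketch (field names and sizes are illustrative, not any real on-disk format):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Inode:
    size: int                 # bytes
    file_type: str            # "regular", "directory", "symlink", ...
    mode: int                 # permission bits, e.g. 0o644 (user/group/other)
    nlink: int                # hard-link count
    mtime: float              # time last modified
    atime: float              # time last accessed
    block_addrs: List[int] = field(default_factory=list)  # data block pointers

# Note what is *absent*: no name. Names live in directory entries,
# which is exactly why multiple hard links per inode are possible.
ino = Inode(size=4096, file_type="regular", mode=0o644, nlink=2,
            mtime=0.0, atime=0.0, block_addrs=[1093, 1094])
print(ino.nlink)  # 2
```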
All parts of the file name except the '/'
usr, bin, and grep are file name components in /usr/bin/grep
aka pathnames
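Splitting a pathname into its components is a plain string operation; a sketch of the /usr/bin/grep example:

```python
path = "/usr/bin/grep"
# drop empty strings: the leading '/' (and any doubled slashes) produce them
components = [c for c in path.split("/") if c]
print(components)  # ['usr', 'bin', 'grep']
```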
Operate above the file name
Abstract
$ shred file
Try to shred the file and delete all of its contents permanently
This command will overwrite the content of the file three times with random data
Sometimes there can still be traces of the old data
There are certain devices which can read the ghosts of the old track
If a malicious person knows what the new data is (zeroes), they can easily find old data
Even if we have overwritten with enough data, the file can still be recovered in a log based file system
When you do a write, the actual data is not modified
The write data is posted in a log and does not actually update the data until much later
Best thing to do is delete the entire filesystem
Melt the device
Physically shred the device
Degauss the device
Overwrite with random data