CS 111 Lecture 14 Scribe Notes (Spring 2014)
Scribed by Rebecca Sousa and Christian Youngers
I. Virtual Memory
What happens in a page fault?
- Hardware traps
- regular interrupt (e.g. INT 0x80): transfer to kernel, then back to the NEXT instruction
- page fault (e.g. movl %eax, 0x59328 touching an unmapped address): transfer to kernel, then back to the SAME instruction
- Deal with page fault (mechanism):
- assume kernel has function:
off_t swapmap( proc, vpn )
{
return disk address for that page, or FAIL;
}
(swapmap should be fast, but it needn't be extremely fast; it is only consulted on page faults, which are slow anyway)
void pfault( proc, vpn, accessType, ... )
{
	off_t da = swapmap( proc, vpn );
	if ( da == FAIL )
		kkill( proc, SIGSEGV );
	else
	{
		( procv, vpnv, ppn ) = removal_policy( ... );  // pick a victim
		write page at ppn to disk at swapmap( procv, vpnv );
		update procv's page table so that vpnv -> FAULT (not present);
		read page at ppn from disk at da;
		update proc's page table so that vpn -> ppn;
	}
}
// vpn  = virtual page number
// ppn  = physical page number (the victim's frame, now reused)
// procv = victim process
// vpnv = virtual page number of the victim
- Remember: “Free memory is wasted memory!”
Two common optimizations to avoid or minimize overhead of page faults
1) Demand paging:
- Read program's first page into RAM and jump to it
- + improved startup (wait) time
- - more page faults during execution
- The cost of demand paging is illustrated below, with the following parameters:
- N = # of pages in program
- U = # of pages actually used in this run, with N >= U > 0
- C = cost of reading a page
- F = cost of faulting
                      total cost      latency
    --------------------------------------------
    no demand paging      NC             NC
    demand paging     U(C + F) - F        C
2) Dirty Bit:
- The dirty bit keeps track of when a page in RAM != what is on disk
- When evicting a victim, don't write to disk if the dirty bit is 0
- How to implement:
- Call up the hardware guys and have another bit for the dirty bit
- Or we can implement this with the software by only using the
r-w-x bits
- Always set w = 0, even if page is writeable
- When page is written, change to w = 1
Page Fault Policy
- Which page should be the victim that gets evicted from RAM (written back to disk)?
- Suppose app accesses pages at random...
- ...then any policy will do, no one policy is better than the rest
First in First Out
- Trace: sequence of vpns corresponding to the app's behavior
- Example of a trace with 5 virtual pages: 0 1 2 3 0 1 4 0 1 2 3 4
- Let's run this trace under several policies and count page faults (^ marks a fault below)
FIFO - 3 Pages Fit in RAM
      0  1  2  3  0  1  4  0  1  2  3  4
A    ^0  0  0 ^3  3  3 ^4  4  4  4  4  4
B       ^1  1  1 ^0  0  0  0  0 ^2  2  2
C          ^2  2  2 ^1  1  1  1  1 ^3  3
- 9 Page Faults
Now Imagine 4 Pages Fit in RAM
      0  1  2  3  0  1  4  0  1  2  3  4
A    ^0  0  0  0  0  0 ^4  4  4  4 ^3  3
B       ^1  1  1  1  1  1 ^0  0  0  0 ^4
C          ^2  2  2  2  2  2 ^1  1  1  1
D             ^3  3  3  3  3  3 ^2  2  2
- 10 Page Faults
- This is called Belady's Anomaly: adding RAM made the number of page faults go UP
Oracle (Best Possible Scenario)
      0  1  2  3  0  1  4  0  1  2  3  4
A    ^0  0  0  0  0  0  0  0  0 ^2  2  2
B       ^1  1  1  1  1  1  1  1  1 ^3  3
C          ^2 ^3  3  3 ^4  4  4  4  4  4
- 7 Page Faults
- Optimal!!
Least Recently Used (LRU)
      0  1  2  3  0  1  4  0  1  2  3  4
A    ^0  0  0 ^3  3  3 ^4  4  4 ^2  2  2
B       ^1  1  1 ^0  0  0  0  0  0 ^3  3
C          ^2  2  2 ^1  1  1  1  1  1 ^4
- 10 Page Faults
- The LRU mechanism can be simulated in software by keeping a timestamp of each page's last use
II. Distributed Systems & RPC
- Remote Procedure Calls : Distributed Systems ::
  System Calls : Kernels
	(API/ABI)
- Similar kind of Abstraction, BUT...
RPCs differ because
- Caller & callee don't share address space
(hard modularity for free!)
- No call by reference; only call by value
- Caller & callee may have different architectures
for example:
- x86-64 client -- little endian
- sparc64 server -- big endian
Resolving Endianness
- Sender transmits data in form recipient prefers
(polite sender)
- Sender always transmits own format to recipient
(rude sender)
- Standardize on big endian
Marshalling/Unmarshalling
	client                                            server
	little endian -> marshal -> |byte|stream|on|wire| -> unmarshal -> big endian
- Marshalling is implemented in the substrate (not part of the API)
- Stubs: code that runs in the client and glues the client to the network
- can be generated automatically from protocol description