CS 111: Operating Systems (Spring 2013)

 Lecture 15: Virtual Memory (continued) and Caching Policies

Week 8: Thursday (May 23rd)

By: Tarun Teja Gotimukala and Mitchell Binning

_______________________________________________________________________________________________________________________________

Table of Contents:

1.     Page Based Memory Management (cont.)

a.     Linear Page Tables

b.     Two-Level Page Tables

c.      Page Faulting

2.     Tuning Virtual Memory Performance

3.     Caching Policies

a.     FIFO

b.     Optimal/Oracle

c.      LRU

4.     Virtual Memory and Apache

 

 

1) Page Based Memory Management:

·       Linear Page Table

o   With linear tables, the MMU (Memory Management Unit) splits a virtual address into page number and page offset components, and this page number is used to index into an array of page table entries. The actual physical address is the concatenation of the page frame number in the page table entry and the page offset of the virtual address.

o   To put it in simple words, a linear page table is used to map virtual memory to physical memory. This works by using the virtual page number as an index to look up the appropriate physical block in the page table. Each entry in the page table is called a “page table entry” and holds an integer that denotes a physical block. The leading bit in the entry denotes whether the page is actually in physical memory.

[figure omitted]

o   As the picture denotes, pages in x86 are 4 KiB, and a 32-bit virtual address is split up with the first 20 bits being the virtual page number (the index into the page table) and the last 12 bits being the offset within the page. So each process has its own page table with 2^20 entries, and since each entry is 4 bytes long, the table itself occupies 4 MiB.

phys_page p = pmap(virt_page);

phys_page pmap(int v) {
    return PAGE_TABLE[v];
}

·       Two-Level Page Table

o   x86 supports an approach that improves the memory usage of the linear page table, called the two-level page table. It is similar to the way inodes hold indirect blocks to point to more data. As the name suggests, there are two levels: just as inodes hold indirect blocks, the top-level (Hi) entries point to lower-level (Lo) page tables, which in turn map to physical blocks. The virtual address is split so that the Hi and Lo indices get an equal number of bits (10 each), and the remaining offset bits perform the same task as in the linear page table.

o   Basically, we added an extra level of indirection. Because a Lo table only has to exist for regions of the address space that are actually in use, these Hi and Lo secondary tables allow for more efficient use of memory.

[figure omitted]

o   The code to obtain the mapping is somewhat more complicated than for the linear page table.

C code to model how hardware acts:

[figure omitted]

 

c)   Page Swapping and Faulting

o   Page Swapping

[figure omitted]

o   Page Faults

            (i.e., set ip to SEGV handler before returning)

 

Mechanism for swapping pages in:

[figure omitted]

RETI then re-executes the instruction

 

§  Thrashing – If a high proportion of instructions fault, the system is unusably slow. (You thrash, you die!)

§  What if the Kernel gets big? Can you use Virtual Memory inside the Kernel?

2) Virtual Memory Tuning

Page faulting is a slow process; here are some suggestions to increase the performance:

Demand paging is a method of virtual memory management that follows the idea that pages should be brought into memory only if the executing process demands them. In this system the operating system copies a page into physical memory only if an attempt is made to access it (i.e., if a page fault occurs). No pages exist in physical memory before the process starts, and many page faults will occur until most of a process's working set of pages is located in physical memory.

The dirty bit marks a page that has been changed by the processor but has not yet been written back to storage. It acts as a flag that indicates whether the page needs to be stored: the bits in all the page table entries are cleared to 0 at frequent intervals, and whenever a page is changed its bit is set to 1. If the bit is still 0 when the page is victimized, it is a win for us, because we only need to replace it; we can eliminate the time for doing a store for that page, which increases the efficiency and performance of the system.

3. Caching Policy: How to choose the victim?

It depends: if pages are accessed at random, any policy works just as well.

We could choose any page to be the victim, but memory is normally accessed with locality of reference. (Note: this "doesn't matter" answer also assumes that every file on disk has an equal cost of access.)

We need to decide which pages to bring into the cache (i.e., physical memory).

a)  FIFO

Assume 5 virtual pages, 3 physical pages

[figure omitted]

 

    b) Optimal Policy (Oracle Policy)

[figure omitted]

                            Predict via runs

[figure omitted]

    c) LRU Policy

[figure omitted]

 

4. Virtual memory and Apache

Originally, Apache was very simple and used a very naïve implementation.

To give one an idea, it looked roughly like this:

 

[figure omitted]

 

            In order to make this faster we don’t copy the entire process; instead we copy only the page table. We then set the permissions so that the pages are not writable, thus preventing us from accidentally altering both processes’ data. When we want to alter some data we copy only the page we want to alter, and then have the parent point to one version with the ability to write to it and the child point to the other copy with the ability to write to it. All other pages remain non-writable. This is depicted in the diagram below:

[figure omitted]