CS 111 Lecture 16 Scribe Notes (Fall 2013)

by Christopher Ishida for a lecture by Professor Paul Eggert, November 27, 2013


RPC (Remote Procedure Call)


The caller's and callee's machines can be incompatible (different byte order, word sizes, data layouts):


Need a network convention:
  1. Always little endian
  2. Always big endian
  3. Use a flag saying which byte order the message uses (sometimes done for performance)
Big endian tends to win ("network byte order")

Problem of Marshalling (Serializing, Pickling)


Caller                                    Callee
DATA       ---- marshalled ---->          DATA

Data must be disassembled into a flat byte stream, sent over the network, and reassembled on the other side exactly as it was.

Stub Generation:

Caller wants to say:
  stat(fd, &st);        /* fd: int, st: struct stat * */

Client stub (glue code):
  int stat(int fd, struct stat *st) {
    pickle fd;
    ship fd over the net;
    wait for the answer;
    unpickle the struct stat;
    *st = (unpickled version);
    return (unpickled return value);
  }

Server code:
  int stat(int fd, struct stat *p) {
    if (tab[fd] ... )
      *p = st;
  }

rpcgen - generates glue code from protocol spec

RPC failure modes:


  1. Messages get lost
  2. Messages can get corrupted
  3. Network might go down, or be slow (and you can't tell whether it's down or just slow)
  4. Server might go down, or be slow (e.g. stuck in a loop)
  5. Client might go down, or be slow (e.g. stuck in a loop)
Messages get lost:
If no response arrives, the client won't know whether the server got the request.
Two ways to deal with this:
  1. Timeout: if you don't hear back within a certain amount of time, assume the message was lost and resend it.
  2. Sequence number: a unique number on each message, so the receiver can tell when messages are duplicated or out of order (TCP does this).
Messages get corrupted:
Checksum the message (at the protocol level, end to end).
Resend if the checksum doesn't match.
This isn't perfect, but it works decently.

Example protocol: X


X window system (coloring one pixel blue):

Client:                               Server:
send x (coordinate)   ------------>   read x
send y (coordinate)   ------------>   read y
send "blue"           ------------>   read "blue", color the pixel blue
read reply            <------------   send "OK"

This is slow! (One pixel per round trip.)

How can we speed it up?

Send requests in parallel (pipelining): the client issues many requests without waiting for each reply, so the network latency is paid once rather than once per pixel.

NFS Network File System


Unix file syscalls on wheels (read, write, rename, mkdir, etc.; not dup, pipe, etc., which only manipulate local state)

NFS client:

process p ---> kernel ---> p's process descriptor ---> file descriptor ---> stub for the NFS protocol ---> read request sent over the network
The NFS protocol's requests look like Unix file syscalls.
Aside: SEASnet keeps records of previous states, a history of files (accessible via cd .snapshot)


Suppose:
(cat a & mv b c) > b
   Does this work in NFS?
   Yes: the client still holds a file handle, and renaming doesn't change the file's inode number.
(cat a & rm b) > b
   A naive implementation would require the NFS server to keep track of client state.
NFS model: the server doesn't care about client state.
The NFS server is "stateless":
if the server crashes and reboots, clients won't care (except for performance).
Suppose:
client 1: cat a > b    (the WRITE request will fail)
client 2: rm b         ("stale file handle")
   then errno == ESTALE;

NFS synchronization issues

Process 1:                         Process 2:
gettimeofday(...);
                                   1. write(fd, buf, 1024);
2. read(fd, buf, 1024);
                                   gettimeofday(...);
Latency and caching issues: the timestamps show that the write (1) finished before the read (2) started, yet the read may still return stale data from the client's cache.
NFS (in general) does not have read-after-write consistency.
However, it does have close-to-open (open-after-close) consistency: close() must push pending writes to the server, and open() revalidates the cache.
   close() and open() are therefore considered heavyweight operations.