
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. Round-robin mutual exclusion

Consider the following algorithm for mutual exclusion in a synchronous message-passing system with n processes, communicating via an arbitrary (but connected) network. Group the synchronous rounds into phases of 2n rounds 0..2n-1, and have each process i=0..n-1 enter its critical section at the start of round 2i and leave at the start of round 2i+1 within each phase. Clearly no two processes are ever in the critical section at the same time.
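
To make the schedule concrete, here is a minimal sketch of what a single process runs (in Python; the function and callback names are made up for illustration, and the synchronous round counter is assumed to be driven by the underlying model). Each process just counts rounds and holds the critical section exactly during round 2i of each 2n-round phase.

def round_robin(i, n, rounds, critical_section):
    # Process i of n: within each phase of 2n rounds it is in the critical
    # section only during round 2i, so no two processes ever overlap.
    for r in range(rounds):
        if r % (2 * n) == 2 * i:
            critical_section(i)   # entered at the start of round 2i,
                                  # left at the start of round 2i + 1
        # the remaining rounds of the phase are ordinary, non-critical work

For example, round_robin(2, 5, 20, lambda i: print("process", i, "in CS")) runs two phases of 10 rounds and enters the critical section twice, once per phase.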

  1. What happens to this algorithm if we run it in an asynchronous system using the alpha synchronizer?
  2. What happens to this algorithm if we run it in an asynchronous system using the beta synchronizer?

1.1. Solution

  1. In the alpha synchronizer, we may have many processes in their critical sections simultaneously. If I am at distance k from you, I can be up to k rounds ahead of you. So by carefully arranging the process ids in a high-diameter network (e.g. on a line), we can have a constant fraction of the processes in the critical section at once: for example, placing process i at position 2i on a line (for i < n/2) lets process i run exactly 2(i-j) rounds ahead of process j, which puts both in their critical sections at the same time, so half the processes can overlap (see the sketch after this list).
  2. In the beta synchronizer, synchronization is global: no process can enter round i+1 until every process has signaled that it is done with round i. So in this case we still get mutual exclusion.
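
The arithmetic behind the line arrangement in part 1 can be checked directly. The following sketch (Python, not from the original notes; placing process i at position 2i is just one convenient choice) counts how many processes are in their critical sections at the same instant in an execution where local round numbers increase by one per hop along the line, which the alpha synchronizer permits.

def in_cs(i, local_round, n):
    # Process i is in its critical section during round 2i of each 2n-round phase.
    return local_round % (2 * n) == 2 * i

def simultaneous(n):
    # Put process i (for i < n/2) at position 2i on the line, and consider a
    # moment when the process at position p has local round 2n + p; neighbouring
    # positions differ by one round, which the alpha synchronizer allows.
    return sum(in_cs(i, 2 * n + 2 * i, n) for i in range(n // 2))

print(simultaneous(10))   # 5: half of the processes are in the critical section at once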

2. Atomic snapshots with enormous registers

Suppose that we modify the single-scanner snapshot of Riany et al. (see AtomicSnapshots) so that each updater, instead of storing high and low fields in each memory cell, stores a list of all (seq, value) pairs it has ever written. So the new Update procedure would look like this:

procedure Update(value):
    seq ← curr_seq
    memory[i] ← append(memory[i], (seq, value))

  1. Modify the Scan procedure to work with this Update procedure and argue that it works with a single scanner.

  2. Does your modified Scan procedure produce linearizable snapshots with multiple scanners? Why or why not?

2.1. Solution

Here is the revised Scan procedure:

procedure Scan():
    curr_seq ← curr_seq + 1
    for j ← 0 to n-1:
        h ← memory[j]
        view[j] ← value in h with largest seq < curr_seq
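
For concreteness, here is an executable sketch of the modified algorithm in Python (not from Riany et al.; the class and method names are invented for illustration, and everything here assumes a single scanner). Each memory cell is the full history of (seq, value) pairs, and Scan picks from each cell the value tagged with the largest sequence number smaller than the scan's own sequence number.

class UnboundedSnapshot:
    # Single-scanner snapshot in which each cell keeps its whole write history.
    def __init__(self, n):
        self.n = n
        self.curr_seq = 0                    # incremented only by the (single) scanner
        self.memory = [[(0, None)] for _ in range(n)]   # per-process list of (seq, value)

    def update(self, i, value):
        seq = self.curr_seq                  # read the scanner's sequence number
        self.memory[i].append((seq, value))  # append; old pairs are never overwritten

    def scan(self):
        self.curr_seq += 1                   # safe only because there is one scanner
        view = [None] * self.n
        for j in range(self.n):
            h = list(self.memory[j])         # h ← memory[j]
            # value in h with the largest seq < curr_seq: each process appends
            # pairs with nondecreasing seq, so the last qualifying pair wins
            for seq, value in reversed(h):
                if seq < self.curr_seq:
                    view[j] = value
                    break
        return view

As a quick check, after s = UnboundedSnapshot(3); s.update(0, "a"); s.update(2, "b"), a call to s.scan() returns ['a', None, 'b'].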

With a single scanner, this is equivalent to the original RST algorithm. The argument is that the original RST algorithm satisfies the invariant that the (seq, value) pair with the largest seq less than curr_seq is always found in either memory[j].high or memory[j].low. (Proof: first show that in RST, memory[j].low.seq is always ≤ memory[j].high.seq; then show that memory[j].high is copied to memory[j].low only if memory[j].high.seq < curr_seq, implying that the new memory[j].low now holds the pair with the largest seq less than curr_seq.) So taking the value with the largest seq less than curr_seq in memory[j] in the revised algorithm returns the same value that the Scan in RST returns, and the new algorithm works using the same proof that RST works.

With multiple scanners, the new algorithm fails because the increment of curr_seq is not atomic: a slow scanner could read an old value of curr_seq, add 1 to it, sleep for a long time while other scanners push curr_seq much higher, and then write back its stale value, wiping out the much more recent sequence numbers. Later updates are then tagged with old sequence numbers, messing up the histories and letting later scans miss recent values, so the snapshots are not linearizable.

