
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. Consensus numbers of tagged registers

A tagged register is a register that supports read and write operations on pairs (tag, value), where each tag is an element of some totally-ordered set and each value is arbitrary. The purpose of the tags is to prevent lost updates: a write with a tag smaller than the tag currently stored in the register is discarded. Read operations return the current value of the register as usual.

Consider the following variants of tagged registers, which differ in how they handle writes with tags equal to the current tag. For each variant, determine its consensus number (as defined in WaitFreeHierarchy).

  1. Writes with tags equal to the current tag go through as usual.
  2. Writes with tags equal to the current tag are discarded.
  3. Writes with tags equal to the current tag cause the register to go into a special failure state. Any subsequent read returns failure and any subsequent write has no effect.
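As a concrete (hypothetical) model of the object being analyzed, the three variants can be sketched as a single Python class; the class and attribute names are our own, and a lock stands in for the atomicity of the real shared object:

```python
import threading

class TaggedRegister:
    """Sketch of a tagged register; `variant` (1, 2, or 3) selects how a
    write whose tag equals the current tag is handled."""

    FAILED = object()  # sentinel returned by reads after a variant-3 failure

    def __init__(self, tag, value, variant):
        self.tag, self.value = tag, value
        self.variant = variant
        self.failed = False
        self.lock = threading.Lock()  # stands in for atomicity of the base object

    def write(self, tag, value):
        with self.lock:
            if self.failed:            # variant 3: later writes have no effect
                return
            if tag < self.tag:         # stale write: discarded in every variant
                return
            if tag == self.tag:
                if self.variant == 2:  # equal tags are discarded
                    return
                if self.variant == 3:  # equal tags poison the register
                    self.failed = True
                    return
                # variant 1: equal tags go through as usual
            self.tag, self.value = tag, value

    def read(self):
        with self.lock:
            if self.failed:
                return TaggedRegister.FAILED
            return (self.tag, self.value)
```

For example, after `write(1, a)` followed by `write(1, b)`, variant 1 holds b, variant 2 holds a, and variant 3 is in the failed state.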

1.1. Solution

  1. This version has consensus number 1. Proof: To show that this object can't do 2-process consensus, run the usual FLP-style argument until we reach a bivalent configuration C with pending operations x (by process px) and y (by process py) such that Cx is 0-valent and Cy is 1-valent. The usual case analysis excludes all cases except where x and y are writes to the same register. Assume without loss of generality that tag(x) ≤ tag(y). Then y's write goes through even after x's, so the register holds the same pair in Cxy as in Cy, and Cxy is indistinguishable from Cy to py. Running py alone from each of these two configurations must produce the same decision value, contradicting the 0-valence of Cxy and the 1-valence of Cy.

  2. This version has consensus number ∞. Proof: Use one tagged register as a sticky register by initializing it to (0,⊥) and having each process write (1,input) and then read the register and return whatever value it sees. Since only the first write with tag 1 succeeds, every process returns the same value: the input of the first writer.
  3. This version also has consensus number ∞. Proof: Suppose the register is initialized to (0,⊥). A process with input 0 writes (0,0) to the register, while a process with input 1 and id i (which we assume is at least 1) writes (i,1). If a 0-input process gets in first, its tag 0 matches the current tag, so the register moves to the failed state, and any process that reads failure can return 0. If instead a 1-input process gets in first, any subsequent 0-input write carries a tag smaller than the current tag and is discarded, and subsequent 1-input writes carry distinct tags and so never trigger failure. So any process that reads a non-failed value can return 1.
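The sticky-register protocol in part 2 can be sketched in Python; the class and function names are our own, and a minimal variant-2 register (writes with tags less than or equal to the current tag are discarded) stands in for the real shared object:

```python
class DiscardEqualTagRegister:
    """Minimal variant-2 tagged register: a write whose tag is less than
    or equal to the current tag is discarded (sketch, not from the notes)."""

    def __init__(self):
        self.tag, self.value = 0, None   # initialized to (0, ⊥)

    def write(self, tag, value):
        if tag > self.tag:               # equal and smaller tags are discarded
            self.tag, self.value = tag, value

    def read(self):
        return (self.tag, self.value)

def propose(register, my_input):
    """Consensus for any number of processes: only the first write with
    tag 1 succeeds, so every process returns the first writer's input."""
    register.write(1, my_input)
    _, winner = register.read()
    return winner
```

Here every call to `propose` after the first returns the first proposer's input, regardless of its own.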

2. Counters

Here is an implementation of a counter from atomic registers. Each process maintains a register whose value is equal to the number of increment operations carried out by that process; to execute an increment operation, the process increases the value in its register by 1. To read the counter, a process reads all the registers one at a time and sums up their values.
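The implementation just described can be sketched in Python; this is a sequential sketch in which the per-process registers become list slots, and the class and method names are our own:

```python
class Counter:
    """Counter from single-writer atomic registers: r[i] is written only
    by process i and holds the number of increments i has performed."""

    def __init__(self, n):
        self.r = [0] * n                # one register per process

    def inc(self, i):
        # Single-writer register: only process i updates r[i], so this
        # read-then-write does not race with other increments of r[i].
        self.r[i] = self.r[i] + 1

    def read(self):
        # Collect the registers one at a time and sum their values.
        total = 0
        for i in range(len(self.r)):
            total += self.r[i]
        return total
```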

  1. Is this counter linearizable? (1)

  2. Suppose we use the same mechanism to build a generalized counter, where an increment operation may increase or decrease the value of the counter by an arbitrary amount, not just +1. Is the resulting object linearizable?

2.1. Solution

  1. This counter is linearizable. We can construct an explicit linearization ordering by assigning to each increment the sum of the register values immediately after it performs its write, and to each read the value it returns; we then order operations first by assigned value, then increments before reads with the same value, and finally reads with the same value by execution order. This ordering is consistent with the execution ordering because the register values are non-decreasing. If one increment writes before another, the first increment's sum is strictly smaller. If one read finishes before another starts, the second read collects values no smaller than the first, so it is ordered no earlier. If a read starts after an increment finishes, it collects values at least as large as those present just after the increment's write, so it is ordered after the increment. Similarly, if a read finishes before an increment starts, it sees values no greater than those present when the increment writes, and a strictly smaller value in the increment's own register, so it is ordered before the increment. We must also show that the resulting sequential execution is consistent with the behavior of a counter: each read must return the number of increments that precede it. Here we observe that each write increases exactly one register by 1, so the increments receive distinct assigned values 1, 2, 3, ..., and the increments ordered before a read returning v are exactly the v increments with assigned values 1 through v. So each read value is consistent.
  2. The generalized counter is not linearizable. For example, consider the following execution involving processes p1 doing inc(+1), p2 doing inc(+2), and q1 and q2 each doing reads, where the underlying operations are interleaved as:

    1. q2 reads 0 from r[1].

    2. p1 writes 1 to r[1].

    3. q1 reads 1 from r[1].

    4. q1 reads 0 from r[2].

    5. p2 writes 2 to r[2].

    6. q2 reads 2 from r[2].

    Now q1 returns 1 and q2 returns 2; but in any sequential execution the counter's value is 0 initially, then either 1 or 2 after the first increment, then 3, so no ordering of the two increment operations makes both 1 and 2 appear as values of the counter.
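The interleaving above can be replayed step by step as a sequential Python simulation; the variable names are ours, and the dict keys follow the 1-indexed register names in the execution:

```python
# Generalized counter: r[i] holds the net amount added by process pi.
r = {1: 0, 2: 0}     # r[1] written by p1, r[2] written by p2

q2_sum = r[1]        # 1. q2 reads 0 from r[1]
r[1] = 1             # 2. p1 writes 1 to r[1] (its inc(+1))
q1_sum = r[1]        # 3. q1 reads 1 from r[1]
q1_sum += r[2]       # 4. q1 reads 0 from r[2]
r[2] = 2             # 5. p2 writes 2 to r[2] (its inc(+2))
q2_sum += r[2]       # 6. q2 reads 2 from r[2]

# q1 collected only the +1 (sum 1); q2 collected only the +2 (sum 2).
print(q1_sum, q2_sum)  # prints "1 2"
```

Neither ordering of the two increments yields both 1 and 2 as intermediate counter values, so the two reads cannot both be placed in one sequential execution.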

  1. Recall that an implementation is linearizable if, for every execution, there is a sequential execution of the simulated object such that if operation A finishes before operation B starts in the actual execution, operation A comes before operation B in the sequential execution. Equivalently, an implementation is linearizable if we can assign some time within the interval spanned by each operation such that we get a sequential execution of the simulated object if we treat each operation as happening atomically at its assigned time.

2014-06-17 11:58