
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. Sequential consistency

An implementation of a collection of shared objects is sequentially consistent if, for each concurrent execution X, there is a single sequential execution S, containing only operations started in X, such that X|p = S|p for all non-faulty processes p. This is weaker than linearizability because it doesn't require the order of operations in X to control the order of operations in S, except between operations of the same process.
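As an illustration (not part of the original notes), the definition can be checked by brute force for small histories over read/write registers. In this hypothetical Python sketch, an operation is a `(process, kind, register, value)` tuple, ⊥ is represented by `None`, and `legal` encodes ordinary register semantics:

```python
from itertools import permutations

def legal(S):
    """True if every read in sequential execution S returns the most
    recent preceding write to its register (or None if there is none)."""
    mem = {}
    for proc, kind, reg, val in S:
        if kind == "write":
            mem[reg] = val
        elif mem.get(reg) != val:
            return False
    return True

def proj(H, p):
    """The subsequence of history H consisting of process p's operations."""
    return [op for op in H if op[0] == p]

def sequentially_consistent(X):
    """Search all sequential executions of X's operations for one that
    is legal and agrees with X on every per-process projection."""
    procs = {op[0] for op in X}
    return any(legal(S) and all(proj(S, q) == proj(X, q) for q in procs)
               for S in permutations(X))
```

For example, a history in which q reads the initial value ⊥ after p's write has already completed is sequentially consistent (S may put q's read first), even though it is not linearizable.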

Suppose we use the Attiya, Bar-Noy, and Dolev algorithm (see SharedMemoryVsMessagePassing) to implement a distributed shared memory in a message-passing system with up to f < n/2 crash failures, providing exactly n simulated single-writer registers, one for each process. Since we are willing to accept sequential consistency instead of linearizability, we consider simplifying the read implementation.

  1. Suppose a reader sends a read message to all processes, and returns the value with the highest timestamp from the first ⌈(n+1)/2⌉ responses it receives, without sending any other messages. Is the resulting protocol sequentially consistent?

  2. Suppose instead that a read simply returns the value with the highest timestamp that it has personally received (or the initial value ⊥ if it has received none), without sending any messages at all. Is the resulting protocol sequentially consistent?
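The simplified read in question 1 can be sketched in Python (an illustrative fragment, not from the original notes; `responses` is a hypothetical list of `(timestamp, value)` pairs in arrival order):

```python
def quorum_read(responses, n):
    """Simplified read from question 1: take the first ceil((n+1)/2)
    (timestamp, value) responses to arrive and return the value carried
    by the highest timestamp, without sending any further messages."""
    quorum = responses[: (n + 2) // 2]   # first ceil((n+1)/2) responses
    ts, val = max(quorum)                # highest timestamp wins
    return val
```

With n = 3, if one process holds (2, 2) and the rest hold (1, 1), the adversary's choice of which responses arrive first determines whether the read returns 1 or 2, which is the lever used in the solution below.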

1.1. Solution

  1. This is not sequentially consistent, even with only one register. Suppose a writer writes 1 (sending it to all processes) and then writes 2, but crashes after sending the 2 to only one process. We can now make any reader return an arbitrary sequence of 1 and 2 values in response to read operations. To get a 1, have it receive responses from ⌈(n+1)/2⌉ processes that all hold the 1; to get a 2, include the one process holding the 2. But then a sequence of reads 1 2 1 2 (for example) is not consistent with any sequential execution containing only two write operations.
  2. This is not sequentially consistent, but we need to consider two registers to demonstrate this fact. Let p and q each write 1 to its own register and then read the other's register. Consider an execution in which all messages between p and q are delayed until after both have completed both operations. Then p and q both read ⊥. But in any sequential execution, one of the two processes writes first, and since this write precedes the other process's read, that read should return 1. So there is no single sequential execution that makes both p and q happy.

2. Failure detectors

Paxos is usually implemented with the Ω failure detector. This provides to each process the identity of some leader process (which can change over time), with the property that eventually all processes agree forever on a single non-faulty leader.

  1. Prove or disprove: It is possible to implement Ω in an asynchronous system with up to n-1 faults using ⋄P.
  2. Prove or disprove: It is possible to implement ⋄P in an asynchronous system with up to n-1 faults using Ω.

2.1. Solution

  1. Proof: The ⋄P detector guarantees that eventually every non-faulty process is permanently unsuspected by every process (eventual strong accuracy), and that every faulty process is eventually suspected by every process (strong completeness). So eventually every process suspects exactly the set of faulty processes. To implement Ω, have each process point to the lowest-id process it does not suspect; once all processes agree on the suspect set, they all agree on the same non-faulty leader.
  2. Disproof: Suppose we have an alleged implementation of ⋄P from Ω. Consider a system in which process 1 is non-faulty and Ω never points to any other process at any time. Put process 2 to sleep. If process 1 ever suspects process 2, wake process 2 up long enough to send one message, then put it back to sleep; by repeating this forever, we keep the failure detector from satisfying eventual strong accuracy. If instead process 1 eventually stops suspecting process 2 forever, we crash process 2 at that point, violating strong completeness. In either case the detector fails, even though Ω's own guarantee is respected throughout.
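The reduction in part 1 is simple enough to sketch in Python (an illustrative fragment, not from the original notes; `suspected` stands for the current suspect set reported by ⋄P at this process):

```python
def leader_from_suspects(process_ids, suspected):
    """Omega from diamond-P: point to the lowest-id process we do not
    currently suspect.  Once diamond-P stabilizes, every process sees
    the same suspect set (exactly the faulty processes), so every
    process elects the same non-faulty leader forever."""
    for p in sorted(process_ids):
        if p not in suspected:
            return p
    return None  # all processes suspected; cannot happen after stabilization
```

Before stabilization the outputs at different processes may disagree, but Ω only requires eventual agreement on a single non-faulty leader.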

3. Flaky consensus objects

Here we consider randomized protocols that almost solve consensus. Protocol ½A provides validity and termination (with probability 1), and provides agreement with probability at least 1/2. Protocol ½V provides agreement and termination (with probability 1), and provides validity with probability at least 1/2. Your goal is to use one or more instances of these protocols to build a working consensus protocol that satisfies all of validity, agreement, and probability-1 termination.

  1. Prove or disprove: There is a deterministic protocol that solves consensus given an unbounded number of copies of ½A.
  2. Prove or disprove: There is a deterministic protocol that solves consensus given an unbounded number of copies of ½V.

3.1. Solution

  1. Proof: We need a mechanism to detect agreement when ½A works. There are several ways to do this, but the easiest is probably to use the racing counters mechanism described in RandomizedConsensus, but replacing the shared coin with a no-op that preserves the process's current preference. This gives us a mechanism that (a) guarantees that every process decides on the common input value within 2 rounds when there is agreement, and (b) guarantees that if any process decides in some round r, then every other process adopts the decision value as its preference by round r. We use this mechanism in a sequence of phases: in each phase, every process calls the instance of ½A for that phase using its current preference as input. It adopts the output of ½A as its new preference, then runs 3 rounds of the racing counters mechanism. If it has not decided at the end of these three rounds, it uses its preference from the last round of racing counters as input to the next phase. We thus get:

    Termination: With probability at least 1/2, all processes enter the racing-counters part of each phase with the same preference and thus decide. So we terminate with probability 1, after 2 phases on average.
    Agreement: Enforced by the racing-counters mechanism.
    Validity: Follows from the validity properties of the racing-counters mechanism and of ½A.
  2. Proof: We reduce to the previous case by implementing ½A from ½V. Have each process write its input to a shared register before calling ½V. When a process obtains the common output x from ½V, it checks x against the registers and substitutes its own input if it does not see x anywhere. This always satisfies validity, and satisfies agreement whenever x is in fact the input of some process, which happens with probability at least 1/2.
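The reduction in part 2 can be sketched as follows (an illustrative Python fragment, not from the original notes; `registers` stands for the shared registers and `half_v` for one instance of the ½V protocol):

```python
def half_a_from_half_v(my_input, my_id, registers, half_v):
    """Half-A from half-V: announce our input in a shared register, run
    half-V, and fall back to our own input if half-V's common output
    never appears as anyone's announced input.  Validity now always
    holds; agreement holds whenever half-V's output was valid, i.e.
    with probability at least 1/2."""
    registers[my_id] = my_input     # announce input before calling half-V
    x = half_v(my_input)            # agreement always, validity w.p. >= 1/2
    if x in registers.values():     # x really is some process's input: keep it
        return x
    return my_input                 # otherwise substitute our own (valid) input
```

Note that all processes that see x in some register return the same value, so disagreement can arise only when ½V's output was invalid to begin with.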

2014-06-17 11:58