
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

/Solutions.

1. Sequential consistency

An implementation of a collection of shared objects is sequentially consistent if, for each concurrent execution X, there is a single sequential execution S, containing only operations started in X, such that X|p = S|p for all non-faulty processes p. This is weaker than linearizability because it doesn't require the order of operations in X to constrain the order of operations in S, except for operations of the same process.
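As a toy illustration of the definition, the brute-force check below (not part of the original notes; all names are made up for this sketch) searches for a sequential execution S that preserves each process's order of operations and is a legal register history. The classic separating example: p's write of 1 completes in real time before q's read of 0 begins, so the execution is not linearizable, but reordering the two operations gives a legal S, so it is sequentially consistent.

```python
from itertools import permutations

def legal(seq):
    """True if seq is a legal sequential register history: each read
    returns the most recent write to its register, or 0 initially.
    Operations are (process, op, register, value) tuples."""
    mem = {}
    for (_, op, reg, val) in seq:
        if op == 'write':
            mem[reg] = val
        elif mem.get(reg, 0) != val:
            return False
    return True

def sequentially_consistent(execution):
    """Search for a legal sequential order S of the operations in X that
    preserves each process's own order (X|p = S|p)."""
    procs = {e[0] for e in execution}
    for perm in permutations(execution):
        preserves = all(
            [e for e in perm if e[0] == p] == [e for e in execution if e[0] == p]
            for p in procs)
        if preserves and legal(perm):
            return True
    return False

# p's write of 1 finishes before q's read starts, yet the read returns 0:
# not linearizable, but sequentially consistent (take S = read; write).
X = [('p', 'write', 'r', 1), ('q', 'read', 'r', 0)]
print(sequentially_consistent(X))  # True
```

By contrast, if q first reads 1 and then reads 0, no reordering helps, since q's own order must be preserved, and `sequentially_consistent` returns False.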

Suppose we use the Attiya, Bar-Noy, and Dolev algorithm (see SharedMemoryVsMessagePassing) to implement a distributed shared memory in a message-passing system with up to f < n/2 crash failures, where we provide exactly n simulated single-writer registers, one for each process. Since we are willing to accept sequential consistency instead of linearizability, we consider simplifying the read implementation.

  1. Suppose a reader sends a read message to all processes, and returns the value with the highest timestamp from the first ⌈(n+1)/2⌉ responses it receives, without sending any other messages. Is the resulting protocol sequentially consistent?

  2. Suppose instead that a read simply returns the value with the highest timestamp that it has personally received (or the initial value ⊥ if it has received none), without sending any messages at all. Is the resulting protocol sequentially consistent?
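The read in question 1.1 can be sketched as below. This is an illustration only, not a claim about whether it is sequentially consistent (that is what the question asks); `responses` is a hypothetical stand-in for the (timestamp, value) replies as they arrive over the network, and the message-sending machinery is elided.

```python
import math

def simplified_read(responses, n):
    """Sketch of the read from question 1.1: wait for the first
    ceil((n+1)/2) responses, each a (timestamp, value) pair, and return
    the value with the highest timestamp, sending no further messages
    (in particular, no write-back phase as in the full ABD read).
    `responses` is assumed to be ordered by arrival time."""
    quorum = math.ceil((n + 1) / 2)
    first = responses[:quorum]     # first majority of replies to arrive
    ts, val = max(first)           # highest timestamp wins
    return val

# With n = 5, a majority is 3 responses; later arrivals are ignored:
print(simplified_read([(3, 'c'), (1, 'a'), (2, 'b'), (5, 'e')], 5))  # 'c'
```

Note that the reply (5, 'e') carries the highest timestamp overall but arrives after the quorum is full, which is exactly the kind of behavior the question asks you to reason about.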

2. Failure detectors

Paxos is usually implemented with the Ω failure detector. This provides each process with the identity of some leader process (which may change over time), with the guarantee that eventually all processes agree forever on a single non-faulty leader.

  1. Prove or disprove: It is possible to implement Ω in an asynchronous system with up to n-1 faults using ⋄P.
  2. Prove or disprove: It is possible to implement ⋄P in an asynchronous system with up to n-1 faults using Ω.
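For question 2.1, a natural candidate construction is worth having in mind: each process outputs as its leader the smallest-id process that its ⋄P module does not currently suspect. The sketch below shows one step of this rule; whether it actually satisfies Ω's specification with up to n-1 faults is what the question asks you to argue.

```python
def omega_output(suspects, process_ids):
    """One step of a candidate Omega implementation on top of ⋄P:
    output the smallest-id process not currently suspected. Intuition:
    once ⋄P stabilizes (suspecting exactly the crashed processes,
    forever), every process computes the same non-faulty minimum."""
    alive = [p for p in sorted(process_ids) if p not in suspects]
    return alive[0] if alive else None

# After stabilization with processes 2 and 3 crashed, everyone elects 1:
print(omega_output({2, 3}, [1, 2, 3, 4]))  # 1
```

Before stabilization, ⋄P may suspect live processes, so different processes may briefly elect different leaders; the eventual-agreement property is what has to be proved (or refuted).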

3. Flaky consensus objects

Here we consider randomized protocols that almost solve consensus. Protocol ½A provides validity and termination (with probability 1), and provides agreement with probability at least 1/2. Protocol ½V provides agreement and termination (with probability 1), and provides validity with probability at least 1/2. Your goal is to use one or more instances of these protocols to build a working consensus protocol that satisfies all of validity, agreement, and probability-1 termination.

  1. Prove or disprove: There is a deterministic protocol that solves consensus given an unbounded number of copies of ½A.
  2. Prove or disprove: There is a deterministic protocol that solves consensus given an unbounded number of copies of ½V.

Clarification added 2008-03-22: Assume a wait-free shared-memory system. In addition to the flaky consensus objects, you may also use as many atomic registers as you need, but up to n-1 processes may fail.

Clarification added 2008-03-26: The probability that ½A or ½V works may depend on the behavior of the adversary. In particular, if you try to use either as a substitute for a local coin (with appropriately set inputs), you may find that the adversary tweaks them to be completely deterministic (or not, depending on what is most likely to break your algorithm).
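To make the clarification concrete, here is a toy model of a ½A object (everything here, including the class name and the adversary hook, is invented for illustration and is not part of the problem). Validity and termination always hold; on a coin flip, an adversary-supplied function may pick the per-process outputs, modeling the fact that agreement fails in a way the adversary controls rather than by an independent coin.

```python
import random

class HalfA:
    """Toy model of a ½A object: validity and termination always hold;
    agreement holds with probability at least 1/2. The optional
    `adversary` hook reflects the 2008-03-26 clarification: when the
    object misbehaves, the adversary (not an independent coin) chooses
    the outputs, subject only to validity."""
    def __init__(self, adversary=None):
        # adversary: function mapping the list of inputs to a list of
        # per-process outputs, each of which must be some process's input.
        self.adversary = adversary

    def propose(self, inputs):
        if self.adversary is not None and random.random() < 0.5:
            outputs = self.adversary(inputs)
            assert all(v in inputs for v in outputs)  # validity still holds
            return outputs                            # agreement may fail
        return [inputs[0]] * len(inputs)              # agreement holds

# Honest instance: everyone decides process 0's input.
print(HalfA().propose([3, 1, 2]))  # [3, 3, 3]
```

An adversarial instance, e.g. `HalfA(adversary=lambda xs: list(xs))`, may simply echo each process's own input back to it, which is the "completely deterministic" behavior the clarification warns about when ½A is used as a substitute for a local coin.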


2014-06-17 11:58