
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.

Population protocols are a model of distributed computing by extremely limited agents that have little control over who they talk to. They were first described by Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta, Computation in networks of passively mobile finite-state sensors, Distributed Computing 18(4):235–253, March 2006 (the conference version appeared in PODC 2004), as a model of sensor networks with mobile nodes, although the best real-world application for them might be chemical computing. Since this paper, there has been a small cottage industry of population-protocol papers, which are instructive as an example of how a new model develops over time.

We will just describe some of the basic population protocol results. See here for a more detailed survey of the population-protocol literature.

1. The basic population protocol model

Essentially just a pile of finite-state agents, with transitions given by asymmetric interactions between pairs of agents. A configuration consists of an assignment of states to the set of agents. A transition consists of choosing two agents and updating their states according to a joint transition function, where the new state of each agent depends on the old states of both agents (this abstracts away whatever actual communication mechanism the agents use, though it implicitly assumes that communication is instantaneous).

Formally, we have a set of states Q and a transition function δ:Q×Q→Q×Q. A transition consists of pairing two agents (the initiator and the responder) and updating their states according to the transition function.
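The model above can be sketched in a few lines of code. The following is a minimal simulation (names and scheduler are my own, not from the paper, and the scheduler is uniformly random rather than adversarial): `delta` is a table mapping an ordered (initiator, responder) pair of states to their new states. The example transition table is the classic two-state leader-election protocol, in which two leaders who meet cancel down to one.

```python
import random

def run(delta, states, steps=10_000, seed=0):
    """Apply `steps` random pairwise interactions to the list `states`.

    Each step picks an ordered pair of distinct agents (i initiates,
    j responds) and rewrites both states via the joint transition
    function `delta`.
    """
    rng = random.Random(seed)
    states = list(states)
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)   # ordered: i is initiator, j responder
        states[i], states[j] = delta[states[i], states[j]]
    return states

# Example: leader election with Q = {'L', 'F'}.  Only an (L, L) meeting
# changes anything: one of the two leaders becomes a follower.
delta = {('L', 'L'): ('L', 'F'), ('L', 'F'): ('L', 'F'),
         ('F', 'L'): ('F', 'L'), ('F', 'F'): ('F', 'F')}
print(run(delta, ['L'] * 5).count('L'))   # leaders cancel down to a single L
```

Note that `delta` need not be symmetric in its two arguments; the initiator/responder distinction is part of the model.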


1.1. Inputs and outputs

An input map Σ → Q supplies the initial state of each agent based on its local input; an output map Q → Y extracts a local output from each agent's state, where Y is the output alphabet (just {0, 1} when computing a predicate).

Ideally, we want to converge to a configuration where all agents have the same output, and this common output persists over all future transitions. This is called a stable computation.
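As a tiny self-contained illustration of stable computation (function names and the uniform random scheduler are my own assumptions), here is the full input-map/transition/output-map pipeline for the OR predicate "some agent has input 1": state 1 spreads epidemically to any interaction partner, so every agent's output stabilizes to the OR of the inputs.

```python
import random

def stable_or(inputs, steps=10_000, seed=0):
    """Compute OR of the input bits by epidemic spreading of state 1."""
    I = {0: 0, 1: 1}                 # input map Sigma -> Q (identity here)
    O = {0: 0, 1: 1}                 # output map Q -> Y (identity here)
    states = [I[x] for x in inputs]
    rng = random.Random(seed)
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        states[i] = states[j] = states[i] | states[j]   # 1 infects partner
    return [O[q] for q in states]

print(stable_or([0, 0, 1, 0]))   # stabilizes with every agent outputting 1
```

Once every agent holds 1, no transition changes anything, so the common output persists forever: this is exactly the stability condition.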

1.2. Fairness

Getting a stable computation depends on scheduling of transitions. A sufficiently powerful adversary could choose, for example, to never schedule a transition involving some particular agent. In this case the victim agent will never update its state and never converge to the correct output. So we need some restriction on the adversary. The usual condition is

Global fairness
If a global configuration C occurs infinitely often in an execution, and C→C' is a possible transition, then C' also occurs infinitely often.

This is an attempt to abstract out the key feature of randomized scheduling (where the next two agents to interact are chosen at random according to some probability distribution) without using actual probabilities. We could also consider random scheduling, which turns out to give the population more computational power if we allow some chance of failure.

1.3. Interaction graph

It may be that not all agents can talk to each other (maybe our sensors are geographically localized, or maybe we are not mixing our test tube very well). This is modeled by an interaction graph, a directed graph in which an agent a can initiate an interaction with an agent b if and only if there is an edge from a to b. Restricting the interaction graph generally makes population protocols more powerful: any connected interaction graph can simulate the complete graph by having virtual agent states migrate around on top of the actual agents, while structured graphs such as a line additionally let the protocol organize its agents into something like a Turing machine tape.
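A restricted scheduler is easy to sketch (a hypothetical simulation of my own, not from the paper): each step picks a random directed edge (a, b) and lets a initiate with b. The toy protocol below runs on a directed line 0 → 1 → 2 → 3, where a token state 'T' hops forward along edges, which is the simplest instance of state migrating over fixed agents.

```python
import random

def run_on_graph(delta, states, edges, steps=1000, seed=0):
    """Simulate a protocol where only listed directed edges may interact."""
    rng = random.Random(seed)
    states = list(states)
    for _ in range(steps):
        a, b = rng.choice(edges)             # a initiates, b responds
        states[a], states[b] = delta[states[a], states[b]]
    return states

edges = [(0, 1), (1, 2), (2, 3)]             # directed line graph
delta = {('T', '.'): ('.', 'T'),             # token hops from initiator to responder
         ('.', 'T'): ('.', 'T'),             # token at responder: no change
         ('.', '.'): ('.', '.'),
         ('T', 'T'): ('T', 'T')}
print(run_on_graph(delta, ['T', '.', '.', '.'], edges))   # token drifts to agent 3
```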

2. Stably computable predicates

The first big open problem in population protocols was the question of what predicates can be stably computed in the basic model on a complete interaction graph, assuming only global fairness. Because all agents in the same state are indistinguishable, we can describe an input or a configuration by a vector that counts how many agents are in each state, and we can think of the number of agents in a given state as representing some variable in unary.
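This count-vector view is worth making concrete, since the rest of the section reasons entirely in terms of such vectors. A small sketch (helper names are my own): a configuration over states Q = ('a', 'b') becomes a tuple of counts, and a transition δ(pre) = post moves one unit of count per participating agent.

```python
from collections import Counter

def as_vector(states, Q=('a', 'b')):
    """Collapse a list of agent states into a vector of counts over Q."""
    c = Counter(states)
    return tuple(c[q] for q in Q)

def apply_transition(vec, pre, post, Q=('a', 'b')):
    """Apply delta(pre) = post to a count vector, if the transition is enabled."""
    idx = {q: k for k, q in enumerate(Q)}
    v = list(vec)
    v[idx[pre[0]]] -= 1; v[idx[pre[1]]] -= 1    # remove the two old states
    if min(v) < 0:
        raise ValueError("transition not enabled in this configuration")
    v[idx[post[0]]] += 1; v[idx[post[1]]] += 1  # add the two new states
    return tuple(v)

print(as_vector(['a', 'a', 'b']))                        # (2, 1)
print(apply_transition((2, 1), ('a', 'b'), ('a', 'a')))  # (3, 0)
```

The enabledness check also captures the rule that an agent cannot interact with itself: a transition with pre = ('a', 'a') needs count at least 2 in state 'a'.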

2.1. Semilinear predicates

Some predicates we can compute (these were all in the AADFP paper): threshold predicates such as [#a ≥ k] for a constant k, remainder predicates such as [#a ≡ r (mod m)], comparisons such as [#a > #b] (majority), and Boolean combinations of all of these.
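As one concrete example, here is a sketch in the style of the AADFP threshold protocols, for the predicate "at least k agents have input 1" (the code and names are mine; k = 3 here). Each agent holds a token count in 0..k; interactions pool tokens into the initiator, capping at k, and the state k spreads epidemically as a "threshold reached" signal.

```python
import random

K = 3   # the threshold; a constant baked into the (finite) state space

def delta(u, v):
    """Joint transition on token counts u (initiator) and v (responder)."""
    if u == K or v == K:
        return K, K                              # "threshold reached" spreads
    total = u + v
    return min(total, K), max(total - K, 0)      # pool tokens into initiator

def threshold(inputs, steps=20_000, seed=0):
    rng = random.Random(seed)
    states = list(inputs)                        # input map: bit -> that many tokens
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        states[i], states[j] = delta(states[i], states[j])
    return [int(q == K) for q in states]         # output map: 1 iff saw threshold

print(threshold([1, 1, 1, 0, 0]))   # 3 ones >= 3: all agents stabilize to output 1
print(threshold([1, 1, 0, 0, 0]))   # 2 ones  < 3: all agents output 0
```

The token total is an invariant until capping kicks in, so the population can never spuriously reach K when the true count is below the threshold.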

These predicates are exactly those definable in first-order Presburger arithmetic, or equivalently those whose sets of satisfying count vectors can be expressed as the union of a finite collection of cones of the form { b + ∑i aixi : ai ∈ ℕ }, where b is a base vector and the xi are finitely many offset vectors, each of which can be added to b zero or more times. A finite union of such cones is called a semilinear set.
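Membership in a semilinear set can be tested directly from this definition. A small sketch (my own brute-force formulation, assuming all offset vectors are nonnegative and nonzero, so each coefficient is bounded by the target's largest coordinate):

```python
from itertools import product

def in_linear_set(target, base, periods):
    """Is target = base + sum a_i * periods[i] for some nonnegative a_i?"""
    d = len(target)
    bound = max(target) + 1          # safe coefficient bound for nonzero periods
    for coeffs in product(range(bound), repeat=len(periods)):
        v = list(base)
        for a, x in zip(coeffs, periods):
            for k in range(d):
                v[k] += a * x[k]
        if tuple(v) == tuple(target):
            return True
    return False

def in_semilinear_set(target, cones):
    """A semilinear set is a finite union of linear sets (base, periods)."""
    return any(in_linear_set(target, b, ps) for b, ps in cones)

# Example: majority, #a > #b, as count vectors (n_a, n_b):
# base (1, 0) with periods (1, 0) and (1, 1) generates exactly {(x, y) : x > y}.
majority = [((1, 0), [(1, 0), (1, 1)])]
print(in_semilinear_set((3, 2), majority))   # True: 3 > 2
print(in_semilinear_set((2, 2), majority))   # False: a tie is not a majority
```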

2.2. Impossibility result

Several papers following AADFP showed that one could compute semilinear predicates in many tweaked versions of the original model, but nobody could see how to do anything else without restricting scheduling or the interaction graph. Eventually it was shown in Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert, The computational power of population protocols, Distributed Computing 20(4):279–304, November 2007 (the journal version of a PODC 2006 paper containing the original results), that only the semilinear predicates are computable without extra assumptions.

The proof of the full result is a little messy. We will show a weaker result that gives a glimmering of the full result, which is that the set of output-stable configurations with a given output is semilinear. The full result says the same thing about the set of input configurations that produce a given output, assuming that every input configuration eventually converges to a single output value.

A central tool is Dickson's Lemma: every subset of ℕd has finitely many minimal elements under the product ordering (where x ≤ y iff xi ≤ yi for every coordinate i). A corollary to Dickson's Lemma is that any infinite subset of ℕd contains an infinite ascending sequence x1 < x2 < x3 < ....

Why do we like Dickson's Lemma? Suppose we take the set S of output-stable configurations with a given output (0, say). This set is characterized by the property that starting from any x∈S, there is no sequence of interactions that produces an agent with a different output. So now let R be the set of configurations from which a nonzero output is reachable. It's easy to see that if x is in R and x ≤ y, then y is also in R: from y, we can produce a nonzero output by following the same sequence of transitions as from x, ignoring the extra agents. Now let R0 be the set of minimal vectors in R, which is finite by Dickson's Lemma. Then (a) if x ∈ R0 and x ≤ y, then y is in R (already shown), and (b) if y is in R, then there exists some x ∈ R0 with x ≤ y (if not, there would be some minimal element of R below y that we forgot to include in R0). In other words, x is in S if and only if there is no y in R0 with y ≤ x.

Writing this as a first-order formula in Presburger arithmetic, x is in S if and only if ¬ ∨y∈R0 ∧i (xi ≥ yi); since R0 is finite, this is a legitimate finite formula. This means that the set of output-stable configurations with a particular output is definable in first-order Presburger arithmetic and is thus semilinear.
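The minimal-elements argument can be checked mechanically. A small sketch (the sample set R below is hypothetical data of my own, standing in for the bad set of configurations): compute the minimal elements R0 under the product order, then test stability as "no element of R0 fits underneath x".

```python
def leq(y, x):
    """Product order on vectors: y <= x componentwise."""
    return all(yi <= xi for yi, xi in zip(y, x))

def minimal_elements(vectors):
    """Dickson's Lemma guarantees this set is finite for any subset of N^d."""
    return [v for v in vectors
            if not any(leq(u, v) and u != v for u in vectors)]

# Hypothetical sample of R, the set of configurations from which a
# nonzero output is reachable (upward closed under the product order).
R = [(2, 0, 1), (1, 0, 1), (3, 2, 1), (0, 4, 0), (0, 5, 2)]
R0 = minimal_elements(R)
print(R0)                                       # [(1, 0, 1), (0, 4, 0)]

def in_S(x):
    """x is output-stable iff no minimal bad vector fits under it."""
    return not any(leq(y, x) for y in R0)

print(in_S((0, 3, 7)), in_S((1, 1, 1)))         # True False
```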

3. More power from stronger models

Stock population protocols are not very strong; the limitation to semilinear sets means that they can't compute non-semilinear predicates like A > B√2 or A⋅B = C. Intuitively, the way a population protocol typically fails is that it can do a single loop (the population eventually converges) but not a nested loop (it can't detect that the inner loop has converged, which is what would make it safe to start the next iteration of the outer loop). By strengthening the model, we can give population protocols more power. Some possibilities from the literature (see the survey paper for details): giving agents unique identifiers that they can compare and record ("community protocols", which can stably compute any suitably symmetric predicate in NSPACE(n log n)); assuming a unique leader plus random scheduling, which lets the population simulate a register machine with high probability if we tolerate a small chance of error; or giving each agent a non-constant amount of memory, as with passively mobile machines with O(log n) bits of space per agent.

4. Some open problems in the base model

The base model is unsatisfying in several ways: it provides no notion of time, and the semilinearity bound limits what we can compute. There are variants of the base model that (to my knowledge) have not yet been explored and that might overcome these limitations:

Adding a time measure
Right now, global fairness just means that something happens eventually, but eventually can mean nigh-unto-forever for us mere mortals. In our usual asynchronous models, we could define a time unit as the minimum time until every process gets to do something. What is the analog of this for globally fair executions? If we only care about local fairness (every pairwise interaction eventually happens), then we could define a time unit as the time until every pair of agents meets at least once. But local fairness doesn't seem strong enough to solve most problems in this model.
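The locally fair time unit suggested above is easy to estimate empirically under uniform random scheduling (a hypothetical experiment of my own, not a definition from the literature): count interactions until every ordered pair of agents has met at least once, a coupon-collector-style quantity on the order of n² log n interactions.

```python
import random

def time_until_all_pairs_meet(n, seed=0):
    """Interactions until every ordered (initiator, responder) pair occurs."""
    rng = random.Random(seed)
    unmet = {(i, j) for i in range(n) for j in range(n) if i != j}
    steps = 0
    while unmet:
        i, j = rng.sample(range(n), 2)   # uniform random ordered pair
        unmet.discard((i, j))
        steps += 1
    return steps

print(time_until_all_pairs_meet(10))   # typically a few hundred interactions
```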
Replacing random scheduling with oblivious scheduling

A standard dodge in traditional distributed models is to replace an assumption of random scheduling with an assumption of oblivious scheduling plus randomization in the algorithm (this is similar to the usual trick of replacing an assumed random input with randomness inside the algorithm, as in randomized QuickSort). Can we do this for population protocols? Suppose that we have an oblivious adversary (possibly guaranteeing only local fairness), but transitions are probabilistic. Can we boost local fairness to global fairness?1 Can we beat the semilinearity limit?


  1. My guess: yes.

2014-06-17 11:58