*For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.*

The **session problem** demonstrates some limitations on *globally* simulating a synchronous message-passing system with an asynchronous message-passing system. In a global simulation, our goal is to produce a simulation that looks synchronous "from the outside"; that is, that looks synchronous to an observer that can see the entire schedule. In contrast, a synchronizer (see Synchronizers) produces a simulation that looks synchronous "from the inside"—the resulting execution is indistinguishable from a synchronous execution to any of the processes, but an outside observer can see that different processes execute different rounds at different times.

In our description, we will mostly follow AttiyaWelch §6.2.2; see AttiyaWelch for references on the origin of the problem.

A solution to the session problem is an asynchronous protocol in which each process repeatedly executes some **special action**. Our goal is to guarantee that these special actions group into s sessions, where a session is an interval of time in which every process executes at least one special action. We also want the protocol to terminate: this means that in every execution, every process executes a finite number of special actions.
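To make the definition of a session concrete, here is a minimal sketch (the function name and event encoding are ours, not from the source) that counts the maximum number of disjoint sessions in a schedule, where the schedule is given as the sequence of processes performing special actions:

```python
# Hypothetical sketch: count disjoint sessions in a schedule of special
# actions. A session ends as soon as every process has performed at
# least one special action since the previous cut; cutting as early as
# possible maximizes the number of disjoint sessions.

def count_sessions(schedule, processes):
    sessions = 0
    seen = set()
    for p in schedule:
        seen.add(p)
        if seen == set(processes):  # every process has acted: close a session
            sessions += 1
            seen = set()
    return sessions

# A synchronous schedule of s rounds yields s sessions, since each
# process takes one special action per round:
print(count_sessions(["p0", "p1"] * 3, ["p0", "p1"]))  # → 3
```

The greedy earliest-cut rule is what the lower-bound argument below counts against: if even the greedy counter finds fewer than s sessions, no grouping of the special actions into s sessions exists.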

A synchronous system can solve this problem trivially in s rounds: each process executes one special action per round. For an asynchronous system, a lower bound of Attiya and Mavronicolas (based on an earlier bound of Arjomandi et al., who defined the problem in a slightly different communication model) shows that if the diameter of the network is D, then there is no solution to the s-session problem that takes (s-1)D time or less in the worst case. The argument is based on reordering the events of any such execution to produce fewer than s sessions, using the happens-before relation from LogicalClocks.

# 1. Outline of the proof

(See AttiyaWelch §6.2.2 for the real proof.)

Fix some algorithm A for solving the s-session problem, and suppose that its worst-case time complexity is (s-1)D or less. Consider some synchronous execution of A (that is, one where the adversary scheduler happens to arrange the schedule to be synchronous) that takes (s-1)D rounds or less. Divide this execution into two segments: an initial segment β that includes all rounds with special actions, and a suffix δ that includes any extra rounds where the algorithm is still floundering around. We will mostly ignore δ, but we have to leave it in to allow for the possibility that whatever is happening there is important for the algorithm to work (e.g. to detect termination).

We now want to perform a causal shuffle on β that leaves it with only s-1 sessions. The first step is to chop β into at most s-1 segments β_{1},β_{2}, ... of at most D rounds each. Because the diameter of the network is D, there exist processes p_{0} and p_{1} at distance D from each other; so no chain of messages starting at p_{0} within some segment reaches p_{1} before the end of the segment (any such chain needs at least D time to cover distance D, which carries it past the segment boundary). It follows that for any events e_{0} of p_{0} and e_{1} of p_{1} in the same segment β_{i}, it is not the case that e_{0}⇒_{βδ}e_{1}. So there exists a causal shuffle of β_{i} that puts all events of p_{0} after all events of p_{1}. By a symmetric argument, we can similarly put all events of p_{1} after all events of p_{0}. In both cases the resulting schedule is indistinguishable from the original to every process.
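The shuffle itself can be sketched as a priority-driven topological sort of the segment's happens-before DAG (the event and edge encoding here is ours, not from the source): among the events whose predecessors have all been scheduled, prefer events of the process that should go first and postpone events of the process that should go last.

```python
import heapq

# Hypothetical sketch of a causal shuffle. Events are (process, label)
# pairs; hb_edges lists index pairs (i, j) meaning events[i]
# happens-before events[j] within the segment.

def causal_shuffle(events, hb_edges, first, last):
    """Topologically re-sort a segment, scheduling events of `first` as
    early as possible and events of `last` as late as possible. When no
    happens-before chain leads from `last` to `first` (the distance-D
    argument in the text), this puts ALL events of `first` before ALL
    events of `last`, and no process can distinguish the result from
    the original schedule."""
    n = len(events)
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for i, j in hb_edges:
        succ[i].append(j)
        indeg[j] += 1

    def rank(i):
        # Prefer `first`, then everyone else, then `last`;
        # break ties by original position to keep the shuffle stable.
        p = events[i][0]
        pref = 0 if p == first else (2 if p == last else 1)
        return (pref, i)

    ready = [rank(i) for i in range(n) if indeg[i] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, i = heapq.heappop(ready)
        order.append(events[i])
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(ready, rank(j))
    return order

# Segment with two events per process and only per-process hb edges:
events = [("p0", "a"), ("p1", "b"), ("p0", "c"), ("p1", "d")]
print(causal_shuffle(events, [(0, 2), (1, 3)], "p1", "p0"))
# all of p1's events now precede all of p0's events
```

Because the output is a topological order of the happens-before DAG, each process sees its own events in the original order and every message is still sent before it is received, which is exactly what indistinguishability requires.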

So now we apply these shuffles to each of the segments β_{i} in alternating order: p_{0} goes first in the even-numbered segments and p_{1} goes first in the odd-numbered segments, yielding a sequence of shuffled segments β'_{i}. This has the effect of gluing together blocks of events of the same process across adjacent segment boundaries, as in this example with s-1 = 4:

βδ|(p_{0},p_{1}) = β_{1}β_{2}β_{3}β_{4}δ|(p_{0},p_{1})
                 = β'_{1}β'_{2}β'_{3}β'_{4}δ|(p_{0},p_{1})
                 = (p_{1}p_{0}) (p_{0}p_{1}) (p_{1}p_{0}) (p_{0}p_{1}) δ
                 = p_{1} (p_{0}p_{0}) (p_{1}p_{1}) (p_{0}p_{0}) p_{1} δ

(here each p_{0}, p_{1} stands in for a sequence of events of the corresponding process).

Now let's count sessions. We can't end a session until we reach a point where both processes have taken at least one step since the end of the last session. If we mark with a slash the earliest places where this can happen, we get a picture like this:

p_{1}p_{0}/p_{0}p_{1}/p_{1}p_{0}/p_{0}p_{1}/p_{1}δ

There are only s-1 slashes, and the trailing p_{1}δ contains no further p_{0} events (recall that δ contains no special actions at all), so no additional session can complete. The shuffled execution thus has at most s-1 sessions, contradicting the assumption that A solves the s-session problem. This concludes the proof.
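The counting step can be checked mechanically. This sketch (encoding is ours: each process contributes one block of special actions per shuffled segment) builds the alternating schedule from the s-1 = 4 example above and counts its sessions greedily:

```python
# Count disjoint sessions in the shuffled schedule from the example.

def count_sessions(schedule, processes):
    sessions, seen = 0, set()
    for p in schedule:
        seen.add(p)
        if seen == set(processes):  # every process has acted: cut here
            sessions += 1
            seen = set()
    return sessions

# Shuffled segments: p1 first in odd-numbered segments, p0 in even ones.
segments = [["p1", "p0"], ["p0", "p1"], ["p1", "p0"], ["p0", "p1"]]
schedule = [p for seg in segments for p in seg]
# schedule reads p1 (p0 p0) (p1 p1) (p0 p0) p1: only 4 cuts fit.
print(count_sessions(schedule, ["p0", "p1"]))  # → 4, i.e. s-1 < s
```

Even though the greedy counter cuts sessions as early as possible, only s-1 = 4 sessions fit, matching the slash diagram above.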