
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.

1. Low-level view: I/O automata

An I/O automaton A is an automaton where transitions are labelled by actions, which come in three classes: input actions, triggered by the outside world; output actions, triggered by the automaton and visible to the outside world; and internal actions, triggered by the automaton but not visible to the outside world. These classes correspond to inputs, outputs, and internal computation steps of the automaton; the latter are provided mostly to give merged input/output actions a place to go when automata are composed together. A transition relation trans(A) relates states(A)×acts(A)×states(A); if (s,a,s') is in trans(A), it means that A can move from state s to state s' by executing action a.

There is also an equivalence relation task(A) on the output and internal actions, which is used for enforcing fairness conditions---the basic idea is that in a fair execution some action in each equivalence class must be executed eventually (a more accurate definition will be given below).

The I/O automaton model carries with it a lot of specialized jargon. We'll try to avoid it as much as possible. One thing that will be difficult to avoid in reading LynchBook is the notion of a signature, which is just the tuple (in(A), out(A), int(A)) describing the actions of an automaton A.
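
To make these definitions concrete, here is a minimal Python sketch of an I/O automaton as a data structure. The class and field names are mine, not part of the model or of LynchBook; think of it as executable notation rather than a serious implementation.

    from dataclasses import dataclass, field

    @dataclass
    class IOAutomaton:
        inputs: set                  # in(A): input actions
        outputs: set                 # out(A): output actions
        internals: set               # int(A): internal actions
        states: set
        start: set                   # start states
        trans: set                   # transition relation: triples (s, a, s')
        tasks: list = field(default_factory=list)  # partition of output+internal actions

        def signature(self):
            # the signature (in(A), out(A), int(A))
            return (self.inputs, self.outputs, self.internals)

        def actions(self):
            return self.inputs | self.outputs | self.internals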

1.1. Enabled actions

An action a is enabled in some state s if trans(A) contains at least one transition (s,a,s'). Input actions are always enabled---this is a requirement of the model. Output and internal actions---the "locally controlled" actions---are not subject to this restriction. A state s is quiescent if only input actions are enabled in s.
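
In terms of the IOAutomaton sketch above, these two definitions might be written as follows (again, purely illustrative helpers, not part of the model):

    def enabled(A, s, a):
        # a is enabled in s if trans(A) contains at least one transition (s, a, s')
        return any(s1 == s and act == a for (s1, act, s2) in A.trans)

    def quiescent(A, s):
        # only input actions are enabled in s
        return all(a in A.inputs for a in A.actions() if enabled(A, s, a))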

1.2. Executions, fairness, and traces

An execution of A is a sequence s0 a0 s1 a1 ... where each triple (si, ai, si+1) is in trans(A). Executions may be finite or infinite; if finite, they must end in a state.

A trace of A is a subsequence of some execution consisting precisely of the external (i.e., input and output) actions, with states and internal actions omitted. If we don't want to get into the guts of a particular I/O automaton---and we usually don't, unless we can't help it because we have to think explicitly about states for some reason---we can describe its externally-visible behavior by just giving its set of traces.
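
As a sketch, extracting the trace of a finite execution (represented here as an alternating list s0, a0, s1, a1, ..., sn, an encoding I am assuming for illustration) is just a filter on the external actions:

    def trace(A, execution):
        # execution alternates states and actions: [s0, a0, s1, a1, ..., sn];
        # keep only the external (input and output) actions, in order
        external = A.inputs | A.outputs
        return [x for i, x in enumerate(execution) if i % 2 == 1 and x in external]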

1.3. Composition of automata

Composing a set of I/O automata yields a new super-automaton whose state set is the Cartesian product of the state sets of its components and whose action set is the union of the action sets of its components. A transition with a given action a updates the states of all components that have a as an action and has no effect on the states of other components. The classification of actions into the three classes is used to enforce some simple compatibility rules on the component automata; in particular:

  1. An internal action of a component is never an action of another component---internal actions are completely invisible.
  2. No output action of a component can be an output action of another component.
  3. No action is shared by infinitely many components (see footnote 1). In practice this means that no action can be an input action of infinitely many components, since the preceding rules mean that any action is an output or internal action of at most one component.

All output actions of the components are also output actions of the composition. An input action of a component remains an input action of the composition unless some other component supplies it as an output, in which case the shared action becomes an output action of the composition. Internal actions remain internal (and largely useless, except for bookkeeping purposes).
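
A sketch of how one step of the composed automaton might be computed, assuming the IOAutomaton sketch above (the helper and its name are mine; the model defines the whole composite transition relation at once, but stepping is easier to write down):

    from itertools import product

    def compose_step(components, states, a):
        # successor state tuples of the composition when action a occurs:
        # every component that has a in its signature takes a matching step,
        # and every other component keeps its state unchanged
        choices = []
        for A, s in zip(components, states):
            if a in A.actions():
                succ = [s2 for (s1, act, s2) in A.trans if s1 == s and act == a]
                if not succ:
                    return []        # a is not enabled in this composite state
                choices.append(succ)
            else:
                choices.append([s])  # unaffected component
        return list(product(*choices))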

The tasks equivalence relation is the union of the tasks relations for the components: this turns out to give a genuine equivalence relation on output+internal actions precisely because the first two compatibility rules hold.

Given an execution or trace X of a composite automaton that includes A, we can construct the corresponding execution or trace X|A of A by keeping just A's part of each state and the actions in which A participates (steps that involve only other components drop out). The definition of composition is chosen so that X|A is in fact an execution/trace of A whenever X is.
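
For traces, the restriction operation is just a filter on the actions visible to A (for executions we would also have to project out A's component of each composite state); a sketch:

    def restrict_trace(X, A):
        # X|A for a trace X: keep only the external actions of A
        visible = A.inputs | A.outputs
        return [a for a in X if a in visible]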

1.4. Hiding actions

Composing A and B continues to expose the outputs of A even if they line up with inputs of B. While this may sometimes be desirable, often we want to shove such internal communication under the rug. The model lets us do this by redefining the signature of an automaton to make some or all of the output actions into internal actions.
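
Hiding touches only the signature, not the states or transitions; in terms of the IOAutomaton sketch above it might look like this:

    def hide(A, hidden):
        # reclassify the output actions in `hidden` as internal actions
        return IOAutomaton(A.inputs, A.outputs - hidden, A.internals | hidden,
                           A.states, A.start, A.trans, A.tasks)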

1.5. Fairness

I/O automata come with a built-in definition of fair executions, where an execution of A is fair if, for each equivalence class C of actions in tasks(A),

  1. the execution is finite and no action in C is enabled in the final state, or
  2. the execution is infinite and there are infinitely many occurrences of actions in C, or
  3. the execution is infinite and there are infinitely many states in which no action in C is enabled.

If we think of C as corresponding to some thread or process, this says that C gets infinitely many chances to do something in an infinite execution, but may not actually do anything if it gives up and stops waiting (the third case). The finite case essentially says that a finite execution isn't fair unless nobody is waiting at the end. The motivation for this particular definition is that it guarantees (a) that any finite execution can be extended to a fair execution and (b) that the restriction X|A of a fair execution or trace X is also fair.
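
For a finite execution the definition collapses to the first case, which is easy to check directly; a sketch, reusing the enabled helper from earlier:

    def fair_finite(A, execution):
        # a finite execution is fair iff no task class has an enabled action
        # in its final state (case 1 of the definition)
        final = execution[-1]
        return not any(enabled(A, final, a) for C in A.tasks for a in C)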

Fairness is useful e.g. for guaranteeing message delivery in a message-passing system: make each message-delivery action its own task class and each message will eventually be delivered; similarly make each message-sending action its own task class and a process will eventually send every message it intends to send. Tweaking the task classes can allow for possibilities of starvation, e.g. if all message-delivery actions are equivalent then a spammer can shut down the system in a "fair" execution where only his (infinitely many) messages are delivered.

1.6. Specifying an automaton

The typical approach is to write down preconditions and effects for each action (for input actions, the preconditions are empty), e.g. a spambot:
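
The specification itself appears to be missing from this static copy; judging from how the spambot is used in the examples below, it presumably looked roughly like the following (a reconstruction, not the original):

    spambot:
      state: a message or null

      set-message(m)   (input)
        Effect: state := m

      spam(m)          (output)
        Precondition: state = m
        Effect: none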

(Plus an initial state, e.g. state = null, where null is not a possible message, and a task partition, of which we will speak more below when we talk about liveness properties.)

2. High-level view: traces

When studying the behavior of a system, traces are what we really care about, and we want to avoid talking about states as much as possible. So what we'll aim to do is to get rid of the states early by computing the set of traces (or fair traces) of each automaton in our system, then compose traces to get traces for the system as a whole. Our typical goal will be to show that the resulting set of traces has some desirable properties, usually of the form (1) nothing bad happens (a safety property); (2) something good eventually happens (a liveness property); or (3) the horribly complex composite automaton representing this concrete system acts just like that nice clean automaton representing a specification (a simulation).

Very formally, a trace property specifies both a signature and a set of sequences of actions drawn from that signature; an automaton satisfies the property if all of its traces (or perhaps all of its fair traces) appear in the set. We'll usually forget about the first part.

Tricky detail: It's ok if not all traces in P are generated by A (we want traces(A) to be a subset of P, but not necessarily equal). But traces(A) will be pretty big (it includes, for example, all finite sequences of input actions), so hopefully the fact that A has to do something with inputs will tell us something useful.

2.1. Example

A property we might demand of the spambot above (or some other abstraction of a message channel) is that it only delivers messages that have previously been given to it. As a trace property this says that in any trace t, if tk = spam(m), then tj = set-message(m) for some j < k. (As a set, this is just the set of all sequences of external spambot-actions that have this property.) Call this property P.
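
A sketch of P as a predicate on finite traces, with actions encoded as pairs like ('set-message', m) and ('spam', m) (the encoding is mine, chosen for illustration):

    def satisfies_P(t):
        # every spam(m) must be preceded by some set-message(m) with the same m
        seen = set()
        for kind, m in t:
            if kind == 'set-message':
                seen.add(m)
            elif kind == 'spam' and m not in seen:
                return False
        return True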

To prove that the spambot automaton given above satisfies P, we might argue that in any execution s0 a0 s1 a1 ..., each state si equals the message m of the last preceding set-message(m) action, or null if there is no such action. This is easily proved by induction on i. It then follows that, since spam(m) can only transmit the current state, any spam(m) action occurs in a state si = m and thus follows some earlier set-message(m), as claimed.

However, there are traces that satisfy P that don't correspond to executions of the spambot; for example, consider the trace set-message(0) set-message(1) spam(0). This satisfies P (0 was given to the automaton before the spam(0) action), but the automaton won't generate it because the 0 was overwritten by the later set-message(1) action. Whether this indicates a problem with our automaton not being nondeterministic enough or with our trace property being too weak is a question about what we really want the automaton to do.

2.2. Types of trace properties

2.2.1. Safety properties

P is a safety property if

  1. P is nonempty.
  2. P is prefix-closed, i.e. if xy is in P then x is in P.
  3. P is limit-closed, i.e. if x1, x1x2, x1x2x3, ... are all in P, then so is the infinite sequence obtained by taking their limit.

Because of limit-closure, it's enough to prove that P holds for all finite traces of A to show that it holds for all traces (and thus for all fair traces), since any trace is a limit of its finite prefixes. Conversely, if there is some trace or fair trace for which P fails, limit-closure implies that P already fails on some finite prefix of that trace, so again looking at only finite prefixes is enough. The spambot property mentioned above is a safety property.

Safety properties are typically proved using invariants, properties that are shown by induction to hold in all reachable states.

2.2.2. Liveness properties

P is a liveness property of A if any finite sequence of actions in acts(A) has an extension in P. Note that liveness properties will in general include many sequences of actions that aren't traces of A, since they are extensions of finite sequences that A can't do (e.g. starting the execution with an action not enabled in the initial state). If you want to restrict yourself only to proper executions of A, use a safety property. (It's worth noting that the same property P can't do both: any P that is both a liveness and a safety property includes all sequences of actions because of the closure rules.)

Liveness properties are those that can always eventually be satisfied; asserting one says that the property is in fact eventually satisfied. The typical way to prove a liveness property is with a progress function, a function f on states that (a) never increases, (b) strictly decreases every time some action that is guaranteed to happen infinitely often occurs (like an action from an always-enabled task class), and (c) guarantees P once it reaches 0.

An example would be the following property we might demand of our spambot: any trace with at least one set-message(...) action contains infinitely many spam(...) actions. (As a set of traces, this is the union of (a) all sequences of actions that don't contain a set-message (equivalently, all sequences consisting only of spam actions, whether finite or infinite, including the empty sequence) and (b) all sequences of actions that contain infinitely many spam actions.) Whether the spambot automaton satisfies this property (in fair traces) depends on its task partition. If all spam(...) actions are in the same equivalence class, then any execution with at least one set-message will have some spam(...) action enabled at all times thereafter; so a fair trace containing a set-message can't be finite (since some spam action is enabled in the last state), and if infinite it contains infinitely many spam actions (since spam actions of some sort are enabled in all but an initial finite prefix). On the other hand, if spam(m1) and spam(m2) are not equivalent in tasks(A), then the spambot doesn't satisfy the liveness property: in an execution that alternates set-message(m1) set-message(m2) set-message(m1) set-message(m2) ... there are infinitely many states in which spam(m1) is not enabled, so fairness doesn't require doing it even once, and similarly for spam(m2).

2.2.3. Other properties

Any other property P can be expressed as the intersection of a safety property (= the closure of P) and a liveness property (= P union all finite sequences that aren't prefixes of traces in P). The intuition is that the safety property prunes out the excess junk we threw into the liveness property to make it a liveness property, since any sequence that isn't a prefix of a trace in P won't be in the safety property. This leaves only the traces in P.

Example

Let P = { 0^n 1^∞ } be the set of traces where we eventually give up on our pointless 0-action and start doing only 1-actions forever. Then P is the intersection of the safety property S = { 0^n 1^m } union { 0^∞ } union P (the extra junk comes from prefix- and limit-closure) and the liveness property L = { 0^n 1 1^m 0 x | x in {0,1}* } union P. S says that once we do a 1 we never do a 0, but allows executions, finite or infinite, of the form 0^n or 0^∞ in which we never do a 1. L says that we eventually do a 1-action, but that we can't stop unless we later do at least one 0-action.

2.3. Compositional arguments

The product of trace properties P1, P2 ... is the trace property P where T is in P iff T|sig(Pi) is in Pi for each i. If {Ai} satisfy {Pi} individually, then their composition satisfies the product property. (For safety properties, often we prove something weaker about the Ai, which is that each Ai individually is not the first to violate P---i.e., it can't leave P by executing an internal or output action. In an execution where inputs by themselves can't violate P, P holds.)

Product properties let us prove trace properties by smashing together properties of the component automata, possibly with some restrictions on the signatures to get rid of unwanted actions. The product operation itself is in a sense a combination of a Cartesian product (pick traces ti and smash them together) filtered by a consistency rule (the smashed trace must be consistent); it acts much like intersection (and indeed can be made identical to intersection if we treat a trace property P with a given signature as a way of describing the set of all T such that T|sig(P) is in P).
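
As a sketch, membership in the product (for finite traces) is just a conjunction of restricted membership tests; here each property is represented as a (signature, predicate) pair, an encoding I am assuming rather than anything in the notes:

    def restrict_to_sig(T, sig):
        # T|sig(P): keep only the actions in the property's signature
        return [a for a in T if a in sig]

    def in_product(T, properties):
        # T is in the product iff each restriction T|sig(Pi) is in Pi
        return all(pred(restrict_to_sig(T, sig)) for sig, pred in properties)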

2.3.1. Example

Consider two spambots A1 and A2 where we identify the spam(m) operation of A1 with the set-message(m) operation of A2; we'll call this combined action spam1(m) to distinguish it from the output actions of A2. We'd like to argue that the composite automaton A1+A2 satisfies the safety property (call it Pm) that any occurrence of spam(m) is preceded by an occurrence of set-message(m), where the signature of Pm includes set-message(m) and spam(m) for some specific m but no other operations. (This is an example of where trace property signatures can be useful without being limited to actions of any specific component automaton.)

To do so, we'll prove a stronger property super-Pm, which is Pm modified to include the spam1(m) action in its signature. Observe that super-Pm is the product of the safety properties for A1 and A2 restricted to sig(super-Pm), since the latter (A2's property) says that any trace that includes spam(m) has a previous spam1(m) and the former (A1's property) says that any trace that includes spam1(m) has a previous set-message(m). Since these properties hold for the individual A1 and A2, their product (and thus its restriction super-Pm) holds for A1+A2, and so Pm (as a further restriction) holds for A1+A2 as well.

Now let's prove the liveness property for A1+A2, that at least one occurrence of set-message yields infinitely many spam actions. Here we let L1 = { at least one set-message => infinitely many spam1 actions } and L2 = { at least one spam1 action => infinitely many spam actions }. The product of these properties contains only sequences with (a) no set-message actions or (b) infinitely many spam actions, which is what we want. The product holds for A1+A2 provided L1 holds for A1 and L2 holds for A2 (in fair traces), which will be the case if we set tasks(A1) and tasks(A2) correctly.

2.4. Simulation arguments

Show that traces(A) are a subset of traces(B) (possibly after hiding some actions of A) by exhibiting a simulation relation f between states of A and states of B, written so that f(s) is the set of B-states related to the A-state s. Requirements on f are

  1. s in start(A) implies f(s) includes some element of start(B).
  2. (s,a,s') in trans(A) and s reachable implies that for any reachable u in f(s) there is a sequence of actions x that takes u to some v in f(s') with trace(x) = trace(a).

Using these we construct an execution of B matching (in trace) an execution of A by starting in a start state of B belonging to f(s0) (which exists by the first condition) and applying the second condition to each action in the A execution in turn (including the hidden ones!).
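
For tiny finite automata one can check both conditions by brute force. The sketch below reuses the IOAutomaton sketch from earlier, assumes A and B have the same external actions, represents f as a dict mapping each state of A to a set of states of B, and for simplicity checks the second condition at every state rather than only at reachable ones (which makes the test more conservative but still sufficient):

    from collections import deque

    def matching_run_exists(B, u, a, targets, external):
        # is there a sequence x of B-actions leading from u to some state in
        # targets with trace(x) = trace(a)?  Internal actions of B may be used
        # freely; if a is external it must occur exactly once, and no other
        # external action may occur
        need = [a] if a in external else []
        queue, seen = deque([(u, 0)]), {(u, 0)}
        while queue:
            s, done = queue.popleft()
            if s in targets and done == len(need):
                return True
            for (s1, act, s2) in B.trans:
                if s1 != s:
                    continue
                if act in external:
                    if done < len(need) and act == need[done]:
                        nxt = (s2, done + 1)
                    else:
                        continue
                else:
                    nxt = (s2, done)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    def check_simulation(A, B, f):
        external = B.inputs | B.outputs
        if not all(f[s] & B.start for s in A.start):            # condition 1
            return False
        for (s, a, s2) in A.trans:                              # condition 2
            if not all(matching_run_exists(B, u, a, f[s2], external) for u in f[s]):
                return False
        return True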

2.4.1. Example

A single spambot A can simulate the conjoined spambots A1+A2 (with the internal communication spam1 hidden, so that A and A1+A2 have the same external signature). Proof: Let f(s) = {(s,s)}. Then f(null) = {(null, null)} contains the start state of A1+A2. Now consider a transition (s,a,s') of A; the action a is either (a) set-message(m), giving s' = m; here we let x = set-message(m) spam1(m), which has trace(x) = trace(a) since spam1(m) is hidden, and which leads from (s,s) to (m,m) in f(s'); or (b) spam(m), which does not change s or f(s); the matching x is just spam(m), which also does not change the state of A1+A2 and has the same trace.

A different proof could take advantage of f being a relation by defining f(s) = { (s,s') | s' in states(A2) }. Now we don't care about the state of A2: we treat a set-message(m) action of A as the sequence set-message(m) in A1+A2 (which updates the first component of the state correctly) and treat a spam(m) action as the sequence spam1(m) spam(m) (which updates the second component---whose value we don't care about---and has the correct trace). In some cases an approach of this sort is necessary, because we don't know which simulated state we are heading for until we get an action from A.

Note that the converse doesn't work: A1+A2 doesn't simulate A, since there are traces of A1+A2 (e.g. set-message(0) set-message(1) spam(0), obtained by letting A2 absorb the first message before it is overwritten) that are not traces of A. See LynchBook §8.5.5 for a more complicated example of how one FIFO queue can simulate two FIFO queues and vice versa (a situation sometimes called bisimulation).

Since we are looking at traces rather than fair traces, this kind of simulation doesn't help much with liveness properties, but sometimes the simulation relation plus a liveness proof for B can be used to get a liveness proof for A (essentially we have to argue that A can't take infinitely many steps without triggering a B-action in an appropriate task class). Again see LynchBook §8.5.5.


CategoryDistributedComputingNotes

  1. Surprise! Infinite (but countable) compositions are permitted.

