*For more up-to-date notes see http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf.*

# 1. Knowledge

Let's suppose we have a collection of predicates (e.g. p) for describing states of the system, and that we have a **modal operator** K_{i} such that K_{i} p means that process i "knows" that p is true. Exactly what it means that process i knows that p is true is a bit up for grabs, but a reasonable interpretation is that p is true in every execution consistent with what i saw directly. Alternatively, we could give a set of axioms for what i knows, like the traditional axiom system that modal logicians call S5:

- K1: Any propositional tautology is true.
- K2: (K_{i} p /\ K_{i}(p => q)) => K_{i} q. (Modus ponens applies to what I know.)
- K3: K_{i} p => p. (I know only true things.)
- K4: K_{i} p => K_{i} K_{i} p. (I know what I know.)
- K5: ¬K_{i} p => K_{i} ¬K_{i} p. (I know what I don't know.)

and inference rules:

- R1: From p and p => q, infer q.
- R2: From p, infer K_{i} p. (Note that this does not mean that process i knows everything that is true, just that if we can prove p from our initial position of ignorance, so can process i.)
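To make the K_{i} operator concrete, here is a minimal sketch of the usual Kripke-structure semantics: a world is a global state, each process directly sees only part of it, and K_{i} p holds at a world exactly when p holds at every world that process i cannot distinguish from it. The three-world, two-process model and the predicate are invented for illustration.

```python
# Toy Kripke model: a world is a pair (x, y); process 0 sees only x,
# process 1 sees only y. (All of this is illustrative, not from the notes.)
worlds = [(0, 0), (0, 1), (1, 1)]

def indist(i, w, v):
    """Process i cannot distinguish worlds that agree on its local component."""
    return w[i] == v[i]

def K(i, p):
    """K_i p holds at w iff p holds at every world i cannot rule out."""
    return lambda w: all(p(v) for v in worlds if indist(i, w, v))

p = lambda w: w != (1, 1)   # an arbitrary predicate on global states

print(K(0, p)((0, 1)))      # True: both worlds with x = 0 satisfy p
print(K(1, p)((0, 1)))      # False: (1, 1) looks the same to process 1
```

Note that axioms K3–K5 hold automatically in this semantics because indistinguishability is an equivalence relation.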

S5 was invented to talk about necessity rather than knowledge (ultimately in order to clarify ontological proofs of the existence of God!), but it works for knowledge too. Note that we can add a superscript to K_{i} to get K_{i}^{t}, what i knows at time t. In this case we probably also want to add an axiom K_{i}^{t} p => K_{i}^{t+a} p for a >= 0, so that i doesn't forget anything.

(Some philosophers like to drop some of these axioms or add others. See Fagin and Halpern, Reasoning about knowledge and probability, JACM 41(2):340-367, 1994 for an extensive discussion of appropriate axiom systems for distributed systems; this is also where the above description of S5 is taken from.)

If processes are **honest** and sufficiently talkative, then we can add an axiom that says that if i sends j a message at round t that is delivered in round t+1 (in a synchronous system), then K_{i}^{t} p => K_{j}^{t+1} p. This can generate complicated chains of knowledge, e.g. K_{1} K_{2} ¬K_{1} p: "Process 1 knows that process 2 knows that process 1 doesn't know p." Many algorithms can be analyzed by looking at how such chains of knowledge evolve over time.

# 2. Common knowledge

Let E p = K_{1} p /\ K_{2} p /\ ... /\ K_{n} p be the statement "everybody knows p." Define E^{k} p = p if k = 0 and E^{k} p = E E^{k-1} p for k > 0. Finally, define C p as the infinite conjunction of E^{k} p over all k, i.e.

C p = p /\ E p /\ E^{2} p /\ E^{3} p /\ ...
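Under a Kripke-style reading (worlds as global states, K_{i} p as "p holds at every world i cannot distinguish from this one"), C p is equivalent to p holding at every world reachable by a chain of indistinguishability steps through any sequence of processes, which makes it computable by a simple graph search. The toy three-world, two-process model below is made up for illustration:

```python
# Illustrative Kripke model: worlds are (x, y) pairs; process 0 sees x,
# process 1 sees y.
worlds = [(0, 0), (0, 1), (1, 1)]

def indist(i, w, v):
    return w[i] == v[i]

def K(i, p):
    return lambda w: all(p(v) for v in worlds if indist(i, w, v))

def E(p):
    """E p: every process knows p."""
    return lambda w: all(K(i, p)(w) for i in (0, 1))

def C(p):
    """C p holds at w iff p holds at every world reachable from w via any
    process's indistinguishability relation: the graph-search form of the
    infinite conjunction p /\ E p /\ E^2 p /\ ..."""
    def holds(w):
        seen, stack = {w}, [w]
        while stack:
            u = stack.pop()
            if not p(u):
                return False
            for v in worlds:
                if v not in seen and any(indist(i, u, v) for i in (0, 1)):
                    seen.add(v)
                    stack.append(v)
        return True
    return holds

p = lambda w: w != (1, 1)
print(E(p)((0, 0)))   # True: at (0, 0) both processes know p
print(C(p)((0, 0)))   # False: (1, 1) is reachable through (0, 1)
```

The example shows the gap between E p and C p: everybody can know p (and even E^{k} p for some finite k) while common knowledge still fails, because a long chain of indistinguishability leads to a world where p is false.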

If C p holds, then the processes are said to have **common knowledge** of p. The notion of common knowledge was developed by the philosopher David Lewis, but is known in computer science mostly because of the work of Halpern and Moses (PODC 1984) applying it to the Coordinated Attack problem.

# 3. Common knowledge and coordinated attack

Basic idea:

- Can't do coordinated attack without common knowledge that everybody will attack.
- Can't achieve common knowledge with message loss.
- But it's worse: if we don't have common clocks and there is a bounded but unknown message delay, we still can't achieve common knowledge at any finite time. (This has implications for clock synchronization.)
- Ways out: synchronized clocks, leverage existing common knowledge, accept small forms of disagreement (e.g. attacking within 1 message delay of each other.)
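The second bullet can be sketched directly. Model the runs of a coordinated-attack protocol as worlds "exactly m messages were delivered before one was lost," with the two generals alternating messages. Whoever sent message m+1 cannot tell world m from world m+1, so a chain of indistinguishability connects every world back to world 0, where no message arrived, and "attack" never becomes common knowledge however many messages get through. (The model and names here are illustrative.)

```python
# Worlds: m = number of messages delivered, generals alternating.
# General 0 sends messages 1, 3, 5, ... and receives the even-numbered ones.
K_MSGS = 10
worlds = list(range(K_MSGS + 1))

def view(i, m):
    """A general's local view: how many messages it has received.
    (What it sent is determined by what it received, so this suffices.)"""
    return m // 2 if i == 0 else (m + 1) // 2

def common_knowledge(p, w):
    """C p at w: p must hold at every world reachable by steps that some
    general cannot distinguish."""
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        if not p(u):
            return False
        for v in worlds:
            if v not in seen and any(view(i, u) == view(i, v) for i in (0, 1)):
                seen.add(v)
                stack.append(v)
    return True

attack = lambda m: m >= 1   # "at least one message got through, so attack"
print(common_knowledge(attack, K_MSGS))  # False: the chain reaches world 0
```

Even after 10 successful deliveries, the indistinguishability chain 10 ~ 9 ~ 8 ~ ... ~ 0 connects the run to the no-message world, which is why adding more acknowledgments never helps.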