
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.


Mathematical logic is the discipline that mathematicians invented in the late nineteenth and early twentieth centuries so they could stop talking nonsense. It's the most powerful tool we have for reasoning about things that we can't really comprehend, which makes it a perfect tool for ComputerScience.

1. The basic picture

Reality: herds of sheep, piles of rocks, collections of fingers.
Abstract model: the natural numbers 0, 1, 2, ...
Theory: ∀x ∃y y = x+1.

We want to model something we see in reality with something we can fit in our heads. Ideally we drop most of the features of the real thing that we don't care about and keep the parts that we do care about. But there is a second problem: if our model is very big (and the natural numbers are very very big), how do we know what we can say about them?

1.1. Axioms, models, and inference rules

One approach is to try to come up with a list of axioms that are true statements about the model and a list of inference rules that let us derive new true statements from the axioms. The axioms and inference rules together generate a theory that consists of all statements that can be constructed from the axioms by applying the inference rules. The rules of the game are that we can't claim that some statement is true unless it's a theorem: something we can derive as part of the theory.

Simple example: All fish are green (axiom). George Washington is a fish (axiom). From "all X are Y" and "Z is X", we can derive "Z is Y" (inference rule). Thus George Washington is green (theorem). Since we can't do anything else with our two axioms and one inference rule, these three statements together form our entire theory about George Washington, fish, greenness, etc.

Theories are attempts to describe models. A model is typically a collection of objects and relations between them. For a given theory, there may be many models that are consistent with it: for example, a model that includes both green fishy George Washington and MC 10,000-foot Abraham Lincoln is consistent with the theory above, because it doesn't say anything about Abraham Lincoln.

1.2. Consistency

A theory is consistent if it can't prove both P and not-P for any P. Consistency is incredibly important, since all the logics people actually use can prove anything starting from P and not-P.

1.3. What can go wrong

If we throw in too many axioms, you can get an inconsistency: "All fish are green; all sharks are not green; all sharks are fish; George Washington is a shark" gets us into trouble pretty fast.

If we don't throw in enough axioms, we underconstrain the model. For example, the Peano axioms for the natural numbers (see example below) say (among other things) that there is a number 0 and that any number x has a successor S(x) (think of S(x) as x+1). If we stop there, we might have a model that contains only 0, with S(0)=0. If we add in 0≠S(x) for any x, then we can get stuck at S(0)=1=S(1). If we add yet another axiom that says S(x)=S(y) if and only if x=y, then we get all the ordinary natural numbers 0, S(0)=1, S(1)=2, etc., but we could also get some extras: say 0', S(0') = 1', S(1') = 0'. Characterizing the "correct" natural numbers historically took a lot of work to get right, even though we all know what we mean when we talk about them. The situation is of course worse when we are dealing with objects that we don't really understand; here the most we can hope for is to try out some axioms and see if anything strange happens.

Better yet is to use some canned axioms somebody else has already debugged for us. In this respect the core of mathematics acts like a system library—it's a collection of useful structures and objects that are known to work, and (if we are lucky) may even do exactly what we expect.

1.4. The language of logic

The basis of mathematical logic is propositional logic, which was essentially invented by Aristotle. Here the model is a collection of statements that are either true or false. There is no ability to refer to actual things; though we might include the statement "George Washington is a fish", from the point of view of propositional logic that is an indivisible atomic chunk of truth or falsehood that says nothing in particular about George Washington or fish. If we treat it as an axiom we can prove the truth of more complicated statements like "George Washington is a fish or 2+2=5" (true since the first part is true), but we can't really deduce much else. Still, this is a starting point.

If we want to talk about things and their properties, we must upgrade to predicate logic. Predicate logic adds both constants (stand-ins for objects in the model like "George Washington") and predicates (stand-ins for properties like "is a fish"). It also lets us quantify over variables and make universal statements like "For all x, if x is a fish then x is green." As a bonus, we usually get functions ("f(x) = the number of books George Washington owns about x") and equality ("George Washington = 12" implies "George Washington + 5 = 17"). This is enough machinery to define and do pretty much all of modern mathematics.

We will discuss both of these logics in more detail below.

1.5. Standard axiom systems and models

Rather than define our own axiom systems and models from scratch, it helps to use ones that have already proven to be (a) consistent and (b) useful. Almost all mathematics fits in one of the following models: the natural numbers ℕ, the integers ℤ, the rationals ℚ, the reals ℝ, the complex numbers ℂ, or sets (see SetTheory).

In practice, the usual way to do things is to start with sets and then define everything else in terms of sets: e.g., 0 is the empty set, 1 is a particular set with 1 element, 2 a set with 2 elements, etc., and from here we work our way up to the fancier numbers. The idea is that if we trust our axioms for sets to be consistent, then the things we construct on top of them should also be consistent (although if we are not careful in our definitions they may not be exactly the things we think they are).

2. Propositional logic

Propositional logic is the simplest form of logic. Here the only statements that are considered are propositions, which contain no variables. Because propositions contain no variables, they are either always true or always false.

Examples of propositions:

2+2 = 4. (True.)
2+2 = 5. (False, but still a proposition.)

Examples of non-propositions:

x+2 = 4. (True or false depending on the value of x.)
x = x. (Always true, but it still contains a variable.)
x+1 > x. (Likewise always true, likewise not a proposition.)

As the last two examples show, it is not enough for a statement to be always true—whether a statement is a proposition or not is a structural property. But if a statement doesn't contain any variables (or other undefined terms), it is a proposition, and as a side-effect of being a proposition it's always true or always false.

2.1. Operations on propositions

Propositions by themselves are pretty boring. So boring, in fact, that logicians quickly stop talking about actual statements and instead haul out placeholder names for propositions like p, q, or r. But we can build slightly more interesting propositions by combining propositions together using various logical connectives, like:

Negation

The negation of p is written as ¬p, or sometimes -p or p with a line over it. It has the property that it is false when p is true, and true when p is false.

Or

The or of two propositions p and q is written as p ∨ q, (or maybe p \/ q in ASCII) and is true as long as at least one, or possibly both, of p and q is true.1 This is not always the same as what "or" means in English; in English, "or" often is used for exclusive or which is not true if both p and q are true. For example, if someone says "You will give me all your money or I will stab you with this table knife", you would be justifiably upset if you turn over all your money and still get stabbed. But a logician would not be at all surprised, because the standard "or" in propositional logic is an inclusive or that allows for both outcomes.

Exclusive or

If you want to exclude the possibility that both p and q are true, you can use exclusive or instead. This is written as p⊕q, and is true precisely when exactly one of p or q is true. Exclusive or is not used in classical logic much, but is important for many computing applications, since it corresponds to addition modulo 2 (see ModularArithmetic) and has nice reversibility properties (e.g. p⊕(p⊕q) always has the same truth-value as q).
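The reversibility claim is small enough to check exhaustively. Here is a quick Python sketch (mine, not part of the notes), using ^ on booleans as XOR:

    # Check that p ⊕ (p ⊕ q) always has the same truth-value as q,
    # by trying all four combinations of truth values.
    for p in (False, True):
        for q in (False, True):
            assert (p ^ (p ^ q)) == q  # XOR-ing with p twice cancels out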

And

The and of p and q is written as p ∧ q, (or p /\ q) and is true only when both p and q are true.2 This is pretty much the same as in English, where "I like to eat ice cream and I own a private Caribbean island" is not a true statement when made by most people even though most people like to eat ice cream. The only complication in translating English expressions into logical ands is that logicians can't tell the difference between "and" and "but": the statement "2+2=4 but 3+3=6" becomes simply "(2+2=4) ∧ (3+3=6)."

Implication

This is the most important connective for proofs. An implication represents an "if...then" claim. If p implies q, then we write p → q or p ⇒ q, depending on our typographic convention and the availability of arrow symbols in our favorite font. In English, p ⇒ q is usually rendered as "If p, then q", as in "If you step on your own head, it will hurt." The meaning of p ⇒ q is that q is true whenever p is true, and the proposition p ⇒ q is true provided (a) p is false (in which case all bets are off), or (b) q is true. In fact, the only way for p ⇒ q to be false is for p to be true but q to be false; because of this, p ⇒ q can be rewritten as ¬p ∨ q. So, for example, the statements "If 2+2=5, then I'm the Pope", "If I'm the Pope, then 2+2=4", and "If 2+2=4, then 3+3=6", are all true, provided the if...then is interpreted as implication. Normal English usage does not always match this pattern; instead, if...then in normal speech is often interpreted as the much stronger biconditional (see below).

Biconditional

Suppose that p ⇒ q and q ⇒ p, so that either both p and q are true or both p and q are false. In this case, we write p ↔ q or p ⇔ q, and say that p holds if and only if q holds. The truth of p ⇔ q is still just a function of the truth or falsehood of p and q; though there doesn't seem to be any connection between the two sides of the statement, "2+2=5 if and only if I am the Pope" is still true (provided it is not uttered by the Pope). The only way for p ⇔ q to be false is for one side to be true and one side to be false.

The result of applying any of these operations is called a compound proposition.

Here's what all of this looks like when typeset nicely. Note that in some cases there is more than one way to write a compound expression. Which you choose is a matter of personal preference, but you should try to be consistent.

\begin{tabular}{cc}
NOT $p$ & $\neg p, \overline{p}$ \\
$p$ OR $q$ & $p \vee q$ \\
$p$ XOR $q$ & $p \oplus q$ \\
$p$ AND $q$ & $p \wedge q$ \\
$p$ implies $q$ & $p \rightarrow q, p \Rightarrow q$ \\
$p$ iff $q$ & $p \leftrightarrow q, p \Leftrightarrow q$ \\
\end{tabular}

2.1.1. Precedence

When reading a logical expression, the order of precedence in the absence of parentheses is negation, and, or, implication, biconditional. So (¬p ∨ q ∧ r ⇒ s ⇔ t) is interpreted as ((((¬p) ∨ (q ∧ r)) ⇒ s) ⇔ t). Or and and are associative, so (p ∨ q ∨ r) is the same as ((p ∨ q) ∨ r) and as (p ∨ (q ∨ r)), and similarly (p ∧ q ∧ r) is the same as ((p ∧ q) ∧ r) and as (p ∧ (q ∧ r)).

There does not seem to be a standard convention for the precedence of XOR, since logicians don't use it much. There are plausible arguments for putting XOR in between AND and OR, but it's probably safest just to use parentheses.

Implication is not associative, although the convention is that it binds "to the right", so that a ⇒ b ⇒ c is read as a ⇒ (b ⇒ c); few people ever remember this, so it is usually safest to put in the parentheses. I personally have no idea what p ⇔ q ⇔ r means, so any expression like this should be written with parentheses as either (p ⇔ q) ⇔ r or p ⇔ (q ⇔ r).

2.2. Truth tables

To define logical operations formally, we give a truth table. This gives, for any combination of truth values (true or false, which as computer scientists we often write as 1 or 0) of the inputs, the truth value of the output. So truth tables are to logic what addition tables or multiplication tables are to arithmetic.

Here is a truth table for negation:

p  ¬p
0   1
1   0

And here is a truth table for the rest of the logical operators:

p  q  p∨q  p⊕q  p∧q  p⇒q  p⇔q
0  0   0    0    0    1    1
0  1   1    1    0    1    0
1  0   1    1    0    0    0
1  1   1    0    1    1    1

See also RosenBook §§1.1–1.2 or BiggsBook §§3.1–3.3.

We can think of each row of a truth table as a model for propositional logic, since the only things we can describe in propositional logic are whether particular propositions are true or not. Constructing a truth table corresponds to generating all possible models. This can be useful if we want to figure out when a particular proposition is true.
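Generating all models is mechanical enough that a few lines of code can do it. Here is a small Python sketch (my own, not from the notes) that prints the truth table of any compound proposition given as a function of its variables:

    from itertools import product

    def truth_table(f, names):
        """Print one row per model: the inputs, then the value of f."""
        print(*names, "out")
        for row in product((0, 1), repeat=len(names)):
            print(*row, int(f(*map(bool, row))))

    # Example: p ⇒ q, encoded as ¬p ∨ q.
    truth_table(lambda p, q: (not p) or q, ("p", "q"))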

2.3. Tautologies and logical equivalence

A compound proposition that is true no matter what the truth-values of the propositions it contains is called a tautology. For example, p ⇒ p, p ∨ ¬p, and ¬(p ∧ ¬p) are all tautologies, as can be verified by constructing truth tables. If a compound proposition is always false, it's a contradiction. The negation of a tautology is a contradiction and vice versa.
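Checking that something is a tautology is then just checking that every row of its truth table comes out true. A brute-force Python sketch (again mine, not from the notes):

    from itertools import product

    def is_tautology(f, nvars):
        """True if f holds in every one of the 2**nvars models."""
        return all(f(*vals) for vals in product((False, True), repeat=nvars))

    assert is_tautology(lambda p: (not p) or p, 1)       # p ⇒ p
    assert is_tautology(lambda p: p or (not p), 1)       # p ∨ ¬p
    assert is_tautology(lambda p: not (p and not p), 1)  # ¬(p ∧ ¬p)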

The most useful class of tautologies are logical equivalences. This is a tautology of the form X ⇔ Y, where X and Y are compound propositions. In this case, X and Y are said to be logically equivalent and we can substitute one for the other in more complex propositions. We write X ≡ Y if X and Y are logically equivalent.

The nice thing about logical equivalence is that it does the same thing for Boolean formulas that equality does for algebraic formulas: if we know (for example) that p∨¬p is equivalent to 1, and q∨1 is equivalent to 1, we can grind q∨p∨¬p ≡ q∨1 ≡ 1 without having to do anything particularly clever. (We will need cleverness later when we prove things where the consequence isn't logically equivalent to the premise.)

To prove a logical equivalence, one either constructs a truth table to show that X ⇔ Y is a tautology, or transforms X to Y using previously-known logical equivalences. Some examples:

p ∧ ¬p ≡ 0: Construct a truth table

p  ¬p  p∧¬p  0
0   1    0   0
1   0    0   0

and observe that the last two columns are always equal.

p ∨ p ≡ p:

p  p∨p
0   0
1   1

p ⇒ q ≡ ¬p ∨ q: Again construct a truth table

p  q  p⇒q  ¬p∨q
0  0   1     1
0  1   1     1
1  0   0     0
1  1   1     1

¬(p ∨ q) ≡ ¬p ∧ ¬q: (one of De Morgan's laws; the other is ¬(p ∧ q) ≡ ¬p ∨ ¬q).

p  q  p∨q  ¬(p∨q)  ¬p  ¬q  ¬p∧¬q
0  0   0      1     1   1    1
0  1   1      0     1   0    0
1  0   1      0     0   1    0
1  1   1      0     0   0    0

p∨(q∧r) ≡ (p∨q)∧(p∨r) (one of the distributive laws; the other is p∧(q∨r) ≡ (p∧q)∨(p∧r)).

p  q  r  q∧r  p∨(q∧r)  p∨q  p∨r  (p∨q)∧(p∨r)
0  0  0   0      0      0    0       0
0  0  1   0      0      0    1       0
0  1  0   0      0      1    0       0
0  1  1   1      1      1    1       1
1  0  0   0      1      1    1       1
1  0  1   0      1      1    1       1
1  1  0   0      1      1    1       1
1  1  1   1      1      1    1       1

(p ⇒ r) ∨ (q ⇒ r) ≡ (p ∧ q) ⇒ r. Now things are getting messy, so building a full truth table may take a while. But we can take a shortcut by using logical equivalences that we've already proved (plus associativity of ∨):

(p ⇒ r) ∨ (q ⇒ r)
≡ (¬p ∨ r) ∨ (¬q ∨ r)   [rewrite both implications]
≡ ¬p ∨ ¬q ∨ r ∨ r       [associativity and commutativity of ∨]
≡ ¬p ∨ ¬q ∨ r           [p ∨ p ≡ p]
≡ ¬(p ∧ q) ∨ r          [De Morgan]
≡ (p ∧ q) ⇒ r.          [rewrite the implication]

This last equivalence is a little surprising. It shows, for example, that if somebody says "It is either the case that if you study you will graduate from Yale with distinction, or that if you join the right secret society you will graduate from Yale with distinction", then this statement (assuming we treat the or as ∨) is logically equivalent to "If you study and join the right secret society, then you will graduate from Yale with distinction." It is easy to get tangled up in trying to parse the first of these two propositions; translating to logical notation and simplifying using logical equivalence is a good way to simplify it.

Over the years, logicians have given names to many logical equivalences. Only a few of these are actually useful to remember. These include de Morgan's laws: ¬(p ∨ q) ≡ (¬p ∧ ¬q) and ¬(p ∧ q) ≡ (¬p ∨ ¬q) (see De Morgan's law for more versions of these), commutativity of AND and OR, the equivalence of p ⇒ q and ¬p ∨ q, and the contraposition rule p ⇒ q ≡ ¬q ⇒ ¬p (see below for more about contraposition). Anything else you need you can probably derive from these or by writing out a truth table.

2.4. Inverses, converses, and contrapositives

The inverse of p ⇒ q is ¬p ⇒ ¬q. So the inverse of "If you take CPSC 202, you will surely die" is "If you do not take CPSC 202, you will not surely die." There is often no connection between the truth of an implication and the truth of its inverse: "If I am Barack Obama then I am a Democrat" does not have the same truth-value as "If I am not Barack Obama then I am not a Democrat", at least according to current polling numbers.

The converse of p ⇒ q is q ⇒ p. E.g. the converse of "If I am Barack Obama then I am a Democrat" is "If I am a Democrat then I am Barack Obama." The converse of a statement is always logically equivalent to the inverse. Often in proving a biconditional (e.g., "I am Barack Obama if and only if I am a Democrat"), one proceeds by proving first the implication in one direction and then either the inverse or the converse.

The contrapositive of p ⇒ q is ¬q ⇒ ¬p; it is logically equivalent to the original implication ("If I am not a Democrat then I am not Barack Obama"). A proof by contraposition proves p implies q by assuming ¬q and then proving ¬p; it is similar but not identical to an IndirectProof, which assumes ¬p and derives a contradiction.

2.5. Normal forms

A compound proposition is in conjunctive normal form (CNF for short) if it is obtained by ANDing together ORs of one or more variables or their negations (an OR of one variable is just the variable itself). So for example P, (P∨Q)∧R, (P∨Q)∧(Q∨R)∧(¬P), and (P∨Q)∧(P∨¬R)∧(¬P∨Q∨S∨T∨¬U) are in CNF, but (P∨Q)∧(P∨¬R)∧(¬P∧Q), (P∨Q)∧(P⇒R)∧(¬P∨Q), and (P∨(Q∧R))∧(P∨¬R)∧(¬P∨Q) are not. Using the equivalence P⇒Q ≡ ¬P∨Q, De Morgan's laws, and the distributive law, it is possible to rewrite any compound proposition in CNF.

CNF formulas are particularly useful because they support resolution (see Inference rules below). Using the tautology (P∨Q)∧(¬P∨R) ⇒ Q∨R, we can construct proofs from CNF formulas by looking for occurrences of some simple proposition and its negation and resolving them, which generates a new clause we can add to the list. For example, we can compute (P∨Q)∧(P∨¬R)∧(¬P∨Q)∧(¬Q∨R) ⊢ (P∨Q)∧(P∨¬R)∧(¬P∨Q)∧(¬Q∨R)∧Q ⊢ (P∨Q)∧(P∨¬R)∧(¬P∨Q)∧(¬Q∨R)∧Q∧R ⊢ (P∨Q)∧(P∨¬R)∧(¬P∨Q)∧(¬Q∨R)∧Q∧R∧P ⊢ P. This style of proof is called a resolution proof. Because of its simplicity it is particularly well-suited for mechanical theorem provers. Such proofs can also encode traditional proofs based on modus ponens: the inference P∧(P⇒Q) ⊢ Q can be rewritten as resolution by expanding ⇒ to get P∧(¬P∨Q) ⊢ Q.
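To see just how mechanical resolution is, here is a short Python sketch (mine, not from the notes) that saturates a CNF formula under resolution; a clause is a frozenset of literals, with '~' marking negation:

    from itertools import combinations

    def resolve(c1, c2):
        """All clauses obtained by resolving c1 against c2 on one literal."""
        out = []
        for lit in c1:
            neg = lit[1:] if lit.startswith('~') else '~' + lit
            if neg in c2:
                out.append((c1 - {lit}) | (c2 - {neg}))
        return out

    def saturate(clauses):
        """Keep resolving pairs of clauses until nothing new appears."""
        clauses = {frozenset(c) for c in clauses}
        while True:
            new = {r for a, b in combinations(clauses, 2) for r in resolve(a, b)}
            if new <= clauses:
                return clauses
            clauses |= new

    # The example from the text: (P∨Q)∧(P∨¬R)∧(¬P∨Q)∧(¬Q∨R) ⊢ P.
    cnf = [{'P', 'Q'}, {'P', '~R'}, {'~P', 'Q'}, {'~Q', 'R'}]
    print(frozenset({'P'}) in saturate(cnf))  # True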

Similarly, a compound proposition is in disjunctive normal form (DNF) if it consists of an OR of ANDs, e.g. (P∧Q)∨(P∧¬R)∨(¬P∧Q). Just as any compound proposition can be transformed into CNF, it can similarly be transformed into DNF.

Note that normal forms are not unique; for example, P∨Q and (P∧¬Q)∨(P∧Q)∨(¬P∧Q) are both in disjunctive normal form and are logically equivalent to each other. So while CNF can be handy as a way of reducing the hairiness of a formula (by eliminating nested parentheses or negation of non-variables, for example), it doesn't necessarily let us see immediately if two formulas are really the same.

3. Predicate logic

Using only propositional logic, we can express a simple version of a famous argument:

  1. Socrates is human.
  2. If Socrates is human, then Socrates is mortal.
  3. Therefore, Socrates is mortal.

This is an application of the inference rule called modus ponens, which says that from p and p ⇒ q you can deduce q. The first two statements are axioms (meaning we are given them as true without proof), and the last is the conclusion of the argument.

What if we encounter Socrates's infinitely more logical cousin Spocrates? We'd like to argue

  1. Spocrates is human.
  2. Therefore, Spocrates is mortal.

Unfortunately, the second step depends on knowing that humanity implies mortality for everybody, not just Socrates. If we are unlucky in our choice of axioms, we may not know this. What we would like is a general way to say that humanity implies mortality for everybody, but with just propositional logic, we can't write this fact down.

3.1. Variables and predicates

The solution is to extend our language to allow formulas that involve variables. So we might let x, y, z, etc. stand for any element of our universe of discourse or domain—essentially whatever things we happen to be talking about at the moment. We can now write statements like:

x is human.
x is the parent of y.
x+2 = x².

These are not propositions because they have variables in them. Instead, they are predicates: statements whose truth-value depends on what concrete object takes the place of the variable. Predicates are often abbreviated by single capital letters followed by a list of arguments, the variables that appear in the predicate, e.g.:

H(x) = "x is human."
P(x,y) = "x is the parent of y."
Q(x) = "x+2 = x²."

We can also fill in specific values for the variables, e.g. H(Spocrates) = "Spocrates is human." If we fill in specific values for all the variables, we have a proposition again, and can talk about that proposition being true (e.g. Q(2) and Q(-1) are true) or false (Q(0) is false).

3.2. Quantifiers

What we really want though is to be able to say when H or P or Q is true for many different values of their arguments. This means we have to be able to talk about the truth or falsehood of statements that include variables. To do this, we bind the variables using quantifiers, which state whether the claim we are making applies to all values of the variable (universal quantification), or whether it may only apply to some (existential quantification).

3.2.1. Universal quantifier

The universal quantifier ∀ (pronounced "for all") says that a statement must be true for all values of a variable within some universe of allowed values (which is often implicit). For example, "all humans are mortal" could be written ∀x: Human(x) ⇒ Mortal(x) and "if x is positive then x+1 is positive" could be written ∀x: x > 0 ⇒ x+1 > 0.

If you want to make the universe explicit, use set membership notation, e.g. ∀x∈ℤ: x > 0 ⇒ x+1 > 0. This is logically equivalent to writing ∀x: x∈ℤ ⇒ (x > 0 ⇒ x+1 > 0) or to writing ∀x: (x∈ℤ ∧ x > 0) ⇒ x+1 > 0, but the short form makes it more clear that the intent of x∈ℤ is to restrict the range of x.3

The statement ∀x: P(x) is equivalent to a very large AND; for example, ∀x∈ℕ: P(x) could be rewritten (if you had an infinite amount of paper) as P(0) ∧ P(1) ∧ P(2) ∧ P(3) ∧ ... . Normal first-order logic doesn't allow infinite expressions like this, but it may help in visualizing what ∀x: P(x) actually means. Another way of thinking about it is to imagine that x is supplied by some adversary and you are responsible for showing that P(x) is true; in this sense, the universal quantifier chooses the worst case value of x.

3.2.2. Existential quantifier

The existential quantifier ∃ (pronounced "there exists") says that a statement must be true for at least one value of the variable. So "some human is mortal" becomes ∃x: Human(x) ∧ Mortal(x). Note that we use AND rather than implication here; the statement ∃x: Human(x) ⇒ Mortal(x) makes the much weaker claim that "there is some thing x, such that if x is human, then x is mortal", which is true in any universe that contains an immortal purple penguin—since it isn't human, Human(penguin) ⇒ Mortal(penguin) is true.

As with ∀, ∃ can be limited to an explicit universe with set membership notation, e.g. ∃x∈ℤ: x = x². This is equivalent to writing ∃x: x∈ℤ ∧ x = x².

The formula ∃x: P(x) is equivalent to a very large OR, so that ∃x∈ℕ: P(x) could be rewritten as P(0) ∨ P(1) ∨ P(2) ∨ P(3) ∨ ... . Again, you can't generally write an expression like this if there are infinitely many terms, but it gets the idea across.
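Over a finite universe the analogy is exact, with Python's all and any playing the big AND and the big OR; a sketch (mine, not from the notes):

    U = range(10)              # a small finite universe
    P = lambda x: x % 2 == 0   # P(x) = "x is even"

    print(all(P(x) for x in U))  # ∀x∈U P(x): False, since 1 is odd
    print(any(P(x) for x in U))  # ∃x∈U P(x): True, since 0 is even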

3.2.3. Negation and quantifiers

¬(∀x P(x)) ≡ ∃x ¬P(x)
¬(∃x P(x)) ≡ ∀x ¬P(x)

These are essentially the quantifier version of De Morgan's laws: the first says that if you want to show that not all humans are mortal, it's equivalent to finding some human that is not mortal. The second says that to show that no human is mortal, you have to show that all humans are not mortal.
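The same finite-universe analogy makes these two laws easy to test mechanically (a Python sketch, mine):

    U = range(5)
    P = lambda x: x < 3

    # ¬∀x P(x) ≡ ∃x ¬P(x)
    assert (not all(P(x) for x in U)) == any(not P(x) for x in U)
    # ¬∃x P(x) ≡ ∀x ¬P(x)
    assert (not any(P(x) for x in U)) == all(not P(x) for x in U)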

3.2.4. Restricting the scope of a quantifier

Sometimes we want to limit the universe over which we quantify to some restricted set, e.g. all positive integers or all baseball teams. We can do this explicitly using implication, e.g.

∀x x > 0 ⇒ P(x)

or we can abbreviate by including the restriction in the quantifier expression itself:

∀x > 0 P(x).

Similarly

∃x x > 0 ∧ P(x)

can be written as

∃x > 0 P(x).

Note that constraints on ∃ get expressed using AND rather than implication.

The use of set membership notation to restrict a quantifier is a special case of this. Suppose we want to say that 79 is not a perfect square, by which we mean that there is no integer whose square is 79. If we are otherwise talking about real numbers (two of which happen to be square roots of 79), we can exclude the numbers we don't want by writing

¬∃x∈ℤ x⋅x = 79

which is interpreted as

¬∃x (x∈ℤ ∧ x⋅x = 79)

or, equivalently

∀x x∈ℤ ⇒ x⋅x ≠ 79.

(Here ℤ = { ..., -2, -1, 0, 1, 2, ... } is the standard set of integers. See SetTheory for more about ∈.)
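As a sanity check, this claim can be tested by brute force: x⋅x ≥ 81 > 79 whenever |x| ≥ 9, so only a small window of integers needs to be scanned. In Python (a one-line sketch):

    assert all(x * x != 79 for x in range(-9, 10))  # no integer squares to 79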

3.2.5. Nested quantifiers

It is possible to nest quantifiers, meaning that the statement bound by a quantifier itself contains quantifiers. For example, the statement "there is no largest prime number" could be written as

¬∃x (Prime(x) ∧ ∀y (y > x ⇒ ¬Prime(y)))

i.e., "there does not exist an x that is prime and any y greater than x is not prime." Or in a shorter (though not strictly equivalent) form:

∀x ∃y (y > x ∧ Prime(y))

"for any x there is a bigger y that is prime."

To read a statement like this, treat it as a game between the ∀ player and the ∃ player. Because the ∀ comes first in this statement, the for-all player gets to pick any x it likes. The ∃ player then picks a y to make the rest of the statement true. The statement as a whole is true if the ∃ player always wins the game. So if you are trying to make a statement true, you should think of the universal quantifier as the enemy (the adversary in algorithm analysis) who says "nya-nya: try to make this work, bucko!", while the existential quantifier is the friend who supplies the one working response.

Naturally, such games can go on for more than two steps, or allow the same player more than one move in a row. For example

∀x ∀y ∃z x² + y² = z²

is a kind of two-person challenge version of the Pythagorean theorem where the universal player gets to pick x and y and the existential player has to respond with a winning z. (Whether the statement itself is true or false depends on the range of the quantifiers; it's false, for example, if x, y, and z are all natural numbers or rationals but true if they are all real or complex. Note that the universal player only needs to find one bad (x,y) pair to make it false.)

One thing to note about nested quantifiers is that we can switch the order of two universal quantifiers or two existential quantifiers, but we can't swap a universal quantifier for an existential quantifier or vice versa. So for example ∀x∀y (x = y ⇒ x+1 = y+1) is logically equivalent to ∀y∀x (x = y ⇒ y+1 = x+1), but ∀x∃y y < x is not logically equivalent to ∃y∀x y < x. This is obvious if you think about it in terms of playing games: if I get to choose two things in a row, it doesn't really matter which order I choose them in, but if I choose something and then you respond, it might make a big difference if we make you go first instead.
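The difference is easy to see concretely over a finite universe. In the Python sketch below (my own example), ∀x∃y y ≠ x is true but ∃y∀x y ≠ x is false:

    U = range(3)
    # ∀x ∃y y ≠ x: every element has some other element different from it.
    print(all(any(y != x for y in U) for x in U))  # True
    # ∃y ∀x y ≠ x: some single y differs from everything, itself included.
    print(any(all(y != x for x in U) for y in U))  # False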

One measure of the complexity of a mathematical statement is how many layers of quantifiers it has, where a layer is a sequence of all-universal or all-existential quantifiers. Here's a standard mathematical definition that involves three layers of quantifiers, which is about the limit for most humans:

\begin{eqnarray*}
& &\lim_{x \rightarrow \infty} f(x) = y  \\
&\equiv& \forall \epsilon > 0: \exists N: \forall x > N: |f(x)-y| < \epsilon.
\end{eqnarray*}

Now that we know how to read nested quantifiers, it's easy to see what the right-hand side means:

  1. The adversary picks ε, which must be greater than 0.
  2. We pick N.
  3. The adversary picks x, which must be greater than N.
  4. We win if f(x) is within ε of y.

So, for example, a proof of

\[
\lim_{x \rightarrow \infty} 1/x = 0
\]

would follow exactly this game plan:

  1. Choose some ε > 0.

  2. Let N > 1/ε. (Note that we can make our choice depend on previous choices.)

  3. Choose any x > N.

  4. Then x > N > 1/ε > 0, so 1/x < 1/N < ε ⇒ |1/x - 0| < ε. QED!
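The winning strategy can also be spot-checked numerically; the following Python sketch (not a proof, just the game played for a few values of ε) never trips the assert:

    for eps in (0.5, 0.1, 0.001):    # adversary's choices of ε > 0
        N = 1 / eps + 1              # our response, with N > 1/ε
        x = N + 1                    # adversary picks any x > N
        assert abs(1 / x - 0) < eps  # we win: |1/x - 0| < ε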

3.2.6. Examples

Here are some more examples of translating English into statements in predicate logic:

(Each example below gives the English statement, a translation into predicate logic, and, where useful, other variations.)

All crows are black.

∀x Crow(x) ⇒ Black(x)

Equivalent to ¬∃x Crow(x) ∧ ¬Black(x) or ∀x ¬Black(x) ⇒ ¬Crow(x). The latter is the core of a classic "paradox of induction" in philosophy: if seeing a black crow makes me think it's more likely that all crows are black, shouldn't seeing a logically equivalent non-black non-crow (e.g. a banana-yellow AMC Gremlin) also make me think all non-black objects are non-crows, i.e., that all crows are black? (I think this mostly illustrates that logical equivalence only works for true/false and not for probabilities.)

Some cows are brown.

∃x Cow(x) ∧ Brown(x)

No cows are blue.

¬∃x Cow(x) ∧ Blue(x)

≡ ∀x ¬(Cow(x) ∧ Blue(x)) ≡ ∀x (¬Cow(x) ∨ ¬Blue(x)) ≡ ∀x Cow(x) ⇒ ¬Blue(x) ≡ ∀x Blue(x) ⇒ ¬Cow(x). (But see Paul Bunyan.)

All that glitters is not gold.

¬∀x Glitters(x) ⇒ Gold(x)

≡ ∃x Glitters(x) ∧ ¬Gold(x). Note that the English syntax is a bit ambiguous: a literal translation might look like ∀x Glitters(x) ⇒ ¬Gold(x), which is not logically equivalent. This is why predicate logic is often safer than natural language.

No shirt, no service.

∀x ¬Shirt(x) ⇒ ¬Served(x)

Every event has a cause.

∀x∃y Causes(y,x)

Every even number greater than 2 can be expressed as the sum of two primes.

∀x (Even(x) ∧ x > 2) ⇒ (∃p ∃q Prime(p) ∧ Prime(q) ∧ x = p+q)

Note: the truth value of this statement is currently unknown. See Goldbach's conjecture.

3.3. Functions

A function symbol looks like a predicate but instead of computing a truth value it returns a new object. So for example the successor function S in the Peano axioms for the natural numbers returns x+1 when applied as S(x). Sometimes when there is only a single argument we omit the parentheses, e.g. Sx = S(x), SSSx = S(S(S(x))).

3.4. Equality

Often we include a special equality predicate =, written x=y. The interpretation of x=y is that x and y are the same element of the domain. It satisfies the reflexivity axiom ∀x x=x and the substitution axiom schema

∀x∀y (x=y ⇒ (P(x) ⇔ P(y)))

where P is any predicate; this immediately gives a substitution rule that says x = y, P(x) ⊢ P(y). It's likely that almost every proof you ever wrote down in high school algebra consisted only of many applications of the substitution rule.

Example

We'll prove ∀x∀y (x=y ⇒ y=x) from the above axioms (this property is known as symmetry). Apply substitution to the predicate Px ≡ y=x to get ∀x∀y (x=y ⇒ (y=x ⇔ x=x)). Use reflexivity to rewrite this as ∀x∀y (x=y ⇒ (y=x ⇔ T)) or ∀x∀y (x=y ⇒ y=x) as claimed.

Exercise

Prove ∀x∀y∀z (x=y ∧ y=z ⇒ x=z). (This property is known as transitivity.)

3.4.1. Uniqueness

An occasionally useful abbreviation is ∃!x P(x), which stands for "there exists a unique x such that P(x)." This is short for

∃x (P(x) ∧ ∀y (P(y) ⇒ x = y)).

An example is ∃!x x+1 = 12. To prove this we'd have to show not only that there is some x for which x+1 = 12 (11 comes to mind), but that if we have any two values x and y such that x+1 = 12 and y+1 = 12, then x=y (this is not hard to do). So the exclamation point encodes quite a bit of extra work, which is why we usually hope that ∃x x+1 = 12 is good enough.
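Over a finite universe, ∃! just says that the number of witnesses is exactly one, which is easy to check directly; a Python sketch with a bounded stand-in for ℕ:

    U = range(100)                  # bounded stand-in for the naturals
    witnesses = [x for x in U if x + 1 == 12]
    assert witnesses == [11]        # exactly one witness, namely x = 11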

3.5. Models

In propositional logic, we can build truth tables that describe all possible settings of the truth-values of the literals. In predicate logic, the analogous concept to an assignment of truth-values is a structure. A structure consists of a set of objects or elements (see SetTheory), together with a description of which elements fill in for the constant symbols, which predicates hold for which elements, and what the value of each function symbol is when applied to each possible list of arguments (note that this depends on knowing what constant, predicate, and function symbols are available—this information is called the signature of the structure). A structure is a model of a particular theory (set of statements), if each statement in the theory is true in the model.

In general we can't hope to find all possible models of a given theory. But models are useful for two purposes: if we can find some model of a particular theory, then the existence of this model demonstrates that the theory is consistent; and if we can find a model of the theory in which some additional statement S doesn't hold, then we can demonstrate that there is no way to prove S from the theory (i.e. it is not the case that T ⊢ S, where T is the list of axioms that define the theory).

3.5.1. Examples

4. Proofs

A proof is a way to derive statements from other statements. It starts with axioms (statements that are assumed in the current context always to be true), theorems or lemmas (statements that were proved already), and premises (P), and uses inference rules to derive Q. The axioms, theorems, and premises are in a sense the starting position of a game whose rules are given by the inference rules. The goal of the game is to apply the inference rules until Q pops out. We refer to anything that isn't proved in the proof itself (i.e., an axiom, theorem, lemma, or premise) as a hypothesis; the result Q is the conclusion.

When a proof exists of Q from some premises P1, P2, ..., we say that Q is deducible or provable from P1, P2, ..., which is written as

P1, P2, ... ⊢ Q.

Note that axioms are usually not included in the list on the left-hand side. So we can write just

⊢ Q

if we can prove Q directly from our axioms without any additional premises.

The turnstile symbol ⊢ has the specific meaning that we can derive the conclusion Q by applying inference rules to the premises. This is not quite the same thing as saying P ⇒ Q. If our inference rules are particularly weak, it may be that P ⇒ Q is true but we can't prove Q starting with P. Conversely, if our inference rules are too strong (maybe they can prove anything, even things that aren't true) we might have P ⊢ Q but P ⇒ Q is false. But most of the time we will use inference rules that are just right, meaning that P ⊢ Q implies P ⇒ Q (soundness) and P ⇒ Q implies P ⊢ Q (completeness). The distinction between ⊢ and ⇒ is then whether we want to talk about the existence of a proof (the first case) or about the logical relation between two statements (the second). As a consequence, you probably won't see ⊢ again after you stop reading this page.

4.1. Inference Rules

The main source of inference rules is tautologies of the form P ⇒ Q; given such a tautology, there is a corresponding inference rule that allows us to assert Q once we have P (either because it's an axiom/theorem/premise or because we proved it already while doing the proof). The most important inference rule is modus ponens, based on the tautology (p ∧ (p ⇒ q)) ⇒ q; this lets us, for example, write the following famous argument:4

  1. If it doesn't fit, you must acquit. [Axiom]
  2. It doesn't fit. [Premise]
  3. You must acquit. [Modus ponens applied to 1+2]

There are many named inference rules in classical propositional logic. We'll list some of them below. You don't need to remember the names of anything except modus ponens, and most of the rules are pretty much straightforward applications of modus ponens plus some convenient tautology that can be proved by truth tables or stock logical equivalences. (For example, the "addition" rule below is just the result of applying modus ponens to p and the tautology p ⇒ (p ∨ q).)

Inference rules are often written by putting the premises above a horizontal line and the conclusion below. In text, the horizontal line is often replaced by the symbol ⊢, which means exactly the same thing. Premises are listed on the left-hand side separated by commas, and the conclusion is placed on the right. We can then write

Addition
p ⊢ p ∨ q.
Simplification
p ∧ q ⊢ p.
Conjunction
p, q ⊢ p ∧ q.
Modus ponens
p, p ⇒ q ⊢ q.
Modus tollens
¬q, p ⇒ q ⊢ ¬p.
Hypothetical syllogism
p ⇒ q, q ⇒ r ⊢ p ⇒ r.
Disjunctive syllogism
p ∨ q, ¬p ⊢ q.
Resolution
p ∨ q, ¬p ∨ r ⊢ q ∨ r.

Of these rules, Addition, Simplification, and Conjunction are mostly used to pack and unpack pieces of arguments. Modus ponens (and its reversed cousin modus tollens) let us apply implications. You only need to remember modus ponens if you can remember the contraposition rule (p ⇒ q) ≡ (¬q ⇒ ¬p). Hypothetical syllogism just says that implication is transitive; it lets you paste together implications if the conclusion of one matches the premise of the other. Disjunctive syllogism is again a disguised version of modus ponens (via the logical equivalence (p ∨ q) ≡ (¬p ⇒ q)); you don't need to remember it if you can remember this equivalence. Resolution is almost never used by humans but is very popular with computer theorem provers.

An argument is valid if the conclusion is true whenever the hypotheses are true. Any proof constructed using the inference rules is valid. It does not necessarily follow that the conclusion is true; it could be that one or more of the hypotheses is false:

  1. If you give a mouse a cookie, he's going to ask for a glass of milk. [Axiom]
  2. If he asks for a glass of milk, he will want a straw. [Axiom]
  3. You gave a mouse a cookie. [Premise]
  4. He asks for a glass of milk. [Modus ponens applied to 1 and 3.]
  5. He will want a straw. [Modus ponens applied to 2 and 4.]

Will the mouse want a straw? No: Mice can't ask for glasses of milk, so axiom 1 is false.

4.2. More on proofs vs implication

Recall that P ⊢ Q means there is a proof of Q by applying inference rules to the axioms and P, while P ⇒ Q says that Q holds whenever P does. These are not the same thing: provability (⊢) is outside the theory (it's a statement about whether a proof exists or not) while implication (⇒) is inside (it's a logical connective for making compound propositions). But most of the time they mean almost the same thing.

For example, suppose that we can prove P ⇒ Q directly from whatever axioms we are currently assuming, i.e., that

⊢ P ⇒ Q.

Since we can always ignore extra premises, we get

P ⊢ P ⇒ Q

and thus

P ⊢ P ∧ (P ⇒ Q)

(since P ⊢ P is trivial), and finally

P ⊢ Q

by applying modus ponens to the right-hand side.

So we can go from

⊢ P ⇒ Q

to

P ⊢ Q.

This means that provability is in a sense weaker than implication: it holds (assuming modus ponens) whenever implication does. But we usually don't use this fact much, since P ⇒ Q is a much more useful statement than P ⊢ Q. Can we go the other way?

4.2.1. The Deduction Theorem

Yes, using the Deduction Theorem.

Often we want to package the result of a proof as a Theorem (a proven statement that is an end in itself) or Lemma (a proven statement that is intended mostly to be used in other proofs). Typically a proof shows that if certain premises P1, P2, ... Pn hold, then some conclusion Q holds (with various axioms or previously-established theorems assumed to be true from context). To use this result later, it is useful to be able to package it as an implication P1 ∧ P2 ∧ ... Pn ⇒ Q. In other words, we want to go from

P1, P2, ..., Pn ⊢ Q

to

⊢ (P1 ∧ P2 ∧ ... ∧ Pn) ⇒ Q.

The statement that we can do this, for a given set of axioms and inference rules, is the

Deduction Theorem

If there is a proof of Q from premises P1, P2, ..., Pn, then there is a proof of P1 ∧ P2 ∧ ... ∧ Pn ⇒ Q.

The actual proof of the theorem depends on the particular set of axioms and inference rules we start with, but the basic idea is that there exists a mechanical procedure for extracting a proof of the implication from the proof of Q assuming P1 etc.

Caveat: In predicate logic, the deduction theorem only applies if none of the premises contain any free variables (i.e. a variable that isn't bound by a containing quantifier). Usually you won't run into this, but see e.g. Deduction theorem for some bad cases that arise without this restriction.

4.3. Inference rules for equality

The equality predicate is special, in that it allows for the substitution rule

x = y, P(x) ⊢ P(y).

This can also be represented as an axiom schema: ∀x ∀y ((x = y ∧ P(x)) ⇒ P(y)).

4.4. Inference rules for quantified statements

Universal generalization

If P ⊢ Q(x) and x does not appear in P, then P ⊢ ∀x Q(x). Typical use: We want to show that there is no biggest natural number, i.e. that ∀x∈ℕ ∃y∈ℕ y > x. Let x be any element of ℕ. Let y = x+1. Then y > x. (Note: there is also an instance of existential generalization here.)

Universal instantiation
∀x Q(x) ⊢ Q(c). Typical use: Given that all humans are mortal, it follows that Spocrates is mortal.
Existential generalization
Q(c) ⊢ ∃x Q(x). Typical use: We are asked to prove that there exists an even prime number. Consider 2: it's an even prime number. QED.
Existential instantiation
∃x Q(x) ⊢ Q(c) for some particular c. (The real rule is more complicated and says ((∀x (Q(x) ⇒ P)) ∧ ∃y Q(y)) ⇒ P; but the intent in both cases is that once you have proven that at least one c satisfying Q exists, you can give it a name and treat it as an object you actually have in your hands.) Typical use: Suppose we are told that there exists a prime number greater than 7. Let p be some such prime number greater than 7.

5. Proof techniques

A proof technique is a template for how to go about proving particular classes of statements: this template guides you in the choice of inference rules (or other proof techniques) to write the actual proof. This doesn't substitute entirely for creativity (there is no efficient mechanical procedure for generating even short proofs unless P=NP), but it can give you some hints for how to get started.

In the table below, we are trying to prove A ⇒ B for particular statements A and B. The techniques are mostly classified by the structure of B. Before applying each technique, it may help to expand any definitions that appear in A or B.5

If you want to prove A ⇔ B, the usual approach is to prove A ⇒ B and A ⇐ B separately; proving A ⇒ B and ¬A ⇒ ¬B also works (because of contraposition).

Below, for each strategy: when to use it, what to assume, what to conclude, and what to do (and why it works).

Direct proof
When to use it: try it first. Assume A and conclude B. Apply inference rules to work forward from A and backward from B; when you meet in the middle, pretend that you were working forward from A all along.

Contraposition
When to use it: B = ¬Q. Assume ¬B and conclude ¬A. Apply any other technique to show ¬B ⇒ ¬A and then apply the contraposition rule. Sometimes called an indirect proof, although the term indirect proof is often used instead for proofs by contradiction (see below).

Contradiction
When to use it: B = ¬Q, or when you are stuck trying the other techniques. Assume A ∧ ¬B and conclude False. Apply previous methods to prove both P and ¬P for some P. Note: this can be a little dangerous, because you are assuming something that is (probably) not true, and it can be hard to detect as you prove further false statements whether the reason they are false is that they follow from your false assumption, or because you made a mistake. Direct or contraposition proofs are preferred because they don't have this problem.

Construction
When to use it: B = ∃x P(x). Assume A and conclude P(c) for some specific object c. Pick a likely-looking c and prove that P(c) holds.

Counterexample
When to use it: B = ¬∀x P(x). Assume A and conclude ¬P(c) for some specific object c. Pick a likely-looking c and show that ¬P(c) holds. This is identical to a proof by construction, except that we are proving ∃x ¬P(x), which is equivalent to ¬∀x P(x).

Choose
When to use it: B = ∀x (P(x) ⇒ Q(x)). Assume A and P(c), where c is chosen arbitrarily, and conclude Q(c). Choose some c and assume A and P(c). Prove Q(c). Note: c is a placeholder here. If P(c) is "c is even" you can write "Let c be even" but you can't write "Let c = 12", since in the latter case you are assuming extra facts about c.

Instantiation
When to use it: A = ∀x P(x). Assume A and conclude B. Pick some particular c and prove that P(c) ⇒ B. Here you can get away with saying "Let c = 12" (if c = 12 makes B true).

Proof by elimination
When to use it: B = C ∨ D. Assume A ∧ ¬C and conclude D. The reason this works is that A ∧ ¬C ⇒ D is equivalent to ¬(A ∧ ¬C) ∨ D ≡ ¬A ∨ C ∨ D ≡ A ⇒ (C ∨ D). Of course, it works equally well if you start with A ∧ ¬D and prove C.

Case analysis
When to use it: A = C ∨ D. Assume C and D separately, and conclude B in each case. Here you write two separate proofs: one that assumes C and proves B, and one that assumes D and proves B. A special case is when D = ¬C. You can also consider more cases, as long as A implies at least one of the cases holds.

Induction
When to use it: B = ∀x∈ℕ P(x). Assume A and conclude P(0) and ∀x∈ℕ (P(x) ⇒ P(x+1)). If P(0) holds, and P(x) implies P(x+1) for all x, then for any specific natural number n we can consider constructing a sequence of proofs P(0) ⇒ P(1) ⇒ P(2) ⇒ ... ⇒ P(n). (This is actually a defining property of the natural numbers; see the example below.)

Some others that are mentioned in SolowBook: Direct Uniqueness, Indirect Uniqueness, various max/min arguments. We will also cover InductionProofs in more detail later.

6. Example: The natural numbers

Here we give an example of how we can encode simple mathematics using predicate logic, and then prove theorems about the resulting structure. Our goal is to represent the natural numbers: 0, 1, 2, etc.6 The method is to use the Peano axioms, a modern version of similar axioms developed by Giuseppe Peano.

6.1. The Peano axioms

Peano represents natural numbers using a special 0 constant and a function symbol S (for "successor"; think of it as +1). Repeatedly applying S to 0 generates increasingly large natural numbers: S0 = 1, SS0 = 2, SSS0 = 3, etc. (Note that 1, 2, 3, etc., are not part of the language, although we might use them sometimes as shorthand for long strings of S's.) For convenience, we don't bother writing parentheses for the function applications unless we need to do so to avoid ambiguity: read SSSS0 as S(S(S(S(0)))).

The usual interpretation of function symbols implies that 0 and its successors exist, but it doesn't guarantee that they aren't all equal to each other. The first Peano axiom prevents this:

∀x 0 ≠ Sx.

In English, 0 is not the successor of any number.

This still allows for any number of nasty little models in which 0 is nobody's successor, but we still stop before getting all of the naturals. For example, let SS0 = S0; then we only have two elements in our model (0 and S0), because once we get to S0, any further applications of S keep us where we are.

To avoid this, we need to prevent S from looping back round to some number we've already produced. It can't get to 0 because of the first axiom, and to prevent it from looping back to a later number, we take advantage of the fact that such a number already has a predecessor:

∀x∀y (Sx = Sy ⇒ x = y).

If we take the contrapositive in the middle, we get x ≠ y ⇒ Sx ≠ Sy. In other words, we can't have a single number z that is the successor of two different numbers x and y.

Now we get all of ℕ, but we may get some extra elements as well. There is nothing in the first two axioms that prevents us from having something like this:

0 → S0 → SS0 → SSS0 → ..., plus a disjoint chain Bogus → SBogus → SSBogus → ... that is not reachable from 0.

The hard part of coming up with the Peano axioms was to prevent the model from sneaking in extra bogus values (that still have successors and at most one predecessor each). This is (almost) done using the third Peano axiom, which in modern terms is actually an axiom schema: an axiom that holds when substituting any predicate for its placeholder predicate P:

(P(0) ∧ ∀x (P(x) ⇒ P(Sx))) ⇒ ∀x P(x).

This is known as the induction schema, and says that, for any predicate P, if we can prove that P holds for 0, and we can prove that P(x) implies P(x+1), then P holds for all x in ℕ. The intuition is that even though we haven't bothered to write out a proof of, say P(1337), we know that we can generate one by starting with P(0) and modus-pwning our way out to P(1337) using P(0) ⇒ P(1), then P(1) ⇒ P(2), then P(2) ⇒ P(3), etc. Since this works for any number (eventually), there can't be some number that we missed.

In particular, this lets us throw out the bogus numbers in the bad example above. Let Bogus(x) be true if x is bogus (i.e., it's equal to Bogus or one of the other values in its chain of successors). Let P(x) ≡ ¬Bogus(x). Then P(0) holds (0 is not bogus), and if P(x) holds (x is not bogus) then so does P(Sx). It follows from the induction axiom that ∀x P(x): there are no bogus numbers.7

6.2. A simple proof

Let's use the Peano axioms to prove something that we know to be true about the natural numbers we learned about in grade school but that might not be obvious from the axioms themselves. (This will give us some confidence that the axioms are not bogus.) We want to show that 0 is the only number that is not a successor:

Claim
∀x x ≠ 0 ⇒ ∃y x = Sy.

To find a proof of this, we start by looking at the structure of what we are trying to prove. It's a universal statement about elements of ℕ (implicitly, the ∀x is really ∀x∈ℕ, since our axioms exclude anything that isn't in ℕ), so our table of proof techniques suggests using an induction argument, which in this case means finding some predicate we can plug into the induction schema.

If we strip off the ∀x, we are left with

x ≠ 0 ⇒ ∃y x = Sy.

Here a direct proof is suggested: assume x ≠ 0 and try to prove ∃y x = Sy. But our axioms don't tell us much about numbers that aren't 0, so it's not clear what to do with the assumption. This turns out to be a dead end.

Recalling that A ⇒ B is the same thing as ¬A ∨ B, we can rewrite our goal as

x = 0 ∨ ∃y x = Sy.

This seems like a good candidate for P (our induction hypothesis), because we do know a few things about 0. Let's see what happens if we try plugging this into the induction schema:

  1. P(0) holds, because 0 = 0 makes the first half of the disjunction true.
  2. If P(x) holds, then so does P(Sx): the second half of the disjunction holds for Sx with y = x, since Sx = Sx. (We don't even need the induction hypothesis P(x) here.)

Since we showed P(0) and ∀x P(x) ⇒ P(Sx), the induction schema tells us ∀x P(x). This finishes the proof.

Having figured the proof out, we might go back and clean up any false starts to produce a compact version. A typical mathematician might write the preceding argument as:

Claim
∀x x ≠ 0 ⇒ ∃y x = Sy.
Proof
By induction on x. For x = 0, the premise fails. For Sx, let y = x. ∎

(A black rectangle—known as a tombstone—is often used to mark the end of a proof.)

A really lazy mathematician would write:

Claim
∀x x ≠ 0 ⇒ ∃y x = Sy.
Proof
Induction on x. ∎

Though laziness is generally a virtue, you probably shouldn't be quite this lazy when writing up homework assignments.

6.3. Defining addition

Because of our restricted language, we do not yet have the ability to state valuable facts like 1+1=2 (which we would have to write as S0 + S0 = SS0). Let's fix this, by adding a two-argument function symbol + which we will define using the axioms

x + 0 = x
x + Sy = S(x + y)

(We are omitting some ∀ quantifiers, since unbounded variables are implicitly universally quantified.)

This definition is essentially a recursive program for computing x+y using only successor, and there are some programming languages (e.g. Haskell) that will allow you to define addition using almost exactly this notation. If the definition works for all inputs to +, we say that + is well-defined. Not working would include giving different answers depending on which parts of the definitions we applied first, or giving no answer for some particular inputs. These bad outcomes correspond to writing a buggy program. Though we can in principle prove that this particular definition is well-defined (using induction on y), we won't bother. Instead, we will try to prove things about our new concept of addition that will, among other things, tell us that the definition gives the correct answers.
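Since the definition is essentially a recursive program, here is that program as a Python sketch (the names ZERO, S, and add are mine), with numerals built only from 0 and S, encoded as nested tuples so that no built-in arithmetic sneaks in:

    ZERO = ()

    def S(x):
        """Successor: wrap one more layer around x."""
        return (x,)

    def add(x, y):
        if y == ZERO:            # first case:  x + 0  = x
            return x
        return S(add(x, y[0]))   # second case: x + Sy = S(x + y)

    one = S(ZERO)
    assert add(one, one) == S(S(ZERO))  # S0 + S0 = SS0, i.e., 1+1 = 2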

We start with a lemma, which is Greek for a result that is not especially useful by itself but is handy for proving other results.8

Lemma
0+x = x.
Proof
By induction on x. When x = 0, we have 0+0=0, which is true from the first case of the definition. Now suppose 0+x = x and consider what happens with Sx. We want to show 0+Sx = Sx. Rewrite 0+Sx as S(0+x) [second case of the definition], and use the induction hypothesis to show S(0+x) = S(x). ∎

(We could do a lot of QED-ish jumping around in the end zone there, but it is more refined—and lazier—to leave off the end of the proof once it's clear we've satisfied all of our obligations.)

Here's another lemma, which looks equally useless:

Lemma
x+Sy = Sx+y.
Proof
By induction on y. If y = 0, then x+S0 = S(x+0) = Sx = Sx+0. Now suppose the result holds for y and show x+SSy = Sx+Sy. We have x+SSy = S(x+Sy) = S(Sx+y) [ind. hyp.] = Sx+Sy. ∎

Now we can prove a theorem: this is a result that we think of as useful in its own right. (In programming terms, it's something we export from a module instead of hiding inside it as an internal procedure.)

Theorem
x+y = y+x. (Commutativity of addition.)
Proof
By induction on x. If x = 0, then 0+y = y+0 (see previous lemma). Now suppose x+y = y+x, and we want to show Sx+y = y+Sx. But y+Sx = S(y+x) [axiom] = S(x+y) [induction hypothesis] = x+Sy [axiom] = Sx+y [lemma]. ∎

This sort of definition-lemma-lemma-theorem structure is typical of written mathematical proofs. Breaking things down into small pieces (just like breaking big subroutines into small subroutines) makes debugging easier, since you can check if some intermediate lemma is true or false without having to look through the entire argument at once.

Question: How do you know which lemmas to prove? Answer: As when writing code, you start by trying to prove the main theorem, and whenever you come across something you need and can't prove immediately, you fork it off as a lemma. Conclusion: The preceding notes were not originally written in order.

6.3.1. Other useful properties of addition

So far we have shown that x+y = y+x, also known as commutativity of addition. Another familiar property is associativity of addition: x+(y+z) = (x+y)+z. This is easily proven by induction (try it!).

We don't have subtraction in ℕ (what's 3-5?)9 The closest we can get is cancellation:

x+z = y+z ⇒ x = y.

We can define ≤ for ℕ directly from addition: Let x ≤ y ≡ ∃z x+z = y. Then we can easily prove each of the following (possibly using our previous results about addition having commutativity, associativity, and cancellation):

x ≤ x. (Reflexivity.)
x ≤ y ∧ y ≤ z ⇒ x ≤ z. (Transitivity.)
x ≤ y ∧ y ≤ x ⇒ x = y. (Antisymmetry.)
x ≤ y ⇒ x+z ≤ y+z. (Translation invariance.)

(The actual proofs will be left as an exercise for the reader.)

6.4. A scary induction proof involving even numbers

Let's define the predicate Even(x) ≡ ∃y x = y+y. (The use of ≡ here signals that Even(x) is syntactic sugar, and we should think of any occurrence of Even(x) as expanding to ∃y x = y+y.)

It's pretty easy to see that 0 = 0+0 is even. Can we show that S0 is not even?

Lemma
¬Even(S0).
Proof
Expand the claim as ¬∃y S0 = y+y ≡ ∀y S0 ≠ y+y. Since we are working over ℕ, it's tempting to try to prove the ∀y bit using induction. But it's not clear why S0 ≠ y+y would tell us anything about S0 ≠ Sy+Sy. So instead we do a case analysis, using our earlier observation that every number is either 0 or Sz for some z.
Case 1
y = 0. Then S0 ≠ 0+0 since 0+0 = 0 (by the definition of +) and 0 ≠ S0 (by the first axiom).
Case 2

y = Sz. Then y+y = Sz+Sz = S(Sz+z) = S(z+Sz) = SS(z+z).10 Suppose S0 = SS(z+z) [Note: "Suppose" usually means we are starting a proof by contradiction]. Then 0 = S(z+z) [second axiom], violating ∀x 0 ≠ Sx [first axiom]. So S0 ≠ SS(z+z) = y+y.

Since we have S0 ≠ y+y in either case, it follows that S0 is not even. ∎

Maybe we can generalize this lemma! If we recall the pattern of non-even numbers we may have learned long ago, each of them (1, 3, 5, 7, ...) happens to be the successor of some even number (0, 2, 4, 6, ...). So maybe it holds that:

Theorem
Even(x) ⇒ ¬Even(Sx).
Proof
Expanding the definitions gives (∃y x = y+y) ⇒ (¬∃z Sx = z+z). This is an implication at top-level, which calls for a direct proof. The assumption we make is ∃y x = y+y. Let's pick some particular y that makes this true (in fact, there is only one, but we don't need this). Then we can rewrite the right-hand side as ¬∃z S(y+y) = z+z. There doesn't seem to be any obvious way to show this (remember that we haven't invented subtraction or division yet, and we probably don't want to). We are rescued by showing the stronger statement ∀y ¬∃z S(y+y) = z+z: this is something we can prove by induction (on y, since that's the variable inside the non-disguised universal quantifier). Our previous lemma gives the base case ¬∃z S(0+0) = z+z, so we just need to show that ¬∃z S(y+y) = z+z implies ¬∃z S(Sy+Sy) = z+z. Suppose that S(Sy+Sy) = z+z for some z ["suppose" = proof by contradiction again: we are going to drive this assumption into a ditch]. Rewrite S(Sy+Sy) to get SSS(y+y) = z+z. Now consider two cases:
Case 1
z = 0. Then SSS(y+y) = 0+0 = 0, contradicting our first axiom.
Case 2
z = Sw. Then SSS(y+y) = Sw+Sw = SS(w+w). Applying the second axiom twice gives S(y+y) = w+w. But this contradicts the induction hypothesis.
Since both cases fail, our assumption must have been false. It follows that S(Sy+Sy) is not even, and the induction goes through. ∎

6.5. Defining more operations

Let's define multiplication (⋅) by the axioms:

x ⋅ 0 = 0
x ⋅ Sy = x + x⋅y

Some properties of multiplication:

x⋅y = y⋅x. (Commutativity.)
x⋅(y⋅z) = (x⋅y)⋅z. (Associativity.)
x⋅1 = x. (Identity.)
x⋅(y+z) = x⋅y + x⋅z. (Distributivity over addition.)
x⋅z = y⋅z ∧ z ≠ 0 ⇒ x = y. (Multiplicative cancellation.)

(Note we are using 1 as an abbreviation for S0.)

The first few of these are all proved pretty much the same way as for addition. Note that we can't divide in ℕ any more than we can subtract, which is why we have to be content with multiplicative cancellation.
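Extending the earlier addition sketch with the two defining cases of multiplication gives another tiny recursive program (again my own encoding, not part of the notes):

    ZERO = ()
    S = lambda x: (x,)

    def add(x, y):
        return x if y == ZERO else S(add(x, y[0]))

    def mul(x, y):
        if y == ZERO:                  # first case:  x ⋅ 0  = 0
            return ZERO
        return add(x, mul(x, y[0]))    # second case: x ⋅ Sy = x + x⋅y

    two = S(S(ZERO))
    assert mul(two, two) == S(S(S(S(ZERO))))  # SS0 ⋅ SS0 = SSSS0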

Exercise: Show that the Even(x) predicate, defined previously as ∃y x = y+y, is equivalent to Even'(x) ≡ ∃y x = 2⋅y, where 2 = SS0. Does this definition make it easier or harder to prove ¬Even'(S0)?



  1. The symbol is a stylized V, intended to represent the Latin word vel, meaning "or." (Thanks to Noel McDermott for pointing this out.) (1)

  2. This symbol is a stylized A, short for the Latin word atque, meaning "and." (2)

  3. Programmers will recognize this as a form of SyntacticSugar. (3)

  4. OK, maybe not as famous as it once was. (4)

  5. These strategies are largely drawn from SolowBook, particularly the summary table on pages 156-159 (which is the source of the order and organization of the table and the names of most of the techniques). RosenBook describes proof strategies of various sorts in §§1.5–1.7 and BiggsBook describes various proof techniques in Chapters 1, 3, and 4; both descriptions are a bit less systematic, but also include a variety of specific techniques that are worth looking at. (5)

  6. Some people define the natural numbers as starting at 1. Those people are generally either (a) wrong, (b) number theorists, (c) extremely conservative, or (d) citizens of the United Kingdom of Great Britain and Northern Ireland. As computer scientists, we will count from 0 as the gods intended. (6)

  7. There is a complication here. Peano's original axioms were formulated in terms of second-order logic, which allows quantification over all possible predicates (you can write things like ∀P P(x) ⇒ P(Sx)). So the bogus predicate we defined is implicitly included in that for-all. In first-order logic, which is what everybody uses now, you can't quantify over predicates, and in particular if there is no predicate that distinguishes bogus numbers from legitimate ones, the induction axiom won't kick them out. This means that the Peano axioms (in their modern form) actually do allow bogus numbers to sneak in somewhere around infinity. But they have to be very polite bogus numbers that never do anything different from ordinary numbers. This is probably not a problem except for philosophers. (Similar problems show up for any model with infinitely many elements, due to something called the Löwenheim-Skolem theorem.) (7)

  8. It really means fork. (8)

  9. This actually came up on a subtraction test I got in the first grade from the terrifying Mrs Garrison at Mountain Park Elementary School in Berkeley Heights, New Jersey. She insisted that -2 was not the correct answer, and that we should have recognized it as a trick question. She also made us black out the arrow the left of the zero on the number-line stickers we had all been given to put on the top of our desks. Mrs Garrison was, on the whole, a fine teacher, but she did not believe in New Math. (9)

  10. What justifies that middle step? (10)

