1. Rolling dice
Recall that a standard six-sided die is labeled with the numbers from 1 through 6, each occurring with equal probability. When rolling two dice, their sum can range from 2 up to 12, and the probability that the sum equals k depends on k. We'd like to relabel these dice so that the distribution of their sum is uniform.
- Prove or disprove: It is possible to relabel two six-sided dice so that all integers x with 1 ≤ x ≤ 12 are equally likely to appear as the sum of the dice.
- Prove or disprove: It is possible to relabel two six-sided dice so that all integers x with 2 ≤ x ≤ 12 are equally likely to appear as the sum of the dice.
- Proof: Label the first die with 0,0,0,6,6,6 and the second with 1,2,3,4,5,6. Let X be the value of the first die and Y the value of the second die. Then Pr[X+Y = k] = (1/2)(1/6) = 1/12 for 1 ≤ k ≤ 6, since we have to roll 0 on the first die and then k on the second die. Similarly we get Pr[X+Y = k] = (1/2)(1/6) = 1/12 for 7 ≤ k ≤ 12, where here we roll 6 on the first die.
- Disproof: Consider the underlying probability space. There are 36 possible outcomes for the dice, each occurring with probability 1/36. Any event in this probability space (e.g., the event [X+Y = 2]) occurs with probability m/36 for some integer m. For a uniform distribution on X+Y, we need Pr[X+Y = 2] = 1/11 (since there are 11 possible sums, 2 through 12). But m/36 = 1/11 only if 11m = 36, which can't happen since 11 doesn't divide 36.
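The relabeling in the proof above is easy to check exhaustively; here is a quick sketch in Python (variable names are mine):

```python
from itertools import product
from collections import Counter

# The relabeled dice from the proof: first die 0,0,0,6,6,6; second die 1..6.
die_a = [0, 0, 0, 6, 6, 6]
die_b = [1, 2, 3, 4, 5, 6]

# Tally each sum over all 36 equally likely outcomes.
sums = Counter(a + b for a, b in product(die_a, die_b))

# Uniform on 1..12 means every sum appears in exactly 3 of the 36 outcomes.
assert dict(sums) == {k: 3 for k in range(1, 13)}
```

Each of the 12 sums occurs with probability 3/36 = 1/12, as claimed.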
2. Flipping coins
Alice flips m independent fair coins and Bob flips n independent fair coins. Give a closed-form expression for the probability that Alice gets the same number of heads as Bob.
There are two ways I can think of to do this. In each case we let X be the number of heads Alice gets and Y be the number of heads Bob gets. Note that both X and Y have binomial distributions.
Here is the brute-force approach:

Pr[X = Y] = ∑_k Pr[X = k]·Pr[Y = k] = ∑_k (m choose k)·2^(−m)·(n choose k)·2^(−n) = 2^(−(m+n)) ∑_k (m choose k)(n choose n−k) = 2^(−(m+n))·(m+n choose n).

Here the last step uses Vandermonde's identity.
The second way does essentially the same argument, but looks at the structure of the problem directly instead of just manipulating binomial coefficients. Let Z = n − Y be the number of tails Bob gets. Then X = Y if and only if X + Z = X + (n − Y) = n + (X − Y) = n. We can write X + Z as the sum of m + n independent uniform 0–1 random variables (each X_i = 1 if Alice gets heads on her i-th flip, each Z_i = 1 if Bob gets tails on his i-th flip), so Pr[X + Z = n] = (m+n choose n)·2^(−(m+n)). This pretty much just buries the proof of Vandermonde's identity in the proof of the result, but it may make it clearer what is going on.
By symmetry of binomial coefficients, (m+n choose m)·2^(−(m+n)) also works.
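Both forms can be sanity-checked against the direct summation; a short sketch (function names are mine):

```python
from math import comb

def pr_equal_direct(m, n):
    # Direct sum: Pr[X = Y] = sum_k C(m,k) * C(n,k) / 2^(m+n).
    return sum(comb(m, k) * comb(n, k) for k in range(min(m, n) + 1)) / 2 ** (m + n)

def pr_equal_closed(m, n):
    # Closed form via Vandermonde: C(m+n, n) / 2^(m+n).
    return comb(m + n, n) / 2 ** (m + n)

# The two expressions should agree for all small m, n.
for m in range(8):
    for n in range(8):
        assert abs(pr_equal_direct(m, n) - pr_equal_closed(m, n)) < 1e-12
```

For example, with m = n = 1 both give C(2,1)/4 = 1/2: either both flips match as heads or both as tails.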
3. Gamma coding
The Elias gamma code is a method for encoding positive integers as strings of bits so that no encoding is a prefix of any other encoding. The method is to first write the target integer in binary as a sequence of bits x_(k−1)x_(k−2)...x_0 (where the leading bit x_(k−1) = 1), then encode it as 0^(k−1)x_(k−1)x_(k−2)...x_0, prefixing the binary representation with k−1 zeros. For example, 6 = 110 is encoded as 00110 while 18 = 10010 is encoded as 000010010.
Because of the prefix-free property, any infinite sequence of bits can be uniquely decoded as an infinite sequence of positive integers. For example, the sequence 000110100011000100110011110101... decodes as:
0001101 0001100 010 011 00111 1 010 1 ...
     13      12   2   3     7 1   2 1 ...
Here we've chopped the original sequence into a sequence of codewords, each of which decodes to a positive integer.
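The encoding and the chopping-up process can be sketched in a few lines of Python (names are mine; the decoder assumes its input consists only of complete codewords):

```python
def gamma_encode(n):
    # Binary representation of n, prefixed with (number of bits - 1) zeros.
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def gamma_decode_stream(bits):
    # Repeatedly strip one codeword: count z leading zeros, then the next
    # z + 1 bits are the binary representation of the value.
    values, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i + z] == '0':
            z += 1
        values.append(int(bits[i + z:i + 2 * z + 1], 2))
        i += 2 * z + 1
    return values

assert gamma_encode(6) == '00110'
assert gamma_decode_stream('000110100011000100110011110101') == [13, 12, 2, 3, 7, 1, 2, 1]
```

Running the decoder on the example sequence above reproduces 13, 12, 2, 3, 7, 1, 2, 1.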
Suppose we generate an infinite sequence of bits, choosing 1 independently at each position with probability p and 0 with probability q = 1 − p.
- Let X be the length of the first codeword in the sequence. What is E[X]?
- Let Y be the value of the first codeword. Let A_n be the event that the sequence starts with exactly n zeros. What is E[Y | A_n]?
- What is E[Y]?
(Give a closed-form expression for each case.)
Let Z be the number of leading zeros; then Z is a geometric random variable with Pr[Z = k] = q^k·p, and X = 2Z + 1, since the codeword consists of Z zeros followed by Z + 1 bits of binary value. Since E[Z] = (1/p) − 1, we have E[X] = 2E[Z] + 1 = 2(1/p − 1) + 1 = 2/p − 1.
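A numeric sanity check of E[X] = 2/p − 1, summing the series ∑ (2k+1)·q^k·p truncated at K terms (a sketch; names are mine):

```python
def expected_length(p, K=10_000):
    # E[X] = sum over k of (2k + 1) * Pr[Z = k], with Pr[Z = k] = q^k * p.
    q = 1 - p
    return sum((2 * k + 1) * q ** k * p for k in range(K))

# Truncated series should match the closed form 2/p - 1.
for p in (0.3, 0.5, 0.9):
    assert abs(expected_length(p) - (2 / p - 1)) < 1e-9
```

For p = 1/2 this gives E[X] = 3, matching 2/(1/2) − 1.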
Here we can express Y as a sum of the values of its bits: Y = ∑_(k=0)^n x_k·2^k. Conditioned on A_n, the leading bit x_n = 1, and each other x_k is equal to 1 with probability p, so we have E[Y | A_n] = 2^n + p·∑_(k=0)^(n−1) 2^k = 2^n + p·(2^n − 1) = 2^n·(1 + p) − p.
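The conditional expectation can be verified exactly by enumerating the n free trailing bits (a sketch; names are mine):

```python
from itertools import product

def conditional_mean(n, p):
    # E[Y | A_n]: the codeword is 0^n 1 x_(n-1) ... x_0, with the leading bit
    # fixed at 1 and each trailing bit independently 1 with probability p.
    total = 0.0
    for bits in product((0, 1), repeat=n):
        prob = 1.0
        for b in bits:
            prob *= p if b else 1 - p
        value = 2 ** n + sum(b * 2 ** k for k, b in enumerate(bits))
        total += prob * value
    return total

# Enumeration should match the formula 2^n (1 + p) - p.
for n in range(6):
    for p in (0.2, 0.5, 0.8):
        assert abs(conditional_mean(n, p) - (2 ** n * (1 + p) - p)) < 1e-12
```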
To compute E[Y], use E[Y] = ∑_n E[Y | A_n]·Pr[A_n] = ∑_n (2^n·(1+p) − p)·q^n·p = −p + (1+p)·p·∑_n 2^n·q^n = −p + (1+p)·p/(1 − 2q), provided 2q < 1 (otherwise the sum diverges). Since 1 − 2q = 2p − 1, a slightly tidier version is (p + p²)/(2p − 1) − p, provided p > 1/2. When p ≤ 1/2, Y doesn't have a finite expectation. (As a quick check, p = 1 gives (1 + 1)/(2 − 1) − 1 = 1, which is what we'd expect.)
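The closed form can be checked against a truncation of the series (a sketch; names are mine; the truncation is only valid when 2q < 1, i.e., p > 1/2):

```python
def ey_series(p, K=200):
    # Truncated series: E[Y] = sum over n of (2^n (1+p) - p) * q^n * p.
    q = 1 - p
    return sum((2 ** n * (1 + p) - p) * q ** n * p for n in range(K))

def ey_closed(p):
    # Closed form: (p + p^2) / (2p - 1) - p, valid for p > 1/2.
    return (p + p * p) / (2 * p - 1) - p

for p in (0.6, 0.75, 0.9, 1.0):
    assert abs(ey_series(p) - ey_closed(p)) < 1e-6
```

At p = 1 every bit is 1, so the first codeword is always the single bit 1 and both expressions give E[Y] = 1.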