# 1. A destructive RSA key

Let p and q be distinct primes, and let e = 2(p-1)(q-1). Show that the set A = { x^{e} mod pq | x ∈ ℕ } has exactly four elements.

## 1.1. Solution

The 2 is a bit of a red herring: the solution would work without it (or with 2 replaced by any positive integer).

Using the Chinese remainder theorem, there is a bijection ℤ_{pq} ↔ ℤ_{p}×ℤ_{q} such that if x maps to (x_{1},x_{2}), then x^{e} maps to (x_{1}^{e},x_{2}^{e}). Since e is a multiple of p-1, Euler's theorem says that x_{1}^{e} = 1 (mod p) if x_{1} is relatively prime to p; similarly, x_{2}^{e} = 1 (mod q) if x_{2} is relatively prime to q. So for most values (x_{1},x_{2}), we get (x_{1}^{e},x_{2}^{e}) = (1,1).

The exception is when x_{1} = 0 (mod p) or x_{2} = 0 (mod q). In that case, because e = 2(p-1)(q-1) > 0, we have 0^{e} = 0, which gives the additional possibilities (0,0), (0,1), and (1,0). Running the Chinese remainder theorem in the other direction, each of these four pairs corresponds to exactly one element of ℤ_{pq}, for four elements in total.
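As a sanity check, the claim is easy to verify numerically for small primes; here is a sketch in Python (p = 5 and q = 7 are arbitrary small distinct primes):

```python
# Verify that {x^e mod pq} has exactly four elements for small primes.
p, q = 5, 7
e = 2 * (p - 1) * (q - 1)

residues = {pow(x, e, p * q) for x in range(p * q)}
print(sorted(residues))  # → [0, 1, 15, 21]

# The four residues correspond to the pairs (1,1), (0,0), (0,1), (1,0)
# under the CRT bijection: each residue r reduces to 0 or 1 mod p,
# and to 0 or 1 mod q.
assert len(residues) == 4
for r in residues:
    assert r % p in (0, 1) and r % q in (0, 1)
```

The four residues 0, 1, 15, 21 are exactly the CRT preimages of (0,0), (1,1), (0,1), and (1,0).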

# 2. Zero matrices

Recall that a zero matrix, usually just written as 0, is a matrix all of whose entries are zero.

- Prove or disprove: If A and B are square matrices of the same size, and AB = 0, then either A = 0 or B = 0.
- Prove or disprove: If A and B are square invertible matrices of the same size, then AB ≠ 0.

## 2.1. Solution

1. Disproof: Here are two nonzero 2-by-2 matrices whose product is zero: take A = B = [0 1; 0 0] (a single 1 in the top-right corner). Then AB = [0 0; 0 0] = 0.

2. Proof: If A and B are invertible, then (AB)^{-1} = B^{-1}A^{-1}, so AB is also invertible. In particular AB ≠ 0, since the zero matrix is not invertible.
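Both parts can be checked numerically; here is a small sketch in plain Python (the specific matrices are illustrative choices, not the only ones):

```python
# Part 1: a nonzero 2x2 matrix whose product with itself is the zero matrix.
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]  # nonzero, but A*A = 0
assert matmul(A, A) == [[0, 0], [0, 0]]

# Part 2: for invertible A and B, AB is invertible, hence nonzero.
# A 2x2 matrix is invertible iff its determinant is nonzero.
def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

B = [[1, 2], [3, 4]]  # det = -2, invertible
C = [[2, 0], [1, 1]]  # det = 2, invertible
assert det(B) != 0 and det(C) != 0
assert det(matmul(B, C)) == det(B) * det(C) != 0  # det is multiplicative
assert matmul(B, C) != [[0, 0], [0, 0]]
```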

# 3. Xorshift formulas

On a typical machine, a `signed char` value in C consists of 8 bits, which we can imagine as the entries of an 8-dimensional vector x over ℤ_{2}. Some operations that can be performed on `signed char`s include:

- Left shift: Given a `signed char` x, return a new `signed char` y, where y_{0} = 0, y_{1} = x_{0}, y_{2} = x_{1}, ..., y_{7} = x_{6}. (This is written as `y = x << 1`.)
- Right shift with sign extension: Given a `signed char` x, return a new `signed char` y, where y_{7} = x_{7}, y_{6} = x_{7}, y_{5} = x_{6}, ..., y_{0} = x_{1}. (This is written as `y = x >> 1`.)
- XOR: Given two `signed char`s x and y, return a new `signed char` z, where z_{i} = x_{i} + y_{i} (mod 2). (This is written as `z = x ^ y`.)

Note: By convention, the bits in a character are numbered from 7 (most significant) to 0 (least significant), e.g. x = x_{7}x_{6}x_{5}x_{4}x_{3}x_{2}x_{1}x_{0}. This is backwards from the way we usually write vectors, and also ends at 0. For the purposes of this problem, let us meet this convention half-way: imagine that vectors and matrices are written in the usual order (indices go left-to-right and top-to-bottom), but that we start counting at 0.

1. Suppose we represent x as a column vector. Show that there exist matrices L and R with entries in ℤ_{2} such that Lx = x `<<` 1 and Rx = x `>>` 1.
2. Consider the following rules for building *xorshift* formulas in some variable `x`:
   - `x` is a formula.
   - If f is a formula, so are `(`f`<<1)` and `(`f`>>1)`.
   - If f and g are formulas, so is `(`f`^`g`)`.

   Show that for any formula f in this class, there exists a matrix A over ℤ_{2} such that the result of evaluating f for a given x is the same as the result of computing Ax. (Hint: Use induction on the length of f.)

## 3.1. Solution

1. Let L be the matrix with L_{ij} = 1 if j = i-1 and L_{ij} = 0 otherwise. Then (Lx)_{0} = 0 and, for i > 0, (Lx)_{i} = ∑_{j} L_{ij}x_{j} = x_{i-1}. Let R be the matrix with R_{ij} = 1 if j = i+1 or j = i = 7, and R_{ij} = 0 otherwise. Then (Rx)_{7} = ∑_{j} R_{7j}x_{j} = x_{7}, and for i < 7, (Rx)_{i} = ∑_{j} R_{ij}x_{j} = x_{i+1}.

2. Define the length l(f) of a formula f as the number of symbols needed to write it: 1 for `x`, 5+l(f) for `(`f`<<1)` and `(`f`>>1)`, and 3+l(f)+l(g) for `(`f`^`g`)`. (The exact definition doesn't matter, as long as the length of a formula is bigger than the lengths of its subformulas.) We proceed by induction on l(f). If l(f) = 1, then f = `x`, and f(x) = Ix. For a longer formula, if f is `(`g`<<1)`, then by the induction hypothesis there is some matrix B such that g(x) = Bx, so f(x) = LBx, where L is as defined above. Similarly, `(`g`>>1)` can be expressed as RBx. When f = `(`g`^`h`)`, let g(x) = Ax and h(x) = Bx; then f(x) = Ax + Bx = (A+B)x. In each case, we have expressed f as a matrix multiplication, as claimed.

Another way to show part 2 is to show that f is linear, i.e. that f(ax) = af(x) (easy, since one need only consider the two cases a = 0 and a = 1) and that f(x+y) = f(x) + f(y) (harder; it essentially corresponds to the argument above). If f is linear, then there exists some matrix A such that f(x) = Ax, since we can obtain the columns of A by feeding basis vectors to f.
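The basis-vector construction can be sketched concretely. Below, f(x) = `(x ^ (x<<3))` is an arbitrary example of a xorshift formula; its matrix is recovered by feeding in the 8 basis vectors (the values `1 << j`), and linearity then guarantees agreement on all 256 inputs:

```python
def f(v):
    """An example xorshift formula: f(x) = x ^ (x << 3), on 8 bits."""
    return (v ^ (v << 3)) & 0xFF

def bits(v):
    """8-bit representation as a vector over Z_2 (index 0 = least significant)."""
    return [(v >> i) & 1 for i in range(8)]

def matvec(M, x):
    """Matrix-vector product over Z_2."""
    return [sum(M[i][j] * x[j] for j in range(8)) % 2 for i in range(8)]

# Column j of A is f applied to the j-th basis vector, i.e. the value 1 << j.
cols = [bits(f(1 << j)) for j in range(8)]
A = [[cols[j][i] for j in range(8)] for i in range(8)]  # transpose into A[i][j]

# Because f is linear over Z_2, Ax agrees with f on every input,
# not just on the basis vectors used to build A.
for v in range(256):
    assert matvec(A, bits(v)) == bits(f(v))
```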