# 1. Diagonal matrices

A matrix A is a **diagonal matrix** if it is a square matrix with A_{ij}=0 whenever i≠j.

- Prove or disprove: If A and B are diagonal matrices of the same size, so is AB.
- Let p(A) = ∏_{i}A_{ii}. Prove or disprove: If A and B are diagonal matrices as above, then p(AB) = p(A)p(B).

## 1.1. Solution

We need to show that (AB)_{ij} = 0 for i≠j (we don't care what happens when i=j). Let i≠j, and compute (AB)_{ij} = ∑_{k}A_{ik}B_{kj} = A_{ii}B_{ij} + A_{ij}B_{jj} = 0, where the first simplification uses the fact that A_{ik}=0 unless i=k (and similarly for B_{kj}), and the second uses the assumption that i≠j.

Now we care what happens to (AB)_{ii}. Compute (AB)_{ii} = ∑_{k}A_{ik}B_{ki} = A_{ii}B_{ii}. So p(AB) = ∏_{i}(AB)_{ii} = ∏_{i}(A_{ii}B_{ii}) = (∏_{i}A_{ii})(∏_{i}B_{ii}) = p(A)p(B).
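Both claims are easy to check numerically. A quick NumPy sketch (the particular diagonal entries are arbitrary choices for illustration):

```python
import numpy as np

# Two arbitrary 3x3 diagonal matrices.
A = np.diag([2.0, 3.0, 5.0])
B = np.diag([7.0, 11.0, 13.0])

AB = A @ B

# AB is again diagonal: every off-diagonal entry is zero.
assert np.count_nonzero(AB - np.diag(np.diag(AB))) == 0

# p(M) = product of the diagonal entries of M.
def p(M):
    return np.prod(np.diag(M))

# p(AB) = p(A) p(B), as proved above.
assert p(AB) == p(A) * p(B)
```

Of course a check at one pair of matrices is no substitute for the proof; it only illustrates the statement.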

# 2. Matrix square roots

- Show that there exists a matrix A such that A≠0 but A²=0.
- Show that if A²=0, there exists a matrix B such that B²=I+A. Hint: What is (I+A)²?

## 2.1. Solution

Here is a simple example of a nonzero matrix whose square is 0: take A = [[0, 1], [0, 0]]. Then A ≠ 0, but a direct computation shows A² = 0.

For the second part, the hint suggests looking at (I+A)² = I² + IA + AI + A² = I + 2A (since IA=AI=A and it is given that A²=0). So I+A is almost right, but there is that annoying 2 there. We can get rid of the 2 by setting B instead to I+½A, which gives B² = (I+½A)² = I+A+¼A² = I+A.
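The construction can be checked directly (a NumPy sketch; the nilpotent A below is the 2×2 example from the first part):

```python
import numpy as np

# A nonzero matrix whose square is zero.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.any(A != 0) and np.all(A @ A == 0)

# B = I + (1/2)A is the square root of I + A constructed above.
I = np.eye(2)
B = I + 0.5 * A
assert np.allclose(B @ B, I + A)
```

The cross terms IA and AI contribute the A in B², while the ¼A² term vanishes because A is nilpotent.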

# 3. Dimension reduction

Let A be an n×m **random matrix** obtained by setting each entry A_{ij} independently to ±1 with equal probability.

Let x be an arbitrary vector of dimension m.

Compute E[||Ax||²], as a function of ||x||, n, and m, where ||x|| = (x⋅x)^{1/2} is the usual Euclidean length.

## 3.1. Solution

Mostly this is just expanding definitions:

E[||Ax||²] = E[∑_{i}(∑_{j}A_{ij}x_{j})²] = ∑_{i}∑_{j}∑_{k}E[A_{ij}A_{ik}]x_{j}x_{k} = ∑_{i}∑_{j}x_{j}² = n||x||².

The second-to-last step follows because E[A_{ij}A_{ik}] = 0 when A_{ij} and A_{ik} are independent (i.e., when j≠k) and E[A_{ij}A_{ij}] = E[(±1)²] = 1.
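For small sizes the expectation can be computed exactly by averaging ||Ax||² over all 2^{nm} sign matrices, which confirms the answer n||x||² (a brute-force sketch; n, m, and x are arbitrary small choices):

```python
import itertools
import numpy as np

n, m = 2, 3
x = np.array([1.0, 2.0, 3.0])   # ||x||^2 = 14

# Enumerate every n x m matrix with entries in {-1, +1}.
matrices = list(itertools.product([-1.0, 1.0], repeat=n * m))

total = 0.0
for signs in matrices:
    A = np.array(signs).reshape(n, m)
    Ax = A @ x
    total += Ax @ Ax               # ||Ax||^2

mean = total / len(matrices)
assert np.isclose(mean, n * (x @ x))   # n * ||x||^2 = 2 * 14 = 28
```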

# 4. Non-invertible matrices

Let A be a square matrix.

- Prove that if Ax=0 for some column vector x≠0, then A^{-1} does not exist.
- Prove that if the columns of A are not linearly independent, then A^{-1} does not exist.
- Prove that if the rows of A are not linearly independent, then A^{-1} does not exist.

## 4.1. Solution

Suppose A^{-1} exists and that Ax=0 for some nonzero x. Then x = (A^{-1}A)x = A^{-1}(Ax) = A^{-1}0 = 0, a contradiction.

Let A_{⋅i} represent the i-th column of A. If the columns of A are not linearly independent, there exist coefficients x_{i}, not all zero, such that ∑ x_{i}A_{⋅i} = 0. But then Ax = ∑ x_{i}A_{⋅i} = 0, where x is the (nonzero) vector of these coefficients. It follows from the previous case that A is not invertible.

Observe that if A has an inverse, then so does its transpose A', since if A^{-1} exists we have (A^{-1})'A' = (AA^{-1})' = I and A'(A^{-1})' = (A^{-1}A)' = I. If the rows of A are not linearly independent, then neither are the columns of A'; it follows that A' has no inverse, and thus neither does A.
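The first two parts can be illustrated numerically: a matrix with linearly dependent columns admits a nonzero x with Ax = 0, and attempting to invert it fails (a NumPy sketch; the particular A is an arbitrary example):

```python
import numpy as np

# Third column = first column + second column, so the
# columns are linearly dependent.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

# A nonzero x with Ax = 0, witnessing the dependence.
x = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ x, 0)

# Inverting A fails, as the solution predicts.
try:
    np.linalg.inv(A)
    raised = False
except np.linalg.LinAlgError:
    raised = True
assert raised
```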