Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

# 1. Diagonal matrices

A matrix $A$ is a diagonal matrix if it is a square matrix with $A_{ij} = 0$ whenever $i \ne j$.

1. Prove or disprove: If $A$ and $B$ are diagonal matrices of the same size, so is $AB$.
2. Let $p(A) = \prod_i A_{ii}$. Prove or disprove: If $A$ and $B$ are diagonal matrices as above, then $p(AB) = p(A)\,p(B)$.

## 1.1. Solution

1. We need to show that $(AB)_{ij} = 0$ for $i \ne j$ (we don't care what happens when $i = j$). Let $i \ne j$, and compute $(AB)_{ij} = \sum_k A_{ik} B_{kj} = A_{ii} B_{ij} + A_{ij} B_{jj} = 0$, where the first simplification uses the fact that $A_{ik} = 0$ unless $i = k$ (and similarly $B_{kj} = 0$ unless $k = j$), and the second uses the assumption that $i \ne j$.

2. Now we care what happens to $(AB)_{ii}$. Compute $(AB)_{ii} = \sum_k A_{ik} B_{ki} = A_{ii} B_{ii}$. So $p(AB) = \prod_i (AB)_{ii} = \prod_i (A_{ii} B_{ii}) = \left(\prod_i A_{ii}\right)\left(\prod_i B_{ii}\right) = p(A)\,p(B)$.
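Both claims are easy to check numerically. Here is a quick sanity check using numpy (not part of the original notes; any linear-algebra library would do):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two random 4x4 diagonal matrices.
A = np.diag(rng.integers(1, 10, size=4).astype(float))
B = np.diag(rng.integers(1, 10, size=4).astype(float))

AB = A @ B
# Part 1: AB is diagonal -- it equals the matrix built from its own diagonal.
assert np.allclose(AB, np.diag(np.diag(AB)))

# Part 2: p(AB) = p(A) p(B), where p is the product of the diagonal entries.
p = lambda M: np.prod(np.diag(M))
assert np.isclose(p(AB), p(A) * p(B))
```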

# 2. Matrix square roots

1. Show that there exists a matrix $A$ such that $A \ne 0$ but $A^2 = 0$.
2. Show that if $A^2 = 0$, there exists a matrix $B$ such that $B^2 = I + A$. Hint: What is $(I+A)^2$?

## 2.1. Solution

Here is a simple example of a nonzero matrix whose square is 0:

$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad A^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$

For the second part, the hint suggests looking at $(I+A)^2 = I^2 + IA + AI + A^2 = I + 2A$ (since $IA = AI = A$ and it is given that $A^2 = 0$). So $I + A$ is almost right, but there is that annoying $2$ there. We can get rid of the $2$ by setting $B$ instead to $I + \frac{1}{2}A$, which gives $B^2 = \left(I + \frac{1}{2}A\right)^2 = I + A + \frac{1}{4}A^2 = I + A$.
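The construction can be verified numerically for the standard $2 \times 2$ nilpotent example (a numpy sketch, not part of the original notes):

```python
import numpy as np

# A nonzero matrix with A^2 = 0.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(A @ A, 0)       # A^2 = 0, even though A != 0

# The square root of I + A given in the solution: B = I + A/2.
I = np.eye(2)
B = I + A / 2
assert np.allclose(B @ B, I + A)   # B^2 = I + A, as claimed
```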

# 3. Dimension reduction

Let $A$ be an $n \times m$ random matrix obtained by setting each entry $A_{ij}$ independently to $\pm 1$ with equal probability.

Let $x$ be an arbitrary vector of dimension $m$.

Compute $E[\|Ax\|^2]$, as a function of $\|x\|$, $n$, and $m$, where $\|x\| = (x \cdot x)^{1/2}$ is the usual Euclidean length.

## 3.1. Solution

Mostly this is just expanding definitions:

$$E\left[\|Ax\|^2\right] = E\left[\sum_i \left(\sum_j A_{ij} x_j\right)^2\right] = \sum_i \sum_j \sum_k E[A_{ij} A_{ik}]\, x_j x_k = \sum_i \sum_j x_j^2 = n \|x\|^2.$$

The second-to-last step follows because $E[A_{ij} A_{ik}] = 0$ when $A_{ij}$ and $A_{ik}$ are independent (i.e., when $j \ne k$) and $E[A_{ij} A_{ij}] = E[(\pm 1)^2] = 1$.

# 4. Non-invertible matrices

Let $A$ be a square matrix.

1. Prove that if $Ax = 0$ for some column vector $x \ne 0$, then $A^{-1}$ does not exist.

2. Prove that if the columns of $A$ are not linearly independent, then $A^{-1}$ does not exist.

3. Prove that if the rows of $A$ are not linearly independent, then $A^{-1}$ does not exist.

## 4.1. Solution

1. Suppose $A^{-1}$ exists and that $Ax = 0$ for some nonzero $x$. Then $x = (A^{-1}A)x = A^{-1}(Ax) = A^{-1}0 = 0$, a contradiction.

2. Let $A_{\cdot i}$ represent the $i$-th column of $A$. If the columns of $A$ are not linearly independent, there exist coefficients $x_i$, not all zero, such that $\sum_i x_i A_{\cdot i} = 0$. But then $Ax = \sum_i x_i A_{\cdot i} = 0$, where $x$ is the (nonzero) vector of these coefficients. It follows from the previous case that $A$ is not invertible.

3. Observe that if $A$ has an inverse, then so does its transpose $A'$, since if $A^{-1}$ exists we have $(A^{-1})'A' = (AA^{-1})' = I$ and $A'(A^{-1})' = (A^{-1}A)' = I$. If the rows of $A$ are not linearly independent, then neither are the columns of $A'$; it follows that $A'$ has no inverse, and thus neither does $A$.
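A concrete instance of parts 1 and 2, sketched in numpy (the particular matrix is an illustrative choice, not from the notes):

```python
import numpy as np

# The second column is twice the first, so the columns are
# linearly dependent.
A = np.array([[1.0, 2.0],
              [3.0, 6.0]])

# The dependence coefficients give a nonzero x with Ax = 0:
# 2 * (column 1) + (-1) * (column 2) = 0.
x = np.array([2.0, -1.0])
assert np.allclose(A @ x, 0)

# Accordingly, numpy refuses to invert A.
singular = False
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError:
    singular = True
assert singular
```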

2014-06-17 11:57