The **Johnson-Lindenstrauss theorem** says that it is possible to project a set of n vectors in a space of arbitrarily high dimension onto an O(log n)-dimensional subspace, such that the distances between the vectors are approximately preserved. There are several versions of the theorem, depending on how the projection is done, but in each case we are looking at showing that for sufficiently large k as a function of n there is some k×d matrix A such that for any two d-dimensional vectors u and v in our set, (1-ε)‖u-v‖^{2} ≤ ‖Au-Av‖^{2} ≤ (1+ε)‖u-v‖^{2}. Typical choices for A are:

- A projection matrix for a uniform random k-dimensional subspace. (Original Johnson-Lindenstrauss paper, Dasgupta-Gupta proof described below.)
- A matrix whose elements are i.i.d. normal variables. (Indyk-Motwani, probably others.)
- A matrix whose elements are i.i.d. ±1 variables. (Achlioptas. An appropriate distribution on {-1,0,+1} also works.)

In each case, we multiply the matrix by a fixed scale factor so that E[‖Au‖^{2}] = ‖u‖^{2}.
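To make the scaling concrete, here is a small NumPy sketch (the 1/√k normalization and all names here are my own illustration, not from the papers cited above) generating the Gaussian and ±1 versions of A and checking that ‖Au‖^{2}/‖u‖^{2} comes out near 1:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 100

# Gaussian version: i.i.d. N(0,1) entries, scaled by 1/sqrt(k)
# so that E[||Au||^2] = ||u||^2.
A_gauss = rng.standard_normal((k, d)) / np.sqrt(k)

# Sign version: i.i.d. +/-1 entries, with the same scaling.
A_sign = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

u = rng.standard_normal(d)
print(np.linalg.norm(A_gauss @ u)**2 / np.linalg.norm(u)**2)  # near 1
print(np.linalg.norm(A_sign @ u)**2 / np.linalg.norm(u)**2)   # near 1
```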

# 1. Reduction to single-vector case

Most proofs of the theorem reduce to the case of a single vector, by finding a value of k such that Pr[(1-ε)‖u‖^{2} ≤ ‖Au‖^{2} ≤ (1+ε)‖u‖^{2}] ≥ 1-2/n^{2}, and then applying the union bound over all (n choose 2) difference vectors u-v: the probability that some pair is distorted is at most (n choose 2)·(2/n^{2}) = (n-1)/n < 1, so the property holds for all pairs with nonzero probability. This shows the existence of a good matrix, and since the property can be checked directly, we can generate matrices and test them until we find one that actually works.
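A minimal sketch of this generate-and-test loop, assuming a Gaussian choice of A and arbitrary parameter values of my own:

```python
import numpy as np

def distortion_ok(A, points, eps):
    """Check (1-eps)||u-v||^2 <= ||Au-Av||^2 <= (1+eps)||u-v||^2 for all pairs."""
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            w = points[i] - points[j]
            ratio = np.linalg.norm(A @ w)**2 / np.linalg.norm(w)**2
            if not (1 - eps <= ratio <= 1 + eps):
                return False
    return True

rng = np.random.default_rng(1)
d, k, n, eps = 500, 200, 20, 0.5
points = [rng.standard_normal(d) for _ in range(n)]
while True:  # regenerate until a good matrix turns up
    A = rng.standard_normal((k, d)) / np.sqrt(k)
    if distortion_ok(A, points, eps):
        break
```

Since a random A succeeds with nonzero (in fact, constant) probability, the loop terminates quickly in expectation.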

# 2. A relatively simple proof of the theorem

This proof is due to Dasgupta and Gupta (see this paper), and is somewhat simpler than many proofs of the theorem. The basic idea is that if A projects onto a uniform random k-dimensional subspace, then its effect on the length of an arbitrary nonzero vector u is the same as its effect on the length of the unit vector u/‖u‖, which in turn is the same as the effect of some specific fixed matrix B applied to a random unit vector Y drawn uniformly from the surface of the d-dimensional sphere S^{d}. An easy choice for B is just the matrix that extracts the first k coordinates of Y.

So how do we get a random unit vector Y? The usual trick is to use the fact that the multivariate normal distribution is radially symmetric: if we generate d independent normally-distributed random variables X_{i}, the vector Y = X/‖X‖ is uniformly distributed over S^{d}. Then Z = BY = (X_{1}...X_{k})/‖X‖, and ‖Z‖^{2} = (X_{1}^{2} + X_{2}^{2} + ... + X_{k}^{2}) / (X_{1}^{2} + ... + X_{d}^{2}). If each value X_{i}^{2} were exactly equal to its expectation 1 (the variance of a standard N(0,1) normal random variable), then we would get ‖Z‖^{2} exactly equal to k/d, and we could make ‖Z‖^{2} exactly equal to 1 by multiplying Z by a correction factor of √(d/k). But we can't expect each X_{i}^{2} to equal 1; instead, we will use a Chernoff bound argument to show that ‖Z‖^{2} lies between (1-ε)(k/d) and (1+ε)(k/d) with high probability.
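This construction is easy to simulate; the following sketch (parameter values are arbitrary) draws Y as described and checks that ‖Z‖^{2} concentrates near k/d:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 1000, 50

X = rng.standard_normal(d)
Y = X / np.linalg.norm(X)   # uniformly distributed on the sphere
Z = Y[:k]                   # B extracts the first k coordinates
ratio = np.sum(Z**2)        # equals (X_1^2+...+X_k^2)/(X_1^2+...+X_d^2)
print(ratio, k / d)         # ratio concentrates around k/d
```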

To avoid writing a lot of (1-ε)'s and (1+ε)'s, we'll write β for the relevant factor: β = 1-ε in the lower-bound case and β = 1+ε in the upper-bound case. For the lower bound, we want to show that it is unlikely that

‖Z‖^{2} ≤ β(k/d)

or

(X_{1}^{2} + X_{2}^{2} + ... + X_{k}^{2}) / (X_{1}^{2} + ... + X_{d}^{2}) ≤ β(k/d).

Having a ratio between two sums is a pain, but we can multiply out the denominator to turn it into something we can Chernoff-ify:

Pr[‖Z‖^{2} ≤ β(k/d)]
= Pr[β(k/d)(X_{1}^{2} + ... + X_{d}^{2}) - (X_{1}^{2} + ... + X_{k}^{2}) ≥ 0]
= Pr[exp(t(β(k/d)(X_{1}^{2} + ... + X_{d}^{2}) - (X_{1}^{2} + ... + X_{k}^{2}))) ≥ 1]
≤ E[exp(t(β(k/d)(X_{1}^{2} + ... + X_{d}^{2}) - (X_{1}^{2} + ... + X_{k}^{2})))]
= E[exp(tβ(k/d)X^{2})]^{d-k} · E[exp(t(β(k/d)-1)X^{2})]^{k},

where t can be any value greater than 0, the shift from Pr to E uses Markov's inequality, and in the last step X is a standard normal random variable (since the X_{i} are i.i.d.).

Now we just need to be able to compute the moment generating function E[exp(sX^{2})] for X^{2}. The quick way to do this is to notice that X^{2} has a chi-squared distribution with one degree of freedom, and look up the m.g.f. (1-2s)^{-1/2} (for s < 1/2). If we are less happy about looking things up, we can use the fact that X^{2}+Y^{2} has an exponential distribution when X and Y are both standard normals to show that E[exp(sX^{2})]^{2} = 1/(1-2s), and then take the square root of both sides. In either case, we can substitute the m.g.f. into the formula above to get

Pr[‖Z‖^{2} ≤ β(k/d)] ≤ (1 - 2tβ(k/d))^{-(d-k)/2} (1 - 2t(β(k/d) - 1))^{-k/2}.
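If we want to double-check the m.g.f. without looking anything up, a quick Monte Carlo comparison against (1-2s)^{-1/2} (sample size chosen arbitrarily; s must stay well below 1/2 for the estimator to behave) looks like:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(2_000_000)

for s in (0.1, 0.2, -0.5):
    empirical = np.mean(np.exp(s * X**2))  # Monte Carlo estimate of E[exp(s X^2)]
    exact = (1 - 2 * s) ** -0.5            # chi-squared(1) m.g.f.
    print(s, empirical, exact)
```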

The rest is just the usual trick of finding the minimum value of this expression over all t (avoiding values that produce negative arguments inside the m.g.f.!) by differentiating and setting the derivative to 0. This is easiest if we instead maximize 1/g(t)^{2}, where g(t) is the expression above; see the Dasgupta-Gupta paper for details. The result is that for β<1:

Pr[‖Z‖^{2} ≤ β(k/d)] ≤ exp((k/2)(1 - β + ln β)).

The same argument applied to g(-t) gives essentially the same bound for β>1:

Pr[‖Z‖^{2} ≥ β(k/d)] ≤ exp((k/2)(1 - β + ln β)).

After further manipulations (see paper), this gives the existence of a good map at k ≥ 4(ε^{2}/2 - ε^{3}/3)^{-1} ln n, and in general we can get a polynomially-small probability of having distortion >ε for polynomially-many points for some k = O(ε^{-2} log n). Note that this depends on n but not d.
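As a sanity check on this bound, a tiny helper (the function name is my own) evaluating k ≥ 4(ε^{2}/2 - ε^{3}/3)^{-1} ln n:

```python
import math

def jl_dimension(n, eps):
    """Smallest integer k satisfying k >= 4 (eps^2/2 - eps^3/3)^{-1} ln n."""
    return math.ceil(4 * math.log(n) / (eps**2 / 2 - eps**3 / 3))

print(jl_dimension(1_000_000, 0.1))  # depends on n but not on d
```

Note the ε^{-2} dependence: halving the distortion roughly quadruples the target dimension.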

Constructing a projection matrix for a random k-dimensional space is mildly painful; the easiest way to do it may be to generate a random k×d matrix of independent N(0,1) variables and then apply Gram-Schmidt orthogonalization, which takes O(k^{2} d) time. A faster approach is to use a matrix of independent ±1 random variables, which doesn't produce an orthogonal projection but can be shown (with much more effort) to still work pretty well: see this paper of Achlioptas.
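One possible sketch of the first route, assuming NumPy (np.linalg.qr does the Gram-Schmidt-style orthogonalization, and the √(d/k) factor is the scale correction discussed earlier):

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 1000, 100

# Orthonormalize k random Gaussian directions.  Q's columns form an
# orthonormal basis for a uniformly random k-dimensional subspace.
G = rng.standard_normal((d, k))
Q, _ = np.linalg.qr(G)            # Q is d x k with orthonormal columns
A = np.sqrt(d / k) * Q.T          # scaled so E[||Au||^2] = ||u||^2

u = rng.standard_normal(d)
print(np.linalg.norm(A @ u)**2 / np.linalg.norm(u)**2)  # near 1
```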