Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

Notes on solving recurrences. These are originally from CS365, and emphasize asymptotic solutions; for CS202 we recommend also looking at GeneratingFunctions.

1. The problem

A recurrence or recurrence relation defines an infinite sequence by describing how to calculate the n-th element of the sequence given the values of smaller elements, as in:

\[
T(n) = T(n/2) + n \quad (n > 0); \qquad T(0) = 0.
\]

In principle such a relation allows us to calculate T(n) for any n by applying the first equation until we reach the base case. To solve a recurrence, we will find a formula that calculates T(n) directly from n, without this recursive computation.
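
For instance, here is a minimal Python sketch (not part of the original notes) of this recursive computation, using the recurrence above with the floor convention discussed below:

    def T(n):
        """Apply T(n) = T(n/2) + n repeatedly until the base case T(0) = 0."""
        if n == 0:
            return 0
        return T(n // 2) + n  # n // 2 rounds down, per the rounding convention below

    print(T(100))  # 197, computed by unwinding T(100) -> T(50) -> ... -> T(0)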

Not all recurrences are solvable exactly, but in most of the cases that arise in analyzing recursive algorithms we can get at least an asymptotic (i.e., big-Theta) solution (see AsymptoticNotation).

By convention we only define T by a recurrence like this for integer arguments, so T(n/2) represents either T(floor(n/2)) or T(ceiling(n/2)). If we want an exact solution for values of n that are not powers of 2, then we have to be precise about this, but if we only care about a big-Theta solution we will usually get the same answer no matter how we do the rounding.1 From this point on we will assume that we only consider values of n for which the recurrence relation does not produce any non-integer intermediate values.

2. Guess but verify

As when solving any other mathematical problem, we are not required to explain where our solution came from as long as we can prove that it is correct. So the most general method for solving recurrences could be called "guess but verify". Naturally, unless you are very good friends with the existential quantifier, you may find it hard to come up with good guesses. But sometimes it is possible to make a good guess by iterating the recurrence a few times and seeing what happens.

2.1. Forward substitution

Let's consider the recurrence

\[
T(n) = T(n-1) + 2n - 1 \quad (n > 0); \qquad T(0) = 0.
\]

The method of forward substitution proceeds by generating the first half-dozen or so terms in the sequence described by the recurrence, in the hope that it will turn out to be a sequence we recognize. In this case, we can calculate

\[
T(0) = 0,\quad T(1) = 1,\quad T(2) = 4,\quad T(3) = 9,\quad T(4) = 16,\quad T(5) = 25,
\]

and at this point we can shout "Aha! This example was carefully rigged to give T(n)=n²!". Unfortunately, there are infinitely many sequences that start with these six numbers, so we can't be fully sure that this is the right answer until we prove it. So let's do so.

We will show that T(n) = n² satisfies the above recurrence for all n, by induction on n. The base case is n = 0; here T(0) = 0 = 0². For n > 0, we have

\[
T(n) = T(n-1) + 2n - 1 = (n-1)^2 + 2n - 1 = n^2 - 2n + 1 + 2n - 1 = n^2,
\]

and we are done.

(Here we used the induction hypothesis where we replace T(n-1) by (n-1)². We will usually not be very explicit about this.)
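
If we trust arithmetic more than our pattern-matching, we can also check the guess mechanically. Here is a short Python sketch (not part of the original notes) that generates terms by forward substitution and tests them against n²:

    def T(n):
        """T(n) = T(n-1) + 2n - 1 with T(0) = 0."""
        return 0 if n == 0 else T(n - 1) + 2 * n - 1

    print([T(n) for n in range(6)])  # [0, 1, 4, 9, 16, 25]
    assert all(T(n) == n * n for n in range(100))  # consistent with T(n) = n^2

Of course, passing for the first hundred values is evidence, not proof; the induction argument above is what settles it.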

Here's a less rigged example:

\[
T(n) = T(\lfloor n/2 \rfloor) + n \quad (n > 0); \qquad T(0) = 0.
\]

Computing small cases gives

\[
T(0) = 0,\quad T(1) = 1,\quad T(2) = 3,\quad T(3) = 4,\quad T(4) = 7,\quad T(5) = 8,\quad T(6) = 10,\quad T(7) = 11,
\]

which doesn't really tell us much about the behavior for large values. We can easily add a few powers of 2 to get an idea of what happens later:

\[
T(8) = 15,\quad T(16) = 31,\quad T(32) = 63,\quad T(64) = 127,\quad T(128) = 255.
\]

From this we might guess that the solution satisfies T(n) <= 2n (or perhaps T(n) <= 2n - 1 for n > 0). So let's see if we can prove it:

Base case

T(0) = 0 <= 2·0 = 0.

Induction step

For n > 0, T(n) = T(floor(n/2)) + n <= 2 floor(n/2) + n <= 2(n/2) + n = 2n, where the first inequality applies the induction hypothesis to floor(n/2) < n.

We might be able to prove a slightly tighter bound with more work, but this is enough to show T(n) = O(n). Showing that T(n) = Omega(n) is trivial (since T(n) >= n), so we get T(n) = Θ(n) and we are done.
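
As a quick numerical sanity check on these bounds (a sketch, not part of the original notes):

    def T(n):
        """T(n) = T(floor(n/2)) + n with T(0) = 0."""
        return 0 if n == 0 else T(n // 2) + n

    # The bounds n <= T(n) <= 2n proved above should hold for every n:
    assert all(n <= T(n) <= 2 * n for n in range(10000))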

Applying the method of forward substitution requires a talent for recognizing sequences from their first few elements. If you are not born with this talent, you can borrow it from the mathematician Neal J. A. Sloane, or at least from his on-line Encyclopedia of Integer Sequences.

2.2. Backward substitution

Backward substitution, like forward substitution, tries to find a pattern from which we can guess a solution that we then prove using other techniques---but now we start with T(n) and expand it recursively using the recurrence.

For example, if we consider the same T(n) = T(n/2) + n recurrence from above, and assume for simplicity that n is a power of 2, then we get

\[
T(n) = n + T(n/2) = n + n/2 + T(n/4) = n + n/2 + n/4 + T(n/8) = \cdots
\]

From this we might reasonably guess that T(n) is bounded by

\[
\sum_{i=0}^{\infty} n/2^i 
= n \sum_{i=0}^{\infty} 2^{-i}
= 2n.
\]

(See ComputingSums for how to compute this sum.) This is the same guess that we got from the method of forward substitution, so we prove that it works in the same way.
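
To see the geometric series bound in action, here is a one-line numerical check (not part of the original notes) for n = 1024, a power of 2:

    # Partial sums of n/2^i for i = 0 .. lg n stay just below the limit 2n:
    n = 1024
    print(sum(n // 2**i for i in range(11)), 2 * n)  # 2047 2048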

Note that in both methods we can omit how we got the guess from the final proof, though some readers might consider a refusal to explain a particularly good guess a form of teasing.

3. Converting to a sum

Some common recurrences appear often enough in AlgorithmAnalysis that it makes sense to solve them once in general form, and then just apply the general solution as needed. You can either memorize these solutions or, better yet, remember enough of how they are derived to be able to reconstruct them.

3.1. When T(n) = T(n-1) + f(n)

This is the easiest case, which usually appears in DecreaseAndConquer algorithms. Here it is easy to see (and can be proved by induction) that

\[T(n) = \sum_{i=1}^{n} f(i) + T(0).\]

Example: for T(n) = T(n-1) + n², we immediately have

\[T(n) = \sum_{i=1}^{n} i^2 + T(0),\]

which we can quickly show is Θ(n³) in any number of ways (see ComputingSums).
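
A short Python sketch (assuming T(0) = 0 for concreteness; not part of the original notes) confirms that the recurrence and the summation formula agree:

    def T(n):
        """T(n) = T(n-1) + n^2 with T(0) = 0."""
        return 0 if n == 0 else T(n - 1) + n * n

    # Closed form: the sum of f(i) = i^2 for i = 1..n, plus T(0) = 0.
    assert all(T(n) == sum(i * i for i in range(1, n + 1)) for n in range(200))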

3.2. When T(n) = aT(n-1) + f(n)

This is a little trickier, because as the arguments to the f's drop, they are multiplied by more and more a's. After some backward substitution it is not hard to recognize the pattern

\[T(n) = \sum_{i=0}^{n-1} a^i f(n-i) + a^nT(0).\]

Example: T(n) = 2T(n-1) + n. Then from the formula

\[T(n) = \sum_{i=0}^{n-1} 2^i (n-i) + 2^n T(0).\]

This turns out to be a rather painful sum to solve exactly, but we can reasonably guess that it's somewhere between 2^n and n·2^n, and try guess-but-verify to whittle the range down further.
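
A quick numerical experiment (taking T(0) = 0 for concreteness; not part of the original notes) supports both the summation formula and the guessed range:

    def T(n):
        """T(n) = 2*T(n-1) + n with T(0) = 0."""
        return 0 if n == 0 else 2 * T(n - 1) + n

    # With T(0) = 0 the a^n T(0) term vanishes, so the sum alone should match:
    assert all(T(n) == sum(2**i * (n - i) for i in range(n)) for n in range(1, 20))

    print([T(n) / 2**n for n in range(1, 10)])  # ratios approach 2: looks like Theta(2^n)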

3.3. When T(n) = aT(n/b) + f(n)

This form of recurrence tends to arise from DivideAndConquer algorithms. For n = b^k, n/b = b^(k-1), which makes this recurrence a thinly-disguised version of the last one. We thus have

\[T(n)
= \sum_{i=0}^{\log_b n-1} a^i f(n/b^i) + a^{\log_b n} T(1)
= \sum_{i=0}^{\log_b n-1} a^i f(n/b^i) + n^{\log_b a} T(1),
\]

where log_b x = log x / log b is the base-b logarithm of x.

These sums can get ugly fast. Here's an unusually clean case:

\[
T(n) = 2T(n/2) + n.
\]

Then

\[T(n)
= \sum_{i=0}^{\lg n - 1} 2^i (n/2^i) + 2^{\lg n} T(1)
= \sum_{i=0}^{\lg n - 1} n + nT(1)
= n \lg n + n T(1),
\]

provided, of course, that n is a power of 2. For values of n that are not powers of 2, or for less conveniently constructed recurrences of this form, it is often better to skip directly to the Master Theorem.
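
Before moving on, we can verify this closed form directly for powers of 2. A sketch (with T(1) = 1 chosen for concreteness; not part of the original notes):

    def T(n):
        """T(n) = 2*T(n/2) + n with T(1) = 1, defined for n a power of 2."""
        return 1 if n == 1 else 2 * T(n // 2) + n

    # Compare with n lg n + n T(1) = n*k + n when n = 2^k:
    for k in range(11):
        n = 2**k
        assert T(n) == n * k + n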

4. The Master Theorem

The Master Theorem provides instant asymptotic solutions for many recurrences of the form T(n) = aT(n/b) + f(n), solutions that apply for all values of n (not just powers of b). It is based on applying the analysis of the preceding section to various broad families of functions f, and then extending the results to values of n that are not powers of b using a monotonicity argument. Here we sketch out the proof; see LevitinBook Appendix B for a more detailed argument.

If f(n) = 0, then the recurrence is simply T(n) = aT(n/b). This has solution T(n) = n^(log_b a) T(1) = Θ(n^(log_b a)). (Example: T(n) = 4T(n/2) has solution Θ(n^(lg 4)) = Θ(n²).) We classify the different cases of the Master Theorem based on how f(n) compares to this default solution.

Recall that the general solution is

\[
T(n) = \sum_{i=0}^{\log_b n - 1} a^i f(n/b^i) + n^{\log_b a} T(1).
\]

We assume that T(1) = Θ(1) throughout.

Suppose that f(n) = n^c. Then a^i f(n/b^i) = a^i n^c / b^(ic) = n^c (a/b^c)^i. The sum is then a geometric series with ratio a/b^c, and its behavior depends critically on whether a/b^c is less than 1, equal to 1, or greater than 1.

If a/b^c is less than 1, then Σ_{i=0}^∞ n^c (a/b^c)^i = n^c / (1 - a/b^c) = O(n^c). This case arises when log(a/b^c) = log a - c log b is less than zero, which occurs precisely when c > log a / log b = log_b a. So if f(n) = n^c, the f(n) term in the sum dominates both the rest of the sum and the n^(log_b a) term, and we get T(n) = Θ(f(n)). If f(n) is Omega(n^c) but not necessarily a simple power, and satisfies the additional technical requirement that a f(n/b) <= (1-delta) f(n) for all n and some fixed delta > 0, then the geometric series argument still works with factor (1-delta), and we still get T(n) = Θ(f(n)). This covers the case where f(n) = Omega(n^(log_b a + epsilon)).

If a/b^c is equal to 1, then every term in the sum is the same, and the total is f(n) log_b n. In this case c = log_b a, so f(n) = n^(log_b a), and f(n) log_b n dominates (barely) the T(1) term. An extended version of this analysis shows that the solution is T(n) = Θ(f(n) log n) when f(n) = Θ(n^(log_b a)).

Finally, if a/b^c is greater than 1, we have an increasing geometric series whose sum is proportional to its last term, which is Θ(n^(log_b a)) and is thus absorbed into the n^(log_b a) T(1) term. This case gives T(n) = Θ(n^(log_b a)) for any f(n) = O(n^(log_b a - epsilon)).

Summarizing, we have

  1. If f(n) = O(n^(log_b a - epsilon)) for some epsilon > 0, then T(n) = Θ(n^(log_b a)).

  2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(f(n) log n).

  3. If f(n) = Omega(n^(log_b a + epsilon)) for some epsilon > 0, and there exists c < 1 such that a f(n/b) <= c f(n), then T(n) = Θ(f(n)).

These three cases do not cover all possibilities (consider T(n) = 2T(n/2) + n log n), but they will handle most recurrences of this form you are likely to encounter.
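
For the polynomial case f(n) = n^c, the three cases reduce to comparing c with log_b a, which is easy to mechanize. Here is a small Python sketch (not part of the original notes; the function name is ours, and the regularity condition in case 3 holds automatically when f(n) = n^c):

    from math import log

    def master_theorem(a, b, c):
        """Asymptotic solution of T(n) = a*T(n/b) + n^c by the Master Theorem."""
        e = log(a, b)  # the critical exponent log_b a
        if c < e:
            return f"Theta(n^{e:g})"        # case 1: the n^(log_b a) term dominates
        if c == e:                          # exact comparison is fine for these small cases
            return f"Theta(n^{c:g} log n)"  # case 2: all levels contribute equally
        return f"Theta(n^{c:g})"            # case 3: f(n) dominates

    print(master_theorem(4, 2, 1))  # T(n) = 4T(n/2) + n   -> Theta(n^2)
    print(master_theorem(2, 2, 1))  # T(n) = 2T(n/2) + n   -> Theta(n^1 log n)
    print(master_theorem(1, 2, 2))  # T(n) = T(n/2) + n^2  -> Theta(n^2)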


CategoryAlgorithmNotes CategoryMathNotes

  1. The technical requirements for this to work are stated in Theorem 4 on page 427 of LevitinBook.

