Notes on solving recurrences. *These are originally from CS365, and emphasize asymptotic solutions; for CS202 we recommend also looking at GeneratingFunctions.*

# 1. The problem

A **recurrence** or **recurrence relation** defines an infinite sequence by describing how to calculate the n-th element of the sequence given the values of smaller elements, as in:

- T(n) = T(n/2) + n, T(0) = T(1) = 1.

In principle such a relation allows us to calculate T(n) for any n by applying the first equation until we reach the base case. To *solve* a recurrence, we will find a formula that calculates T(n) directly from n, without this recursive computation.

Not all recurrences can be solved exactly, but for most of the recurrences that arise in analyzing recursive algorithms, we can at least get an asymptotic (i.e. big-Theta) solution (see AsymptoticNotation).

By convention we only define T by a recurrence like this for integer arguments, so the T(n/2) by convention represents either T(floor(n/2)) or T(ceiling(n/2)). If we want an exact solution for values of n that are not powers of 2, then we have to be precise about this, but if we only care about a big-Theta solution we will usually get the same answer no matter how we do the rounding.^{1} From this point on we will assume that we only consider values of n for which the recurrence relation does not produce any non-integer intermediate values.
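To see the rounding claim in action, here is a short Python sketch (the function names are mine, not part of the notes) that evaluates T(n) = T(n/2) + n, T(0) = T(1) = 1 with both roundings and confirms that the two versions differ by only O(log n), so both are Θ(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def t_floor(n):
    """T(n) with T(n/2) read as T(floor(n/2))."""
    if n <= 1:
        return 1
    return t_floor(n // 2) + n

@lru_cache(maxsize=None)
def t_ceil(n):
    """T(n) with T(n/2) read as T(ceiling(n/2))."""
    if n <= 1:
        return 1
    return t_ceil(-(-n // 2)) + n   # -(-n // 2) is ceiling division

# The two readings differ by at most O(log n), which vanishes
# inside the Theta(n) answer.
for n in (10, 100, 1000, 10**6):
    assert abs(t_floor(n) - t_ceil(n)) <= 2 * n.bit_length()
```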

# 2. Guess but verify

As when solving any other mathematical problem, we are not required to explain where our solution came from as long as we can prove that it is correct. So the most general method for solving recurrences can be called "guess but verify". Naturally, unless you are very good friends with the existential quantifier you may find it hard to come up with good guesses. But sometimes it is possible to make a good guess by iterating the recurrence a few times and seeing what happens.

## 2.1. Forward substitution

Let's consider the recurrence

- T(n) = T(n-1) + 2n - 1
- T(0) = 0

The **method of forward substitution** proceeds by generating the first half-dozen or so terms in the sequence described by the recurrence, in the hope that it will turn out to be a sequence we recognize. In this case, we can calculate

| n | T(n) |
|---|------|
| 0 | 0 |
| 1 | 1 |
| 2 | 4 |
| 3 | 9 |
| 4 | 16 |
| 5 | 25 |

and at this point we can shout "Aha! This example was carefully rigged to give T(n)=n²!". Unfortunately, there are infinitely many sequences that start with these six numbers, so we can't be fully sure that this is the right answer until we prove it. So let's do so.
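Generating such a table is purely mechanical; a minimal Python sketch (the helper name is mine) that tabulates T(n) = T(n-1) + 2n - 1, T(0) = 0 and checks the guess:

```python
def forward_terms(k):
    """Return [T(0), T(1), ..., T(k)] for T(n) = T(n-1) + 2n - 1, T(0) = 0."""
    terms = [0]                      # T(0) = 0
    for n in range(1, k + 1):
        terms.append(terms[-1] + 2 * n - 1)
    return terms

print(forward_terms(5))              # [0, 1, 4, 9, 16, 25]
assert forward_terms(5) == [n * n for n in range(6)]
```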

We will show that T(n) = n² satisfies the above recurrence for all n, by induction on n. The base case is n = 0; here T(0) = 0 = 0². For n > 0, we have

- T(n) = T(n-1) + 2n - 1 = (n-1)² + 2n - 1 = n² - 2n + 1 + 2n - 1 = n²,

and we are done.

(Here we used the induction hypothesis where we replace T(n-1) by (n-1)². We will usually not be very explicit about this.)

Here's a less rigged example:

- T(n) = T(floor(n/2)) + n, T(0) = 0.

Computing small cases gives

| n | T(n) |
|---|------|
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 4 |
| 4 | 7 |
| 5 | 8 |
| 6 | 10 |
| 7 | 11 |
| 8 | 15 |

which doesn't really tell us much about the behavior for large values. We can easily add a few powers of 2 to get an idea of what happens later:

| n | T(n) |
|----|-----|
| 16 | 31 |
| 32 | 63 |
| 64 | 127 |

From this we might guess that the solution satisfies T(n) <= 2n (or perhaps T(n) <= 2n - 1 for n > 0). So let's see if we can prove it:

- Base case
  T(0) = 0 <= 2·0.

- Induction step
  For n > 0, T(n) = T(floor(n/2)) + n <= 2 floor(n/2) + n <= 2(n/2) + n = 2n.

We might be able to prove a slightly tighter bound with more work, but this is enough to show T(n) = O(n). Showing that T(n) = Omega(n) is trivial (since T(n) >= n), so we get T(n) = Θ(n) and we are done.
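The bound T(n) <= 2n can also be spot-checked numerically; a small sketch that computes T(n) = T(floor(n/2)) + n, T(0) = 0 iteratively:

```python
def T(n):
    """Compute T(n) = T(floor(n/2)) + n, T(0) = 0, by unrolling the recursion."""
    total = 0
    while n > 0:
        total += n
        n //= 2
    return total

# The bound holds for every n we try, and is tight at n = 2^k
# (where T(n) = 2n - 1).
assert all(T(n) <= 2 * n for n in range(1, 10000))
assert T(8) == 15 and T(64) == 127
```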

Applying the method of forward substitution requires a talent for recognizing sequences from their first few elements. If you are not born with this talent, you can borrow it from the mathematician Neil J. A. Sloane, or at least from his On-Line Encyclopedia of Integer Sequences (OEIS).

## 2.2. Backward substitution

Backward substitution, like forward substitution, tries to find a pattern from which we can guess a solution that we then prove using other techniques---but now we start with T(n) and expand it recursively using the recurrence.

For example, if we consider the same T(n) = T(n/2) + n recurrence from above, and assume for simplicity that n is a power of 2, then we get

- T(n) = n + T(n/2) = n + n/2 + T(n/4) = n + n/2 + n/4 + T(n/8) = n + n/2 + n/4 + n/8 + T(n/16) = ...

From this we might reasonably guess that T(n) is bounded by

- Sigma_{i=0 to infinity} n/2^{i} = 2n.

(See ComputingSums for how to compute this sum.) This is the same guess that we got from the method of forward substitution, so we prove that it works in the same way.
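The expansion itself can be mechanized; a minimal sketch (helper name is mine) that collects one term per level of the recursion, assuming n is a power of 2:

```python
def expand(n):
    """Collect the terms n, n/2, n/4, ... produced by unrolling
    T(n) = T(n/2) + n down to the base case (n a power of 2)."""
    terms = []
    while n >= 1:
        terms.append(n)
        n //= 2
    return terms

print(expand(16))              # [16, 8, 4, 2, 1]
# The total is just shy of the guessed bound 2n:
assert sum(expand(16)) == 31   # 2*16 - 1
```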

Note that in both methods we can omit how we got the guess from the final proof, though some readers might consider a refusal to explain a particularly good guess a form of teasing.

# 3. Converting to a sum

Some common recurrences appear often enough in AlgorithmAnalysis that it makes sense to solve them once in general form, and then just apply the general solution as needed. You can either memorize these solutions, or, better yet, remember enough of how they are derived to be able to reconstruct them.

## 3.1. When T(n) = T(n-1) + f(n)

This is the easiest case, which usually appears in DecreaseAndConquer algorithms. Here it is easy to see (and can be proved by induction) that

- T(n) = T(0) + Sigma_{i=1 to n} f(i).

Example: for T(n) = T(n-1) + n², we immediately have

- T(n) = T(0) + Sigma_{i=1 to n} i²,

which we can quickly show is Θ(n³) in any number of ways (see ComputingSums).
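As a double check, a short script comparing the recurrence (taking T(0) = 0 for concreteness) against the standard closed form Sigma_{i=1 to n} i² = n(n+1)(2n+1)/6, which is visibly Θ(n³):

```python
def T(n):
    """T(n) = T(n-1) + n^2, with T(0) = 0 assumed as the base case."""
    return T(n - 1) + n * n if n > 0 else 0

# The recurrence matches the closed form exactly.
for n in range(200):
    assert T(n) == n * (n + 1) * (2 * n + 1) // 6
```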

## 3.2. When T(n) = aT(n-1) + f(n)

This is a little trickier, because as the arguments to the f's drop, they are multiplied by more and more a's. After some backward substitution it is not hard to recognize the pattern

- T(n) = a^{n} T(0) + Sigma_{i=0 to n-1} a^{i} f(n-i).

Example: T(n) = 2T(n-1) + n. Then from the formula

- T(n) = 2^{n} T(0) + Sigma_{i=0 to n-1} 2^{i} (n-i).

This turns out to be a rather painful sum to solve exactly, but we can reasonably guess that it's somewhere between 2^{n} and 2^{n}n, and try guess-but-verify to whittle the range down further.
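A quick numerical check of the unrolled pattern, taking T(0) = 0 for concreteness:

```python
def direct(n):
    """Evaluate T(n) = 2T(n-1) + n directly, with T(0) = 0 assumed."""
    t = 0
    for k in range(1, n + 1):
        t = 2 * t + k
    return t

def unrolled(n):
    """The unrolled sum Sigma_{i=0}^{n-1} 2^i (n-i), with the
    2^n T(0) term vanishing since T(0) = 0."""
    return sum(2 ** i * (n - i) for i in range(n))

# The two agree term for term.
for n in range(20):
    assert direct(n) == unrolled(n)
```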

## 3.3. When T(n) = aT(n/b) + f(n)

This form of recurrence tends to arise from DivideAndConquer algorithms. For n = b^{k}, n/b = b^{k-1}, which makes this recurrence a thinly-disguised version of the last one. We thus have

- T(n) = Sigma_{i=0 to log[b] n - 1} a^{i} f(n/b^{i}) + a^{log[b] n} T(1) = Sigma_{i=0 to log[b] n - 1} a^{i} f(n/b^{i}) + n^{log[b] a} T(1),

where log_{b} x = log x / log b, the base-b logarithm of x, and the last step uses the identity a^{log[b] n} = n^{log[b] a}.

These sums can get ugly fast. Here's an unusually clean case:

- T(n) = 2T(n/2) + n.

Then

- T(n) = Sigma_{i=0 to lg n - 1} 2^{i} (n/2^{i}) + n T(1) = n lg n + n T(1) = Θ(n log n),

provided, of course, that n is a power of 2. For values of n that are not powers of 2, or for less conveniently constructed recurrences of this form, it is often better to skip directly to the Master Theorem.
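A quick check of this solution for powers of 2, assuming T(1) = 1:

```python
def T(n):
    """T(n) = 2T(n/2) + n with T(1) = 1 assumed, for n a power of 2."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# With n = 2^k, the sum gives T(n) = n lg n + n T(1) = n*k + n.
for k in range(15):
    n = 2 ** k
    assert T(n) == n * k + n
```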

# 4. The Master Theorem

The Master Theorem provides instant asymptotic solutions, valid for all values of n (not just powers of b), for many recurrences of the form T(n) = aT(n/b) + f(n). It is based on applying the analysis of the preceding section to various broad families of functions f, and then extending the results using a monotonicity argument to values of n that are not powers of b. Here we sketch out the proof; see LevitinBook Appendix B for a more detailed argument.

If f(n) = 0, then the recurrence is simply T(n) = aT(n/b). This has solution T(n) = n^{log[b] a} T(1) = Θ(n^{log[b] a}). (Example: T(n) = 4T(n/2) has solution Θ(n^{lg 4}) = Θ(n²).) We classify different cases of the Master Theorem based on how f(n) compares to this default solution.

Recall that the general solution is

- T(n) = Sigma_{i=0 to log[b] n - 1} a^{i} f(n/b^{i}) + n^{log[b] a} T(1).

We assume that T(1) = Θ(1) throughout.

Suppose that f(x) = x^{c}. Then a^{i} f(n/b^{i}) = a^{i} n^{c} / b^{ic} = n^{c} (a/b^{c})^{i}. The sum is then a geometric series with ratio (a/b^{c}), and its behavior depends critically on whether (a/b^{c}) is less than 1, equal to 1, or greater than 1.

If (a/b^{c}) is less than 1, then Sigma_{i=0 to infinity} n^{c} (a/b^{c})^{i} = n^{c}/(1-(a/b^{c})) = O(n^{c}). This case arises when log(a/b^{c}) = log a - c log b is less than zero, which occurs precisely when c > log a / log b = log_{b} a. So if f(n) = n^{c}, the f(n) term in the sum dominates both the rest of the sum and the n^{log[b] a} term, and we get T(n) = Θ(f(n)). If f(n) is Omega(n^{c}), but satisfies the additional technical requirement that af(n/b) <= (1-delta) f(n) for all n and some fixed delta > 0, then the geometric series argument still works with factor (1-delta), and we still get T(n) = Θ(f(n)). This covers the case where f(n) = Omega(n^{log[b] a + epsilon}).

If (a/b^{c}) is equal to 1, then every term in the sum is the same, and the total is f(n) log_{b} n. In this case c = log_{b} a, so f(n) = n^{log[b] a} and f(n) log_{b} n dominates (barely) the T(1) term. An extended version of this analysis shows that the solution is T(n) = Θ(f(n) log n) when f(n) = Θ(n^{log[b] a}).

Finally, if (a/b^{c}) is greater than 1, we have a geometric series whose sum is proportional to its last term, which can be shown to be asymptotically smaller than the T(1) term. This case gives T(n) = Θ(n^{log[b] a}) for any f(n) = O(n^{log[b] a - epsilon}).

Summarizing, we have

- If f(n) = O(n^{log[b] a - epsilon}), then T(n) = Θ(n^{log[b] a}).
- If f(n) = Θ(n^{log[b] a}), then T(n) = Θ(f(n) log n).
- If f(n) = Omega(n^{log[b] a + epsilon}), and there exists c < 1 such that a f(n/b) <= c f(n), then T(n) = Θ(f(n)).

These three cases do not cover all possibilities (consider T(n) = 2T(n/2) + n log n), but they will handle most recurrences of this form you are likely to encounter.
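For f(n) = n^{c}, the three cases reduce to comparing c with log_{b} a. A small classifier sketch (the function name is mine, and the floating-point comparisons are only reliable for exact cases like the ones shown):

```python
import math

def master(a, b, c):
    """Classify T(n) = a T(n/b) + n^c by the Master Theorem."""
    e = math.log(a, b)                   # the critical exponent log_b a
    if c < e:
        return f"Theta(n^{e:g})"         # recursion dominates
    if c == e:
        return f"Theta(n^{c:g} log n)"   # balanced case
    return f"Theta(n^{c:g})"             # f(n) dominates

print(master(4, 2, 1))   # T(n) = 4T(n/2) + n  ->  Theta(n^2)
print(master(2, 2, 1))   # T(n) = 2T(n/2) + n  ->  Theta(n^1 log n)
print(master(1, 2, 1))   # T(n) = T(n/2) + n   ->  Theta(n^1)
```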

CategoryAlgorithmNotes CategoryMathNotes

1. The technical requirements for this to work are stated in Theorem 4 on page 427 of LevitinBook.