
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

1. Definitions

O(f(n))

A function g(n) is in O(f(n)) ("big O of f(n)") if there exist constants c > 0 and N such that |g(n)| ≤ c |f(n)| for all n > N.

Ω(f(n))

A function g(n) is in Ω(f(n)) ("big Omega of f(n)") if there exist constants c > 0 and N such that |g(n)| ≥ c |f(n)| for all n > N.

Θ(f(n))

A function g(n) is in Θ(f(n)) ("big Theta of f(n)") if there exist constants c₁ > 0, c₂ > 0, and N such that c₁|f(n)| ≤ |g(n)| ≤ c₂|f(n)| for all n > N.

o(f(n))

A function g(n) is in o(f(n)) ("little o of f(n)") if for every c > 0 there exists an N such that |g(n)| ≤ c |f(n)| for all n > N. This is equivalent to saying that lim_{n→∞} |g(n)|/|f(n)| = 0.

ω(f(n))

A function g(n) is in ω(f(n)) ("little omega of f(n)") if for every c > 0 there exists an N such that |g(n)| ≥ c |f(n)| for all n > N. This is equivalent to saying that lim_{n→∞} |g(n)|/|f(n)| diverges to infinity.
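
For concreteness, here are the definitions applied to a specific pair of functions, with one possible choice of witnesses in each case (the functions and constants are illustrative; many other choices work):

    \begin{itemize}
    \item $2n^2 + 3 \in O(n^2)$: take $c = 3$ and $N = 2$, since $2n^2 + 3 \le 3n^2$ whenever $n^2 \ge 3$.
    \item $2n^2 + 3 \in \Omega(n^2)$: take $c = 2$ and $N = 1$, since $2n^2 + 3 \ge 2n^2$ for all $n$.
    \item $2n^2 + 3 \in \Theta(n^2)$: combine the two bounds above, with $c_1 = 2$ and $c_2 = 3$.
    \item $n \in o(n^2)$: given any $c > 0$, take $N = 1/c$; then $n \le c\,n^2$ whenever $n > 1/c$.
    \item $n^2 \in \omega(n)$: given any $c > 0$, take $N = c$; then $n^2 \ge c\,n$ whenever $n > c$.
    \end{itemize}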

2. Motivating the definitions

Why would we use this notation? The main reason is that it abstracts away details that don't matter: the running time of a program on a real machine depends on constant factors (how fast the processor is, how good the compiler is) and on lower-order terms that only matter for small inputs, while an asymptotic bound depends only on the algorithm itself. If one algorithm runs in Θ(n log n) time and another in Θ(n²) time, the first is faster for all sufficiently large n, no matter what the constants are.

3. Proving asymptotic bounds

Most of the time when we use asymptotic notation, we compute bounds using stock theorems like O(f(n)) + O(g(n)) = O(max(f(n), g(n))) or O(c f(n)) = O(f(n)). But sometimes we need to unravel the definitions to see whether a given function fits in a given class, or to prove these utility theorems to begin with. So let's do some examples of how this works.
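
For example, here is one way a direct proof from the definition might go; the claim and the witnesses c = 4 and N = 5 are illustrative choices, not the only ones possible:

    \textbf{Claim:} $3n^2 + 5n \in O(n^2)$.

    \textbf{Proof:} Let $c = 4$ and $N = 5$. For any $n > N$ we have $5n \le n \cdot n = n^2$, so
    \[
      |3n^2 + 5n| = 3n^2 + 5n \le 3n^2 + n^2 = 4n^2 = c\,|n^2|,
    \]
    which is exactly the condition in the definition of $O(n^2)$. $\blacksquare$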

4. Asymptotic notation hints

4.1. Remember the difference between big-O, big-Ω, and big-Θ

Use big-O when you have an upper bound on a function, e.g. the zoo never got more than one new gorilla per year, so there were at most O(t) gorillas at the zoo in year t. Use big-Ω when you have a lower bound on a function, e.g. every year the zoo got at least one new gorilla, so there were Ω(t) gorillas at the zoo in year t. Use big-Θ when you know the function to within a constant factor in both directions, e.g. the zoo got exactly five new gorillas per year, so there were Θ(t) gorillas at the zoo in year t. For the others, use little-o and ω when one function becomes vanishingly small relative to the other, e.g. new gorillas arrived rarely and with declining frequency, so there were o(t) gorillas at the zoo in year t. These are not used as much as big-O, big-Ω, and big-Θ in the algorithms literature.

4.2. Simplify your asymptotic terms as much as possible

Since the definitions absorb constant factors and lower-order terms, there is no reason to carry them around: write O(n²) rather than O(5n² + 3n + 12). Similarly, a sum of asymptotic terms collapses to its fastest-growing term.
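
Some typical simplifications, each of which follows from the stock theorems above (the particular functions are illustrative):

    \begin{align*}
    O(5n^2 + 3n + 12) &= O(n^2)        && \text{drop constant factors and lower-order terms,}\\
    O(\log_2 n)        &= O(\log n)    && \text{logarithms in different bases differ by a constant factor,}\\
    O(n) + O(n \log n) &= O(n \log n)  && \text{the faster-growing term absorbs the slower one.}
    \end{align*}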

4.3. Remember the limit trick

If you are confused whether e.g. log n is O(n), try computing the limit as n goes to infinity of (log n)/n, and see if it converges to a constant (zero is OK). In general, if lim_{n→∞} |g(n)|/|f(n)| exists and is finite, then g(n) is O(f(n)). You may need to use L'Hôpital's Rule to evaluate such limits if they aren't obvious.
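
If a limit like this isn't obvious, a computer algebra system can check it mechanically. Here is a minimal sketch using Python with the SymPy library (one choice of tool among many; any CAS with symbolic limits would do):

    # Check asymptotic comparisons by computing symbolic limits with SymPy.
    from sympy import symbols, log, limit, oo

    n = symbols('n', positive=True)

    # lim (log n)/n = 0, so log n is O(n) (in fact o(n)).
    print(limit(log(n) / n, n, oo))             # 0

    # lim n^2/n = oo, so n^2 is not O(n); it is omega(n).
    print(limit(n**2 / n, n, oo))               # oo

    # lim (3n^2 + 5n)/n^2 = 3, a finite nonzero constant, so Theta(n^2).
    print(limit((3*n**2 + 5*n) / n**2, n, oo))  # 3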

5. Variations in notation

As with many tools in mathematics, you may see some differences in how asymptotic notation is defined and used.

5.1. Absolute values

Some authors leave out the absolute values. For example, BiggsBook defines f(n) as being in O(g(n)) if f(n) ≤ c g(n) for sufficiently large n. If f(n) and g(n) are non-negative, this is not an unreasonable definition. But it produces odd results if either can be negative: for example, by this definition, -n¹⁰⁰⁰ is in O(n²). (Some authors define O(), Ω(), Θ() only for non-negative functions, avoiding this problem.)

The most common definition (which we will use) says that f(n) is in O(g(n)) if |f(n)| ≤ c |g(n)| for sufficiently large n; by this definition -n¹⁰⁰⁰ is not in O(n²), though it is in O(n¹⁰⁰⁰). This definition was designed for error terms in asymptotic expansions of functions, where the error term might represent a positive or negative error.

You can usually assume that algorithm running times are non-negative, so dropping the absolute value signs is generally harmless in algorithm analysis, but you should remember the absolute value definition in case you run into O() in other contexts.

5.2. Abusing the equals sign

Formally, we can think of O(g(n)) as a predicate on functions, which is true of all functions f(n) that satisfy |f(n)| ≤ c |g(n)| for some c and all sufficiently large n. This requires writing that n² is O(n²), where most computer scientists or mathematicians would just write n² = O(n²). Making sense of the latter statement involves a standard convention that is mildly painful to define formally but that greatly simplifies asymptotic analyses.

Let's take a statement like the following:

O(n²) + O(n³) = O(n³)

What we want this to mean is that the left-hand side can be replaced by the right-hand side without causing trouble. To make this work formally, we define the statement as meaning:

For all f in O(n²) and all g in O(n³), there exists an h in O(n³) such that f(n) + g(n) = h(n) for all n.

In general, any appearance of O(), Ω(), or Θ() on the left-hand side gets a universal quantifier (for all) and any appearance of O(), Ω(), or Θ() on the right-hand side gets an existential quantifier (there exists). So

f(n) = O(n³)

becomes

There exists a g in O(n³) such that f(n) = g(n) for all n.

and

O(n²) = O(n³)

becomes

For all f in O(n²), there exists a g in O(n³) such that f(n) = g(n) for all n.

The nice thing about this definition is that as long as you are careful about the direction the equals sign goes in, you can treat these complicated pseudo-equations like ordinary equations. For example, since O(n²) + O(n³) = O(n³), we can write

n²/2 + n³/6 = O(n²) + O(n³) = O(n³)

which is much simpler than what it would look like if we had to talk about particular functions being elements of particular sets of functions.
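
Incidentally, SymPy's Order objects implement essentially this one-directional arithmetic; here is a minimal sketch (note that SymPy's O() defaults to the limit point n → 0, so n → ∞ must be given explicitly):

    # Big-O arithmetic with SymPy Order objects, taken as n -> oo.
    from sympy import Order, symbols, oo

    n = symbols('n', positive=True)

    # O(n^2) + O(n^3) = O(n^3): the larger class absorbs the smaller.
    print(Order(n**2, (n, oo)) + Order(n**3, (n, oo)))  # O(n**3, (n, oo))

    # A concrete term absorbed by an O-term: n^2 + O(n^3) = O(n^3).
    print(n**2 + Order(n**3, (n, oo)))                  # O(n**3, (n, oo))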

This is an example of abuse of notation, the practice of redefining some standard bit of notation (in this case, equations) to make calculation easier. It's generally a safe practice as long as everybody understands what is happening. But beware of applying facts about unabused equations to the abused ones. Just because O(n²) = O(n³) doesn't mean O(n³) = O(n²); the big-O equations are not reversible the way ordinary equations are.

More discussion of this can be found in CormenEtAl.

6. More information



