1. Definitions
- O(f(n))
A function g(n) is in O(f(n)) ("big O of f(n)") if there exist constants c and N such that |g(n)| ≤ c |f(n)| for n > N.
- Ω(f(n))
A function g(n) is in Ω(f(n)) ("big Omega of f(n)") if there exist constants c and N such that |g(n)| ≥ c |f(n)| for n > N.
- Θ(f(n))
A function g(n) is in Θ(f(n)) ("big Theta of f(n)") if there exist constants c_1, c_2, and N such that c_1|f(n)| ≤ |g(n)| ≤ c_2|f(n)| for n > N.
- o(f(n))
A function g(n) is in o(f(n)) ("little o of f(n)") if for every c > 0 there exists an N such that |g(n)| ≤ c |f(n)| for n > N. This is equivalent to saying that lim_{n→∞} g(n)/f(n) = 0.
- ω(f(n))
A function g(n) is in ω(f(n)) ("little omega of f(n)") if for every c > 0 there exists an N such that |g(n)| ≥ c |f(n)| for n > N. This is equivalent to saying that lim_{n→∞} |g(n)|/|f(n)| diverges to infinity.
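To make the definitions above concrete, here is a minimal Python sketch (the helper name and the sample functions are invented for illustration) that checks whether a candidate pair (c, N) witnesses |g(n)| ≤ c |f(n)| on a finite range of n. Passing such a check is only evidence, not a proof, since it says nothing about n beyond the sampled range.

```python
# Numeric sanity check for the big-O definition: does the candidate pair
# (c, N) satisfy |g(n)| <= c*|f(n)| for every sampled n with N < n <= n_max?
def witnesses_big_o(g, f, c, N, n_max=10_000):
    return all(abs(g(n)) <= c * abs(f(n)) for n in range(N + 1, n_max + 1))

# 5n + 3 is in O(n): c = 6, N = 3 works, since 5n + 3 <= 6n once n >= 3.
print(witnesses_big_o(lambda n: 5 * n + 3, lambda n: n, c=6, N=3))   # True
# n^2 is not in O(n): no finite check can prove that, but this pair fails.
print(witnesses_big_o(lambda n: n**2, lambda n: n, c=100, N=0))      # False
```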
2. Motivating the definitions
Why would we use this notation?
- Constant factors vary from one machine to another. The c factor hides this. If we can show that an algorithm runs in O(n^2) time, we can be confident that it will continue to run in O(n^2) time no matter how fast (or how slow) our computers get in the future.
- For the N threshold, there are several excuses:
Any problem can theoretically be made to run in O(1) time for any finite subset of the possible inputs (e.g. all inputs expressible in 50 MiB or less), by prefacing the main part of the algorithm with a very large TableLookup. So it's meaningless to talk about the relative performance of different algorithms for bounded inputs.
If f(n) > 0 for all n, then we can get rid of N (or set it to zero) by making c large enough. But some f(n) take on zero or undefined values for interesting n (e.g., f(n) = n^2 is zero when n is zero, and f(n) = log(n) is undefined for n = 0 and zero for n = 1). Allowing the minimum N lets us write O(n^2) or O(log n) for classes of functions that we would otherwise have to write more awkwardly as something like O(n^2+1) or O(log (n+2)).
The n > N rule also has a natural connection with the definition of a limit, where lim_{n→∞} g(n) = x if for each ε > 0 there is an N such that |g(n)-x| < ε for n > N. Among other things, this permits the limit test, which says that g(n) = O(f(n)) if lim_{n→∞} g(n)/f(n) exists and is finite.
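As a quick illustration of the limit test: lim_{n→∞} n/n^2 = lim_{n→∞} 1/n = 0, which exists and is finite, so n = O(n^2) (and since the limit is zero, in fact n = o(n^2)).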
3. Proving asymptotic bounds
Most of the time when we use asymptotic notation, we compute bounds using stock theorems like O(f(n)) + O(g(n)) = O(max(f(n), g(n))) or O(c f(n)) = O(f(n)). But sometimes we need to unravel the definitions to see whether a given function fits in a given class, or to prove these utility theorems to begin with. So let's do some examples of how this works.
The function n is in O(n^3).
- Proof
We must find c, N such that for all n > N, |n| < c|n^3|. Since n^3 is much bigger than n for most values of n, we'll pick c to be something convenient to work with, like 1. So now we need to choose N so that for all n > N, |n| < |n^3|. It is not the case that |n| < |n^3| for all n (try plotting n vs n^3 for n < 1) but if we let N = 1, then we have n > 1, and we just need to massage this into n^3 > n. There are a couple of ways to do this, but the quickest is probably to observe that squaring and multiplying by n (a positive quantity) are both increasing functions, which means that from n > 1 we can derive n^2 > 1^2 = 1 and then n^2 * n = n^3 > 1 * n = n.
The function n^3 is not in O(n).
- Proof
Here we need to negate the definition of O(n), a process that turns all existential quantifiers into universal quantifiers and vice versa. So what we need to show is that for all c, N, there exists some n > N for which |n^3| exceeds c |n|. We can ignore any c ≤ 0 since |n| is always positive. So fix some c > 0 and N. We must find an n > N for which n^3 > c n. Solving for n in this inequality (divide both sides by n, then take square roots) gives n > c^{1/2}; so setting n > max(N, c^{1/2}) finishes the proof.
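The proof above is constructive: given any claimed witnesses c and N, it says exactly where to look for a violation. Here is a small Python sketch of that construction (the function name is made up for this note), which picks n just above max(N, c^{1/2}) and confirms that n^3 > c n there.

```python
import math

# Given claimed witnesses (c, N) for the false statement "n^3 is in O(n)",
# produce a concrete n > N with n^3 > c*n, following the proof above.
def counterexample(c, N):
    n = max(N, math.floor(math.sqrt(c))) + 1   # ensures n > N and n > c**0.5
    assert n > N and n**3 > c * n
    return n

# However generous the constants, a violating n exists:
for c, N in [(1, 0), (100, 10), (10**6, 10**3)]:
    print(c, N, counterexample(c, N))
```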
If f_1(n) is in O(g(n)) and f_2(n) is in O(g(n)), then f_1(n)+f_2(n) is in O(g(n)).
- Proof
Since f_1(n) is in O(g(n)), there exist constants c_1, N_1 such that for all n > N_1, |f_1(n)| < c_1 |g(n)|. Similarly there exist c_2, N_2 such that for all n > N_2, |f_2(n)| < c_2 |g(n)|. To show f_1(n)+f_2(n) is in O(g(n)), we must find constants c and N such that for all n > N, |f_1(n)+f_2(n)| < c |g(n)|. Let c = c_1+c_2 and N = max(N_1, N_2). Then if n > N, it is greater than both N_1 and N_2, so we can add together |f_1| < c_1|g| and |f_2| < c_2|g| and apply the triangle inequality to get |f_1+f_2| ≤ |f_1| + |f_2| < (c_1+c_2) |g| = c |g|.
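The witnesses constructed in this proof (c = c_1 + c_2 and N = max(N_1, N_2)) can also be checked numerically. Below is a small Python illustration, not a proof; the particular functions and constants are invented for the demo.

```python
# Two functions in O(n^2), each with its own witnesses:
f1 = lambda n: 3 * n**2 + 7        # c1 = 4, N1 = 7
f2 = lambda n: n**2 + 100 * n      # c2 = 2, N2 = 100
g  = lambda n: n**2

c1, N1 = 4, 7
c2, N2 = 2, 100
c, N = c1 + c2, max(N1, N2)        # witnesses for f1 + f2, as in the proof

# The combined bound holds at every sampled n > N:
assert all(f1(n) + f2(n) <= c * g(n) for n in range(N + 1, 10**4))
print("c =", c, "and N =", N, "work on the sampled range")
```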
4. Asymptotic notation hints
4.1. Remember the difference between big-O, big-Ω, and big-Θ
- Use big-O when you have an upper bound on a function, e.g. the zoo never got more than O(1) new gorillas per year, so there were at most O(t) gorillas at the zoo in year t.
- Use big-Ω when you have a lower bound on a function, e.g. every year the zoo got at least one new gorilla, so there were at least Ω(t) gorillas at the zoo in year t.
- Use big-Θ when you know the function exactly to within a constant-factor error, e.g. every year the zoo got exactly five new gorillas, so there were Θ(t) gorillas at the zoo in year t.
For the others, use little-o and ω when one function becomes vanishingly small relative to the other, e.g. new gorillas arrived rarely and with declining frequency, so there were o(t) gorillas at the zoo in year t. These are not used as much as big-O, big-Ω, and big-Θ in the algorithms literature.
4.2. Simplify your asymptotic terms as much as possible
- O(f(n)) + O(g(n)) = O(f(n)) when g(n) = O(f(n)). If you have an expression of the form O(f(n) + g(n)), you can almost always rewrite it as O(f(n)) or O(g(n)) depending on which is bigger. The same goes for Ω or Θ.
- O(c f(n)) = O(f(n)) if c is a constant. You should never have a constant inside a big O. This includes bases for logarithms: since log_a x = log_b x / log_b a, you can always rewrite O(lg n), O(ln n), or O(log_{1.4467712} n) as just O(log n).
But watch out for exponents and products: O(3^n n^{3.1178} log^{1/3} n) is already as simple as it can be.
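For example (a routine application of the rules above), O(5n^3 + n^2 log n + 27) simplifies to O(n^3): the constant factor 5 is absorbed into c, and both n^2 log n and 27 are themselves O(n^3), so the n^3 term dominates.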
4.3. Remember the limit trick
If you are not sure whether, say, log n is in O(f(n)), try computing the limit as n goes to infinity of (log n)/f(n), and see whether it converges to a constant (zero is fine). You may need to use L'Hôpital's Rule to evaluate such limits if they aren't obvious.
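For instance, sympy (assumed available here; a manual computation works just as well) can evaluate such limits directly:

```python
import sympy

n = sympy.symbols('n', positive=True)
# Prints 0: the limit exists and is finite, so log n is O(n).
print(sympy.limit(sympy.log(n) / n, n, sympy.oo))
# Prints oo: the ratio diverges, so n^3 is not O(n) (in fact n^3 is ω(n)).
print(sympy.limit(n**3 / n, n, sympy.oo))
```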
5. Variations in notation
As with many tools in mathematics, you may see some differences in how asymptotic notation is defined and used.
5.1. Absolute values
Some authors leave out the absolute values. For example, BiggsBook defines f(n) as being in O(g(n)) if f(n) ≤ c g(n) for sufficiently large n. If f(n) and g(n) are non-negative, this is not an unreasonable definition. But it produces odd results if either can be negative: for example, by this definition, -n^{1000} is in O(n^2). (Some authors define O(), Ω(), Θ() only for non-negative functions, avoiding this problem.)
The most common definition (which we will use) says that f(n) is in O(g(n)) if |f(n)| ≤ c |g(n)| for sufficiently large n; by this definition -n^{1000} is not in O(n^2), though it is in O(n^{1000}). This definition was designed for error terms in asymptotic expansions of functions, where the error term might represent a positive or negative error.
You can usually assume that algorithm running times are non-negative, so dropping the absolute value signs is generally harmless in algorithm analysis, but you should remember the absolute value definition in case you run into O() in other contexts.
5.2. Abusing the equals sign
Formally, we can think of O(g(n)) as a predicate on functions, which is true of all functions f(n) that satisfy |f(n)| ≤ c |g(n)| for some c and sufficiently large n. This requires writing that n^2 is O(n^2) where most computer scientists or mathematicians would just write n^2 = O(n^2). Making sense of the latter statement involves a standard convention that is mildly painful to define formally but that greatly simplifies asymptotic analyses.
Let's take a statement like the following:
O(n^2) + O(n^3) + 1 = O(n^3).
What we want this to mean is that the left-hand side can be replaced by the right-hand side without causing trouble. To make this work formally, we define the statement as meaning:
For any f in O(n^2) and any g in O(n^3), there exists an h in O(n^3) such that f(n) + g(n) + 1 = h(n).
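As a concrete instance (with functions chosen purely for illustration): take f(n) = n^2 and g(n) = 2n^3. Then h(n) = n^2 + 2n^3 + 1 works, since for n > 1 we have n^2 ≤ n^3 and 1 ≤ n^3, so h(n) ≤ 4n^3, putting h in O(n^3) with c = 4 and N = 1.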
In general, any appearance of O(), Ω(), or Θ() on the left-hand side gets a universal quantifier (for all) and any appearance of O(), Ω(), or Θ() on the right-hand side gets an existential quantifier (there exists). So
- f(n) + o(f(n)) = Θ(f(n))
becomes
- For any g in o(f(n)), there exists an h in Θ(f(n)) such that f(n)+g(n)=h(n).
and
- O(f(n))+O(g(n))+1 = O(max(f(n),g(n)))+1
becomes
- For any r in O(f(n)) and s in O(g(n)), there exists t in O(max(f(n),g(n))) such that r(n)+s(n)+1=t(n)+1.
The nice thing about this definition is that as long as you are careful about the direction the equals sign goes in, you can treat these complicated pseudo-equations like ordinary equations. For example, since O(n^2) + O(n^3) = O(n^3), we can write
n^2/2 + n(n+1)(n+2)/6 = n^2/2 + O(n^3) = O(n^2) + O(n^3) = O(n^3),
which is much simpler than what it would look like if we had to talk about particular functions being elements of particular sets of functions.
This is an example of abuse of notation, the practice of redefining some standard bit of notation (in this case, equations) to make calculation easier. It's generally a safe practice as long as everybody understands what is happening. But beware of applying facts about unabused equations to the abused ones. Just because O(n^2) = O(n^3) doesn't mean O(n^3) = O(n^2); the big-O equations are not reversible the way ordinary equations are.
More discussion of this can be found in CormenEtAl.
6. More information
It may help to read ProvingInequalities.
Fictional giant robot version: The_Big_O.
CategoryAlgorithmNotes CategoryMathNotes CategoryProgrammingNotes