The basic idea
If we have a sequence of operations p1, p2, ... pk on some data structure, it may not make sense to compute the worst-case cost of individual operations, as infrequent operations might be unreasonably expensive (e.g. the last insert that forces us to double the size of our hash table at cost O(n)). Instead, we want to compute the aggregate cost of all operations put together. But we would still like to express this cost in terms of single operations---we want an amortized cost per operation that smooths out the cost of occasional expensive operations over the more common cheap ones.
We can choose to assign whatever amortized costs we like to each operation, as long as the total amortized cost pays for the total real cost. Formally, if C*(pi) is the amortized cost of operation pi and C(pi) is the real cost, we want
∑i C*(pi) ≥ ∑i C(pi).
If there is only one kind of operation, we can simply define the amortized cost for each operation in the sequence p1, p2, ... pk by computing the total cost of the sequence and averaging:
C*(pi) = (1/k) ∑i C(pi).
It's not hard to see that this gives the appropriate bound. The worst-case amortized cost is then the maximum amortized cost obtained by averaging in this way when the maximum is taken over all possible sequences of operations.
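As a concrete illustration, here is a sketch of this averaging for the table-doubling example alluded to above; the function name `append_costs` and the cost model (1 per append, plus n extra when a full table of size n must be doubled) are assumptions of this sketch, not from the text:

```python
def append_costs(k):
    """Return the list of real costs C(p_i) for k consecutive appends
    to a doubling array, assuming cost 1 per append plus n extra
    copies when the array of capacity n is full."""
    costs = []
    n, capacity = 0, 1
    for _ in range(k):
        cost = 1
        if n == capacity:        # table full: double it, copying n elements
            cost += n
            capacity *= 2
        costs.append(cost)
        n += 1
    return costs

costs = append_costs(1000)
amortized = sum(costs) / len(costs)   # C*(p_i) = (1/k) * sum_i C(p_i)
print(amortized)                      # 2.023 here; stays below 3 for any k
```

Even though individual appends cost as much as 513 here, the averaged amortized cost is a small constant.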
It's 700 BC, and every vernal equinox you and your fellow druids count the years by incrementing CounterHenge, a huge binary counter in which each bit is represented by a 28,000-pound menhir. Changing a zero to a one requires raising one of these menhirs from a horizontal to a vertical position; changing a one to a zero requires lowering it back down carefully. Either operation consumes 732 druid-months of hard labor.
If you tell your fellow druids that the worst-case cost of an increment is 732n druid-months, where n is the number of menhirs in CounterHenge, they will ritually strangle you with your own intestines and toss you in a nearby peat bog. Instead, let's try using amortized analysis. The ones menhir moves on every increment. The twos menhir moves only on every other increment. The fours menhir moves only on every fourth increment, and in general menhir number i moves only once in every 2^i increments. We thus have that the total number of menhir-shifts involved in counting up to N is bounded by
N + N/2 + N/4 + N/8 + ... = 2N,
and thus the amortized cost of an increment is only 2 menhir-shifts (or 1464 druid-months). If you can recruit ten thousand druids, this should take hardly any time at all.
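This count can be checked directly. A minimal sketch, with a list of bits (least significant first) standing in for CounterHenge:

```python
def increment(bits):
    """Increment a binary counter stored as a list of bits (LSB first).
    Returns the number of bit flips (menhir-shifts) performed."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # lower a standing menhir
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(0)       # recruit a new menhir, initially lying down
    bits[i] = 1              # raise it
    return flips + 1

bits, total = [], 0
N = 1000
for _ in range(N):
    total += increment(bits)
print(total)                 # 1994: within the 2N = 2000 bound
```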
Another way to get the same result is to assign a surcharge to each operation that is then associated with a particular element of the data structure. This is a special case of the more general potential function method that will be described below, but sometimes it is easier to work out. In the case of CounterHenge, observe that the expensive operations all consist of turning many ones into zeroes (because we are carrying from 011111111 to 100000000). So if we store up amortized cost on the cheap operations that don't carry much, we can pay for the expensive operations that do. A natural way to think about this is that we will charge two units of amortized cost for each increment, but when we change a zero to a one we will save the extra unit to pay for turning it back into a zero. Though intuitively this covers the large-carry case (we pay for each one turned into a zero out of our savings, and use the two units of amortized cost to pay for changing the single zero into a one and banking its endowment for being changed back), to carry out the analysis formally it helps to have a more explicit description of what charges are being saved from one operation to another.
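The charging scheme can be made explicit by tracking the banked unit on each one bit; the `credit` array below is a bookkeeping device of this sketch, not part of the counter itself:

```python
def increment(bits, credit):
    """Increment a binary counter (LSB first), maintaining one saved
    unit of credit on every one bit. Returns the real cost (bit flips)."""
    real = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0
        assert credit[i] == 1    # the saved unit pays for this flip
        credit[i] = 0
        real += 1
        i += 1
    if i == len(bits):
        bits.append(0)
        credit.append(0)
    bits[i] = 1
    credit[i] = 1                # bank one unit for the future clearing
    real += 1
    return real

bits, credit = [], []
bank = 0
for _ in range(500):
    bank += 2 - increment(bits, credit)   # charge 2, pay the real cost
    assert bank >= 0                      # the account never goes negative
```

The bank balance at any point is exactly the number of standing ones, each of which holds one saved unit.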
A potential function Φ gives the contents of the algorithm's bank account for each state s. The idea is that any operation p for which the real cost C(p) is less than the amortized cost C*(p) will put us in a state with higher Φ than before, and any operation for which the real cost is greater than the amortized cost will be paid for by a reduction in Φ. Formally, we demand that if p is any operation that changes the state from s to s',
C(p) + Φ(s') - Φ(s) ≤ C*(p). (*)
How does having a potential function help us prove low amortized cost? Suppose we know that Φ(s0) is zero in the initial state s0 and that Φ is non-negative in all states (these properties are usually easy to check). Consider the sequence s0 p0 s1 p1 s2 ... pk-1 sk, where each operation pi carries si to si+1. If the bound (*) above holds for all operations in the sequence, then we have
∑i=0 to k-1 (C(pi) + Φ(si+1) - Φ(si)) ≤ ∑i= 0 to k-1 C*(pi).
The left-hand side of the inequality expands out to
∑i=0 to k-1 C(pi) + ∑i=0 to k-1 Φ(si+1) - ∑i=0 to k-1 Φ(si)
= ∑i=0 to k-1 C(pi) + ∑i=1 to k Φ(si) - ∑i=0 to k-1 Φ(si)
= ∑i=0 to k-1 C(pi) + Φ(sk) - Φ(s0)
≥ ∑i=0 to k-1 C(pi),
where the last step uses Φ(sk) ≥ 0 and Φ(s0) = 0.
We thus have
∑i=0 to k-1 C(pi) ≤ ∑i= 0 to k-1 C*(pi),
and our total amortized costs C* do in fact dominate the real costs in any execution of the algorithm.
For CounterHenge, a state s describes which stones are standing and which are not, and Φ(s) is just the number of one bits in s. It is easy to verify that (*) holds when C*(increment) is 2.
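This verification is mechanical; a quick sketch, with Φ computed as the number of one bits of the counter value:

```python
def popcount(x):
    """Phi(s): the number of one bits (standing menhirs) in state x."""
    return bin(x).count("1")

def flips(x):
    """C(increment) from state x: the trailing ones are cleared
    and one zero is set, so x + 1 costs (trailing ones) + 1 flips."""
    trailing_ones = 0
    while x & 1:
        trailing_ones += 1
        x >>= 1
    return trailing_ones + 1

# Check (*): C(p) + Phi(s') - Phi(s) <= C*(p) with C*(increment) = 2.
for s in range(1024):
    assert flips(s) + popcount(s + 1) - popcount(s) <= 2
```

In fact (*) holds with equality here: an increment with t trailing ones costs t + 1 flips and changes Φ by 1 - t.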
Example: Growing and shrinking a hash table
Suppose we have a hash table of size m holding n elements, which we grow and shrink as follows:

- Double m at cost am when we hit n = m.
- Halve m at cost bm when n = m/4.

We need a potential function Φ with value 0 when n = m/2 (right after growing or shrinking), am when n = m, and bm when n = m/4. We can choose any function we like subject to these constraints, but C*(insert) and C*(delete) are minimized when Φ is a straight line between n = m/2 and n = m and between n = m/2 and n = m/4. So we get:
Φ(n,m) = 2a(n-m/2) when n ≥ m/2, 4b(m/2-n) when n ≤ m/2.
From this we can compute C*(insert) = C(insert) + 2a, C*(delete) = C(delete) + 4b.
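A sketch that checks these amortized bounds empirically; the unit cost per ordinary insert or delete and the particular constants a, b are assumptions of this example:

```python
import random

a, b = 1, 1           # assumed per-element costs of growing and shrinking

def phi(n, m):
    """The piecewise-linear potential function described above."""
    return 2 * a * (n - m / 2) if n >= m / 2 else 4 * b * (m / 2 - n)

n, m = 0, 4
for _ in range(10000):
    op = random.choice(["insert", "delete"]) if n > 0 else "insert"
    old = phi(n, m)
    if op == "insert":
        cost = 1
        n += 1
        if n == m:                  # grow: double m at cost a*m
            cost += a * m
            m *= 2
        assert cost + phi(n, m) - old <= 1 + 2 * a   # C*(insert)
    else:
        cost = 1
        n -= 1
        if m > 4 and n == m // 4:   # shrink: halve m at cost b*m
            cost += b * m
            m //= 2
        assert cost + phi(n, m) - old <= 1 + 4 * b   # C*(delete)
```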
Example: Lazy deletion in BalancedTrees
Like the hash table case; we'll delete nodes by marking them as deleted without actually removing them from the data structure, then regenerate the tree from scratch by doing n insertions when the number of live nodes n drops to half the total number of nodes m. To pay for the O(n lg n) cost of rebuilding the tree, we have to charge O(lg n) extra per deletion.
With a little more cleverness we can drop the cost of rebuilding to O(n), and with a little more sneakiness we can charge the extra O(lg n) when the element is first inserted and provide deletion for free, but since the normal cost of deletion is O(lg n) we don't actually need to be this clever or sneaky to match the usual worst-case bounds.
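A minimal sketch of the lazy-deletion scheme, with a sorted Python list standing in for the balanced tree; the rebuild trigger is the point here, not the tree operations themselves:

```python
import bisect

class LazyTree:
    """Lazy deletion: mark nodes dead, and rebuild from scratch when
    the live nodes drop to half the total."""

    def __init__(self):
        self.keys = []        # sorted keys, including dead ones
        self.dead = set()     # keys marked deleted but not removed

    def insert(self, key):
        bisect.insort(self.keys, key)     # O(lg n) search in a real tree

    def delete(self, key):
        self.dead.add(key)                # just mark it; no restructuring
        if len(self.keys) >= 2 * (len(self.keys) - len(self.dead)):
            live = [k for k in self.keys if k not in self.dead]
            self.keys = []
            for k in live:                # n fresh insertions: O(n lg n)
                self.insert(k)
            self.dead = set()

    def __contains__(self, key):
        i = bisect.bisect_left(self.keys, key)
        return i < len(self.keys) and self.keys[i] == key \
            and key not in self.dead

t = LazyTree()
for k in range(10):
    t.insert(k)
for k in range(5):
    t.delete(k)
print(len(t.keys))   # 5: the fifth deletion triggered a rebuild
```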