
Note: You are looking at a static copy of the former PineWiki site, used for class notes by James Aspnes from 2003 to 2012. Many mathematical formulas are broken, and there are likely to be other bugs as well. These will most likely not be fixed. You may be able to find more up-to-date versions of some of these notes at http://www.cs.yale.edu/homes/aspnes/#classes.

The insertion sort algorithm processes its input one element at a time, and can be seen as an example of the DecreaseByConstant technique. It maintains a sorted output array, and at each iteration it inserts one new element into this array. The usual implementation runs the algorithm in place, using an initial segment of the input array to hold the output.

{{{InsertionSort(A):
    for i = 1 to length(A):
        pointer = i
        while pointer > 1 and A[pointer] < A[pointer-1]:
            swap A[pointer] and A[pointer-1]
            pointer = pointer - 1
}}}
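A direct translation of the pseudocode into Python might look like the sketch below (zero-indexed, so the loop bounds shift down by one; the name `insertion_sort` is chosen here, not taken from the original notes):

```python
def insertion_sort(a):
    """Sort list a in place by repeated insertion."""
    for i in range(1, len(a)):
        pointer = i
        # Swap the new element backward until it is no smaller
        # than its left neighbor.
        while pointer > 0 and a[pointer] < a[pointer - 1]:
            a[pointer], a[pointer - 1] = a[pointer - 1], a[pointer]
            pointer -= 1
    return a
```

For example, `insertion_sort([3, 1, 2])` returns `[1, 2, 3]`.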

The worst-case running time of insertion sort is Theta(n^2), but it runs faster when the input is already close to being sorted. The formal way to state this is that it performs a number of swaps equal to the number of inversions in the original array, where an inversion is defined as a pair of locations i and j with i < j but A[i] > A[j]. The proof is that each swap eliminates exactly one inversion, and the algorithm stops swapping only when the array is sorted, i.e., when no inversions remain.
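One way to check the swaps-equal-inversions claim empirically is to count both quantities directly. The sketch below uses an instrumented insertion sort plus a brute-force inversion counter (the function names are illustrative, not from the original notes):

```python
def insertion_sort_count(a):
    """Insertion sort on list a that also counts the swaps performed."""
    swaps = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j] < a[j - 1]:
            a[j], a[j - 1] = a[j - 1], a[j]
            j -= 1
            swaps += 1
    return swaps

def inversions(a):
    """Count pairs i < j with a[i] > a[j] by brute force."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])
```

For `a = [5, 2, 4, 1, 3]`, `inversions(a)` is 7, and `insertion_sort_count(a)` performs exactly 7 swaps (note that the sort mutates `a`, so count inversions first).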

Moreover, it is a stable, in-place algorithm with a small constant factor, which makes it the fastest sorting algorithm for small (e.g., n < 10) inputs. For this reason, it is often used as the bottom level of recursive sorting algorithms like QuickSort or MergeSort.
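A sketch of how a recursive sort might use insertion sort at the bottom level is shown below, using MergeSort for the recursive case (the cutoff of 10 and the name `hybrid_sort` are illustrative choices, not from the original notes):

```python
def hybrid_sort(a, cutoff=10):
    """MergeSort that falls back to insertion sort on small inputs."""
    if len(a) <= cutoff:
        # Small input: insertion sort's low overhead wins.
        for i in range(1, len(a)):
            j = i
            while j > 0 and a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
                j -= 1
        return a
    # Recursive case: split, sort halves, merge.
    mid = len(a) // 2
    left = hybrid_sort(a[:mid], cutoff)
    right = hybrid_sort(a[mid:], cutoff)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps the merge stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

Real libraries tune the cutoff experimentally; the right value depends on constant factors in the particular implementation.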

2014-06-17 11:58