This paper analyzes the behavior of the quicksort variant that always uses the first element as the pivot, and proves that this algorithm works well under the following smoothed model of inputs: given an adversarially chosen input a_1, ..., a_n, randomly choose a set S of \eps n inputs and randomly permute the inputs in that subset. Unfortunately, this variant of quicksort is known to be a bad algorithm, as it performs quadratically on the already-sorted input. The standard implementation instead uses the median of the first, last, and middle elements as its pivot.
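As a minimal illustration of the variant being analyzed (the function name and comparison counter are mine, not the paper's), the first-element-as-pivot quicksort can be sketched as:

```python
def quicksort_first_pivot(a):
    """Sort a list using the first element as pivot.

    Returns (sorted_list, number_of_comparisons). Illustrative sketch only.
    """
    comparisons = 0

    def sort(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        # One comparison against the pivot per remaining element.
        comparisons += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    return sort(list(a)), comparisons
```

On an already-sorted input every partition is maximally unbalanced, so the count is (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, which is the quadratic behavior mentioned above.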
David Eppstein proposed the following refinement of the smoothed model, in which the first-element-as-pivot algorithm will still perform poorly, but in which it might be possible to prove that the actually implemented algorithm performs well: let \delta be the number of elements of the input that must be moved to make the input sorted or reverse-sorted. Randomly choose a set S of \delta \eps n inputs and randomly permute them. Measure the complexity in terms of \eps and n.
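The perturbation step shared by both models can be sketched as follows; the function name `perturb` and the parameter `k` (the size of S, i.e. \eps n in the basic model and \delta \eps n in the refinement) are illustrative names, not from the paper:

```python
import random

def perturb(a, k, rng=random):
    """Return a copy of a with the values at k randomly chosen positions
    randomly permuted among themselves. Illustrative sketch of the
    smoothed-input perturbation; requires 0 <= k <= len(a).
    """
    n = len(a)
    positions = rng.sample(range(n), k)   # the random set S
    values = [a[i] for i in positions]
    rng.shuffle(values)                   # random permutation within S
    out = list(a)
    for i, v in zip(positions, values):
        out[i] = v
    return out
```

Note that the perturbation only rearranges existing values; the multiset of inputs is unchanged, and at most k positions differ from the adversarial input.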
In this model, we do not perturb the already-sorted input or the reverse-sorted input at all, and the perturbation of any other input depends on its distance to these two inputs. This definition is inspired by the definition of smoothed analysis for problems that take inputs from R^n: we do not perturb the zero vector, and we perturb other vectors in proportion to their norm. For sorting, one may view the already-sorted input as the zero vector, and distance-to-sorted as a norm.
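One natural way to compute the "norm" \delta, assuming "moved" means an element is removed and reinserted elsewhere (an assumption of this sketch, not stated in the text), is n minus the length of the longest monotone subsequence: the elements of such a subsequence can stay put while the rest are moved. A patience-sorting sketch:

```python
from bisect import bisect_left

def lis_length(a):
    """Length of the longest strictly increasing subsequence (patience sorting)."""
    tails = []  # tails[i] = smallest tail of an increasing subsequence of length i+1
    for x in a:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def delta(a):
    """Minimum number of elements to move so that a becomes sorted or
    reverse-sorted, under the remove-and-reinsert interpretation, for
    distinct elements. The longest decreasing subsequence of a is the
    longest increasing subsequence of its reversal.
    """
    n = len(a)
    return min(n - lis_length(a), n - lis_length(a[::-1]))
```

Under this interpretation, \delta = 0 for both the sorted and the reverse-sorted input, so those two inputs are indeed not perturbed at all.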