A **combinatorial optimization problem** generally involves choosing an input from an exponentially-large domain (the combinatorial part) to maximize or minimize some *objective function* (the optimization part). A simple example is finding the shortest path between two points in a graph. There may be exponentially many possible paths, and from this haystack we need to pick out a good one.
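To make the haystack concrete, here is a minimal brute-force sketch of the shortest-path example. The graph representation (a dict mapping each vertex to a list of `(neighbor, weight)` edges) and the function name are our own choices for illustration; the point is that the search visits every simple path, so its running time can grow exponentially with the size of the graph.

```python
def shortest_path_brute_force(graph, s, t):
    # graph: dict mapping u -> list of (v, weight) edges
    # Returns (cost, path) for the cheapest simple path from s to t,
    # found by trying every simple path -- exponential in the worst case.
    best = (float('inf'), None)

    def explore(u, cost, path):
        nonlocal best
        if u == t:
            if cost < best[0]:
                best = (cost, path)
            return
        for v, w in graph.get(u, []):
            if v not in path:          # keep paths simple (no repeated vertices)
                explore(v, cost + w, path + [v])

    explore(s, 0, [s])
    return best

g = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)]}
print(shortest_path_brute_force(g, 'a', 'c'))   # (3, ['a', 'b', 'c'])
```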

Given an algorithm for computing the objective function and an algorithm for generating inputs, it is always possible to solve a combinatorial optimization problem by BruteForce enumeration of all possible solutions. However, this is likely to be expensive, and so there are many algorithms and AlgorithmDesignTechniques in the literature for solving combinatorial optimization problems more cleverly. Standard approaches that yield efficient algorithms for some problems are:

* DynamicProgramming;
* the GreedyMethod; and
* TransformAndConquer approaches, particularly reductions to general-purpose optimization problems like LinearProgramming, ShortestPath, MaxFlow, or the AssignmentProblem.
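As one hedged sketch of transform-and-conquer, consider a made-up "most reliable path" problem: each edge succeeds with some probability, and we want the path maximizing the product of probabilities. Taking `-log` of each probability turns products into sums and maximization into minimization, reducing the problem to ordinary ShortestPath, for which a good algorithm (Dijkstra's, here) already exists. The graph format and function names are illustrative assumptions, not part of the original text.

```python
import heapq
import math

def dijkstra(graph, s):
    # graph: dict mapping u -> list of (v, weight) edges, weights >= 0.
    # Returns dict of shortest-path distances from s.
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue                    # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def most_reliable(prob_graph, s, t):
    # Transform: maximizing a product of probabilities in (0, 1]
    # is the same as minimizing a sum of -log(p) edge weights.
    g = {u: [(v, -math.log(p)) for v, p in nbrs]
         for u, nbrs in prob_graph.items()}
    return math.exp(-dijkstra(g, s)[t])

pg = {'a': [('b', 0.9), ('c', 0.5)], 'b': [('c', 0.8)]}
print(most_reliable(pg, 'a', 'c'))   # ~0.72, via a->b->c, beating the direct 0.5 edge
```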

Of these approaches, TransformAndConquer is usually the easiest to apply when it works, because the hard work of designing a good algorithm and proving its correctness and performance has already been done. DynamicProgramming is usually effective when a problem is efficiently solvable at all, but it requires some creativity to find a working algorithm. The GreedyMethod produces algorithms very quickly; sadly, most of them don't work, and for those that do it may be hard to prove that they work.
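The contrast between these last two can be seen in a small coin-change sketch (the denominations are a hypothetical example chosen to make the point): dynamic programming builds the optimal answer from optimal sub-answers, while the obvious greedy rule of always taking the largest coin that fits runs faster but can be beaten.

```python
def min_coins(coins, amount):
    # DynamicProgramming: dp[a] = fewest coins summing to exactly a.
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] < INF else None

def greedy_coins(coins, amount):
    # GreedyMethod: repeatedly take the largest coin that fits.
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

# With denominations {1, 5, 12} and amount 15, greedy takes 12+1+1+1
# (4 coins) while dynamic programming finds 5+5+5 (3 coins).
print(min_coins([1, 5, 12], 15))     # 3
print(greedy_coins([1, 5, 12], 15))  # 4
```

With US-style denominations like {1, 5, 10, 25} the greedy rule happens to be optimal, which illustrates the other half of the claim above: even when a greedy algorithm works, showing *why* it works takes a separate argument.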

For many problems, none of these techniques work. Many (most?) combinatorial optimization problems do not appear to have efficient solutions. We can prove that a particular problem A is hard (under the assumption that P≠NP) by showing that solving A quickly would give us a quick solution to some other known hard problem B. We will talk about this at length when we discuss the PvsNp question.