Algorithms for Linear Programming

Motivation

Linear programming is in P, but the polynomial-time algorithms for it are far more interesting than a typical \(O(n^3)\) recipe. The two main families — the simplex method and interior-point methods — represent fundamentally different geometric strategies (walk vertices vs. walk through the interior), and each remains an active research area decades after first appearing.

Understanding how LPs are actually solved is essential for using them as a subroutine: the choice of algorithm affects which formulations are practical, how warm starts work, and how fractional solutions are recovered from LP relaxations (Dantzig 1963; Karmarkar 1984; Boyd and Vandenberghe 2004).

The simplex method

The simplex method is covered in detail separately; a summary follows.

Geometric idea. If an LP has a finite optimum, it is attained at a vertex of the feasible polyhedron. The simplex method starts at one vertex and walks along edges of the polyhedron, moving to a neighbouring vertex whose objective value is at least as good, until no improving neighbour exists.

Algebraic implementation. Each vertex corresponds to a basic feasible solution: a choice of \(m\) basic variables whose constraint columns are linearly independent (the basis), with all non-basic variables set to zero. A simplex step is a pivot: swap one basic variable for a non-basic variable, moving to a neighbouring vertex. The reduced cost of a non-basic variable measures how the objective changes per unit increase; in a minimization problem, a variable with negative reduced cost is a candidate to enter the basis.
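The pivot mechanics can be sketched in a few dozen lines. What follows is a minimal, illustrative tableau simplex (names are ours) for \(\max c^\top x\) s.t. \(A x \le b\), \(x \ge 0\) with \(b \ge 0\), so the slack variables supply a starting basic feasible solution. It uses Dantzig's most-negative-coefficient entering rule with no anti-cycling safeguard, so it is a teaching sketch, not a robust solver:

```python
def simplex_max(c, A, b):
    """Tableau simplex for max c.x s.t. A x <= b, x >= 0, with b >= 0.
    Returns (optimal value, solution). Assumes a bounded problem."""
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack columns, then the objective row
    # holding negated reduced costs.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # the slacks start in the basis
    while True:
        # Entering variable: most negative objective-row entry (Dantzig rule).
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-12:
            break                          # no improving direction: optimal
        # Leaving variable: minimum-ratio test keeps the next vertex feasible.
        _, row = min((T[i][-1] / T[i][col], i)
                     for i in range(m) if T[i][col] > 1e-12)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):             # eliminate the pivot column
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [T[i][j] - f * T[row][j] for j in range(n + m + 1)]
        basis[row] = col                   # the pivot: swap basis membership
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[-1][-1], x
```

For example, \(\max x_1 + 2 x_2\) subject to \(x_1 + x_2 \le 4\), \(x_1 \le 2\) pivots once and stops at the vertex \((0, 4)\) with value 8.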

Cycling and degeneracy. When several constraints are tight at a vertex (degeneracy), some pivots leave the objective unchanged, and the algorithm can cycle. Standard pivot rules (Bland’s rule (Bland 1977), lexicographic rule) eliminate cycling.

Complexity. In the worst case, simplex visits an exponential number of vertices (Klee and Minty 1972). But on real-world and random instances, simplex typically takes a number of pivots polynomial in \(n\) and \(m\). Smoothed analysis explains the gap: simplex runs in expected polynomial time under small random perturbations of the input.

Warm start. A great strength of simplex: if the LP changes slightly (one constraint added, one cost coefficient perturbed), the previous optimum is usually still feasible or near-feasible, and simplex resumes from it cheaply. This is why solvers used in branch-and-bound for ILP prefer simplex.

The ellipsoid method

Geometric idea. Maintain an ellipsoid containing the optimum. At each step, query a separating hyperplane (a constraint violated by the centre) and shrink the ellipsoid to a smaller one still containing the optimum (Khachiyan 1979).
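This loop can be sketched directly from the standard central-cut update formulas (which require \(n \ge 2\)); the code below, with illustrative names, searches for a feasible point of \(\{x : A x \le b\}\), using the violated constraint itself as the separating hyperplane. It illustrates the geometry, not an efficient implementation:

```python
import math

def ellipsoid_feasible(A, b, R=10.0, max_iter=200, tol=1e-9):
    """Central-cut ellipsoid method for finding x with A x <= b.
    Assumes the feasible region lies in the radius-R ball at the origin
    and n >= 2. Returns a feasible point, or None on failure."""
    n = len(A[0])
    c = [0.0] * n                                  # ellipsoid centre
    P = [[R * R if i == j else 0.0 for j in range(n)]
         for i in range(n)]                        # ellipsoid shape matrix
    for _ in range(max_iter):
        # Separation oracle: find a constraint violated at the centre.
        g = next((ai for ai, bi in zip(A, b)
                  if sum(a * x for a, x in zip(ai, c)) > bi + tol), None)
        if g is None:
            return c                               # centre is feasible
        Pg = [sum(P[i][j] * g[j] for j in range(n)) for i in range(n)]
        gPg = sum(g[i] * Pg[i] for i in range(n))
        if gPg <= tol:
            return None                            # ellipsoid degenerated
        s = math.sqrt(gPg)
        # Shift the centre away from the violated half-space ...
        c = [c[i] - Pg[i] / ((n + 1) * s) for i in range(n)]
        # ... and shrink the shape matrix (central-cut update), so the new
        # ellipsoid still contains the feasible half of the old one.
        f = n * n / (n * n - 1.0)
        P = [[f * (P[i][j] - 2.0 / (n + 1) * Pg[i] * Pg[j] / gPg)
              for j in range(n)] for i in range(n)]
    return None
```

Each step shrinks the ellipsoid's volume by a factor of roughly \(e^{-1/(2(n+1))}\), which is where the polynomial bound comes from and also why convergence is slow in practice.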

Why it matters historically. Ellipsoid was the first algorithm proven to solve LP in polynomial time (1979), settling a long-standing open problem.

Why it’s not used in practice. The polynomial bound is loose, the constants are huge, and each iteration is expensive. The method is mostly useful in theory for proving polynomial-time solvability of LPs with exponentially many constraints, given a polynomial-time separation oracle.

Interior-point methods

Geometric idea. Stay strictly inside the feasible region and follow a smooth path (the central path) to the optimum, controlled by a barrier function that diverges at the boundary.

For the LP \(\min c^\top x\) s.t. \(A x = b, x \ge 0\), the logarithmic barrier is \(\phi(x) = -\sum_i \log x_i\), and the central path is \[ x^*(\mu) = \arg\min_x \; c^\top x + \mu \, \phi(x), \quad A x = b, x > 0. \] As \(\mu \to 0\), \(x^*(\mu)\) tends to the optimum.

Algorithm sketch. Newton’s method finds an approximate \(x^*(\mu)\), then \(\mu\) is reduced by a constant factor and the process repeats. Each Newton step solves a linear system in the \((n+m) \times (n+m)\) KKT matrix of the barrier subproblem (or, after elimination, a smaller normal-equations system).
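A toy version of this loop, under simplifying assumptions (names are ours): a two-variable inequality-form LP \(\min c^\top x\) s.t. \(A x \le b\) with a strictly feasible starting point, so each Newton system is a closed-form \(2 \times 2\) solve rather than a KKT factorization:

```python
def barrier_lp(c, A, b, x0, mu0=1.0, mu_min=1e-6, shrink=0.2):
    """Log-barrier interior-point sketch for min c.x s.t. A x <= b in two
    variables. x0 must be strictly feasible (A x0 < b). Follows the central
    path x*(mu) as mu decreases. Returns an approximate optimum."""
    x = list(x0)
    mu = mu0
    while mu > mu_min:
        for _ in range(50):                      # Newton's method at this mu
            slack = [bi - ai[0]*x[0] - ai[1]*x[1] for ai, bi in zip(A, b)]
            # Gradient of c.x + mu*phi(x), with phi(x) = -sum_i log(slack_i).
            g = [c[k] + mu * sum(ai[k] / s for ai, s in zip(A, slack))
                 for k in (0, 1)]
            if g[0]*g[0] + g[1]*g[1] < 1e-12:
                break                            # converged at this mu
            # Hessian: mu * sum_i a_i a_i^T / slack_i^2 (positive definite).
            H = [[mu * sum(ai[p]*ai[q] / s**2 for ai, s in zip(A, slack))
                  for q in (0, 1)] for p in (0, 1)]
            det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
            d = [(-g[0]*H[1][1] + g[1]*H[0][1]) / det,   # Newton direction
                 (-g[1]*H[0][0] + g[0]*H[1][0]) / det]
            # Damp the step so the next iterate stays strictly feasible.
            t = 1.0
            while any(bi - ai[0]*(x[0]+t*d[0]) - ai[1]*(x[1]+t*d[1]) <= 0
                      for ai, bi in zip(A, b)):
                t *= 0.5
            x = [x[0] + t*d[0], x[1] + t*d[1]]
        mu *= shrink                             # step down the central path
    return x
```

On \(\min -x_1 - x_2\) over the unit box (\(0 \le x_i \le 1\)), the iterates approach the vertex \((1, 1)\) from the interior, never touching the boundary.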

Complexity. In theory, \(O(\sqrt{n} \log(1/\epsilon))\) Newton iterations suffice, each costing \(O(n^{2.37})\) via fast matrix multiplication, for a total of \(O(n^{2.87} \log(1/\epsilon))\), which beats simplex asymptotically on dense LPs, especially for very large instances.

Karmarkar’s algorithm (Karmarkar 1984) was the first practical polynomial-time algorithm for LP, kicking off the modern interior-point era.

Choosing an algorithm

Situation                                                   Preferred
----------------------------------------------------------  -----------------------------------------------
Sparse LP with thousands of variables                       Simplex (warm starts, low memory)
Very large dense LP                                         Interior point (single solve, fewer iterations)
Inside branch-and-bound, many similar LPs                   Simplex (warm starts)
Exponentially many constraints, separation oracle           Ellipsoid or cutting-plane simplex
Conic / quadratic / semidefinite extensions                 Interior point (only family that generalizes)

Modern industrial solvers (CPLEX, Gurobi, HiGHS) implement both simplex and interior-point and choose between them automatically based on problem size and structure.

Special-structure algorithms

LPs that arise as network-flow problems can be solved much faster than general LPs by exploiting their structure. Maximum flow is solvable in \(\tilde O(m^{4/3})\), and in almost-linear time with recent breakthroughs (Chen et al. 2022); minimum-cost flow in \(\tilde O(m^{3/2})\), both far better than the general LP bound. The lesson: when the constraint matrix has structure (totally unimodular, network, banded), specialized algorithms beat general-purpose LP solvers.
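As an illustration of how structure helps, the classical Edmonds–Karp algorithm solves max flow in \(O(VE^2)\) by repeatedly augmenting along shortest residual paths, with no LP machinery at all. A compact sketch (the dict-of-dicts capacity format and function name are ours; the input dict is mutated into the residual graph):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow. cap[u][v] is the capacity of edge u->v;
    cap is mutated into the residual graph. Returns the max flow value."""
    # Ensure every edge has a reverse edge in the residual graph.
    for u in list(cap):
        for v in list(cap[u]):
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting s-t path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                  # no augmenting path: flow is maximum
        # Collect the path edges and find the bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        # Push flow: decrease forward capacities, increase reverse ones.
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push
```

The same problem written as an LP would have one variable per edge and one constraint per node, and a general solver would ignore the graph structure entirely.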

Connection to LP duality

Both simplex and interior-point methods can be described in primal-dual form: maintain a feasible primal solution and a feasible dual solution that converge to a complementary slackness pair. Primal-dual interior-point methods exploit this symmetry directly and are the most numerically robust class. The theory of LP duality is essential for understanding why these algorithms work.
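A complementary slackness pair can be checked mechanically. The sketch below (the helper name is ours) verifies optimality certificates for the symmetric pair \(\min\{c^\top x : A x \ge b,\ x \ge 0\}\) and \(\max\{b^\top y : A^\top y \le c,\ y \ge 0\}\) on a small hand-solved example:

```python
def check_certificates(c, A, b, x, y, tol=1e-9):
    """Verify that x and y certify each other's optimality for the pair
        min c.x  s.t. A x >= b, x >= 0      (primal)
        max b.y  s.t. A^T y <= c, y >= 0    (dual)
    via feasibility, equal objectives, and complementary slackness."""
    m, n = len(A), len(c)
    primal_slack = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                    for i in range(m)]
    dual_slack = [c[j] - sum(A[i][j] * y[i] for i in range(m))
                  for j in range(n)]
    # Both points must be feasible for their own problem.
    assert all(s >= -tol for s in primal_slack) and all(v >= -tol for v in x)
    assert all(s >= -tol for s in dual_slack) and all(v >= -tol for v in y)
    # Strong duality: the objective values coincide at the optimum.
    assert abs(sum(ci * xi for ci, xi in zip(c, x)) -
               sum(bi * yi for bi, yi in zip(b, y))) < tol
    # Complementary slackness: a positive variable forces a tight constraint.
    assert all(abs(yi * s) < tol for yi, s in zip(y, primal_slack))
    assert all(abs(xj * s) < tol for xj, s in zip(x, dual_slack))
    return True

# Hand-solved pair: min 2*x1 + 3*x2 s.t. x1 + x2 >= 2, x >= 0 has optimum
# x = (2, 0); its dual max 2*y s.t. y <= 2, y <= 3, y >= 0 has y = 2.
check_certificates([2.0, 3.0], [[1.0, 1.0]], [2.0], [2.0, 0.0], [2.0])
```

Primal-dual interior-point methods maintain exactly these quantities, driving the products \(y_i \cdot (\text{primal slack})_i\) and \(x_j \cdot (\text{dual slack})_j\) toward zero together.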

References

Bland, Robert G. 1977. “New Finite Pivoting Rules for the Simplex Method.” Mathematics of Operations Research 2 (2): 103–7. https://doi.org/10.1287/moor.2.2.103.
Boyd, Stephen, and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge University Press. https://www.cambridge.org/core/books/convex-optimization/E413CEF23D00BD463DCCE0600810D3FA.
Chen, Li, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. 2022. “Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.” In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS). IEEE.
Dantzig, George B. 1963. Linear Programming and Extensions. Princeton University Press. https://press.princeton.edu/books/paperback/9780691059136/linear-programming-and-extensions.
Karmarkar, N. 1984. “A New Polynomial-Time Algorithm for Linear Programming.” Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing - STOC ’84, STOC ’84, 302–11. https://doi.org/10.1145/800057.808695.
Khachiyan, Leonid G. 1979. “A Polynomial Algorithm in Linear Programming.” Soviet Mathematics Doklady 20: 191–94. https://www.mathnet.ru/eng/dan42319.
Klee, Victor, and George J. Minty. 1972. “How Good Is the Simplex Algorithm?” In Inequalities III, edited by Oved Shisha. Academic Press.