Interval Scheduling

Motivation

Suppose a single resource — a meeting room, a CPU, a piece of equipment — receives a list of requests, each with a fixed start and end time. Two overlapping requests cannot both be granted. Which subset should we accept to maximize the number of fulfilled requests?

This interval scheduling problem is the canonical example of a problem solved optimally by a greedy algorithm: at each step, commit to the locally most attractive choice and never revisit it. The right local choice — earliest finishing time — gives an algorithm that is correct, runs in \(O(n \log n)\), and admits a clean exchange argument (Kleinberg and Tardos 2005).

The lesson generalizes: greedy algorithms that succeed typically do so because there is an invariant (“the greedy solution is at least as good as any optimal one on every prefix”) that an exchange argument can establish.

Problem

Input: \(n\) intervals \([s_i, f_i)\) with \(s_i < f_i\). Two intervals conflict if they overlap, i.e., they share a point in time. Because the intervals are half-open, one interval may start exactly when another finishes without conflict.

Output: a maximum-size subset of pairwise non-conflicting intervals.
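With half-open intervals, the conflict test reduces to a one-line comparison. A minimal sketch in Python (the function name is illustrative):

```python
def conflict(a, b):
    """Two half-open intervals [s, f) conflict iff each starts before the other ends."""
    (s1, f1), (s2, f2) = a, b
    return s1 < f2 and s2 < f1

assert conflict((0, 5), (4, 6))        # overlap on [4, 5)
assert not conflict((0, 3), (3, 7))    # touching endpoints do not conflict
```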

Examples

Several plausible greedy heuristics fail. Each one is wrong because it optimizes for the wrong feature of an interval.

  • Earliest start time. An early-starting but very long interval can block many short later ones. On \(\{[0, 10), [1, 3), [4, 6), [7, 9)\}\), earliest start commits to \([0, 10)\) and accepts only one interval; the optimum is three.
  • Shortest duration. A short interval can sit between two non-conflicting longer ones, blocking both. On \(\{[0, 5), [4, 6), [5, 10)\}\), the shortest interval is \([4, 6)\), which conflicts with both others; the optimum is two.
  • Fewest conflicts. A more subtle rule that also fails on careful constructions: an interval with few conflicts can still occupy a position that displaces a larger optimum elsewhere.

These misconceptions show that the greedy choice rule matters: optimizing for the obvious-looking feature of an interval is not enough.
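The failed heuristics above differ only in the sort key, which makes them easy to compare side by side. A sketch using a generic greedy scan on the examples from the list (the helper name is illustrative, not from the text):

```python
def greedy_by(intervals, key):
    """Generic greedy: scan intervals in the order given by key, accept each compatible one."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=key):
        if s >= last_finish:   # compatible with everything accepted so far
            chosen.append((s, f))
            last_finish = f
    return chosen

ivals = [(0, 10), (1, 3), (4, 6), (7, 9)]
assert len(greedy_by(ivals, key=lambda i: i[0])) == 1         # earliest start: only [0, 10)
assert len(greedy_by(ivals, key=lambda i: i[1])) == 3         # earliest finish: optimal

short = [(0, 5), (4, 6), (5, 10)]
assert len(greedy_by(short, key=lambda i: i[1] - i[0])) == 1  # shortest duration: only [4, 6)
assert len(greedy_by(short, key=lambda i: i[1])) == 2         # earliest finish: optimal
```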

Key Ideas

The greedy rule that succeeds is earliest finish time: among all compatible candidates, accept the interval whose finish time is smallest.

The reason is that finishing early frees the resource as soon as possible, leaving the most room for future commitments. Every other rule above trades this freedom for a criterion that turns out not to be aligned with the objective: earliest finish time guarantees the resource is available again at the earliest possible moment.

This local rule extends to a global guarantee through an exchange argument: at every step, the greedy choice finishes no later than the corresponding choice in any optimal schedule, so the greedy never falls behind. The formal proof appears under Correctness.

Algorithm

sort intervals so that f_1 <= f_2 <= ... <= f_n
A = []
last_finish = -infinity
for i = 1 to n:
    if s_i >= last_finish:
        A.append(i)
        last_finish = f_i
return A
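The pseudocode translates directly into Python. A minimal, self-contained version (the function name is an assumption, not from the text):

```python
def interval_scheduling(intervals):
    """Earliest-finish-time greedy on a list of (start, finish) pairs.
    Returns a maximum-size subset of pairwise non-conflicting intervals."""
    accepted, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):  # sort by finish time
        if s >= last_finish:   # compatible with everything accepted so far
            accepted.append((s, f))
            last_finish = f
    return accepted

print(interval_scheduling([(0, 10), (1, 3), (4, 6), (7, 9)]))  # [(1, 3), (4, 6), (7, 9)]
```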

Walkthrough

Seven intervals

Seven requests on a timeline, sorted by finish time. The greedy algorithm scans left to right and accepts the first compatible interval at each step.

[Figure: timeline from \(t = 0\) to \(t = 12\) showing the seven requests sorted by finish time. The greedy accepts intervals 1, 4, and 7 and rejects 2, 3, 5, and 6. Total accepted = 3, which is optimal; each accepted interval ends as early as possible.]

After accepting interval \(1\) (which ends at \(t = 3\)), intervals \(2\) and \(3\) are skipped because they start before \(t = 3\). Interval \(4\) starts exactly at \(t = 3\), so it is accepted. After \(4\) ends at \(t = 7\), intervals \(5\) and \(6\) start too early; \(7\) starts at \(t = 8 \ge 7\), so it is accepted. The greedy output has \(|A| = 3\), which is optimal; for example, \(\{2, 5, 7\}\) is another maximum-size schedule.
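The walkthrough's exact endpoints are not reproduced above, but one hypothetical set consistent with it (interval 1 ends at 3, interval 4 spans \([3, 7)\), interval 7 starts at 8) behaves identically:

```python
# Hypothetical endpoints consistent with the walkthrough (the figure's exact
# values are not given); already sorted by finish time.
ivals = [(0, 3), (1, 4), (2, 5), (3, 7), (5, 8), (6, 9), (8, 12)]

accepted, last_finish = [], float("-inf")
for i, (s, f) in enumerate(ivals, start=1):
    if s >= last_finish:
        accepted.append(i)
        last_finish = f

print(accepted)  # [1, 4, 7] -- three intervals, as in the walkthrough
```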

Correctness

Let \(A = \{a_1, a_2, \dots, a_k\}\) be the greedy output, indexed in selection order, and let \(O = \{o_1, o_2, \dots, o_m\}\) be any optimal solution, sorted by finish time.

Lemma. For every \(r \le k\), \(f(a_r) \le f(o_r)\).

Proof by induction on \(r\). For \(r = 1\), the greedy picks the globally earliest finish, so \(f(a_1) \le f(o_1)\). For \(r > 1\), by induction \(f(a_{r-1}) \le f(o_{r-1})\). Since \(o_r\) is compatible with \(o_{r-1}\), we have \(s(o_r) \ge f(o_{r-1}) \ge f(a_{r-1})\), so \(o_r\) is one of the candidates the greedy considers when choosing \(a_r\). The greedy picks the one with smallest finish, so \(f(a_r) \le f(o_r)\).

Theorem. \(|A| = |O|\).

Proof. Suppose for contradiction \(k < m\). Then \(o_{k+1}\) exists in \(O\). By the lemma, \(f(a_k) \le f(o_k)\), so \(s(o_{k+1}) \ge f(o_k) \ge f(a_k)\), meaning \(o_{k+1}\) is compatible with everything the greedy accepted. But the greedy only terminates when no compatible interval remains, so it would have accepted a further interval, contradicting \(|A| = k\).
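The theorem can also be sanity-checked empirically by comparing the greedy count against a brute-force optimum on small random instances. A sketch (the helper names are illustrative):

```python
from itertools import combinations
from random import randint, seed

def greedy_size(ivals):
    """Number of intervals accepted by the earliest-finish-time greedy."""
    count, last = 0, float("-inf")
    for s, f in sorted(ivals, key=lambda iv: iv[1]):
        if s >= last:
            count, last = count + 1, f
    return count

def brute_size(ivals):
    """Maximum-size compatible subset by exhaustive search (exponential; small n only)."""
    for r in range(len(ivals), 0, -1):
        for subset in combinations(ivals, r):
            ss = sorted(subset, key=lambda iv: iv[1])
            # Sorted by finish, the subset is pairwise compatible iff each
            # consecutive pair is compatible.
            if all(ss[i][1] <= ss[i + 1][0] for i in range(len(ss) - 1)):
                return r
    return 0

seed(0)
for _ in range(200):
    ivals = [(s, s + randint(1, 5)) for s in (randint(0, 15) for _ in range(7))]
    assert greedy_size(ivals) == brute_size(ivals)
```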

Complexity and Tradeoffs

Sorting by finish time: \(O(n \log n)\). Single linear pass: \(O(n)\). Total: \(\Theta(n \log n)\).

The bottleneck is the sort. If intervals already arrive sorted by finish time (for example, from a streaming source), the algorithm runs in \(\Theta(n)\) time and \(\Theta(1)\) extra memory beyond the output.

Important tradeoffs:

  • Equal weights only. Greedy assumes every interval is worth the same. With per-interval values, the technique fails outright and dynamic programming is required (weighted interval scheduling).
  • Counting objective only. Greedy maximizes the number of accepted requests; if the goal changes (e.g., to minimizing the resources needed to schedule everything), a different greedy choice is needed.
  • Unique optima. Multiple maximum-size schedules can exist; the algorithm returns one valid choice.
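For the weighted variant mentioned above, the standard dynamic program sorts by finish time and, for each job, takes the better of skipping it or combining its value with the best schedule among jobs that finish by its start. A sketch, assuming jobs are (start, finish, value) triples:

```python
from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value). Returns the maximum total value
    of a pairwise-compatible subset. O(n log n) via sorting + binary search."""
    jobs = sorted(jobs, key=lambda j: j[1])        # by finish time
    finishes = [f for _, f, _ in jobs]
    best = [0] * (len(jobs) + 1)                   # best[i]: optimum over first i jobs
    for i, (s, f, v) in enumerate(jobs, start=1):
        p = bisect_right(finishes, s, 0, i - 1)    # jobs finishing by this job's start
        best[i] = max(best[i - 1], best[p] + v)    # skip job i, or take it
    return best[-1]

# A count-maximizing greedy would pick the two low-value jobs (total 2);
# the DP correctly takes the single high-value middle job.
print(weighted_interval_scheduling([(0, 3, 1), (2, 5, 10), (4, 7, 1)]))  # 10
```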

When to Use It

Earliest-finish-time greedy is the right tool when a single resource serves a fixed set of equal-value requests and the goal is to maximize how many are honored. The broader pattern is the same wherever a “do the locally most attractive thing, never undo it” rule can be justified by an exchange argument.

Variants and the matching technique:

  • Equal-value intervals, single resource: earliest finish time greedy, \(\Theta(n \log n)\).
  • Per-interval values: weighted interval scheduling (DP), \(\Theta(n \log n)\).
  • Schedule every interval, minimize resources: interval partitioning (sort by start time, assign to any free resource), \(\Theta(n \log n)\). The answer equals the depth, the maximum number of intervals overlapping any single point.
  • Jobs with deadlines, minimize maximum lateness: earliest deadline first.
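The interval-partitioning variant admits a short heap-based implementation: scan by start time and reuse the resource that frees up earliest, if any. A sketch (the function name is illustrative):

```python
import heapq

def interval_partitioning(ivals):
    """Assign every (start, finish) interval to a resource; return the number
    of resources used, which equals the depth of the instance."""
    free = []  # min-heap of finish times, one entry per resource in use
    for s, f in sorted(ivals):                 # by start time
        if free and free[0] <= s:
            heapq.heapreplace(free, f)         # reuse the earliest-finishing resource
        else:
            heapq.heappush(free, f)            # open a new resource
    return len(free)

# Depth of this instance is 2: [0, 5) overlaps both others, which are disjoint.
print(interval_partitioning([(0, 5), (1, 3), (4, 6)]))  # 2
```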

References

Kleinberg, Jon, and Éva Tardos. 2005. Algorithm Design. Pearson/Addison-Wesley. https://www.pearson.com/en-us/subject-catalog/p/algorithm-design/P200000003259.