Single Machine Scheduling
Motivation
A single processor must run a fixed batch of jobs back-to-back, each with its own processing time and possibly a deadline, weight, or release time. A print queue, a CPU running batch tasks, a manufacturing line with one bottleneck — all share this shape. The single-machine setting is also the cleanest place to study scheduling theory: which objectives are tractable, which require exchange arguments to crack, and which become NP-hard once even modest structure is added (Kleinberg and Tardos 2005).
This article fixes the shared problem framework and surveys the standard objectives. Each objective leads to a different optimization problem, and each is solved (or proven hard) by techniques specific to that objective.
Problem
- \(n\) jobs, where job \(i\) has processing time \(t_i \ge 0\). Other per-job data — deadline \(d_i\), weight \(w_i\), release time \(r_i\) — appear as the variant requires.
- A schedule assigns each job a contiguous time interval \([s_i, f_i)\) with \(f_i = s_i + t_i\), and the intervals are pairwise disjoint.
- Once started, a job runs to completion (no preemption), unless explicitly stated otherwise.
- The completion time of job \(i\) is \(f_i\). Other useful quantities are the lateness \(\ell_i = \max(0, f_i - d_i)\) (the quantity much of the scheduling literature calls tardiness, reserving lateness for the signed difference \(f_i - d_i\)) and the indicator \(\mathbf{1}[f_i > d_i]\) that job \(i\) misses its deadline.
- The objective is a function of these per-job quantities. Different objectives lead to different problems and different algorithms.
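The per-job quantities above are easy to compute for any candidate order. A minimal Python sketch of the framework (the `Job` fields and the function name are illustrative, not fixed by the text):

```python
from dataclasses import dataclass

@dataclass
class Job:
    t: float          # processing time t_i
    d: float = 0.0    # deadline d_i, when the objective uses one
    w: float = 1.0    # weight w_i, when the objective uses one

def evaluate(order):
    """Run the jobs back-to-back with no idle time; return each job's
    finish time f_i and lateness max(0, f_i - d_i)."""
    now = 0.0
    finish, lateness = [], []
    for job in order:
        now += job.t                          # f_i = s_i + t_i, s_i = previous f
        finish.append(now)
        lateness.append(max(0.0, now - job.d))
    return finish, lateness
```

Any objective below is then a function of these lists: `max(lateness)` for maximum lateness, `sum(finish)` for total completion time, and so on.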
Key Insights
Idle time never helps. For every objective considered below, an optimal schedule has no idle time: compressing any gap moves later jobs earlier, which can only reduce each finish time and thus each per-job penalty. So the search reduces to choosing a permutation of the \(n\) jobs.
This single observation — verified separately for each objective — turns the problem into a sequencing question and is the foundation for every greedy result below.
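Once idle time is ruled out, any objective can be optimized for small \(n\) by brute force over the \(n!\) orders, which also makes a handy correctness check for the greedy rules. A sketch, with illustrative names and jobs encoded as \((t_i, d_i)\) pairs:

```python
from itertools import permutations

def best_order(jobs, objective):
    """Minimize objective over all permutations of (t, d) job pairs.
    objective maps a list of (finish_time, deadline) pairs to a number."""
    def finishes(order):
        now, out = 0, []
        for t, d in order:
            now += t
            out.append((now, d))
        return out
    return min(permutations(jobs), key=lambda p: objective(finishes(p)))

def max_lateness(fs):
    """Maximum lateness max_i max(0, f_i - d_i) of a finished schedule."""
    return max(max(0, f - d) for f, d in fs)
```

On the instance `[(3, 4), (1, 2)]`, brute force recovers the earliest-deadline-first order `((1, 2), (3, 4))`, whose maximum lateness is 0.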
Objectives
The standard single-machine objectives, with the rule that solves them when there are no release times:
| Objective | Optimal rule | Technique | Time |
|---|---|---|---|
| Maximum lateness \(\max_i \ell_i\) | Earliest deadline first | Exchange argument (see minimizing maximum lateness) | \(\Theta(n \log n)\) |
| Total completion time \(\sum_i f_i\) | Shortest processing time first | Exchange argument on inversions | \(\Theta(n \log n)\) |
| Weighted total completion \(\sum_i w_i f_i\) | Sort by \(t_i / w_i\) ascending | Exchange argument | \(\Theta(n \log n)\) |
| Number of late jobs \(\sum_i \mathbf{1}[f_i > d_i]\) | Moore’s algorithm | Greedy with replacement | \(\Theta(n \log n)\) |
| Total tardiness \(\sum_i \max(0, f_i - d_i)\) | NP-hard | Pseudo-polynomial DP | — |
| Weighted total tardiness | NP-hard | Branch-and-bound, heuristics | — |
Two patterns stand out. First, min-max and min-sum objectives behave differently: max lateness yields to a clean exchange argument, while summing the same penalty across jobs (total tardiness) is NP-hard even without weights, and strongly NP-hard with them. Second, the sort key depends on the objective — deadline for max lateness, processing time for total completion, ratio \(t_i / w_i\) for weighted completion — so there is no single “right” greedy rule for the family as a whole.
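Each polynomial row in the table amounts to a single sort, plus one heap pass for Moore’s algorithm. A sketch of all four rules (the function names and the dict encoding of jobs are illustrative):

```python
import heapq

def edf(jobs):
    """Max lateness: earliest deadline first."""
    return sorted(jobs, key=lambda j: j["d"])

def spt(jobs):
    """Total completion time: shortest processing time first."""
    return sorted(jobs, key=lambda j: j["t"])

def weighted_spt(jobs):
    """Weighted total completion: ascending ratio t_i / w_i."""
    return sorted(jobs, key=lambda j: j["t"] / j["w"])

def moore(jobs):
    """Number of late jobs: schedule in EDF order and, whenever a deadline
    is missed, drop the longest job scheduled so far.  Returns the minimum
    number of late jobs."""
    kept, late, now = [], 0, 0
    for j in sorted(jobs, key=lambda j: j["d"]):
        heapq.heappush(kept, -j["t"])   # max-heap of processing times
        now += j["t"]
        if now > j["d"]:                # current set cannot all be on time
            now += heapq.heappop(kept)  # evict the longest (popped value is -t)
            late += 1
    return late
```

The eviction step is the “greedy with replacement” from the table: dropping the longest scheduled job frees the most time for the deadlines still to come.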
Complications
- Release times. If each job has a release time \(r_i\) before which it cannot start, even minimizing maximum lateness becomes strongly NP-hard. Preemptive variants sometimes recover tractability — for example, preemptive max lateness with release times is solved by an EDF variant that, at every moment, runs the released unfinished job with the earliest deadline, preempting whenever a job with a tighter deadline is released.
- Precedence constraints. When jobs depend on one another, every schedule must be a linear extension of the partial order; a topological sort yields a feasible order, but optimality depends on the objective. Min-max objectives such as maximum lateness remain polynomial via Lawler’s rule, which builds the schedule from the back, repeatedly placing last a job with no unscheduled successor and minimal resulting penalty. Minimizing total (weighted) completion time under precedence constraints, by contrast, is NP-hard.
- Multiple machines. With two or more identical parallel machines, nearly every objective becomes NP-hard. Approximation algorithms and list-scheduling heuristics dominate the literature.
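The preemptive recovery mentioned for release times can be sketched as a small event simulation: run the released job with the earliest deadline, switching whenever a tighter deadline arrives. A sketch, with jobs encoded as illustrative \((r_i, t_i, d_i)\) triples:

```python
import heapq

def preemptive_edf(jobs):
    """Jobs are (release, processing, deadline) triples.  At every moment run
    the released job with the earliest deadline, preempting on tighter
    releases; return the maximum lateness max(0, f_i - d_i)."""
    jobs = sorted(jobs)                       # by release time
    ready = []                                # heap of (deadline, remaining time)
    now, i, worst = 0, 0, 0
    while i < len(jobs) or ready:
        if not ready:                         # machine idles until next release
            now = max(now, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= now:
            r, t, d = jobs[i]                 # release newly arrived jobs
            heapq.heappush(ready, (d, t))
            i += 1
        d, rem = heapq.heappop(ready)         # earliest-deadline job runs
        next_release = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_release - now)    # run until done or preempted
        now += run
        if run < rem:                         # preempted: park the remainder
            heapq.heappush(ready, (d, rem - run))
        else:                                 # finished: record its lateness
            worst = max(worst, max(0, now - d))
    return worst
```

Each job enters and leaves the heap at most once per release event, so the simulation runs in \(O(n \log n)\) time.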
When to Use It
Reach for single-machine scheduling whenever a single resource serves a finite batch of jobs and the question is what order to run them in. The right objective drives the right algorithm:
- Worst-case deadline performance? See minimizing maximum lateness.
- Average-case completion time? Sort by processing time (shortest first).
- Jobs with fixed start and end times, maximizing how many you can accept? That is a different problem — see interval scheduling.
- Weighted selection of jobs with values? See weighted interval scheduling.