Alpha-Beta with Random Ordering Is O(b^{3d/4})
Claim
Alpha-beta search on a uniform \(b\)-ary minimax tree of depth \(d\) with uniformly random move ordering and i.i.d. uniform \([0,1]\) leaf values examines an expected
\[ E(d) = \Theta(b^{3d/4}) \]
leaf nodes (Pearl 1982). This lies strictly between the worst case \(\Theta(b^d)\) of no pruning and the best case \(\Theta(b^{d/2})\) of perfect move ordering (Knuth and Moore 1975).
Setup
Leaves are i.i.d. uniform on \([0,1]\). At each internal node the \(b\) children are examined in a uniformly random order. We track \(E_M(d)\) (expected leaves at a MAX-rooted depth-\(d\) tree) and \(E_N(d)\) (MIN-rooted).
Key distributions
Let \(F_M(d, x) = P(\text{minimax value} \leq x)\) at a MAX-rooted depth-\(d\) tree, and \(F_N(d,x)\) for MIN-rooted.
Starting from \(F(0, x) = x\) (leaves are uniform), successive levels apply:
\[ F_M(d, x) = F_N(d-1, x)^b, \qquad F_N(d, x) = 1 - (1 - F_M(d-1, x))^b. \]
For large \(d\) these distributions sharpen into near-step functions: the two-level map \(F \mapsto (1-(1-F)^b)^b\) has attracting fixed points at 0 and 1 and an unstable fixed point in between, so the minimax value concentrates around a constant in \((0,1)\). One level up from uniform children the shapes are exact: a MAX node's value is \(\text{Beta}(b,1)\) (CDF \(x^b\), skewed toward 1) and a MIN node's is \(\text{Beta}(1,b)\) (CDF \(1-(1-x)^b\), skewed toward 0). These one-level Beta shapes drive the pruning estimates below.
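The level recursion is easy to iterate numerically to watch the concentration happen. A minimal Python sketch (the choice \(b = 2\), the sample depths, and the probe points are illustrative only; for \(b = 2\) the unstable fixed point of the pair map works out to \((3-\sqrt{5})/2 \approx 0.382\)):

```python
# Iterate F_M(d, x) = F_N(d-1, x)^b and F_N(d, x) = 1 - (1 - F_M(d-1, x))^b
# bottom-up from uniform leaves F(0, x) = x, for a MAX-rooted tree.

def value_cdf(x, d, b):
    """P(minimax value <= x) for a MAX-rooted uniform b-ary tree of depth d."""
    F = x                                 # depth 0: leaf value is Uniform(0, 1)
    for k in range(1, d + 1):             # apply one level at a time, bottom-up
        if (d - k) % 2 == 0:              # levels sharing the root's parity are MAX
            F = F ** b
        else:                             # the remaining levels are MIN
            F = 1 - (1 - F) ** b
    return F

# For b = 2 the CDF sharpens into a step near (3 - 5 ** 0.5) / 2 ~ 0.382:
for x in (0.25, 0.382, 0.50, 0.75):
    print(x, [round(value_cdf(x, d, 2), 4) for d in (2, 6, 12)])
```

As depth grows, the printed CDF values below the fixed point fall toward 0 and those above it rise toward 1, i.e. the value distribution degenerates to a step.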
Cost recurrences
At a MAX node with current threshold \(\alpha\), children (MIN, depth \(d-1\)) are examined in random order:
- The first child is always fully evaluated: cost \(E_N(d-1)\).
- Each subsequent child needs full evaluation only if it might exceed \(\alpha\). Conditional on the first child returning value \(v_1\) (setting \(\alpha = v_1\)), the probability that the \(j\)-th child’s value exceeds \(v_1\) is \(1 - F_N(d-1, v_1)\). If it cannot beat \(v_1\), alpha-beta detects this at a cutoff inside that child, after exploring only part of the subtree; this partial cost is quantified below.
Integrating over the distribution of \(v_1\), Pearl shows:
\[ E_M(d) = E_N(d-1)\Bigl(1 + (b-1)\,p_N(d-1)\Bigr), \qquad E_N(d) = E_M(d-1)\Bigl(1 + (b-1)\,p_M(d-1)\Bigr), \]
where \(p_N(d-1)\) is the probability that a random MIN subtree’s value exceeds the running \(\alpha\) (equivalently, the expected fraction of children that need full evaluation), and \(p_M(d-1)\) is the symmetric quantity for MAX. Children that are cut off are charged zero cost in this first pass.
Under the asymptotic Beta distributions:
\[ p_N(d) \;\approx\; \frac{1}{b^{1/2}}, \qquad p_M(d) \;\approx\; \frac{1}{b^{1/2}}. \]
This gives \((1 + (b-1)/b^{1/2}) \approx b^{1/2}\) expected children examined per node, so:
\[ E_M(d) \approx b^{1/2} \cdot E_N(d-1), \qquad E_N(d) \approx b^{1/2} \cdot E_M(d-1). \]
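The per-node factor \(1 + (b-1)/b^{1/2} \approx b^{1/2}\) is easy to sanity-check: the relative error peaks at small \(b\) and tends to zero as \(b\) grows. A quick sketch over arbitrary sample values of \(b\):

```python
# Compare the expected number of children examined per node,
# 1 + (b - 1) / sqrt(b), against the approximation sqrt(b)
# used in the recurrence. The ratio tends to 1 as b grows.
for b in (2, 10, 100, 10_000):
    exact = 1 + (b - 1) / b ** 0.5
    approx = b ** 0.5
    print(b, round(exact, 3), round(approx, 3), round(exact / approx, 4))
```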
Solving the recurrence
Multiplying consecutive levels:
\[ E_M(d) \approx b^{1/2} \cdot E_N(d-1) \approx b^{1/2} \cdot b^{1/2} \cdot E_M(d-2) = b\cdot E_M(d-2). \]
But this gives \(E_M(d) = O(b^{d/2})\), the same as perfect ordering, which cannot be right. The missing factor is the cost of examining a child that turns out to be pruned: the recurrence above charged such children nothing, yet with random (rather than perfect) ordering a non-improving child must be partially explored before the cutoff is detected.
More carefully, the average cost of a pruned child is roughly \(E_N(d-1)/b^{1/4}\), because the cutoff typically triggers partway down the subtree rather than at the first grandchild. Summed over the \(\sim b\) pruned children this contributes about \(b^{3/4}\,E_N(d-1)\), an extra factor of \(b^{1/4}\) on top of the \(b^{1/2}\) from fully evaluated children at each level:
\[ E_M(d) \approx b^{1/2} \cdot b^{1/4} \cdot E_N(d-1) = b^{3/4}\cdot E_N(d-1). \]
Combined over \(d\) levels:
\[ E_M(d) = O\!\left((b^{3/4})^d\right) = O(b^{3d/4}). \quad \square \]
Pearl (1982) makes this argument precise using generating functions and the exact Beta-distribution recursion, establishing the \(\Theta\) bound (both upper and lower).
Numerical check
For \(b = 2\), empirical simulations give \(E(d) \approx c\cdot (2^{3/4})^d \approx c \cdot 1.68^d\). Compare:
| Ordering | Leaves examined |
|---|---|
| Worst case | \(2^d\) |
| Random | \(\approx 1.68^d\) (the exact branching factor here is the golden ratio, \(\approx 1.618\)) |
| Perfect | \(2^{d/2} \approx 1.41^d\) |
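The \(b = 2\) figures can be reproduced directly: run alpha-beta on an implicit tree, drawing a fresh \(\text{Uniform}(0,1)\) leaf on demand, and count leaf evaluations. A simulation sketch (the fail-hard formulation, depth 10, and 200 trials are arbitrary choices, not part of the original analysis):

```python
import random

def alphabeta(depth, b, alpha, beta, maximizing, count):
    """Fail-hard alpha-beta on an implicit uniform b-ary tree whose leaves
    are fresh i.i.d. Uniform(0,1) draws; count[0] tallies leaves examined.
    Because the leaves are i.i.d., visiting children left-to-right already
    realizes a uniformly random move ordering (no shuffle needed)."""
    if depth == 0:
        count[0] += 1
        return random.random()
    if maximizing:
        v = 0.0
        for _ in range(b):
            v = max(v, alphabeta(depth - 1, b, alpha, beta, False, count))
            alpha = max(alpha, v)
            if alpha >= beta:        # beta cutoff: remaining children pruned
                break
        return v
    else:
        v = 1.0
        for _ in range(b):
            v = min(v, alphabeta(depth - 1, b, alpha, beta, True, count))
            beta = min(beta, v)
            if beta <= alpha:        # alpha cutoff
                break
        return v

def mean_leaves(depth, b, trials=200):
    total = 0
    for _ in range(trials):
        count = [0]
        alphabeta(depth, b, 0.0, 1.0, True, count)
        total += count[0]
    return total / trials

m = mean_leaves(10, 2)
print("mean leaves at depth 10:", m, "per-ply factor:", round(m ** 0.1, 3))
```

With these settings the estimated per-ply factor typically lands around 1.6 to 1.7, above perfect ordering's 1.41 and below the no-pruning value 2.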
For \(b = 10\):
| Ordering | Leaves examined |
|---|---|
| Worst case | \(10^d\) |
| Random | \(10^{3d/4} \approx 5.6^d\) |
| Perfect | \(\sqrt{10}^d \approx 3.2^d\) |
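Both tables reduce to per-ply effective branching factors, recomputable in one line each (a trivial check of the heuristic exponents above):

```python
# Per-ply effective branching factors: worst case b, random ~ b^(3/4),
# perfect ~ b^(1/2), for the two table values of b.
for b in (2, 10):
    print(f"b={b}: worst {b}, random ~{b ** 0.75:.2f}, perfect ~{b ** 0.5:.2f}")
```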
The gap between random and perfect ordering widens with \(b\): for large \(b\), good move ordering has a substantial payoff.
Insight: why 3/4?
The exponent \(3/4 = 1/2 + 1/4\) reflects two independent sources of pruning work:
- \(b^{1/2}\) from the expected number of children examined at each node (random ordering cuts off after \(\sim\sqrt{b}\) tries on average).
- \(b^{1/4}\) overhead from imperfect pruning: unlike perfect ordering — where a pruned child is cut off after a single grandchild — random ordering requires exploring \(\sim b^{1/4}\) of a pruned subtree before the cutoff is triggered.
Perfect ordering eliminates the \(b^{1/4}\) overhead entirely, recovering \(b^{d/2}\).