Consistent Heuristics Are Admissible and Prevent Node Re-Expansion

Claims

Let \(h\) be a consistent heuristic: for every edge \((n, n')\) with cost \(c(n, n') \ge 0\),

\[ h(n) \le c(n, n') + h(n'), \]

with \(h(g) = 0\) for every goal \(g\). Then:

  1. \(h\) is admissible: \(h(n) \le h^*(n)\) for every node \(n\).
  2. \(f\) is non-decreasing along paths: for any edge \((n, n')\), \(f(n) \le f(n')\).
  3. A* expands each node at most once: when A* first pops a node \(n\), \(g(n) = g^*(n)\).

(Hart et al. 1968; Russell and Norvig 2020)

Claims 2 and 3 are specific to A* graph search with a closed set; admissibility alone is not sufficient for them.
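The consistency condition is easy to check mechanically on a finite graph. A minimal sketch (all graph data and function names here are illustrative, not from the text above):

```python
# Check the consistency condition h(n) <= c(n, n') + h(n') on every edge,
# plus h(g) == 0 at every goal. Illustrative helper, not a library API.
def is_consistent(edges, h, goals):
    """edges: iterable of (n, n_prime, cost); h: dict node -> heuristic value."""
    if any(h[g] != 0 for g in goals):
        return False
    return all(h[n] <= cost + h[np] for n, np, cost in edges)

# A tiny hand-built graph: a consistent h and an inconsistent one.
edges = [("s", "a", 1), ("a", "g", 2), ("s", "g", 4)]
h_ok  = {"s": 3, "a": 2, "g": 0}   # h(s)=3 <= 1+2, h(a)=2 <= 2+0: consistent
h_bad = {"s": 3, "a": 0, "g": 0}   # h(s)=3 > c(s,a)+h(a)=1: inconsistent
```

Note that `h_bad` is still admissible on this graph (every value is below the true cost-to-goal); consistency is the strictly stronger, edge-local condition.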

Claim 1: Consistency Implies Admissibility

Fix any node \(n\) and let \(n = p_0, p_1, \ldots, p_k = g\) be any path from \(n\) to a goal \(g\). Applying the consistency condition along successive edges and summing:

\[ h(p_0) \le c(p_0, p_1) + h(p_1) \le c(p_0, p_1) + c(p_1, p_2) + h(p_2) \le \cdots \le \sum_{i=0}^{k-1} c(p_i, p_{i+1}) + h(g). \]

Since \(h(g) = 0\), the right side equals the cost of the path. Taking the minimum over all paths to all goals:

\[ h(n) \le h^*(n). \qquad \square \]
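Claim 1 can be confirmed empirically on a small example. The sketch below (grid size and goal are arbitrary choices) uses Manhattan distance, which is consistent on a 4-connected unit-cost grid, and compares it against \(h^*\) computed by running Dijkstra backwards from the goal:

```python
from heapq import heappush, heappop

# Empirical check of Claim 1 on a 4-connected grid with unit step costs:
# Manhattan distance is consistent, so it should never exceed the true
# cost-to-goal h*, computed here by Dijkstra from the goal.
W, H, goal = 5, 4, (4, 3)

def neighbors(x, y):
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < W and 0 <= y + dy < H:
            yield x + dx, y + dy

def h(x, y):  # Manhattan distance to the goal
    return abs(x - goal[0]) + abs(y - goal[1])

# h*: exact cost-to-goal, via Dijkstra run backwards from the goal.
h_star = {goal: 0}
frontier = [(0, goal)]
while frontier:
    d, (x, y) = heappop(frontier)
    if d > h_star.get((x, y), float("inf")):
        continue  # stale queue entry
    for nxt in neighbors(x, y):
        if d + 1 < h_star.get(nxt, float("inf")):
            h_star[nxt] = d + 1
            heappush(frontier, (d + 1, nxt))

# Admissibility: h <= h* at every cell (on an obstacle-free grid they coincide).
assert all(h(x, y) <= h_star[(x, y)] for x in range(W) for y in range(H))
```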

Claim 2: \(f\) Is Non-Decreasing Along Paths

For any edge \((n, n')\), suppose A* reaches \(n'\) via \(n\), so \(g(n') = g(n) + c(n, n')\). Then:

\[ f(n') = g(n') + h(n') = g(n) + c(n, n') + h(n') \ge g(n) + h(n) = f(n), \]

where the inequality uses \(c(n, n') + h(n') \ge h(n)\) from consistency. \(\square\)

This means A* encounters nodes in non-decreasing \(f\)-value order as it traces any path from the start.
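Claim 2 can be traced numerically on a single concrete path (the costs and heuristic values below are made up but satisfy consistency edge by edge):

```python
# Trace Claim 2 on one path p0 -> p1 -> p2 -> p3 with illustrative numbers.
costs = [2, 1, 3]          # c(p0,p1), c(p1,p2), c(p2,p3)
h     = [5, 3, 2, 0]       # consistent: h[i] <= costs[i] + h[i+1] for each edge
assert all(h[i] <= costs[i] + h[i + 1] for i in range(len(costs)))

g = [0]
for c in costs:
    g.append(g[-1] + c)    # g(p_i): cost accumulated along the path

f = [gi + hi for gi, hi in zip(g, h)]          # f = [5, 5, 5, 6]
assert all(f[i] <= f[i + 1] for i in range(len(f) - 1))  # non-decreasing
```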

Claim 3: A* Expands Each Node at Most Once

Theorem: When A* pops a node \(n\) from the open list, \(g(n) = g^*(n)\).

Once this holds, re-expansion is unnecessary: when \(n\) is popped, \(g(n) = g^*(n)\) already, so no path discovered later can improve its \(g\)-value. A popped node can therefore sit permanently in the closed set, and each node is expanded at most once.

Proof by induction on the order in which nodes are expanded.

Base case. The first node expanded is the start \(s\). \(g(s) = 0 = g^*(s)\). \(\checkmark\)

Inductive step. Assume every node expanded before step \(t\) had optimal \(g\). Suppose A* pops node \(n\) at step \(t\) with \(g(n) > g^*(n)\); we derive a contradiction.

Let \(\pi^* = (s = p_0, p_1, \ldots, p_k = n)\) be an optimal path to \(n\). Not every node on \(\pi^*\) has been expanded before step \(t\), since \(n\) itself has not. Let \(p_j\) be the last node on \(\pi^*\) expanded before step \(t\) (such a node exists: at worst \(j = 0\), since \(s\) is always expanded first). Then \(p_{j+1}\) has not yet been expanded, but it has been generated, so it is on the open list. (Possibly \(p_{j+1} = n\); the derivation below then yields \(g(n) = g^*(n)\) directly, contradicting our assumption, so we may take \(p_{j+1} \ne n\).)

When \(p_j\) was expanded (step \(< t\)), the inductive hypothesis gives \(g(p_j) = g^*(p_j)\). Expanding \(p_j\) generated \(p_{j+1}\) (or updated its \(g\)-value), so:

\[ g(p_{j+1}) \le g(p_j) + c(p_j, p_{j+1}) = g^*(p_j) + c(p_j, p_{j+1}) = g^*(p_{j+1}). \]

Every \(g\)-value A* records is the cost of some actual path from \(s\), so \(g(p_{j+1}) \ge g^*(p_{j+1})\); combined with the inequality above, \(g(p_{j+1}) = g^*(p_{j+1})\). Hence:

\[ f(p_{j+1}) = g(p_{j+1}) + h(p_{j+1}) = g^*(p_{j+1}) + h(p_{j+1}). \]

Since \(f\)-values computed with optimal path costs are non-decreasing along \(\pi^*\) (the argument of Claim 2, with \(g^*\) in place of \(g\)):

\[ g^*(p_{j+1}) + h(p_{j+1}) \le g^*(p_k) + h(p_k) = g^*(n) + h(n) < g(n) + h(n) = f(n). \]

So \(f(p_{j+1}) < f(n)\). But \(p_{j+1}\) is on the open list and A* pops by minimum \(f\) — it would pop \(p_{j+1}\) before \(n\). Contradiction. \(\square\)
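The single-expansion guarantee can be observed directly by instrumenting A*. A minimal graph-search sketch with a permanent closed set and an expansion counter (the graph and heuristic are hand-built and consistent; none of this is from a library):

```python
from heapq import heappush, heappop

# A* graph search with a permanent closed set, counting expansions per node.
# With a consistent h, every node should be popped at most once and the
# returned cost should be optimal.
def astar(graph, h, start, goal):
    open_list = [(h[start], 0, start)]     # entries: (f, g, node)
    best_g = {start: 0}
    closed = set()
    expansions = {}
    while open_list:
        f, g, n = heappop(open_list)
        if n in closed:
            continue                       # stale open-list entry
        closed.add(n)
        expansions[n] = expansions.get(n, 0) + 1
        if n == goal:
            return g, expansions
        for n2, c in graph.get(n, []):
            g2 = g + c
            if n2 not in closed and g2 < best_g.get(n2, float("inf")):
                best_g[n2] = g2
                heappush(open_list, (g2 + h[n2], g2, n2))
    return float("inf"), expansions

graph = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("g", 5)], "b": [("g", 1)]}
h = {"s": 4, "a": 3, "b": 1, "g": 0}       # consistent on this graph
cost, expansions = astar(graph, h, "s", "g")
assert cost == 4                           # optimal path s -> a -> b -> g
assert all(v == 1 for v in expansions.values())
```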

Relationship to the Admissibility Proof

The standard admissibility proof establishes optimality of A* for tree search, or for graph search that re-opens closed nodes. Consistency gives the stronger graph-search guarantee: the closed set is permanent. In practice this is the case that matters, since tree search is rarely used on graphs because it can re-explore the same states many times, and consistency is precisely the condition that makes a permanent closed set correct.
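To see that admissibility alone is not enough, here is a hand-built counterexample (graph and heuristic values invented for illustration): \(h\) is admissible but violates consistency on the edge \((b, a)\), and A* with a permanent closed set returns a suboptimal cost.

```python
from heapq import heappush, heappop

# Admissible but *inconsistent* heuristic defeating closed-set A*.
# Optimal path: s -> b -> a -> g, cost 1 + 1 + 3 = 5.
def astar_closed(graph, h, start, goal):
    open_list = [(h[start], 0, start)]
    best_g = {start: 0}
    closed = set()
    while open_list:
        f, g, n = heappop(open_list)
        if n in closed:
            continue
        closed.add(n)
        if n == goal:
            return g
        for n2, c in graph.get(n, []):
            g2 = g + c
            if n2 not in closed and g2 < best_g.get(n2, float("inf")):
                best_g[n2] = g2
                heappush(open_list, (g2 + h[n2], g2, n2))
    return float("inf")

graph = {"s": [("a", 4), ("b", 1)], "b": [("a", 1)], "a": [("g", 3)]}
h = {"s": 0, "a": 0, "b": 4, "g": 0}   # admissible: h <= h* everywhere,
                                       # but h(b)=4 > c(b,a)+h(a)=1: inconsistent
# The inflated h(b) delays b, so a is expanded via the cost-4 edge and closed;
# the cheaper route to a through b is then discovered but discarded.
assert astar_closed(graph, h, "s", "g") == 7   # suboptimal: true optimum is 5
```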

References

Hart, Peter, Nils Nilsson, and Bertram Raphael. 1968. “A Formal Basis for the Heuristic Determination of Minimum Cost Paths.” IEEE Transactions on Systems Science and Cybernetics 4 (2): 100–107. https://doi.org/10.1109/tssc.1968.300136.
Russell, Stuart, and Peter Norvig. 2020. Artificial Intelligence: A Modern Approach. 4th ed. Pearson. https://www.pearson.com/en-us/subject-catalog/p/artificial-intelligence-a-modern-approach/P200000003500/9780137505135.