Matrix Rank

Motivation

A matrix has dimensions, but those dimensions overstate how much information it actually carries. A \(3 \times 3\) matrix has \(9\) entries yet may describe only a one-dimensional transformation if its columns are all multiples of a single vector. The rank of a matrix is the count of its truly independent dimensions — the dimension of the span of its columns, equivalently of its rows (Strang 2016).

Rank determines whether a linear system \(A\mathbf{x} = \mathbf{b}\) has a solution, whether a matrix is invertible, how many independent constraints it encodes, and the dimension of the image of the linear map it represents. It is the single most informative summary statistic of a matrix’s structure.

Definition

The rank of a matrix \(A \in \mathbb{R}^{m \times n}\) is the dimension of its column space:

\[ \operatorname{rank}(A) = \dim \operatorname{Col}(A) = \dim \operatorname{span}(\text{columns of } A). \]

Equivalently, it is the maximum number of linearly independent columns of \(A\). Always \(0 \le \operatorname{rank}(A) \le \min(m, n)\).

A matrix is said to have full rank when \(\operatorname{rank}(A) = \min(m, n)\), and rank-deficient otherwise.

A rank-1 matrix

\[ A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}. \]

All columns are multiples of \((1, 2, 3)^\top\), so the column space is one-dimensional and \(\operatorname{rank}(A) = 1\). The matrix has \(9\) entries but encodes only the one direction \((1, 2, 3)^\top\).

A full-rank \(3 \times 3\) matrix

\[ B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]

The columns are the standard basis, which are linearly independent and span \(\mathbb{R}^3\). So \(\operatorname{rank}(B) = 3\).
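Both examples can be checked numerically. A quick NumPy sketch (assuming NumPy is available; `np.linalg.matrix_rank` counts singular values above a tolerance internally):

```python
import numpy as np

# Rank-1 example: every column is a multiple of (1, 2, 3)^T.
A = np.array([[1, 2, 3],
              [2, 4, 6],
              [3, 6, 9]], dtype=float)

# Full-rank example: the 3x3 identity.
B = np.eye(3)

print(np.linalg.matrix_rank(A))  # 1
print(np.linalg.matrix_rank(B))  # 3
```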

Row Rank Equals Column Rank

A foundational theorem: for any matrix \(A\),

\[ \dim \operatorname{Col}(A) = \dim \operatorname{Row}(A). \]

The column rank and the row rank are always equal. This is far from obvious: the column space lives in \(\mathbb{R}^m\) and the row space in \(\mathbb{R}^n\), and these can be very different ambient spaces. The shared dimension is the rank.

One quick way to see it: row-reducing \(A\) to row echelon form preserves the row space, so the number of pivots equals the number of independent rows. The same pivots mark the pivot columns, and the corresponding columns of the original \(A\) form a basis of its column space — so both dimensions equal the pivot count.

Equivalent Characterizations

Several formulations of rank coincide for the same matrix, each useful in a different context.

  • Column rank. Dimension of the column space.
  • Row rank. Dimension of the row space.
  • Pivot count. Number of pivots in any row echelon form of \(A\).
  • Nonzero-singular-value count. Number of nonzero singular values in the singular value decomposition \(A = U \Sigma V^\top\).
  • Maximum minor size. Size of the largest square submatrix of \(A\) with nonzero determinant.

The first four are routinely computed; the last is mostly of theoretical interest.
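As a sanity check, the singular-value-count and maximum-minor characterizations can be compared on a small rank-2 matrix. A sketch (the minor search enumerates all square submatrices, which is exponential — included only to illustrate the theoretical criterion):

```python
import numpy as np
from itertools import combinations

# A 3x3 matrix of rank 2 (each row is an arithmetic progression).
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

# Nonzero-singular-value count.
s = np.linalg.svd(A, compute_uv=False)
svd_rank = int(np.sum(s > 1e-10))

# Maximum minor size: largest k such that some k x k submatrix
# has nonzero determinant.
m, n = A.shape
minor_rank = 0
for k in range(1, min(m, n) + 1):
    for rows in combinations(range(m), k):
        for cols in combinations(range(n), k):
            if abs(np.linalg.det(A[np.ix_(rows, cols)])) > 1e-10:
                minor_rank = k

print(svd_rank, minor_rank)  # 2 2
```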

Rank-Nullity Theorem

The kernel (or null space) of \(A\) is

\[ \ker(A) = \{\mathbf{x} \in \mathbb{R}^n : A\mathbf{x} = \mathbf{0}\}. \]

It is a subspace of \(\mathbb{R}^n\). The rank-nullity theorem states

\[ \operatorname{rank}(A) + \dim \ker(A) = n, \]

where \(n\) is the number of columns. The rank counts the dimensions \(A\) preserves (sends to independent images); the nullity \(\dim \ker(A)\) counts the dimensions \(A\) collapses to zero. Together they account for every input dimension.

This is the bookkeeping principle behind essentially every dimension-counting argument in linear algebra.
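The theorem can be verified numerically by extracting a kernel basis from the SVD: the rows of \(V^\top\) belonging to (near-)zero singular values span \(\ker(A)\). A NumPy sketch using the rank-1 example from earlier:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [3, 6, 9]], dtype=float)  # the rank-1 example

n = A.shape[1]
U, s, Vt = np.linalg.svd(A)
tol = 1e-10
rank = int(np.sum(s > tol))
null_basis = Vt[rank:]            # rows of V^T spanning ker(A)
nullity = null_basis.shape[0]

print(rank, nullity, rank + nullity == n)  # 1 2 True
assert np.allclose(A @ null_basis.T, 0)    # each basis vector is killed by A
```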

How to Compute Rank

Gaussian elimination. Row-reduce \(A\) to echelon form. The number of nonzero rows (equivalently the number of pivots) is the rank.

SVD. Compute \(A = U \Sigma V^\top\). The rank is the number of nonzero singular values. Numerically, “nonzero” usually means “above a tolerance”; this is the numerical rank, more robust than elimination in the presence of round-off.
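The tolerance point can be made concrete: perturb an exactly rank-1 matrix by tiny noise, and every singular value becomes strictly nonzero in floating point, even though only one is meaningful. A sketch (the noise level and tolerance here are illustrative choices, not universal defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
# Exactly rank-1 matrix plus round-off-scale noise.
A = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]) \
    + 1e-13 * rng.standard_normal((3, 3))

s = np.linalg.svd(A, compute_uv=False)
print(s)  # one large singular value, two of order 1e-13

# Counting values above a tolerance recovers the numerical rank.
tol = 1e-10
print(int(np.sum(s > tol)))               # 1
print(np.linalg.matrix_rank(A, tol=tol))  # 1
```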

Worked Example

\[ A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 7 \\ 3 & 6 & 10 \end{pmatrix}. \]

Row-reduce:

\[ \xrightarrow{R_2 - 2R_1, R_3 - 3R_1} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix} \xrightarrow{R_3 - R_2} \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. \]

Two pivots, so \(\operatorname{rank}(A) = 2\). The nullity is \(n - \operatorname{rank}(A) = 3 - 2 = 1\), agreeing with the fact that \((2, -1, 0)^\top\) spans the kernel (the second column equals twice the first).
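A quick numerical check of the worked example (NumPy sketch):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 7],
              [3, 6, 10]], dtype=float)

print(np.linalg.matrix_rank(A))    # 2
x = np.array([2.0, -1.0, 0.0])     # the claimed kernel vector
print(A @ x)                       # [0. 0. 0.]
```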

Rank and Linear Systems

For \(A \in \mathbb{R}^{m \times n}\) and \(\mathbf{b} \in \mathbb{R}^m\), the linear system \(A\mathbf{x} = \mathbf{b}\) behaves as follows:

  • Solvability. Solutions exist if and only if \(\mathbf{b} \in \operatorname{Col}(A)\). This is captured by the test \(\operatorname{rank}([A \mid \mathbf{b}]) = \operatorname{rank}(A)\) — appending \(\mathbf{b}\) to \(A\) does not enlarge the column space.
  • Uniqueness. When solutions exist, they are unique if and only if \(\ker(A) = \{\mathbf{0}\}\), equivalently \(\operatorname{rank}(A) = n\).
  • Solution-set dimension. If a solution exists, the set of all solutions is an affine translate of \(\ker(A)\), so its dimension is \(\dim \ker(A) = n - \operatorname{rank}(A)\).

The familiar “exactly one solution / no solutions / infinitely many solutions” trichotomy is a rank statement in disguise.
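The rank test for solvability is short to state in code; `solvable` below is an illustrative helper, not a library function:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4],
              [3, 6]], dtype=float)   # rank 1: second column = 2 * first

def solvable(A, b):
    """b is in Col(A) iff appending b does not raise the rank."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) \
        == np.linalg.matrix_rank(A)

b_in = np.array([1.0, 2.0, 3.0])      # equals the first column
b_out = np.array([1.0, 0.0, 0.0])     # not in the column space

print(solvable(A, b_in))   # True
print(solvable(A, b_out))  # False
```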

Properties

Subadditivity. \(\operatorname{rank}(A + B) \le \operatorname{rank}(A) + \operatorname{rank}(B)\).

Submultiplicativity. \(\operatorname{rank}(AB) \le \min(\operatorname{rank}(A), \operatorname{rank}(B))\). Multiplying by another matrix cannot increase the rank.

Transpose. \(\operatorname{rank}(A^\top) = \operatorname{rank}(A)\), restating row rank = column rank.

Outer-product expansion. A rank-\(r\) matrix can be written as a sum of \(r\) rank-1 matrices: \(A = \sum_{i=1}^r \mathbf{u}_i \mathbf{v}_i^\top\). The singular value decomposition supplies one such expansion with mutually orthogonal \(\mathbf{u}_i\) and \(\mathbf{v}_i\).

Invariance under invertible multiplication. For any invertible \(P\) and \(Q\) of appropriate sizes, \(\operatorname{rank}(PAQ) = \operatorname{rank}(A)\). Rank is a property of the underlying linear map, not of the chosen basis.
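These properties are easy to spot-check numerically on random matrices. A NumPy sketch (the random \(P\) and \(Q\) are invertible with probability 1, and a generic random \(B\) has full rank):

```python
import numpy as np

rng = np.random.default_rng(1)
rank = np.linalg.matrix_rank

A = np.outer(rng.standard_normal(4), rng.standard_normal(5))  # rank 1
B = rng.standard_normal((4, 5))                               # rank 4

assert rank(A + B) <= rank(A) + rank(B)        # subadditivity
assert rank(A @ B.T) <= min(rank(A), rank(B))  # submultiplicativity
assert rank(A.T) == rank(A)                    # transpose

# Outer-product expansion from the SVD: B = sum of r rank-1 terms.
U, s, Vt = np.linalg.svd(B)
r = rank(B)
B_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))
assert np.allclose(B, B_rebuilt)

# Invariance under invertible multiplication.
P = rng.standard_normal((4, 4))
Q = rng.standard_normal((5, 5))
assert rank(P @ B @ Q) == rank(B)
```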

References

Strang, Gilbert. 2016. Introduction to Linear Algebra. 5th ed. Wellesley-Cambridge Press.