Linear Independence

Motivation

When working with a collection of vectors, a basic question is whether any of them is redundant — that is, whether one can be expressed in terms of the others. A set of vectors is linearly independent when no such redundancy exists (Strang 2016). Linear independence is the condition that lets us count “true degrees of freedom” rather than “number of vectors written down,” and it underlies the definitions of bases and coordinates, matrix rank, and dimension.

Without independence, the same point can be described in more than one way, systems of equations admit infinitely many solutions instead of one, and matrices fail to be invertible. The concept is the gatekeeper between “many vectors” and “an honest coordinate system.”

Definition

A finite set of vectors \(\mathbf{v}_1, \ldots, \mathbf{v}_k \in \mathbb{R}^n\) is linearly independent if the only scalars \(c_1, \ldots, c_k\) satisfying

\[ c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0} \]

are \(c_1 = c_2 = \cdots = c_k = 0\). Otherwise the set is linearly dependent.

For a dependent set, the equation above holds for some choice of scalars with at least one \(c_i\) nonzero. Solving for \(\mathbf{v}_i\) in that case gives

\[ \mathbf{v}_i = -\frac{1}{c_i} \sum_{j \neq i} c_j \mathbf{v}_j, \]

so \(\mathbf{v}_i\) is a linear combination of the others — the “redundancy” promised by the name.
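
To make the redundancy concrete, here is a minimal numpy sketch. The dependent pair and its coefficient vector are illustrative choices, not taken from the text above; given a dependence relation with \(c_i \neq 0\), the code recovers \(\mathbf{v}_i\) from the other vectors exactly as in the formula.

```python
import numpy as np

# Illustrative dependent pair: v2 = 2*v1, witnessed by c = (2, -1),
# since 2*v1 + (-1)*v2 = 0.
v = [np.array([1.0, 2.0]), np.array([2.0, 4.0])]
c = np.array([2.0, -1.0])

# Solve the dependence relation for v_i:
#   v_i = -(1/c_i) * sum_{j != i} c_j * v_j
i = 1
recovered = -(1.0 / c[i]) * sum(c[j] * v[j] for j in range(len(v)) if j != i)
print(np.allclose(recovered, v[i]))  # True
```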

Two vectors

Two nonzero vectors are linearly independent if and only if neither is a scalar multiple of the other. Geometrically, in \(\mathbb{R}^2\) this means they point along different lines through the origin.

\[ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \quad \text{independent}, \]

\[ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 3 \\ 6 \end{pmatrix} \quad \text{dependent (since } \mathbf{v}_2 = 3\mathbf{v}_1\text{)}. \]
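
For two vectors in \(\mathbb{R}^2\) specifically, independence is equivalent to a nonzero determinant of the \(2 \times 2\) matrix with the vectors as columns. A quick numerical check of both pairs above; the helper name and the floating-point tolerance are illustrative choices.

```python
import numpy as np

def independent_2d(u, w, tol=1e-12):
    """Two vectors in R^2 are independent iff det([u w]) is nonzero."""
    return abs(np.linalg.det(np.column_stack([u, w]))) > tol

print(independent_2d([1, 0], [0, 1]))  # True
print(independent_2d([1, 2], [3, 6]))  # False: (3, 6) = 3 * (1, 2)
```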

Three vectors in the plane

Any three vectors in \(\mathbb{R}^2\) are linearly dependent. There simply is not enough room: the dependence equation \(c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + c_3 \mathbf{v}_3 = \mathbf{0}\) is a homogeneous system of two equations in three unknowns, and such a system always has a nontrivial solution. For example,

\[ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \mathbf{v}_3 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \]

satisfy \(\mathbf{v}_1 + \mathbf{v}_2 - \mathbf{v}_3 = \mathbf{0}\).

This generalizes: any \(k > n\) vectors in \(\mathbb{R}^n\) are linearly dependent.
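
A dependence relation like the one above can be found numerically from the null space of the stacked matrix. Below is a sketch using numpy's SVD; rescaling by the first coefficient is an illustrative normalization.

```python
import numpy as np

# Columns are v1, v2, v3 from the example above.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# With rank 2 < 3 columns, the last right-singular vector spans the null space.
_, _, Vt = np.linalg.svd(V)
c = Vt[-1] / Vt[-1][0]         # rescale so the first coefficient is 1
print(np.round(c, 6))          # [ 1.  1. -1.]  ->  v1 + v2 - v3 = 0
print(np.allclose(V @ c, 0))   # True
```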

Equivalent Characterizations

Several restatements of “linearly independent” are useful in different contexts.

Unique representation. The set \(\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}\) is linearly independent if and only if every vector in its span has a unique expansion as a linear combination of the \(\mathbf{v}_i\). If some vector had two distinct expansions, their difference would be a nontrivial dependence relation among the \(\mathbf{v}_i\).
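
A quick numerical illustration of this non-uniqueness, using the dependent triple from the plane example, where \(\mathbf{v}_3 = \mathbf{v}_1 + \mathbf{v}_2\):

```python
import numpy as np

# Dependent columns v1, v2, v3 with v3 = v1 + v2: the point (1, 1)
# has two different expansions.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
c1 = np.array([1.0, 1.0, 0.0])   # v1 + v2
c2 = np.array([0.0, 0.0, 1.0])   # v3
print(V @ c1, V @ c2)            # both [1. 1.]
print(V @ (c1 - c2))             # [0. 0.]: the difference is a dependence relation
```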

Matrix kernel. Stacking the vectors as columns of a matrix \(V = [\mathbf{v}_1 \;\cdots\; \mathbf{v}_k]\), the dependence equation becomes \(V\mathbf{c} = \mathbf{0}\) for \(\mathbf{c} = (c_1, \ldots, c_k)^\top\). The columns are independent if and only if the only solution is \(\mathbf{c} = \mathbf{0}\), i.e., \(V\) has trivial kernel. Equivalently, \(V\) has rank equal to \(k\).
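
In code, this restatement gives the standard numerical test. A minimal numpy sketch; the helper name is mine, and matrix_rank decides rank via an SVD-based tolerance, so results on borderline inputs depend on floating point.

```python
import numpy as np

def is_independent(vectors):
    """Columns independent iff the rank of the stacked matrix equals their count."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(V) == V.shape[1]

print(is_independent([[1, 0], [0, 1], [1, 1]]))  # False: 3 vectors in R^2
print(is_independent([[1, 2, 3], [0, 1, 4]]))    # True
```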

No redundant vector. A set is independent if and only if removing any vector strictly shrinks the span. If the span is unchanged after removing \(\mathbf{v}_i\), then \(\mathbf{v}_i\) was already a combination of the rest.
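
The "removal shrinks the span" test can also be run directly by comparing the rank with and without each column; equal rank means equal span here, since removing a column can only shrink it. A sketch, with the helper name my own:

```python
import numpy as np

def removable(vectors):
    """Indices whose removal leaves the rank -- hence the span -- unchanged."""
    V = np.column_stack(vectors)
    r = np.linalg.matrix_rank(V)
    return [i for i in range(V.shape[1])
            if np.linalg.matrix_rank(np.delete(V, i, axis=1)) == r]

# v1 + v2 - v3 = 0 with all coefficients nonzero: every vector is redundant.
print(removable([[1, 0], [0, 1], [1, 1]]))  # [0, 1, 2]
# An independent pair: nothing can be removed.
print(removable([[1, 2, 3], [0, 1, 4]]))    # []
```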

Properties

Subsets of independent sets are independent. If \(\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}\) is linearly independent, so is every subset. A dependence among fewer vectors is in particular a dependence among all of them (with zero coefficients on the omitted ones).

Supersets of dependent sets are dependent. Adding more vectors to a dependent set keeps it dependent.

The zero vector breaks independence. Any set containing \(\mathbf{0}\) is dependent: take \(c_i = 1\) on the zero vector and \(c_j = 0\) on the rest.

Pivots and Gaussian elimination. Row-reduce \(V\) to echelon form; the columns of the original \(V\) in the pivot positions are linearly independent and span the column space. Each non-pivot column is a linear combination of the pivot columns to its left, with coefficients read directly from the reduced echelon form.
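
For exact pivot bookkeeping with no floating-point questions, sympy's rref is convenient: it returns the reduced echelon form together with the pivot column indices. A sketch on the plane example from earlier:

```python
import sympy as sp

# Columns are v1 = (1, 0), v2 = (0, 1), v3 = (1, 1) from the plane example.
V = sp.Matrix([[1, 0, 1],
               [0, 1, 1]])
R, pivots = V.rref()
print(pivots)  # (0, 1): v1 and v2 are the pivot columns
print(R)       # Matrix([[1, 0, 1], [0, 1, 1]]): read off v3 = 1*v1 + 1*v2
```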

Example

Let

\[ \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \quad \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 4 \end{pmatrix}, \quad \mathbf{v}_3 = \begin{pmatrix} 2 \\ 5 \\ 10 \end{pmatrix}. \]

Stack them as columns and row-reduce:

\[ V = \begin{pmatrix} 1 & 0 & 2 \\ 2 & 1 & 5 \\ 3 & 4 & 10 \end{pmatrix} \;\xrightarrow{R_2 - 2R_1,\; R_3 - 3R_1}\; \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 4 & 4 \end{pmatrix} \;\xrightarrow{R_3 - 4R_2}\; \begin{pmatrix} 1 & 0 & 2 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}. \]

There are only two pivot columns, so the three vectors are dependent. Reading the third (free) column of the echelon form gives \(\mathbf{v}_3 = 2\mathbf{v}_1 + \mathbf{v}_2\). Verify:

\[ 2 \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 4 \end{pmatrix} = \begin{pmatrix} 2 \\ 5 \\ 10 \end{pmatrix} = \mathbf{v}_3. \checkmark \]

The set \(\{\mathbf{v}_1, \mathbf{v}_2\}\) on its own is linearly independent — these are the two pivot columns.
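
The hand computation above can be checked in a few lines with sympy, which works in exact arithmetic:

```python
import sympy as sp

V = sp.Matrix([[1, 0, 2],
               [2, 1, 5],
               [3, 4, 10]])
R, pivots = V.rref()
print(pivots)   # (0, 1): v1, v2 independent; v3 is not a pivot column
print(R[:, 2])  # Matrix([[2], [1], [0]]): coefficients in v3 = 2*v1 + 1*v2
```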

References

Strang, Gilbert. 2016. Introduction to Linear Algebra. 5th ed. Wellesley-Cambridge Press.