
Section 4.2 Rank and Nullity

Subsection 4.2.1 Some fundamental subspaces

Let \(T:V\to W\) be a linear map between vector spaces over a field \(F.\) We have defined the kernel of \(T\), \(\ker(T)=\Null(T)\text{,}\) (also called the nullspace) and noted that it is a subspace of the domain \(V.\) The image of \(T\), \(\Im(T),\) is a subspace of the codomain \(W.\)

Subsection 4.2.2 The rank-nullity theorem

Given a linear map \(T:V\to W\text{,}\) with \(V\) finite dimensional, there is a fundamental theorem relating the dimension of \(V\) to the dimensions of \(\ker(T)\) and \(\Im(T).\)

Theorem 4.2.1. The rank-nullity theorem.

Let \(T:V\to W\) be a linear map with \(V\) finite dimensional. Then
\begin{equation*} \dim V = \rank(T) + \nullity(T). \end{equation*}

Proof.

Let \(n = \dim V,\) and recall that if \(\{v_1, \dots, v_n\}\) is any basis for \(V\text{,}\) then \(\Im(T) = \Span(\{T(v_1), \dots, T(v_n)\}).\)
First consider the case that \(T\) is injective. This means that \(\ker(T) = \{0\}\text{,}\) so that \(\nullity(T) = 0\text{.}\) By Proposition 4.1.3, the set \(\{T(v_1), \dots, T(v_n)\}\) is linearly independent, and since this set spans \(\Im(T)\text{,}\) it is a basis for \(\Im(T)\text{,}\) so its cardinality equals the dimension of the image, i.e., \(\rank(T).\) Thus \(\rank(T)=n\text{,}\) and we see that
\begin{equation*} n = \dim V = n + 0 = \rank(T) + \nullity(T). \end{equation*}
Now consider the case where \(\ker(T) \ne \{0\}\text{.}\) Let \(\{u_1, \dots, u_k\}\) be a basis for \(\ker(T)\text{,}\) hence \(\nullity(T) = k.\) Since \(\{u_1, \dots, u_k\}\) is a linearly independent set, by [provisional cross-reference: prop-extend-independent-set-to-basis], it can be extended to a basis for \(V\text{:}\)
\begin{equation*} \{u_1, \dots, u_k, u_{k+1}, \dots, u_n\} \end{equation*}
To establish the theorem, we need only show that \(\rank(T) = n-k.\) Since \(\{u_1, \dots, u_k, u_{k+1}, \dots, u_n\}\) is a basis for \(V\text{,}\) \(\Im(T) = \Span(\{T(u_1), \dots, T(u_n)\})\text{,}\) but we recall that \(u_1, \dots, u_k \in \ker(T)\text{,}\) so that \(\Im(T) = \Span(\{T(u_{k+1}), \dots, T(u_n)\})\text{.}\) Thus we know \(\rank T \le n-k.\) To obtain an equality, we need only show that the set \(\{T(u_{k+1}), \dots, T(u_n)\}\) is linearly independent.
Suppose, to the contrary, that the set is linearly dependent. Then there exist scalars \(a_i \in F\text{,}\) not all zero, so that
\begin{equation*} \sum_{i=k+1}^n a_i T(u_i) = 0. \end{equation*}
By linearity, this says \(T\bigl(\sum_{i=k+1}^n a_i u_i\bigr) = 0,\) which means \(\sum_{i=k+1}^n a_i u_i \in \ker(T).\) But this in turn says that \(\sum_{i=k+1}^n a_i u_i \in \Span(\{u_1, \dots, u_k\})\text{,}\) which yields a nontrivial linear dependence among \(u_1, \dots, u_n\text{,}\) contradicting that \(\{u_1, \dots, u_n\}\) is a basis for \(V.\) This completes the proof.
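The theorem is easy to check numerically for maps of the form \(x \mapsto Ax\text{.}\) The following sketch (illustrative Python with a hand-rolled row reduction over the rationals; the helper name `rref` is my own, not from the text) counts pivot columns for the rank and free columns for the nullity:

```python
from fractions import Fraction

def rref(M):
    """Return (R, pivots): the reduced row-echelon form of M over the
    rationals, together with the list of pivot-column indices."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue                        # no pivot in this column
        R[r], R[piv] = R[piv], R[r]         # swap a pivot row into place
        R[r] = [x / R[r][c] for x in R[r]]  # scale so the leading entry is 1
        for i in range(rows):
            if i != r and R[i][c] != 0:     # clear the rest of the column
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A = [[1, 2, 0, 3],
     [2, 4, 1, 7],
     [3, 6, 1, 10]]          # third row = first + second, so the rank is 2

_, pivots = rref(A)
rank = len(pivots)            # pivot columns count the rank
nullity = len(A[0]) - rank    # free columns parameterize the kernel
print(rank, nullity)          # 2 2, and 2 + 2 = 4 = dim of the domain
```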
Let’s do a simple example.

Example 4.2.2.

Consider the linear map \(T:P_3(\R) \to P_3(\R)\) given by \(T(f) = f'' + f'\text{,}\) where \(f'\) and \(f''\) are the first and second derivatives of \(f.\)
The domain has dimension 4 with standard basis \(\{1, x, x^2, x^3\}\text{,}\) so
\begin{equation*} \Im(T) = \Span\{T(1), T(x), T(x^2), T(x^3)\} \end{equation*}
One easily checks that \(\Im(T) = \Span\{0, 1, 2+2x, 6x+3x^2\}= \Span\{1,x,x^2\}\text{.}\) At the very least we know that \(\rank(T) \le 3\text{,}\) and since \(T(1)=0\text{,}\) we must have \(\nullity(T) \ge 1\text{.}\) Now since \(\{1, x, x^2\}\) is a linearly independent set, we know that \(\rank(T) =3\) which means that \(\nullity(T)=1\) by Theorem 4.2.1. It follows that \(\{1\}\) is a basis for the nullspace.
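Assuming the translation into matrices previewed in Subsection 4.3.2, the map \(T(f) = f'' + f'\) acts on coordinate vectors relative to \(\{1, x, x^2, x^3\}\) by the matrix whose columns are the coordinates of \(T(1), T(x), T(x^2), T(x^3)\text{.}\) A quick mechanical check of the example (illustrative Python; `rref` is a small hand-rolled rational row reduction, not from the text):

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M over the rationals; return (R, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

# Columns: coordinates of T(1)=0, T(x)=1, T(x^2)=2+2x, T(x^3)=6x+3x^2
T = [[0, 1, 2, 0],
     [0, 0, 2, 6],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]

_, pivots = rref(T)
rank = len(pivots)
nullity = 4 - rank
print(rank, nullity)   # 3 1, as computed in the example
```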

Subsection 4.2.3 Computing rank and nullity

Let \(A \in M_{m\times n}(F)\) be a matrix. Then \(T(x) = Ax\) defines a linear map \(T:F^n\to F^m\text{.}\) Indeed, in Subsection 4.3.2, we shall see how to translate the action of an arbitrary linear map between finite-dimensional vector spaces into the action of a matrix on column vectors.
Let’s recall how to extract the image and kernel of the linear map \(x\mapsto Ax\text{.}\) We know that the image of any linear map is obtained by taking the span of \(T(e_1), \dots, T(e_n)\text{,}\) where \(\{e_1, \dots, e_n\}\) is any basis for \(F^n\text{,}\) the domain. Indeed, if we choose the \(e_i\) to be the standard basis vectors (with a 1 in the \(i\)th coordinate and zeroes elsewhere), then \(T(e_j)\) is simply the \(j\)th column of the matrix \(A.\) Thus \(\Im(T)\) is the column space of \(A\text{.}\) However, to determine the rank of \(A\text{,}\) we would need to know which columns form a basis. We’ll get to that in a moment.
The nullspace of \(T\) is the set of solutions to the homogeneous linear system \(Ax=0.\) You may recall that a standard method to deduce the solutions is to put the matrix \(A\) in reduced row-echelon form: all rows of zeros are at the bottom, the leading nonzero entry of each row is a 1, and in every column containing a leading 1, all other entries are zero. These leading ones play several roles.
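The recipe above can be sketched in code. In this illustrative Python (a hand-rolled rational `rref`, with a sample matrix of my choosing), the pivot columns give the rank, and each free column yields one nullspace basis vector: set that free variable to 1, the other free variables to 0, and read the pivot variables off the reduced matrix.

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M over the rationals; return (R, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A = [[1, 2, 1, 1],
     [2, 4, 0, 6],
     [1, 2, 0, 3]]

R, pivots = rref(A)
n = len(A[0])
free = [c for c in range(n) if c not in pivots]

# One basis vector per free column.
basis = []
for f in free:
    v = [Fraction(0)] * n
    v[f] = Fraction(1)
    for i, p in enumerate(pivots):
        v[p] = -R[i][f]
    basis.append(v)

# Sanity check: every basis vector really lies in the kernel.
for v in basis:
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
print(len(pivots), len(basis))   # rank 2, nullity 2
```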

Subsection 4.2.4 Elementary Row and Column operations

The following are some basic facts about elementary row and column operations on an \(m\times n\) matrix \(A\text{.}\)
  • The matrix \(A\) is put in reduced row echelon form by a sequence of elementary row operations.
  • Each elementary row operation can be achieved by left multiplication of \(A\) (\(A\mapsto EA\)) by an \(m\times m\) elementary matrix.
  • Each elementary column operation can be achieved by right multiplication of \(A\) (\(A\mapsto AE\)) by an \(n\times n\) elementary matrix.
  • Every elementary matrix is invertible and its inverse is again an elementary matrix of the same type.
  • The rank of an \(m\times n\) matrix is unchanged by elementary row or column operations, that is \(\rank(EA) = \rank(A)\) and \(\rank(AE)=\rank(A)\) for appropriately sized elementary matrices \(E.\)
Every invertible matrix is a product of elementary matrices, and this leads to the fact that multiplying \(A\) on either side by any invertible matrix leaves its rank unchanged.
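These facts are easy to see in action. A small illustration (sample matrices of my choosing; `rref` and `matmul` are hand-rolled helpers, not from the text), checking that left multiplication by an elementary matrix performs a row operation, preserves the rank, and is invertible by an elementary matrix of the same type:

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M over the rationals; return (R, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2, 3],
     [2, 4, 7]]
E = [[1, 0],
     [-2, 1]]        # elementary: add -2 * (row 1) to (row 2)
Einv = [[1, 0],
        [2, 1]]      # its inverse: add +2 * (row 1) to (row 2)

EA = matmul(E, A)
print(EA)                                    # [[1, 2, 3], [0, 0, 1]]
assert len(rref(EA)[1]) == len(rref(A)[1])   # rank(EA) == rank(A)
assert matmul(E, Einv) == [[1, 0], [0, 1]]   # E is invertible
```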

Exercises

1.
Let \(A\) be an \(m\times n\) matrix and \(E\) an elementary matrix of the appropriate size.
  • Are the row spaces of \(A\) and \(EA\) the same?
  • Are the column spaces of \(A\) and \(AE\) the same?
  • If \(R\) is the reduced row-echelon form of \(A\text{,}\) are the nonzero rows of \(R\) a basis for the row space of \(A\text{?}\)
  • If \(R\) is the reduced row-echelon form of \(A\text{,}\) is the column space of \(R\) the same as the column space of \(A\text{?}\)
Answer.
Yes; yes; yes (why?); no. For the last: if \(A=\begin{bmatrix} 1\amp 1\\1\amp 1 \end{bmatrix}\text{,}\) then \(R=\begin{bmatrix}1\amp1\\0\amp0 \end{bmatrix}\text{,}\) and
\begin{equation*} \Span(\begin{bmatrix}1\\1\end{bmatrix}) \ne \Span(\begin{bmatrix}1\\0\end{bmatrix}) \end{equation*}
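The counterexample can also be verified mechanically. In this sketch (hand-rolled rational row reduction; the helper `in_span` is my own), a vector lies in a span exactly when appending it as an extra column does not raise the rank:

```python
from fractions import Fraction

def rref(M):
    """Row-reduce M over the rationals; return (R, pivot column indices)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(R), len(R[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if piv is None:
            continue
        R[r], R[piv] = R[piv], R[r]
        R[r] = [x / R[r][c] for x in R[r]]
        for i in range(rows):
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
    return R, pivots

A_col = [[1], [1]]        # column space of A is spanned by (1, 1)
R_col = [[1], [0]]        # column space of R is spanned by (1, 0)

def in_span(cols, v):
    """Is v in the span of the given columns?  Compare ranks."""
    aug = [row + [x] for row, x in zip(cols, v)]
    return len(rref(aug)[1]) == len(rref(cols)[1])

print(in_span(R_col, [1, 1]))   # False: (1, 1) is not in Col(R)
print(in_span(A_col, [1, 0]))   # False: the two column spaces differ
```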
2.
Given an \(m\times n\) matrix \(A,\) show that there exist (appropriately sized) elementary matrices \(U,V\) so that \(UAV\) has the form
\begin{equation*} UAV = \begin{bmatrix} I_r\amp \mathbf 0\\ \mathbf 0\amp\mathbf 0 \end{bmatrix}, \end{equation*}
where \(I_r\) is an \(r\times r\) identity matrix with \(r = \rank(A)\text{,}\) and the other entries are all zeros.
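Not a proof, but a worked instance of the factorization for a sample matrix, with the elementary matrices chosen by hand (the helper `matmul` is my own):

```python
def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 2, 3],
     [2, 4, 6]]      # rank 1: the second row is twice the first

U  = [[1, 0],
      [-2, 1]]       # row op: add -2 * (row 1) to (row 2)
V1 = [[1, -2, 0],    # column op: add -2 * (col 1) to (col 2)
      [0, 1, 0],
      [0, 0, 1]]
V2 = [[1, 0, -3],    # column op: add -3 * (col 1) to (col 3)
      [0, 1, 0],
      [0, 0, 1]]

V = matmul(V1, V2)             # product of elementary matrices
UAV = matmul(matmul(U, A), V)
print(UAV)                     # [[1, 0, 0], [0, 0, 0]]  --  I_1 in the corner
```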