
Section 1.4 Coordinates and Matrices

Subsection 1.4.1 Coordinate Vectors

Let \(V\) be a finite-dimensional vector space over a field \(F\) with basis \(\cB=\{v_1, \dots, v_n\}.\) Since \(\cB\) is a spanning set for \(V\text{,}\) every vector \(v\in V\) can be expressed as a linear combination of the vectors in \(\cB\text{:}\) \(v = a_1 v_1+ \cdots + a_n v_n\) with \(a_i\in F.\)
And, since \(\cB\) is a linearly independent set, the coefficients \(a_i\) are uniquely determined. We record those uniquely determined coefficients as

Definition 1.4.1.

The coordinate vector of \(v= a_1 v_1+ \cdots + a_n v_n\) with respect to the basis \(\cB=\{v_1, \dots, v_n\}\) is denoted as the column vector:
\begin{equation} [v]_\cB = \begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}\tag{1.4.1} \end{equation}
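When \(V=\R^n\text{,}\) finding \([v]_\cB\) amounts to solving the linear system whose coefficient matrix has the basis vectors as columns. A minimal sketch in Python with NumPy (the basis and the helper `coordinate_vector` are hypothetical examples, not part of the text):

```python
import numpy as np

# Columns are the basis vectors v1=(1,1,1), v2=(0,1,1), v3=(0,0,1).
B = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)

def coordinate_vector(v, basis_matrix):
    """Solve basis_matrix @ a = v for the coordinates a_1, ..., a_n."""
    return np.linalg.solve(basis_matrix, v)

v = np.array([1.0, 2.0, 3.0])
print(coordinate_vector(v, B))  # [1. 1. 1.], i.e. v = v1 + v2 + v3
```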

Subsection 1.4.2 Matrix of a linear map

Let \(V\) and \(W\) be two finite-dimensional vector spaces defined over a field \(F.\) Suppose that \(\dim V = n\) and \(\dim W = m,\) and we choose bases \(\cB=\{v_1, \dots, v_n\}\) for \(V,\) and \(\cC=\{w_1, \dots, w_m\}\) for \(W.\) By Theorem 1.1.6, any linear map \(T:V\to W\) is completely determined by the set of vectors \(\{T(v_1), \dots, T(v_n)\}\text{,}\) and since \(\cC\) is a basis for \(W,\) for each index \(j\text{,}\) there are uniquely determined scalars \(a_{ij} \in F\) with
\begin{equation*} T(v_j) = \sum_{i=1}^m a_{ij}w_i. \end{equation*}
We record these data in an \(m\times n\) matrix \(A\) with \(A_{ij}=a_{ij}\text{,}\) and define the matrix of \(T\) with respect to the bases \(\cB\) and \(\cC\) as
\begin{equation} [T]_\cB^\cC = A = [a_{ij}]\tag{1.4.2} \end{equation}

Example 1.4.2. The companion matrix of a polynomial.

Let \(f=x^n+ a_{n-1}x^{n-1}+\cdots + a_0\) be a polynomial with coefficients in a field \(F\text{.}\) Let \(V\) be a finite-dimensional vector space over the field \(F\) with basis \(\cB=\{v_1, \dots, v_n\}.\) Define a linear map \(T:V \to V\) (called an endomorphism or linear operator since the domain and codomain are the same vector space) by:
\begin{gather*} T(v_1) = v_2\\ T(v_2) = v_3\\ \vdots\\ T(v_{n-1}) = v_n\\ T(v_n) = -a_0v_1 -a_1v_2- \cdots - a_{n-1}v_n. \end{gather*}
The matrix of \(T\) with respect to the basis \(\cB\) is called the companion matrix of \(f\), and is given by
\begin{equation*} [T]_\cB := [T]_\cB^\cB = \begin{bmatrix} 0&0&0&\cdots&0&-a_0\\ 1&0&0&\cdots&0&-a_1\\ 0&1&0&\cdots&0&-a_2\\ \vdots&\vdots&\ddots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&1&-a_{n-1}\\ \end{bmatrix} \end{equation*}
One can show that both the minimal polynomial and the characteristic polynomial of this companion matrix are equal to the polynomial \(f.\) The companion matrix is an essential component in the rational canonical form of an arbitrary square matrix \(A\text{,}\) where the polynomials \(f\) that occur are the invariant factors associated to \(A.\)
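As a numerical sanity check (a sketch in Python with NumPy; the helper `companion` and the example polynomial are our own), one can build the companion matrix and compare its characteristic polynomial, computed via `np.poly`, with the coefficients of \(f\text{:}\)

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of f = x^n + a_{n-1} x^{n-1} + ... + a_0,
    where coeffs = [a_0, a_1, ..., a_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # ones on the first subdiagonal
    C[:, -1] = -np.asarray(coeffs)  # last column: -a_0, ..., -a_{n-1}
    return C

# f = x^3 + 2x^2 + 3x + 4, so coeffs = [4, 3, 2].
C = companion([4, 3, 2])
# np.poly(C) returns the characteristic polynomial's coefficients,
# highest degree first; here approximately [1, 2, 3, 4].
print(np.poly(C))
```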

Observation 1.4.3.

When constructing the matrix of a linear map, it is very useful to recognize the connection with coordinate vectors. For example, in constructing the matrix \([T]_{\cB}^{\cC}\) in (1.4.2), the \(j\)th column of the matrix is the coordinate vector \([T(v_j)]_{\cC}\text{.}\) Thus a mnemonic device for remembering how to construct the matrix of a linear map is that
\begin{equation} [T]_\cB^\cC = A = [a_{ij}] = \begin{bmatrix} |&|&\cdots&|\\ [T(v_1)]_{\cC}&[T(v_2)]_{\cC}&\cdots&[T(v_n)]_{\cC}\\ |&|&\cdots&| \end{bmatrix}.\tag{1.4.3} \end{equation}
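The mnemonic (1.4.3) translates directly into a computation: build the matrix column by column from the coordinate vectors \([T(v_j)]_{\cC}\text{.}\) A small sketch in Python with NumPy, using a hypothetical map \(T\) on \(\R^2\) with standard bases:

```python
import numpy as np

# A hypothetical linear map T : R^2 -> R^2, T(x, y) = (x + 2y, 3x).
def T(v):
    x, y = v
    return np.array([x + 2 * y, 3 * x])

# Standard basis; since the codomain basis is also standard,
# the coordinate vector [T(e_j)]_C is just T(e_j) itself.
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Stack the coordinate vectors [T(v_j)]_C as columns, per (1.4.3).
A = np.column_stack([T(v) for v in basis])
print(A)
# [[1. 2.]
#  [3. 0.]]
```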

Subsection 1.4.3 Matrix associated to a composition

Suppose that \(U, V,\) and \(W\) are vector spaces over a field \(F\text{,}\) and \(S:U\to V\) and \(T:V\to W\) are linear maps. Then the composition \(T\circ S\) (usually denoted \(TS\)) is a linear map, \(T\circ S: U\to W.\)
Now suppose that all three vector spaces are finite-dimensional, say \(\dim U = n,\) \(\dim V = p,\) and \(\dim W = m\text{,}\) with bases \(\cB_U, \cB_V, \cB_W.\) If we consider the matrices of the corresponding linear maps, we see that the matrix sizes are
\begin{gather*} [S]_{\cB_U}^{\cB_V} \text{ is } p\times n\\ [T]_{\cB_V}^{\cB_W} \text{ is } m\times p\\ [TS]_{\cB_U}^{\cB_W} \text{ is } m\times n \end{gather*}
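These sizes are exactly the ones for which matrix multiplication makes sense. A quick numerical sketch (Python with NumPy; the random matrices are hypothetical stand-ins for \([S]\) and \([T]\)):

```python
import numpy as np

# Hypothetical dimensions: dim U = n = 2, dim V = p = 3, dim W = m = 4.
n, p, m = 2, 3, 4
rng = np.random.default_rng(0)

S = rng.standard_normal((p, n))   # [S] is p x n
T = rng.standard_normal((m, p))   # [T] is m x p
TS = T @ S                        # the matrix of the composition is m x n

v = rng.standard_normal(n)        # a coordinate vector in F^n
print(TS.shape)                   # (4, 2)
print(np.allclose(TS @ v, T @ (S @ v)))  # True: applying TS agrees
                                         # with applying S then T
```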
The fundamental result connecting these is

Theorem 1.4.4.

With notation as above,
\begin{equation} [TS]_{\cB_U}^{\cB_W} = [T]_{\cB_V}^{\cB_W}[S]_{\cB_U}^{\cB_V}.\tag{1.4.4} \end{equation}

This result will be of critical importance when we discuss change of basis.
As more or less a special case of the above theorem, we have the corresponding result with coordinate vectors: the coordinate vector of \(T(v)\) is the product of the matrix of \(T\) with the coordinate vector of \(v\text{.}\) More precisely,

Corollary 1.4.5.

For a linear map \(T:V\to W\) and any \(v\in V\text{,}\)
\begin{equation*} [T(v)]_{\cB_W} = [T]_{\cB_V}^{\cB_W}[v]_{\cB_V}. \end{equation*}

Example 1.4.6.

Let \(V=P_4(\R)\) and \(W=P_3(\R)\) be the vector spaces of polynomials with coefficients in \(\R\) having degree less than or equal to 4 and 3 respectively. Let \(D:V\to W\) be the (linear) derivative map, \(D(f) = f'\text{,}\) where \(f'\) is the usual derivative for polynomials. Let’s take standard bases for \(V\) and \(W,\) namely \(\cB_V=\{1, x, x^2, x^3, x^4\}\) and \(\cB_W=\{1, x, x^2, x^3\}.\) One computes:
\begin{equation*} [D]_{\cB_V}^{\cB_W}= \begin{bmatrix} 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&3&0\\ 0&0&0&0&4\\ \end{bmatrix} \end{equation*}
Let \(f=2+3x+5x^3.\) We know of course that \(D(f) = 3+15x^2,\) but we want to see this with coordinate vectors. We know that
\begin{equation*} [f]_{\cB_V} = \begin{bmatrix}2\\3\\0\\5\\0\end{bmatrix} \text{ and } [D(f)]_{\cB_W} = \begin{bmatrix}3\\0\\15\\0\end{bmatrix} \end{equation*}
and verify that
\begin{equation*} [D(f)]_{\cB_W} = \begin{bmatrix}3\\0\\15\\0\end{bmatrix}= [D]_{\cB_V}^{\cB_W}[f]_{\cB_V}= \begin{bmatrix} 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&3&0\\ 0&0&0&0&4\\ \end{bmatrix}\begin{bmatrix}2\\3\\0\\5\\0\end{bmatrix}. \end{equation*}
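The same verification can be carried out numerically; a sketch in Python with NumPy, encoding polynomials by their coordinate vectors in the monomial bases:

```python
import numpy as np

# [D] with respect to the monomial bases of P4(R) and P3(R).
D = np.array([[0, 1, 0, 0, 0],
              [0, 0, 2, 0, 0],
              [0, 0, 0, 3, 0],
              [0, 0, 0, 0, 4]], dtype=float)

f = np.array([2, 3, 0, 5, 0], dtype=float)  # f = 2 + 3x + 5x^3
print(D @ f)  # [ 3.  0. 15.  0.], i.e. D(f) = 3 + 15x^2
```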

Subsection 1.4.4 Change of basis

A change of basis or change of coordinates is an enormously useful concept. It plays a pivotal role in diagonalization, triangularization, and more generally in putting a matrix into a canonical form. Its practical uses are easy to envision. We may think of the usual orthonormal basis along the coordinate axes as the standard basis for \(\R^3\text{,}\) but when one wants to create computer graphics that project the image of an object onto a plane, the natural frame includes a direction parallel to the line of sight of the observer, so that frame defines a natural basis for this application.
First, let’s understand what we are doing intuitively. Suppose our vector space \(V = \R^3\text{,}\) and we have two bases for it with elements written as row vectors, \(\cB_1=\{e_1=(1,0,0), e_2=(0,1,0), e_3=(0,0,1)\}\) and \(\cB_2=\{v_1=(1,1,1), v_2=(0,1,1), v_3=(0,0,1)\}.\)

Checkpoint 1.4.7. Is \(\cB_2\) really a basis?

Let’s recall a useful fact that allows us to quickly verify that \(\cB_2\) is actually a basis for \(\R^3.\) While in principle we must check the set is both linearly independent and spans \(\R^3\text{,}\) since we know the dimension of \(\R^3\text{,}\) and the set has 3 elements, it follows that either condition implies the other.
Hint.
To show \(\cB_2\) spans, it is enough to show that \(\Span(\cB_2)\) contains a spanning set for \(\R^3\text{.}\)
Normally when we think of a vector in \(\R^3\text{,}\) we think of it as a coordinate vector with respect to the standard basis, so that a vector we write as \(v=(a,b,c)\) is really the coordinate vector with respect to the standard basis:
\begin{equation*} v=[v]_{\cB_1} = \begin{bmatrix}a\\b\\c\\\end{bmatrix} \end{equation*}
The problem is when we want to find \([v]_{\cB_2}.\) For some vectors this is easy. For example,
\begin{equation*} [v]_{\cB_1} =\begin{bmatrix}1\\2\\3\end{bmatrix} \text{ is equivalent to } [v]_{\cB_2} = \begin{bmatrix}1\\1\\1\end{bmatrix}, \end{equation*}
or
\begin{equation*} [v]_{\cB_1} =\begin{bmatrix}1\\3\\6\end{bmatrix} \text{ is equivalent to } [v]_{\cB_2} = \begin{bmatrix}1\\2\\3\end{bmatrix}, \end{equation*}
but what is going on in general?
Recall from Corollary 1.4.5, that for a linear transformation \(T:V\to W\text{,}\) and \(v \in V\) that
\begin{equation*} [T(v)]_{\cB_W} = [T]_{\cB_V}^{\cB_W} [v]_{\cB_V}. \end{equation*}
In our current situation \(V=W\) and \(T\) is the identity transformation, \(T(v) = v\text{,}\) which we shall denote by \(I\text{,}\) so that
\begin{equation*} [v]_{\cB_2} = [I]_{\cB_1}^{\cB_2} [v]_{\cB_1}. \end{equation*}
The matrix \([I]_{\cB_1}^{\cB_2}\) is called the change of basis or change of coordinates matrix (converting \(\cB_1\) coordinates to \(\cB_2\) coordinates), and these change of basis matrices come in pairs
\begin{equation*} [I]_{\cB_1}^{\cB_2} \text{ and } [I]_{\cB_2}^{\cB_1}. \end{equation*}
Now in our case, both matrices are easy to compute:
\begin{equation*} [I]_{\cB_1}^{\cB_2}= \begin{bmatrix} 1&0&0\\ -1&1&0\\ 0&-1&1\\ \end{bmatrix} \text{ and } [I]_{\cB_2}^{\cB_1}= \begin{bmatrix} 1&0&0\\ 1&1&0\\ 1&1&1\\ \end{bmatrix}, \end{equation*}
and it should come as no surprise that the columns of the second are just the elements of the \(\cB_2\)-basis in standard coordinates. But the nice part is that the first matrix is related to the second, affording a means to compute it when computations by hand are not so simple.
Using Equation (1.4.4) on the matrix of a composition
\begin{equation*} [TS]_{\cB_U}^{\cB_W} = [T]_{\cB_V}^{\cB_W}[S]_{\cB_U}^{\cB_V}, \end{equation*}
with \(V=U=W\text{,}\) and \(T=S=I\text{,}\) we arrive at
\begin{equation*} \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ \end{bmatrix}= [I]_{\cB_1}^{\cB_1} = [I]_{\cB_1}^{\cB_2}[I]_{\cB_2}^{\cB_1}, \end{equation*}
that is, \([I]_{\cB_1}^{\cB_2}\) and \([I]_{\cB_2}^{\cB_1}\) are inverse matrices, and this is always the case.
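A quick numerical check of this pair of change of basis matrices (Python with NumPy; here \(P\) denotes \([I]_{\cB_2}^{\cB_1}\)):

```python
import numpy as np

# Columns of P are the B2 basis vectors in standard (B1) coordinates,
# so P = [I]_{B2}^{B1}; its inverse converts the other way.
P = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
P_inv = np.linalg.inv(P)  # equals [I]_{B1}^{B2}
print(P_inv)
# [[ 1.  0.  0.]
#  [-1.  1.  0.]
#  [ 0. -1.  1.]]

# Convert [v]_{B1} = (1, 3, 6) to B2 coordinates:
print(P_inv @ np.array([1.0, 3.0, 6.0]))  # [1. 2. 3.]
```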
Finally we apply this to the matrix of a linear map \(T:V\to V\) on a finite-dimensional vector space \(V\) with bases \(\cB_1\) and \(\cB_2\text{:}\)

Theorem 1.4.9.

\begin{equation*} [T]_{\cB_2} = [I]_{\cB_1}^{\cB_2}[T]_{\cB_1}[I]_{\cB_2}^{\cB_1}. \end{equation*}

Example 1.4.10.

We often express the matrix of a linear map in terms of the standard basis, but many times such a matrix is complicated and does not easily reveal what the linear map is actually doing. For example, using our bases \(\cB_1\) and \(\cB_2\) for \(\R^3\) given above, suppose we have a linear map \(T:\R^3 \to \R^3\) whose matrix with respect to the standard basis \(\cB_1\) is
\begin{equation*} [T]_{\cB_1}=\begin{bmatrix} 4&0&0\\ -1&5&0\\ -1&-1&6\\ \end{bmatrix}. \end{equation*}
It is easy enough to compute the value of \(T\) on a given vector: recall from equation (1.4.3) that the columns of the above matrix are simply \(T(e_1), T(e_2), T(e_3)\) written with respect to the standard basis \(\cB_1\) for \(\R^3\text{.}\)
However, using Theorem 1.4.9, we compute
\begin{equation*} [T]_{\cB_2}=\begin{bmatrix} 4&0&0\\ 0&5&0\\ 0&0&6\\ \end{bmatrix}, \end{equation*}
which makes much clearer how the map \(T\) is acting on \(\R^3\text{:}\) it stretches by factors of 4, 5, and 6 in the directions of \(v_1, v_2, v_3\text{.}\)
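The change of basis computation can be checked numerically; a sketch in Python with NumPy (writing \(P\) for \([I]_{\cB_2}^{\cB_1}\), whose columns are the \(\cB_2\) basis vectors):

```python
import numpy as np

# [T]_{B1}: the matrix of T in standard coordinates.
T1 = np.array([[ 4,  0, 0],
               [-1,  5, 0],
               [-1, -1, 6]], dtype=float)

# P = [I]_{B2}^{B1}: columns are the B2 basis vectors.
P = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)

# Change of basis: [T]_{B2} = P^{-1} [T]_{B1} P.
T2 = np.linalg.inv(P) @ T1 @ P
print(T2)
# [[4. 0. 0.]
#  [0. 5. 0.]
#  [0. 0. 6.]]
```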