
Section 4.3 Coordinates and Matrices

While many linear transformations come to us as maps between abstract spaces, using a basis allows us to convert from the abstract setting to matrices.

Subsection 4.3.1 Coordinate Vectors

Let \(V\) be a finite-dimensional vector space over a field \(F\) with basis \(\cB=\{v_1, \dots, v_n\}.\) Since \(\cB\) is a spanning set for \(V\text{,}\) every vector \(v\in V\) can be expressed as a linear combination of the vectors in \(\cB\text{:}\) \(v = a_1 v_1+ \cdots + a_n v_n\) with \(a_i\in F.\)
And, since \(\cB\) is a linearly independent set, the coefficients \(a_i\) are uniquely determined. We record those uniquely determined coefficients as

Definition 4.3.1.

The coordinate vector of \(v= a_1 v_1+ \cdots + a_n v_n\) with respect to the ordered basis \(\cB=\{v_1, \dots, v_n\}\) is denoted as the column vector:
\begin{equation} [v]_\cB = \begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}\tag{4.3.1} \end{equation}
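Finding a coordinate vector amounts to solving a linear system: if the basis vectors are placed as the columns of a matrix \(B\text{,}\) then \([v]_\cB\) is the unique solution of \(Ba = v\text{.}\) Here is a small numerical sketch in \(\R^3\) (the basis below is an illustrative choice, not one from the text):

```python
import numpy as np

# An illustrative basis B = {v1, v2, v3} of R^3, written as the
# columns of a matrix; any linearly independent triple works.
B = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]])

v = np.array([3, 2, 1])

# The coordinate vector [v]_B solves B @ a = v; uniqueness of the
# solution is exactly the linear independence of the basis.
coords = np.linalg.solve(B, v)
print(coords)
```

Reassembling \(v\) as `B @ coords` recovers the original vector, which is the defining property of the coordinate vector.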

Remark 4.3.2.

It is important to note that when we talk about coordinates, we are actually fixing an order on the basis. Up to now an ordering on the basis was unnecessary, but for coordinates it is essential, as the following example shows.
For example, consider the standard basis \(\cB=\{e_1, e_2, e_3\}\) for \(\R^3.\) If we write the vector
\begin{equation*} [v]_\cB = \ba{r}1\\2\\3\ea\text{,} \end{equation*}
this means \(v = 1e_1+2e_2+3e_3,\) but if \(\cB' = \{e_2,e_3, e_1\}\text{,}\) then
\begin{equation*} [v]_{\cB'} = \ba{r}2\\3\\1\ea\text{,} \end{equation*}
and \(v = 2e_2 + 3e_3 + 1e_1\text{,}\) the same vector as before even though the coordinate vectors differ. So it is critical to know the order of the basis elements.
You might object and insist there is a natural order to that basis, but there are a number of arguments that suggest this is far from universally true. We give one here and one in the next section. Suppose that our vector space is \(P_n(\R).\) A standard basis consists of the elements \(1, x, x^2, \dots, x^n\text{.}\) Which is the natural order: \(\{1, x, x^2, \dots, x^n\}\) or \(\{x^n, x^{n-1}, \dots, x,1\}\text{?}\) Both choices have merit, but clearly affect how to interpret \(v\in P_n(\R)\) if
\begin{equation*} [v]_\cB = \ba{r}1\\2\\\vdots\\n\ea. \end{equation*}

Subsection 4.3.2 Matrix of a linear map

Let \(V\) and \(W\) be two finite-dimensional vector spaces defined over a field \(F.\) Suppose that \(\dim V = n\) and \(\dim W = m,\) and we choose ordered bases \(\cB=\{v_1, \dots, v_n\}\) for \(V,\) and \(\cC=\{w_1, \dots, w_m\}\) for \(W.\) By Theorem 2.6.4, any linear map \(T:V\to W\) is completely determined by the set of vectors \(\{T(v_1), \dots, T(v_n)\}\text{,}\) and since \(\cC\) is a basis for \(W,\) for each index \(j\text{,}\) there are uniquely determined scalars \(a_{ij} \in F\) with
\begin{equation*} T(v_j) = \sum_{i=1}^m a_{ij}w_i. \end{equation*}
We record that data as a matrix \(A\) with \(A_{ij}=a_{ij}.\) We define the matrix of \(T\) with respect to the bases \(\cB\) and \(\cC\) as
\begin{equation} [T]_\cB^\cC = A = [a_{ij}]\tag{4.3.2} \end{equation}

Observation 4.3.3.

When constructing the matrix of a linear map, it is very useful to recognize the connection with coordinate vectors. For example in constructing the matrix \([T]_{\cB}^{\cC}\) in (4.3.2), the \(j\)th column of the matrix is the coordinate vector \([T(v_j)]_{\cC}\text{.}\) Thus a mnemonic device for remembering how to construct the matrix of a linear map is that
\begin{equation} [T]_\cB^\cC = A = [a_{ij}] = \ba{cccc} |\amp |\amp \cdots \amp | \\ \left[T(v_1)\right]_\cC \amp [T(v_2)]_\cC \amp \cdots \amp [T(v_n)]_\cC \\ |\amp |\amp \cdots\amp |\ea.\tag{4.3.3} \end{equation}
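The mnemonic (4.3.3) translates directly into code: apply \(T\) to each basis vector and stack the resulting coordinate vectors as columns. The map below is an illustrative choice (not from the text), \(T:\R^2\to\R^3\text{,}\) \(T(x,y)=(x+y,\ x-y,\ 2x)\text{,}\) with the standard bases on both sides so that \([w]_\cC\) is just \(w\) itself:

```python
import numpy as np

# Mnemonic (4.3.3) in code: the j-th column of [T]_B^C is [T(v_j)]_C.
def T(v):
    x, y = v
    return np.array([x + y, x - y, 2 * x])

basis_B = [np.array([1, 0]), np.array([0, 1])]  # standard basis of R^2

# Stack the coordinate vectors [T(v_j)]_C as columns of the matrix.
A = np.column_stack([T(vj) for vj in basis_B])
print(A)
```

Once \(A\) is built, `A @ v` computes \([T(v)]_\cC\) for any coordinate vector `v`.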

Example 4.3.4. A standard projection.

Let’s define a map \(T:\R^3\to \R^3\) which geometrically takes a point in three-space with coordinates \((x,y,z)\) and projects it orthogonally onto the \(xy\)-plane:
\begin{equation*} T(x,y,z) = (x,y,0). \end{equation*}
The matrix of \(T\) with respect to the standard ordered basis for \(\R^3\) is
\begin{equation*} [T]_\cE = \ba{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0 \amp 0\ea. \end{equation*}

Example 4.3.5. A different projection.

Let’s define a map from \(T:\R^3\to \R^3\) which geometrically takes a point in three space with coordinates \((x,y,z)\) and projects orthogonally onto the plane \(x+y+z=0.\) What would the matrix of \(T\) look like with respect to the standard basis?
Let us build a new basis \(\cB = \{v_1, v_2, v_3\}\) with \(v_1, v_2\) in the plane and orthogonal to each other (analogous to the \(x,y\) axes), and the third vector \(v_3\) orthogonal to the plane. We can read the normal vector from the equation of the plane \(x+y+z=0\text{,}\) so we set \(v_3 = \ba{r}1\\1\\1\ea\text{.}\) Note that there are infinitely many choices for \(v_1\) and \(v_2\text{:}\) just as in the previous example, we could have taken the standard basis vectors \(e_1\) and \(e_2\) and rotated that frame about the \(z\)-axis. We choose
\begin{equation*} \cB=\left\{v_1=\ba{r}1\\-1\\0\ea,\ v_2=\ba{r}1\\1\\-2\ea, \ v_3=\ba{r}1\\1\\1\ea\right\}. \end{equation*}
The matrix of \(T\) with respect to the basis \(\cB\) which is the natural basis for this problem is
\begin{equation*} [T]_\cB = \ba{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0 \amp 0\ea, \end{equation*}
just as before. Had we insisted on using the standard basis instead, we would see (and confirm below) that the matrix is
\begin{equation*} [T]_\cE = \ba{rrr}2/3\amp -1/3\amp -1/3\\ -1/3\amp2/3\amp-1/3\\-1/3\amp-1/3\amp2/3\ea. \end{equation*}
Had we been given this matrix with no other information, it would have been very difficult to recognize it as the desired projection.
This gives us a very real reason why it is desirable to choose, from the many available bases, one well suited to the problem at hand.
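We can sanity-check the claimed standard-basis matrix numerically: an orthogonal projection onto the plane \(x+y+z=0\) must kill the normal vector, fix every vector in the plane, and satisfy \(T^2=T\text{.}\)

```python
import numpy as np

# The standard-basis matrix [T]_E claimed in the example above.
TE = np.array([[ 2, -1, -1],
               [-1,  2, -1],
               [-1, -1,  2]]) / 3

n = np.array([1, 1, 1])      # normal vector to the plane x + y + z = 0
v1 = np.array([1, -1, 0])    # a vector lying in the plane

print(TE @ n)      # the normal is sent to the zero vector
print(TE @ v1)     # vectors in the plane are fixed
print(TE @ TE - TE)  # a projection satisfies T^2 = T, so this is zero
```

All three checks confirm that the matrix really is the orthogonal projection onto the plane.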

Example 4.3.6. The companion matrix of a polynomial.

Let \(f=x^n+ a_{n-1}x^{n-1}+\cdots + a_0\) be a polynomial with coefficients in a field \(F\text{.}\) Let \(V\) be a finite-dimensional vector space over the field \(F\) with basis \(\cB=\{v_1, \dots, v_n\}.\) Define a linear map \(T:V \to V\) (called an endomorphism or linear operator since the domain and codomain are the same vector space) by:
\begin{gather*} T(v_1) = v_2\\ T(v_2) = v_3\\ \vdots\\ T(v_{n-1}) = v_n\\ T(v_n) = -a_0v_1 -a_1v_2- \cdots - a_{n-1}v_n. \end{gather*}
The matrix of \(T\) with respect to the basis \(\cB\) is called the companion matrix of \(f\), and is given by
\begin{equation*} [T]_\cB := [T]_\cB^\cB = \begin{bmatrix} 0&0&0&\cdots&0&-a_0\\ 1&0&0&\cdots&0&-a_1\\ 0&1&0&\cdots&0&-a_2\\ 0&0&&\ddots&0&\vdots\\ 0&0&\cdots&0&1&-a_{n-1}\\ \end{bmatrix} \end{equation*}
Advanced comment: One can show that both the minimal polynomial and the characteristic polynomial of this companion matrix are equal to the polynomial \(f.\) The companion matrix is an essential component in the rational canonical form of an arbitrary square matrix \(A\text{,}\) where the polynomials \(f\) that occur are the invariant factors associated to \(A.\)
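The construction above is easy to carry out in code, and the advanced comment can be spot-checked numerically: `np.poly` returns the characteristic polynomial of a square matrix (leading coefficient first), which for a companion matrix should reproduce \(f\text{.}\) The polynomial below is an illustrative choice:

```python
import numpy as np

# Companion matrix of f = x^n + a_{n-1} x^{n-1} + ... + a_0, built from
# the coefficient list [a_0, ..., a_{n-1}] as in the definition above.
def companion(coeffs):
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of 1s: T(v_j) = v_{j+1}
    C[:, -1] = [-a for a in coeffs]  # last column records T(v_n)
    return C

# f = x^3 - 2x + 5, i.e. a_0 = 5, a_1 = -2, a_2 = 0
C = companion([5, -2, 0])
print(C)

# Characteristic polynomial coefficients, leading coefficient first:
# should match f, namely [1, 0, -2, 5].
print(np.poly(C))
```

This does not prove the claim, of course, but it is a quick way to convince yourself of it for any particular \(f\text{.}\)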

Subsection 4.3.3 Matrix associated to a composition

Suppose that \(U, V,\) and \(W\) are vector spaces over a field \(F\text{,}\) and \(S:U\to V\) and \(T:V\to W\) are linear maps. Then the composition \(T\circ S\) (usually denoted \(TS\)) is a linear map \(T\circ S: U\to W.\)
Now suppose that all three vector spaces are finite-dimensional, say \(\dim U = n,\) \(\dim V = p,\) and \(\dim W = m\text{,}\) with bases \(\cB_U, \cB_V, \cB_W.\) If we consider the matrices of the corresponding linear maps, we see that the matrix sizes are
\begin{gather*} [S]_{\cB_U}^{\cB_V} \text{ is } p\times n\\ [T]_{\cB_V}^{\cB_W} \text{ is } m\times p\\ [TS]_{\cB_U}^{\cB_W} \text{ is } m\times n \end{gather*}
The fundamental result connecting these is that the matrix of a composition is the product of the matrices:
\begin{equation} [TS]_{\cB_U}^{\cB_W} = [T]_{\cB_V}^{\cB_W}[S]_{\cB_U}^{\cB_V}.\tag{4.3.4} \end{equation}
This result will be of critical importance when we discuss change of basis.
As more or less a special case of the above theorem, we have the corresponding result with coordinate vectors: the coordinate vector of \(T(v)\) is the product of the matrix of \(T\) with the coordinate vector of \(v\text{.}\) More precisely, for every \(v\in V\text{,}\)
\begin{equation*} [T(v)]_{\cB_W} = [T]_{\cB_V}^{\cB_W} [v]_{\cB_V}. \end{equation*}

Example 4.3.9.

Let \(V=P_4(\R)\) and \(W=P_3(\R)\) be the vector spaces of polynomials with coefficients in \(\R\) having degree less than or equal to 4 and 3 respectively. Let \(D:V\to W\) be the (linear) derivative map, \(D(f) = f'\text{,}\) where \(f'\) is the usual derivative for polynomials. Let’s take standard bases for \(V\) and \(W,\) namely \(\cB_V=\{1, x, x^2, x^3, x^4\}\) and \(\cB_W=\{1, x, x^2, x^3\}.\) One computes:
\begin{equation*} [D]_{\cB_V}^{\cB_W}= \begin{bmatrix} 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&3&0\\ 0&0&0&0&4\\ \end{bmatrix} \end{equation*}
Let \(f=2+3x+5x^3.\) We know of course that \(D(f) = 3+15x^2,\) but we want to see this with coordinate vectors. We know that
\begin{equation*} [f]_{\cB_V} = \begin{bmatrix}2\\3\\0\\5\\0\end{bmatrix} \text{ and } [D(f)]_{\cB_W} = \begin{bmatrix}3\\0\\15\\0\end{bmatrix} \end{equation*}
and verify that
\begin{equation*} [D(f)]_{\cB_W} = \begin{bmatrix}3\\0\\15\\0\end{bmatrix}= [D]_{\cB_V}^{\cB_W}[f]_{\cB_V}= \begin{bmatrix} 0&1&0&0&0\\ 0&0&2&0&0\\ 0&0&0&3&0\\ 0&0&0&0&4\\ \end{bmatrix}\begin{bmatrix}2\\3\\0\\5\\0\end{bmatrix}. \end{equation*}
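The verification in this example is a one-line matrix-vector product in numpy, using the data from the text:

```python
import numpy as np

# The derivative matrix [D]_{B_V}^{B_W} from Example 4.3.9, with respect
# to the standard bases {1, x, x^2, x^3, x^4} and {1, x, x^2, x^3}.
D = np.array([[0, 1, 0, 0, 0],
              [0, 0, 2, 0, 0],
              [0, 0, 0, 3, 0],
              [0, 0, 0, 0, 4]])

f = np.array([2, 3, 0, 5, 0])  # [f]_{B_V} for f = 2 + 3x + 5x^3

print(D @ f)  # [D(f)]_{B_W}, which should encode 3 + 15x^2
```

The product reads off the coordinate vector of \(f' = 3 + 15x^2\) with respect to \(\cB_W\text{,}\) as claimed.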

Subsection 4.3.4 Change of basis

A change of basis or change of coordinates is an enormously useful concept. It plays a pivotal role in diagonalization, triangularization, and more generally in putting a matrix into a canonical form. Its practical uses are easy to envision. We may think of the usual orthonormal basis of \(\R^3\) along the coordinate axes as the standard basis for \(\R^3\text{,}\) but when one wants to create computer graphics that project the image of an object onto a plane, the natural frame includes a direction parallel to the observer's line of sight, which defines a natural basis for that application.
First, let’s understand what we are doing intuitively. Suppose our vector space \(V = \R^3\text{,}\) and we have two bases for it with elements written as row vectors, \(\cB_1=\{e_1=(1,0,0), e_2=(0,1,0), e_3=(0,0,1)\}\) and \(\cB_2=\{v_1=(1,1,1), v_2=(0,1,1), v_3=(0,0,1)\}.\)

Checkpoint 4.3.10. Is \(\cB_2\) really a basis?

Let’s recall a useful fact that allows us to quickly verify that \(\cB_2\) is actually a basis for \(\R^3.\) While in principle we must check the set is both linearly independent and spans \(\R^3\text{,}\) since we know the dimension of \(\R^3\text{,}\) and the set has 3 elements, it follows that either condition implies the other.
Hint.
To show \(\cB_2\) spans, it is enough to show that \(\Span(\cB_2)\) contains a spanning set for \(\R^3\text{.}\)
Normally when we think of a vector in \(\R^3\text{,}\) we think of it as a coordinate vector with respect to the standard basis, so that a vector we write as \(v=(a,b,c)\) is really the coordinate vector with respect to the standard basis:
\begin{equation*} v=[v]_{\cB_1} = \begin{bmatrix}a\\b\\c\\\end{bmatrix} \end{equation*}
The problem arises when we want to find \([v]_{\cB_2}.\) For some vectors this is easy. For example,
\begin{equation*} [v]_{\cB_1} =\begin{bmatrix}1\\2\\3\end{bmatrix} \text{ is equivalent to } [v]_{\cB_2} = \begin{bmatrix}1\\1\\1\end{bmatrix}, \end{equation*}
or
\begin{equation*} [v]_{\cB_1} =\begin{bmatrix}1\\3\\6\end{bmatrix} \text{ is equivalent to } [v]_{\cB_2} = \begin{bmatrix}1\\2\\3\end{bmatrix}, \end{equation*}
but what is going on in general?
Recall from Corollary 4.3.8, that for a linear transformation \(T:V\to W\text{,}\) and \(v \in V\) that
\begin{equation*} [T(v)]_{\cB_W} = [T]_{\cB_V}^{\cB_W} [v]_{\cB_V}. \end{equation*}
In our current situation \(V=W\) and \(T\) is the identity transformation, \(T(v) = v\text{,}\) which we shall denote by \(I\text{,}\) so that
\begin{equation*} [v]_{\cB_2} = [I]_{\cB_1}^{\cB_2} [v]_{\cB_1}. \end{equation*}
The matrix \([I]_{\cB_1}^{\cB_2}\) is called the change of basis or change of coordinates matrix (converting \(\cB_1\) coordinates to \(\cB_2\) coordinates), and these change of basis matrices come in pairs
\begin{equation*} [I]_{\cB_1}^{\cB_2} \text{ and } [I]_{\cB_2}^{\cB_1}. \end{equation*}
Now in our case, both matrices are easy to compute:
\begin{equation*} [I]_{\cB_1}^{\cB_2}= \ba{rrr} 1&0&0\\ -1&1&0\\ 0&-1&1\\ \ea \text{ and } [I]_{\cB_2}^{\cB_1}= \begin{bmatrix} 1&0&0\\ 1&1&0\\ 1&1&1\\ \end{bmatrix}, \end{equation*}
and it should come as no surprise that the columns of the second are just the elements of the \(\cB_2\)-basis written in standard coordinates. The nice part is that the first matrix is closely related to the second, affording a means to compute it when computations by hand are not so simple.
Using Equation (4.3.4) on the matrix of a composition
\begin{equation*} [TS]_{\cB_U}^{\cB_W} = [T]_{\cB_V}^{\cB_W}[S]_{\cB_U}^{\cB_V}, \end{equation*}
with \(V=U=W\text{,}\) and \(T=S=I\text{,}\) we arrive at
\begin{equation*} \begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1\\ \end{bmatrix}= [I]_{\cB_1}^{\cB_1} = [I]_{\cB_1}^{\cB_2}[I]_{\cB_2}^{\cB_1}, \end{equation*}
that is \([I]_{\cB_1}^{\cB_2}\) and \([I]_{\cB_2}^{\cB_1}\) are inverse matrices, and this is always the case.
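Numerically, this means we never have to solve for \([I]_{\cB_1}^{\cB_2}\) by hand: list the \(\cB_2\)-basis vectors as columns to get \([I]_{\cB_2}^{\cB_1}\text{,}\) then invert. A quick check with the bases from this subsection:

```python
import numpy as np

# The B_2-basis vectors from above, as columns: this is [I]_{B_2}^{B_1}.
P = np.column_stack([[1, 1, 1], [0, 1, 1], [0, 0, 1]])

# Its inverse converts standard coordinates to B_2 coordinates.
P_inv = np.linalg.inv(P)  # this is [I]_{B_1}^{B_2}
print(P_inv)

# Converting the example vector (1, 2, 3) into B_2 coordinates:
print(P_inv @ np.array([1, 2, 3]))  # gives (1, 1, 1), as claimed above
```

The inverse reproduces the matrix \([I]_{\cB_1}^{\cB_2}\) displayed above, and applying it to \((1,2,3)\) recovers the \(\cB_2\)-coordinates \((1,1,1)\) from the earlier example.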
Finally we apply this to the matrix of a linear map \(T:V\to V\) on a finite-dimensional vector space \(V\) with bases \(\cB_1\) and \(\cB_2\text{:}\)
\begin{equation*} [T]_{\cB_2} = [I]_{\cB_1}^{\cB_2}\,[T]_{\cB_1}\,[I]_{\cB_2}^{\cB_1}. \end{equation*}

Example 4.3.13. A simple example.

We often express the matrix of a linear map in terms of the standard basis, but many times such a matrix is complicated and does not easily reveal what the linear map is actually doing. For example, using our bases \(\cB_1\) and \(\cB_2\) for \(\R^3\) given above, suppose we have a linear map \(T:\R^3 \to \R^3\) whose matrix with respect to the standard basis \(\cB_1\) is
\begin{equation*} [T]_{\cB_1}=\ba{rrr} 4&0&0\\ -1&5&0\\ -1&-1&6\\ \ea. \end{equation*}
It is easy enough to compute the value of \(T\) on a given vector: recall from equation (4.3.3) that the columns of the above matrix are simply \(T(e_1), T(e_2), T(e_3)\) written with respect to the standard basis \(\cB_1\) for \(\R^3\text{.}\)
However, using Theorem 4.3.12, we compute
\begin{equation*} [T]_{\cB_2}=\begin{bmatrix} 4&0&0\\ 0&5&0\\ 0&0&6\\ \end{bmatrix}, \end{equation*}
which makes much clearer how the map \(T\) is acting on \(\R^3\text{:}\) stretching by factors of 4, 5, and 6 in the directions of \(v_1, v_2, v_3\) respectively.
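The similarity computation in this example is easy to reproduce in numpy, taking \(P = [I]_{\cB_2}^{\cB_1}\) whose columns are the \(\cB_2\)-basis vectors in standard coordinates:

```python
import numpy as np

# [T]_{B_1} from the example, and P = [I]_{B_2}^{B_1}.
A = np.array([[ 4,  0, 0],
              [-1,  5, 0],
              [-1, -1, 6]])
P = np.column_stack([[1, 1, 1], [0, 1, 1], [0, 0, 1]])

# [T]_{B_2} = [I]_{B_1}^{B_2} [T]_{B_1} [I]_{B_2}^{B_1} = P^{-1} A P
print(np.linalg.inv(P) @ A @ P)  # diag(4, 5, 6)
```

The result is the diagonal matrix \(\operatorname{diag}(4,5,6)\text{,}\) confirming that the columns of \(P\) are eigenvectors of \(A\text{.}\)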
We return to the example of the orthogonal projection from above and show how we computed the matrix of the transformation with respect to the standard basis.

Example 4.3.14. The details from our other orthogonal projection.

Recall that we wanted to define a map from \(T:\R^3\to \R^3\) which geometrically takes a point in three space with coordinates \((x,y,z)\) and projects orthogonally onto the plane \(x+y+z=0.\)
We constructed a basis \(\cB = \{v_1, v_2, v_3\}\) with \(v_1, v_2\) in the plane and orthogonal to each other, and the third vector \(v_3\) orthogonal to the plane. We chose
\begin{equation*} \cB=\left\{v_1=\ba{r}1\\-1\\0\ea,\ v_2=\ba{r}1\\1\\-2\ea, \ v_3=\ba{r}1\\1\\1\ea\right\}. \end{equation*}
The matrix of \(T\) with respect to the basis \(\cB\) which is the natural basis for this problem is
\begin{equation*} [T]_\cB = \ba{rrr}1\amp 0\amp 0\\0\amp 1\amp 0\\0\amp 0 \amp 0\ea, \end{equation*}
just as with the standard orthogonal projection onto the \(xy\)-plane.
So to deduce \([T]_\cE\) from \([T]_\cB\text{,}\) we need to compute the change of basis matrices \([I]_\cB^\cE\) and \([I]_\cE^\cB\text{.}\) The matrix \([I]_\cB^\cE\) is the easy one since we are just listing the new basis \(\{v_1, v_2, v_3\}\) as its columns, so
\begin{equation*} [I]_\cB^\cE = \ba{rrr}1\amp1\amp1\\-1\amp1\amp1\\0\amp -2\amp 1\ea. \end{equation*}
To compute the other, we have to find the inverse of \([I]_\cB^\cE\text{,}\) which we can do by row reducing the augmented matrix \(\left[ [I]_\cB^\cE \mid I_3\right]\text{.}\) We obtain:
\begin{equation*} \ba{rrrrrr}1 \amp 1 \amp 1 \amp 1 \amp 0 \amp 0 \\ -1 \amp 1 \amp 1 \amp 0 \amp 1 \amp 0 \\ 0 \amp -2 \amp 1 \amp 0 \amp 0 \amp 1 \ea \mapsto \ba{rrrrrr}1 \amp 0 \amp 0 \amp \frac{1}{2} \amp -\frac{1}{2} \amp 0 \\ 0 \amp 1 \amp 0 \amp \frac{1}{6} \amp \frac{1}{6} \amp -\frac{1}{3} \\ 0 \amp 0 \amp 1 \amp \frac{1}{3} \amp \frac{1}{3} \amp \frac{1}{3}\ea \end{equation*}
Thus
\begin{equation*} [I]_\cE^\cB = \ba{rrr}\frac{1}{2} \amp -\frac{1}{2} \amp 0 \\ \frac{1}{6} \amp \frac{1}{6} \amp -\frac{1}{3} \\ \frac{1}{3} \amp \frac{1}{3} \amp \frac{1}{3}\ea \end{equation*}
We can now compute that
\begin{equation*} [T]_\cE = [I]_\cB^\cE [T]_\cB [I]_\cE^\cB = \ba{rrr}2/3\amp -1/3\amp -1/3\\ -1/3\amp2/3\amp-1/3\\-1/3\amp-1/3\amp2/3\ea. \end{equation*}
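The whole computation in this example compresses to three lines of numpy, with \(P = [I]_\cB^\cE\) built from the basis \(\cB\) and the inverse standing in for \([I]_\cE^\cB\text{:}\)

```python
import numpy as np

# Reproducing [T]_E = [I]_B^E [T]_B [I]_E^B with numpy.
P = np.column_stack([[1, -1, 0], [1, 1, -2], [1, 1, 1]])  # [I]_B^E
TB = np.diag([1, 1, 0])                                   # [T]_B

TE = P @ TB @ np.linalg.inv(P)
print(TE)  # matches the matrix of 2/3s and -1/3s above
```

The output agrees, entry by entry, with the matrix \([T]_\cE\) computed by hand above.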