
Section 1.8 Exercises (with solutions)

Exercises

1.

Let \(H\) be the subset of \(\R^4\) defined by
\begin{equation*} H = \left\{\ba{r}x_1\\x_2\\x_3\\x_4\ea : x_1 + x_2 + x_3 + x_4 = 0\right\}. \end{equation*}
Either show that \(H\) is a subspace of \(\R^4\text{,}\) or demonstrate how it fails to have a necessary property.
Solution.
The easiest way to show that \(H\) is a subspace is to note that it is the kernel of a linear map. Let \(A\) be the \(1\times 4\) matrix \(A =[1 \ 1\ 1\ 1]\text{.}\) Then
\begin{equation*} H = \{x \in \R^4\mid Ax=0\}, \end{equation*}
is the nullspace of \(A,\) which is always a subspace.
Alternatively, of course, you could check that \(0\) is in the set and that it is closed under addition and scalar multiplication.
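For readers who like to double-check such claims computationally, here is a minimal sketch (assuming Python with SymPy, which is not part of the text) confirming that the nullspace of \(A\) is a three-dimensional subspace of \(\R^4\text{:}\)

from sympy import Matrix

# Assumed tooling: SymPy. H is the nullspace of the 1x4 matrix A = [1 1 1 1].
A = Matrix([[1, 1, 1, 1]])
basis = A.nullspace()              # a list of independent vectors spanning H
print(len(basis))                  # 3, the dimension of H
print(all((A * v).is_zero_matrix for v in basis))   # True: each satisfies the equation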

2.

Suppose that \(T:\R^3\to \R^3\) is a linear map satisfying
\begin{equation*} T\left(\ba{r}3\\0\\0\ea\right) = \ba{r}6\\-3\\6\ea, T\left(\ba{r}1\\1\\0\ea\right) = \ba{r}2\\0\\1\ea, \text{ and } T\left(\ba{r}0\\0\\2\ea\right) = \ba{r}4\\6\\2\ea. \end{equation*}
(a)
If the standard basis for \(\R^3\) is \(\cE=\{e_1,e_2,e_3\},\) determine
\begin{equation*} T(e_1), T(e_2), \text{ and } T(e_3). \end{equation*}
Solution.
Using linearity, we are given \(T(3e_1) =3T(e_1)= \ba{r}6\\-3\\6\ea,\) so \(T(e_1)=\ba{r}2\\-1\\2\ea. \)
We are given \(T(e_1+e_2) = T(e_1)+ T(e_2)=\ba{r}2\\0\\1\ea,\) so
\begin{equation*} T(e_2) = T(e_1+e_2)-T(e_1) = \ba{r}2\\0\\1\ea - \ba{r}2\\-1\\2\ea = \ba{r}0\\1\\-1\ea. \end{equation*}
Finally, \(T(2e_3) = \ba{r}4\\6\\2\ea,\) so \(T(e_3) = \ba{r}2\\3\\1\ea.\)
(b)
Find \(T\left(\ba{r}1\\1\\1\ea\right).\)
Solution.
We compute
\begin{equation*} T\left(\ba{r}1\\1\\1\ea\right) = T(e_1)+T(e_2)+T(e_3) = \ba{r}4\\3\\2\ea. \end{equation*}
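One can also recover the standard matrix of \(T\) from the given data and confirm both parts at once; a sketch, assuming Python with SymPy:

from sympy import Matrix

# Columns of M are the given input vectors; columns of N are their images under T.
M = Matrix([[3, 1, 0], [0, 1, 0], [0, 0, 2]])
N = Matrix([[6, 2, 4], [-3, 0, 6], [6, 1, 2]])
T = N * M.inv()                 # standard matrix: columns are T(e1), T(e2), T(e3)
print(T)                        # [[2, 0, 2], [-1, 1, 3], [2, -1, 1]]
print(T * Matrix([1, 1, 1]))    # (4, 3, 2), agreeing with part (b)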

3.

Consider the upper triangular matrix
\begin{equation*} A = \ba{rrr}1\amp x\amp z\\0\amp 1\amp y\\0\amp0\amp1\ea, \end{equation*}
with \(x,y,z \in \R.\)
(a)
Give as many reasons as you can that show the matrix \(A\) is invertible.
Solution.
We see that \(A\) is already in echelon (not reduced row-echelon) form, which tells us there is a pivot in each column. Since there are only three variables, the system \(Ax=0\) has only the trivial solution, so the linear map \(x\mapsto Ax\) is injective. Three pivots also mean the column space is spanned by three linearly independent vectors, so it is all of \(\R^3.\) So the linear map is bijective, hence invertible.
One could also say that since the RREF of \(A\) is the identity matrix, it is invertible.
If you know about determinants, you could say the determinant equals 1, hence is nonzero, which means \(A\) is invertible.
(b)
Find the inverse of the matrix \(A.\)
Solution.
We row-reduce
\begin{align*} \ba{rrr|rrr}1\amp x\amp z\amp1\amp0\amp0\\ 0\amp 1\amp y\amp 0\amp 1\amp0\\ 0\amp 0\amp 1\amp 0\amp 0\amp 1\ea \amp\mapsto \ba{rrr|rrr}1\amp x\amp 0\amp 1\amp 0\amp -z\\ 0\amp 1\amp 0\amp 0 \amp 1 \amp -y\\ 0\amp0\amp1\amp0\amp0\amp1\ea\\ \amp\mapsto \ba{rrr|rrc} 1\amp0\amp0\amp1\amp -x\amp -z+xy\\ 0\amp 1\amp 0 \amp 0\amp 1\amp -y\\ 0\amp0\amp1\amp0\amp0\amp1\ea. \end{align*}
So
\begin{equation*} A^{-1} = \ba{rrc}1\amp -x\amp -z+xy\\ 0\amp 1\amp -y\\0\amp0\amp1\ea. \end{equation*}
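The symbolic computation can be checked directly; a sketch assuming Python with SymPy:

from sympy import Matrix, symbols

x, y, z = symbols('x y z')
A = Matrix([[1, x, z], [0, 1, y], [0, 0, 1]])
print(A.det())    # 1, nonzero, so A is invertible for every choice of x, y, z
print(A.inv())    # [[1, -x, x*y - z], [0, 1, -y], [0, 0, 1]], matching the answer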

4.

Consider the linear transformation \(T:\R^5\to \R^4\) given by \(T(x) = Ax\) where \(A\) and its reduced row-echelon form \(R\) are given by:
\begin{equation*} A=\ba{rrrrr} 1 \amp -1 \amp 2 \amp 6 \amp -3 \\ 2 \amp -1 \amp 0 \amp 7 \amp 10 \\ -2 \amp 3 \amp -7 \amp -15 \amp 17 \\ 2 \amp -2 \amp 2 \amp 8 \amp 5 \ea \text{ and } R= \ba{rrrrr} 1 \amp 0 \amp 0 \amp 5 \amp 0 \\ 0 \amp 1 \amp 0 \amp 3 \amp 0 \\ 0 \amp 0 \amp 1 \amp 2 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \amp 1 \ea. \end{equation*}
(a)
Determine \(\ker T\text{,}\) the kernel of \(T.\)
Solution.
The kernel of \(T\) is the nullspace of \(A,\) which we know is the same as the nullspace of \(R\text{,}\) which we can read off:
\begin{equation*} \ba{r} x_1\\x_2\\x_3\\x_4\\x_5 \ea= \ba{r} -5x_4\\-3x_4\\-2x_4\\x_4\\0 \ea =x_4\ba{r} -5\\-3\\-2\\1\\0 \ea \end{equation*}
(b)
Determine \(\Im T\text{,}\) the image of \(T.\)
Solution.
Depending upon what you already know, you could observe that the RREF \(R\) has a pivot in each row, which means the columns of \(A\) span all of \(\R^4.\)
Or you may know that looking at \(R\) tells us there are four pivot columns in \(A\text{,}\) meaning the column space is spanned by 4 linearly independent vectors, hence the image is all of \(\R^4\text{.}\)
Or, if you have already learned the rank-nullity theorem, then from the previous part we would know the nullity is one, and so rank-nullity says the rank is \(5-1=4\text{,}\) so the image is a dimension 4 subspace of \(\R^4,\) which is all of \(\R^4.\)
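Both the kernel and the claim about the image are easy to confirm computationally; a sketch assuming Python with SymPy:

from sympy import Matrix

A = Matrix([[1, -1, 2, 6, -3],
            [2, -1, 0, 7, 10],
            [-2, 3, -7, -15, 17],
            [2, -2, 2, 8, 5]])
R, pivots = A.rref()
print(pivots)          # (0, 1, 2, 4): a pivot in every row of R
print(A.nullspace())   # a single vector (-5, -3, -2, 1, 0), spanning ker T
print(A.rank())        # 4, so the image is all of R^4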

5.

Let \(K\) be the set of solutions in \(\R^5\) to the homogeneous linear system
\begin{align*} x_1+x_2+x_3+x_4\phantom{+x_5}\amp =0\\ x_5\amp =0. \end{align*}
(a)
Find a basis \(\cB_0\) for \(K.\)
Solution.
The coefficient matrix for the system is
\begin{equation*} A = \ba{rrrrr}1\amp1\amp1\amp1\amp0\\0\amp0\amp0\amp0\amp1\ea \end{equation*}
which is already in reduced row-echelon form. We see there are two pivots, hence 3 free variables, meaning \(\dim K = 3.\) By inspection (or working out the details of finding all solutions), one finds a basis can be taken to be
\begin{equation*} \cB_0 = \left\{v_1=\ba{r}-1\\1\\0\\0\\0\ea, v_2=\ba{r}-1\\0\\1\\0\\0\ea, v_3=\ba{r}-1\\0\\0\\1\\0\ea\right\}. \end{equation*}
(b)
Extend the basis \(\cB_0\) from the previous part to a basis \(\cB\) for all of \(\R^5.\)
Solution.
To extend a linearly independent set, one must add something not in the original span (see Theorem 1.1.4). There are many correct answers possible, but the vectors
\begin{equation*} v_4 = \ba{r}1\\1\\1\\1\\0\ea \text{ and } v_5=\ba{r}0\\0\\0\\0\\1\ea \end{equation*}
are clearly not in \(K\) since \(v_4\) does not satisfy the first defining equation, and \(v_5\) does not satisfy the second. So thinking algorithmically, \(\cB_0\cup \{v_4\}\) is linearly independent, and \(v_5\) is certainly not in the span of those four vectors since their last coordinates are all zero. Thus we may take (as one possible solution)
\begin{equation*} \cB=\cB_0\cup\{v_4,v_5\}. \end{equation*}
(c)
Define a linear transformation \(T:\R^5\to \R^5\) with kernel \(K\) and image equal to the set of all vectors with \(x_3=x_4=x_5=0.\)
Solution.
By Theorem 1.1.6, a linear map is uniquely defined by its action on a basis. It should be clear that the desired image is spanned by the standard basis vectors \(e_1\) and \(e_2.\) So with the given basis \(\cB=\{v_1, \dots, v_5\},\) we must have
\begin{equation*} T(v_i) = 0, \text{ for }i=1,2,3\text{,} \end{equation*}
and \(T(v_4), T(v_5)\) linearly independent vectors in the image, say
\begin{equation*} T(v_4) = e_1\text{ and }T(v_5) = e_2. \end{equation*}
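Such a \(T\) can be built explicitly by solving \([T]_\cE B = [\text{images}]\) for the standard matrix, where \(B\) has the basis \(\cB\) as columns. A sketch assuming Python with SymPy:

from sympy import Matrix, zeros

# Basis vectors of B as columns; the first three (v1, v2, v3) span K.
B = Matrix([[-1, -1, -1, 1, 0],
            [1, 0, 0, 1, 0],
            [0, 1, 0, 1, 0],
            [0, 0, 1, 1, 0],
            [0, 0, 0, 0, 1]])
images = Matrix.hstack(zeros(5, 3),              # T(v1) = T(v2) = T(v3) = 0
                       Matrix([1, 0, 0, 0, 0]),  # T(v4) = e1
                       Matrix([0, 1, 0, 0, 0]))  # T(v5) = e2
T = images * B.inv()      # standard matrix [T]_E
print(T.nullspace())      # three vectors spanning K
print(T.columnspace())    # vectors with x3 = x4 = x5 = 0, spanning the image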

6.

Let \(M_{2\times 2}\) be the vector space of \(2\times 2\) matrices with real entries, and fix a matrix \(A=\ba{rr}a\amp b\\c\amp d\ea \in M_{2\times 2}\text{.}\) Consider the linear transformation \(T: M_{2\times 2} \to M_{2\times 2}\) defined by \(T(X) = AX\text{,}\) which (left) multiplies an arbitrary \(2\times 2\) matrix \(X\) by the fixed matrix \(A\text{.}\) Let \(\cE = \left\{ \e_1 = \ba{rr}1\amp 0\\0\amp 0\ea, \e_2 = \ba{rr}0\amp 1\\0\amp 0\ea, \e_3= \ba{rr}0\amp 0\\1\amp 0\ea, \e_4 = \ba{rr}0\amp 0\\0\amp 1\ea \right\}\) be a basis for \(M_{2\times 2}\text{.}\)
(a)
Find the matrix of \(T\) with respect to the basis \(\cE\text{,}\) that is \([T]_\cE\text{.}\)
Solution.
\begin{align*} T(\e_1)\amp = \ba{rr}a\amp b\\c\amp d\ea \ba{rr}1\amp 0\\0\amp 0\ea= \ba{rr}a\amp 0\\c\amp 0\ea= a\e_1 + c\e_3\\ T(\e_2)\amp = \ba{rr}a\amp b\\c\amp d\ea \ba{rr}0\amp 1\\0\amp 0\ea= \ba{rr}0\amp a\\0\amp c\ea= a\e_2 + c\e_4\\ T(\e_3)\amp = \ba{rr}a\amp b\\c\amp d\ea \ba{rr}0\amp 0\\1\amp 0\ea= \ba{rr}b\amp 0\\d\amp 0\ea= b\e_1 + d\e_3\\ T(\e_4)\amp = \ba{rr}a\amp b\\c\amp d\ea \ba{rr}0\amp 0\\0\amp 1\ea= \ba{rr}0\amp b\\0\amp d\ea= b\e_2 + d\e_4 \end{align*}
We now simply record the data as coordinate vectors:
\begin{equation*} [T]_\cE = \ba{rrrr}a\amp 0\amp b\amp 0\\0\amp a\amp 0\amp b\\c\amp 0\amp d\amp 0\\0\amp c\amp 0\amp d \ea \end{equation*}
(b)
Now let \(\cB\) be the basis, \(\cB=\{\e_1, \e_3, \e_2, \e_4\}\text{,}\) that is, the same elements as \(\cE\text{,}\) but with the second and third elements interchanged. Write down the appropriate change of basis matrix, \([I]_\cB^\cE\text{,}\) and use it to compute the matrix of \(T\) with respect to the basis \(\cB,\) that is \([T]_\cB.\)
Solution.
The change of basis matrix is \([I]_\cB^\cE= \ba{rrrr}1\amp 0\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 1\ea\text{,}\) and since this permutation matrix is its own inverse, \([I]_\cE^\cB\) is the same matrix. So
\begin{align*} [T]_\cB\amp =[I]_\cE^\cB\,[T]_\cE\,[I]_\cB^\cE\\ \amp = \ba{rrrr}1\amp 0\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 1\ea\ba{rrrr}a\amp 0\amp b\amp 0\\0\amp a\amp 0\amp b\\c\amp 0\amp d\amp 0\\0\amp c\amp 0\amp d\ea \ba{rrrr}1\amp 0\amp 0\amp 0\\0\amp 0\amp 1\amp 0\\0\amp 1\amp 0\amp 0\\0\amp 0\amp 0\amp 1\ea =\ba{rrrr}a\amp b\amp 0\amp 0\\c\amp d\amp 0\amp 0\\0\amp 0\amp a\amp b\\0\amp 0\amp c\amp d\ea. \end{align*}
Of course it was possible to write down \([T]_\cB\) simply from the information in part (a).
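Since \(\cB\) is just a reordering of \(\cE\text{,}\) the conjugation can also be checked symbolically; a sketch assuming Python with SymPy:

from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
TE = Matrix([[a, 0, b, 0], [0, a, 0, b], [c, 0, d, 0], [0, c, 0, d]])
# Permutation matrix swapping the 2nd and 3rd basis vectors; it is its own inverse.
P = Matrix([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
print(P.inv() * TE * P)   # block diagonal: two copies of [[a, b], [c, d]]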

7.

Write down an explicit linear transformation \(T: \R^2 \to \R^3\) that has as its image the plane \(x-4y + 5z=0\text{.}\) What is the kernel of \(T\text{?}\)
Hint.
Any linear transformation \(T:\R^n \to \R^m\) has the form \(T(x) = Ax\) where \(A\) is the matrix for \(T\) with respect to the standard bases. How is the image of \(T\) related to the matrix \(A\text{?}\)
Solution.
We know that \(T\) can be given by \(T(x) = Ax\) where \(A\) is the \(3\times 2\) matrix whose columns are \(T(e_1)\) and \(T(e_2)\text{.}\) They must span the given plane, so for example, \(A = \ba{rr}4\amp -5\\1\amp 0\\0\amp 1\ea\) will do.
By rank-nullity, the kernel must be trivial.
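A quick check (a sketch assuming Python with SymPy) that the columns lie in the plane, span it, and that the kernel is trivial:

from sympy import Matrix

A = Matrix([[4, -5], [1, 0], [0, 1]])
normal = Matrix([[1, -4, 5]])   # the plane is x - 4y + 5z = 0
print(normal * A)      # [0, 0]: both columns satisfy the plane equation
print(A.rank())        # 2: the columns are independent and span the plane
print(A.nullspace())   # []: the kernel is trivial, as rank-nullity predicts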

8.

Let \(A\in M_n(\R)\) be invertible. Show that the columns of \(A\) form a basis for \(\R^n.\)
Solution.
Since \(A\) is invertible, we know that we can find its inverse by row reducing the augmented matrix
\begin{equation*} [A|I_n] \mapsto [I_n|A^{-1}]. \end{equation*}
In particular, this says that the RREF form of \(A\) is \(I_n.\)
One way to finish is that the information above says that \(Ax=0\) has only the trivial solution, which means that the \(n\) columns of \(A\) are linearly independent. Since there are \(n = \dim \R^n\) of them, by Theorem 1.1.3, they must be a basis.
Another approach is that the linear map \(T:\R^n\to \R^n\) given by \(T(x) = Ax\) is an isomorphism with the inverse map given by \(x\mapsto A^{-1}x\text{.}\) In particular, \(T\) is surjective and its image is the column space of \(A.\) That means that the \(n\) columns of \(A\) span all of \(\R^n,\) and hence must be a basis again by Theorem 1.1.3.

9.

Consider the vector space \(M_2(\R)\) of all \(2\times 2\) matrices with real entries. Let’s consider a number of subspaces and their bases. Let \(\cE=\{E_{11},E_{12},E_{21},E_{22}\} = \{ [\begin{smallmatrix} 1\amp0\\0\amp0 \end{smallmatrix}], [\begin{smallmatrix} 0\amp1\\0\amp0 \end{smallmatrix}], [\begin{smallmatrix} 0\amp0\\1\amp0 \end{smallmatrix}],[\begin{smallmatrix} 0\amp0\\0\amp1 \end{smallmatrix}]\}\) be the standard basis for \(M_2(\R).\)
(a)
Define a map \(T: M_2(\R) \to \R\) by
\begin{equation*} T\left(\ba{rr}a\amp b\\c\amp d\ea\right) = a+d. \end{equation*}
The quantity \(a+d\) (the sum of the diagonal entries) is called the trace of the matrix. You may assume that \(T\) is a linear map. Find a basis for its kernel, \(K.\)
Solution.
It is easy to see that \(T\) is a surjective map, so by the rank-nullity theorem, \(\dim K = 3.\) Extracting from the standard basis, we see that \(E_{12}, E_{21} \in K\text{,}\) so they can serve as part of a basis for \(K.\) We just need to add one more matrix which is not in the span of the two chosen basis vectors.
Certainly, the matrix must have the form \(\ba{rr}a\amp b\\c\amp -a\ea\text{,}\) and we need \(a\ne 0\text{,}\) otherwise our matrix is in the span of the other two vectors. But once we realize that, we may as well assume that \(b=c=0,\) so that \(\ba{rr}1\amp 0\\0\amp -1\ea\) is a nice choice, and since it is not in the span of the other two, adding it still gives us an independent set.
(b)
Now let’s consider the subspace \(S\) consisting of all symmetric matrices, those for which \(A^T = A.\) It should be clear this is a proper subspace, but what is its dimension? Actually finding a basis helps answer that question.
Hint.
If you don’t like the “brute force” tack of the solution, you could take the high road and consider the space of skew-symmetric matrices, those for which \(A^T = -A\text{.}\) It is pretty easy to determine its dimension, and then you can use the fact that every matrix can be written as the sum of a symmetric and a skew-symmetric matrix to tell you the dimension of \(S\text{:}\)
\begin{equation*} A = \frac12(A + A^T) + \frac12 (A-A^T). \end{equation*}
Solution.
Once again, it is clear that some elements of the standard basis are in \(S,\) like \(E_{11}\) and \(E_{22}\text{.}\) Since \(S\) is a proper subspace, its dimension is either 2 or 3, and a few moments' thought convinces you that
\begin{equation*} \ba{rr}0\amp 1\\1\amp 0\ea = E_{12}+E_{21} \end{equation*}
is symmetric and not in the span of the other two, so the three matrices form a linearly independent set in \(S.\) Thus \(\dim S = 3,\) and this must be a basis for \(S.\)
(c)
Now \(K\cap S\) is also a subspace of \(M_2(\R)\text{.}\) Can we find its dimension?
Solution.
Once again, it is useful to know the dimension of the space. Since \(K\cap S \subseteq S,\) its dimension is at most 3; but not every symmetric matrix has zero trace, so \(K\cap S\) is a proper subspace of \(S\) and its dimension is at most 2. Staring at the bases for each of \(S\) and \(K\) separately, we see that both
\begin{equation*} \ba{rr}0\amp 1\\1\amp 0\ea \text { and } \ba{rr}1\amp 0\\0\amp -1\ea \end{equation*}
are in the intersection and are clearly linearly independent, so they must be a basis.
(d)
Extend the basis you found for \(K\cap S\) to bases for \(S\) and for \(K.\)
Solution.
Since \(\dim (K\cap S) = 2\text{,}\) we need only find one matrix not in their span to give a basis for either \(K\) or \(S.\) For \(K\text{,}\) we could choose \(E_{12},\) and for \(S\) we could choose \(E_{11}.\) Knowing the dimension is clearly a powerful tool since it tells you when you are done.
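Working with coordinate vectors relative to \(\cE\) turns all of these dimension counts into rank computations; a sketch assuming Python with SymPy:

from sympy import Matrix

# Coordinates (w.r.t. E11, E12, E21, E22) of the basis found for K ∩ S ...
inter = [Matrix([0, 1, 1, 0]),     # E12 + E21
         Matrix([1, 0, 0, -1])]    # E11 - E22
K_basis = inter + [Matrix([0, 1, 0, 0])]   # ... extended by E12 for K
S_basis = inter + [Matrix([1, 0, 0, 0])]   # ... extended by E11 for S
print(Matrix.hstack(*K_basis).rank())   # 3 = dim K
print(Matrix.hstack(*S_basis).rank())   # 3 = dim S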

10.

The matrix \(B=\ba{rrr}1 \amp 4 \amp -7 \\ -3 \amp -11 \amp 19 \\ -1 \amp -9 \amp 18\ea\) is invertible with inverse \(B^{-1}=\ba{rrr} -27 \amp -9 \amp -1 \\ 35 \amp 11 \amp 2 \\ 16 \amp 5 \amp 1 \ea \text{.}\) Since the columns of \(B\) are linearly independent, they form a basis for \(\R^3:\)
\begin{equation*} \cB=\left\{\ba{r}1\\ -3\\ -1\ea, \ba{r}4\\ -11\\ -9\ea,\ba{r}-7\\ 19\\ 18\ea\right\}. \end{equation*}
Let \(\cE\) be the standard basis for \(\R^3.\)
(a)
Suppose that a vector \(v\in \R^3\) has coordinate vector \([v]_\cB = \ba{r}1\\2\\3\ea.\)
Find \([v]_\cE.\)
Solution.
The matrix \(B\) is the change of basis matrix \([I]_\cB^\cE\) so
\begin{equation*} [v]_\cE = [I]_\cB^\cE [v]_\cB = \ba{rrr}1\amp4\amp-7 \\ -3 \amp -11 \amp 19 \\ -1 \amp -9 \amp 18\ea\ba{r}1\\2\\3\ea = \ba{r}-12\\32\\35\ea \end{equation*}
(b)
Suppose that \(T:\R^3\to \R^3\) is the linear map given by \(T(x) = Ax\) where
\begin{equation*} A = [T]_\cE = \ba{rrr}1\amp 2\amp3\\4\amp5\amp6\\7\amp8\amp9\ea. \end{equation*}
Write down an appropriate product of matrices which equals \([T]_\cB.\)
Solution.
\begin{equation*} [T]_\cB = [I]_\cE^\cB [T]_\cE [I]_\cB^\cE = B^{-1}AB. \end{equation*}
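Both parts are one-line computations; a sketch assuming Python with SymPy:

from sympy import Matrix

B = Matrix([[1, 4, -7], [-3, -11, 19], [-1, -9, 18]])
A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(B * Matrix([1, 2, 3]))   # [v]_E = (-12, 32, 35)
print(B.inv() * A * B)         # [T]_B as the product B^{-1} A B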

11.

Let \(W\) be the subspace of \(M_2(\R)\) spanned by the set \(S\text{,}\) where
\begin{equation*} S=\left\{\ba{rr}0\amp -1\\-1\amp 1\ea, \ba{rr}1\amp 2\\2\amp 3\ea, \ba{rr}2\amp 1\\1\amp 9\ea, \ba{rr}1\amp -2\\-2\amp 4\ea\right\}. \end{equation*}
(a)
Use the standard basis \(\cB=\{E_{11}, E_{12}, E_{21}, E_{22}\}\) for \(M_2(\R)\) to express each element of \(S\) as a coordinate vector with respect to the basis \(\cB.\)
Solution.
We write the coordinate vectors as columns of the matrix:
\begin{equation*} \ba{rrrr}0\amp 1\amp 2\amp 1\\ -1\amp 2\amp 1\amp -2\\ -1\amp 2\amp 1\amp -2\\ 1\amp 3\amp 9\amp 4\ea. \end{equation*}
(b)
Determine a basis for \(W.\)
Hint.
By staring at the matrix, it is immediate that the rank is at most 3. What are the pivots?
Solution.
We start a row reduction:
\begin{align*} A\amp \mapsto \ba{rrrr}0\amp 1\amp 2\amp 1\\ -1\amp 2\amp 1\amp -2\\ 1\amp 3\amp 9\amp 4\\ 0\amp0\amp0\amp0\ea\mapsto \ba{rrrr} 1\amp 3\amp 9\amp 4\\ 0\amp 1\amp 2\amp 1\\ -1\amp 2\amp 1\amp -2\\ 0\amp0\amp0\amp0\ea\\ \amp\mapsto \ba{rrrr} 1\amp 3\amp 9\amp 4\\ 0\amp 1\amp 2\amp 1\\ 0\amp 5\amp 10\amp 2\\ 0\amp0\amp0\amp0\ea\mapsto \ba{rrrr} 1\amp 3\amp 9\amp 4\\ 0\amp 1\amp 2\amp 1\\ 0\amp 0\amp 0\amp -3\\ 0\amp0\amp0\amp0\ea. \end{align*}
Thus the pivot columns are the first, second, and fourth, so we may take the first, second and fourth elements of \(S\) as a basis for \(W.\)
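The pivot columns can be confirmed with a one-line RREF computation; a sketch assuming Python with SymPy:

from sympy import Matrix

A = Matrix([[0, 1, 2, 1],
            [-1, 2, 1, -2],
            [-1, 2, 1, -2],
            [1, 3, 9, 4]])
R, pivots = A.rref()
print(pivots)   # (0, 1, 3): the first, second, and fourth elements of S give a basis for W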

12.

Let \(A=\ba{rrr}1\amp2\amp3\\1\amp2\amp3\\1\amp2\amp3\ea\text{.}\)
(a)
Compute the rank and nullity of \(A\text{.}\)
Solution.
Too easy! It is obvious that the rank is 1 since all columns are multiples of the first. Rank-nullity tells us that the nullity is \(3-1=2.\)
(b)
Compute \(A\ba{r}1\\1\\1\ea\text{,}\) and use your answer to help conclude (without computing the characteristic polynomial) that \(A\) is diagonalizable.
Solution.
\(A\ba{r}1\\1\\1\ea= \ba{r}6\\6\\6\ea = 6\ba{r}1\\1\\1\ea,\) which means that 6 is an eigenvalue for \(A\text{,}\) and \(\ba{r}1\\1\\1\ea\) is an eigenvector.
The nullity is 2, which means that 0 is an eigenvalue and that the eigenspace corresponding to 0 (the nullspace of \(A\)) has dimension 2, so that there exists a basis of \(\R^3\) consisting of eigenvectors. Recall that by Proposition 1.5.5 the eigenvectors from different eigenspaces are linearly independent.
(c)
Determine the characteristic polynomial of \(A\) from what you have observed.
Solution.
\(\chi_A(x) = x^2 (x-6)\text{.}\) There are two eigenvalues, 0 and 6, and since the matrix is diagonalizable the algebraic multiplicities to which they occur equal their geometric multiplicities (i.e., the dimension of the corresponding eigenspaces), see Theorem 1.5.6.
(d)
Determine a matrix \(P\) so that
\begin{equation*} \ba{rrr}6\amp0\amp0\\0\amp0\amp0\\0\amp0\amp0\ea = P^{-1}AP. \end{equation*}
Solution.
We already know that \(\ba{r}1\\1\\1\ea\) is an eigenvector for the eigenvalue 6, and since 6 occurs as the first entry in the diagonal matrix, that should be the first column of \(P.\)
To find a basis of eigenvectors for the eigenvalue 0, we need to find the nullspace of \(A.\) It is immediate that the reduced row-echelon form of \(A\) is
\begin{equation*} R=\ba{rrr}1\amp2\amp3\\0\amp0\amp0\\0\amp0\amp0\ea\text{,} \end{equation*}
which tells us the solutions are
\begin{equation*} \ba{r}x_1\\x_2\\x_3\ea = \ba{c}-2x_2-3x_3\\x_2\\x_3\ea = x_2\ba{r}-2\\1\\0\ea+ x_3\ba{r}-3\\0\\1\ea. \end{equation*}
We may use those two vectors (or any two linearly independent combinations of them) to fill out the last two columns of \(P.\) So one choice for \(P\) is
\begin{equation*} P = \ba{rrr}1\amp-2\amp-3\\1\amp1\amp0\\1\amp0\amp1\ea. \end{equation*}
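The diagonalization is easy to verify; a sketch assuming Python with SymPy:

from sympy import Matrix, symbols

x = symbols('x')
A = Matrix([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
P = Matrix([[1, -2, -3], [1, 1, 0], [1, 0, 1]])
print(P.inv() * A * P)                    # diag(6, 0, 0)
print(A.charpoly(x).as_expr().factor())   # x**2*(x - 6)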

13.

Let \(\cE_1=\{E_{11},E_{12},E_{21},E_{22}\} = \{ [\begin{smallmatrix} 1\amp0\\0\amp0 \end{smallmatrix}], [\begin{smallmatrix} 0\amp1\\0\amp0 \end{smallmatrix}], [\begin{smallmatrix} 0\amp0\\1\amp0 \end{smallmatrix}],[\begin{smallmatrix} 0\amp0\\0\amp1 \end{smallmatrix}]\}\) be the standard basis for \(M_2(\R)\text{,}\) and \(\cE_2=\{1,x,x^2,x^3\}\) the standard basis for \(\mathcal P_3(\R)\text{.}\) Let \(T:M_2(\R) \to \mathcal P_3(\R)\) be defined by
\begin{equation*} T([ \begin{smallmatrix} a\amp b\\c\amp d \end{smallmatrix}]) = 2a + (b-d)x -(a+c)x^2 + (a+b-c-d)x^3. \end{equation*}
(a)
Find the matrix of \(T\) with respect to the two bases: \([T]_{\cE_1}^{\cE_2}.\)
Solution.
The columns of the matrix \([T]_{\cE_1}^{\cE_2}\) are the coordinate vectors \([T(E_{ij})]_{\cE_2},\) so
\begin{equation*} [T]_{\cE_1}^{\cE_2} = \ba{rrrr}2\amp0\amp0\amp0\\ 0\amp1\amp0\amp-1\\-1\amp0\amp-1\amp0\\1\amp1\amp-1\amp-1\ea. \end{equation*}
(b)
Determine the rank and nullity of \(T.\)
Solution.
It is almost immediate that the first three columns of the matrix are pivot columns (think RREF), so the rank is at least three. Then we notice that the last column is a multiple of the second, which means the rank is at most three. Thus the rank is 3 and the nullity is 1.
(c)
Find a basis of the image of \(T.\)
Solution.
The first three columns of \([T]_{\cE_1}^{\cE_2}\) are a basis for the column space of the matrix, but we recall that they are coordinate vectors and the codomain is \(P_3(\R),\) so a basis for the image is:
\begin{equation*} \{2-x^2+x^3, x+x^3, -x^2 - x^3\}. \end{equation*}
(d)
Find a basis of the kernel of \(T.\)
Solution.
Since
\begin{equation*} T([ \begin{smallmatrix} a\amp b\\c\amp d \end{smallmatrix}]) = 2a + (b-d)x -(a+c)x^2 + (a+b-c-d)x^3, \end{equation*}
we must characterize all matrices which yield the zero polynomial. We quickly deduce we must have
\begin{equation*} a = c = 0,\text{ and } b=d, \end{equation*}
so one can choose \([\begin{smallmatrix} 0\amp1\\0\amp1 \end{smallmatrix}]\) as a basis for the kernel.
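All four parts can be read off from the matrix \([T]_{\cE_1}^{\cE_2}\text{;}\) a sketch assuming Python with SymPy:

from sympy import Matrix

T = Matrix([[2, 0, 0, 0],
            [0, 1, 0, -1],
            [-1, 0, -1, 0],
            [1, 1, -1, -1]])
print(T.rank())        # 3, so the nullity is 1
R, pivots = T.rref()
print(pivots)          # (0, 1, 2): the first three columns give a basis for the image
print(T.nullspace())   # (0, 1, 0, 1): coordinates of the matrix [[0, 1], [0, 1]]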

14.

Let \(V\) be a vector space with basis \(\cB=\{v_1, \dots, v_4\}.\) Define a linear transformation by
\begin{equation*} T(v_1) = v_2,\quad T(v_2)=v_3,\quad T(v_3) = v_4, \quad T(v_4) = av_1+bv_2+cv_3+dv_4. \end{equation*}
(a)
What is the matrix of \(T\) with respect to the basis \(\cB\text{?}\)
Solution.
\([T]_\cB=\ba{rrrr}0\amp0\amp 0\amp a\\ 1\amp 0\amp 0\amp b\\ 0\amp1\amp0\amp c\\0\amp0\amp1\amp d\ea.\)
(b)
Determine necessary and sufficient conditions on \(a,b,c,d\) so that \(T\) is invertible.
Hint.
What is the determinant of \(T\text{,}\) or what happens when you row reduce the matrix?
Solution.
The determinant of the matrix is \(-a\text{,}\) so \(T\) is invertible if and only if \(a\ne 0.\) The values of \(b,c,d\) do not matter.
(c)
What is the rank of \(T\) and how does the answer depend upon the values of \(a,b,c,d\text{?}\)
Solution.
With one elementary row operation, we reduce the original matrix to \(\ba{rrrr} 1\amp 0\amp 0\amp b\\ 0\amp1\amp0\amp c\\ 0\amp0\amp1\amp d\\ 0\amp0\amp 0\amp a\ea\) which is in echelon form. If \(a=0,\) the rank is 3, otherwise it is 4.
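The determinant claim can be checked symbolically; a sketch assuming Python with SymPy:

from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
T = Matrix([[0, 0, 0, a],
            [1, 0, 0, b],
            [0, 1, 0, c],
            [0, 0, 1, d]])
print(T.det())   # -a: T is invertible exactly when a != 0, regardless of b, c, d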

15.

Define a map \(T:M_{m\times n}(\R) \to \R^m\) as follows: For \(A = [a_{ij}] \in M_{m\times n}(\R),\) define \(T(A) = \ba{c}b_1\\b_2\\\vdots\\b_m\ea\) where \(b_k = \sum_{j=1}^n a_{kj},\) that is, \(b_k\) is the sum of all the elements in the \(k\)-th row of \(A.\) Assume that \(T\) is linear.
(a)
Find the rank and nullity of \(T.\)
Hint.
If you find this too abstract, try an example first, say with \(m=2\) and \(n=3.\) And finding the rank is the easier first step.
Solution.
Using the standard basis \(\{E_{ij}\}\) for \(M_{m\times n}(\R)\text{,}\) we see that \(T(E_{k1}) = e_k\) where \(\{e_1, \dots, e_m\}\) is the standard basis for \(\R^m.\) Since a spanning set for \(\R^m\) is in the image of \(T,\) the map must be surjective, which means the rank is \(m.\) By rank-nullity, the nullity is \(nm-m.\)
(b)
For \(m=2,\) and \(n=3\) find a basis for the nullspace of \(T.\)
Hint.
For an element to be in the nullspace, the sum of the entries in each of its rows needs to be zero. Can you make a basis with one row in each matrix all zero?
Solution.
Consider the set
\begin{equation*} \left\{\ba{rrr}1\amp 0\amp -1\\0\amp 0\amp0\ea, \ba{rrr}0\amp 1\amp -1\\0\amp 0\amp0\ea, \ba{rrr}0\amp 0\amp 0\\1\amp 0\amp -1\ea, \ba{rrr}0\amp0\amp 0\\0\amp 1\amp -1\ea\right\} \end{equation*}
Notice that the 1 in each matrix occurs in a different location. It is now easy to show that any linear combination of these matrices which equals the zero matrix must have all coefficients equal to zero, so the set is linearly independent. Since it has the correct size (by part (a) the nullity is \(nm-m = 6-2 = 4\)), it must be a basis for the nullspace.
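The defining map and the proposed basis are easy to check computationally; a sketch assuming Python with SymPy:

from sympy import Matrix, ones

def T(A):
    # Row-sum map: right-multiplying by a column of ones sums each row.
    return A * ones(A.cols, 1)

basis = [Matrix([[1, 0, -1], [0, 0, 0]]),
         Matrix([[0, 1, -1], [0, 0, 0]]),
         Matrix([[0, 0, 0], [1, 0, -1]]),
         Matrix([[0, 0, 0], [0, 1, -1]])]
print(all(T(M).is_zero_matrix for M in basis))   # True: all four lie in the nullspace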

16.

This exercise is about how to deal with determining independent and spanning sets in vector spaces other than \(F^n.\) Let \(V=P_3(\R),\) the vector space of polynomials of degree at most 3 with real coefficients. Suppose that some process has handed you the set of polynomials
\begin{equation*} S=\{p_1=1+2x+3x^2+4x^3, p_2=5+6x+7x^2+8x^3, p_3=9+10x+11x^2+12x^3, p_4=13+14x+15x^2+16x^3\}. \end{equation*}
We want to know whether \(S\) is a basis for \(V,\) or, barring that, to extract a maximal linearly independent subset.
(a)
How can we translate this problem about polynomials into one about vectors in \(\R^n?\)
Solution.
We know that Theorem 1.2.5 tells us that \(P_3(\R)\) is isomorphic to \(\R^4,\) and all we need to do is map a basis to a basis, but we would like a little more information at our disposal.
Let \(\cB=\{1,x,x^2,x^3\}\) be the standard basis for \(V=P_3(\R).\) Then the map
\begin{equation*} T(v)= [v]_\cB \end{equation*}
which takes a vector \(v\) to its coordinate vector is such an isomorphism. What is important is that linear dependence relations among the vectors in \(S\) are automatically reflected in linear dependence relations among the coordinate vectors.
(b)
Determine a maximal linearly independent subset of \(S.\)
Solution.
If we record the coordinate vectors for the polynomials in \(S\) as columns of a matrix, we produce a matrix \(A\) and its RREF \(R\text{:}\)
\begin{equation*} A=\ba{rrrr}1 \amp 5 \amp 9 \amp 13 \\ 2 \amp 6 \amp 10 \amp 14 \\ 3 \amp 7 \amp 11 \amp 15 \\ 4 \amp 8 \amp 12 \amp 16\ea \mapsto R=\ba{rrrr} 1 \amp 0 \amp -1 \amp -2 \\ 0 \amp 1 \amp 2 \amp 3 \\ 0 \amp 0 \amp 0 \amp 0 \\ 0 \amp 0 \amp 0 \amp 0 \ea \end{equation*}
So we see that the first two columns are pivot columns which means \(S_0=\{p_1,p_2\}\) is a maximal linearly independent set.
We also recall that from the RREF, we can read off the linear dependencies with the other two vectors:
\begin{equation*} p_3=-p_1+2p_2 \text{ and } p_4 = -2p_1 + 3p_2. \end{equation*}
(c)
Extend the linearly independent set from the previous part to a basis for \(P_3(\R).\)
Solution.
Since we are free to add whatever vectors we want to the given set, we append coordinate vectors to the ones for \(p_1\) and \(p_2\) to see whether we can extend to a basis. We know that \(\{p_1, p_2, 1, x, x^2, x^3\}\) is a linearly dependent spanning set. We convert to coordinates and row reduce to find the pivots. So we build a matrix \(B\) and compute its RREF:
\begin{equation*} \ba{rrrrrr} 1 \amp 5 \amp 1 \amp 0 \amp 0 \amp 0 \\ 2 \amp 6 \amp 0 \amp 1 \amp 0 \amp 0 \\ 3 \amp 7 \amp 0 \amp 0 \amp 1 \amp 0 \\ 4 \amp 8 \amp 0 \amp 0 \amp 0 \amp 1 \ea \mapsto \ba{rrrrrr} 1 \amp 0 \amp 0 \amp 0 \amp -2 \amp \frac{7}{4} \\ 0 \amp 1 \amp 0 \amp 0 \amp 1 \amp -\frac{3}{4} \\ 0 \amp 0 \amp 1 \amp 0 \amp -3 \amp 2 \\ 0 \amp 0 \amp 0 \amp 1 \amp -2 \amp 1 \ea \end{equation*}
We see the first 4 columns are pivots, so we may take \(\{p_1, p_2, 1, x\}\) as one such basis.
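Both RREF computations in this exercise can be reproduced directly; a sketch assuming Python with SymPy:

from sympy import Matrix

# Columns: coordinates of p1, p2, then of 1, x, x^2, x^3.
B = Matrix([[1, 5, 1, 0, 0, 0],
            [2, 6, 0, 1, 0, 0],
            [3, 7, 0, 0, 1, 0],
            [4, 8, 0, 0, 0, 1]])
R, pivots = B.rref()
print(pivots)   # (0, 1, 2, 3): {p1, p2, 1, x} is a basis for P_3(R)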

17.

Let \(A\in M_5(\R)\) be the block matrix (with off diagonal blocks all zero) given by:
\begin{equation*} A = \ba{rrrrr} -1\amp 0\amp \\ \alpha\amp 2\\\amp \amp 3\amp 0\amp 0\\ \amp \amp \beta\amp 3\amp 0\\ \amp \amp 0\amp \gamma\amp 3 \ea. \end{equation*}
Determine all values of \(\alpha, \beta, \gamma\) for which \(A\) is diagonalizable.
Solution.
Since the matrix is lower triangular, it is easy to compute the characteristic polynomial:
\begin{equation*} \chi_A = (x+1)(x-2)(x-3)^3\text{.} \end{equation*}
The eigenspaces for \(\lambda = -1, 2\) each have dimension 1 (the required minimum), equal to the algebraic multiplicity, so the only question is what happens with the eigenvalue \(\lambda = 3\text{.}\) Consider the matrix \(A-3I = \ba{rrrrr} -4\amp 0\amp \\ \alpha\amp -1\\\amp \amp 0\amp 0\amp 0\\ \amp \amp \beta\amp 0\amp 0\\ \amp \amp 0\amp \gamma\amp 0 \ea.\) For the nullspace of \(A-3I\) to have dimension 3, the rank must be 2. Clearly the first two rows are linearly independent (regardless of \(\alpha\)), while if either \(\beta\) or \(\gamma\) is nonzero, the rank increases beyond two. So the answer is that \(\alpha\) can be anything, but \(\beta\) and \(\gamma\) must both be zero.
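The conclusion is easy to test on sample values of the parameters; a sketch assuming Python with SymPy:

from sympy import Matrix

def A(alpha, beta, gamma):
    return Matrix([[-1, 0, 0, 0, 0],
                   [alpha, 2, 0, 0, 0],
                   [0, 0, 3, 0, 0],
                   [0, 0, beta, 3, 0],
                   [0, 0, 0, gamma, 3]])

print(A(5, 0, 0).is_diagonalizable())   # True: alpha is unconstrained
print(A(0, 1, 0).is_diagonalizable())   # False: beta must be zero
print(A(0, 0, 1).is_diagonalizable())   # False: gamma must be zero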

18.

Let \(A=\ba{rrr}3\amp 0\amp 0\\6\amp -1\amp 6\\1\amp 0\amp 2\ea \in M_3(\R).\)
(a)
Find the characteristic polynomial of \(A.\)
Solution.
\(\chi_A = \det(xI-A) = \det\left(\ba{ccc}x-3\amp0\amp0\\-6\amp x+1\amp -6\\-1\amp 0\amp x-2\ea\right)\text{.}\) Expanding along the first row shows that \(\chi_A = (x-3)(x-2)(x+1).\)
(b)
Show that \(A\) is invertible.
Solution.
Many answers are possible: \(\det A = -6 \ne 0\text{,}\) or 0 is not an eigenvalue, or one could row reduce the matrix to the identity. All show \(A\) is invertible.
(c)
Justify that the columns of \(A\) form a basis for \(\R^3.\)
Solution.
Since \(A\) is invertible, the rank of \(A\) is 3, which is the dimension of the column space. So the column space spans all of \(\R^3,\) which means the columns must be linearly independent either by Theorem 1.1.3 or directly since the nullspace is trivial. Thus the columns form a basis.
(d)
Let \(\cB=\{v_1, v_2, v_3\}\) be the columns of \(A,\) and let \(\cE\) be the standard basis for \(\R^3.\) Suppose that \(T:\R^3 \to \R^3\) is a linear map for which \(A= [T]_\cE.\) Determine \([T]_\cB.\)
Solution.
We know that \([T]_\cB = Q^{-1} [T]_\cE Q\text{,}\) where \(Q=[I]_\cB^\cE\) is a change of basis matrix. But we see that \(Q=[I]_\cB^\cE = A\) by definition and since \([T]_\cE = A\) as well, we check that \([T]_\cB = Q^{-1} [T]_\cE Q = A^{-1} A A = A.\)
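All four parts can be confirmed in a few lines; a sketch assuming Python with SymPy:

from sympy import Matrix, symbols

x = symbols('x')
A = Matrix([[3, 0, 0], [6, -1, 6], [1, 0, 2]])
print(A.charpoly(x).as_expr().factor())   # (x - 3)*(x - 2)*(x + 1)
print(A.det())                            # -6, nonzero, so A is invertible
print(A.inv() * A * A == A)               # True: [T]_B = A^{-1} A A = A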