
Section 1.8 Exercises (with solutions)


1.

Let $H$ be the subset of $\mathbb{R}^4$ defined by
\[
H=\left\{\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} : x_1+x_2+x_3+x_4=0\right\}.
\]
Either show that $H$ is a subspace of $\mathbb{R}^4$, or demonstrate how it fails to have a necessary property.
Solution.
The easiest way to show that $H$ is a subspace is to note that it is the kernel of a linear map. Let $A$ be the $1\times 4$ matrix $A=\begin{bmatrix}1&1&1&1\end{bmatrix}$. Then
\[
H=\{x\in\mathbb{R}^4 : Ax=0\}
\]
is the nullspace of $A$, which is always a subspace.
Alternatively, of course, you could check that $0$ is in the set and that it is closed under addition and scalar multiplication.
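For readers who like to experiment, here is a minimal numerical sketch (assuming NumPy is available) of the closure properties; it is only a sanity check, not a proof.

```python
import numpy as np

# H is the nullspace of the 1x4 matrix A = [1 1 1 1].
A = np.array([[1.0, 1.0, 1.0, 1.0]])
rng = np.random.default_rng(0)

def random_in_H():
    # choose three coordinates freely, force the four to sum to zero
    v = rng.normal(size=4)
    v[3] = -(v[0] + v[1] + v[2])
    return v

u, v = random_in_H(), random_in_H()
print(A @ (u + v))      # ~[0.]  closed under addition
print(A @ (2.5 * u))    # ~[0.]  closed under scalar multiplication
print(A @ np.zeros(4))  # [0.]   contains the zero vector
```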

2.

Suppose that $T:\mathbb{R}^3\to\mathbb{R}^3$ is a linear map satisfying
\[
T\!\left(\begin{bmatrix}3\\0\\0\end{bmatrix}\right)=\begin{bmatrix}6\\3\\6\end{bmatrix},\quad
T\!\left(\begin{bmatrix}1\\1\\0\end{bmatrix}\right)=\begin{bmatrix}2\\0\\1\end{bmatrix},\quad\text{and}\quad
T\!\left(\begin{bmatrix}0\\0\\2\end{bmatrix}\right)=\begin{bmatrix}4\\6\\2\end{bmatrix}.
\]
(a)
If the standard basis for $\mathbb{R}^3$ is $\mathcal{E}=\{e_1,e_2,e_3\}$, determine
$T(e_1)$, $T(e_2)$, and $T(e_3)$.
Solution.
Using linearity, we are given $T(3e_1)=3T(e_1)=\begin{bmatrix}6\\3\\6\end{bmatrix}$, so $T(e_1)=\begin{bmatrix}2\\1\\2\end{bmatrix}$.
We are given $T(e_1+e_2)=T(e_1)+T(e_2)=\begin{bmatrix}2\\0\\1\end{bmatrix}$, so
\[
T(e_2)=T(e_1+e_2)-T(e_1)=\begin{bmatrix}2\\0\\1\end{bmatrix}-\begin{bmatrix}2\\1\\2\end{bmatrix}=\begin{bmatrix}0\\-1\\-1\end{bmatrix}.
\]
Finally, $T(2e_3)=\begin{bmatrix}4\\6\\2\end{bmatrix}$, so $T(e_3)=\begin{bmatrix}2\\3\\1\end{bmatrix}$.
(b)
Find $T\!\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right)$.
Solution.
We compute
\[
T\!\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right)=T(e_1)+T(e_2)+T(e_3)
=\begin{bmatrix}2\\1\\2\end{bmatrix}+\begin{bmatrix}0\\-1\\-1\end{bmatrix}+\begin{bmatrix}2\\3\\1\end{bmatrix}
=\begin{bmatrix}4\\3\\2\end{bmatrix}.
\]
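As a quick check (a sketch assuming NumPy), assemble the matrix whose columns are the values $T(e_1),T(e_2),T(e_3)$ found above and confirm that it reproduces the three given values as well as the answer to part (b).

```python
import numpy as np

# Columns are T(e1), T(e2), T(e3) as computed above.
M = np.array([[2,  0, 2],
              [1, -1, 3],
              [2, -1, 1]])

print(M @ np.array([3, 0, 0]))   # [6 3 6]
print(M @ np.array([1, 1, 0]))   # [2 0 1]
print(M @ np.array([0, 0, 2]))   # [4 6 2]
print(M @ np.array([1, 1, 1]))   # [4 3 2]
```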

3.

Consider the upper triangular matrix
\[
A=\begin{bmatrix}1&x&z\\0&1&y\\0&0&1\end{bmatrix},
\]
with $x,y,z\in\mathbb{R}$.
(a)
Give as many reasons as you can that show the matrix $A$ is invertible.
Solution.
We see that $A$ is already in echelon (not RREF) form, which tells us there is a pivot in each column. Since there are only three variables, the system $Ax=0$ has only the trivial solution, so the linear map $x\mapsto Ax$ is injective. Three pivots also means the column space is spanned by three independent vectors, so it is all of $\mathbb{R}^3$. So the linear map is bijective, hence invertible.
One could also say that since the RREF of A is the identity matrix, it is invertible.
If you know about determinants, you could say the determinant equals 1, hence is nonzero, which means A is invertible.
(b)
Find the inverse of the matrix A.
Solution.
We row-reduce
\[
\left[\begin{array}{ccc|ccc}1&x&z&1&0&0\\0&1&y&0&1&0\\0&0&1&0&0&1\end{array}\right]
\longrightarrow
\left[\begin{array}{ccc|ccc}1&x&0&1&0&-z\\0&1&0&0&1&-y\\0&0&1&0&0&1\end{array}\right]
\longrightarrow
\left[\begin{array}{ccc|ccc}1&0&0&1&-x&xy-z\\0&1&0&0&1&-y\\0&0&1&0&0&1\end{array}\right].
\]
So
\[
A^{-1}=\begin{bmatrix}1&-x&xy-z\\0&1&-y\\0&0&1\end{bmatrix}.
\]
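A symbolic sanity check of the inverse (a short sketch, assuming SymPy is installed):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, x, z],
               [0, 1, y],
               [0, 0, 1]])

print(A.inv())   # Matrix([[1, -x, x*y - z], [0, 1, -y], [0, 0, 1]])
print(A.det())   # 1
```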

4.

Consider the linear transformation $T:\mathbb{R}^5\to\mathbb{R}^4$ given by $T(x)=Ax$ where $A$ and its reduced row-echelon form $R$ are given by:
\[
A=\begin{bmatrix}1&1&2&6&3\\2&1&0&7&10\\2&3&7&15&17\\2&2&2&8&5\end{bmatrix}
\quad\text{and}\quad
R=\begin{bmatrix}1&0&0&5&0\\0&1&0&-3&0\\0&0&1&2&0\\0&0&0&0&1\end{bmatrix}.
\]
(a)
Determine kerT, the kernel of T.
Solution.
The kernel of $T$ is the nullspace of $A$, which we know is the same as the nullspace of $R$, which we can read off:
\[
\begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix}
=\begin{bmatrix}-5x_4\\3x_4\\-2x_4\\x_4\\0\end{bmatrix}
=x_4\begin{bmatrix}-5\\3\\-2\\1\\0\end{bmatrix}
\]
(b)
Determine ImT, the image of T.
Solution.
Depending upon what you already know, you could observe that the RREF $R$ has a pivot in each row, which means the columns of $A$ span all of $\mathbb{R}^4$.
Or you may know that looking at $R$ tells us there are four pivot columns in $A$, meaning the column space is spanned by 4 linearly independent vectors, hence the image is all of $\mathbb{R}^4$.
Or, if you have already learned the rank-nullity theorem, then from the previous part we would know the nullity is one, and so rank-nullity says the rank is $5-1=4$, so the image is a dimension 4 subspace of $\mathbb{R}^4$, which is all of $\mathbb{R}^4$.
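Both parts can be double-checked symbolically; here is a short sketch assuming SymPy is installed.

```python
import sympy as sp

A = sp.Matrix([[1, 1, 2, 6, 3],
               [2, 1, 0, 7, 10],
               [2, 3, 7, 15, 17],
               [2, 2, 2, 8, 5]])

R, pivots = A.rref()
print(R, pivots)       # pivots in columns 0, 1, 2, 4 (x4 is the free variable)
print(A.nullspace())   # a single vector, proportional to (-5, 3, -2, 1, 0)
print(A.rank())        # 4, so the image is all of R^4
```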

5.

Let $K$ be the set of solutions in $\mathbb{R}^5$ to the homogeneous linear system
\begin{align*}
x_1+x_2+x_3+x_4+x_5&=0\\
x_5&=0.
\end{align*}
(a)
Find a basis B0 for K.
Solution.
The coefficient matrix for the system is
\[
A=\begin{bmatrix}1&1&1&1&1\\0&0&0&0&1\end{bmatrix},
\]
and subtracting the second row from the first puts it in reduced row-echelon form:
\[
\begin{bmatrix}1&1&1&1&0\\0&0&0&0&1\end{bmatrix}.
\]
We see there are two pivots, hence 3 free variables, meaning $\dim K=3$. By inspection (or working out the details of finding all solutions), one finds a basis can be taken to be
\[
\mathcal{B}_0=\left\{v_1=\begin{bmatrix}1\\-1\\0\\0\\0\end{bmatrix},\ v_2=\begin{bmatrix}1\\0\\-1\\0\\0\end{bmatrix},\ v_3=\begin{bmatrix}1\\0\\0\\-1\\0\end{bmatrix}\right\}.
\]
(b)
Extend the basis B0 from the previous part to a basis B for all of R5.
Solution.
To extend a linearly independent set, one must add something not in the original span (see Theorem 1.1.4). There are many correct answers possible, but the vectors
\[
v_4=\begin{bmatrix}1\\1\\1\\1\\0\end{bmatrix} \quad\text{and}\quad v_5=\begin{bmatrix}0\\0\\0\\0\\1\end{bmatrix}
\]
are clearly not in $K$ since $v_4$ does not satisfy the first defining equation, and $v_5$ does not satisfy the second. So thinking algorithmically, $\mathcal{B}_0\cup\{v_4\}$ is linearly independent, and $v_5$ is certainly not in the span of those four vectors since their last coordinates are all zero. Thus we may take (as one possible solution)
\[
\mathcal{B}=\mathcal{B}_0\cup\{v_4,v_5\}.
\]
(c)
Define a linear transformation $T:\mathbb{R}^5\to\mathbb{R}^5$ with kernel $K$ and image equal to the set of all vectors with $x_3=x_4=x_5=0$.
Solution.
By Theorem 1.1.6, a linear map is uniquely defined by its action on a basis. It should be clear that the desired image is spanned by the standard basis vectors $e_1$ and $e_2$. So with the given basis $\mathcal{B}=\{v_1,\dots,v_5\}$, we must have
\[
T(v_i)=0, \text{ for } i=1,2,3,
\]
and $T(v_4),T(v_5)$ linearly independent vectors in the image, say
\[
T(v_4)=e_1 \quad\text{and}\quad T(v_5)=e_2.
\]
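To make the construction concrete, one can compute a standard matrix for such a $T$: if the columns of $M$ are $v_1,\dots,v_5$ and the columns of $C$ are the prescribed images $0,0,0,e_1,e_2$, then $[T]_{\mathcal{E}}=CM^{-1}$. A sketch assuming NumPy:

```python
import numpy as np

# Columns of M are the basis vectors v1, ..., v5 chosen above.
M = np.array([[1,  1,  1, 1, 0],
              [-1, 0,  0, 1, 0],
              [0, -1,  0, 1, 0],
              [0,  0, -1, 1, 0],
              [0,  0,  0, 0, 1]], dtype=float)

# Columns of C are the prescribed images: T(v1)=T(v2)=T(v3)=0, T(v4)=e1, T(v5)=e2.
C = np.zeros((5, 5))
C[0, 3] = 1
C[1, 4] = 1

T_std = C @ np.linalg.inv(M)            # standard matrix of T
print(np.round(T_std, 3))
print(np.round(T_std @ M[:, :3], 3))    # the three kernel vectors map to 0
```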

6.

Let $M_{2\times 2}$ be the vector space of $2\times 2$ matrices with real entries, and fix a matrix $A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\in M_{2\times 2}$. Consider the linear transformation $T:M_{2\times 2}\to M_{2\times 2}$ defined by $T(X)=AX$, which (left) multiplies an arbitrary $2\times 2$ matrix $X$ by the fixed matrix $A$. Let
\[
\mathcal{E}=\left\{e_1=\begin{bmatrix}1&0\\0&0\end{bmatrix},\ e_2=\begin{bmatrix}0&1\\0&0\end{bmatrix},\ e_3=\begin{bmatrix}0&0\\1&0\end{bmatrix},\ e_4=\begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}
\]
be a basis for $M_{2\times 2}$.
(a)
Find the matrix of $T$ with respect to the basis $\mathcal{E}$, that is, $[T]_{\mathcal{E}}$.
Solution.
\begin{align*}
T(e_1)&=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix}=\begin{bmatrix}a&0\\c&0\end{bmatrix}=ae_1+ce_3\\
T(e_2)&=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}0&a\\0&c\end{bmatrix}=ae_2+ce_4\\
T(e_3)&=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0&0\\1&0\end{bmatrix}=\begin{bmatrix}b&0\\d&0\end{bmatrix}=be_1+de_3\\
T(e_4)&=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0&0\\0&1\end{bmatrix}=\begin{bmatrix}0&b\\0&d\end{bmatrix}=be_2+de_4
\end{align*}
We now simply record the data as coordinate vectors:
\[
[T]_{\mathcal{E}}=\begin{bmatrix}a&0&b&0\\0&a&0&b\\c&0&d&0\\0&c&0&d\end{bmatrix}
\]
(b)
Now let $\mathcal{B}$ be the basis $\mathcal{B}=\{e_1,e_3,e_2,e_4\}$, that is, the same elements as $\mathcal{E}$, but with the second and third elements interchanged. Write down the appropriate change of basis matrix, $[I]_{\mathcal{B}}^{\mathcal{E}}$, and use it to compute the matrix of $T$ with respect to the basis $\mathcal{B}$, that is, $[T]_{\mathcal{B}}$.
Solution.
The change of basis matrices are
\[
[I]_{\mathcal{B}}^{\mathcal{E}}=\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}=[I]_{\mathcal{E}}^{\mathcal{B}},
\]
so
\[
[T]_{\mathcal{B}}=[I]_{\mathcal{E}}^{\mathcal{B}}\,[T]_{\mathcal{E}}\,[I]_{\mathcal{B}}^{\mathcal{E}}
=\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}
\begin{bmatrix}a&0&b&0\\0&a&0&b\\c&0&d&0\\0&c&0&d\end{bmatrix}
\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&1\end{bmatrix}
=\begin{bmatrix}a&b&0&0\\c&d&0&0\\0&0&a&b\\0&0&c&d\end{bmatrix}.
\]
Of course it was possible to write down $[T]_{\mathcal{B}}$ simply from the information in part (a).
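Here is a symbolic check of that conjugation with generic entries (a sketch assuming SymPy); the permutation matrix is its own inverse, so it serves as both change of basis matrices.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

T_E = sp.Matrix([[a, 0, b, 0],
                 [0, a, 0, b],
                 [c, 0, d, 0],
                 [0, c, 0, d]])

# Permutation matrix swapping the 2nd and 3rd basis elements.
P = sp.Matrix([[1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1]])

print(P * T_E * P)   # [[a, b, 0, 0], [c, d, 0, 0], [0, 0, a, b], [0, 0, c, d]]
```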

7.

Write down an explicit linear transformation $T:\mathbb{R}^2\to\mathbb{R}^3$ that has as its image the plane $x-4y+5z=0$. What is the kernel of $T$?
Hint.
Any linear transformation $T:\mathbb{R}^n\to\mathbb{R}^m$ has the form $T(x)=Ax$ where $A$ is the matrix for $T$ with respect to the standard bases. How is the image of $T$ related to the matrix $A$?
Solution.
We know that $T$ can be given by $T(x)=Ax$ where $A$ is the $3\times 2$ matrix whose columns are $T(e_1)$ and $T(e_2)$. They must span the given plane, so for example,
\[
A=\begin{bmatrix}4&-5\\1&0\\0&1\end{bmatrix}
\]
will do.
By rank-nullity, the kernel must be trivial: the two columns are independent, so the rank is 2 and the nullity is $2-2=0$.

8.

Let $A\in M_n(\mathbb{R})$ be invertible. Show that the columns of $A$ form a basis for $\mathbb{R}^n$.
Solution.
Since $A$ is invertible, we know that we can find its inverse by row reducing the augmented matrix
\[
[\,A \mid I_n\,] \longrightarrow [\,I_n \mid A^{-1}\,].
\]
In particular, this says that the RREF of $A$ is $I_n$.
One way to finish is that the information above says that $Ax=0$ has only the trivial solution, which means that the $n$ columns of $A$ are linearly independent. Since there are $n=\dim\mathbb{R}^n$ of them, by Theorem 1.1.3, they must be a basis.
Another approach is that the linear map $T:\mathbb{R}^n\to\mathbb{R}^n$ given by $T(x)=Ax$ is an isomorphism, with the inverse map given by $x\mapsto A^{-1}x$. In particular, $T$ is surjective and its image is the column space of $A$. That means that the $n$ columns of $A$ span all of $\mathbb{R}^n$, and hence must be a basis, again by Theorem 1.1.3.

9.

Consider the vector space $M_2(\mathbb{R})$ of all $2\times 2$ matrices with real entries. Let's consider a number of subspaces and their bases. Let
\[
\mathcal{E}=\{E_{11},E_{12},E_{21},E_{22}\}=\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin{bmatrix}0&1\\0&0\end{bmatrix},\begin{bmatrix}0&0\\1&0\end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}
\]
be the standard basis for $M_2(\mathbb{R})$.
(a)
Define a map $T:M_2(\mathbb{R})\to\mathbb{R}$ by
\[
T\!\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=a+d.
\]
The quantity $a+d$ (the sum of the diagonal entries) is called the trace of the matrix. You may assume that $T$ is a linear map. Find a basis for its kernel, $K$.
Solution.
It is easy to see that $T$ is a surjective map, so by the rank-nullity theorem, $\dim K=3$. Extracting from the standard basis, we see that $E_{12},E_{21}\in K$, so they are part of a basis for $K$. We just need to add one more matrix which is not in the span of the two chosen basis vectors.
Certainly, the matrix must have the form $\begin{bmatrix}a&b\\c&-a\end{bmatrix}$, and we need $a\ne 0$, otherwise our matrix is in the span of the other two vectors. But once we realize that, we may as well assume that $b=c=0$, so that $\begin{bmatrix}1&0\\0&-1\end{bmatrix}$ is a nice choice, and since it is not in the span of the other two, adding it still gives us an independent set.
(b)
Now let's consider the subspace $S$ consisting of all symmetric matrices, those for which $A^T=A$. It should be clear this is a proper subspace, but what is its dimension? Actually finding a basis helps answer that question.
Hint.
If you don't like the "brute force" tack of the solution, you could take the high road and consider the space of skew-symmetric matrices, those for which $A^T=-A$. It is pretty easy to determine its dimension, and then you can use the fact that every matrix can be written as the sum of a symmetric and a skew-symmetric matrix to tell you the dimension of $S$:
\[
A=\tfrac12(A+A^T)+\tfrac12(A-A^T).
\]
Solution.
Once again, it is clear that some elements of the standard basis are in $S$, like $E_{11},E_{22}$. Since it is a proper subspace, its dimension is either 2 or 3, and a few moments' thought convinces you that
\[
\begin{bmatrix}0&1\\1&0\end{bmatrix}=E_{12}+E_{21}
\]
is symmetric and not in the span of the other two, so together they form an independent set in $S$. So $\dim S=3$, and this set must be a basis for $S$.
(c)
Now $K\cap S$ is also a subspace of $M_2(\mathbb{R})$. Can we find its dimension?
Solution.
Once again, it is useful to know the dimension of the space. Certainly it is at most 3, but since not every symmetric matrix has zero trace, $K\cap S$ is a proper subspace of $S$, so its dimension is at most two. Staring at the bases for each of $S$ and $K$ separately, we see that both
\[
\begin{bmatrix}0&1\\1&0\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}1&0\\0&-1\end{bmatrix}
\]
are in the intersection and are clearly linearly independent, so they must be a basis.
(d)
Extend the basis you found for $K\cap S$ to bases for $S$ and for $K$.
Solution.
Since $\dim(K\cap S)=2$, we need only find one matrix not in its span to give a basis for either $K$ or $S$ (each of which has dimension 3). For $K$, we could choose $E_{12}$, and for $S$ we could choose $E_{11}$. Knowing the dimension is clearly a powerful tool since it tells you when you are done.

10.

The matrix $B=\begin{bmatrix}1&4&7\\3&11&19\\1&9&18\end{bmatrix}$ is invertible with inverse $B^{-1}=\begin{bmatrix}-27&9&1\\35&-11&-2\\-16&5&1\end{bmatrix}$. Since the columns of $B$ are linearly independent, they form a basis for $\mathbb{R}^3$:
\[
\mathcal{B}=\left\{\begin{bmatrix}1\\3\\1\end{bmatrix},\begin{bmatrix}4\\11\\9\end{bmatrix},\begin{bmatrix}7\\19\\18\end{bmatrix}\right\}.
\]
Let $\mathcal{E}$ be the standard basis for $\mathbb{R}^3$.
(a)
Suppose that a vector $v\in\mathbb{R}^3$ has coordinate vector $[v]_{\mathcal{B}}=\begin{bmatrix}1\\2\\-3\end{bmatrix}$.
Find $[v]_{\mathcal{E}}$.
Solution.
The matrix $B$ is the change of basis matrix $[I]_{\mathcal{B}}^{\mathcal{E}}$, so
\[
[v]_{\mathcal{E}}=[I]_{\mathcal{B}}^{\mathcal{E}}[v]_{\mathcal{B}}
=\begin{bmatrix}1&4&7\\3&11&19\\1&9&18\end{bmatrix}\begin{bmatrix}1\\2\\-3\end{bmatrix}
=\begin{bmatrix}-12\\-32\\-35\end{bmatrix}
\]
(b)
Suppose that $T:\mathbb{R}^3\to\mathbb{R}^3$ is the linear map given by $T(x)=Ax$ where
\[
A=[T]_{\mathcal{E}}=\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}.
\]
Write down an appropriate product of matrices which equals $[T]_{\mathcal{B}}$.
Solution.
\[
[T]_{\mathcal{B}}=[I]_{\mathcal{E}}^{\mathcal{B}}\,[T]_{\mathcal{E}}\,[I]_{\mathcal{B}}^{\mathcal{E}}=B^{-1}AB.
\]
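A quick numerical check of both parts (a sketch assuming NumPy):

```python
import numpy as np

B = np.array([[1, 4, 7],
              [3, 11, 19],
              [1, 9, 18]], dtype=float)
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

print(B @ np.array([1, 2, -3]))      # [-12. -32. -35.]  =  [v]_E
print(np.round(np.linalg.inv(B)))    # matches the stated B^{-1}
print(np.round(np.linalg.inv(B) @ A @ B, 6))   # [T]_B = B^{-1} A B
```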

11.

Let $W$ be the subspace of $M_2(\mathbb{R})$ spanned by the set $S$, where
\[
S=\left\{\begin{bmatrix}0&1\\1&1\end{bmatrix},\ \begin{bmatrix}1&-2\\-2&3\end{bmatrix},\ \begin{bmatrix}2&-1\\-1&9\end{bmatrix},\ \begin{bmatrix}1&2\\2&4\end{bmatrix}\right\}.
\]
(a)
Use the standard basis B={E11,E12,E21,E22} for M2(R) to express each element of S as a coordinate vector with respect to the basis B.
Solution.
We write the coordinate vectors as columns of a matrix:
\[
A=\begin{bmatrix}0&1&2&1\\1&-2&-1&2\\1&-2&-1&2\\1&3&9&4\end{bmatrix}.
\]
(b)
Determine a basis for W.
Hint.
By staring at the matrix, it is immediate that the rank is at most 3 (two of its rows are equal). What are the pivots?
Solution.
We start a row reduction:
\[
A\longrightarrow
\begin{bmatrix}0&1&2&1\\1&-2&-1&2\\1&3&9&4\\0&0&0&0\end{bmatrix}
\longrightarrow
\begin{bmatrix}1&3&9&4\\0&1&2&1\\1&-2&-1&2\\0&0&0&0\end{bmatrix}
\longrightarrow
\begin{bmatrix}1&3&9&4\\0&1&2&1\\0&-5&-10&-2\\0&0&0&0\end{bmatrix}
\longrightarrow
\begin{bmatrix}1&3&9&4\\0&1&2&1\\0&0&0&3\\0&0&0&0\end{bmatrix}.
\]
Thus the pivot columns are the first, second, and fourth, so we may take the first, second, and fourth elements of $S$ as a basis for $W$.

12.

Let $A=\begin{bmatrix}1&2&3\\1&2&3\\1&2&3\end{bmatrix}$.
(a)
Compute the rank and nullity of A.
Solution.
Too easy! It is obvious that the rank is 1 since all columns are multiples of the first. Rank-nullity tells us that the nullity is $3-1=2$.
(b)
Compute $A\begin{bmatrix}1\\1\\1\end{bmatrix}$, and use your answer to help conclude (without computing the characteristic polynomial) that $A$ is diagonalizable.
Solution.
We have $A\begin{bmatrix}1\\1\\1\end{bmatrix}=\begin{bmatrix}6\\6\\6\end{bmatrix}=6\begin{bmatrix}1\\1\\1\end{bmatrix}$, which means that 6 is an eigenvalue for $A$, and $\begin{bmatrix}1\\1\\1\end{bmatrix}$ is an eigenvector.
The nullity is 2, which means that 0 is an eigenvalue and that the eigenspace corresponding to 0 (the nullspace of $A$) has dimension 2, so that there exists a basis of $\mathbb{R}^3$ consisting of eigenvectors. Recall that by Proposition 1.5.5 the eigenvectors from different eigenspaces are linearly independent.
(c)
Determine the characteristic polynomial of A from what you have observed.
Solution.
$\chi_A(x)=x^2(x-6)$. There are two eigenvalues, 0 and 6, and since the matrix is diagonalizable the algebraic multiplicities to which they occur equal their geometric multiplicities (i.e., the dimensions of the corresponding eigenspaces); see Theorem 1.5.6.
(d)
Determine a matrix P so that
\[
\begin{bmatrix}6&0&0\\0&0&0\\0&0&0\end{bmatrix}=P^{-1}AP.
\]
Solution.
We already know that $\begin{bmatrix}1\\1\\1\end{bmatrix}$ is an eigenvector for the eigenvalue 6, and since 6 occurs as the first entry in the diagonal matrix, that should be the first column of $P$.
To find a basis of eigenvectors for the eigenvalue 0, we need to find the nullspace of $A$. It is immediate to see that the reduced row-echelon form of $A$ is
\[
R=\begin{bmatrix}1&2&3\\0&0&0\\0&0&0\end{bmatrix},
\]
which tells us the solutions are
\[
\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}=\begin{bmatrix}-2x_2-3x_3\\x_2\\x_3\end{bmatrix}
=x_2\begin{bmatrix}-2\\1\\0\end{bmatrix}+x_3\begin{bmatrix}-3\\0\\1\end{bmatrix}.
\]
We may choose either of those vectors (or some linear combinations of them) to fill out the last columns of $P$. So one choice for $P$ is
\[
P=\begin{bmatrix}1&-2&-3\\1&1&0\\1&0&1\end{bmatrix}.
\]
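A numerical check that this $P$ diagonalizes $A$ (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]], dtype=float)
P = np.array([[1, -2, -3],
              [1,  1,  0],
              [1,  0,  1]], dtype=float)

print(np.round(np.linalg.inv(P) @ A @ P, 10))   # diag(6, 0, 0)
print(np.linalg.matrix_rank(A))                 # 1, so the nullity is 2
```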

13.

Let $\mathcal{E}_1=\{E_{11},E_{12},E_{21},E_{22}\}=\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin{bmatrix}0&1\\0&0\end{bmatrix},\begin{bmatrix}0&0\\1&0\end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}$ be the standard basis for $M_2(\mathbb{R})$, and $\mathcal{E}_2=\{1,x,x^2,x^3\}$ the standard basis for $P_3(\mathbb{R})$. Let $T:M_2(\mathbb{R})\to P_3(\mathbb{R})$ be defined by
\[
T\!\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=2a+(b-d)x-(a+c)x^2+(a+b-c-d)x^3.
\]
(a)
Find the matrix of $T$ with respect to the two bases: $[T]_{\mathcal{E}_1}^{\mathcal{E}_2}$.
Solution.
The columns of the matrix $[T]_{\mathcal{E}_1}^{\mathcal{E}_2}$ are the coordinate vectors $[T(E_{ij})]_{\mathcal{E}_2}$, so
\[
[T]_{\mathcal{E}_1}^{\mathcal{E}_2}=\begin{bmatrix}2&0&0&0\\0&1&0&-1\\-1&0&-1&0\\1&1&-1&-1\end{bmatrix}.
\]
(b)
Determine the rank and nullity of T.
Solution.
It is almost immediate that the first three columns of the matrix are pivot columns (think RREF), so the rank is at least three. Then we notice that the last column is a multiple of the second, which means the rank is at most three. Thus the rank is 3 and the nullity is 1.
(c)
Find a basis of the image of T.
Solution.
The first three columns of $[T]_{\mathcal{E}_1}^{\mathcal{E}_2}$ are a basis for the column space of the matrix, but we recall that they are coordinate vectors and the codomain is $P_3(\mathbb{R})$, so a basis for the image is:
\[
\{\,2-x^2+x^3,\ x+x^3,\ -x^2-x^3\,\}.
\]
(d)
Find a basis of the kernel of T.
Solution.
Since
\[
T\!\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=2a+(b-d)x-(a+c)x^2+(a+b-c-d)x^3,
\]
we must characterize all matrices which yield the zero polynomial. We quickly deduce we must have
\[
a=c=0 \quad\text{and}\quad b=d,
\]
so one can choose $\begin{bmatrix}0&1\\0&1\end{bmatrix}$ as a basis for the kernel.
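The rank and kernel computations can be verified on the matrix of $T$ (a sketch assuming SymPy):

```python
import sympy as sp

# [T]_{E1}^{E2}: columns are the coordinate vectors of T(E11), T(E12), T(E21), T(E22).
M = sp.Matrix([[2, 0, 0, 0],
               [0, 1, 0, -1],
               [-1, 0, -1, 0],
               [1, 1, -1, -1]])

print(M.rank())        # 3, so the nullity is 4 - 3 = 1
print(M.nullspace())   # one vector (0, 1, 0, 1): the matrix E12 + E22 spans the kernel
```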

14.

Let $V$ be a vector space with basis $\mathcal{B}=\{v_1,\dots,v_4\}$. Define a linear transformation by
\[
T(v_1)=v_2,\quad T(v_2)=v_3,\quad T(v_3)=v_4,\quad T(v_4)=av_1+bv_2+cv_3+dv_4.
\]
(a)
What is the matrix of T with respect to the basis B?
Solution.
\[
[T]_{\mathcal{B}}=\begin{bmatrix}0&0&0&a\\1&0&0&b\\0&1&0&c\\0&0&1&d\end{bmatrix}.
\]
(b)
Determine necessary and sufficient conditions on a,b,c,d so that T is invertible.
Hint.
What is the determinant of T, or what happens when you row reduce the matrix?
Solution.
The determinant of the matrix is $-a$, so $T$ is invertible if and only if $a\ne 0$. The values of $b,c,d$ do not matter.
(c)
What is the rank of T and how does the answer depend upon the values of a,b,c,d?
Solution.
Moving the first row to the bottom (a sequence of row swaps) reduces the original matrix to
\[
\begin{bmatrix}1&0&0&b\\0&1&0&c\\0&0&1&d\\0&0&0&a\end{bmatrix},
\]
which is in echelon form. If $a=0$, the rank is 3; otherwise it is 4.
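A symbolic check of the determinant and rank claims (a sketch assuming SymPy):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[0, 0, 0, a],
               [1, 0, 0, b],
               [0, 1, 0, c],
               [0, 0, 1, d]])

print(M.det())              # -a
print(M.subs(a, 0).rank())  # 3 when a = 0 (whatever b, c, d are)
print(M.subs(a, 5).rank())  # 4 for a sample nonzero value of a
```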

15.

Define a map $T:M_{m\times n}(\mathbb{R})\to\mathbb{R}^m$ as follows: For $A=[a_{ij}]\in M_{m\times n}(\mathbb{R})$, define
\[
T(A)=\begin{bmatrix}b_1\\b_2\\\vdots\\b_m\end{bmatrix}
\quad\text{where}\quad b_k=\sum_{j=1}^n a_{kj},
\]
that is, $b_k$ is the sum of all the elements in the $k$-th row of $A$. Assume that $T$ is linear.
(a)
Find the rank and nullity of T.
Hint.
If you find this too abstract, try an example first, say with m=2 and n=3. And finding the rank is the easier first step.
Solution.
Using the standard basis $\{E_{ij}\}$ for $M_{m\times n}(\mathbb{R})$, we see that $T(E_{k1})=e_k$ where $\{e_1,\dots,e_m\}$ is the standard basis for $\mathbb{R}^m$. Since a spanning set for $\mathbb{R}^m$ is in the image of $T$, the map must be surjective, which means the rank is $m$. By rank-nullity, the nullity is $mn-m$.
(b)
For m=2, and n=3 find a basis for the nullspace of T.
Hint.
For an element to be in the nullspace, the sum of the entries in each of its rows needs to be zero. Can you make a basis with one row in each matrix all zero?
Solution.
Consider the set
\[
\left\{
\begin{bmatrix}1&0&-1\\0&0&0\end{bmatrix},\ 
\begin{bmatrix}0&1&-1\\0&0&0\end{bmatrix},\ 
\begin{bmatrix}0&0&0\\1&0&-1\end{bmatrix},\ 
\begin{bmatrix}0&0&0\\0&1&-1\end{bmatrix}
\right\}.
\]
Notice that the $1$ which occurs in each matrix occurs in a different location in each matrix. It is now easy to show that any linear combination of these matrices which equals the zero matrix must have all coefficients equal to zero, so the set is linearly independent. Since it has the correct size ($mn-m=4$), it must be a basis for the nullspace.

16.

This exercise is about how to deal with determining independent and spanning sets in vector spaces other than $F^n$. Let $V=P_3(\mathbb{R})$, the vector space of polynomials of degree at most 3 with real coefficients. Suppose that some process has handed you the set of polynomials
\[
S=\{\,p_1=1+2x+3x^2+4x^3,\ p_2=5+6x+7x^2+8x^3,\ p_3=9+10x+11x^2+12x^3,\ p_4=13+14x+15x^2+16x^3\,\}.
\]
We want to know whether S is a basis for V, or barring that extract a maximal linearly independent subset.
(a)
How can we translate this problem about polynomials into one about vectors in Rn?
Solution.
We know that Theorem 1.2.5 tells us that $P_3(\mathbb{R})$ is isomorphic to $\mathbb{R}^4$, and all we need to do is map a basis to a basis, but we would like a little more information at our disposal.
Let $\mathcal{B}=\{1,x,x^2,x^3\}$ be the standard basis for $V=P_3(\mathbb{R})$. Then the map
\[
T(v)=[v]_{\mathcal{B}},
\]
which takes a vector $v$ to its coordinate vector, is such an isomorphism. What is important is that linear dependence relations among the vectors in $S$ are automatically reflected in linear dependence relations among the coordinate vectors.
(b)
Determine a maximal linearly independent subset of S.
Solution.
If we record the coordinate vectors for the polynomials in $S$ as columns of a matrix, we produce a matrix $A$ and its RREF $R$:
\[
A=\begin{bmatrix}1&5&9&13\\2&6&10&14\\3&7&11&15\\4&8&12&16\end{bmatrix}
\qquad
R=\begin{bmatrix}1&0&-1&-2\\0&1&2&3\\0&0&0&0\\0&0&0&0\end{bmatrix}
\]
So we see that the first two columns are pivot columns, which means $S_0=\{p_1,p_2\}$ is a maximal linearly independent set.
We also recall that from the RREF, we can read off the linear dependencies with the other two vectors:
\[
p_3=-p_1+2p_2 \quad\text{and}\quad p_4=-2p_1+3p_2.
\]
(c)
Extend the linearly independent set from the previous part to a basis for P3(R).
Solution.
Since we are free to add whatever vectors we want to the given set, we can append coordinate vectors to the ones for $p_1$ and $p_2$ and see which columns turn out to be pivots. We know that $\{p_1,p_2,1,x,x^2,x^3\}$ is a (linearly dependent) spanning set. We convert to coordinates and row reduce to find the pivots. So we build a matrix $B$ and compute its RREF:
\[
B=\begin{bmatrix}1&5&1&0&0&0\\2&6&0&1&0&0\\3&7&0&0&1&0\\4&8&0&0&0&1\end{bmatrix}
\longrightarrow
\begin{bmatrix}1&0&0&0&-2&7/4\\0&1&0&0&1&-3/4\\0&0&1&0&-3&2\\0&0&0&1&-2&1\end{bmatrix}
\]
We see the first 4 columns are pivots, so we may take $\{p_1,p_2,1,x\}$ as one such basis.
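Here is a sketch (assuming SymPy) of the coordinate-vector computations for parts (b) and (c):

```python
import sympy as sp

# Columns are the coordinate vectors of p1, p2, p3, p4 in the basis {1, x, x^2, x^3}.
A = sp.Matrix([[1, 5, 9, 13],
               [2, 6, 10, 14],
               [3, 7, 11, 15],
               [4, 8, 12, 16]])
print(A.rref())   # pivot columns 0 and 1, so {p1, p2} is a maximal independent subset

# Append the coordinate vectors of 1, x, x^2, x^3 and look for pivots again.
B = A[:, :2].row_join(sp.eye(4))
print(B.rref())   # pivot columns 0, 1, 2, 3, so {p1, p2, 1, x} is a basis
```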

17.

Let $A\in M_5(\mathbb{R})$ be the block matrix (with off-diagonal blocks all zero) given by:
\[
A=\begin{bmatrix}-1&0&0&0&0\\ \alpha&2&0&0&0\\0&0&3&0&0\\0&0&\beta&3&0\\0&0&0&\gamma&3\end{bmatrix}.
\]
Determine all values of α,β,γ for which A is diagonalizable.
Solution.
Since the matrix is lower triangular, it is easy to compute the characteristic polynomial:
\[
\chi_A=(x+1)(x-2)(x-3)^3.
\]
The eigenspaces for $\lambda=-1,2$ each have dimension 1 (the required minimum), equal to the algebraic multiplicity, so the only question is what happens with the eigenvalue $\lambda=3$. Consider the matrix
\[
A-3I=\begin{bmatrix}-4&0&0&0&0\\ \alpha&-1&0&0&0\\0&0&0&0&0\\0&0&\beta&0&0\\0&0&0&\gamma&0\end{bmatrix}.
\]
For the nullspace of $A-3I$ to have dimension 3, the rank must be 2. Clearly the first two rows are linearly independent (independent of $\alpha$), while if either $\beta$ or $\gamma$ is nonzero, this will increase the rank beyond two. So the answer is that $\alpha$ can be anything, but $\beta$ and $\gamma$ must both be zero.
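One can spot-check the conclusion for particular parameter values (a sketch assuming SymPy); the helper below just builds the matrix for given $\alpha,\beta,\gamma$.

```python
import sympy as sp

def block_A(alpha, beta, gamma):
    return sp.Matrix([[-1, 0, 0, 0, 0],
                      [alpha, 2, 0, 0, 0],
                      [0, 0, 3, 0, 0],
                      [0, 0, beta, 3, 0],
                      [0, 0, 0, gamma, 3]])

print(block_A(7, 0, 0).is_diagonalizable())   # True: alpha is irrelevant
print(block_A(0, 1, 0).is_diagonalizable())   # False: nonzero beta
print(block_A(0, 0, 1).is_diagonalizable())   # False: nonzero gamma
```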

18.

Let $A=\begin{bmatrix}3&0&0\\6&-1&6\\1&0&2\end{bmatrix}\in M_3(\mathbb{R})$.
(a)
Find the characteristic polynomial of A.
Solution.
$\chi_A=\det(xI-A)=\det\!\left(\begin{bmatrix}x-3&0&0\\-6&x+1&-6\\-1&0&x-2\end{bmatrix}\right)$. Expanding along the first row shows that $\chi_A=(x-3)(x-2)(x+1)$.
(b)
Show that A is invertible.
Solution.
Many answers are possible: $\det A=-6\ne 0$, or 0 is not an eigenvalue, or one could row reduce the matrix to the identity. All show $A$ is invertible.
(c)
Justify that the columns of A form a basis for R3.
Solution.
Since $A$ is invertible, the rank of $A$ is 3, which is the dimension of the column space. So the column space is all of $\mathbb{R}^3$, which means the columns must be linearly independent, either by Theorem 1.1.3 or directly since the nullspace is trivial. Thus the columns form a basis.
(d)
Let $\mathcal{B}=\{v_1,v_2,v_3\}$ be the columns of $A$, and let $\mathcal{E}$ be the standard basis for $\mathbb{R}^3$. Suppose that $T:\mathbb{R}^3\to\mathbb{R}^3$ is a linear map for which $A=[T]_{\mathcal{E}}$. Determine $[T]_{\mathcal{B}}$.
Solution.
We know that $[T]_{\mathcal{B}}=Q^{-1}[T]_{\mathcal{E}}Q$, where $Q=[I]_{\mathcal{B}}^{\mathcal{E}}$ is a change of basis matrix. But we see that $Q=[I]_{\mathcal{B}}^{\mathcal{E}}=A$ by definition, and since $[T]_{\mathcal{E}}=A$ as well, we check that $[T]_{\mathcal{B}}=Q^{-1}[T]_{\mathcal{E}}Q=A^{-1}AA=A$.
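A numerical sanity check of parts (a), (b), and (d) (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[3,  0, 0],
              [6, -1, 6],
              [1,  0, 2]], dtype=float)

print(np.round(np.linalg.eigvals(A), 6))      # 3, 2, -1 in some order
print(round(np.linalg.det(A), 6))             # -6.0, nonzero, so A is invertible

Q = A                                         # the change of basis matrix [I]_B^E is A itself
print(np.round(np.linalg.inv(Q) @ A @ Q, 6))  # equals A, as claimed
```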