Section 1.1 Vector spaces and linear maps

In simplest terms, linear algebra is the study of vector spaces and the linear maps between them. But what does that really mean? An overriding goal in mathematics is to classify objects into distinct “types” and to characterize how complicated structures are built from simpler ones. For example, in linear algebra the notion of when two vector spaces are the same “type” (i.e., are indistinguishable as vector spaces) is captured by the notion of isomorphism. In terms of structure, the notions of bases and direct sums play a crucial role.

Subsection 1.1.1 Some familiar examples of vector spaces

While most of the examples and applications we shall consider are vector spaces over the field of real or complex numbers, for the examples below we let \(F\) denote an arbitrary field. First recall the definition of a vector space: a set \(V\) equipped with a vector addition and a scalar multiplication by elements of \(F\) satisfying the familiar axioms.
  • For an integer \(n \ge 1\text{,}\) the set \(V=F^n\text{,}\) of \(n\)-tuples of numbers in \(F\) viewed as column vectors with \(n\) entries, is a vector space over \(F\text{.}\)
  • For integers \(m,n \ge 1\text{,}\) the vector space of \(m\times n\) matrices with entries from \(F\) is denoted \(M_{m\times n}(F).\) Column vectors in \(F^m\) are the matrices in \(M_{m\times1}(F)\text{,}\) while row vectors in \(F^n\) are matrices in \(M_{1\times n}(F)\text{.}\)
  • For an integer \(n \ge 1\text{,}\) we denote by \(P_n(F)\) the vector space of polynomials of degree at most \(n\) having coefficients in \(F\text{.}\)
  • The vector space of all polynomials with coefficients in \(F\) is denoted \(P(F)\) in many linear algebra texts, though in more advanced courses (say, abstract algebra) the more typical notation is \(F[x]\) (with \(x\) the indeterminate), the notation we shall use here.
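
For example, identifying a polynomial in \(P_n(F)\) with its vector of coefficients gives a correspondence
\begin{equation*} a_0 + a_1x + \cdots + a_nx^n \longleftrightarrow (a_0, a_1, \dots, a_n) \in F^{n+1}, \end{equation*}
which matches addition and scalar multiplication on both sides; in particular \(P_n(F)\) has dimension \(n+1\) over \(F\text{.}\)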

Subsection 1.1.2 Linearly independent and spanning sets

Let \(V\) be a vector space over a field \(F\text{.}\) For a subset \(S\subseteq V\text{,}\) we have the fundamental notions of linear independence, linear dependence, span, basis, and dimension. We remind the reader that even when dealing with infinite dimensional vector spaces, linear combinations involve only a finite number of summands.

Checkpoint 1.1.1.

Let \(W\) be a subspace of a vector space \(V\text{,}\) and \(S\) a subset of \(W.\) Show that \(\Span(S) \subseteq W.\)
Hint.
Since \(W\) is itself a vector space, it is closed under vector addition and scalar multiplication.
There are many important theorems relating the above notions, which can be found in any standard text. We summarize some of them here.
For the remainder of this section we restrict to a vector space \(V\) of finite dimension \(n.\)
As a consequence of the above, we have another important theorem: in a vector space \(V\) of dimension \(n\text{,}\) any \(n\) linearly independent vectors form a basis, and any \(n\) vectors which span \(V\) form a basis.

Proof.

The proofs follow readily from the above. If a set of \(n\) linearly independent vectors in \(V\) did not span \(V\text{,}\) we could adjoin a vector outside its span and obtain a linearly independent set with \(n+1\) elements, which is impossible when \(\dim V = n\text{.}\) Similarly, if \(n\) vectors spanned \(V\) but were not linearly independent, we could remove one of them and still span \(V\text{,}\) producing a spanning set with fewer than \(n\) elements, which is equally impossible.
Fundamental to the proofs of these theorems is the replacement (exchange) theorem, which can be found in any standard text.

Exercises

1.
Let \(A\) be an \(m\times n\) matrix. Its row space is the span of the rows of \(A\) and so is a subspace of \(F^n\text{.}\) Its column space is the span of its columns and so is a subspace of \(F^m.\)
Can any given column of a matrix always be used as part of a basis for the column space?
Hint.
Under what conditions is a set with one vector a linearly independent subset of the vector space?
Answer.
No in general: the zero column belongs to no basis. But any nonzero column of a matrix can be used as part of a basis for the column space, since a set consisting of a single nonzero vector is linearly independent, and a linearly independent subset of the column space extends to a basis of it.
2.
Suppose the first two columns of a matrix are nonzero. What is an easy way to check that both columns can be part of a basis for the column space?
Hint.
What does the notion of linear dependence reduce to in the case of two vectors?
Answer.
Check that neither column is a scalar multiple of the other: two nonzero columns that are not scalar multiples of one another form a linearly independent set and so may be used as part of a basis for the column space.
3.
Do you think there is an easy way to determine if the first three nonzero columns of a matrix can be part of a basis for the column space?
Hint.
Easy may be in the eye of the beholder.
Answer.
Not typically by inspection. Given that the first two columns are linearly independent, one needs to know that the third is not a linear combination of the first two. In Section 1.3 we provide answers using elementary column operations or, perhaps surprisingly, elementary row operations; a computational check is sketched after these exercises.
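
Although systematic methods are deferred to Section 1.3, the rank criterion behind these exercises is easy to experiment with on a computer. The following minimal sketch (in Python with numpy, an illustration rather than part of the text) checks whether the first \(k\) columns of a matrix are linearly independent by comparing the rank of the submatrix they form with \(k\text{.}\)

import numpy as np

def leading_columns_independent(A, k):
    # The first k columns are linearly independent exactly when the
    # m x k submatrix they form has rank k.
    return np.linalg.matrix_rank(A[:, :k]) == k

# The third column below is the sum of the first two, so the first two
# columns are independent while all three are not.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
print(leading_columns_independent(A, 2))  # True
print(leading_columns_independent(A, 3))  # False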

Subsection 1.1.3 Defining a linear map

Starting from the definition of a linear map, one proves by induction that a linear map takes linear combinations of vectors in the domain to the same linear combination of the corresponding vectors in the codomain. More precisely, we have Proposition 1.1.5: if \(T:V\to W\) is linear, then for all scalars \(a_1, \dots, a_r \in F\) and vectors \(v_1, \dots, v_r \in V\text{,}\)
\begin{equation*} T(a_1v_1 + \cdots + a_rv_r) = a_1T(v_1) + \cdots + a_rT(v_r). \end{equation*}
If the goal is to define a linear map \(T:V \to W\text{,}\) one must define \(T(v)\) for all vectors \(v\in V\text{,}\) so it is essential to know how to represent a given vector as a linear combination of others; this leads in particular to the notion of a basis for a vector space. Recall some standard bases for familiar vector spaces.
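For instance: \(F^n\) has the standard basis \(\{e_1, \dots, e_n\}\text{,}\) where \(e_i\) has a \(1\) in the \(i\)th entry and \(0\)s elsewhere; \(M_{m\times n}(F)\) has the basis of matrices \(E_{ij}\) with a \(1\) in row \(i\text{,}\) column \(j\) and \(0\)s elsewhere; and \(P_n(F)\) has the basis of monomials \(\{1, x, x^2, \dots, x^n\}\text{.}\)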
So now let’s suppose \(V\) is a finite-dimensional vector space over a field \(F\) with basis \(\cB=\{v_1, \dots, v_n\}\text{,}\) and \(W\) is a completely arbitrary vector space over \(F\text{.}\) To define a linear map \(T:V\to W\) it is certainly necessary to define the values \(T(v_1), \dots, T(v_n)\text{.}\) The important point is that this is all that needs to be done!
Indeed, suppose we choose arbitrary vectors \(w_1, \dots, w_n \in W\) as the intended values \(T(v_i) = w_i\text{.}\) Proposition 1.1.5 tells us that if such a linear map \(T\) exists, then
\begin{equation} T(a_1v_1+ \cdots+a_n v_n) = a_1 w_1 + \cdots + a_n w_n.\tag{1.1.2} \end{equation}
Since \(\cB=\{v_1, \dots, v_n\}\) is a basis for \(V\text{,}\) every element of \(V\) has a unique expression of the form \(a_1v_1+ \cdots+a_n v_n\text{,}\) so (1.1.2) gives a well-defined value \(T(v)\) for every vector \(v\in V\text{,}\) and it is easy to check from this definition that \(T\) is indeed a linear map.
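As a concrete illustration, take \(V = \R^2\) with standard basis \(\{e_1, e_2\}\) and \(W = \R^3\text{.}\) Choosing \(w_1 = (1,0,1)\) and \(w_2 = (0,2,0)\) determines a unique linear map \(T:\R^2\to\R^3\text{,}\) namely
\begin{equation*} T(a_1e_1 + a_2e_2) = a_1(1,0,1) + a_2(0,2,0) = (a_1,\, 2a_2,\, a_1). \end{equation*}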
Next recall the definition of the span of a set of vectors, and gain some facility by doing the following exercises.

Exercises

1.
Let \(T: V\to W\) be a linear map between vector spaces, and \(\{v_1, \dots, v_r\} \subseteq V.\) Show that
\begin{equation*} T(\Span(\{v_1, \dots, v_r\})) = \Span(\{T(v_1), \dots, T(v_r)\}). \end{equation*}
Hint 1.
When you want to show that two sets, say \(X\) and \(Y\text{,}\) are equal, you must show \(X\subseteq Y\) and \(Y\subseteq X\text{.}\) And to show that (for example) \(X\subseteq Y\text{,}\) you need only show that each \(x\in X\) also lies in \(Y\text{.}\)
Hint 2.
So if \(w \in T(\Span(\{v_1, \dots, v_r\}))\text{,}\) then \(w = T(a_1 v_1 + \cdots + a_r v_r)\) for some choice of scalars \(a_1, \dots, a_r.\)
2.
Let \(V = P_2(\R)\) be the vector space of all polynomials of degree at most two with real coefficients. We know that both sets \(\{1, x, x^2\}\) and \(\{2, 3x, 2+3x+4x^2\}\) are bases for \(V.\)
By Theorem 1.1.6, there are uniquely determined linear maps \(S,T:V\to V\) defined by
\begin{align*} T(1)\amp = 0, \quad T(x) = 1,\quad T(x^2) = 2x.\\ S(2)\amp = 0, \quad S(3x) = 3,\quad S(2+3x+4x^2) = 3+8x. \end{align*}
Show that the maps \(S\) and \(T\) are the same.
Hint 1.
Why is it enough to show that \(S(1)=0\text{,}\) \(S(x)=1\text{,}\) and \(S(x^2)=2x\text{?}\)
Hint 2.
How does the linearity of \(S\) play a role?
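For instance, since \(x = \frac{1}{3}(3x)\text{,}\) linearity gives \(S(x) = \frac{1}{3}S(3x) = \frac{1}{3}\cdot 3 = 1 = T(x)\text{.}\) The computations for \(1\) and \(x^2\) proceed in the same way.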