
Section 2.6 Bases: the critical ingredient

You have seen the definition of a basis for a vector space, and you are aware that a vector space can have many bases. Isn’t it enough that there is often a standard basis? Do we really need different bases? These are good questions, and we investigate them here.
We begin by first understanding the value in having a basis. Since a vector space is a very general object whose only structure is vector addition and scalar multiplication, a basis gives us a way in which to reduce the description of an arbitrary vector to a finite amount of data.
For example, when we describe a vector in \(\R^3\text{,}\) we may just write down something like \(v=(1,2,3),\) which makes it seemingly trivial to describe any point in 3-space, no basis needed. But of course we have used the standard basis \(\{e_1, e_2, e_3\}\) to describe \(v\) as the linear combination \(v = 1e_1 +2e_2+3e_3,\) so that we can specify any of the infinitely many points in \(\R^3\) by knowing only three “coordinates”, the coefficients of the basis vectors in the linear combination.
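To see this concretely, here is a minimal numpy sketch (purely illustrative; the vector and basis are those named in the text) showing that the coordinates \((1,2,3)\) are exactly the coefficients in the standard-basis linear combination.

```python
import numpy as np

# the rows of the 3x3 identity matrix are the standard basis e1, e2, e3
e1, e2, e3 = np.eye(3)

# the linear combination from the text: v = 1*e1 + 2*e2 + 3*e3
v = 1 * e1 + 2 * e2 + 3 * e3
print(v)  # [1. 2. 3.] -- the coordinates are the coefficients
```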
Similarly, when we specify a polynomial, we are simply encoding its coefficients with respect to \(\{1, x, x^2, \dots\}.\) You may recall from calculus that when we want a Taylor polynomial that approximates a function \(f\) near a point \(x=a,\) one writes
\begin{equation*} f(x) \approx c_0 + c_1 (x-a) + \cdots + c_n (x-a)^n \end{equation*}
where \(c_j = f^{(j)}(a)/j!\text{.}\) In other words, the \(c_j\) are the coefficients (coordinates) of the linear combination with respect to the basis \(\{1, (x-a), \dots, (x-a)^n\}\) of \(P_n(\R).\) So, forgiving the pun, different bases are tailored to different purposes.
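As a hedged illustration of the formula \(c_j = f^{(j)}(a)/j!\text{,}\) here is a short sympy sketch; the function \(\sin x\text{,}\) the point \(a=0\text{,}\) and the degree \(n=5\) are arbitrary choices, not from the text.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)   # illustrative example function
a, n = 0, 5     # expansion point and degree, chosen for illustration

# c_j = f^(j)(a) / j!, the coordinates in the basis {1, (x-a), ..., (x-a)^n}
coeffs = [sp.diff(f, x, j).subs(x, a) / sp.factorial(j) for j in range(n + 1)]
taylor = sum(c * (x - a)**j for j, c in enumerate(coeffs))
print(taylor)   # x - x**3/6 + x**5/120
```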
Now let’s pick apart the requirements for a basis: linear independence and span. The fact that a set of vectors is a spanning set for a vector space tells us we can reduce the description of any vector to a finite linear combination. That is certainly good, so what does linear independence add? Uniqueness: there is only one way to write that linear combination. Let’s make the statement explicit: a set \(B\) is a basis for a vector space \(V\) if and only if every vector in \(V\) can be expressed in exactly one way as a linear combination of elements of \(B\text{.}\)
To belabor the point, while \(S=\{v_1 =(1,0),v_2= (0,1),v_3=(1,1)\}\) is certainly a spanning set for \(\R^2,\) it is not a linearly independent set since any vector can be expressed in multiple ways as a linear combination. For example,
\begin{equation*} (a,b) = av_1 + bv_2 + 0v_3 = (a-b)v_1 + 0v_2 + bv_3 = 0v_1 + (b-a)v_2 + av_3. \end{equation*}
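A quick numpy check of the displayed identity, with sample values \(a=2\) and \(b=5\) (any values would do): all three combinations produce the same vector, so the representation over \(S\) is not unique.

```python
import numpy as np

v1, v2, v3 = np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])
a, b = 2.0, 5.0  # arbitrary sample coordinates

combos = [
    a * v1 + b * v2 + 0 * v3,          # first representation
    (a - b) * v1 + 0 * v2 + b * v3,    # second representation
    0 * v1 + (b - a) * v2 + a * v3,    # third representation
]
print(combos)  # three copies of [2. 5.]
```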
If we return to Proposition 2.5.1, a linear map \(T:V\to W\) is structure-preserving, namely for any vectors \(v_i\) and scalars \(a_i\text{,}\)
\begin{equation*} T(a_1v_1+ \cdots+a_rv_r) = a_1T(v_1) + \cdots + a_r T(v_r). \end{equation*}
Using the example above with \(S=\{v_1 =(1,0),v_2= (0,1),v_3=(1,1)\}\) as spanning set for \(\R^2,\) we might be inclined to try to define a linear map \(T:\R^2 \to \R^2\) by setting
\begin{equation*} T(v_1) = (1,2),\quad T(v_2) = (3,4), \text{ and } T(v_3) = (5,6). \end{equation*}
But we would find that this map is not linear since \(v_3 = v_1 + v_2,\) but \(T(v_3) = (5,6) \ne (4,6) = T(v_1) + T(v_2).\) We could, of course, remedy this by defining \(T(v_3) = (4,6)\text{,}\) but then the definition of \(T(v_3)\) is redundant; it is already implied by saying \(T\) is linear and defined on \(v_1\) and \(v_2,\) which span \(\R^2.\) This is where linear independence of the set is important: it guarantees there is only one way to describe a vector as a linear combination of the elements of the set.
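Here is a small numpy sketch of the failed definition above: since \(v_3 = v_1 + v_2\text{,}\) linearity would force \(T(v_3) = T(v_1) + T(v_2) = (4,6)\text{,}\) contradicting the assigned value \((5,6)\text{.}\)

```python
import numpy as np

# the attempted images of v1, v2, v3 from the text
Tv1, Tv2, Tv3 = np.array([1., 2.]), np.array([3., 4.]), np.array([5., 6.])

print(Tv1 + Tv2)                       # [4. 6.] -- what linearity demands
print(np.array_equal(Tv1 + Tv2, Tv3))  # False: the assignment is inconsistent
```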
We have the fundamental theorem: any two bases for a vector space have the same cardinality.

Remark 2.6.3.

We recall that the cardinality of any basis for a vector space is called its dimension, and spaces can be finite-dimensional like \(F^n\) or \(P_n(F)\) or \(M_{m\times n}(F),\) or they can be infinite-dimensional like \(P(F)=F[x]\text{,}\) the vector space of all polynomials.
This leads to the crucial result, Theorem 2.6.4: if \(\{v_1, \dots, v_n\}\) is a basis for \(V\) and \(w_1, \dots, w_n\) are any vectors in a vector space \(W\text{,}\) then there is exactly one linear map \(T:V\to W\) with \(T(v_i) = w_i\) for each \(i\text{.}\)
This is truly an amazing result. It says that, given a basis, one can define a linear map by simply specifying where to send each of the basis vectors, and that the map is then completely determined.
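In coordinates, here is one hedged numpy sketch of the idea (the chosen images \(w_1, w_2\) of the standard basis are illustrative): once the images of the basis vectors are fixed, the matrix with those images as columns computes the unique linear map on every vector.

```python
import numpy as np

w1, w2 = np.array([1., 2.]), np.array([3., 4.])  # chosen images of e1, e2
T = np.column_stack([w1, w2])                    # columns are T(e1), T(e2)

v = np.array([5., -1.])      # an arbitrary vector, v = 5*e1 + (-1)*e2
print(T @ v)                 # 5*w1 + (-1)*w2 = [2. 6.]
```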

Exercises

1.

Let \(T: V\to W\) be a linear map between vector spaces, and \(\{v_1, \dots, v_r\} \subseteq V.\) Show that
\begin{equation*} T(\Span(\{v_1, \dots, v_r\})) = \Span(\{T(v_1), \dots, T(v_r)\}). \end{equation*}
Hint 1.
When you want to show that two sets, say \(X\) and \(Y\text{,}\) are equal, you must show \(X\subseteq Y\) and \(Y\subseteq X\text{.}\) And to show that (for example) \(X\subseteq Y\text{,}\) you need only show that for each choice of \(x\in X\text{,}\) we have \(x\in Y\text{.}\)
Hint 2.
So if \(w \in T(\Span(\{v_1, \dots, v_r\}))\text{,}\) then \(w = T(a_1 v_1 + \cdots + a_r v_r)\) for some choice of scalars \(a_1, \dots, a_r.\)
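A numerical illustration of the identity underlying both inclusions (the matrix \(T\) and the vectors \(v_i\) here are random stand-ins, not part of the exercise): the image of a linear combination is the same linear combination of the images.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4))  # a linear map R^4 -> R^3, as a matrix
V = rng.standard_normal((4, 2))  # columns play the role of v1, v2
a = rng.standard_normal(2)       # scalars a1, a2

lhs = T @ (V @ a)   # T(a1*v1 + a2*v2)
rhs = (T @ V) @ a   # a1*T(v1) + a2*T(v2)
print(np.allclose(lhs, rhs))  # True
```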

2.

Let \(V = P_2(\R)\) be the vector space of all polynomials of degree at most two with real coefficients. We know that both sets \(\{1, x, x^2\}\) and \(\{2, 3x, 2+3x+4x^2\}\) are bases for \(V.\)
By Theorem 2.6.4, there are uniquely determined linear maps \(S,T:V\to V\) defined by
\begin{align*} T(1)\amp = 0, \quad T(x) = 1,\quad T(x^2) = 2x.\\ S(2)\amp = 0, \quad S(3x) = 3,\quad S(2+3x+4x^2) = 3+8x. \end{align*}
Show that the maps \(S\) and \(T\) are the same.
Hint 1.
Why is it enough to show that \(S(1)=0\text{,}\) \(S(x)=1\text{,}\) and \(S(x^2)=2x\text{?}\)
Hint 2.
How does the linearity of \(S\) play a role?
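For those who want a sanity check after writing the proof, here is a hedged sympy sketch (the coefficient computations are one route suggested by the hints, not the only one): expressing \(1, x, x^2\) in the second basis and applying \(S\) by linearity recovers the values of \(T\text{.}\)

```python
import sympy as sp

x = sp.symbols('x')

# images of the second basis under S, from the text (q = 2 + 3x + 4x^2)
S2, S3x, Sq = sp.Integer(0), sp.Integer(3), 3 + 8*x

# apply S by linearity, using 1 = (1/2)(2), x = (1/3)(3x),
# and x^2 = (1/4)[(2 + 3x + 4x^2) - 2 - 3x]
S1  = sp.Rational(1, 2) * S2               # S(1)
Sx  = sp.Rational(1, 3) * S3x              # S(x)
Sx2 = sp.Rational(1, 4) * (Sq - S2 - S3x)  # S(x^2)

print(S1, Sx, sp.expand(Sx2))  # 0 1 2*x, matching T(1), T(x), T(x^2)
```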