Throughout, all vector spaces are inner product spaces over the field \(F = \R \text{ or } \C\) with inner product \(\la\cdot,\cdot\ra.\) Generally the vector spaces are finite-dimensional unless otherwise noted.
Subsection 3.2.1 Orthogonal and Orthonormal Bases
Recall that a set \(S\) of vectors is orthogonal if every pair of distinct vectors in \(S\) is orthogonal, and the set is orthonormal if \(S\) is an orthogonal set of unit vectors.
Example 3.2.1. The standard basis in \(F^n\).
Let \(\cE=\{e_1, e_2, \dots,
e_n\}\) be the standard basis in \(F^n\) (\(e_i\) has a one in the \(i\)th coordinate and zeros elsewhere). It is immediate to check that this is an orthonormal basis for \(F^n.\)
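As a quick numerical sanity check (a NumPy sketch; the dimension \(n=4\) is an arbitrary choice), the pairwise inner products \(\la e_i, e_j\ra\) of the standard basis assemble into the identity matrix:

```python
import numpy as np

n = 4
E = np.eye(n)  # columns are the standard basis vectors e_1, ..., e_n of R^n

# Gram matrix of all pairwise inner products <e_i, e_j>; orthonormality
# means this is exactly the Kronecker delta, i.e. the identity matrix.
G = E.T @ E
assert np.allclose(G, np.eye(n))
```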
We first make a very simple observation: an orthogonal set of nonzero vectors is linearly independent.
Proposition 3.2.2.
Let \(S = \{v_i\}_{i\in I}\) be an orthogonal set of nonzero vectors. Then \(S\) is a linearly independent set.
Here \(S\) can be an infinite set, which is why we index its elements by a set \(I\text{;}\) but since the notion of linear (in)dependence only involves a finite number of vectors at a time, our proposition holds true in this broader setting.
Proof.
Suppose that \(S\) is a linearly dependent set. Then there exist vectors \(v_{i_1}, \dots, v_{i_k} \in S\) and scalars \(a_{i_j}\) not all zero so that
\begin{equation*}
v := a_{i_1}v_{i_1} + \cdots + a_{i_k}v_{i_k} = 0.
\end{equation*}
Indeed, there is no loss to assume all the coefficients are nonzero, so in particular \(a_{i_1} \ne 0.\) We know that since \(v=0,\) \(\la v,v_{i_1}\ra=0,\) but computing this inner product directly and using the orthogonality of \(S\) gives
\begin{equation*}
0 = \la v, v_{i_1}\ra = \sum_{j=1}^k a_{i_j}\la v_{i_j}, v_{i_1}\ra = a_{i_1}\la v_{i_1}, v_{i_1}\ra = a_{i_1}\|v_{i_1}\|^2.
\end{equation*}
But \(v_{i_1}\ne 0,\) so its length is nonzero, forcing \(a_{i_1}=0,\) a contradiction.
Orthonormal bases offer distinct advantages in terms of representing coordinate vectors or the matrix of a linear map. For example, if \(\cB=\{v_1, \dots, v_n\}\) is a basis for a vector space \(V,\) we know that every \(v\in V\) has a unique representation as \(v = a_1v_1 + \cdots + a_nv_n\text{,}\) the coefficients of which provide the coordinate vector \([v]_\cB.\) But determining the coordinates is often a task that requires some work. With an orthonormal basis, this process is completely mechanical.
Theorem 3.2.3.
Let \(V,W\) be finite-dimensional inner product spaces with orthonormal bases \(\cB_V=\{e_1, \dots, e_n\}\) and \(\cB_W=\{f_1, \dots, f_m\}.\)
(1) Every vector \(v\in V\) has a unique representation as \(v = a_1e_1 + \cdots + a_n e_n\) where \(a_j = \la v, e_j\ra.\)
(2) If \(T:V\to W\) is a linear map and \(A = [T]_{\cB_V}^{\cB_W}\text{,}\) then \(A_{ij} = \la T(e_j),f_i\ra.\)
Proof of (1).
Write \(v = a_1e_1 + \cdots + a_n e_n\text{.}\) Then using the linearity of the inner product in the first variable and \(\la e_i,e_j\ra = \delta_{ij},\) the Kronecker delta, we have
\begin{equation*}
\la v, e_j\ra = \Big\la \sum_{i=1}^n a_ie_i, e_j\Big\ra = \sum_{i=1}^n a_i\la e_i,e_j\ra = a_j.
\end{equation*}
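Part (1) of the theorem in action: with an orthonormal basis, the coordinates of \(v\) are just inner products. A small NumPy sketch (the basis below is an assumed example, the standard basis of \(\R^2\) rotated by 45 degrees):

```python
import numpy as np

# An orthonormal basis of R^2 (assumed example): rotate the standard basis by 45 degrees.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([3.0, 5.0])

# The coordinates are inner products: a_j = <v, e_j>.
a1, a2 = v @ e1, v @ e2

# Reassembling v from its coordinates confirms v = a1*e1 + a2*e2.
assert np.allclose(a1 * e1 + a2 * e2, v)
```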
Orthonormal bases clearly have their advantages, and there is a standard algorithm to produce one from an arbitrary basis; but to understand why the algorithm works, we need to review projections.
From applications of vector calculus, one recalls the orthogonal projection of a vector \(v\) onto the line spanned by a vector \(u\text{.}\) The projection is a vector parallel to \(u,\) so is of the form \(\lambda u\) for some scalar \(\lambda.\) Referring to the figure below, if \(\theta\) is the angle between the vectors \(u\) and \(v\text{,}\) then the length of \(\proj_u v\) is \(\|v\| \cos\theta\) (technically its absolute value). But \(\cos\theta = \la u,v\ra/(\|u\|\|v\|)\text{,}\) and the direction of \(u\) is given by the unit vector \(\ds\frac{u}{\|u\|}\) parallel to \(u\text{,}\) so putting things together we see that
\begin{equation*}
\proj_u v = \|v\|\cos\theta\, \frac{u}{\|u\|} = \|v\|\,\frac{\la u,v\ra}{\|u\|\|v\|}\,\frac{u}{\|u\|} = \frac{\la u,v\ra}{\|u\|^2}\, u,
\end{equation*}
so the scalar \(\lambda\) referred to above is \(\ds\frac{\la u,v\ra}{\|u\|^2}.\) We also note that the vector \(w:= v - \proj_u v\) is orthogonal to \(u.\)
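The formula \(\proj_u v = \frac{\la u,v\ra}{\|u\|^2}\,u\) is easy to compute with. A short NumPy sketch (the vectors and the helper name `proj_onto_line` are assumptions for illustration):

```python
import numpy as np

def proj_onto_line(u, v):
    """Orthogonal projection of v onto the line spanned by u (u must be nonzero)."""
    return (np.dot(u, v) / np.dot(u, u)) * u

u = np.array([2.0, 0.0, 0.0])
v = np.array([3.0, 4.0, 5.0])

p = proj_onto_line(u, v)
w = v - p  # the residual w = v - proj_u v

# p is parallel to u, and the residual w is orthogonal to u.
assert np.allclose(p, [3.0, 0.0, 0.0])
assert np.isclose(np.dot(w, u), 0.0)
```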
Now the key to an algorithm which takes an arbitrary basis to an orthogonal one is the above construction. Note that in the figure below, the vectors \(u\) and \(v\) are not parallel, so form a linearly independent set. The vectors \(u\) and \(w\) are orthogonal (hence linearly independent) and have the same span as the original vectors. Thus we have turned an arbitrary basis of two elements into an orthogonal one. The Gram-Schmidt process below extends this idea inductively.
Algorithm 3.2.5. Gram-Schmidt process.
Let \(V\) be an inner product space, and \(W\) a subspace with basis \(\cB=\{v_1, \dots, v_m\}\text{.}\) To produce an orthogonal basis \(\cE=\{e_1, \dots, e_m\}\) for \(W,\) proceed inductively.
(1) Let \(e_1 = v_1\text{.}\)
(2) Let \(\ds e_k = v_k - \sum_{j=1}^{k-1} \frac{\la v_k,e_j\ra}{\|e_j\|^2} e_j, \text{ for } 2\le k\le m.\)
To produce an orthonormal basis, normalize each vector replacing \(e_j\) with \(e_j/\|e_j\|.\)
We note that the first two steps of the Gram-Schmidt process are exactly what we did above with the orthogonal projection.
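The Gram-Schmidt process translates almost line-for-line into code. A minimal NumPy sketch (it assumes, as the algorithm does, that the input vectors are linearly independent; the function name `gram_schmidt` is mine):

```python
import numpy as np

def gram_schmidt(vectors, normalize=False):
    """Turn a list of linearly independent vectors into an orthogonal
    (or, if normalize=True, orthonormal) basis of their span."""
    basis = []
    for v in vectors:
        # e_k = v_k minus the projections of v_k onto each previously built e_j.
        e = v.astype(float).copy()
        for ej in basis:
            e -= (np.dot(v, ej) / np.dot(ej, ej)) * ej
        basis.append(e)
    if normalize:
        basis = [e / np.linalg.norm(e) for e in basis]
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
e1, e2 = gram_schmidt(vs)

assert np.isclose(np.dot(e1, e2), 0.0)  # output is orthogonal
```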
Subsection 3.2.2 Orthogonal complements and projections
Let \(V\) be an inner product space and \(W\) a subspace. Define
\begin{equation*}
W^\perp = \{v\in V : \la v, W\ra = 0\}.
\end{equation*}
The set \(W^\perp\) is called the orthogonal complement of \(W\) in \(V.\) The notation \(\la v,W\ra =0\) means that \(\la v,w\ra = 0\) for all \(w\in W\text{,}\) so every vector in \(W^\perp\) is orthogonal to every vector of \(W\text{.}\)
Example 3.2.6. The orthogonal complement of a plane.
For example, if \(V=\R^3\text{,}\) and \(W\) is a line through the origin, then \(W^\perp\text{,}\) the orthogonal complement of \(W\text{,}\) is a plane through the origin for which the line defines the normal vector.
Checkpoint 3.2.7. Is the orthogonal complement a subspace?
If \(W\) is a subspace of a vector space \(V\text{,}\) is \(W^\perp\) necessarily a subspace of \(V?\)
Hint.
How do we check? Is \(0 \in W^\perp\) (and why)? If \(u_1, u_2 \in W^\perp\text{,}\) what about \(u_1+u_2\) and \(\lambda u_1\) (and why)?
It may occur to you that the task of finding a vector in \(W^\perp\) could be daunting since you have to check it is orthogonal to every vector in \(W\text{.}\) Or do you?
Checkpoint 3.2.8. How do we check if a vector is in the orthogonal complement?
Let \(S\) be a set of vectors in a vector space \(V,\) and \(W = \Span(S)\text{.}\) Show that a vector \(v \in W^\perp\) if and only if \(\la v,s\ra = 0\) for every \(s\in S.\) This means there is only a finite amount of work for any subspace with a finite basis.
Moreover, we know that \(W^\perp\) is a subspace of \(V,\) and what you have shown is that \(S^\perp = W^\perp,\) so \(S^\perp\) is a subspace as well.
Hint.
Everything in \(\Span(S)\) is a linear combination of the elements of \(S,\) and we know how to expand \(\ds\la v, \sum_{i=1}^m \lambda_i s_i \ra.\)
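Checkpoint 3.2.8 says membership in \(W^\perp\) is a finite check: test \(v\) against a spanning set only. A NumPy sketch (the spanning set, the test vectors, and the helper name are assumed examples):

```python
import numpy as np

# W = Span(S) in R^3, with S a finite spanning set; here W is the xy-plane.
S = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

def in_orthogonal_complement(v, spanning_set, tol=1e-12):
    """v is in W^perp iff <v, s> = 0 for every s in the spanning set of W."""
    return all(abs(np.dot(v, s)) < tol for s in spanning_set)

assert in_orthogonal_complement(np.array([0.0, 0.0, 7.0]), S)      # on the z-axis
assert not in_orthogonal_complement(np.array([1.0, 0.0, 1.0]), S)  # not orthogonal to W
```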
We shall see below that if \(V\) is an inner product space and \(W\) a finite-dimensional subspace, then every vector in \(V\) can be written uniquely as \(v = w + w^\perp\text{,}\) i.e., for unique \(w\in
W\) and \(w^\perp \in W^\perp.\) In different notation, that will say that \(V = W \oplus W^\perp,\) that \(V\) is the direct sum of \(W\) and \(W^\perp.\)
For now let us verify only the simple part of showing it is a direct sum: that \(W\cap W^\perp = \{0\}.\)
Proposition 3.2.9.
If \(V\) is an inner product space and \(W\) any subspace, then \(W\cap W^\perp = \{0\}.\)
Proof.
Let \(w \in W\cap W^\perp.\) If \(w\ne 0\text{,}\) then by the properties of an inner product \(\la w,w\ra\ne 0.\) But since \(w\in W^\perp,\) the vector \(w\) is orthogonal to every vector in \(W,\) in particular to \(w,\) a contradiction.
Subsection 3.2.3 What good is an orthogonal complement anyway?
Let’s say that after a great deal of work we have obtained an \(m\times n\) matrix \(A\) and column vector \(b\text{,}\) and desperately want to solve the linear system \(Ax=b.\)
We know that the system is solvable if and only if \(b\) is in \(C(A)\text{,}\) the column space of \(A\text{.}\) But what if \(b\) is not in the column space? We want to solve this problem, right? Should we just throw up our hands?
This dilemma is not dissimilar from trying to find a rational number equal to \(\sqrt 2.\) It cannot be done. But there are rational numbers arbitrarily close to \(\sqrt 2.\) Perhaps an approximation to a solution would be good enough.
So now let’s make the problem geometric. Suppose we have a plane \(P\) in \(\R^3\) and a point \(x\) not on the plane. How would we find the point on \(P\) closest to the point \(x\text{?}\) Intuitively, we might “drop a perpendicular” from the point to the plane and the point \(x_0\) where it intersects would be the desired closest point.
This is correct and gives us the intuition to develop the notion of an orthogonal projection. To apply it to our inconsistent linear system, we want to find a column vector \(\hat b\) (in the column space of \(A\)) closest to \(b\text{.}\) We then check (see Corollary 3.2.15) that the solution \(\hat
x\) to \(Ax=\hat b\) satisfies the property that
\begin{equation*}
\|
A\hat x -b\| \le \|Ax -b\| \text{ for any } x\in \R^n.
\end{equation*}
Since the original system \(Ax=b\) is not solvable, we know that \(\|Ax-b\| >0\) for every \(x\text{,}\) and that difference is an error term given by the distance between \(Ax\) and \(b.\) The value \(\hat x\) minimizes the error, and is called the least squares solution to \(Ax=b\) (since there is no exact solution). We shall explore this in more detail a bit later.
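The least squares idea sketched above is a one-liner in practice. A NumPy sketch using an assumed small inconsistent system (`np.linalg.lstsq` computes exactly the \(\hat x\) minimizing \(\|Ax-b\|\)):

```python
import numpy as np

# An inconsistent 3x2 system: b is not in the column space of A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])

# Least squares solution: x_hat minimizes ||A x - b|| over all x.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# b_hat = A @ x_hat is the vector in C(A) closest to b, and the
# residual b - b_hat is orthogonal to every column of A.
b_hat = A @ x_hat
assert np.allclose(A.T @ (b - b_hat), 0.0)

# Any other candidate x produces at least as large an error.
assert np.linalg.norm(A @ x_hat - b) <= np.linalg.norm(A @ np.array([5.0, -5.0]) - b)
```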
Subsection 3.2.4 Orthogonal Projections
Now we want to take our intuitive example of “dropping a perpendicular” and develop it into a formal tool for inner product spaces.
Let \(V\) be an inner product space and \(W\) be a finite-dimensional subspace. Since \(W\) has a basis, we can use the Gram-Schmidt process to produce an orthogonal basis \(\{w_1, \dots , w_r\}\) for \(W\text{.}\)
Theorem 3.2.10.
Let \(\{w_1, \dots, w_r\}\) be an orthogonal basis for a subspace \(W\) of an inner product space \(V.\) Each vector \(v\in V\) can be represented uniquely as \(v = w^\perp + w\) where \(w\in W,\) and \(w^\perp \in W^\perp,\) that is \(w^\perp\) is orthogonal to \(W.\) Moreover,
\begin{equation*}
w = \sum_{i=1}^r \frac{\la v,w_i\ra}{\|w_i\|^2}\, w_i.
\end{equation*}
Proof.
Certainly \(w\) as defined is an element of \(W\text{,}\) and to see that \(w^\perp = v-w\) is orthogonal to \(W\text{,}\) it is sufficient by Checkpoint 3.2.8 to verify that \(\la w^\perp, w_i\ra = 0\) for each \(i=1,
\dots, r.\)
Using the definition of \(w^\perp\) and bilinearity of the inner product we have
\begin{equation*}
\la w^\perp, w_i\ra = \la v - w, w_i\ra = \la v,w_i\ra - \frac{\la v,w_i\ra}{\|w_i\|^2}\la w_i,w_i\ra = 0.
\end{equation*}
Finally to see that \(w^\perp\) and \(w\) are uniquely determined by these conditions, suppose that as above \(v = w^\perp + w\text{,}\) and also \(v = u^\perp + u\) with \(u\in W\) and \(u^\perp \in W^\perp\) (we write \(u\) rather than \(w_1\) to avoid a clash with the basis vector \(w_1\)). Setting the two expressions equal to each other and solving gives
\begin{equation*}
w - u = u^\perp - w^\perp.
\end{equation*}
But the left-hand side is an element of \(W\) while the right-hand side is an element of \(W^\perp,\) so by Proposition 3.2.9, both expressions equal zero, which gives the uniqueness.
Corollary 3.2.11.
Let \(V\) be an inner product space and \(W\) be a finite-dimensional subspace. Then
\begin{equation*}
V = W\oplus W^\perp.
\end{equation*}
In this case the direct sum is an orthogonal sum, so the expression is often written as
\begin{equation*}
V = W \boxplus W^\perp.
\end{equation*}
Another useful property of the orthogonal complement is
Corollary 3.2.12.
Let \(V\) be an inner product space and \(W\) a finite-dimensional subspace. Then
\begin{equation*}
(W^\perp)^\perp = W.
\end{equation*}
Proof.
Every \(w \in W\) is orthogonal to all of \(W^\perp,\) so that \(W \subseteq (W^\perp)^\perp.\) The other containment takes a bit more care.
Let \(v \in (W^\perp)^\perp.\) Since \(W\) is finite-dimensional, Theorem 3.2.10 says that \(v\) can be written uniquely as
\begin{equation*}
v = w^\perp + w
\end{equation*}
where \(w\in W\) and \(w^\perp \in W^\perp.\) The goal is to show that \(w^\perp = 0.\)
Consider \(w^\perp = v-w.\) Since \(v \in
(W^\perp)^\perp\text{,}\) and \(w\in W \subseteq
(W^\perp)^\perp,\) we conclude \(w^\perp \in
(W^\perp)^\perp,\) so \(\la w^\perp , W^\perp\ra = 0.\) But \(w^\perp \in W^\perp\) by the theorem, so \(\la
w^\perp,w^\perp\ra =0\) implying that \(w^\perp =0\) by the axioms for an inner product. Thus \(v = w \in W,\) meaning \((W^\perp)^\perp \subseteq W,\) giving us the desired equality.
Definition 3.2.13.
If \(V\) is an inner product space and \(W\) a finite-dimensional subspace with orthogonal basis \(\{w_1, \dots , w_r\}\text{,}\) then the orthogonal projection of a vector \(v\) onto the subspace \(W\) is given by the expression in Theorem 3.2.10:
\begin{equation*}
\proj_W v = \sum_{i=1}^r \frac{\la v,w_i\ra}{\|w_i\|^2}\, w_i.
\end{equation*}
Corollary 3.2.14.
Let \(V\) be an inner product space and \(W\) be a finite-dimensional subspace. If \(w\in W,\) then
\begin{equation*}
\proj_W w = w.
\end{equation*}
Proof.
Combining Theorem 3.2.10 with the definition of projection, we know that \(w\) can be written uniquely as \(w=w^\perp+\proj_W w,\) where \(w^\perp \in W^\perp.\) But \(w = 0+w\text{,}\) so \(w^\perp =
0\) and \(w = \proj_W w\text{.}\)
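The projection formula of Definition 3.2.13, together with a check of Corollary 3.2.14, in a NumPy sketch (it assumes the given basis of \(W\) is already orthogonal, as the definition requires; the function name is mine):

```python
import numpy as np

def proj_onto_subspace(v, orth_basis):
    """proj_W v = sum over i of (<v, w_i> / ||w_i||^2) w_i,
    where orth_basis is an orthogonal basis {w_1, ..., w_r} of W."""
    return sum((np.dot(v, w) / np.dot(w, w)) * w for w in orth_basis)

# W = xy-plane in R^3, given by an orthogonal (not orthonormal) basis.
W_basis = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0])]

v = np.array([3.0, 4.0, 5.0])
assert np.allclose(proj_onto_subspace(v, W_basis), [3.0, 4.0, 0.0])

# Corollary 3.2.14: a vector already in W projects to itself.
w = np.array([1.0, -2.0, 0.0])
assert np.allclose(proj_onto_subspace(w, W_basis), w)
```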
To complete our formalization of the idea of dropping a perpendicular, we now show that the projection \(\proj_W v\) of a vector \(v\) is the unique vector in \(W\) closest to \(v.\)
Corollary 3.2.15.
Let \(V\) be an inner product space and \(W\) be a finite-dimensional subspace. If \(v\in V\text{,}\) then
\begin{equation*}
\|v - \proj_W v\| < \|v - w\|
\end{equation*}
for all \(w\in W\text{,}\) with \(w \ne \proj_W v.\)
Proof.
By Corollary 3.2.14, we may assume that \(v \notin W,\) so consider any \(w \in W\) with \(w \ne \proj_W v.\) We certainly know that
\begin{equation*}
v - w = v -\proj_W v + \proj_W v - w,
\end{equation*}
and we know that \(\proj_W v - w \in W\) while by Theorem 3.2.10 we know that \(v -\proj_W v \in W^\perp.\) Thus the vectors \(v-w\text{,}\) \(v -\proj_W v\text{,}\) and \(\proj_W v -w\) form a right triangle whose side lengths satisfy the Pythagorean identity:
\begin{equation*}
\|v - w\|^2 = \|v -\proj_W v \|^2 + \|\proj_W v -w\|^2.
\end{equation*}
It follows that if \(w \ne \proj_W v\text{,}\) that \(\|\proj_W v
-w\|>0\text{,}\) so that \(\|v - w\| > \|v -\proj_W v\|.\)
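The Pythagorean identity in this proof can be checked numerically. A NumPy sketch on an assumed example (an orthonormal basis of the xy-plane in \(\R^3\), a vector \(v\) off the plane, and an arbitrary competitor \(w\) in the plane):

```python
import numpy as np

# W = xy-plane in R^3 with orthonormal basis {w1, w2}; v lies off the plane.
w1, w2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
v = np.array([3.0, 4.0, 5.0])

proj_v = (v @ w1) * w1 + (v @ w2) * w2  # proj_W v (basis is orthonormal here)
w = np.array([1.0, 1.0, 0.0])           # some other point of W

# ||v - w||^2 = ||v - proj_W v||^2 + ||proj_W v - w||^2
lhs = np.linalg.norm(v - w) ** 2
rhs = np.linalg.norm(v - proj_v) ** 2 + np.linalg.norm(proj_v - w) ** 2
assert np.isclose(lhs, rhs)

# Hence proj_W v is strictly closer to v than any other point of W.
assert np.linalg.norm(v - proj_v) < np.linalg.norm(v - w)
```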
Subsection 3.2.5 A first look at the four fundamental subspaces
In the previous subsection we saw how orthogonal projections and complements are related, but there is another prominent place in which orthogonal complements arise naturally.
Let \(A \in M_{m\times n}(\C)\text{.}\) Associated to \(A\) we have a linear transformation \(L_A: \C^n \to \C^m\) given by left multiplication by \(A\text{.}\) To obviate the need to introduce \(L_A\text{,}\) we often write \(\ker A\) for \(\ker
L_A\text{,}\) and \(\range A\) for \(\range L_A\) which we know is the column space, \(C(A)\text{,}\) of \(A\text{.}\)
Additionally, we also have a linear transformation \(L_{A^*}: \C^m \to \C^n\) given by left multiplication by \(A^*\text{.}\) We have the following very useful property relating \(A\) and \(A^*\text{:}\)
Proposition 3.2.16.
Let \(A \in M_{m\times n}(\C).\) For \(x\in \C^n\) and \(y\in \C^m\text{,}\) we have
\begin{equation*}
\la Ax, y\ra_{\C^m} = \la x, A^*y\ra_{\C^n},
\end{equation*}
where we have subscripted the inner product symbols to remind the reader of the ambient inner product space, \(\C^m\) or \(\C^n.\)
Proof.
Recall the inner product \(\la v,w\ra\) in \(\C^\ell\) is \(w^* v\text{,}\) the matrix product of a \(1\times \ell\) row vector with an \(\ell\times 1\) column vector. Thus
\begin{equation*}
\la Ax, y\ra_{\C^m} = y^*(Ax) = (y^*A)x = (A^*y)^*x = \la x, A^*y\ra_{\C^n}.
\end{equation*}
Many authors, e.g., [2] and [3], define the four fundamental subspaces. For complex matrices, these are most easily described by the kernel and range of \(A\) and \(A^*.\) For real matrices, the same identities can be rewritten in terms of the row and column spaces of \(A\) and \(A^T.\) The significance of these four subspaces will be evident when we discuss the singular value decomposition of a matrix in Section 3.6, but for now we reveal their basic relations.
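Proposition 3.2.16 and the orthogonality between \(\ker A^*\) and the column space of \(A\) can be probed numerically. A NumPy sketch with a random complex matrix (recall that with the convention \(\la v,w\ra = w^*v\), NumPy's `np.vdot(a, b)` conjugates its first argument, so `ip` below swaps the arguments accordingly; the names and the seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

def ip(v, w):
    """<v, w> = w* v, the convention used in the text (conjugate-linear in w)."""
    return np.vdot(w, v)

x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=m) + 1j * rng.normal(size=m)

# Proposition 3.2.16: <Ax, y> = <x, A* y>.
assert np.isclose(ip(A @ x, y), ip(x, A.conj().T @ y))

# A vector in ker(A*) is orthogonal to everything in range(A) = C(A):
# read a null vector of A* off the SVD (A has rank 2, so ker(A*) is 1-dimensional).
U, s, Vh = np.linalg.svd(A.conj().T)
w_ker = Vh[-1].conj()
assert np.allclose(A.conj().T @ w_ker, 0.0)  # w_ker is in ker(A*)
assert np.isclose(ip(A @ x, w_ker), 0.0)     # and is orthogonal to range(A)
```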
Theorem 3.2.17.
Let \(A \in M_{m\times n}(\C).\) Then
\begin{equation*}
\range(A)^\perp = \ker(A^*) \mbox{ and } \ker(A) = \range(A^*)^\perp.
\end{equation*}
Proof.
Let \(w\in \ker A^*\text{.}\) Then \(A^*w = 0,\) hence \(\la A^*w , v\ra = 0\) for all \(v\in \C^n.\) By taking complex conjugates in Proposition 3.2.16, \(\la w, Av\ra = \la A^*w, v\ra = 0\) for all \(v\in \C^n,\) so \(w\) is orthogonal to all of \(\range(A),\) showing \(\ker(A^*) \subseteq \range(A)^\perp.\) Conversely, suppose \(w \in \range(A)^\perp.\) Then \(\la A^*w, v\ra = \la w, Av\ra = 0\) for all \(v\in \C^n.\) In particular, taking \(v=A^*w\text{,}\) we have \(\la A^* w, A^*w\ra=0\) which means that \(A^*w =0,\) showing that \(\range(A)^\perp \subseteq \ker(A^*)\text{,}\) giving us the first equality.
Since the first equality is valid for any matrix \(A,\) we replace \(A\) by \(A^*\text{,}\) and use that \(A^{**} = A\) to conclude that
\begin{equation*}
\range(A^*)^\perp = \ker(A^{**}) = \ker(A),
\end{equation*}
which is the second equality.
Corollary 3.2.18.
Let \(A \in M_{m\times n}(\R).\) Then
\begin{equation*}
C(A)^\perp = \ker(A^T)\mbox{ and }R(A)^\perp = \ker A.
\end{equation*}
Proof.
The first statement is immediate from the previous theorem since \(\range(A) = C(A).\) For the second, we had deduced above that \(\ker(A) = \range(A^*)^\perp\text{.}\) Now if \(A\) is a real matrix,