
Section 3.5 Adjoint Maps and Properties

In Proposition 3.2.16, we have seen how a complex \(m\times n\) matrix and its conjugate transpose have a natural relation with respect to inner products, and in Subsection 3.2.5 took a first look at the four fundamental subspaces. In this section we develop the corresponding notions for linear maps between inner product spaces.

Subsection 3.5.1 Basic Properties

Let \(V,W\) be inner product spaces and \(T:V\to W\) be a linear map. We can ask if there exists a linear map \(S:W\to V\) so that
\begin{equation*} \la T(v),w\ra_W = \la v, S(w)\ra_V. \end{equation*}
Let’s look at a few examples.

Example 3.5.1. \(T(x) = Ax\).

If \(A\) is an \(m\times n\) complex matrix, then \(T(x) = Ax\) defines a linear transformation \(T:\C^n \to \C^m\text{.}\) In Proposition 3.2.16, we saw that the linear map \(S:\C^m \to \C^n\) given by \(S(x) = A^* x\) satisfies the requisite property that
\begin{equation*} \la T(v),w\ra_W = \la v, S(w)\ra_V. \end{equation*}
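This relation can be checked numerically. The following pure-Python sketch (the helper names inner, mat_vec, and conj_transpose, and the particular matrix, are our choices, not part of the text) verifies \(\la T(v),w\ra = \la v, S(w)\ra\) for the standard inner product \(\la u,v\ra = \sum_i u_i \overline{v_i}\text{,}\) linear in the first argument.

```python
# Sanity check: for T(x) = Ax, the map S(y) = A*y is the adjoint,
# where A* is the conjugate transpose of A.

def inner(u, v):
    # standard inner product on C^n, linear in the first slot
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def conj_transpose(A):
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

A = [[1 + 2j, 0j, 3 - 1j],       # a 2x3 complex matrix, so T : C^3 -> C^2
     [2j, 1 + 0j, -1 + 0j]]
v = [1 + 1j, 2 + 0j, -1j]        # v in C^3
w = [3 - 2j, 1j]                 # w in C^2

lhs = inner(mat_vec(A, v), w)                    # <T(v), w>_W
rhs = inner(v, mat_vec(conj_transpose(A), w))    # <v, S(w)>_V
print(abs(lhs - rhs))  # essentially zero
```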

Example 3.5.2. Orthogonal Projections.

Let \(V\) be an inner product space and \(W\) a subspace with orthonormal basis \(\{w_1, \dots, w_r\}.\) As we have seen in Subsection 3.2.4, the orthogonal projection of \(V\) onto \(W\) is given by
\begin{equation*} \proj_W(v) = \la v,w_1\ra w_1 + \cdots + \la v,w_r\ra w_r. \end{equation*}
By Theorem 3.2.10 and Corollary 3.2.14, the projection map satisfies \(v^\perp := v - \proj_W(v) \in W^\perp\) and \(\proj_W^2 = \proj_W\text{.}\) We wish to show that the projection is self-adjoint, that is,
\begin{equation*} \la\proj_W v, u\ra = \la v, \proj_W u\ra. \end{equation*}

Proof.

To show that
\begin{equation*} \la\proj_W v, u\ra = \la v, \proj_W u\ra \end{equation*}
for all \(u,v \in V,\) we write \(v = \proj_W v + v^\perp\) and \(u = \proj_W u + u^\perp\) (with \(v^\perp, u^\perp \in W^\perp\)). Then
\begin{equation*} \la \proj_W v,u\ra = \la \proj_W v, \proj_W u\ra + \la \proj_W v, u^\perp\ra = \la \proj_W v, \proj_W u\ra, \end{equation*}
and
\begin{equation*} \la v,\proj_W u\ra = \la \proj_W v,\proj_W u\ra + \la v^\perp, \proj_W u\ra =\la \proj_W v,\proj_W u\ra \end{equation*}
which establishes the equality.
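The computation above can be illustrated numerically. In this sketch (our own helper names and our own choice of plane in \(\R^3\)), we check that the projection is self-adjoint and idempotent.

```python
# Projection onto W = span{w1, w2} with {w1, w2} orthonormal:
# proj_W(v) = <v,w1> w1 + <v,w2> w2.
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))   # real dot product

def proj_W(v, basis):
    out = [0.0] * len(v)
    for w in basis:                            # accumulate <v, w_k> w_k
        c = inner(v, w)
        out = [o + c * wi for o, wi in zip(out, w)]
    return out

B = [[1 / math.sqrt(2), 1 / math.sqrt(2), 0.0],   # orthonormal basis of W
     [0.0, 0.0, 1.0]]
v = [1.0, 2.0, 3.0]
u = [-1.0, 0.5, 2.0]

lhs = inner(proj_W(v, B), u)
rhs = inner(v, proj_W(u, B))
print(abs(lhs - rhs))   # essentially zero: proj_W is self-adjoint
p = proj_W(v, B)
print(max(abs(a - b) for a, b in zip(proj_W(p, B), p)))  # proj_W^2 = proj_W
```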

Example 3.5.3. Hyperplane Reflections.

Let \(V\) be a finite-dimensional inner product space, \(u\) a unit vector, and \(W\) the hyperplane (through the origin) normal to \(u.\) Geometrically, we want to reflect a vector \(v\) across the hyperplane \(W.\) One way to describe this is to write \(v = \proj_W v + v^\perp\) and define \(H(v) = \proj_W v - v^\perp\text{,}\) but we recognize that \(v^\perp = \la v,u\ra u,\) so we can simply write
\begin{equation*} H(v) = v- 2\la v,u\ra u. \end{equation*}
The map \(H\) is often called a Householder transformation. We show that it too is self-adjoint.

Proof.

As before, we compute both sides of the desired equality \(\la Hv,z\ra = \la v,Hz\ra\) and show they agree.
On the one hand,
\begin{equation*} \la Hv,z\ra = \la v - 2\la v,u\ra u,z \ra = \la v,z\ra - 2\la v,u\ra \la u,z\ra. \end{equation*}
On the other hand,
\begin{equation*} \la v, Hz\ra = \la v, z - 2\la z,u\ra u\ra = \la v,z\ra - 2\overline{\la z,u\ra} \la v,u\ra, \end{equation*}
and since \(\overline{\la z,u\ra} = \la u,z\ra,\) we have the desired equality.
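The identity is easy to test numerically. In the sketch below (the unit vector and helper names are our own choices), we also confirm the geometric fact that reflecting twice returns the original vector.

```python
# Householder map H(v) = v - 2<v,u>u across the hyperplane normal to u.
import math

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def H(v, u):
    c = 2 * inner(v, u)
    return [vi - c * ui for vi, ui in zip(v, u)]

u = [1 / math.sqrt(3)] * 3       # a unit normal vector in R^3
v = [1.0, -2.0, 0.5]
z = [0.0, 3.0, 1.0]

print(abs(inner(H(v, u), z) - inner(v, H(z, u))))         # self-adjoint
print(max(abs(a - b) for a, b in zip(H(H(v, u), u), v)))  # H(H(v)) = v
```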
It is straightforward to show that if an adjoint exists, it is unique:

Proof.

If for all \(v\in V, w\in W\)
\begin{equation*} \la Tv,w\ra = \la v,Sw\ra = \la v,S^\prime w\ra, \end{equation*}
then
\begin{equation*} \la v , (S-S')w\ra = 0 \end{equation*}
for all \(v,w\text{.}\) Taking \(v = (S-S')w\) shows that \((S-S')w = 0\) for every \(w\text{,}\) so \(S=S'.\)
We denote the unique adjoint of the linear map \(T\) by \(T^*\text{.}\) As a consequence of uniqueness, it is immediate to check that
\begin{equation*} (\lambda T)^* = \overline\lambda\, T^*, \quad (S+T)^* = S^* + T^*, \quad\text{and}\quad (T^*)^* = T. \end{equation*}
If \(V\) is a finite-dimensional inner product space, it is easy to show that every linear map \(T:V\to W\) has an adjoint.

Proof.

Recall that by Theorem 3.2.3, if \(\{e_1, \dots, e_n\}\) is an orthonormal basis of \(V\text{,}\) then every vector \(v\in V\) has the unique representation \(v= \sum_{k=1}^n \la v,e_k\ra e_k\text{.}\) As a consequence,
\begin{gather*} \la Tv,w\ra = \Big\la T\Big(\sum_{k=1}^n \la v,e_k\ra e_k\Big), w\Big\ra = \sum_{k=1}^n \la v,e_k\ra \la T(e_k), w\ra\\ = \sum_{k=1}^n\overline{\la w, T(e_k)\ra}\, \la v,e_k\ra = \sum_{k=1}^n \la v,\la w, T(e_k)\ra e_k\ra \\ = \Big\la v, \sum_{k=1}^n\la w, T(e_k)\ra e_k \Big\ra =\la v,T^*(w) \ra. \end{gather*}
Thus \(T^*(w) := \sum_{k=1}^n \la w, T(e_k)\ra e_k\) defines the adjoint, and it follows from this formula and the properties of the inner product that \(T^*\) is linear.
As a means of connecting this notion of adjoint with the properties of the conjugate transpose of a matrix given in Proposition 3.2.16, we have the following proposition: if \(T:V\to W\) is a linear map between finite-dimensional inner product spaces with orthonormal bases \(\cB_V\) and \(\cB_W\text{,}\) then \([T^*]^{\cB_V}_{\cB_W} = \left([T]^{\cB_W}_{\cB_V}\right)^*\text{.}\)

Proof.

Let the orthonormal bases be given by \(\cB_V = \{e_1, \dots, e_n\}\) and \(\cB_W = \{f_1, \dots, f_m\}\text{.}\) If \(A = [T]^{\cB_W}_{\cB_V}\) and \(B=[T^*]_{\cB_W}^{\cB_V}\) then by Theorem 3.2.3, \(B_{ij} = \la T^*(f_j),e_i\ra\) and
\begin{equation*} A_{ij} = \la T(e_j),f_i\ra = \la e_j, T^*(f_i)\ra = \overline{\la T^*(f_i),e_j\ra}=\overline{B_{ji}} \end{equation*}
which establishes the result.
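The entrywise formulas in this proof can be checked directly. The sketch below (the matrix \(M\) is an arbitrary choice of ours) computes \(A_{ij} = \la T(e_j),f_i\ra\) and \(B_{ij} = \la T^*(f_j),e_i\ra\) with respect to the standard orthonormal bases and verifies \(B_{ij} = \overline{A_{ji}}\text{.}\)

```python
# T(x) = Mx : C^3 -> C^2, and T*(y) = M*y with M* the conjugate transpose.

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def mat_vec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

def conj_transpose(M):
    return [[M[i][j].conjugate() for i in range(len(M))] for j in range(len(M[0]))]

M = [[1 + 1j, 2 + 0j, -1j],
     [0j, 3 - 1j, 1 + 2j]]
Mstar = conj_transpose(M)

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # standard basis of C^3
f = [[1, 0], [0, 1]]                    # standard basis of C^2

# A_ij = <T(e_j), f_i>  (2x3);  B_ij = <T*(f_j), e_i>  (3x2)
A = [[inner(mat_vec(M, e[j]), f[i]) for j in range(3)] for i in range(2)]
B = [[inner(mat_vec(Mstar, f[j]), e[i]) for j in range(2)] for i in range(3)]

ok = all(B[i][j] == A[j][i].conjugate() for i in range(3) for j in range(2))
print(ok)  # True
```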

Subsection 3.5.2 A second look at the four fundamental subspaces

In the previous section, we established the existence and uniqueness of the adjoint of a linear map defined on a finite-dimensional inner product space, along with its connection to the matrix of the linear transformation. Here we establish a few more properties, including a second look at the four fundamental subspaces, which lie at the heart of the singular value decomposition.
A linear operator \(T:V \to V\) on an inner product space is called self-adjoint or Hermitian if \(T^* = T.\) We saw that both the Householder transformation and the orthogonal projection are examples of self-adjoint operators. We next show that for linear maps \(S:U\to V\) and \(T:V\to W\text{,}\) we have \((TS)^* = S^*T^*\text{,}\) and that if \(T\) is invertible, then \((T^{-1})^* = (T^*)^{-1}\text{.}\)

Proof.

For all \(u \in U\) and \(w\in W\) we have
\begin{equation*} \la u, S^*T^*w\ra = \la Su, T^*w\ra = \la TS(u),w\ra = \la u,(TS)^*w\ra \end{equation*}
which yields \(S^*T^* = (TS)^*\text{.}\)
For the second, it is immediate that the adjoint of the identity map is again an identity map. As a consequence (and using the first part),
\begin{equation*} (T^{-1}T)^*= (Id_V)^*= Id_V = T^*(T^{-1})^*\text{,} \end{equation*}
which together with the same identity with the operators reversed gives the result. In fact, since these are operators on finite-dimensional vector spaces, either one-sided identity yields the other by rank-nullity.
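In matrix form, the composition rule says the conjugate transpose of a product reverses the factors: \((AB)^* = B^*A^*\text{.}\) A quick numeric sketch (the matrices are our own arbitrary choices):

```python
# Verify (AB)* = B* A* for a pair of small complex matrices.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def conj_transpose(A):
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

A = [[1 + 1j, 2j], [0j, 3 - 1j]]     # T : C^2 -> C^2
B = [[2 + 0j, -1j], [1 + 1j, 0j]]    # S : C^2 -> C^2

left = conj_transpose(mat_mul(A, B))                    # (TS)*
right = mat_mul(conj_transpose(B), conj_transpose(A))   # S* T*
diff = max(abs(left[i][j] - right[i][j]) for i in range(2) for j in range(2))
print(diff)  # zero up to rounding
```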
Another important class of linear map between inner product spaces is the notion of an isometry, a linear map \(T:V\to W\) which satisfies
\begin{equation*} \la Tv_1, T v_2\ra_W = \la v_1, v_2\ra_V \end{equation*}
for all \(v_1, v_2 \in V.\) We show that every isometry is injective, and that a surjective isometry satisfies \(T^* = T^{-1}\text{.}\)

Proof.

To show that \(T\) is injective, we note that the kernel is trivial: If \(T(v) = 0\text{,}\) then
\begin{equation*} \la T(v), T(v)\ra = \la v,v\ra = 0 \end{equation*}
which can happen if and only if \(v=0.\)
Now suppose that \(T\) is surjective. Let \(v\in V\) and \(w\in W\) be arbitrary. Choose \(v' \in V\) with \(T(v') = w.\) Then
\begin{equation*} \la T(v),w\ra = \la T(v), T(v')\ra = \la v,v'\ra = \la v,T^{-1}(w)\ra, \end{equation*}
so by uniqueness of the adjoint, \(T^* = T^{-1}.\)
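A familiar concrete example is a rotation of \(\R^2\text{.}\) The sketch below (angle and helpers chosen by us) checks that a rotation matrix \(Q\) preserves inner products and that its adjoint, here simply the transpose since \(Q\) is real, inverts it.

```python
# A plane rotation is a surjective isometry, so Q* = Q^{-1}, i.e. Q^T Q = I.
import math

t = 0.7   # an arbitrary rotation angle
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def mat_vec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

v1, v2 = [1.0, 2.0], [-0.5, 3.0]
# isometry: <Qv1, Qv2> = <v1, v2>
print(abs(inner(mat_vec(Q, v1), mat_vec(Q, v2)) - inner(v1, v2)))
# Q^T (Q v1) should recover v1
w = mat_vec(transpose(Q), mat_vec(Q, v1))
print(max(abs(a - b) for a, b in zip(w, v1)))
```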
The following theorem should be compared to Theorem 3.2.17 and its corollary: for a linear map \(T:V\to W\) between finite-dimensional inner product spaces, \(\ker T^* = (\Im T)^\perp\) and \((\ker T)^\perp = \Im T^*\text{.}\)

Proof.

Let \(w \in \ker(T^*)\text{.}\) Then \(T^*(w)=0\text{,}\) so \(\la v, T^*(w)\ra = 0 = \la T(v),w\ra\) for all \(v\in V\text{.}\) Thus \(w\) is orthogonal to the image of \(T\text{,}\) i.e., \(\ker(T^*) \subseteq (\Im T)^\perp.\) Conversely, if \(w\in (\Im T)^\perp\text{,}\) then for all \(v\in V\text{,}\)
\begin{equation*} 0 = \la T(v),w\ra = \la v, T^*(w)\ra. \end{equation*}
In particular, choosing \(v=T^*(w)\) shows that \(T^*(w)=0\text{,}\) hence \((\Im T)^\perp \subseteq \ker T^*,\) giving the first equality.
Since the first equality holds for any linear map between finite-dimensional inner product spaces, we may replace \(T\) by \(T^*\) and use \((T^*)^* = T\) to conclude
\begin{equation*} \ker T = (\Im T^*)^\perp\text{.} \end{equation*}
Finally, using Corollary 3.2.12 yields
\begin{equation*} (\ker T)^\perp = \Im T^*. \end{equation*}
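These relations can be seen concretely for \(T(x) = Ax\text{.}\) In the sketch below, \(A\) is a hand-picked real \(3\times 2\) matrix and \(w\) is a hand-computed solution of \(A^\mathsf{T} w = 0\text{,}\) so \(w \in \ker T^*\text{;}\) we verify that \(w\) is orthogonal to \(\Im T\text{.}\)

```python
# ker T* = (Im T)^perp for T(x) = Ax, illustrated on one example.
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
w = [1.0, 1.0, -1.0]   # solves A^T w = 0 (checked by hand)

def mat_vec(M, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in M]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

At = [[A[i][j] for i in range(3)] for j in range(2)]
print(mat_vec(At, w))   # [0.0, 0.0], so w is in ker(T*)
# w is orthogonal to every vector T(x) = Ax in the image:
for x in ([1.0, 2.0], [-3.0, 0.5], [0.0, 1.0]):
    print(inner(mat_vec(A, x), w))  # 0.0 each time
```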