

Matrix Inverses

In an earlier handout, we hinted at the possibility of solving a matrix equation $ AX=B$ by multiplying both sides of the equation by an inverse $ A^{-1}$ to the matrix $ A$. Now we are going to define the inverse matrix and see how to compute it. First, we need to define another concept:

The $ n \times n$ identity matrix $ I$ (sometimes denoted $ I_n$, if the dimension is not clear from context) is the $ n \times n$ matrix that has 1's down its main diagonal and 0's everyplace else:

$\displaystyle \left(\begin{array}{cc} 1 & 0 \\  0 & 1
\end{array}\right)
\qquad\qquad
\left(\begin{array}{ccc} 1 & 0 & 0 \\  0 & 1 & 0 \\  0 & 0 & 1
\end{array}\right)
\qquad\qquad
\left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\  0 & 1 & 0 & 0 \\  0 & 0 & 1 & 0 \\  0 & 0 &
0 & 1
\end{array}\right)
\qquad\qquad\dots
$

The identity matrices are the 1's (the ``units'' in technical mathematical language) of matrix multiplication. That is, if $ I$ is the $ n \times n$ identity matrix, and $ A$ and $ B$ are any matrices of the right dimensions so the following products are defined, then

$\displaystyle IA = A \qquad\qquad\hbox{and}\qquad\qquad BI=B.$
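
For instance, with the $ 2 \times 2$ identity and any $ 2 \times 2$ matrix,

$\displaystyle \left(\begin{array}{cc} 1 & 0 \\  0 & 1
\end{array}\right)
\left(\begin{array}{cc} a & b \\  c & d
\end{array}\right) =
\left(\begin{array}{cc} a & b \\  c & d
\end{array}\right).$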

Now an $ n \times n$ matrix $ A$ has an inverse if there is an $ n \times n$ matrix $ A^{-1}$ such that

$\displaystyle AA^{-1} = A^{-1}A =I.$

This means, in particular, that multiplying by $ A^{-1}$ is like dividing by $ A$:

$\displaystyle A^{-1}(AX) = (A^{-1}A)(X) = IX = X.$
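
In particular, once $ A^{-1}$ is known, the equation $ AX=B$ mentioned above can be solved by multiplying both sides on the left by $ A^{-1}$:

$\displaystyle X = A^{-1}(AX) = A^{-1}B.$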

Now that we have a definition of a matrix inverse, how do we find one? First, we need a small but important fact:

Fact: If $ A$ and $ B$ are $ n \times n$ matrices and $ AB= I$, then also $ BA
= I$. (Remember that in general, $ AB \neq BA$, so the truth of this fact is not obvious.)

This fact means that if we can find an $ n \times n$ matrix $ X$ with the property that $ AX = I$, then we will know that $ XA = I$ also, so $ X = A^{-1}$. And (guess what?) we know how to solve the matrix equation $ AX = I$: Write down the augmented matrix $ A\vdots I$, row-reduce it, then look at the equivalent matrix equation obtained from the new, row-reduced augmented matrix.

For example, let's try to find an inverse to the matrix

$\displaystyle A =
\left(\begin{array}{ccc} 1 & 2 & 1 \\  1 & 1 & 1 \\  2 & 1 & 1
\end{array}\right).$

We try to solve the matrix equation $ AX = I$ by writing down the augmented matrix

$\displaystyle A\vdots I =
\left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  1 & 1 & 1 & \vdots &
0 & 1 & 0 \\  2 & 1 & 1 & \vdots & 0 & 0 & 1
\end{array}\right)
$

and row-reducing it. (Some of these steps combine two operations in one. For example, the first step consists of first adding $ -1$ times row 1 to row 2 and then adding $ -2$ times row 1 to row 3. The second step is simply to multiply row 2 by $ -1$.)

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  1 & 1 & 1 & \vdots &
0 & 1 & 0 \\  2 & 1 & 1 & \vdots & 0 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  0 & -1 & 0 & \vdots &
-1 & 1 & 0 \\  0 & -3 & -1 & \vdots & -2 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & -3 & -1 & \vdots & -2 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 1 & \vdots & -1 & 2 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & 0 & -1 & \vdots & 1 & -3 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 1 & \vdots & -1 & 2 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & 0 & 1 & \vdots & -1 & 3 & -1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 0 & \vdots & 0 & -1 & 1 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & 0 & 1 & \vdots & -1 & 3 & -1
\end{array}\right).
$

This is the augmented matrix of the matrix equation

$\displaystyle \left(\begin{array}{ccc} 1 & 0 & 0 \\  0 & 1 & 0 \\  0 & 0 & 1
\end{array}\right) X =
\left(\begin{array}{ccc}
0 & -1 & 1 \\
1 & -1 & 0 \\
-1 & 3 & -1
\end{array}\right) ,$

or

$\displaystyle X =
\left(\begin{array}{ccc}
0 & -1 & 1 \\
1 & -1 & 0 \\
-1 & 3 & -1
\end{array}\right) .$

Therefore we have our solution:

$\displaystyle A^{-1} =
\left(\begin{array}{ccc}
0 & -1 & 1 \\
1 & -1 & 0 \\
-1 & 3 & -1
\end{array}\right) .$
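
It is a good idea to check such an answer by multiplying:

$\displaystyle AA^{-1} =
\left(\begin{array}{ccc} 1 & 2 & 1 \\  1 & 1 & 1 \\  2 & 1 & 1
\end{array}\right)
\left(\begin{array}{ccc} 0 & -1 & 1 \\  1 & -1 & 0 \\  -1 & 3 & -1
\end{array}\right) =
\left(\begin{array}{ccc} 1 & 0 & 0 \\  0 & 1 & 0 \\  0 & 0 & 1
\end{array}\right) = I,$

as it should be.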

This is one way things can work out when we try to find the inverse of a matrix $ A$. Here's another: let us try to find an inverse to the matrix

$\displaystyle A =
\left(\begin{array}{ccc} 1 & 2 & 1 \\  1 & 1 & 1 \\  2 & 3 & 2
\end{array}\right).$

We try to solve the matrix equation $ AX = I$ by writing down the augmented matrix

$\displaystyle A\vdots I =
\left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  1 & 1 & 1 & \vdots &
0 & 1 & 0 \\  2 & 3 & 2 & \vdots & 0 & 0 & 1
\end{array}\right)
$

and row-reducing it:

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  1 & 1 & 1 & \vdots &
0 & 1 & 0 \\  2 & 3 & 2 & \vdots & 0 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  0 & -1 & 0 & \vdots &
-1 & 1 & 0 \\  0 & -1 & 0 & \vdots & -2 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 2 & 1 & \vdots & 1 & 0 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & -1 & 0 & \vdots & -2 & 0 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 1 & \vdots & -1 & 2 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & 0 & 0 & \vdots & -1 & -1 & 1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 1 & \vdots & -1 & 2 & 0 \\  0 & 1 & 0 & \vdots &
1 & -1 & 0 \\  0 & 0 & 0 & \vdots & 1 & 1 & -1
\end{array}\right)
$

$\displaystyle \left(\begin{array}{ccccccc} 1 & 0 & 1 & \vdots & 0 & 3 & -1 \\  0 & 1 & 0 & \vdots &
0 & -2 & 1 \\  0 & 0 & 0 & \vdots & 1 & 1 & -1
\end{array}\right).
$

This is the augmented matrix of the matrix equation

$\displaystyle \left(\begin{array}{ccc} 1 & 0 & 1 \\  0 & 1 & 0 \\  0 & 0 & 0
\end{array}\right) X =
\left(\begin{array}{ccc}
0 & 3 & -1 \\
0 & -2 & 1 \\
1 & 1 & -1
\end{array}\right) .$

Now this equation has no solutions. How do we know this? The entries in the third row of the product

$\displaystyle \left(\begin{array}{ccc} 1 & 0 & 1 \\  0 & 1 & 0 \\  0 & 0 & 0
\end{array}\right) X
$

will be the products of the third row of the first factor with the columns of $ X$. Since the third row of the first factor consists entirely of zeroes, these products will all be zero as well, so whatever $ X$ is, the third row of

$\displaystyle \left(\begin{array}{ccc} 1 & 0 & 1 \\  0 & 1 & 0 \\  0 & 0 & 0
\end{array}\right) X
$

will be $ (0,0,0)$; it is impossible for it to be $ (1,1,-1)$. Therefore, the matrix $ A$ does not have an inverse.

These are the two possibilities. We can collect this information into a procedure:

To find the inverse $ A^{-1}$ of a square matrix $ A$, write down the matrix $ A\vdots I$ and then row-reduce it. Either it will row-reduce to a matrix of the form $ I \vdots B$, in which case $ B = A^{-1}$, or it will row-reduce to a matrix of the form $ C
\vdots B$ where $ C$ has a row consisting entirely of zeroes, in which case $ A$ has no inverse.
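
The same procedure is easy to carry out by computer. Below is a minimal sketch in Python (plain lists, no libraries); the function name invert, the choice of pivot, and the tolerance are my own and not part of this handout. It builds the augmented matrix $ A\vdots I$, row-reduces it, and either returns the right half as $ A^{-1}$ or reports that no inverse exists.

    def invert(A, tol=1e-12):
        """Return the inverse of the square matrix A (a list of rows), found by
        row-reducing the augmented matrix (A | I), or None if A has no inverse."""
        n = len(A)
        # Build the augmented matrix (A | I) as an n x 2n array of floats.
        M = [list(map(float, A[i])) + [1.0 if j == i else 0.0 for j in range(n)]
             for i in range(n)]

        for col in range(n):
            # Choose a pivot row at or below `col`: the one with the largest
            # entry (in absolute value) in this column, to limit rounding error.
            pivot_row = max(range(col, n), key=lambda r: abs(M[r][col]))
            if abs(M[pivot_row][col]) < tol:
                # No usable pivot: the row reduction produces a row of zeroes
                # on the left, so A has no inverse.
                return None
            M[col], M[pivot_row] = M[pivot_row], M[col]

            # Scale the pivot row so the pivot entry becomes 1.
            p = M[col][col]
            M[col] = [x / p for x in M[col]]

            # Subtract multiples of the pivot row to clear the rest of the column.
            for r in range(n):
                if r != col and M[r][col] != 0.0:
                    factor = M[r][col]
                    M[r] = [x - factor * y for x, y in zip(M[r], M[col])]

        # The left half is now I, so the right half is the inverse of A.
        return [row[n:] for row in M]

    # The two matrices worked out above:
    A = [[1, 2, 1], [1, 1, 1], [2, 1, 1]]
    print(invert(A))   # approximately [[0, -1, 1], [1, -1, 0], [-1, 3, -1]]

    B = [[1, 2, 1], [1, 1, 1], [2, 3, 2]]
    print(invert(B))   # None, since this matrix has no inverse

In practice one would usually call a library routine instead (for example, numpy.linalg.inv, which raises an error when given a matrix with no inverse), but the code above follows the same row-reduction we carried out by hand, with the extra step of picking the largest available pivot.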




Peter Kostelec
2000-05-08