Section 1.3 Solving systems of linear equations — theory
When you first encountered a system of equations of the form \(Ax=b,\) you were taught that Gaussian elimination on the augmented matrix provides a means to extract the solutions. In particular, the solutions to such a system are unaffected by elementary row operations on the augmented matrix. More precisely,
Theorem 1.3.1.
Let \(Ax=b\) represent a system of linear equations. If one uses Gaussian elimination to reduce the augmented matrix
\begin{equation*}
[A|b] \mapsto [R|b'],
\end{equation*}
then the solution sets of \(Ax=b\) and \(Rx=b'\) are identical.
While the reduced row-echelon form of an augmented matrix provides the information one needs to solve a particular system of linear equations, here we review a bit more about how to determine for which \(b\) a system \(Ax=b\) will be consistent.
Spoiler alert: It is difficult to have a meaningful discussion concerning solutions to systems of linear equations without mentioning some basic notions about vector spaces (explored more fully in the next section), and in particular the notions of linear combinations and how they are related to spans and linear independence.
Observation 1.3.2. A terribly useful observation.
If a matrix is given by \(A = [a_1 \ a_2\ \cdots\ a_n],\) where \(a_j\) is the \(j\)th column of \(A,\) and if \(x = [x_1, \dots, x_n]^T\) is a column vector, then the matrix product \(Ax\) is a linear combination of the columns of \(A.\) More precisely,
\begin{equation*}
Ax = x_1 a_1 + \cdots + x_na_n.
\end{equation*}
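This is easy to verify in Sage. Here is a quick sketch with a small, arbitrarily chosen matrix (the specific entries are hypothetical, picked only for illustration):

```sage
# Verify Observation 1.3.2 on a small example:
# A*x is the linear combination of the columns of A with weights x_1, ..., x_n.
A = matrix(QQ, [[1, 2, 0], [0, 1, 3], [4, 0, 1]])
x = vector(QQ, [2, -1, 5])
A*x == 2*A.column(0) - A.column(1) + 5*A.column(2)  # True
```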
An important and immediate consequence of the above observation is
Corollary 1.3.3.
A linear system \(Ax=b\) is solvable if and only if \(b\) is in the column space of \(A.\)
So let’s try to find the solutions to the matrix equation \(Ax=b\text{,}\) where \(A\) is a \(5\times7\) matrix of rank 5. For concreteness, let’s fix one matrix to enable a conversation; we shall consider this matrix \(A\) to be the coefficient matrix of the linear system \(Ax=b.\)
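In Sage, one such choice is the following (a hypothetical matrix; any \(5\times7\) matrix of rank 5 supports the same discussion):

```sage
# A hypothetical 5x7 matrix of rank 5.
A = matrix(QQ, [[1, 1, 0, 0, 0, 3, 4],
                [0, 1, 1, 0, 0, 0, 5],
                [0, 0, 1, 1, 0, 3, 2],
                [0, 0, 0, 1, 1, 5, 1],
                [0, 0, 0, 0, 1, 1, 1]])
print(A.rank())   # 5
A.rref()          # pivots in columns 0-4; columns 5 and 6 are free
```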
From the RREF, we see there are 5 constrained variables (pivots) and 2 free variables, so whenever \(Ax=b\) is solvable, it will have infinitely many solutions.
When one first studies linear systems, one checks whether the system is solvable by row reducing the augmented matrix and verifying that there is no pivot in the last column. Later one learns (see the spoiler alert above) that a system \(Ax=b\) is solvable if and only if \(b\) is in the column space of \(A.\)
So let’s choose \(b\) to be in the column space of \(A\text{,}\) say we choose \(b\) to be twice the first column plus 3 times the second. Note that columns (and rows) are indexed starting with 0.
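With the hypothetical matrix above, that choice of \(b\) looks like:

```sage
# b = 2*(column 0) + 3*(column 1); columns are indexed from 0 in Sage.
b = 2*A.column(0) + 3*A.column(1)
b   # (5, 3, 0, 0, 0) for the matrix above
```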
Here is how to check that \(b\) is in the column space.
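```sage
# Ask Sage whether b lies in the column space of A.
b in A.column_space()   # True, by construction
```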
As we noted, one way to solve this system is to row reduce the augmented matrix. This method yields all the solutions at once.
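A sketch of that computation for the matrix above:

```sage
# Row reduce the augmented matrix [A|b]; no pivot in the last column
# means the system is consistent.
A.augment(b, subdivide=True).rref()
```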
Another way is to use a command that finds a single solution to the system \(Ax=b.\) We then know that every solution to \(Ax=b\) is the sum of this particular solution and a solution to the homogeneous system \(Ax=0.\)
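In Sage, solve_right produces such a particular solution (continuing with the matrix above):

```sage
# Find one particular solution; Sage sets the free variables to 0.
A.solve_right(b)   # (2, 3, 0, 0, 0, 0, 0) for the matrix above
```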
In this case, this is the expected solution (recall how we constructed \(b\)), though not the only one, since there are free variables. But let’s look at something curious. Let us now choose for \(b\) the last column of \(A,\) given explicitly as a (column) vector (even though Sage displays it as a row vector).
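For the hypothetical matrix above, the last column is \((4,5,2,1,1)\):

```sage
# Take b to be the last column of A, entered explicitly.
# Sage prints vectors as rows, but solve_right treats b as a column.
b = vector(QQ, [4, 5, 2, 1, 1])
A.solve_right(b)   # (1, 3, 2, 0, 1, 0, 0), not the obvious e_6
```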
Not exactly the solution we were looking for, so maybe we should look for all solutions. For that we need to find all solutions to the homogeneous system, that is, solutions to \(Ax=0.\) Our usual method is to look at the RREF and extract a basis for the nullspace.
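Sage automates this extraction; the basis='pivot' option asks for the "special solutions" read off from the RREF, one per free variable:

```sage
# Basis for the nullspace (right kernel) of A, in 'pivot' form:
# each vector sets one free variable to 1 and the other to 0.
A.right_kernel(basis='pivot').basis()
# [(-2, -1, 1, -4, -1, 1, 0), (-1, -3, -2, 0, -1, 0, 1)] for the matrix above
```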
You should note that the vectors you get for a basis of the nullspace bear a striking resemblance to the non-pivot columns of the RREF of \(A\):
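```sage
# The entries of each kernel basis vector above the free positions are the
# negatives of the entries of a non-pivot column of the RREF.
R = A.rref()
R.column(5), R.column(6)   # compare with the kernel basis vectors above
```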
And now it is easy to check that the difference between the solution given by Sage and the expected solution (the standard basis vector with a 1 in its last coordinate) lies in the nullspace.
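```sage
# The difference between Sage's solution and the expected one, e_6,
# must be a homogeneous solution, i.e. lie in the nullspace.
e6 = vector(QQ, [0, 0, 0, 0, 0, 0, 1])
(A.solve_right(b) - e6) in A.right_kernel()   # True
```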
Playground space (Enter your own commands).