Section 1.3 Solving systems of linear equations — theory
When you first met a system of linear equations of the form $A\mathbf{x} = \mathbf{b}$, you were taught that Gaussian elimination on the augmented matrix provides a means to extract the solutions. In particular, the solutions to such a system are unaffected by elementary row operations on the augmented matrix. More precisely,
Theorem 1.3.1.
Let $A\mathbf{x} = \mathbf{b}$ represent a system of linear equations. If one uses Gaussian elimination to reduce the augmented matrix $[A \mid \mathbf{b}]$ to $[A' \mid \mathbf{b}']$, then the solution spaces to $A\mathbf{x} = \mathbf{b}$ and $A'\mathbf{x} = \mathbf{b}'$ are identical.
However, while finding the reduced row-echelon form of an augmented matrix provides the information one needs to find solutions to a particular system of linear equations, here we review a bit more about how we know for which vectors $\mathbf{b}$ a system $A\mathbf{x} = \mathbf{b}$ will be consistent.
Spoiler alert: It is difficult to have a meaningful discussion concerning solutions to systems of linear equations without mentioning some basic notions about vector spaces (explored more fully in the next section), and in particular the notions of linear combinations and how they are related to spans and the notion of linear independence.
Observation 1.3.2. A terribly useful observation.
If a matrix is given by $A = [\,\mathbf{a}_0 \mid \mathbf{a}_1 \mid \cdots \mid \mathbf{a}_{n-1}\,]$, where $\mathbf{a}_j$ is the $j$th column of $A$, and if $\mathbf{x}$ is the column vector $\mathbf{x} = (x_0, x_1, \dots, x_{n-1})^T$, then the matrix product $A\mathbf{x}$ is a linear combination of the columns of $A$. More precisely,
$A\mathbf{x} = x_0\mathbf{a}_0 + x_1\mathbf{a}_1 + \cdots + x_{n-1}\mathbf{a}_{n-1}.$
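The original's Sage cells are not reproduced here, so the following minimal NumPy sketch (with a made-up $3 \times 3$ matrix) checks the observation numerically; columns are indexed from 0, matching the text's convention.

```python
import numpy as np

# A small hypothetical matrix and vector, just to illustrate the observation.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
x = np.array([2.0, 3.0, -1.0])

# The matrix product A @ x ...
product = A @ x
# ... equals the linear combination x_0*a_0 + x_1*a_1 + x_2*a_2
# of the columns of A.
combination = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(np.allclose(product, combination))  # True
```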
An important and immediate consequence of the above observation is
Corollary 1.3.3.
A linear system $A\mathbf{x} = \mathbf{b}$ is solvable if and only if $\mathbf{b}$ is in the column space of $A$.
So let’s try to find the solutions to the matrix equation $A\mathbf{x} = \mathbf{b}$, where $A$ is a matrix of rank 5. For concreteness, let’s fix one matrix to enable a conversation. We shall consider the matrix $A$ to be the coefficient matrix of a linear system $A\mathbf{x} = \mathbf{b}$. From the RREF, we see there are 5 constrained variables (pivots) and 2 free variables, so whenever $A\mathbf{x} = \mathbf{b}$ is solvable, it will have infinitely many solutions.
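The book's specific matrix is not reproduced here, so the sketches below use a hypothetical stand-in: a matrix with 7 columns and rank 5, so that its RREF has 5 pivots and 2 free columns. One can confirm the rank numerically (with NumPy standing in for Sage):

```python
import numpy as np

# A hypothetical 6x7 coefficient matrix (NOT the book's matrix): the first
# five columns form an upper-triangular block with nonzero diagonal, and the
# last row is the sum of the first two rows, so the rank is exactly 5.
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)

print(np.linalg.matrix_rank(A))  # 5: five pivots, hence 7 - 5 = 2 free variables
```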
When one first studies linear systems, one checks whether the system is solvable by row reducing the augmented matrix and checking that there is no pivot in the last column. Later you learn (see the spoiler alert above) that a system $A\mathbf{x} = \mathbf{b}$ is solvable if and only if $\mathbf{b}$ is in the column space of $A$.
So let’s choose
to be in the column space of
say we choose
to be twice the first column plus 3 times the second.
Note that columns (and rows) are indexed starting with 0.
Here is how to check that $\mathbf{b}$ is in the column space.
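The original's Sage check is not shown; a NumPy sketch (with the hypothetical rank-5 matrix used throughout these sketches) is below. The key fact: $\mathbf{b}$ lies in the column space exactly when adjoining it to $A$ does not increase the rank.

```python
import numpy as np

# Hypothetical 6x7 matrix of rank 5 (a stand-in for the book's matrix).
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)

# b = 2*(column 0) + 3*(column 1), so b is in the column space by construction.
b = 2 * A[:, 0] + 3 * A[:, 1]

# b is in the column space iff adjoining it leaves the rank unchanged.
in_col_space = (np.linalg.matrix_rank(np.column_stack([A, b]))
                == np.linalg.matrix_rank(A))
print(in_col_space)  # True
```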
As we noted, one way to solve this system is to row reduce the augmented matrix. This method leads to finding all the solutions.
Another way is to use a command to find a single solution to the system $A\mathbf{x} = \mathbf{b}$. We then know that every solution to $A\mathbf{x} = \mathbf{b}$ is the sum of this particular solution and a solution to the homogeneous system $A\mathbf{x} = \mathbf{0}$.
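The Sage command in question is not shown in this copy. As an illustrative NumPy substitute, `np.linalg.lstsq` returns one particular solution when the system is consistent, and adding any nullspace vector produces another solution — a sketch, again with the hypothetical matrix:

```python
import numpy as np

# Hypothetical 6x7 matrix of rank 5 (a stand-in for the book's matrix).
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)
b = 2 * A[:, 0] + 3 * A[:, 1]          # consistent by construction

# One particular solution (lstsq returns the minimum-norm solution here).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x0, b)

# A nullspace vector from the SVD: rows of Vt beyond the rank span null(A).
_, _, Vt = np.linalg.svd(A)
n = Vt[5]                               # rank is 5, so rows 5 and 6 span the nullspace
assert np.allclose(A @ n, 0)

# Particular solution + homogeneous solution is again a solution.
print(np.allclose(A @ (x0 + n), b))  # True
```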
In this case, this is an expected solution (recall how we constructed $\mathbf{b}$), though not the only one since there are free variables. But let’s look at something curious: let us now choose for $\mathbf{b}$ the last column of $A$, given explicitly as a (column) vector (even though it is written as a row vector).
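When $\mathbf{b}$ is the last column of $A$, one "expected" solution is $\mathbf{e}_6 = (0, \dots, 0, 1)$, but a solver is free to return a different one. A NumPy sketch with the hypothetical matrix (here `lstsq` returns the minimum-norm solution, which is not $\mathbf{e}_6$):

```python
import numpy as np

# Hypothetical 6x7 matrix of rank 5 (a stand-in for the book's matrix).
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)
b = A[:, -1]                            # the last column of A (index 6)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x, b)            # it really does solve the system

e6 = np.zeros(7)
e6[6] = 1.0                             # the "obvious" solution
print(np.allclose(x, e6))  # False: the solver returned a different solution
```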
Not exactly the solution we were looking for, so maybe we should look for all solutions. For that we need to find all solutions to the homogeneous system, that is, solutions to $A\mathbf{x} = \mathbf{0}$.
Our usual method is to look at the RREF and extract a basis for the nullspace.
You should note that the vectors you get for a basis of the nullspace bear a striking resemblance to:
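The RREF-based extraction — set one free variable to 1, the others to 0, and solve for the pivot variables — can be sketched in NumPy. For the hypothetical matrix used in these sketches, the pivots sit in columns 0–4 and the free columns are 5 and 6; each basis vector carries a 1 in its own free slot and a 0 in the other, the telltale pattern of nullspace bases read off from an RREF.

```python
import numpy as np

# Hypothetical 6x7 matrix of rank 5 (a stand-in for the book's matrix).
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)
pivot_cols, free_cols = [0, 1, 2, 3, 4], [5, 6]

basis = []
for j in free_cols:
    x = np.zeros(7)
    x[j] = 1.0                          # this free variable is 1, the other is 0
    # Solve for the pivot variables; the first five rows of A are
    # independent, so they determine the pivot entries.
    x[pivot_cols] = np.linalg.solve(A[:5, pivot_cols], -A[:5, j])
    basis.append(x)

N = np.array(basis)
print(np.allclose(A @ N.T, 0))  # True: both basis vectors lie in the nullspace
```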
And now it is easy to check that the difference of the solution given by Sage and the last column of $A$ lies in the nullspace.
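The membership check itself is simple: a vector lies in the nullspace exactly when $A$ sends it to zero. With the same hypothetical matrix and the `lstsq` stand-in for Sage's solver:

```python
import numpy as np

# Hypothetical 6x7 matrix of rank 5 (a stand-in for the book's matrix).
A = np.array([
    [1, 2, 0, 3, 0, 1, 2],
    [0, 1, 1, 0, 2, 0, 1],
    [0, 0, 1, 4, 1, 2, 0],
    [0, 0, 0, 1, 1, 0, 3],
    [0, 0, 0, 0, 1, 1, 1],
    [1, 3, 1, 3, 2, 1, 3],
], dtype=float)
b = A[:, -1]

x, *_ = np.linalg.lstsq(A, b, rcond=None)   # the solver's solution
e6 = np.zeros(7)
e6[6] = 1.0                                 # the "expected" last-column solution

diff = x - e6
print(np.allclose(A @ diff, 0))  # True: the difference is in the nullspace
```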