import numpy as np
import matplotlib.pyplot as plt
from lab2_helper_funcs import *
# Computational Lab #2: Gaussian elimination and Cholesky factorization

Math 56, Winter 2025

See the course GitHub page to download the notebook. Make sure to respond to all questions!
## Problem 1
Write Python functions `fsub`/`bsub` that implement forward/backward substitution for lower/upper triangular systems. Both functions should accept a lower/upper triangular matrix as well as a RHS vector, and return the solution obtained by forward/backward substitution.
Write tests (perhaps using random matrices) to check that each of your functions is correct.
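One possible shape for these routines is sketched below (a minimal, unoptimized version with no error checking; the randomized comparison against `np.linalg.solve` is one way to test them):

```python
import numpy as np

def fsub(L, b):
    """Forward substitution for a lower triangular system L x = b (sketch)."""
    n = L.shape[0]
    x = np.zeros(n)
    for i in range(n):
        # Subtract contributions of the already-computed entries, then divide by the pivot.
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def bsub(U, b):
    """Backward substitution for an upper triangular system U x = b (sketch)."""
    n = U.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Quick randomized test against NumPy's general-purpose solver.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)  # shift the diagonal to avoid tiny pivots
L_tri, b = np.tril(A), rng.standard_normal(5)
assert np.allclose(fsub(L_tri, b), np.linalg.solve(L_tri, b))
```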
### Your code here
## Problem 2
Write Python functions `lu_gewp`, `lu_gepp`, and `lu_gecp` implementing Gaussian elimination without pivoting, with partial pivoting, and with complete pivoting, respectively. Each function should accept an invertible matrix `A` as an input, and the output for each function should be:

- `lu_gewp`: `L`, `U`, `growth_factor`
- `lu_gepp`: `P`, `L`, `U`, `growth_factor`
- `lu_gecp`: `P`, `Q`, `L`, `U`, `growth_factor`
Write tests (perhaps using random matrices) to check that each of your functions is correct.
Bonus: Also write a Python function `lu_gerp` implementing Gaussian elimination with rook pivoting.
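As one reference point, the partial-pivoting variant might be structured like the minimal sketch below (the growth factor is tracked here as one standard definition, \(\max_{i,j,k} |a_{ij}^{(k)}| / \max_{i,j} |a_{ij}|\); no safeguards against singular input):

```python
import numpy as np

def lu_gepp(A):
    """Gaussian elimination with partial pivoting (sketch).

    Returns P, L, U with P @ A = L @ U, plus the growth factor
    tracked over all intermediate matrices.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U, L, P = A.copy(), np.eye(n), np.eye(n)
    max_orig = np.abs(A).max()
    max_seen = max_orig
    for k in range(n - 1):
        # Choose the largest-magnitude entry on or below the diagonal in column k.
        p = k + np.argmax(np.abs(U[k:, k]))
        if p != k:
            U[[k, p], :] = U[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]  # swap the already-stored multipliers too
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
        max_seen = max(max_seen, np.abs(U).max())
    return P, L, U, max_seen / max_orig

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
P, L, U, g = lu_gepp(A)
assert np.allclose(P @ A, L @ U)
```

The `lu_gewp` and `lu_gecp` variants follow the same loop with the pivot search removed or extended to the full trailing submatrix (with a column permutation `Q` recorded as well).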
### Your code here
## Problem 3
Make a scatter plot as follows. For a large ensemble of random matrices generated by `A = np.random.randn(n,n)` (start with \(n = 10\)), apply each of GEWP, GEPP, and GECP, keeping track of the growth factor `gfac` for each method as well as the Frobenius norm errors for the computed factors. Here, we take the Frobenius errors to be:

- \(\| \mathbf{A} - \tilde{\mathbf{L}} \tilde{\mathbf{U}} \|_F\) for GEWP,
- \(\| \mathbf{P} \mathbf{A} - \tilde{\mathbf{L}} \tilde{\mathbf{U}} \|_F\) for GEPP,
- \(\| \mathbf{P} \mathbf{A} \mathbf{Q} - \tilde{\mathbf{L}} \tilde{\mathbf{U}} \|_F\) for GECP.
Use `plt.scatter()` to plot the growth factors on the x-axis and the Frobenius norm errors on the y-axis; make sure to label each method as GEWP/GEPP/GECP and call `plt.legend()` to add a legend. Add labels for both axes as well as a figure title. Finally, call `plt.xscale("log")` and `plt.yscale("log")` to place the plot on a log-log scale so that the trend can be seen clearly.
Part (a): Using your scatter plot, what observations can you make about the relationship between the growth factor and the accuracy of the computed factorizations? Try to connect what you see to the concepts we discussed in class. Discuss why you may prefer to use one method over another.
Response:
### Your code here
Part (b): Repeat this experiment for several increasing values of \(n\). What do you observe as \(n\) is increased?
Response:
### Your code here
Part (c): Repeat this experiment again with \(n = 10\), but with the random matrices generated as `A = np.random.randn(n,n); A = A.T @ A + 1e-10*np.eye(n)` (this generates a random SPD matrix). Do you notice any differences from Part (a)? If so, can you explain this behavior based on our discussions in class?
Response:
### Your code here
Bonus: include results for rook pivoting throughout this problem.
## Problem 4
The function `large_gfac_matrix(n)` is provided, which returns the \(n \times n\) matrix (shown for \(n = 4\))
\[
\begin{bmatrix} 1 & 0 & 0 & 1 \\ -1 & 1 & 0 & 1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & -1 & 1 \end{bmatrix}.
\]
This is an example of a matrix that attains the upper bound \(\rho(n) \leq 2^{n-1}\) for the growth factor when partial pivoting is used. Make a plot of the relationship between the growth factor and the Frobenius norm error (similar to Problem 3) for several values of \(n\) between \(100\) and \(500\). Show the results for both GEPP and GECP. What do you observe? What might we expect to see if we tried solving linear systems with this matrix using GEPP as opposed to GECP?
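The helper is provided in `lab2_helper_funcs`, but for reference the matrix is easy to build directly; the construction below is an assumption about the helper's convention, matched against the \(n = 4\) example above:

```python
import numpy as np

def large_gfac_matrix(n):
    """Stand-in for the provided helper: -1 strictly below the diagonal,
    1 on the diagonal, and 1 down the last column."""
    A = np.tril(-np.ones((n, n)), -1) + np.eye(n)
    A[:, -1] = 1.0
    return A

A4 = large_gfac_matrix(4)
# With partial pivoting no row swaps occur (every candidate pivot has
# magnitude 1), and each elimination step doubles the entries remaining
# in the last column, so the final U ends up with U[n-1, n-1] = 2^(n-1).
```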
Response:
### Your code here
## Problem 5
The function `hilbert_matrix(n)` is provided, which produces the Hilbert matrix of order \(n\). Additionally, the function `vandermonde_matrix` is provided, which produces the \(n \times n\) Vandermonde matrix corresponding to \(n\) equispaced grid points in the interval \([0,1]\).

For each value of `n` between \(2\) and \(20\), let `A` be either the Hilbert matrix or the Vandermonde matrix, generate a RHS vector `b` using `b = A @ np.ones(n)`, and solve the system using GEPP to obtain a computed solution \(\tilde{\mathbf{x}}\). Plot the relative error \(\| \tilde{\mathbf{x}} - \mathbf{x}_{\text{true}}\|_2/\| \mathbf{x}_{\text{true}} \|_2\) as a function of \(n\).
Why are the solutions for the Vandermonde systems more accurate than those for the Hilbert systems? You might make another plot to support your answer.
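The experiment for the Hilbert case might look like the sketch below. The `hilbert` stand-in reconstructs the matrix from its definition \(H_{ij} = 1/(i + j + 1)\) (zero-based; an assumption about the provided helper's convention), and `np.linalg.solve`, which performs GEPP internally via LAPACK, stands in for your own `lu_gepp` plus `fsub`/`bsub`:

```python
import numpy as np

def hilbert(n):
    """Stand-in for the provided hilbert_matrix helper: H[i, j] = 1/(i + j + 1)."""
    idx = np.arange(n)
    return 1.0 / (idx[:, None] + idx[None, :] + 1)

ns = range(2, 21)
rel_errs = []
for n in ns:
    A = hilbert(n)             # or the Vandermonde matrix for the second experiment
    x_true = np.ones(n)
    b = A @ x_true
    x = np.linalg.solve(A, b)  # GEPP under the hood; swap in lu_gepp + fsub/bsub
    rel_errs.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
# rel_errs can then be plotted against ns, e.g. with plt.semilogy(ns, rel_errs).
```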
Response:
### Your code here
## Problem 6
Write a Python function `cholfac` that accepts an SPD matrix \(\mathbf{A}\) and returns the lower triangular factor \(\mathbf{L}\) in \(\mathbf{A} = \mathbf{L} \mathbf{L}^T\). Write tests (perhaps using random matrices) to check that your `cholfac` function is correct. Then, plot the matrices \(\mathbf{A}\) and \(\mathbf{L}\) using `plt.imshow()` for a \(25 \times 25\) pentadiagonal matrix \(\mathbf{A}\). What do you notice about \(\mathbf{L}\)?
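A column-by-column version of the factorization can be sketched as follows (a minimal sketch with no check that the input is actually SPD; the comparison against `np.linalg.cholesky` is one possible test):

```python
import numpy as np

def cholfac(A):
    """Cholesky factorization sketch: returns lower triangular L with A = L @ L.T."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for j in range(n):
        # Diagonal entry: subtract contributions of earlier columns, then take the root.
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # Entries below the diagonal in column j.
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

# Randomized test on an SPD matrix built as M M^T plus a diagonal shift.
rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
L = cholfac(A)
assert np.allclose(L @ L.T, A)
```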
Response:
### Your code here