Math 116 Spring 2014
Syllabus in PDF.
We meet in Kemeny 006 in the 2A slot (2-3:50pm Tu, Th, with intermission).
You may and should
bring a laptop, either to type notes into or to do occasional numerical experiments;
if you don't have one, we can share.
Office hours are 2-3pm Wednesday, and 5-6pm Thursday; you can come to
Math 56
office hours if no one else is there, or by arrangement.
Overview
How are we able to compute the massive things that we can? How are we
able to find eigenvalues of large matrices, solve PDEs with error
0.0000000001, optimize a function of thousands of variables, or solve
linear systems with a million unknowns? Although faster hardware is
part of the story, much of the credit goes to the invention
(discovery?) of vastly more efficient numerical algorithms, say O(N log
N) instead of O(N^2).
This spring I will run Math 116 as a research-paper-based ``journal
club'' where students will present some of the great numerical analysis
and computational algorithms (either classic or recent).
I first
plan to lecture for two weeks, rapidly teaching some key concepts in
numerical analysis and giving us a common language.
Then we shift to student presentations of papers from a list (see below) of
classic and modern papers,
including some of the ``Top Ten Algorithms of the 20th Century''
(according to SIAM).
Many of these algorithms are run millions of times per day
worldwide, and have had a huge impact on science.
We keep the themes accessible:
the focus is on numerical linear algebra
(since this hasn't been covered much in this department, yet is so important),
plus papers on Fourier methods, finite differencing, and randomized algorithms.
None of the papers are too technical: they require mostly linear algebra
(rather than, say, functional analysis),
and an enthusiasm to dig into algorithms even when the subscripts get messy!
Your main workload will be presenting an appropriate chunk of your
chosen paper during two 90-minute lectures (i.e. one week of our 2A slot,
with intermissions, and 10 minutes at the end for discussion and filling
in feedback forms). You can use chalk, computer slides, handouts, or
activities for the audience which teach them ideas, or a mixture (I
encourage this). A large part of your presentation should be the main
mathematical ideas/theory/algorithm, defining new objects clearly.
Then I suggest you do a computer implementation and experiments, either
on a piece of the algorithm or on the whole thing (ask me what's reasonable).
You should also try to answer:
Why is this algorithm better than other ways of solving the problem? What research areas in science use the algorithm? What impact has it had?
Each week one of the other students will be ``scribe'' and
take as good notes as they can, and type them up in a few pages of LaTeX;
the lecturer will check over them for corrections.
Your work (and how I assess you) will then be:
- Reading, understanding enough of, and presenting your paper
and numerical experiments, and checking your scribe's write-up: 75%
- Being scribe for one of the other presentations: 15%
- Overall class participation, giving feedback, asking questions: 10%
Your workload will therefore be concentrated into the week before, and the
week of, your presentation; most of the rest of the quarter you need only be
an engaged class participant.
You do not have to implement the algorithms you present, but I
suggest you try; none of them are too complicated to code.
Why? It's fun, Matlab makes it pretty
quick to do, and I and your peers are here to help. Presenting your
code and its results could form a chunk of your second (i.e. Thursday)
lecture. I will consider it all part of your effort.
Choose your paper by Tuesday April 1st.
If you don't get your first choice, rest assured they are all beautiful and interesting. I would like the papers to
be presented roughly in historical order.
There is no final exam, although I may give summary lectures at the end.
Goals

Appreciating some of the highest-impact
mathematical algorithms for numerical computation.

Understanding core ideas from many of these algorithms; breadth rather than depth.

Learning at least one set of ideas/algorithm
deeply by having to explain them to others.

Improving communication skills:
practicing extracting the core ideas, presenting them as clearly as possible,
and learning from audience feedback.

Learning to read research literature: knowing when to keep pushing,
when to skim, when to seek help.

Coding up some algorithms to see how they perform; numerical experimentation.

Appreciating some current research directions in numerical mathematics.
Hints
Shirley Zhao is running upcoming LaTeX crash courses.

This course is about skills any successful researcher needs: reading complicated stuff, extracting what you need, explaining to others, and implementing/testing it.

Start reading 2 weeks before, and look/ask around for textbooks, lecture
notes, and reviews to help you understand the paper and its context.
Start writing your two lectures 1 week before.

Do a little practice writing clearly with chalk if you haven't done this
much: press hard, write big.

Explain things to each other; explain them to me in office hours.
Keep a list of what you're
stuck on. Don't worry if there are some bits of the paper you never quite get;
focus on what matters. Narrative flow is important.

Don't get carried away making a perfect computer presentation, although
beamer
is useful.

Start messing around with Matlab to get into the swing of things. Is
there a command that already does your algorithm? (If so, you should
still implement it yourself; if not, why not?)

Think about whether there's a mathematical step or proof you can have us, the audience, do: give us the pieces and let us try it. It's more fun than watching you do it at the board.
Lectures
 3/25/14:
Lecture 1.
Direct vs iterative, complexity, algebraic vs exponential
convergence.
Companion matrix worksheet.
Types of convergence
 3/27/14:
Lecture 2.
Iterations for eigenpairs: power, inverse, Rayleigh quotient iteration;
cubic convergence.
Condition number and the matrix 2-norm.
Code:
poweriter.m
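The power iteration from Lecture 2 fits in a few lines; this Python version is my sketch of the idea (the test matrix and iteration count are illustrative, not taken from poweriter.m):

```python
import numpy as np

# Power iteration for the dominant eigenpair: repeatedly apply A and
# renormalize, then read off a Rayleigh quotient estimate.
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric test matrix
v = rng.standard_normal(2)               # random starting vector
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)               # renormalize each step
lam = v @ A @ v                          # Rayleigh quotient (v is unit)
# lam converges to the dominant eigenvalue (5 + sqrt(5))/2 here.
```

Convergence is geometric with ratio |lambda_2/lambda_1| per step, which is why the inverse and Rayleigh-quotient variants in the lecture, with their shifts, are so much faster near a target eigenvalue.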
 4/1/14:
Lecture 3.
Relative condition number, conditioning of linear systems.
Singular Value Decomposition.
Shirley Zhao on library tools, citation searches.
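The link between conditioning and the SVD in Lecture 3 is direct: the 2-norm condition number of a matrix is the ratio of its extreme singular values. A small Python sketch (the nearly singular test matrix is just an illustration):

```python
import numpy as np

# kappa_2(A) = sigma_max / sigma_min, read straight off the SVD.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])            # nearly singular, so ill-conditioned
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]
# numpy's built-in 2-norm condition number agrees:
kappa2 = np.linalg.cond(A, 2)
```

Here kappa is about 4e4, so solving a linear system with this A can lose roughly four to five digits of accuracy relative to the data.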
 4/3/14:
Lecture 4.
Floating point representation and arithmetic. Catastrophic cancellation.
Stability and backwards stability.
Least-squares solution via SVD.
Stability of Gaussian elimination.
(scribbles).
Code:
lustab.m
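Catastrophic cancellation from Lecture 4 is easy to demonstrate; a small Python example (the particular function is my illustration, not taken from the lecture):

```python
import math

# Computing sqrt(x^2 + 1) - 1 for tiny x subtracts two nearly equal
# numbers and destroys all relative accuracy: in double precision
# x^2 = 1e-16 is below machine epsilon, so the naive form returns 0.
x = 1e-8
naive = math.sqrt(x*x + 1.0) - 1.0
# Rewriting to avoid the subtraction recovers full accuracy:
# sqrt(x^2+1) - 1 = x^2 / (sqrt(x^2+1) + 1) ~ x^2/2 = 5e-17.
stable = x*x / (math.sqrt(x*x + 1.0) + 1.0)
```

Both formulas are mathematically identical; only the second is numerically stable, which is exactly the naive-vs-stable distinction the lecture draws.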
 4/8/14:
Lecture 5.
QR factorization: classical and modified Gram-Schmidt, Householder
reflectors.
Householder reflectors worksheet.
Code:
qrstab.m (needs clgs.m and mgs.m).
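The modified Gram-Schmidt idea from Lecture 5 can be sketched as follows; this Python version is my illustration of the algorithm, not a transcription of mgs.m:

```python
import numpy as np

def mgs(A):
    """Modified Gram-Schmidt QR: A (m x n, full rank) -> Q, R with A = QR.
    Unlike the classical version, each new q_j is subtracted from ALL
    remaining columns immediately, which is numerically more stable."""
    m, n = A.shape
    V = A.astype(float)                    # working copy of the columns
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(V[:, j])
        Q[:, j] = V[:, j] / R[j, j]
        for k in range(j + 1, n):          # orthogonalize the rest now
            R[j, k] = Q[:, j] @ V[:, k]
            V[:, k] -= R[j, k] * Q[:, j]
    return Q, R
```

For a random tall matrix, Q comes out with orthonormal columns and QR reproduces A to machine precision; the stability gap between this and the classical variant is what qrstab.m presumably explores.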
 4/10/14:
Lecture 6.
Finish QR.
Fast Fourier transform (FFT), for array lengths that are powers of two.
Code:
cooleytukey.m and test_fft.m
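The radix-2 Cooley-Tukey recursion from Lecture 6 fits in a dozen lines; a Python sketch (cooleytukey.m presumably does the equivalent in Matlab):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splits into even/odd-indexed halves, recurses, then combines
    with butterflies -- O(N log N) instead of O(N^2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                    # DFT of even-indexed terms
    odd = fft(x[1::2])                     # DFT of odd-indexed terms
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + t               # butterfly combine
        out[k + n // 2] = even[k] - t
    return out
```

Comparing against the O(N^2) direct DFT sum on a short input is a natural correctness check, which is presumably what test_fft.m does.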
 4/15/14: Week 4: Damian on CFL. Courant-Friedrichs-Lewy (1928).
Tommy's lecture notes.
 4/22/14: Week 5: Seth on QR iteration. Francis (1961).
scribe: Larry
 4/29/14: Week 6: Tommy on Metropolis/MCMC. Hastings (1970).
scribe: Damian
 5/6/14: Week 7: Larry on GMRES. Saad-Schultz (1986).
scribe: Lin
 5/13/14: Week 8: Zach on FMM. Greengard-Rokhlin (1987).
scribe: Seth
 5/20/14: Week 9: Lin on randomized SVD. Halko-Martinsson-Tropp (2011).
scribe: Zach
 5/27/14: No lecture (undergraduate senior thesis presentations)
The papers
See discussion in syllabus. PDFs only available from the dartmouth.edu
domain.

On the Partial Difference Equations of Mathematical Physics, by R. Courant, K. Friedrichs, and H. Lewy, Mathematische Annalen 100, 32-74 (1928).
English translation.
PDF

A rapidly convergent descent method for minimization,
by R. Fletcher and M. J. D. Powell,
The Computer Journal (1970) 13 (4): 413-417.
PDF
Also see:
W. C. Davidon, AEC Report, Variable metric method for minimization (1959)
PDF,
and NocedalWright book below.

An Algorithm for the Machine Calculation of Complex Fourier Series, by J.W. Cooley and J.W. Tukey, Math. Comput. 19, 297-301 (1965).
PDF

The QR Transformation: A Unitary Analogue to the LR Transformation - Part 1, by J.G.F. Francis, Comput. J. 4, 265-271 (1961).
PDF.
The QR Transformation - Part 2, by J.G.F. Francis, Comput. J. 4, 332-345 (1962). PDF.

Monte Carlo sampling methods using Markov
chains and their applications, by W. K. Hastings, Biometrika 57 (1)
97-109 (1970).
PDF
Also see:
Metropolis original paper
(although really Ulam and von Neumann invented it);
D. J. MacKay review of MCMC (1996)

Y. Saad and M. H. Schultz, "GMRES: a generalized
minimal residual algorithm for solving nonsymmetric linear systems,"
SIAM J. Sci. Stat. Comput. 7 (3), 856-869 (1986).
PDF
Also see:
T. A. Driscoll, K.C. Toh, and L. N. Trefethen,
"From potential theory for matrix iterations in six steps,"
SIAM Review, 40 (3), 547578 (1998). PDF

A Fast Algorithm for Particle Simulations, L. Greengard and V. Rokhlin, J. Comput. Phys. 135, 280-292 (1987). PDF
Also see:
Lec 15-16 here (monopole convention is opposite),
also BeatsonGreengard review.
L. Ying et al., J. Comput. Phys. 196, 591-626 (2004) has a nice review of the FMM, including in App. B the translation operators for the 2D Laplace kernel.

Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions,
by N. Halko, P. G. Martinsson, and J. A. Tropp.
SIAM Rev., Survey and Review section, Vol. 53, num. 2, pp. 217-288, June 2011.
PDF
References
 L. N. Trefethen and D. Bau, Numerical Linear Algebra (SIAM, 1997)
$57.50 [NLA].
Beautiful accessible introduction.
 G.H. Golub and C.F. Van Loan, Matrix Computations, 4th Edition
(Johns Hopkins, 2012). $56. [MC].
Pretty much the bible for numerical
linear algebra.
 R. Kress, Numerical Analysis, Graduate Texts in Mathematics 181 (Springer, 1998) $75. [NA]
Concise theoretical introduction for most of numerical analysis,
especially quadrature and integral equations. Poor on finite differencing.

R. J. LeVeque,
Finite Difference Methods for Ordinary and Partial Differential Equations
(SIAM, 2007) [FD]
 J. Nocedal and S. J. Wright,
Numerical Optimization, (Springer Verlag, 1999). [NW]
Resources
Numerical Analysis
 Intro to the floating-point
number system by Cleve Moler (founder of MathWorks),
also here.
 Numerical Analysis
Digest email discussion group archive.

W. H. Press, S. A. Teukolsky, W. H. Vetterling, and B. P. Flannery,
Numerical
Recipes in C, 2nd Edition,
available here as PDF files online, also worth buying for your shelf.
Good
allround general overview of numerical methods, with lots of practical
tips,
good intuitive explanations, aimed at users. Not very rigorous,
up-to-date, or complete on PDEs (sometimes this book annoys
numerical analysts, but physicists like it).
LaTeX
This is the typesetting package essentially all mathematicians use.
You will use it for projects and, I hope, homework.
 Shirley Zhao
has a crash course on LaTeX.
 Here is our department's LaTeX resources.
 If you have a UNIX account (you probably do as a grad student; ask
your sysadmin) or linux OS
then you can use a standard text editor and the latex command.
If you have Mac OSX or Windows you need to install a LaTeX
distribution, as described here.
 Please let me know if you are stuck with installing LaTeX.
Sarunas Burdulis (our sysadmin, behind the math office) may also be able to help, perhaps better than I can.
 Once you have it installed,
here are some
simple sample files that you can edit for your homework.
They produce PS which you can convert to a PDF file.
 Here is a quick guide to all the math symbols and brackets, etc. You will need to
\usepackage{amssymb,amsmath}
to get some of these symbols.
 Beamer is a great LaTeX package for making slides for your talks.
(Although I still use Prosper.)
Here are my toy files: a.tex, which uses the EPS figure fig.eps. It produces this PDF.
Coding tips
 Always write on paper what you want to achieve, and then write out a pseudocode, before you start typing.
 Try to put all useradjustable parameters once at the top of your
program (script). Make everything work from these parameters.
 Write brief comments to explain what nonobvious lines do.
 Break down tasks which are repeated into subroutines (in MATLAB
these are called functions). Write out the interface (inputs and
outputs) of your function with pencil and paper before you start to code it up.
Document your function as you go.
 For every function, write a test routine that goes along with it and
verifies it does what it is supposed to do, on at least one known
test case.
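The last tip, sketched in Python (the course uses Matlab, and the trapezoid-rule function here is just an illustrative example of a routine worth testing):

```python
def trap(f, a, b, n):
    """Composite trapezoid rule for f on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def test_trap():
    # Known test case: integral of x^2 on [0, 1] is exactly 1/3.
    # The trapezoid rule's error is O(1/n^2), about 1.7e-7 for n = 1000,
    # so a 1e-5 tolerance passes with a comfortable margin.
    err = abs(trap(lambda x: x * x, 0.0, 1.0, 1000) - 1.0 / 3.0)
    assert err < 1e-5, err

test_trap()
```

Keeping the test next to the function means every later change to the routine gets checked against a case whose answer you know for certain.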