Web Page for Math 440, Numerical Analysis, Fall 2016
John Bardsley, Professor of Mathematics
SYLLABUS: PDF
TEXTBOOK: "Fundamentals of Matrix Computations", David Watkins, Wiley, 2010.
WEEK 1 LECTURES:
Day 1, Syllabus, 8/29: PDF, Video
Day 2, Lecture 1, 8/30: PDF, Video
Day 3, Lecture 2, 8/31: PDF, Video
Day 4, Lecture 3, 9/2: PDF, Video
HOMEWORK ASSIGNMENTS:
#0, Week 1 (highly recommended, but you don't need to turn anything in): open MATLAB and read/work from the 'Introduction' up through the 'Advanced Graphics' section in this Practical Introduction to MATLAB, by Mark Gockenbach at Michigan Tech.
#1, Due Sept 13: Hand in your MATLAB m-files for any problems that require coding. I will run your codes and will dock you points for code that doesn't run or that has mistakes.
(1) Ex. 1.1.10. NOTE: MATLAB's tic and toc commands provide a simple way to measure elapsed time.
(2) Ex. 1.1.25
(3) Ex. 1.2.4.
(4) Ex. 1.2.16. Use MATLAB to solve the problem when m=20, c=d=0, and f(x)=1. The toeplitz function will be useful.
(5) Write down the algorithm for backward substitution, then modify ForwardSub.m so that it implements the algorithm. Call the new code BackwardSub.m and use it to solve the system in 1.3.17. Verify that your code is giving you the correct answer.
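For reference, a minimal sketch of what BackwardSub.m might look like is given below. It assumes U is square, upper triangular, and nonsingular; follow the structure of ForwardSub.m for the version you hand in.
    function x = BackwardSub(U, b)
    % Sketch of backward substitution for an upper triangular system U*x = b.
    n = length(b);
    x = zeros(n, 1);
    for i = n:-1:1
        % subtract the contributions of the already-computed unknowns
        x(i) = (b(i) - U(i, i+1:n) * x(i+1:n)) / U(i, i);
    end
    end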
#2, Due Sept 23:
(1) Exercise 1.4.16: write your own code that computes the Cholesky factorization of a matrix using pseudo-code (1.4.17). Check your code by running it on the linear system found in Exercise 1.4.21, and compare your solution with the one obtained using MATLAB's chol function to verify that they give the same Cholesky factor and the same solution. (A rough sketch of the factorization loop appears after this assignment.)
Extra Credit: Use your Cholesky factorization, forward substitution, and backward substitution codes all together to solve the same linear system.
(2) Ex. 1.4.23.
(3) Ex. 1.4.38.
(4) Ex. 1.4.52, 54, 56.
(5) Do Exercise 1.6.7 (a-d) for B=A'*A defined in AO2D.m.
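For reference on problem (1) above, here is a rough sketch of a column-oriented Cholesky factorization. It is a generic version, not necessarily identical to pseudo-code (1.4.17), so adapt the indexing to match the book; the function name is just a placeholder.
    function R = MyChol(A)
    % Sketch of a Cholesky factorization A = R'*R for a symmetric positive
    % definite matrix A; R is upper triangular, so the result can be
    % compared directly with MATLAB's chol(A).
    n = size(A, 1);
    R = zeros(n);
    for j = 1:n
        % diagonal entry: subtract the squares of the entries above it
        R(j, j) = sqrt(A(j, j) - R(1:j-1, j)' * R(1:j-1, j));
        % remaining entries of row j
        R(j, j+1:n) = (A(j, j+1:n) - R(1:j-1, j)' * R(1:j-1, j+1:n)) / R(j, j);
    end
    end
Once R is computed, the system A*x = b can be solved with x = R\(R'\b), or with your forward and backward substitution codes for the extra credit.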
#3, Due October 5:
(1) Exercise 3.2.14, using a rotator.
(2) Exercise 3.2.47.
(3) Write a MATLAB function implementing algorithm (3.2.35). Your function should take a vector x as its input and output a vector u and a scalar gamma such that Q=I-gamma*(u*u') satisfies Qx=[+/-||x||,0,...,0]^T. (A generic sketch of such a function appears after this assignment.)
(4) Use your MATLAB function from problem (3) to do 3.2.39.
(5) GIVE YOURSELF TIME ON THIS ONE! Exercise 3.3.9, using your MATLAB function from problem (3) to generate the reflector matrices.
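For problem (3) above, a generic sketch of a reflector set-up routine is shown below. It uses the standard construction (choose the sign to avoid cancellation, then scale so that u(1) = 1); the details of algorithm (3.2.35) in Watkins may differ, so treat this only as a starting point, and the function name is just a placeholder.
    function [u, gamma] = MakeReflector(x)
    % Sketch of the construction of a Householder reflector
    % Q = I - gamma*(u*u') such that Q*x = [+/-norm(x); 0; ...; 0].
    x = x(:);                         % work with a column vector
    beta = norm(x);
    if beta == 0
        u = x; u(1) = 1; gamma = 0;   % x is already zero, so Q = I
        return
    end
    if x(1) < 0
        beta = -beta;                 % choose the sign to avoid cancellation below
    end
    u = x;
    u(1) = x(1) + beta;
    gamma = u(1) / beta;              % consistent with the u(1) = 1 scaling used next
    u = u / u(1);                     % normalize so that u(1) = 1
    end
You can check the result by forming Q = eye(length(x)) - gamma*(u*u') and verifying that Q*x has zeros below its first entry.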
#4, Due October 18:
(1) 4.1.11.
(2) 4.1.13 & 4.3.8.
(3) 4.1.14.
(4) 4.3.9.
(5) SVD application: do problems 2-4 at the bottom of Deblur2D.m. Hand in your finished code.
#5, Due October 31, in class: for each coding problem, hand in code whose output answers the relevant questions.
(i) Exercise 5.3.10
(ii) Exercise 5.3.18
(iii) Exercise 5.3.36
(iv) Exercise 5.4.7
(v) Exercise 5.5.7
#6, Due November 9:
(i) 5.8.14, 16, 17, 18.
(ii) Do problems 1-3 at the bottom of Eigenfaces.m.
(iii) Extra Credit: do problem 4 at the bottom of Eigenfaces.m.
#7, Due 11/22, Steepest Descent iterations: Lab #8
#8, Due 12/7, 5pm:
A. Conjugate Gradient iteration: note that this is a modification of Lab #9.
Hand in code for (i) and (iii) and figures (or code that generates figures) for (ii) and (iv).
(i) Write your own conjugate gradient iteration based on the pseudo-code (8.7.3) on page 604, first with the preconditioner M=I. Your code should be written as a function. The inputs should be A, b, the maximum number of iterations, and the stopping tolerance for the relative residual norm (||b-Ax||/||b||); use initial guess x=0. The outputs should include the approximate solution x and the vector of relative residual norms at each iteration. (A minimal sketch of such a function appears after item (iv).)
(ii) Use your code to compare the performance of CG with steepest descent, without preconditioning, on the model problem in Laplace2DBook.m. Stop iterations once the relative residual norm is below 10^-8, and in a single figure, plot the relative residual norm versus iteration for the two methods.
(iii) Modify your code from (i) so that it implements preconditioned conjugate gradient with SSOR preconditioner, defined in Example 8.6.8.
(iv) Use your code to compare the performance of CG with steepest descent, both with SSOR preconditioner, on the model problem in Laplace2DBook.m. Use the optimal relaxation parameter w for the preconditioner. Is it the same for both algorithms? Stop iterations once the relative residual norm is below 10^-8, and in a single figure, plot the relative residual norm versus iteration for the two methods.
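For reference on (i) above, a minimal sketch of an unpreconditioned CG function is given below. The interface (argument order, output names) is only one reasonable choice, and you should check the recurrences against pseudo-code (8.7.3) rather than trusting this version.
    function [x, relres] = MyCG(A, b, maxit, tol)
    % Sketch of the conjugate gradient iteration for A*x = b with A symmetric
    % positive definite, preconditioner M = I, and initial guess x = 0.
    % relres(k) is the relative residual norm ||b - A*x||/||b|| at iteration k.
    x = zeros(size(b));
    r = b;                            % residual for x = 0
    p = r;                            % first search direction
    normb = norm(b);
    rr = r' * r;
    relres = zeros(maxit, 1);
    for k = 1:maxit
        Ap = A * p;
        alpha = rr / (p' * Ap);       % step length along p
        x = x + alpha * p;
        r = r - alpha * Ap;
        relres(k) = norm(r) / normb;
        if relres(k) < tol
            relres = relres(1:k);     % trim unused entries and stop
            return
        end
        rr_new = r' * r;
        p = r + (rr_new / rr) * p;    % standard CG direction update
        rr = rr_new;
    end
    end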
B. Minimizing the Rosenbrock function f(x,y)=100*(y-x^2)^2+(1-x)^2.
Hand in the code that solves problems (ii) and (iii).
(i) Compute the gradient and Hessian of f. Prove that (x*,y*)=(1,1) is the only local minimizer of this function, and that the Hessian matrix at x* is positive definite.
(ii) Write a MATLAB function that has (x,y) as its input and the gradient and Hessian of f at (x,y) as its outputs. (A sketch of such a function appears after item (iii).)
(iii) Using your code from (ii), write programs implementing the steepest descent and Newton algorithms to minimize f. Use the backtracking line search algorithm from the notes with initial step length alpha_0=1 and backtracking parameter rho=1/2. Stop iterations once the norm of the gradient is below 10^-8. First try the initial point (x_0,y_0)=(1.2,1.2) and then the more difficult starting point (x_0,y_0)=(-1.2,1). How many iterations do the algorithms take in these two cases?
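For part (ii) above, the gradient and Hessian of f follow directly from the chain rule; the sketch below packages them in one function (the name and output order are just one choice).
    function [g, H] = RosenbrockDerivs(x, y)
    % Gradient and Hessian of f(x,y) = 100*(y - x^2)^2 + (1 - x)^2 at (x,y).
    g = [ -400*x*(y - x^2) - 2*(1 - x);
           200*(y - x^2) ];
    H = [ 1200*x^2 - 400*y + 2,  -400*x;
          -400*x,                 200 ];
    end
As a check, at (x,y) = (1,1) this returns g = [0; 0] and H = [802 -400; -400 200], consistent with part (i).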
COMPUTER LABS/CODE:
Lab #1, September 6:
(a) Do Exercise 1.1.9 & 1.1.14.
(b) Do Exercise 1.3.4, then enter G and b into MATLAB and compute the answer using ForwardSub.m.
(c) Let's have a look at the m-file ToeplitzMatrix.m to see how to use the toeplitz function in MATLAB.
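If you have not used toeplitz before, the call below builds the standard tridiagonal second-difference matrix; the size n = 5 is only for illustration.
    % n-by-n tridiagonal matrix with 2 on the diagonal and -1 off the diagonal;
    % for a symmetric Toeplitz matrix, the first column determines everything.
    n = 5;
    col = [2; -1; zeros(n-2, 1)];
    T = toeplitz(col);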
Lab #2, September 13: adaptive optics example and coding the Cholesky factorization.
(a) Modify AO1D.m and AO2D.m so that A'*A x=A'*b is solved using the Cholesky factorization.
(b) Do Exercise 1.6.7 (a-d) for B=A'*A defined in AO2D.m.
(c) Exercise 1.4.16: write your own code that computes the Cholesky factorization of a matrix using pseudo-code (1.4.17). Check your code by running it on the linear system found in Exercise 1.4.21, and compare your solution with the one obtained using MATLAB's chol function to verify that they give the same Cholesky factor and the same solution.
Lab #3, September 20, PDF: a lab on the Kronecker product and computing the Cholesky factorization of the 3D negative-Laplacian.
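As a reminder of how the Kronecker product enters, the 3D negative Laplacian on an n-by-n-by-n grid can be assembled from a 1D second-difference matrix roughly as in the sketch below; the 1D matrix, grid size, and scaling are assumptions, so use whatever the lab PDF specifies.
    % Sketch: assemble the 3D negative Laplacian from the 1D matrix via kron.
    n  = 10;                                  % grid points per dimension (assumed)
    e  = ones(n, 1);
    L1 = spdiags([-e 2*e -e], -1:1, n, n);    % 1D negative second-difference matrix
    I  = speye(n);
    L3 = kron(kron(I, I), L1) + kron(kron(I, L1), I) + kron(kron(L1, I), I);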
Lab #4, September 27: QR factorization
(i) Use the QR factorization to solve the least squares problem found in AO2D.m; that is, compute the QR factorization of A and then use it to compute the least squares solution. Verify that you get the same solution as is given by 'recon = (A'*A)\(A'*b);' in the code. (A minimal sketch appears after this lab.)
(ii) Do problem 3 from Homework #3 and then use it to do problem 4 from Homework #3.
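For (i) above, the economy-size QR factorization gives the least squares solution directly; a minimal sketch (assuming A and b are as in AO2D.m) is:
    % Least squares via the economy-size QR factorization: minimize ||A*x - b||.
    [Q, R] = qr(A, 0);          % A = Q*R with Q having orthonormal columns
    recon_qr = R \ (Q' * b);    % solve R*x = Q'*b by back substitution
    % compare with the normal-equations solution already in the code:
    % recon = (A'*A)\(A'*b);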
Lab #5, October 4: Deblurring and the SVD
(i) Have a look at the code Deblur1D.m as it relates to yesterday's lecture on deblurring. Modify the code so that the following Gaussian kernel is used instead: a(t) = (2*pi/20^2)^(-1/2)*exp(-t^2/(2/20^2)). (A vectorized MATLAB version of this kernel appears after this lab.)
(ii) Work on HW #3 coding exercises.
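One way to code the kernel from (i) is as a vectorized anonymous function; the name a is only a suggestion.
    % Gaussian kernel from part (i), written to accept vector inputs t
    a = @(t) (2*pi/20^2)^(-1/2) * exp(-t.^2 / (2/20^2));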
Lab #6, October 11: Deblurring and the SVD
(i) work on the problems at the bottom of Deblur1D.m.
(ii) work on the problems at the bottom of Deblur2D.m.
Lab #7, October 18: power and shifted inverse power methods. Here's a PDF of the problems below. (A sketch of the basic power iteration appears after the exercises.)
(i) Do Exercise 5.3.10.
(ii) Do Exercise 5.3.18.
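If you want a starting point for the basic power method (before adding shifts and inverses), a generic sketch is given below; the normalization and stopping test are one common choice, not necessarily the book's.
    function [lambda, v] = PowerMethod(A, v, maxit, tol)
    % Sketch of the basic power iteration for the dominant eigenpair of A,
    % starting from the vector v.
    v = v / norm(v);
    lambda = 0;
    for k = 1:maxit
        w = A * v;
        lambda_new = v' * w;        % Rayleigh quotient estimate
        v = w / norm(w);
        if abs(lambda_new - lambda) < tol * abs(lambda_new)
            lambda = lambda_new;
            return
        end
        lambda = lambda_new;
    end
    end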
Lab #7, October 28: PDF: GaussianDrawsPCA.m, Eigenfaces.m, and faces.zip.
Lab #8, November 15, steepest descent iteration for solving Ax=b when A is symmetric positive definite:
(i) Write your own steepest descent iteration based on the pseudo-code (8.6.2) on page 598, first with the preconditioner M=I. Your code should be written as a function. The inputs should be A, b, the maximum number of iterations, and the stopping tolerance for the relative residual norm (||b-Ax||/||b||); use initial guess x=0. The outputs should be the approximate solution x, the relative residual norm at x, and the number of iterations. (A minimal sketch of such a function appears after item (v).)
(ii) Use your code to verify the results in Example 8.4.9.
(iii) Use your code to verify the results in Example 8.4.16. For this you'll need Laplace2DBook.m. Note that the results may be slightly different since your stopping tolerance is different.
(iv) Modify your code from (i) so that it implements preconditioned steepest descent with the SSOR preconditioner defined in Example 8.6.8, then use it to verify the results in Example 8.6.8. For this you'll need Laplace2DBook.m. Again, your results may be slightly different due to different stopping tolerances.
(v) Find the optimal relaxation parameter w between 1 and 2 for the SSOR preconditioner in terms of steepest descent convergence. Plot the relative residual curves for various values of w to illustrate the different convergence rates.
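A minimal sketch of the unpreconditioned steepest descent function from (i) is given below; check it against pseudo-code (8.6.2), since this is the generic version rather than the book's exact layout.
    function [x, relres, iter] = MySD(A, b, maxit, tol)
    % Sketch of the steepest descent iteration for A*x = b with A symmetric
    % positive definite, preconditioner M = I, and initial guess x = 0.
    x = zeros(size(b));
    r = b;                              % residual b - A*x for x = 0
    normb = norm(b);
    relres = 1;                         % relative residual norm at x = 0
    for iter = 1:maxit
        Ar = A * r;
        alpha = (r' * r) / (r' * Ar);   % exact line search along the residual
        x = x + alpha * r;
        r = r - alpha * Ar;
        relres = norm(r) / normb;
        if relres < tol
            return
        end
    end
    end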
Lab #9, November 28, conjugate gradient iteration for solving Ax=b, where A is SPD.
(i) Write your own conjugate gradient iteration based on the pseudo code (8.7.3) on page 604, first with the preconditioner M=I. Your code should be written as a function. The inputs should be A, b, the maximum number of iterations, and the stopping tolerance for the relative residual norm (||b-Ax||/||b||); use initial guess x=0. The outputs should be the approximate solution x, the relative residual norm at x, and the number of iterations.
(ii) Use your code to compare the performance of CG with steepest descent, both without preconditioning, on the model problem in Laplace2DBook.m.
(iii) Modify your code from (i) so that it implements preconditioned conjugate gradient with the SSOR preconditioner, then compare your results with SSOR-preconditioned steepest descent on the model problem in Laplace2DBook.m. Is the optimal relaxation parameter w the same for CG?
(iv) Compare the relative residual error curves in a plot for SSOR preconditioned steepest descent and SSOR preconditioned conjugate gradient with optimal w parameter in both cases. Which is better?