Web Page for Math 440
Fall Semester 2014
TEXTBOOK: "Fundamentals of Matrix Computations", David Watkins, Wiley, 2010.
#1, Due Sept 8: Hand in your MATLAB m-files for any problems that require coding. You will be docked points for code that does not run or contains mistakes.
(1) Ex. 1.1.10. This was already mostly done in class on the second day. Use the tic and toc commands to compute CPU time in MATLAB.
#2, Due Sept 17:
(2) Write a MATLAB code that implements matrix-matrix multiplication using pseudo-code (1.1.13). You can simply modify your code from part (1). Add a check in your code that the inner-dimensions of A and X agree.
(3) Ex. 1.1.20.
(4) Ex. 1.2.4.
(5) Ex. 1.2.16. Use MATLAB to solve the problem when m=20, c=d=0, and f(x)=1. The toeplitz function will be useful.
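For reference, the triple-loop structure with an inner-dimension check asked for in (2) might look like the following Python/NumPy sketch (the assignment itself should be a MATLAB m-file; the function name `matmult` is just illustrative, and the loop ordering in pseudo-code (1.1.13) may differ from the row-column-inner ordering used here):

```python
import numpy as np

def matmult(A, X):
    """Triple-loop matrix-matrix product B = A*X with an inner-dimension check."""
    n, m = A.shape
    m2, p = X.shape
    if m != m2:
        raise ValueError("inner dimensions of A and X do not agree")
    B = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                B[i, j] += A[i, k] * X[k, j]
    return B
```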
(1) First, do Exercise 1.3.17(a) by hand. Then modify ForwardSub.m so that it implements backward substitution for upper-triangular matrices (call it BackwardSub.m), and use your algorithm to solve the same system. Verify that the two results are the same. Hand in a listing of your code;
(2) Exercise 1.3.23, hint: use induction;
(3) Exercise 1.4.16; test your algorithm on the matrix A in Exercise 1.4.21;
(4) Exercise 1.4.21; compute the Cholesky factorization by hand in part (a) and verify that it is the same as the result your algorithm from 1.4.16 gave.
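One common way to organize the Cholesky algorithm of 1.4.16 is the outer-product form sketched below in Python/NumPy (the book's pseudo-code may use a different loop ordering; your submission should be MATLAB):

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky factorization A = L*L' for SPD A, outer-product form."""
    A = np.array(A, dtype=float)          # work on a copy
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        if A[k, k] <= 0:
            raise ValueError("matrix is not positive definite")
        L[k, k] = np.sqrt(A[k, k])
        L[k+1:, k] = A[k+1:, k] / L[k, k]
        # update the trailing submatrix (Schur complement)
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], L[k+1:, k])
    return L
```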
#3, Due Sep 30:
(1) 1.4.44 and 1.4.64.
(2) Modify AO2D.m so that it solves the normal equations A'*A*x = A'*b using the Cholesky factorization of A'*A, both with and without symmetric approximate minimum degree reordering (symamd in MATLAB). Generate sparsity plots of the original and reordered matrices using the spy function, as well as spy plots of the Cholesky factors of both matrices (four plots altogether). Does the CPU time improve when the reordering is used? Hand in a listing of your code.
(3) 1.8.4; 1.8.9; 1.8.11.
(4) 1.8.12, test your algorithm on the matrix given in 1.8.4.
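The basic normal-equations solve in (2) (without the sparse reordering and spy plots, which are MATLAB-specific) can be sketched in Python/NumPy as follows; Cholesky applies because A'*A is SPD when A has full column rank:

```python
import numpy as np

def normal_eq_solve(A, b):
    """Least squares via the normal equations (A'A) x = A'b,
    solved with a Cholesky factorization of A'A."""
    G = A.T @ A
    c = A.T @ b
    L = np.linalg.cholesky(G)        # G = L L'
    y = np.linalg.solve(L, c)        # forward solve L y = c
    x = np.linalg.solve(L.T, y)      # back solve L' x = y
    return x
```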
#4, Due Oct 10:
(3) 4.1.11 & 4.1.13
(4) SVD application (also the week 6 lab): use the truncated SVD on the 2D example Deblur2D.m. Report the optimal truncation level, i.e., the one that minimizes ||x_recon-x_true||, and plot the corresponding reconstruction.
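The truncated-SVD reconstruction and the search for the optimal truncation level can be sketched in Python/NumPy as below (the function names and the argument `k` are illustrative; the lab itself works in MATLAB with Deblur2D.m):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeff

def best_truncation(A, b, x_true):
    """Scan all truncation levels; return the k minimizing ||x_recon - x_true||."""
    errs = [np.linalg.norm(tsvd_solve(A, b, k) - x_true)
            for k in range(1, min(A.shape) + 1)]
    return int(np.argmin(errs)) + 1
```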
#5, Due 10/21 (SVD problems): 4.3.8,9; 5.8.14,16,18.
#6, Due 10/31 (QR problems):
(1) 3.2.14 (use a rotator)
(4) Hand in your code for *both parts* of the Week 9 lab.
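For problem (1), a rotator (Givens rotation) that zeros the second entry of a 2-vector can be sketched as follows in Python/NumPy (the MATLAB version is analogous):

```python
import numpy as np

def rotator(x1, x2):
    """Return Q = [[c, -s], [s, c]] such that Q' * [x1, x2]' = [r, 0]'."""
    r = np.hypot(x1, x2)
    if r == 0:
        return np.eye(2)
    c, s = x1 / r, x2 / r
    return np.array([[c, -s], [s, c]])
```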
#7, Due 11/17 (Classical iterative methods):
(1) Hand in your codes implementing Jacobi, Gauss-Seidel, and SSOR from lab 11 below.
(2) 8.2.4, 8.2.12, 8.2.18.
(3) Using Laplace2D.m as a template, implement the methods in part (1) on Example 8.2.8. Plot the residual norm versus iteration for all methods in the same figure as a convergence comparison. For SSOR, use w=1 and w=2/(1+sin(pi*h)), which is the optimal value. So your figure should contain four residual norm curves.
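Jacobi and Gauss-Seidel from part (1), with the residual histories needed for the convergence figure, can be sketched in Python/NumPy as below (SSOR is omitted here; Laplace2D.m is the MATLAB template to follow for the assignment):

```python
import numpy as np

def jacobi(A, b, tol=1e-8, maxit=500):
    """Jacobi iteration x <- x + D^{-1}(b - Ax); returns x and residual history."""
    d = np.diag(A)
    x = np.zeros_like(b)
    res = []
    for _ in range(maxit):
        r = b - A @ x
        res.append(np.linalg.norm(r))
        if res[-1] <= tol * np.linalg.norm(b):
            break
        x = x + r / d
    return x, res

def gauss_seidel(A, b, tol=1e-8, maxit=500):
    """Gauss-Seidel: x <- x + (D + L)^{-1}(b - Ax), L = strictly lower part of A."""
    M = np.tril(A)                       # D + L
    x = np.zeros_like(b)
    res = []
    for _ in range(maxit):
        r = b - A @ x
        res.append(np.linalg.norm(r))
        if res[-1] <= tol * np.linalg.norm(b):
            break
        x = x + np.linalg.solve(M, r)    # a triangular solve in practice
    return x, res
```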
#8, Due 12/2 (Steepest Descent and Conjugate Gradient iterations):
(1) Week 12 Lab
(2) Week 13 Lab
(3) Implement PCG with an incomplete Cholesky factorization (ichol in MATLAB) preconditioner on the X-ray tomography example in Tomography.m. You'll also need Xraymat.m.
Week 1, MATLAB m-file for matrix multiplication: MatMultRowFun.m.
Week 2, Using MATLAB's toeplitz function: ToeplitzMatrix.m.
Week 2, m-file for forward substitution ForwardSub.m
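The forward substitution in ForwardSub.m, and the backward substitution you are asked to write for homework, can be sketched in Python/NumPy as follows (the m-file is the authoritative version for the course):

```python
import numpy as np

def forward_sub(L, b):
    """Solve Lx = b for lower-triangular L by forward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def backward_sub(U, b):
    """Solve Ux = b for upper-triangular U by backward substitution."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```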
Week 3, m-file for adaptive optics application in 1D AO1D.m. Lab problem: modify line 21 of AO1D.m so that the Cholesky factorization is used instead of '\' for solving the system. Note that this is what is done in operational adaptive optics systems, though in two dimensions.
Week 3 Lab, PDF: a lab on the Kronecker product and using sparse reordering to reduce fill-in for the Cholesky factorization.
Week 5, m-file for deblurring (deconvolution) example from class Deblur1D.m.
Week 5 Lab: work on the problems at the bottom of Deblur1D.m.
Week 6 Lab: work on the problem at the bottom of Deblur2D.m.
Week 7 Lab, PDF: GaussianDrawsPCA.m, Eigenfaces.m, and faces.zip.
Week 9 Lab: (i) Use the QR factorization to solve the least squares problem found in AO2D.m; (ii) Write your own QR using reflector.m and test it on a random 10x10 matrix to verify that it works.
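For part (ii), a Householder QR built from reflectors can be sketched in Python/NumPy as below (a single function rather than a separate reflector.m, which is the structure the lab asks for in MATLAB):

```python
import numpy as np

def householder_qr(A):
    """QR factorization by Householder reflectors; returns Q (n x n) and R."""
    A = np.array(A, dtype=float)         # R is built in place on a copy
    n, m = A.shape
    Q = np.eye(n)
    for k in range(min(n, m)):
        x = A[k:, k]
        s = 1.0 if x[0] >= 0 else -1.0   # sign choice avoids cancellation
        v = x.copy()
        v[0] += s * np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv == 0:
            continue                     # column already zero below the diagonal
        u = v / nv
        # apply H = I - 2uu' to the trailing part of A, and accumulate Q
        A[k:, k:] -= 2.0 * np.outer(u, u @ A[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)
    return Q, A
```

Testing on a random 10x10 matrix (as in the lab) amounts to checking Q*R = A, Q'*Q = I, and that R is upper triangular.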
Week 11, Lab, Classical Iterative Methods.
(i) Use Jacobi.m to verify the results in Example 8.2.3.
(ii) Modify Jacobi.m so that it implements Gauss-Seidel; call it GaussSeidel.m. Then use it to verify the results in Example 8.2.11.
(iii) Modify Jacobi.m or GaussSeidel.m so that it implements the SSOR iteration. Apply it to the example in Examples 8.2.3 and 8.2.11 with w=1 and initial guess x=0. Report the number of iterations needed by each of the above three methods to reach the stopping tolerance res_tol=1e-8.
(iv) Apply each of the above three methods to the example in AO2D.m. Report the number of iterations needed by each to reach the stopping tolerance res_tol=1e-8. Also, plot the residual norm versus iteration for each of the three methods, all with initial guess x=0.
Week 12, Lab, steepest descent iteration for solving Ax=b, where A is SPD.
(i) Write your own steepest descent iteration based on the pseudo code (8.6.2) on page 598, first with the preconditioner M=I. Your code should be written as a function. The inputs should be A, b, the maximum number of iterations, and the stopping tolerance for the relative residual norm (||b-Ax||/||b||); use initial guess x=0. The outputs should be the approximate solution x, the relative residual norm at x, and the number of iterations.
(ii) Use your code to verify the results in Example 8.4.9.
(iii) Use your code to verify the results in Example 8.4.16. For this you'll need Laplace2DBook.m. Note that the results may be slightly different since your stopping tolerance is different.
(iv) Modify your code from (i) so that it implements preconditioned steepest descent with the SSOR preconditioner, then use it to verify the results in Example 8.6.8. For this you'll need Laplace2DBook.m. Again, your results may be slightly different due to different stopping tolerances.
(v) Find the optimal relaxation parameter w between 1 and 2 in terms of steepest descent convergence. Plot the relative residual curves for various values of w to illustrate the different convergence rates.
Week 13, Lab, conjugate gradient iteration for solving Ax=b, where A is SPD.
(i) Write your own conjugate gradient iteration based on the pseudo code (8.7.3) on page 604, first with the preconditioner M=I. Your code should be written as a function. The inputs should be A, b, the maximum number of iterations, and the stopping tolerance for the relative residual norm (||b-Ax||/||b||); use initial guess x=0. The outputs should be the approximate solution x, the relative residual norm at x, and the number of iterations.
(ii) Use your code to compare performance of CG with steepest descent, both without preconditioning, on the model problem in Laplace2DBook.m.
(iii) Modify your code from (i) so that it implements preconditioned conjugate gradient with the SSOR preconditioner, then compare your results with SSOR-preconditioned steepest descent on the model problem in Laplace2DBook.m. Is the optimal relaxation parameter w the same for CG?
(iv) Compare the relative residual error curves in a plot for SSOR preconditioned steepest descent and SSOR preconditioned conjugate gradient with optimal w parameter in both cases. Which is better?
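The unpreconditioned (M=I) steepest descent and CG iterations from the two labs, with the input/output interface described above, can be sketched in Python/NumPy as follows (your submissions should be MATLAB functions following pseudo-codes (8.6.2) and (8.7.3); on a 1D Laplacian test problem CG needs far fewer iterations than steepest descent):

```python
import numpy as np

def steepest_descent(A, b, tol=1e-8, maxit=10000):
    """Steepest descent for SPD A with x0 = 0.
    Returns x, the relative residual norm at x, and the iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    nb = np.linalg.norm(b)
    k = 0
    while np.linalg.norm(r) / nb > tol and k < maxit:
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)       # exact line search along r
        x = x + alpha * r
        r = r - alpha * Ar
        k += 1
    return x, np.linalg.norm(r) / nb, k

def conjugate_gradient(A, b, tol=1e-8, maxit=10000):
    """CG for SPD A with x0 = 0 and preconditioner M = I."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    nb = np.linalg.norm(b)
    rho = r @ r
    k = 0
    while np.sqrt(rho) / nb > tol and k < maxit:
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rho_new = r @ r
        p = r + (rho_new / rho) * p      # new A-conjugate search direction
        rho = rho_new
        k += 1
    return x, np.linalg.norm(r) / nb, k
```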