Web Page for Math 514
Spring Semester 2012
Numerical Linear Algebra
SYLLABUS: PDF
TEXTBOOK: "Fundamentals of Matrix Computations", David Watkins, Wiley, 2010.
HOMEWORK ASSIGNMENTS:
#1, Due 2/1: Exercises 1.1.10,20,25; 1.2.17,21.
#2, Due 2/10:
(1) Modify forwardsub.m so that it implements backward substitution, hand in a listing of your code;
(2) Use your code to solve the system in 1.3.17 and also solve the system by hand to see that you get the same result;
(3) Do 1.3.25 & 1.3.26;
(4) Do 1.4.21;
(5) Write a short MATLAB function that does the following: (i) inputs a matrix A and vector b, (ii) uses MATLAB's 'chol' function to compute the Cholesky factorization of A, (iii) solves Ax=b using the 'forwardsub' and 'backwardsub' functions you wrote above, (iv) outputs the solution. Test the code on the system in 1.4.21 and see that you get the same answer.
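For reference, backward substitution for an upper-triangular system can be sketched as follows (the function name and interface are illustrative, not a required format):

```matlab
function x = backwardsub(U, b)
% Solve Ux = b, where U is upper triangular, by backward substitution.
n = length(b);
x = zeros(n, 1);
for i = n:-1:1
    % Subtract the contribution of the already-computed unknowns.
    x(i) = (b(i) - U(i, i+1:n) * x(i+1:n)) / U(i, i);
end
end
```

For part (5), recall that MATLAB's `chol(A)` returns an upper-triangular R with A = R'*R, so Ax = b is solved by forward substitution on R'y = b followed by backward substitution on Rx = y.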
#3, Due 2/22:
(1) Do 1.4.38.
(2) Do 1.4.44.
(3) (i) Do 1.5.9, then (ii) modify your codes from the previous homework so that they implement banded forward and backward substitution, with the semiband width s as an input, and finally (iii) use your codes to solve Ax=b, where A is the 2D negative-Laplacian matrix obtained from Laplacian.m and b is a vector of 1's. Hand in a listing of your codes and a plot of the solution x.
(4) Do 1.5.14, but with Ax=b from problem 1.2.17, which you constructed in HW #1. In part (b), don't worry about the 'physical significance'.
(5) Do 1.6.7.
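For the banded substitution in problem (3), the only change from the full-matrix version is that each row's inner product stops at the edge of the band. A sketch of the backward case, with illustrative names (the upper semiband width s means U(i,j) = 0 whenever j > i + s):

```matlab
function x = bandbacksub(U, b, s)
% Backward substitution exploiting upper semiband width s.
n = length(b);
x = zeros(n, 1);
for i = n:-1:1
    j = min(i + s, n);          % last possibly-nonzero column in row i
    x(i) = (b(i) - U(i, i+1:j) * x(i+1:j)) / U(i, i);
end
end
```

The banded forward-substitution code is analogous, with the row sum running from max(i - s, 1) to i - 1.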
#4, Due 3/2:
(1) Modify AOTwoD.m so that it solves the system using (a) A\b (Gaussian elimination with partial pivoting), (b) the Cholesky factorization with symmetric approximate minimum degree reordering (symamd in MATLAB), and (c) the LU factorization with column approximate minimum degree reordering (colamd in MATLAB, see p. 109). Which is the most efficient method?
(2) 1.7.10 (b) & (c), 1.7.18
(3) 1.8.4, 1.8.9, 1.8.11
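For problem (1), solving with a fill-reducing reordering amounts to permuting the matrix before factoring and undoing the permutation afterward. A sketch of case (b), assuming A is sparse and symmetric positive definite (variable names are illustrative):

```matlab
% Cholesky with symmetric approximate minimum degree reordering.
p = symamd(A);                 % fill-reducing permutation
R = chol(A(p, p));             % A(p,p) = R'*R, with much less fill-in
x = zeros(size(b));
x(p) = R \ (R' \ b(p));        % forward/back solves, then undo the permutation
```

Comparing cases (a)-(c), look at both the timing (tic/toc) and the number of nonzeros in the computed factors (nnz).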
#5, Due 3/12:
(1) 1.7.47
(2) 2.1.17
(3) 2.1.32 & 2.1.33
(4) 2.2.11
#6, Due 3/21:
(1) QR by rotators: 3.2.12 & 3.2.14
(2) QR by reflectors: 3.2.28, 3.2.39, & 3.2.47
#7, Due 3/30:
(1) Exercise 3.3.9, as instructed in computer lab #8.
(2) Use the QR factorization to solve the least squares problem found in AOTwoD.m.
#8, Due 4/16:
(1) Subspace and basis: Exercises 3.5.1, 3.5.5, 3.5.7.
(2) Null-space and rank: Exercise 3.5.19.
(3) Least-squares solutions: 3.5.23, 3.5.29
#9, Due 4/25:
(1) SVD Theory: 4.1.5,6,11, 4.3.8,9,11
(2) Computation: 4.3.9
#10, Due 5/4:
(1) SVD application: Use the truncated SVD, in a fashion similar to what you did in lab 9, on the 2D example BlurringLabTwoD.m.
(2) Eigenvalues: 5.2.6 (also find the eigenvectors of B and C), 5.2.14 (a), (b), (d).
(3) Power Method: 5.3.10, 5.3.18 (turn in your m-file for implementing these power methods).
(4) Computing the SVD: 5.8.14, 5.8.16, 5.8.17.
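For problem (1), the truncated-SVD solution keeps only the k largest singular values, which regularizes the ill-conditioned deblurring problem. A generic sketch, assuming a blurring model of the form A*x = b (the variable names and the truncation level are illustrative):

```matlab
% Truncated-SVD solution of an ill-conditioned system A*x = b.
[U, S, V] = svd(A);
k = 50;                        % truncation level; choose by inspecting diag(S)
Uk = U(:, 1:k); Sk = S(1:k, 1:k); Vk = V(:, 1:k);
xk = Vk * (Sk \ (Uk' * b));    % solution using only the k largest singular values
```

Try several values of k and compare the reconstructions; too small a k over-smooths, while too large a k amplifies noise.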
COMPUTER LABS AND CODE:
(1/27) lab 1: lab1.pdf: Ex1_1_9.m, Ex1_1_10.m, Sect1_2_BVP.m
(2/1) lab 2: (1) Write a MATLAB function, with inputs L and b, that performs forward substitution. Note that L must be lower triangular. Use your function to solve the system in Exercise 1.3.4. (2) If there's time, try Exercise 1.3.7.
(2/6) lab 3: Work on problems 1, 2, and 5 in Assignment #2 above.
(2/13) lab 4: Using the 3D Laplacian in Laplacian.m, do Exercise 1.6.5.
(2/15) lab 5: (1) Using the 2D Laplacian in Laplacian.m, do problem 1.6.7 (problem 5 from HW 3); and (2) discuss the adaptive optics phase reconstruction problem, have a look at AOTwoD.m, then modify the code so that it implements the Cholesky factorization with the best reordering from part (1).
(2/24) lab 6: (1) Do problem #1 from Homework 4; (2) Modify GMRFsimulate.m so that samples are computed from the 1D and 2D Gaussian Markov random fields (you'll need to compute the Cholesky factorization).
(3/5) lab 7: Do exercise 3.1.5 from the text.
(3/19) lab 8: Do exercises 3.2.48 and 3.3.9 from the text.
(4/19) lab 9: Do exercises 4.2.8, 4.3.9. Then have a look at BlurringLabOneD.m.
(4/25) lab 10: (i) Write MATLAB code that implements the power method (see Exercise 5.3.10), then verify the results in Example 5.3.5; (ii) modify your code so that it implements the shifted-inverse power method, then verify the results of Example 5.3.16.
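For lab 10, the core iteration can be sketched as follows (the function name, interface, and stopping test are illustrative, not a required format):

```matlab
function [lambda, q] = powermethod(A, q, tol, maxit)
% Power method sketch: repeatedly apply A and normalize;
% the Rayleigh quotient converges to the dominant eigenvalue
% when it is simple and strictly largest in modulus.
for k = 1:maxit
    z = A * q;
    q = z / norm(z);            % normalized iterate
    lambda = q' * A * q;        % Rayleigh quotient estimate
    if norm(A * q - lambda * q) < tol
        break
    end
end
end
```

For the shifted-inverse variant in part (ii), replace the multiplication z = A*q with the solve z = (A - rho*eye(size(A))) \ q for a shift rho near the target eigenvalue; the eigenvalue estimate is then recovered from the Rayleigh quotient as above.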