Homework assignments for Math 221, Fall 2024

Homework should be turned in on Gradescope. Homework answers will be posted on bcourses.berkeley.edu.
  • Homework #1 due Monday Sep 9, by 11:59pm
  • From Chapter 1: 2, 4, 10, 11
  • Extra credit from Chapter 1: 18
    See List of Errata for the Textbook
    Hint: Prove the following Lemma: If floating point numbers x and y satisfy 2*y >= x >= y >= 0, then fl(x-y) = x-y, i.e. x-y is an exact floating point number.
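    The lemma in the hint can be sanity-checked numerically (this is not a proof, just a confirmation on random trials). A Python sketch, redoing each subtraction in exact rational arithmetic to verify that the floating-point result incurred no rounding:

    ```python
    # Numerical sanity check (not a proof) of the lemma: if floats x and y
    # satisfy 2*y >= x >= y >= 0, then fl(x - y) = x - y exactly.
    # Fraction(f) converts a float to its exact rational value.
    from fractions import Fraction
    import random

    random.seed(0)
    for _ in range(100000):
        y = random.uniform(0.0, 1e12)
        x = min(random.uniform(y, 2 * y), 2 * y)  # enforce y <= x <= 2*y exactly
        exact = Fraction(x) - Fraction(y)         # exact difference of the two floats
        assert Fraction(x - y) == exact           # fl(x - y) incurred no rounding
    print("no rounding error in 100000 trials")
    ```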
  • In the latest (2019) version of the IEEE 754 Floating Point Standard, we voted to include a new recommended instruction that implements, as a single operation, the computation in this question. The motivation was to support both higher-precision arithmetic, as described in the question, and reproducibility, i.e. getting bitwise identical answers no matter the order in which a sum is computed. Here is a short news article about why this is important. Here is a link to work on implementing reproducible dot products, etc., using existing floating point operations predating the latest IEEE 754 standard. Finally, this article describes how to implement reproducible summation both with and without the new operation in the latest IEEE 754 standard.
  • Recently, two new floating point standard committees have started meeting. One is working on the next IEEE 754 Floating Point Standard, since the IEEE requires that all standards be reviewed and edited every 10 years to keep them up-to-date (so our deadline for the next standard is 2029!). The other is the P3109 Working Group on 8-bit Floating Point for Machine Learning. 8 bits is very low precision for classical uses of floating point, but appears to be enough to tell dogs from cats and for many other ML applications. Here is an interim report from the committee. In the meantime, many companies have already agreed on their own common standard.

  • Homework #2 due Monday, Sep 16, by 11:59pm
    Again, please check List of Errata for the Textbook
  • From Chapter 1: 3, 5, 7, 13, 14, 15
  • From Chapter 1: 16 (parts 1, 2, 4, 5, 6, 8, 10)
  • From Chapter 3: 9
  • Extra Credit: From Chapter 1: 20

  • Homework #3 due Monday, Sep 23, by 11:59pm
  • Question 2.13, parts 1 and 2
  • Consider the problem of solving L*X = B where B is a given n-by-n matrix, L is a given n-by-n nonsingular lower triangular matrix, and X is an unknown n-by-n matrix.
  • Write down the simplest algorithm you can, and count the flops.
  • Analogous to the blocked matrix-multiplication algorithm presented in class, write down a blocked version of this algorithm, and analyze the number of words moved; can you make it move as few words as matrix-multiply (in the Big-Oh sense)?
  • Does it satisfy the same backward error analysis as the simple algorithm (as in Q. 1.11)?
  • Analogous to the recursive matrix-multiplication algorithm presented in class, write down a recursive version of this algorithm, and analyze the number of words moved; can you make it move as few words as matrix-multiply (in the Big-Oh sense)?
  • Does it satisfy the same backward error analysis as the simple algorithm (as in Q. 1.11)?
  • (Extra credit) Show that you can't move fewer words to solve L*X=B than matrix multiplication requires, namely Omega(n^3/sqrt(M)). Hint: Show that any routine for solving L*X=B can be used to do matrix multiplication (of slightly smaller matrices), so the lower bound for matrix multiplication applies (perhaps with a smaller constant, hidden in the Big-Omega notation).
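    For reference, one natural candidate for the "simplest algorithm" above is row-by-row forward substitution; here is a minimal NumPy sketch (an illustration only, not the blocked or recursive versions the question asks for; the function name is ours):

    ```python
    import numpy as np

    def lower_tri_solve(L, B):
        """Solve L*X = B by forward substitution, one row of X at a time.
        L is n-by-n nonsingular lower triangular, B is n-by-n.
        Cost: about n^3 flops (each of the n rows costs ~2*i*n)."""
        n = B.shape[0]
        X = np.zeros_like(B, dtype=float)
        for i in range(n):
            # subtract contributions of the already-computed rows of X
            X[i, :] = (B[i, :] - L[i, :i] @ X[:i, :]) / L[i, i]
        return X

    # quick check against direct multiplication
    rng = np.random.default_rng(0)
    L = np.tril(rng.standard_normal((5, 5))) + 5 * np.eye(5)
    B = rng.standard_normal((5, 5))
    X = lower_tri_solve(L, B)
    print(np.allclose(L @ X, B))  # True
    ```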

  • Homework #4 due Monday, Sep 30, by 11:59pm
  • Questions 2.2, 2.6, 2.7
  • Question 2.8. Hint: Write a Matlab program to generate random examples according to the hint in the book, and compute the backward error of the solutions computed by Cramer's Rule.
  • Question 2.12
  • Analyze the cost of recursive LU (RLU) presented in lecture, deriving and solving recurrences for A(n) = #arithmetic operations and W(n) = #words moved.

  • Homework #5 due Monday, Oct 7, by 11:59pm
  • Question 1, Part 1: Consider an n-by-m matrix A in Compressed Sparse Row (CSR) data format: if A has nnz nonzero entries then
  • val(1:nnz) contains the nonzero entries packed row by row, from left to right in each row
  • colindex(1:nnz) contains the column index of each nonzero entry, i.e. val(i) is in column colindex(i) of A
  • rowbegin(1:n+1) points to where each row begins: the nonzeros of row j are contained in val( rowbegin(j) : rowbegin(j+1)-1 )

  • Write, in Matlab or your preferred language, explicit loops to compute the matrix-vector product y = A*x, where x and y are (dense) vectors of length m and n. (In other words, don't use any of Matlab's built-in functions for sparse matrices, though you may use notation like sum(a(i:j).*b(i:j)) or (a(i:j))'*b(i:j) for the dot product of parts of two column vectors.) Test it by running it on a few small sparse matrices, and comparing to Matlab's built-in routine for y = A*x. Make sure to test a matrix A with one or more zero rows. (Matlab has a built-in sparse matrix-vector multiplication routine, which is called when computing A*x on a matrix A that has been declared "sparse". But the point of this question is to understand CSR format, which is very commonly used.)
  • Question 1, Part 2: Now suppose that A is stored in Compressed Sparse Column (CSC) data format; this means storing A^T in CSR. Again write in Matlab explicit loops to compute y = A*x.
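    To illustrate the CSR layout itself, here is a small worked example in Python (note 0-based indexing, whereas the description above uses 1-based Matlab-style indexing; the array names val/colindex/rowbegin match the problem statement):

    ```python
    import numpy as np

    # A 3-by-4 example with a zero row (row 1), stored in CSR:
    #     [ 1  0  2  0 ]
    # A = [ 0  0  0  0 ]
    #     [ 0  3  0  4 ]
    val      = [1.0, 2.0, 3.0, 4.0]   # nonzeros packed row by row
    colindex = [0,   2,   1,   3]     # column of each nonzero (0-based)
    rowbegin = [0, 2, 2, 4]           # row i occupies val[rowbegin[i]:rowbegin[i+1]]

    def csr_matvec(val, colindex, rowbegin, x):
        n = len(rowbegin) - 1
        y = np.zeros(n)
        for i in range(n):                       # empty rows contribute nothing
            for k in range(rowbegin[i], rowbegin[i + 1]):
                y[i] += val[k] * x[colindex[k]]
        return y

    x = np.array([1.0, 2.0, 3.0, 4.0])
    print(csr_matvec(val, colindex, rowbegin, x).tolist())  # [7.0, 0.0, 22.0]
    ```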
  • Question 2.19. Another hint for Part 1: To show A is nonsingular, we need to show that if the column vector x is nonzero, so is x'*A. Suppose x(i) is the largest component of x in absolute value. What can you say about the i-th component of x'*A?
  • Question 3.1
  • Question 3.3, parts 1, 2 and 3

  • Homework #6 due Monday, Oct 14, by 11:59pm
  • Question 3.4, just the normal equations part
  • Question 3.11
  • Question 3.12
  • Question 3.13
  • Question 3.14
  • Extra Credit: Question 3.17, part 1
  • Question 3.18

  • Homework #7 due Wednesday, Oct 23, by 11:59pm (assignment posted separately); corrections made (in boldface) on Oct 22, 3:22pm and 5:45pm

  • Homework #8 due Wednesday, Oct 30, by 11:59pm
  • Questions 4.1, 4.2, 4.3, 4.4, 4.5

  • Homework #9 due Wednesday, Nov 6, by 11:59pm
  • Questions 4.6, 4.8, 4.11, 4.13, 5.4 (part 1 only), 5.5, 5.14

  • Homework #10 due Wednesday, Nov 13, by 11:59pm
  • Question 6.1
  • Question 6.4 (only need to do parts not done in class)
  • Question 6.5 (see the items on pages 357 and 358 of the errata)
  • Work on your class projects!

  • Homework #11 due Wednesday, Nov 20 at 11:59pm
  • Consider the slight variation on the 2D Model Problem obtained by discretizing the slightly different PDE
    -a * d^2 v(x,y)/dx^2 - b * d^2 v(x,y)/dy^2 = f(x,y)
    where a and b are some positive constants. Briefly justify your answers to the following questions. (Hint: Use Matlab to confirm that your answers are correct for small examples.)
  • Using Kronecker product notation, write down the matrix; call it T(a,b).
  • Write down explicit formulas for the eigenvalues and eigenvectors of T(a,b), again using Kronecker product notation. Is T(a,b) symmetric and positive definite? What is its 2-norm condition number?
  • Does Jacobi's method converge for T(a,b)? What is rho(R_J), the spectral radius of the Jacobi iteration matrix R_J?
  • Repeat the last question for the 3D problem obtained by discretizing
    -a * d^2 v(x,y,z)/dx^2 - b * d^2 v(x,y,z)/dy^2 - c * d^2 v(x,y,z)/dz^2 = f(x,y,z)
    where a, b and c are some positive constants.
  • Extra Credit: Question 6.10
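    One way to do the suggested small-example check is to build T(a,b) from the 1D model problem matrix via Kronecker products, then compare its eigenvalues against a formula built from the standard 1D eigenvalues (from the textbook). A NumPy sketch, with the assumption spelled out in the comments that which Kronecker factor is the x-direction depends on how the unknowns are ordered:

    ```python
    import numpy as np

    def T1(N):
        """1D model problem: tridiagonal with 2 on the diagonal, -1 off."""
        return 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

    def T2(a, b, N):
        """2D discretization of -a v_xx - b v_yy on an N-by-N grid.
        (Which factor corresponds to x vs y depends on the ordering of
        the unknowns; here one direction uses kron(T1, I).)"""
        I = np.eye(N)
        return a * np.kron(T1(N), I) + b * np.kron(I, T1(N))

    # quick numerical check on a small example
    N, a, b = 4, 2.0, 3.0
    T = T2(a, b, N)
    assert np.allclose(T, T.T)            # symmetric
    evals = np.linalg.eigvalsh(T)         # ascending order
    assert evals.min() > 0                # positive definite

    # 1D eigenvalues lam_k = 2 - 2*cos(k*pi/(N+1)); the Kronecker sum
    # above then has eigenvalues a*lam_i + b*lam_j.
    lam = 2 - 2 * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))
    pred = np.sort((a * lam[:, None] + b * lam[None, :]).ravel())
    assert np.allclose(evals, pred)
    print("checks pass")
    ```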

  • Homework #12 due Wednesday, Nov 27, by 11:59pm Pacific Time
  • Consider the same slight variation on the 2D Model Problem, T(a,b), defined in last week's homework.
  • Is there an ordering of the equations and unknowns for which SOR(w) converges, for T(a,b), for some values of w? What ordering, and which values of w?
  • What value w_opt minimizes rho(R_SOR(w))? What is the value of rho(R_SOR(w_opt))?
  • Question 6.15, parts 1 and 2
  • Extra Credit: Question 6.15, part 3
  • Keep working on your class projects!