This assignment contains five questions for a total of 50 marks (equal to 6% of the final mark for this unit).
Late submissions will be subject to late penalties (see the unit guide for full details).
Please submit the hardcopy in the assignment box. The code for the programming exercises as well as relevant output are to be submitted as part of your hardcopy submission. In addition, collect all your Matlab m-files in one zip or rar-file and submit this file through Moodle. The Moodle submission is due at the same time as the hardcopy submission.
1. Singular Value Decomposition. (10 marks)
Suppose a matrix A ∈ ℝ^{m×n} has an SVD A = UΣV⊤. Answer the following questions:
Show that the rank of the matrix A ∈ ℝ^{m×n} is equal to the number of its nonzero singular values.
Show that multiplication by an orthogonal matrix on the left and multiplication by an orthogonal matrix on the right, i.e., UA and BU, where A ∈ ℝ^{m×n} and B ∈ ℝ^{n×m} are general matrices and U ∈ ℝ^{m×m} is an orthogonal matrix, preserve the Frobenius norms of A and B.
Following the above result, show that the Frobenius norm of a matrix A ∈ ℝ^{m×n} is equal to the square root of the sum of squares of its nonzero singular values, i.e.,
∥A∥_F = √(∑_{i=1}^{r} σ_i²).
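As a quick numerical illustration of this identity (a sanity check only, not a proof; the matrix below is an arbitrary random example):

A = randn(7, 4);                        % arbitrary test matrix
s = svd(A);                             % its singular values
abs(norm(A, 'fro') - sqrt(sum(s.^2)))   % should be near machine precision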
2. Bidiagonalisation. (10 marks)
Suppose we have a matrix A ∈ ℝ^{m×n}. Recall the Golub-Kahan bidiagonalisation procedure and the Lawson-Hanson-Chan (LHC) bidiagonalisation procedure (Section 8.2). Answer the following questions:
Work out the operation count required by the Golub-Kahan bidiagonalisation.
Work out the operation count required by the LHC bidiagonalisation.
Using the ratio m/n, derive and explain under what circumstances the LHC is computationally more advantageous than the Golub-Kahan.
Suppose we have a bidiagonal matrix B ∈ ℝ^{n×n}; show that both B⊤B and BB⊤ are tridiagonal matrices.
Hint: recall that the operation count of the QR factorisation (using Householder reflections) is about 2mn² − (2/3)n³. You can relate the two bidiagonalisation procedures to the QR factorisation to work out their operation counts.
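For the last part, a quick numerical check (an illustration only; the question asks for a proof) that B⊤B and BB⊤ are tridiagonal when B is upper bidiagonal:

n = 6;
B = diag(randn(n, 1)) + diag(randn(n-1, 1), 1);   % random upper bidiagonal B
% all entries outside the three central diagonals should be exactly zero
norm(triu(B'*B, 2)) + norm(tril(B'*B, -2))
norm(triu(B*B', 2)) + norm(tril(B*B', -2))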
3. Lanczos Method. (Optional)
Recall the update formula of the Lanczos algorithm
A q⃗_k = β_{k−1} q⃗_{k−1} + α_k q⃗_k + β_k q⃗_{k+1}.
Write the matrices T_{k+1}, Q_{k+1}, and T̃_{k+1} as

$$
T_{k+1} =
\begin{bmatrix}
\alpha_1 & \beta_1 & & 0 \\
\beta_1 & \alpha_2 & \ddots & \\
& \ddots & \ddots & \beta_k \\
0 & & \beta_k & \alpha_{k+1}
\end{bmatrix},
\qquad
Q_{k+1} =
\begin{bmatrix}
\vec{q}_1 & \vec{q}_2 & \cdots & \vec{q}_{k+1}
\end{bmatrix},
$$

$$
\widetilde{T}_{k+1} =
\begin{bmatrix}
\alpha_1 & \beta_1 & & 0 \\
\beta_1 & \alpha_2 & \ddots & \\
& \ddots & \ddots & \beta_k \\
0 & & \beta_k & \alpha_{k+1} \\
0 & & & \beta_{k+1}
\end{bmatrix},
$$

that is, T̃_{k+1} is T_{k+1} with one extra row appended whose only nonzero entry is β_{k+1} in the last column.
Then the resulting decomposition has the matrix form AQ_{k+1} = Q_{k+2} T̃_{k+1}.
Suppose the Lanczos algorithm is executed for a particular symmetric and real matrix A ∈ ℝ^{m×m} and vector b⃗ until, at some step k < m, an entry β_{k+1} = 0 (breakdown) is encountered.
Show that AQ_{k+1} = Q_{k+1}T_{k+1}.
Show that each eigenvalue of the matrix T_{k+1} is an eigenvalue of A.
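For intuition, here is a minimal Lanczos sketch (using a random symmetric test matrix, an assumption for illustration; breakdown handling is omitted) that builds Q_{k+2} and T̃_{k+1} and verifies the decomposition numerically:

m = 50; A = randn(m); A = A + A';     % random symmetric test matrix
b = randn(m, 1); k = 10;
Q = zeros(m, k+2); Q(:, 1) = b / norm(b);
alpha = zeros(k+1, 1); beta = zeros(k+1, 1);
for j = 1:k+1
    v = A * Q(:, j);
    if j > 1, v = v - beta(j-1) * Q(:, j-1); end
    alpha(j) = Q(:, j)' * v;
    v = v - alpha(j) * Q(:, j);
    beta(j) = norm(v);                % beta(j) = 0 would signal breakdown
    Q(:, j+1) = v / beta(j);
end
T  = diag(alpha) + diag(beta(1:k), 1) + diag(beta(1:k), -1);  % T_{k+1}
Tt = [T; zeros(1, k), beta(k+1)];     % T~_{k+1}: one extra row appended
norm(A * Q(:, 1:k+1) - Q * Tt)        % should be near machine precision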
4. Solving Least Squares Problems using SVD. (15 marks)
Recall that the LHC bidiagonalisation first applies the QR factorisation A = QR to reduce the matrix A to a square and upper triangular matrix R, and then applies bidiagonalisation to transform R into a bidiagonal matrix B̂. Then the S-matrix method can be applied to B̂ to compute singular values and singular vectors.
Implement a simplified version of an SVD solver for m ≫ n that bypasses the bidiagonalisation step. Then apply your method to solve a polynomial least squares fitting problem.
The simplified version of the SVD solver consists of the following steps:
Use the reduced QR factorisation in MATLAB ([Q, R] = qr(A, 0);) to compute the reduced QR factors of the matrix A.
Construct the S matrix using the square and upper triangular matrix R, and then solve the eigendecomposition of S using MATLAB built-in eig function.
Obtain the SVD of R using the eigendecomposition of S. Here you need to pay attention to the ordering of the eigenvalues produced by eig.
Compute the reduced SVD of A using the matrix Q and the SVD of R.
Hint: Q3 of Tutorial 11 derives the full and reduced SVD using QR factorisation and the SVD of R. You should use the function header given below.
function [U, Sigma, V] = mySVD(A)
% Usage: [U, Sigma, V] = mySVD(A) computes the SVD of the matrix A using
% the S-matrix method.
%
% To experience the S-matrix method and avoid complicated steps such as
% bidiagonalisation and shifted QR eigenvalue solvers, here we assume the
% input matrix A is a square matrix, explicitly construct the S matrix, and
% then use the MATLAB eigenvalue solver to find the SVD of A.

% your code

end
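For reference, here is a minimal sketch of one possible function body (an illustration under the convention S = [0, R⊤; R, 0], not the prescribed solution); the eigenpairs of this S are λ = ±σᵢ with eigenvectors [vᵢ; uᵢ]/√2:

function [U, Sigma, V] = mySVD(A)
n = size(A, 2);
[Q, R] = qr(A, 0);                      % step 1: reduced QR, A = Q*R
S = [zeros(n), R'; R, zeros(n)];        % step 2: symmetric S matrix from R
[W, D] = eig(S);                        % eigendecomposition of S
[lam, idx] = sort(diag(D), 'descend');  % step 3: positive eigenvalues first
W = W(:, idx);
Sigma = diag(lam(1:n));                 % singular values of R (and of A)
V  = sqrt(2) * W(1:n, 1:n);             % right singular vectors of R
Ur = sqrt(2) * W(n+1:2*n, 1:n);         % left singular vectors of R
U  = Q * Ur;                            % step 4: left singular vectors of A
end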
Then apply the SVD function you implemented to the polynomial least squares fitting problem shown in Example 8.19. The problem setup is given in the MATLAB file svd_lsq.m (given below and available on Moodle). Implement your solution to the least squares problem using the SVD function implemented above, and then plot the least squares solutions, including the fitted coefficients and the fitted polynomial compared to the data.
% generating the test problem
m = 51;
n = 11;
% locations for taking data
xs = linspace(0, 1, m)';
% the data
ys = sin(10*xs);
% create the Vandermonde matrix
A = fliplr(vander(xs)); A = A(:, 1:n);
% compute the SVD using your code
[U, Sigma, V] = mySVD(A);
% check the function mySVD
if norm(U*Sigma*V' - A) > 1E-10
    disp('U Sigma Vt does not equal A')
end
if sum( abs(diag(U'*U) - 1) > 1E-10 ) ~= 0
    disp('U is not orthonormal')
end
if sum( abs(diag(V'*V) - 1) > 1E-10 ) ~= 0
    disp('V is not orthonormal')
end
% your code goes here
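For reference, here is a minimal sketch of the remaining least squares step (one possible approach, not the prescribed solution), assuming mySVD returns the reduced SVD A = U*Sigma*V' with invertible Sigma:

% least squares solution via the reduced SVD: c = V * Sigma^{-1} * U' * ys
c = V * (Sigma \ (U' * ys));          % fitted polynomial coefficients
disp('fitted coefficients:'); disp(c')

% compare the fitted polynomial with the data
xf = linspace(0, 1, 200)';
yf = polyval(flipud(c), xf);          % polyval expects the highest power first
plot(xs, ys, 'ko', xf, yf, 'b-');
legend('data', 'fitted polynomial');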
5. X-Ray Imaging. (15 marks)
Implement the truncated SVD method for reconstructing X-ray images. Download the data file (either xray_data_32.mat or xray_data_64.mat) from Moodle. Two X-ray imaging problems of different sizes (32×32 and 64×64) are given in these data files. You can use either of these data files to complete this question. In the data file, the matrix F is the model, the vector d is the noisy data, the scalar m defines the image size (m×m), and the vector xr represents a random image used for demonstrating how to plot the reconstructed image.
Complete the following tasks:
(i) Compute the SVD of F, and use the rank-k truncated SVD to reconstruct the image, i.e., x⃗_k^+ = V_k Σ_k^{−1} U_k^⊤ d⃗.
(ii) Use k = 100, 101, …, 930 to compute the goodness of fit (of the reconstructed image) to the data, ∥F x⃗_k^+ − d⃗∥, and the 2-norm of the reconstructed image, ∥x⃗_k^+∥.
(iii) Plot the L-curve and determine the index of the corner of the L-curve (which indicates the balance point between the representation error and the robustness). Plot the corner you find on the same plot.
(iv) Plot the reconstructed image using the truncation index found in step (iii). Hint: the MATLAB command imagesc(reshape(xr, m, m), [-0.01, 0.01]); plots the reconstructed image defined by the vector xr.
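A minimal sketch of tasks (i)–(iii) (one possible approach, assuming the 32×32 data file; the corner of the L-curve is then located by inspecting the plot):

load('xray_data_32.mat');             % provides F, d, m, xr
[U, S, V] = svd(F);                   % full SVD of the model
ks = 100:930;
resid = zeros(size(ks)); xnorm = zeros(size(ks));
for i = 1:numel(ks)
    k = ks(i);
    xk = V(:, 1:k) * (S(1:k, 1:k) \ (U(:, 1:k)' * d));  % rank-k reconstruction
    resid(i) = norm(F * xk - d);      % goodness of fit to the data
    xnorm(i) = norm(xk);              % 2-norm of the reconstructed image
end
loglog(resid, xnorm, 'b-');           % the L-curve
xlabel('||F x_k^+ - d||'); ylabel('||x_k^+||');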