
Matrix Computations Lab Session 3 Solution

MA-423 : Lab 2018 R. Alam







The purpose of this lab tutorial is to solve the Least-Squares Problem (in short, LSP) Ax = b.




Here A ∈ C^{m×n} and b ∈ C^m, and usually m is much bigger than n.




Origin: Suppose that we have a data set (ti, bi), i = 1 : m, that has been obtained from some experiment. These data are governed by some unknown laws, so the task is to come up with a model that best fits them. A model is generated by a few functions ϕ1, . . . , ϕn, called model functions. Once a model is chosen, the task is therefore to find a function p from the span of the model functions that best fits the data.




Suppose that the model functions ϕ1, . . . , ϕn are given. For p ∈ span(ϕ1, . . . , ϕn), we have p = x1ϕ1 + · · · + xnϕn for some xj ∈ C. Now, forcing p to pass through the data (ti, bi) for i = 1 : m, we have p(ti) = bi + ri, where ri is the error. We want to choose that p for which the sum of the squares of the errors ri is the smallest, that is, for which

    ∑_{i=1}^{m} |ri|² is minimized.








































Now p(ti) = bi + ri gives x1ϕ1(ti) + · · · + xnϕn(ti) = bi + ri. Thus, in matrix notation,

    [ ϕ1(t1)  · · ·  ϕn(t1) ] [ x1 ]   [ b1 ]   [ r1 ]
    [ ϕ1(t2)  · · ·  ϕn(t2) ] [ x2 ] = [ b2 ] + [ r2 ]
    [   ⋮              ⋮    ] [ ⋮  ]   [ ⋮  ]   [ ⋮  ]
    [ ϕ1(tm)  · · ·  ϕn(tm) ] [ xn ]   [ bm ]   [ rm ]
This is of the form Ax = b + r, and we have to choose x ∈ C^n for which ∥r∥₂ = ∥Ax − b∥₂ is minimized. We write this as the LSP Ax = b.




1. Consider the following data












    t | 1.0   1.5   2.0   2.5   3.0
    b | 1.1   1.2   1.3   1.3   1.4
Set up the LSP Ax = b for a straight line fitted to the data points. Use the standard basis polynomials ϕ1(t) = 1 and ϕ2(t) = t.



Useful matlab commands: t = 1:.5:3; t = t'; s = ones(5,1); A = [s t];
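For concreteness, a minimal setup sketch along these lines (the data vector b is typed in by hand; the variable names follow the commands above):

    t = 1:.5:3; t = t';             % column vector t = [1.0; 1.5; 2.0; 2.5; 3.0]
    b = [1.1; 1.2; 1.3; 1.3; 1.4];  % data values as a column vector
    s = ones(5,1);                  % first column: phi_1(t_i) = 1
    A = [s t];                      % second column: phi_2(t_i) = t_i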




Compute the solution of the LSP Ax = b by using the matlab command x = A\b (the "backslash" command).



Use the matlab plot command to plot the five data points and your least squares straight line. Type help plot for information about using the plot command.
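A possible plotting sketch, assuming x = A\b has been computed as above:

    x = A\b;                        % x(1) = intercept, x(2) = slope
    tt = linspace(1, 3, 100);       % fine grid for drawing the line
    plot(t, b, 'o', tt, x(1) + x(2)*tt, '-')
    xlabel('t'), ylabel('b'), legend('data', 'least squares line')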



Use matlab to compute ∥r∥₂, the norm of the residual r.
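For instance, with x as computed above and r defined by Ax = b + r:

    r = A*x - b;                    % residual of the least-squares fit
    norm(r)                         % norm gives the 2-norm by default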



2. Determine the polynomial of degree 19 that best fits the function f(t) = sin((π/5)t) + 5t at the points t1 = −5, t2 = −4.5, . . . , t23 = 6. Set up the LSP Ax = b and determine the polynomial p in three different ways (a combined matlab sketch follows part (c) below):



(a) By using the matlab command A\b. This uses QR factorization to solve the LSP Ax = b. Call this polynomial p1.










(b) By solving the normal equation A^*Ax = A^*b. Use x = (A'*A)\(A'*b). Call this polynomial p2.


(c) By solving the system

    [ Im   A ] [ −r ]   [ b ]
    [ A^*  0 ] [  x ] = [ 0 ].

Call this polynomial p3.
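A sketch covering the setup and all three parts; the monomial basis 1, t, . . . , t^19 is assumed, so A is a 23 × 20 Vandermonde-type matrix, and since the data here are real, A' (the transpose) plays the role of A^*:

    t = (-5:.5:6)';                 % t_1 = -5, ..., t_23 = 6
    b = sin(pi/5*t) + 5*t;          % f evaluated at the t_i
    A = t.^(0:19);                  % columns t.^0, t.^1, ..., t.^19

    p1 = A\b;                       % (a) backslash (QR factorization)
    p2 = (A'*A)\(A'*b);             % (b) normal equations

    [m, n] = size(A);               % (c) the augmented system
    M   = [eye(m) A; A' zeros(n)];
    sol = M \ [b; zeros(n,1)];      % sol = [-r; x]
    p3  = sol(m+1:m+n);             % the last n entries give x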



Compute the condition number (use the matlab command cond(A)) of the coefficient matrix associated with each of the systems that you are solving. Print the results to 16 digits (use format long e). Which one is the most ill-conditioned?
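For example, with A and M as in the sketch above:

    format long e
    cond(A)                         % system solved in (a)
    cond(A'*A)                      % system solved in (b); roughly cond(A)^2
    cond(M)                         % system solved in (c)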






The norm of the residual, ∥r∥₂ = ( ∑_{i=1}^{23} |pj(ti) − f(ti)|² )^{1/2} for j = 1, 2, 3, gives an idea of the goodness of the fit in each case. Compute these norms (again in format long e). You may use the polyval command to evaluate the polynomials p1, p2 and p3 at the points ti, i = 1 : 23. However, you must first flip the coefficient vectors p1, p2 and p3 upside down using the flipud command, since polyval expects coefficients in descending order of powers. Type help polyval and help flipud for details.
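One way to compute these norms, with t and b as in the sketch above:

    format long e
    norm(polyval(flipud(p1), t) - b)   % fit residual for p1
    norm(polyval(flipud(p2), t) - b)   % fit residual for p2
    norm(polyval(flipud(p3), t) - b)   % fit residual for p3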




Finally, plot the polynomials p1, p2, p3 and the function f on [−5, 6]. Use different colours to distinguish these plots. Do you observe any difference? If yes, which polynomial is a better approximation of f?
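A plotting sketch (the colours are chosen arbitrarily):

    tt = linspace(-5, 6, 500)';
    ff = sin(pi/5*tt) + 5*tt;
    plot(tt, ff, 'k', tt, polyval(flipud(p1), tt), 'r', ...
         tt, polyval(flipud(p2), tt), 'g', tt, polyval(flipud(p3), tt), 'b')
    legend('f', 'p1', 'p2', 'p3')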




The Least Squares Problem (LSP) Ax = b has a solution with a good fit if b is nearly in the range R(A) of A, in other words, if the angle θ between b and Ax is very small. The purpose of the following exercise is to show that in such cases the QR method of solving the LSP Ax = b is better than the Normal Equations method.



3. Use the linspace command to generate a column vector X consisting of 50 equally spaced points between 0 and 1. Generate the Vandermonde matrix A whose columns are X.^(i−1) for i = 1 : 7. Choose w = randn(7, 1) and b = A*w. Then b ≈ p(X), where p is the polynomial p(t) = w(1) + w(2)t + · · · + w(7)t^6. This ensures that θ ≈ 0 for the LSP Ax = b. Solve this problem via the Normal Equations method and via the QR method with reflectors (this is the default procedure, so you just have to type A\b for this!) and denote the solutions by xhat and xtilde, respectively. Examine the relative errors in the solutions,

    ∥xhat − w∥₂/∥w∥₂ and ∥xtilde − w∥₂/∥w∥₂,

as well as those in the fits,

    ∥rhat∥₂/∥b∥₂ and ∥rtilde∥₂/∥b∥₂,

where rhat := b − A*xhat and rtilde := b − A*xtilde. Which method fares better? Also find the condition number of A.
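A possible script for a single run; the variable names follow the text:

    X = linspace(0, 1, 50)';        % 50 equally spaced points in [0, 1]
    A = X.^(0:6);                   % Vandermonde columns X.^0, ..., X.^6
    w = randn(7, 1);
    b = A*w;                        % b = p(X), so theta is essentially 0
    xhat   = (A'*A)\(A'*b);         % Normal Equations method
    xtilde = A\b;                   % QR method (Householder reflectors)
    norm(xhat - w)/norm(w)          % relative error in the solution (NE)
    norm(xtilde - w)/norm(w)        % relative error in the solution (QR)
    norm(b - A*xhat)/norm(b)        % relative error in the fit (NE)
    norm(b - A*xtilde)/norm(b)      % relative error in the fit (QR)
    cond(A)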




Repeat the above process for 10 Vandermonde matrices corresponding to polynomials of degrees 6 to 15. Store the relative errors in the solutions corresponding to each method in separate arrays and plot them on the same graph, with a log10 scale on the y-axis, for a comparative analysis. Do the same for the relative errors in the fits (measured relative to b, as above). Also store the condition numbers of the Vandermonde matrices at each step in a single array.
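A sketch of the sweep (degrees 6 to 15; semilogy puts the y-axis on a log10 scale):

    degs = 6:15;  N = length(degs);
    errNE = zeros(N,1); errQR = zeros(N,1);
    fitNE = zeros(N,1); fitQR = zeros(N,1); kappa = zeros(N,1);
    for k = 1:N
        A = X.^(0:degs(k));          % X as in the previous part
        w = randn(degs(k)+1, 1);  b = A*w;
        xhat = (A'*A)\(A'*b);  xtilde = A\b;
        errNE(k) = norm(xhat - w)/norm(w);
        errQR(k) = norm(xtilde - w)/norm(w);
        fitNE(k) = norm(b - A*xhat)/norm(b);
        fitQR(k) = norm(b - A*xtilde)/norm(b);
        kappa(k) = cond(A);
    end
    semilogy(degs, errNE, 'r-o', degs, errQR, 'b-o')
    xlabel('degree'), legend('Normal Equations', 'QR')
    figure, semilogy(degs, fitNE, 'r-o', degs, fitQR, 'b-o')
    xlabel('degree'), legend('Normal Equations', 'QR')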



What do you observe about the performance of the two methods as the degree of the polynomial increases? Which is more sensitive to ill-conditioning, the solutions or the fits?







End ***









