
Homework 7 Solution




Hand in hard copies of your code and results, and answers to the questions. Submit the files BFGS.m and LBFGS.m, as well as typeset or neatly written responses to the questions.




Complete the proof of Theorem 3.7 in the book by showing the converse of the result proved there. That is: under the conditions of the theorem, if $\{x_k\}$ converges superlinearly to $x^*$, then
$$\lim_{k \to \infty} \frac{\|(B_k - \nabla^2 f(x^*))\, p_k\|}{\|p_k\|} = 0.$$
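(A sketch of the key identity for the converse direction, under the theorem's assumptions: since $B_k p_k = -\nabla f(x_k)$ and, once unit steps are taken, $p_k = x_{k+1} - x_k$, using $\nabla f(x^*) = 0$ we can write
$$(B_k - \nabla^2 f(x^*))\, p_k = -\bigl[\nabla f(x_k) - \nabla f(x^*) - \nabla^2 f(x^*)(x_k - x^*)\bigr] - \nabla^2 f(x^*)(x_{k+1} - x^*).$$
The bracketed term is $o(\|x_k - x^*\|)$ by Taylor's theorem, superlinear convergence makes the final term $o(\|x_k - x^*\|)$ as well, and $\|p_k\| = \Theta(\|x_k - x^*\|)$; together these give the limit above.)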




Question 6.7 from the text.



Prove statements 2 and 3 on p. 145 of the text.



(a) Use the BFGS method to solve a nonlinear least squares problem in which the objective is defined by
$$f(x) = \frac{1}{2} \sum_{i=1}^{15} r_i^2(x),$$




where $x \in \mathbb{R}^3$. The function $f$ and its gradient are calculated by the routine nls_resida.m, available on the web site, which has the usual calling sequence for function and gradient evaluation routines. The starting point is specified in testqn.m. Note that testqn.m also prints the number of function and gradient evaluations (numf and numg, respectively). These are already tracked in the function evaluation files, so you don't need to increment them.
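For reference, the gradient of this least-squares objective has the standard form (with $J$ denoting the Jacobian of the residual vector $r$):
$$\nabla f(x) = \sum_{i=1}^{15} r_i(x)\, \nabla r_i(x) = J(x)^T r(x).$$
Your BFGS routine never needs $J$ explicitly; nls_resida.m returns $f$ and $\nabla f$ directly.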




Store the approximation $H_k$ to the inverse Hessian. Use formula (6.20) from the text to reset $H_0$ after the first step has been taken, but before the update to $H_1$ is performed. Your calling sequence should be




function [inform, x] = BFGS(fun, x, qnparams)




where qnparams = struct('toler', 1.0e-6, 'maxit', 1000) and x is a struct with the fields x.p and x.g, as in previous homeworks.
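For concreteness, here is a minimal sketch of the update step at iteration k, assuming s = x_new - x and y = g_new - g have already been formed (the variable names are illustrative; the update is formula (6.17) and the rescaling is formula (6.20) from the text):

n = length(s);
rho = 1/(y'*s);
if k == 1
    % Reset H0 by formula (6.20) after the first step, before the H1 update.
    H = ((y'*s)/(y'*y)) * eye(n);
end
V = eye(n) - rho*(s*y');           % V_k = I - rho_k s_k y_k'
H = V*H*V' + rho*(s*s');           % H_{k+1} = V_k H_k V_k' + rho_k s_k s_k'
p = -H*g_new;                      % next quasi-Newton search direction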




Use the stopping criterion




$$\|\nabla f(x)\|_2 \le \texttt{qnparams.toler}\,\bigl(1 + |f(x)|\bigr).$$




You should use your line search routine EBLS.m with parameter settings $c_1 = 10^{-4}$, $c_2 = 0.4$, maxit = 100, alpha_start = 1.
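In code, the stopping test might read as follows (a sketch; the bare names f and g and the inform.status field are illustrative, following the inform output in the calling sequence above):

% Stop when the gradient is small relative to the function value.
if norm(g, 2) <= qnparams.toler * (1 + abs(f))
    inform.status = 1;    % flag successful convergence (illustrative)
    return;
end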




(b) Repeat this process with the function
$$f(x) = \sum_{i=1}^{3} r_i^2(x),$$
where $x \in \mathbb{R}^2$ and the residuals $r_i(x)$ are built from data $a$, $H$, $d$, and $B$. These quantities are defined in the evaluation routine nls_residb.m, available on the web site, which computes the function and its gradient. The starting point is specified in testqn.m.









Use your BFGS code to minimize the function xpowsing from Homework 6. The starting point and dimension n are specified in testqn.m.



Implement the LBFGS method, Algorithm 7.5 in the text. Use the EBLS.m routine with parameters set as above. Test it on the function



$$f(x) = \frac{1}{2}(x_1 - 1)^2 + \frac{1}{2}\sum_{i=1}^{n-1} (x_i - 2x_{i+1})^4,$$



with $n = 1000$ and $x = (1, 1, \dots, 1)^T$. The evaluation routine for this function is tridia.m. Your calling sequence should be




[inform,xnew] = LBFGS(fun,x,lbfgsparams)




where lbfgsparams is defined by




lbfgsparams = struct('toler', 1.e-4, 'maxit', 1000, 'm', 5);




(The value of m, which is the number of saved steps, is set to a number of different values in the calling code testqn.m.)
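At the core of Algorithm 7.5 is the two-loop recursion (Algorithm 7.4) for computing the search direction. A minimal sketch follows; the function name and the convention of storing the saved pairs as columns of S and Y, oldest first, are my own, and gamma is the standard scaling $\gamma = s^T y / y^T y$ for the initial matrix $H_0 = \gamma I$:

function p = lbfgs_direction(g, S, Y)
% Two-loop recursion: returns p = -H*g, where H is the LBFGS inverse
% Hessian approximation defined by the pairs s_j = S(:,j), y_j = Y(:,j)
% (oldest pair in column 1, most recent pair in the last column).
m = size(S, 2);
alpha = zeros(m, 1);
rho = zeros(m, 1);
q = g;
for j = m:-1:1                                % backward pass
    rho(j) = 1/(Y(:,j)'*S(:,j));
    alpha(j) = rho(j)*(S(:,j)'*q);
    q = q - alpha(j)*Y(:,j);
end
gamma = (S(:,m)'*Y(:,m))/(Y(:,m)'*Y(:,m));    % scaling for H0 = gamma*I
r = gamma*q;
for j = 1:m                                   % forward pass
    beta = rho(j)*(Y(:,j)'*r);
    r = r + (alpha(j) - beta)*S(:,j);
end
p = -r;                                        % quasi-Newton direction
end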




Comment on the following issues.



How does the number of iterations of LBFGS change as a function of the number of saved steps m, from different starting points?



How does the performance of BFGS on xpowsing compare with the techniques you used in the previous homework (namely, nonlinear conjugate gradient and steepest descent)?






















































































