Numerical Methods HW2 Solution


2.1 Errors: Finite Precision Algebra (5 pts)

Consider a “FAKE” computer that rounds any real number into the 3-significant-figure, floating-point decimal format ± d.dd x 10^(d-5), where each d is a decimal digit from 0-9, but there are no other restrictions (no “infinity”, no “forcing a number to be denormal if the d in the exponent is 0”).

(a)  What is the largest number (realmax) you could store in this FAKE computer?

(b)  What is the smallest positive number (realmin) you could store in this FAKE computer that is not denormal (i.e. maintains all 3 significant figures)?

(c)  What is the value of machine precision (ε)?

Now let’s say you want to add the following four numbers with this FAKE computer: S = 1.43 + 173 + 41.3 + 6.41 (The real, exact sum is clearly 222.14)

(d)  What is the final value of S in the fake computer if you did the sum in the left-to-right order written above? What is the percent error in the value of S (compared to the real sum)?

(e)  Demonstrate the ideal order you should add the four numbers to provide the most accurate value of S. What is that final value of S, and what is its percent error?
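To see why summation order matters, here is a short Python sketch (an illustration only, not part of the hand-in) that mimics the FAKE computer by rounding every intermediate sum to 3 significant figures; the helper name fl3 is my own.

```python
import math

def fl3(x):
    """Round x to 3 significant figures, mimicking the FAKE computer's storage."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))
    return round(x, 2 - exp)

nums = [1.43, 173, 41.3, 6.41]   # exact sum: 222.14

# Left-to-right order, rounding after every addition
s = 0.0
for v in nums:
    s = fl3(s + v)               # small terms get swamped by 173

# Ascending order (smallest magnitudes first) lets the small terms
# accumulate before meeting the large one
t = 0.0
for v in sorted(nums):
    t = fl3(t + v)
```

Running this shows the two orders land on different 3-figure results, which is the effect parts (d) and (e) ask you to quantify by hand.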

AS ALWAYS: SHOW ALL YOUR WORK !!! JUSTIFY YOUR RESULTS !!!

I care less about you having the right answer than you showing me the right process. That means you have to show me all your steps in how you came up with (say) ε, or how and why you ordered your numbers a certain way in part (e).

2.2 Analyzing Error plots (Fundamentals of Logarithms) (5 pts)

This problem has nothing to do with MATLAB. It’s to review your high-school “rules” of how logarithms work. Go see the Wikipedia page on logarithms if you forgot how to calculate log10(0.0001), or expand log10(xy) and log10(x^y).

In many numerical processes, the Error varies with some parameter n according to the power law Error = k n^p for constants k and p. We then call p the “order” of the Error with respect to n. This semester we’ll often plot log10(Error) on the y-axis against log10(n) on the x-axis to visualize how quickly Error goes down as n gets smaller.

a) Take the log10 of the equation Error = k n^p and determine in general how the constants k and p relate to the slope and y-intercept of the straight-line log10(Error) vs. log10(n) plot you’d get.
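As a sanity check on part (a), this Python sketch (the constants k = 2 and p = 0.5 are made up for illustration) shows that on a log10-log10 plot the slope of the line recovers p and the y-intercept recovers log10(k):

```python
import math

k, p = 2.0, 0.5                      # hypothetical constants
n1, n2 = 1.0, 1e-4                   # two sample values of n
e1, e2 = k * n1**p, k * n2**p        # Error = k * n^p at each n

# Slope of the log-log line between the two points
slope = (math.log10(e2) - math.log10(e1)) / (math.log10(n2) - math.log10(n1))

# y-intercept of that line (its value at log10(n) = 0)
intercept = math.log10(e1) - slope * math.log10(n1)
```

Here slope comes out to p and 10^intercept comes back to k, which is exactly the relation part (a) asks you to derive symbolically.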


I now plo\ed an example base-10 log of Error





vs. base-10 log of n at right. It shows that Error

0
Smaller Error


goes down for smaller n, and log10(Error) vs.
)




log10(n) looks like a straight line.




b) 
Use the plot to read off the values of Error for
(Error
-2



both n = 1 and n = 0.0001.
10





log



c) 
Apply the general rela@on you developed in











part (a) to determine the value of the error

-4

Smaller n

order p for the data in this plot.











For everything – show your work! Don’t just





write down an answer, show me how you got it.


-4
0





log10 (n)
Numerical Methods – HW2 25

2.3 Fundamentals: Converting between Binary and Decimal. Show your work!!

a) Convert the following base 2 (binary) representations of numbers to base 10 (decimal): (2 pts)

i) (10110101.010111)2 -- “fixed” point binary (many bits to the left of the “point”)

ii) (1.001011)2 x 2^(10110)2 -- “floating” point binary (like how MATLAB stores it)

b) Convert the following decimal numbers to binary. Please do NOT write it in “floating-point binary” form like (a)(ii) above; just write it in the simpler “fixed” point form like (a)(i). (3 pts)

i) 470.125

ii) 12.85 (if this ends up repeating in base 2, just give the answer to the first 7 fractional “bits” to the right of the “point”, i.e. something.bbbbbbb)
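A quick way to check your hand conversions afterwards is with a short script. This Python sketch (the helper names are my own) converts fixed-point binary to decimal and back:

```python
def bin_to_dec(s):
    """Convert a fixed-point binary string like '10110101.010111' to decimal."""
    whole, _, frac = s.partition('.')
    value = float(int(whole, 2))
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2.0**-i        # each fractional bit is worth 2^-i
    return value

def dec_to_bin(x, frac_bits=7):
    """Convert a decimal to fixed-point binary, keeping frac_bits fractional bits."""
    whole = int(x)
    frac, bits = x - whole, ''
    for _ in range(frac_bits):
        frac *= 2                          # shift the next bit left of the point
        bits += str(int(frac))
        frac -= int(frac)
    return f'{whole:b}.{bits}'
```

For example, bin_to_dec('10110101.010111') checks (a)(i), and dec_to_bin(470.125) checks (b)(i) — but show the hand work; the script is only a verification.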

2.4 Recall from class that MATLAB uses standard (IEEE) double-precision floating point notation:

Any Number = +/- (1.bbb…bbb)2 x 2^((bbb…bbb)2 - 1023), where each bit b represents the digit 0 or 1, the mantissa field has 52 bits, and the exponent field has 11 bits.

That is, the mantissa is always assumed to start with a 1, with 52 bits afterwards, and the exponent is an eleven-bit integer (from 000…001 to 111…110) biased by subtracting 1023.

Well, in “my college days” the standard was single-precision floating point notation in 32-bit words:

Any Number = +/- (1.bbb…bbb)2 x 2^((bbbbbbbb)2 - 127), with a 23-bit mantissa field and an 8-bit exponent field.

That is, the mantissa is always assumed to start with a 1, with 23 bits afterwards, and the exponent is an eight-bit integer (from 00000001 to 11111110) biased by subtracting 127 to allow for an almost equal range of positive, zero, and negative exponents. And it still reserved all exponent bits identically equal to 00000000 for the number 0 (and “denormal” numbers), and 11111111 for ∞.
a) Evaluate REALMAX (the largest possible positive number that is not infinity) for my (1990s) single-precision computer. (3 pts)

i. Express your value as a base 10 floating-point number with 3 sig. figs (e.g. 3.45 x 10^25), and

ii. show your work! (What did you start with in binary, and how did you get that to decimal?)

Compare your value to today’s double-precision computers (just type realmax in MATLAB).
b) I told you in class that machine precision in MATLAB, which is defined as the difference between 1 and the next largest storable number, is approximately 2.2204 x 10^-16 in today’s double-precision computers (just type eps in MATLAB to confirm). (3 pts)

i.  Evaluate machine precision for the single-precision computers from “my day”, and express your answer as a base 10 floating-point number with 3 sig. figs (e.g. 3.45 x 10^-10). Show your work! (What did you start with in binary, and how did you get that to decimal?)

ii.  What’s the ratio of machine precision now to machine precision then? (i.e. how much more accurately can we store numbers since 64-bit processors arrived in the early 2000s?)
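If you want to sanity-check your paper answers, the quantities follow directly from the bit layouts described above. A Python sketch (numbers here are just the bit-layout formulas, evaluated):

```python
# Single precision: 23 mantissa bits; exponent field runs 1..254, biased by 127
realmax_single = (2 - 2.0**-23) * 2.0**127   # mantissa all ones, exponent 254-127
eps_single = 2.0**-23                        # gap between 1 and the next single

# Double precision: 52 mantissa bits, bias 1023
eps_double = 2.0**-52                        # the ~2.2204e-16 quoted above

ratio = eps_single / eps_double              # how much finer doubles resolve: 2^29
```

This only confirms the arithmetic; the graded part is showing how the binary bit patterns produce these formulas in the first place.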

EVERYTHING FOR 2.1 – 2.4 IS BEING HANDED IN TO CLASS ON PAPER (not online).

2.5 (12 pts)

From “real” calculus, you should know the derivative of f(x) = x^2 is f’(x) = 2x. So at x = 300, the exact value of the derivative f’(300) is obviously 600.
However, computers approximate the derivative by taking the limit f’(x) = lim_{a→0} [f(x + a) − f(x)] / a.

So applying this to the f(x) = x^2 example above, I would hope that f’(300) would be well-approximated by the limit f’(300) = 600 = lim_{a→0} [(300 + a)^2 − (300)^2] / a.

Let’s test this out using MATLAB for smaller-and-smaller values of a, and look at the errors! Specifically, we’ll look at the right-hand side of the limit using a range of a from 1 to 10^-18, and compare that approximate value of the derivative to the exact value of f’(300) = 600.

(A)  First, make the exact MATLAB function deriv.m below to evaluate the approximate limit for any a:

function fprime = deriv(a)

x = 300;

fprime = ((x+a).^2 - x^2) ./ a;

end

Do not simplify or change the formula for deriv(a) in any way! The equation is important so the mathematical operations are done the same way for everyone in the course. If you change it, you may not see the numerical phenomena below, your explanation of which is being graded.

(B)  Make your own script, called HW2_5.m, that uses deriv.m to calculate vectors of the following four parameters using the 19 values of a in the vector [1, 0.1, 10^-2, 10^-3, … , 10^-18]:

a, deriv(a), absolute value of “error”, absolute value of “% relative error”. For example, you might call the four vectors a, fprime, Error and PctError.

•  Define “error” here as the difference between deriv(a) and the exact value of 600.

•  Your error must be only positive, because soon you’ll take its logarithm. Try using the built-in command abs( ) to take the absolute value of elements in a vector.

(C)  Add to the end of your script, the following code fragment to make two plots in your figure window: one of the approximate derivative, and one of the log10 of the percent relative error, both as a function of the base-10 logarithm of a:

subplot(211); plot(log10(a), fprime, '-o')

xlabel('log_1_0(a)'); ylabel('Approx Derivative')

subplot(212); plot(log10(a), log10(PctError),'-o')

xlabel('log_1_0(a)'); ylabel('log_1_0(|% Error|)')

Are any of these plotting commands new for you?
•  subplot(211) and subplot(212) allow you to put two plots in the same figure.

•  log10(x) returns the base-10 logarithm of x. Don’t use log(x); that’s used instead for the natural log (i.e. ln(x)).

•  The underscores (‘_’) in the x- and y-labels tell MATLAB to write the next letter as a subscript font, so your label looks very professional as log10(a).

2.5 continued …

(D)  Save your figure (with both plots in it) as a single pdf file called PLOT2_5.pdf. I’d use the command print('-dpdf','PLOT2_5.pdf') but do whatever works for you.

That’s it! You’re done the MATLAB work. Now you’re ready to interpret the results.

You would hope that your approximation deriv(a) would just keep getting closer-and-closer to the exact value of 600 as a gets smaller. That’s what “real” calculus would say is true. What you should be seeing is this trend working initially (good!), but then for small-enough a the approximation starts getting worse (bad!), and then once a hits a critical value of 10^-14 things go really bad and you suddenly get a 100% error (log10(% error) = 2)!!! Please see the TA if you don’t see that trend. You need to make sure you did your MATLAB work correctly so we’re all starting from the same values and plot to answer these questions.
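The blow-up is reproducible in any IEEE double-precision environment, not just MATLAB. In this Python sketch (Python floats are the same IEEE doubles), the spacing between adjacent doubles near 300 is about 5.7 x 10^-14, so adding a = 10^-14 to 300 changes nothing:

```python
x, a = 300.0, 1e-14

# The spacing (ulp) of doubles near 300 is 2**-52 * 2**8, about 5.68e-14,
# so x + a rounds right back to x ...
print(x + a == x)                      # True

# ... and the difference quotient collapses to 0 instead of ~600
fprime = ((x + a)**2 - x**2) / a
print(fprime)                          # 0.0
```

Seeing this happen is one thing; problem (c) asks you to predict the critical a from the bit layout, which is the part being graded.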

Problem (a): What is the smallest value of percent error, and the corresponding (“optimum”) value of a? Enter these values directly in the comment box on Carmen.

So smaller a sometimes helps, but not always. Does this make sense? Recall that total numerical error comes from both Finite Precision Algebra and Series Truncation (algorithm) errors. Work through the remaining questions to understand and explain the error trend better:

Problem (b): (A-type “error” in the approximate derivative due to just Finite Precision Algebra)

(i)  Describe how you expect only this source of error to behave as a gets smaller. Be sure to use quantitative reasoning (i.e. don’t just guess) to justify/explain your answer.

(ii) In which region of the plot (as a range of a) do you think the total error is dominated by this (finite precision) source of error?

Problem (c): (Why does “error” suddenly jump to 100% for a at and below 10^-14?)

(i)  Here’s the most important part of the homework: think carefully and explain to me (“quantitatively justify”) exactly why it turned out to be that value of a specifically. In other words, based on the equation we’re using for the derivative approximation, and your understanding of how MATLAB calculates it, how could you have predicted the error would “blow-up” at that specific value of a = 10^-14?

Problem (d): (B-type “error” in the approximate derivative due to just Series Truncation)

(i)  Describe how you expect only this source of error to behave as a gets smaller. Be sure to use quantitative reasoning to justify/explain your answer.

(ii)  Which region of the plot (range of a) do you think is being dominated by this source of error?

(iii)  Look at the slope of the log10(Error) vs. log10(a) plot in the region you identified in (d)(ii). Use that slope to compute the order of the error (i.e. “p”, as defined in problem 2.2).

(iv)  Finally: using just the equations in deriv(a), show me how you could have analytically predicted that value of error order p for this problem, before even making the plot. Hint: similar to part (c)(i), think about how MATLAB is calculating deriv(a) as a gets smaller.
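One way to check your answer to (d): in exact arithmetic (300 + a)^2 − 300^2 = 600a + a^2, so the difference quotient is exactly 600 + a and the truncation error equals a itself. This Python sketch confirms the error tracks a in the region where a is large enough for rounding to be negligible:

```python
x, exact = 300.0, 600.0

# In the truncation-dominated region (a not too small), the error of the
# difference quotient ((x+a)^2 - x^2) / a = 2x + a tracks a itself
for a in (1.0, 0.1, 1e-2):
    approx = ((x + a)**2 - x**2) / a
    error = abs(approx - exact)
    print(a, error)                    # error is approximately a in each case
```

Your written answer still needs the analytical argument; the script only verifies the trend numerically.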

2.5 continued …

Why I love this homework: half is you trying a new process and making observations about the “error”, and the other half is you analytically justifying why those error trends make sense for this particular case.



Please submit the following:

•  ONLINE: 4 things! Your plot PLOT2_5.pdf, documented script HW2_5.m, and two numbers in the comment section: the minimum Error and corresponding value of a from problem (a).

•  ON PAPER: Answers, discussions and justifications/proofs from problems (b), (c) and (d). Here’s the catch: The ON PAPER part MUST USE A SEPARATE PIECE OF PAPER from HW 2.1 – 2.4. I’ll collect your work in class for 2.5 in a separate pile from your stapled 2.1 – 2.4 (goes to a different grader).
