
Assignment 2 Solution

Instructions: Solutions to problems 1 and 2 are to be submitted on Quercus (PDF files only). You are strongly encouraged to do problems 3-6, but these are not to be submitted for grading.







1. Suppose we want to generate independent Normal random variables with mean 0 and variance 1 using some sort of rejection sampling. An intuitively reasonable proposal density is a logistic density with the general form



    g(x; s) = exp(x/s) / [s {1 + exp(x/s)}^2]

where s > 0 is a parameter.




(a) Show that we can generate a random variable X from g(x; s) by setting X = s ln(U/(1 - U)) where U ~ Unif(0, 1).




(b) To generate N(0, 1) random variables using g(x; s) as a proposal density, we should choose s to maximize the probability that a proposal from g(x; s) is accepted; this probability is given by 1/M(s) where

    M(s) = max_x f(x)/g(x; s)

where f(x) is the N(0, 1) density. (Therefore, we want to find s to minimize M(s).) Show that if s >= 1/√2, then f(x)/g(x; s) is maximized at x = 0, with M(s) minimized (for s >= 1/√2) at s = 1/√2. (Hint: Use calculus to find the maximum of ln(f(x)) - ln(g(x; s)).)




(c) Given part (b), we know that M(s) must be minimized for s between 0 and 1/√2. Use some approach to find the value of s that minimizes M(s). (Hint: You may find it useful to use the dnorm and dlogis functions in R; for example, to look at f(x)/g(x; s) for a specified value of s and x between 0 and 3, we could use the following code:

s <- 1/sqrt(2)

x <- c(0:3000)/1000 # generates the values 0, 0.001, 0.002, ..., 2.999, 3

plot(x, dnorm(x)/dlogis(x, location=0, scale=s), type="l")

It is possible to use calculus to determine the optimal value of s, although ad hoc approaches are fine; there is some detective work required here, and you are encouraged to use R to gain some intuition for the problem.)
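For instance, one ad hoc approach is a sketch like the following: approximate the inner maximum M(s) = max_x f(x)/g(x; s) on a fine grid of x values, then minimize over s with optimize. The helper name M, the grid, and the interval endpoints are illustrative choices, not part of the assignment.

```r
# Approximate M(s) = max_x f(x)/g(x; s) on a grid of x values
# (the ratio is symmetric in x, so x >= 0 suffices), then
# minimize the approximate M(s) over s in (0, 1/sqrt(2)].
M <- function(s, x = c(0:3000)/1000) {
  max(dnorm(x) / dlogis(x, location = 0, scale = s))
}
opt <- optimize(M, interval = c(0.01, 1/sqrt(2)))
opt$minimum    # candidate minimizer of M(s)
opt$objective  # the corresponding bound M(s)
```

Plotting M on a grid of s values first is a good way to confirm that the minimum is interior to the interval before trusting optimize.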










2. Suppose we observe y_1, ..., y_n where

    y_i = θ_i + ε_i   (i = 1, ..., n)

where {ε_i} is a sequence of random variables with mean 0 and finite variance representing noise. We will assume that θ_1, ..., θ_n are smooth in the sense that θ_i = g(i) for some continuous and differentiable function g. The least squares estimates of θ_1, ..., θ_n are trivial -- θ̂_i = y_i for all i -- but we can modify least squares in a number of ways to accommodate the "smoothness" assumption on {θ_i}. In this problem, we will consider estimating {θ_i} by minimizing

    Σ_{i=1}^{n} (y_i - θ_i)^2 + λ Σ_{i=2}^{n-1} (θ_{i+1} - 2θ_i + θ_{i-1})^2

where λ >= 0 is a tuning parameter that controls the smoothness of the estimates θ̂_1, ..., θ̂_n.




(a) Show that the estimates θ̂_1, ..., θ̂_n satisfy the equations

    y_1 = (1 + λ)θ̂_1 - 2λθ̂_2 + λθ̂_3
    y_2 = -2λθ̂_1 + (1 + 5λ)θ̂_2 - 4λθ̂_3 + λθ̂_4
    y_j = λθ̂_{j-2} - 4λθ̂_{j-1} + (1 + 6λ)θ̂_j - 4λθ̂_{j+1} + λθ̂_{j+2}   for j = 3, ..., n-2
    y_{n-1} = λθ̂_{n-3} - 4λθ̂_{n-2} + (1 + 5λ)θ̂_{n-1} - 2λθ̂_n
    y_n = λθ̂_{n-2} - 2λθ̂_{n-1} + (1 + λ)θ̂_n

(Note that if we write this in matrix form y = Aθ̂, the matrix A is sparse, having at most 5 non-zero entries per row.)




(b) Show that if the y_i are exactly linear, i.e. y_i = a i + b for all i, then θ̂_i = y_i for all i. (In other words, linear sequences are eigenvectors of A with eigenvalue 1.)
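As a numerical sanity check of this claim, one can assemble A directly from the equations in part (a) and verify that an exactly linear sequence is left unchanged. This is a sketch, not a proof; the helper name make_A and the particular n, λ, a, b are illustrative choices.

```r
# Build the matrix A from part (a) for given n (>= 5) and lambda, and
# check that A maps an exactly linear sequence to itself.
make_A <- function(n, lambda) {
  A <- diag(n)
  # middle rows j = 3, ..., n-2: lambda, -4*lambda, 1 + 6*lambda, -4*lambda, lambda
  for (j in 3:(n - 2)) {
    A[j, (j - 2):(j + 2)] <- c(lambda, -4*lambda, 1 + 6*lambda, -4*lambda, lambda)
  }
  # boundary rows
  A[1, 1:3] <- c(1 + lambda, -2*lambda, lambda)
  A[2, 1:4] <- c(-2*lambda, 1 + 5*lambda, -4*lambda, lambda)
  A[n - 1, (n - 3):n] <- c(lambda, -4*lambda, 1 + 5*lambda, -2*lambda)
  A[n, (n - 2):n] <- c(lambda, -2*lambda, 1 + lambda)
  A
}
A <- make_A(10, lambda = 5)
y <- 3 * (1:10) + 2       # an exactly linear sequence
max(abs(A %*% y - y))     # should be zero up to rounding error
```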



(c) Show (using results from class) that the Gauss-Seidel algorithm can be used to compute the estimates defined in part (a).



(d) Write a function in R to implement the Gauss-Seidel algorithm above. (A template function is provided on Quercus, but you do not need to follow it.) The inputs for this function are a vector of responses y and the tuning parameter lambda. Test your function (for various tuning parameters) on data generated by the following commands:



x <- c(1:1000)/1000



y <- cos(6*pi*x)*exp(-2*x) + rnorm(1000,0,0.05)



Note that the algorithm will converge more slowly as λ increases; the convergence can be improved by modifying the algorithm, for example, by using the Successive Over-Relaxation method described in the text.
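One possible shape for such a function is sketched below, assuming n >= 5 and a fixed iteration count. The name smooth.gs and the absence of a convergence check are illustrative simplifications; the Quercus template may look quite different. Each update solves the corresponding equation from part (a) for θ̂_j, using updated values as soon as they are available (which is what distinguishes Gauss-Seidel from Jacobi iteration).

```r
# Sketch of a Gauss-Seidel solver for the equations in part (a).
# theta is overwritten in place, so each update immediately uses the
# newest values of its neighbours. Assumes length(y) >= 5.
smooth.gs <- function(y, lambda, niter = 1000) {
  n <- length(y)
  theta <- y  # starting values
  for (iter in 1:niter) {
    theta[1] <- (y[1] + 2*lambda*theta[2] - lambda*theta[3]) / (1 + lambda)
    theta[2] <- (y[2] + 2*lambda*theta[1] + 4*lambda*theta[3] -
                 lambda*theta[4]) / (1 + 5*lambda)
    for (j in 3:(n - 2)) {
      theta[j] <- (y[j] - lambda*theta[j-2] + 4*lambda*theta[j-1] +
                   4*lambda*theta[j+1] - lambda*theta[j+2]) / (1 + 6*lambda)
    }
    theta[n-1] <- (y[n-1] - lambda*theta[n-3] + 4*lambda*theta[n-2] +
                   2*lambda*theta[n]) / (1 + 5*lambda)
    theta[n] <- (y[n] - lambda*theta[n-2] + 2*lambda*theta[n-1]) / (1 + lambda)
  }
  theta
}
```

A real solution would stop when successive iterates change by less than some tolerance rather than running a fixed number of sweeps.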




Supplemental problems (not to hand in):




3. To generate random variables from some distributions, we need to generate two independent random variables Y and V where Y has a uniform distribution over some finite set and V has a uniform distribution on [0, 1]. It turns out that Y and V can be generated from a single Unif(0, 1) random variable U.



(a) Suppose for simplicity that the finite set is {0, 1, ..., n-1} for some integer n >= 2. For U ~ Unif(0, 1), define

    Y = ⌊nU⌋ and V = nU - Y

where ⌊x⌋ is the integer part of x. Show that Y has a uniform distribution on the set {0, 1, ..., n-1}, V has a uniform distribution on [0, 1], and Y and V are independent.
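The construction in part (a) is easy to check empirically; the simulation below is a sketch (n = 5 and the sample size are arbitrary illustrative choices), not a proof.

```r
# Split a single Unif(0,1) variable U into Y = floor(n*U), uniform on
# {0, ..., n-1}, and V = n*U - Y, uniform on [0, 1).
set.seed(123)
n <- 5
U <- runif(100000)
Y <- floor(n * U)
V <- n * U - Y
table(Y) / length(Y)   # each of 0, ..., 4 should have frequency near 1/5
c(mean(V), var(V))     # should be near 1/2 and 1/12
```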

(b) What happens to the precision of V defined in part (a) as n increases? (For example, if U has 16 decimal digits and n = 10^6, how many decimal digits will V have?) Is the method in part (a) practically feasible if n is very large?






4. One issue with rejection sampling is a lack of efficiency due to the rejection of random variables generated from the proposal density. An alternative is the acceptance-complement (A-C) method described below.



Suppose we want to generate a continuous random variable from a density f(x), and that f(x) = f_1(x) + f_2(x) (where both f_1 and f_2 are non-negative) with f_1(x) <= g(x) for some density function g. Then the A-C method works as follows:




1. Generate two independent random variables U ~ Unif(0, 1) and X with density g.

2. If U <= f_1(X)/g(X), then return X.

3. Otherwise (that is, if U > f_1(X)/g(X)), generate X from the density

    f_2*(x) = f_2(x) / ∫_{-∞}^{∞} f_2(t) dt.

Note that we must be able to easily sample from g and f_2* in order for the A-C method to be efficient; in some cases, they can both be taken to be uniform distributions.




(a) Show that the A-C method generates a random variable with density f. What is the probability that the X generated in step 1 (from g) is "rejected" in step 2?



(b) Suppose we want to sample from the truncated Cauchy density

    f(x) = 2 / (π(1 + x^2))   (-1 <= x <= 1)

using the A-C method with f_2(x) = k, a constant, for -1 <= x <= 1 (so that f_2*(x) = 1/2 is a uniform density on [-1, 1]) and

    f_1(x) = f(x) - f_2(x) = f(x) - k   (-1 <= x <= 1).

If g(x) is also a uniform density on [-1, 1], for what range of values of k can the A-C method be applied?




(c) Defining f_1, f_2, and g as in part (b), what value of k minimizes the probability that the X generated in step 1 of the A-C algorithm is rejected?
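A sketch of the A-C algorithm for this example is given below. The default k = 1/π is only an illustrative choice that satisfies 0 <= f_1(x) <= 1/2 on [-1, 1]; it is not necessarily the answer to part (c), and the function name rtcauchy.ac is hypothetical.

```r
# Acceptance-complement sampling from the truncated Cauchy on [-1, 1]:
# f2(x) = k (constant), f1(x) = f(x) - k, and g(x) = 1/2 (uniform).
# A rejected proposal is replaced by a draw from f2*(x) = 1/2, which is
# again uniform on [-1, 1] -- so no draw is ever thrown away.
rtcauchy.ac <- function(nsim, k = 1/pi) {
  f <- function(x) 2 / (pi * (1 + x^2))   # truncated Cauchy density
  X <- runif(nsim, -1, 1)                 # step 1: proposals from g
  U <- runif(nsim)
  reject <- U > (f(X) - k) / 0.5          # step 2: accept if U <= f1(X)/g(X)
  X[reject] <- runif(sum(reject), -1, 1)  # step 3: complement draw from f2*
  X
}
set.seed(42)
x <- rtcauchy.ac(100000)
```

Comparing a histogram of x with the curve 2/(π(1 + x²)) is a quick visual check that the sampler behaves as intended.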







5. Suppose we want to generate a random variable X from the tail of a standard normal distribution, that is, a normal distribution conditioned to be greater than some constant b. The density in question is



    f(x) = exp(-x^2/2) / [√(2π)(1 - Φ(b))]   for x >= b

with f(x) = 0 for x < b, where Φ(x) is the standard normal distribution function. Consider rejection sampling using the shifted exponential proposal density

    g(x) = b exp(-b(x - b))   for x >= b.




(a) Define Y to be an exponential random variable with mean 1 and U a uniform random variable on [0, 1] independent of Y. Show that the rejection sampling scheme defines X = b + Y/b if

    Y^2 <= -2 b^2 ln(U).

(Hint: Note that b + Y/b has density g.)




(b) Show that the probability of acceptance is given by

    √(2π) b (1 - Φ(b)) / exp(-b^2/2).

What happens to this probability for large values of b? (Hint: You need to evaluate M = max_{x >= b} f(x)/g(x).)




(c) Suppose we replace the proposal density g defined above by

    g_λ(x) = λ exp(-λ(x - b))   for x >= b.

(Note that g_λ is also a shifted exponential density.) What value of λ maximizes the probability of acceptance? (Hint: Note that you are trying to solve the problem

    min_{λ > 0} max_{x >= b} f(x)/g_λ(x)

for λ. Because the density g_λ(x) has heavier tails, the minimax problem above will have the same solution as the maximin problem

    max_{x >= b} min_{λ > 0} f(x)/g_λ(x)

which may be easier to solve.)
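The scheme from part (a) can be sketched as follows, using the proposal g from part (a) rather than the optimized g_λ from part (c); the function name rnorm.tail is an illustrative choice.

```r
# Rejection sampler for the standard normal conditioned to exceed b:
# propose X = b + Y/b with Y ~ Exp(1) (so X has the shifted exponential
# density g), and accept when Y^2 <= -2 * b^2 * log(U).
rnorm.tail <- function(nsim, b) {
  out <- numeric(0)
  while (length(out) < nsim) {
    m <- nsim - length(out)
    Y <- rexp(m)       # exponential with mean 1
    U <- runif(m)
    keep <- Y^2 <= -2 * b^2 * log(U)
    out <- c(out, b + Y[keep] / b)
  }
  out[1:nsim]
}
set.seed(1)
x <- rnorm.tail(10000, b = 2)
```

Since Y >= 0, every returned draw satisfies X >= b by construction; the acceptance step only shapes the distribution above b.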







6. Another interesting variation of rejection sampling is the ratio of uniforms method. We start by taking a bounded function h with h(x) >= 0 for all x and ∫_{-∞}^{∞} h(x) dx < ∞. We then define the region

    C_h = {(u, v) : 0 <= u <= √(h(v/u))}

and generate (U, V) uniformly distributed on C_h. We then define the random variable X = V/U.




(a) The joint density of (U, V) is

    f(u, v) = 1/|C_h|   for (u, v) ∈ C_h

where |C_h| is the area of C_h. Show that the joint density of (U, X) is

    g(u, x) = u/|C_h|   for 0 <= u <= √(h(x))

and that the density of X is κ h(x) for some constant κ > 0.




(b) The implementation of this method is somewhat complicated by the fact that it is typically difficult to sample (U, V) from a uniform distribution on C_h. However, it is usually possible to find a rectangle of the form D_h = {(u, v) : 0 <= u <= u⁺, v⁻ <= v <= v⁺} such that C_h is contained within D_h. Thus, to draw (U, V) from a uniform distribution on C_h, we can use rejection sampling where we draw proposals (U*, V*) from a uniform distribution on the rectangle D_h; note that the proposals U* and V* are independent random variables with Unif(0, u⁺) and Unif(v⁻, v⁺) distributions, respectively. Show that we can define u⁺, v⁻, and v⁺ as follows:




























    u⁺ = max_x √(h(x)),   v⁻ = min_x x√(h(x)),   v⁺ = max_x x√(h(x)).




(Hint: It suffices to show that if (u, v) ∈ C_h then (u, v) ∈ D_h, where D_h is defined using u⁺, v⁻, and v⁺ above.)



(c) Implement (in R) the method above for the standard normal distribution, taking h(x) = exp(-x^2/2). In this case, u⁺ = 1, v⁻ = -√(2/e) = -0.8577639, and v⁺ = √(2/e) = 0.8577639. What is the probability that proposals are accepted?
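One possible implementation is sketched below; the function name rnorm.rou and the batch-wise bookkeeping used to estimate the acceptance probability are illustrative choices.

```r
# Ratio-of-uniforms sampler for the standard normal, h(x) = exp(-x^2/2).
# Draw (U, V) uniformly on [0, u+] x [v-, v+] and accept when
# U <= sqrt(h(V/U)), equivalently U^2 <= exp(-(V/U)^2 / 2); return X = V/U.
rnorm.rou <- function(nsim) {
  up <- 1
  vp <- sqrt(2/exp(1))   # v+ = -v- = sqrt(2/e)
  out <- numeric(0)
  ntried <- 0
  while (length(out) < nsim) {
    m <- nsim - length(out)
    U <- runif(m, 0, up)
    V <- runif(m, -vp, vp)
    keep <- U^2 <= exp(-(V/U)^2 / 2)
    out <- c(out, (V/U)[keep])
    ntried <- ntried + m
  }
  list(x = out[1:nsim], accept.rate = length(out) / ntried)
}
set.seed(1)
res <- rnorm.rou(100000)
res$accept.rate   # empirical acceptance probability
```

The empirical acceptance rate should approximate |C_h|/|D_h|, the ratio of the area of C_h to the area of the bounding rectangle.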














































































































































































