Instructions: Solutions to problems 1 and 2 are to be submitted on Quercus (PDF files only). You are strongly encouraged to do problems 3-6 but these are not to be submitted for grading.
1. In lecture, we showed that we could estimate the trace of a matrix $A$, $\text{tr}(A)$, by
$$\widehat{\text{tr}}(A) = \frac{1}{m}\sum_{i=1}^m V_i^T A V_i$$
where $V_1, \dots, V_m$ are independent random vectors with $E[V_i V_i^T] = I$.

(a) Suppose that the elements of each $V_i$ are independent, identically distributed random variables with mean 0 and variance 1. Show that $\text{Var}(\widehat{\text{tr}}(A))$ is minimized by taking the elements of $V_i$ to be $\pm 1$, each with probability $1/2$.
(Hint: This is easier than it looks: $\text{Var}(V^T A V) = E[(V^T A V)^2] - \text{tr}(A)^2$, so it suffices to minimize
$$E[(V^T A V)^2] = \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\sum_{\ell=1}^n a_{ij}a_{k\ell}\,E(V_i V_j V_k V_\ell).$$
Given our conditions on the elements $V_1, \dots, V_n$ of $V$, most of the expectations $E(V_i V_j V_k V_\ell)$ are either 0 or 1. You should be able to show that
$$E[(V^T A V)^2] = \sum_{i=1}^n a_{ii}^2\,E(V_i^4) + \text{constant}$$
and find $V_i$ to minimize $E(V_i^4)$ subject to $E(V_i^2) = 1$.)
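For concreteness, the estimator is easy to sketch in R with $\pm 1$ elements; the names trace.est, A, and m below are illustrative, not part of the assignment:

trace.est <- function(A, m = 100) {
  n <- nrow(A)
  est <- 0
  for (i in 1:m) {
    v <- sample(c(-1, 1), n, replace = TRUE)  # elements are +/-1 with prob. 1/2
    est <- est + sum(v * (A %*% v))           # V^T A V
  }
  est/m
}
A <- crossprod(matrix(rnorm(100), 10, 10))    # a test matrix
c(trace.est(A, m = 1000), sum(diag(A)))       # estimate vs. exact trace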
(b) We also noted that if $B$ is a symmetric $n \times n$ matrix whose eigenvalues are less than 1 in absolute value then we have the following formula for the determinant of the matrix $I - B$:
$$\det(I - B) = \exp\left(-\sum_{k=1}^\infty \frac{1}{k}\,\text{tr}(B^k)\right)$$
and so we can estimate this determinant by
$$\widehat{\det}(I - B) = \exp\left(-\frac{1}{m}\sum_{i=1}^m\left[V_i^T B V_i + \frac{1}{2}V_i^T B^2 V_i + \cdots + \frac{1}{r}V_i^T B^r V_i\right]\right)$$
where $r$ is "large enough".
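A minimal R sketch of this estimator (illustrative names; B is any matrix satisfying the eigenvalue condition), which accumulates $B^k V_i$ recursively rather than forming the powers $B^k$:

det.est <- function(B, m = 100, r = 50) {
  n <- nrow(B)
  total <- 0
  for (i in 1:m) {
    v <- sample(c(-1, 1), n, replace = TRUE)
    Bv <- v
    for (k in 1:r) {
      Bv <- B %*% Bv                   # Bv now holds B^k v
      total <- total + sum(v * Bv)/k   # V^T B^k V / k
    }
  }
  exp(-total/m)
}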
In linear regression, one measure of the leverage of a subset of $\ell$ observations $(x_1, y_1), \dots, (x_\ell, y_\ell)$ is $1 - \det(I - H_{11})$, where $H_{11}$ is the $\ell \times \ell$ matrix defined in terms of the linear regression "hat" matrix as follows:
$$H = X(X^T X)^{-1}X^T = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}.$$
From above, we can estimate $\det(I - H_{11})$ by
$$\widehat{\det}(I - H_{11}) = \exp\left(-\frac{1}{m}\sum_{i=1}^m\left[V_i^T H_{11} V_i + \frac{1}{2}V_i^T H_{11}^2 V_i + \cdots + \frac{1}{r}V_i^T H_{11}^r V_i\right]\right)$$
for some $r$.
Show that we can compute $H_{11}V$ and $H_{11}^k V$ for $k \geq 2$ using the hat matrix $H$ as follows:
$$H\begin{pmatrix} V \\ 0 \end{pmatrix} = \begin{pmatrix} H_{11}V \\ H_{21}V \end{pmatrix}
\qquad\text{and}\qquad
H\begin{pmatrix} H_{11}^{k-1}V \\ 0 \end{pmatrix} = \begin{pmatrix} H_{11}^k V \\ H_{21}H_{11}^{k-1}V \end{pmatrix}.$$
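These identities mean that each multiplication by $H_{11}$ only requires one application of $H$ to a zero-padded vector. A minimal sketch, assuming the subset consists of the first $\ell$ rows of the design matrix (the design X and all names below are illustrative):

X <- matrix(rnorm(200*5), 200, 5)         # a generic design, for illustration
l <- 20                                   # subset = first l observations
n <- nrow(X)
qrX <- qr(X)                              # QR decomposition of X
H11.times <- function(v) {
  padded <- c(v, rep(0, n - length(v)))   # the vector (V, 0)
  qr.fitted(qrX, padded)[1:length(v)]     # top block of H(V, 0), i.e. H11 V
}
v <- sample(c(-1, 1), l, replace = TRUE)
H11v  <- H11.times(v)                     # H11 V
H11v2 <- H11.times(H11v)                  # H11^2 V, and so on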
(c) On Quercus, there is a function leverage (in the file leverage.txt) that computes the leverage for a given subset of observations for a design matrix X. (This function uses the QR decomposition of X to compute $HV_i$; the relevant functions are qr, which computes the QR decomposition of X, and qr.fitted, which computes $Hy = QQ^T y$.)
Suppose that $y_i = g(x_i) + \varepsilon_i$ for $i = 1, \dots, n$ for some smooth function $g$, and consider the following two parametric models for $g$:
$$g_1(x) = \beta_0 + \sum_{k=1}^5 \{\beta_{2k-1}\cos(2\pi k x) + \beta_{2k}\sin(2\pi k x)\}$$
and
$$g_2(x) = \beta_0 + \beta_1\psi_1(x) + \cdots + \beta_{10}\psi_{10}(x)$$
where $\psi_1(x), \dots, \psi_{10}(x)$ are B-spline functions. Suppose that $x_1, \dots, x_{1000}$ are equally spaced points on $[0, 1]$ with $x_i = i/1000$. The B-spline functions and the respective design matrices can be constructed using the following R code:
x <- c(1:1000)/1000
X1<-1
for (k in 1:5) X1 <- cbind(X1,cos(2*k*pi*x),sin(2*k*pi*x))
library(splines) # loads the library of functions to compute B-splines
X2 <- cbind(1,bs(x,df=10))
Note that both X1 and X2 are $1000 \times 11$ matrices. You can see the B-spline functions $\psi_1(x), \dots, \psi_{10}(x)$ as follows:
plot(x,X2[,2])
for (i in 3:11) points(x,X2[,i])
Estimate the leverage of the points $\{x_i : (k-1)/20 < x_i \leq k/20\}$ for $k = 1, \dots, 20$ for both designs; for each design, you will obtain 20 leverages. Comment on the differences between the leverages estimated for the two designs. To estimate the leverages for the two designs, you may want to modify the function leverage to estimate the leverages for both designs using the same values of $V_1, \dots, V_m$ (why?).
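Since the Quercus leverage function is not reproduced here, the following skeleton writes the estimator out directly in terms of qr.fitted; all names are illustrative, and it assumes the vector x from the code above. The point is that the same matrix Vmat of random $\pm 1$ vectors is used for both designs:

leverage.all <- function(X, Vmat, r = 30) {
  qrX <- qr(X)
  n <- nrow(X)
  m <- ncol(Vmat)
  sapply(1:20, function(k) {
    idx <- which(x > (k - 1)/20 & x <= k/20)   # observations in subset k
    total <- 0
    for (i in 1:m) {
      v <- Vmat[idx, i]
      Hkv <- v
      for (j in 1:r) {
        padded <- numeric(n)
        padded[idx] <- Hkv
        Hkv <- qr.fitted(qrX, padded)[idx]     # H11^j V_i
        total <- total + sum(v * Hkv)/j
      }
    }
    1 - exp(-total/m)                # leverage = 1 - det-hat(I - H11)
  })
}
m <- 50
Vmat <- matrix(sample(c(-1, 1), 1000*m, replace = TRUE), 1000, m)
lev1 <- leverage.all(X1, Vmat)       # the same Vmat is used for both designs,
lev2 <- leverage.all(X2, Vmat)       # so the comparison is not clouded by
                                     # independent simulation noise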
2. Suppose that $X_1, \dots, X_n$ are independent Gamma random variables with common density
$$f(x; \alpha, \lambda) = \frac{\lambda^\alpha x^{\alpha-1}\exp(-\lambda x)}{\Gamma(\alpha)} \qquad \text{for } x \geq 0$$
where $\alpha > 0$ and $\lambda > 0$ are unknown parameters.
The mean and variance of the Gamma distribution are $\mu = \alpha/\lambda$ and $\sigma^2 = \alpha/\lambda^2$, respectively. Use these to define method of moments estimates of $\alpha$ and $\lambda$ based on the sample mean and variance of the data $x_1, \dots, x_n$.
Derive the likelihood equations for the MLEs of $\alpha$ and $\lambda$, and derive a Newton-Raphson algorithm for computing the MLEs based on $x_1, \dots, x_n$. Implement this algorithm in R and test it on data generated from a Gamma distribution (using the R function rgamma). Your function should also output an estimate of the variance-covariance matrix of the MLEs; this can be obtained from the Hessian of the log-likelihood function.
Important note: To implement the Newton-Raphson algorithm, you will need to compute the first and second derivatives of $\ln\Gamma(\alpha)$. These two derivatives are called (respectively) the digamma and trigamma functions, and these functions are available in R as digamma and trigamma; for example,
gamma(2)    # gamma function evaluated at 2
[1] 1
digamma(2)  # digamma function evaluated at 2
[1] 0.4227843
trigamma(2) # trigamma function evaluated at 2
[1] 0.6449341
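A possible skeleton for the Newton-Raphson function is sketched below. The score and Hessian follow from the log-likelihood $\ell(\alpha, \lambda) = n\alpha\ln\lambda + (\alpha - 1)\sum_i \ln x_i - \lambda\sum_i x_i - n\ln\Gamma(\alpha)$; the names gamma.mle, alp, and lam are illustrative:

gamma.mle <- function(x, tol = 1e-8, maxit = 100) {
  n <- length(x)
  lam <- mean(x)/var(x)              # method-of-moments starting values
  alp <- mean(x)*lam
  theta <- c(alp, lam)
  for (it in 1:maxit) {
    alp <- theta[1]; lam <- theta[2]
    score <- c(n*log(lam) + sum(log(x)) - n*digamma(alp),   # dl/d alpha
               n*alp/lam - sum(x))                          # dl/d lambda
    hess <- matrix(c(-n*trigamma(alp), n/lam,
                     n/lam, -n*alp/lam^2), 2, 2)
    step <- solve(hess, score)       # Newton-Raphson step
    theta <- theta - step
    if (max(abs(step)) < tol) break
  }
  list(mle = theta, varcov = solve(-hess))   # inverse observed information
}
x <- rgamma(500, shape = 2, rate = 3)        # test data
gamma.mle(x)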
Supplemental problems:
3. Consider LASSO estimation in linear regression where we define $\widehat{\beta}(\lambda)$ to minimize
$$\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 + \lambda\sum_{j=1}^p |\beta_j|$$
for some $\lambda \geq 0$. (We assume that the predictors are centred and scaled to have mean 0 and variance 1, in which case $\bar{y}$ is the estimate of the intercept.) Suppose that the least squares estimate (i.e. for $\lambda = 0$) is non-unique; this may occur, for example, if there is some exact linear dependence in the predictors or if $p > n$. Define
$$s = \min_{\beta}\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2$$
and the set
$$C = \left\{\beta : \sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 = s\right\}.$$
We want to look at what happens to the LASSO estimate $\widehat{\beta}(\lambda)$ as $\lambda \downarrow 0$.
(a) Show that $\widehat{\beta}(\lambda)$ minimizes
$$\frac{1}{\lambda}\left\{\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 - s\right\} + \sum_{j=1}^p |\beta_j|.$$
(b) Find the limit of
$$\frac{1}{\lambda}\left\{\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 - s\right\}$$
as $\lambda \downarrow 0$ as a function of $\beta$. (What happens when $\beta \notin C$?) Use this to deduce that, as $\lambda \downarrow 0$, $\widehat{\beta}(\lambda) \to \widehat{\beta}_0$ where $\widehat{\beta}_0$ minimizes $\sum_{j=1}^p |\beta_j|$ on the set $C$.

(c) Show that $\widehat{\beta}_0$ is the solution of a linear programming problem. (Hint: Note that $C$ can be expressed in terms of $\beta$ satisfying $p$ linear equations.)
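To illustrate the hint (a sketch only, with illustrative names and simulated data): writing $\beta = u - v$ with $u, v \geq 0$ gives a standard linear program, which can be solved with, for example, the lpSolve package:

library(lpSolve)
set.seed(1)
n <- 50; p <- 60                   # p > n, so least squares is non-unique
X <- scale(matrix(rnorm(n*p), n, p))
y <- rnorm(n)
yc <- y - mean(y)
XtX <- crossprod(X)
out <- lp(direction = "min",
          objective.in = rep(1, 2*p),       # sum(u) + sum(v)
          const.mat = cbind(XtX, -XtX),     # normal equations X^T X (u - v) = X^T yc:
          const.dir = rep("=", p),          #   the p linear equations defining C
          const.rhs = as.vector(crossprod(X, yc)))
beta0 <- out$solution[1:p] - out$solution[-(1:p)]

At an optimal solution $u_j$ and $v_j$ are never both positive, so the objective equals $\sum_j |\beta_j|$.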
4. Consider minimizing the function
$$g(x) = x^2 - 2\delta x + \lambda|x|^\gamma$$
where $\lambda > 0$ and $0 < \gamma < 1$. (This problem arises, in a somewhat more complicated form, in shrinkage estimation in regression.) The function $|x|^\gamma$ has a "cusp" at 0, which means that if $\lambda$ is sufficiently large then $g$ is minimized at $x = 0$.
(a) $g$ is minimized at $x = 0$ if, and only if,
$$|\delta| \leq \frac{2-\gamma}{2(1-\gamma)}\left[\lambda(1-\gamma)\right]^{1/(2-\gamma)}. \qquad (1)$$
Otherwise, $g$ is minimized at $x^*$ satisfying $g'(x^*) = 0$. Using R, compare the following two iterative algorithms for computing $x^*$ (when condition (1) does not hold):
(i) Set $x_0 = \delta$ and define
$$x_k = \delta - \frac{\lambda\gamma|x_{k-1}|^\gamma}{2x_{k-1}}, \qquad k = 1, 2, 3, \dots$$
(ii) The Newton-Raphson algorithm with $x_0 = \delta$.
Use different values of $\delta$, $\lambda$, and $\gamma$ to test these algorithms. Which algorithm is faster?
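A minimal sketch of such a comparison, with arbitrary parameter values chosen to violate condition (1):

del <- 2; lam <- 1; gam <- 0.5    # arbitrary values violating condition (1)
x <- del                          # (i) fixed-point iteration
for (k in 1:50) x <- del - lam*gam*abs(x)^gam/(2*x)
x.fixed <- x
x <- del                          # (ii) Newton-Raphson on g'(x) = 0
for (k in 1:50) {
  g1 <- 2*x - 2*del + lam*gam*x^(gam - 1)         # g'(x) for x > 0
  g2 <- 2 + lam*gam*(gam - 1)*x^(gam - 2)         # g''(x)
  x <- x - g1/g2
}
c(x.fixed, x)                     # the two algorithms should agree

To compare speed properly, one would count the iterations each algorithm needs to reach a fixed tolerance rather than running a fixed 50 steps.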
(b) Functions like $g$ arise in so-called bridge estimation in linear regression (a generalization of the LASSO); such estimation combines the features of ridge regression (which shrinks least squares estimates towards 0) and model selection methods (which produce exact 0 estimates for some or all parameters). Bridge estimates $\widehat{\beta}$ minimize (for some $\lambda \geq 0$ and $\gamma > 0$)
$$\sum_{i=1}^n (y_i - x_i^T\beta)^2 + \lambda\sum_{j=1}^p |\beta_j|^\gamma. \qquad (2)$$
See the paper by Huang, Horowitz and Ma (2008) ("Asymptotic properties of bridge estimators in sparse high-dimensional regression models", Annals of Statistics 36, 587-613) for details. Describe how the algorithms in part (a) could be used to define a coordinate descent algorithm to find $\widehat{\beta}$ minimizing (2) iteratively, one parameter at a time.
(c) Prove that g is minimized at 0 if, and only if, condition (1) in part (a) holds.
5. Suppose that $A$ is a symmetric non-negative definite matrix with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0$. Consider the following algorithm for computing the maximum eigenvalue $\lambda_1$: given $x_0$, define for $k = 0, 1, 2, \dots$,
$$x_{k+1} = \frac{Ax_k}{\|Ax_k\|_2} \qquad\text{and}\qquad \lambda_{k+1} = \frac{x_{k+1}^T A x_{k+1}}{x_{k+1}^T x_{k+1}}.$$
Under certain conditions, $\lambda_k \to \lambda_1$, the maximum eigenvalue of $A$; this algorithm is known as the power method and is particularly useful when $A$ is sparse.
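A direct R transcription of the iteration (illustrative names only):

power.method <- function(A, x0, niter = 200) {
  x <- x0
  for (k in 1:niter) {
    x <- A %*% x
    x <- x/sqrt(sum(x^2))          # x_{k+1} = A x_k / ||A x_k||_2
  }
  sum(x*(A %*% x))/sum(x^2)        # Rayleigh quotient: estimate of lambda_1
}
A <- crossprod(matrix(rnorm(100), 10, 10))
c(power.method(A, rnorm(10)), max(eigen(A)$values))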
(a) Suppose that $v_1, \dots, v_n$ are the eigenvectors of $A$ corresponding to the eigenvalues $\lambda_1, \dots, \lambda_n$. Show that $\lambda_k \to \lambda_1$ if $x_0^T v_1 \neq 0$ and $\lambda_1 > \lambda_2$.

(b) What happens to the algorithm if the maximum eigenvalue is not unique, that is, $\lambda_1 = \lambda_2 = \cdots = \lambda_k$?
6. (a) Suppose that $A$ is an invertible matrix that can be written as $I - B$ where $B$ has its eigenvalues in the interval $(-1, 1)$. Show that
$$\text{tr}(A^{-1}) = \sum_{k=0}^\infty \text{tr}(B^k)$$
(where $B^0 = I$).
(b) We can use Hutchinson's method to estimate $\text{tr}(A^{-1})$ by exploiting the formula in part (a), truncating the infinite series at some finite point $r$ (where $B^k \approx 0$ for $k > r$). The key lies in writing $A = \gamma(I - B)$ for some constant $\gamma$ and matrix $B$ whose eigenvalues lie in $(-1, 1)$; then
$$\text{tr}(A^{-1}) = \frac{1}{\gamma}\sum_{k=0}^\infty \text{tr}(B^k).$$
Suppose that $\lambda_1, \dots, \lambda_n > 0$ are the eigenvalues of $A$ and define $\bar{\lambda}$ so that
$$\bar{\lambda} \geq \max_{1\leq i\leq n}\lambda_i.$$
In terms of $\bar{\lambda}$, how would you define $\gamma$ and $B$?
(c) Suppose that $A$ is symmetric positive definite with elements $\{a_{ij} : 1 \leq i, j \leq n\}$. Show that we can take $\bar{\lambda}$ in part (b) to be
$$\bar{\lambda} = \max_{1\leq i\leq n}\sum_{j=1}^n |a_{ij}|.$$
(Hint: Show that $\bar{\lambda} = \|A\|_\infty$, which bounds the eigenvalues of $A$ in absolute value.)
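Putting parts (a)-(c) together, a hedged sketch of the full procedure; taking $\gamma = \bar{\lambda}$ and $B = I - A/\bar{\lambda}$ is one natural answer to part (b), and all names are illustrative:

trinv.est <- function(A, m = 100, r = 500) {
  n <- nrow(A)
  lbar <- max(rowSums(abs(A)))     # part (c): the maximum absolute row sum
  B <- diag(n) - A/lbar            # A = lbar*(I - B); eigenvalues of B in [0, 1)
  total <- 0
  for (i in 1:m) {
    v <- sample(c(-1, 1), n, replace = TRUE)
    total <- total + n             # k = 0 term: V^T B^0 V = V^T V = n
    Bv <- v
    for (k in 1:r) {
      Bv <- B %*% Bv
      total <- total + sum(v * Bv) # V^T B^k V
    }
  }
  total/(m*lbar)                   # (1/lbar) * estimated sum of tr(B^k)
}
A <- crossprod(matrix(rnorm(400), 20, 20)) + diag(20)
c(trinv.est(A), sum(diag(solve(A))))   # estimate vs. exact tr(A^{-1})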