Homework 2 Solution




You may complete this homework assignment either individually or in a team of up to 2 people.




Age regression: Train an age regressor that analyzes a (48 × 48 = 2304)-pixel grayscale face image and outputs a real number ŷ that estimates how old the person is (in years). Your regressor should be implemented using linear regression. The training and testing data are available here:



https://s3.amazonaws.com/jrwprojects/age_regression_Xtr.npy
https://s3.amazonaws.com/jrwprojects/age_regression_ytr.npy
https://s3.amazonaws.com/jrwprojects/age_regression_Xte.npy
https://s3.amazonaws.com/jrwprojects/age_regression_yte.npy




Note: you must complete this problem using only linear algebraic operations in numpy; you may not use any off-the-shelf linear regression software, as that would defeat the purpose.
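A minimal loading sketch (assuming the four .npy files above have been downloaded into the working directory under the same names as in the URLs; the reshape is a no-op if the images are already stored as flat 2304-dimensional rows):

import numpy as np

# Load the training and testing data; one flattened 2304-pixel row per face.
X_tr = np.load("age_regression_Xtr.npy").reshape(-1, 48 * 48)
y_tr = np.load("age_regression_ytr.npy")
X_te = np.load("age_regression_Xte.npy").reshape(-1, 48 * 48)
y_te = np.load("age_regression_yte.npy")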




One-shot (analytical) solution [20 points]: Compute the optimal weights w = (w_1, ..., w_2304) and bias term b for a linear regression model by deriving the expression for the gradient of the cost function w.r.t. w and b, setting it to 0, and then solving. The cost function is



$$ f_{\mathrm{MSE}}(w, b) = \frac{1}{2n} \sum_{i=1}^{n} \left( \hat{y}^{(i)} - y^{(i)} \right)^2 $$



where ŷ = g(x; w, b) = xᵀw + b and n is the number of examples in the training set Dtr = {(x^(1), y^(1)), ..., (x^(n), y^(n))}, each x^(i) ∈ ℝ^2304 and each y^(i) ∈ ℝ (the person's age in years). After optimizing w and b only on the training set, compute and report the cost fMSE on the training set Dtr and (separately) on the testing set Dte. Suggestion: to solve for w and b simultaneously, use the trick shown in class whereby each image (represented as a vector x) is appended with a constant 1 term (to yield an appended representation x̃). Then compute the optimal w̃ (comprising the original w and an appended b term) using the closed-form expression:

$$ \tilde{w} = \left( \tilde{X}^\top \tilde{X} \right)^{-1} \tilde{X}^\top y $$




For appending, you might find the functions np.hstack, np.vstack, and np.atleast_2d useful. After optimizing w̃ on the training set, compute and report the cost fMSE on the training set Dtr and (separately) on the testing set Dte.
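One possible numpy sketch of the closed-form solution, assuming the rows of X_tr and X_te are the flattened images loaded above (np.linalg.solve is used instead of an explicit matrix inverse for numerical stability):

def train_analytical(X, y):
    # Append a constant-1 column so the bias b becomes the last entry of w_tilde.
    X_tilde = np.hstack([X, np.ones((X.shape[0], 1))])
    # Solve (X~^T X~) w~ = X~^T y rather than forming the inverse explicitly.
    w_tilde = np.linalg.solve(X_tilde.T @ X_tilde, X_tilde.T @ y)
    return w_tilde[:-1], w_tilde[-1]  # w (2304 weights) and b

def f_mse(X, y, w, b):
    # Half mean squared error, matching the cost defined above.
    residuals = X @ w + b - y
    return np.mean(residuals ** 2) / 2

w_a, b_a = train_analytical(X_tr, y_tr)
print(f_mse(X_tr, y_tr, w_a, b_a), f_mse(X_te, y_te, w_a, b_a))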




Gradient descent [25 points]: Pick a random starting value for w ∈ ℝ^2304 and b ∈ ℝ and a small learning rate (e.g., ε = 0.001). (In my code, I sampled each component of w and b from a Normal distribution with standard deviation 0.01; use np.random.randn.) Then, using the expression for the gradient of the cost function, iteratively update w, b to reduce the cost fMSE(w, b). Stop after conducting T gradient descent iterations (I suggest T = 5000 with a step size (aka learning rate) of ε = 0.003). After optimizing w and b only on the training set, compute and report the cost fMSE on the training set Dtr and (separately) on the testing set Dte.




Note: as mentioned during class, on this particular dataset it would take a very long time for gradient descent to reach weights as good as the w found by the analytical solution. For T = 5000, your training cost in part (b) will be higher than in part (a). However, the testing cost should actually be lower, since the relatively small number of gradient descent steps prevents w from growing too large and hence acts as an implicit regularizer.
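A gradient-descent sketch under the same assumptions as above (the gradient expressions follow from differentiating fMSE with respect to w and b; f_mse and the data arrays come from the earlier sketches):

def train_gd(X, y, T=5000, lr=0.003):
    n, d = X.shape
    # Initialize w and b with small Normal noise (standard deviation 0.01).
    w = 0.01 * np.random.randn(d)
    b = 0.01 * np.random.randn()
    for _ in range(T):
        residuals = X @ w + b - y        # shape (n,)
        grad_w = X.T @ residuals / n     # gradient of fMSE w.r.t. w
        grad_b = residuals.mean()        # gradient of fMSE w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w_b, b_b = train_gd(X_tr, y_tr)
print(f_mse(X_tr, y_tr, w_b, b_b), f_mse(X_te, y_te, w_b, b_b))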













Regularization [15 points]: Same as (b) above, but change the cost function to include a penalty for ‖w‖² growing too large:
$$ \tilde{f}_{\mathrm{MSE}}(w, b) = \frac{1}{2n} \sum_{i=1}^{n} \left( \hat{y}^{(i)} - y^{(i)} \right)^2 + \frac{\alpha}{2n} w^\top w $$
where α ∈ ℝ⁺ is the regularization strength. Set α = 1.0 (this worked well for me) and then optimize f̃MSE w.r.t. w and b. After optimizing w and b (using f̃MSE), compute and report the cost fMSE (without the L2 term) on the training set Dtr and (separately) the testing set Dte. Important: the regularization should be applied only to w, not to b. I suggest a regularization strength of α = 0.1.




Note: as mentioned during class, since part (b) already provides implicit regularization by limiting the number of gradient descent steps (to T = 5000), you should not expect to see much (or any) difference between parts (c) and (b) on this dataset. In general, however, the L2 regularization term can make a big difference.
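Relative to the sketch in part (b), the only change is an extra (α/n)·w term in the weight gradient; the bias gradient is untouched since b is not regularized:

def train_gd_l2(X, y, T=5000, lr=0.003, alpha=0.1):
    n, d = X.shape
    w = 0.01 * np.random.randn(d)
    b = 0.01 * np.random.randn()
    for _ in range(T):
        residuals = X @ w + b - y
        grad_w = X.T @ residuals / n + (alpha / n) * w   # L2 penalty applies to w only
        grad_b = residuals.mean()                        # b is not regularized
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w_c, b_c = train_gd_l2(X_tr, y_tr, alpha=0.1)
print(f_mse(X_tr, y_tr, w_c, b_c), f_mse(X_te, y_te, w_c, b_c))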




Visualizing the machine’s behavior [10 points]: After training the regressors in parts (a), (b), and (c), create a 48 × 48 image representing the learned weights w (without the b term) from each of the different training methods. Use plt.imshow(). How are the weight vectors from the different methods different? Next, using the regressor in part (c), predict the ages of all the images in the test set and report the RMSE (in years). Then, show the top 5 most egregious errors, i.e., the test images whose ground-truth label y is farthest from your machine’s estimate ŷ. Include the images, along with the associated y and ŷ values, in a PDF.
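A rough sketch of the visualization and error analysis (w_c, b_c are the part-(c) weights and bias from the sketch above; matplotlib is assumed to be available):

import matplotlib.pyplot as plt

# Display the learned weights from part (c) as a 48x48 grayscale image.
plt.imshow(w_c.reshape(48, 48), cmap="gray")
plt.title("Learned weights (part c)")
plt.show()

# RMSE (in years) of the part-(c) regressor on the test set.
y_hat = X_te @ w_c + b_c
rmse = np.sqrt(np.mean((y_hat - y_te) ** 2))
print("Test RMSE (years):", rmse)

# The 5 test images with the largest absolute error, worst first.
worst = np.argsort(np.abs(y_hat - y_te))[-5:][::-1]
for i in worst:
    plt.imshow(X_te[i].reshape(48, 48), cmap="gray")
    plt.title(f"y = {y_te[i]:.1f}, y_hat = {y_hat[i]:.1f}")
    plt.show()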



Extra credit [1 point]: Suppose you trained a linear regressor on a set of training data to predict whether a face image was smiling (1) or not smiling (0). Let α be the L2 regularization strength. What will happen to the optimal weights w as α → ∞, i.e., what will they look like as α increases? Justify your answer with a precise mathematical argument based on the analytical solution (which you have to derive yourself) to L2-regularized linear regression.
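For reference, a sketch of the derivation, ignoring the unregularized bias for brevity and using the same design-matrix convention as above (the 1/n factors cancel and do not affect the limit):

$$ \nabla_w \tilde{f}_{\mathrm{MSE}}(w) = \frac{1}{n} X^\top (Xw - y) + \frac{\alpha}{n} w = 0 \quad\Longrightarrow\quad w^* = \left( X^\top X + \alpha I \right)^{-1} X^\top y $$

As α → ∞, the matrix (XᵀX + αI)⁻¹ behaves like (1/α)I, so w* → 0: the optimal weights shrink toward the all-zeros vector.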



Submission: Put your solution in a Python file called homework2_WPIUSERNAME.py







(or homework2_WPIUSERNAME1_WPIUSERNAME2.py for teams), and show the most egregious errors for part (a) in homework2_errors_WPIUSERNAME.pdf.

















































































