
Problem Set 2: Perceptron and Regression

Design (specify $\theta$ for) a two-input perceptron (with an additional bias or offset term) that computes the following boolean functions. Assume $T = 1$ and $F = -1$. If a valid perceptron exists, show that it is not unique by designing another valid perceptron (with a different hyperplane, not simply through normalization). If no perceptron exists, state why.

(a) OR    (b) XOR


    • Logistic Regression [10 pts]

Consider the objective function that we minimize in logistic regression:

$$J(\theta) = -\sum_{n=1}^{N} \left[ y_n \log h_\theta(x_n) + (1 - y_n) \log\left(1 - h_\theta(x_n)\right) \right]$$


    (a) Find the partial derivatives $\frac{\partial J}{\partial \theta_j}$.

(b) Find the second partial derivatives $\frac{\partial^2 J}{\partial \theta_j \partial \theta_k}$ and show that the Hessian (the matrix $H$ of second derivatives with elements $H_{jk} = \frac{\partial^2 J}{\partial \theta_j \partial \theta_k}$) can be written as
$$H = \sum_{n=1}^{N} h_\theta(x_n)\left(1 - h_\theta(x_n)\right) x_n x_n^T.$$


    (c) Show that J is a convex function and therefore has no local minima other than the global one.

Hint: A function $J$ is convex if its Hessian is positive semi-definite (PSD), written $H \succeq 0$. A matrix is PSD if and only if
$$z^T H z = \sum_{j,k} z_j z_k H_{jk} \geq 0$$
for all real vectors $z$.
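For instance, any rank-one matrix of the form $vv^T$ (for a fixed real vector $v$) is PSD, since for every real vector $z$,
$$z^T (v v^T) z = (v^T z)^2 \geq 0.$$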

    • Maximum Likelihood Estimation [15 pts]

Suppose we observe the values of $n$ independent random variables $X_1, \ldots, X_n$ drawn from the same Bernoulli distribution with parameter $\theta$.1 In other words, for each $X_i$, we know that
$$P(X_i = 1) = \theta \quad \text{and} \quad P(X_i = 0) = 1 - \theta.$$

Our goal is to estimate the value of $\theta$ from these observed values of $X_1$ through $X_n$.

For any hypothetical value $\hat{\theta}$, we can compute the probability of observing the outcome $X_1, \ldots, X_n$ if the true parameter value were equal to $\hat{\theta}$. This probability of the observed data is often called the likelihood, and the function $L(\hat{\theta})$ that maps each $\hat{\theta}$ to the corresponding likelihood is called the likelihood function. A natural way to estimate the unknown parameter $\theta$ is to choose the $\hat{\theta}$ that maximizes the likelihood function. Formally,
$$\hat{\theta}_{\text{MLE}} = \arg\max_{\hat{\theta}} L(\hat{\theta}).$$


    (a) Write a formula for the likelihood function, $L(\hat{\theta}) = P(X_1, \ldots, X_n; \hat{\theta})$. Your function should depend on the random variables $X_1, \ldots, X_n$ and the hypothetical parameter $\hat{\theta}$. Does the likelihood function depend on the order in which the random variables are observed?

    (b) Since the log function is increasing, the $\hat{\theta}$ that maximizes the log likelihood $\ell(\hat{\theta}) = \log(L(\hat{\theta}))$ is the same as the $\hat{\theta}$ that maximizes the likelihood. Find $\ell(\hat{\theta})$ and its first and second derivatives, and use these to find a closed-form formula for $\hat{\theta}_{\text{MLE}}$.

    (c) Suppose that $n = 10$ and the data set contains six 1s and four 0s. Write a short program likelihood.py that plots the likelihood function of this data for each value of $\hat{\theta}$ in $\{0, 0.01, 0.02, \ldots, 1.0\}$ (use np.linspace(...) to generate this spacing). For the plot, the x-axis should be $\hat{\theta}$ and the y-axis $L(\hat{\theta})$. Scale your y-axis so that you can see some variation in its value. Include the plot in your writeup (there is no need to submit your code). Estimate $\hat{\theta}_{\text{MLE}}$ by marking on the x-axis the value of $\hat{\theta}$ that maximizes the likelihood. Does the answer agree with the closed-form answer?
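A minimal sketch of what such a script might look like (the function name plot_likelihood and the use of matplotlib are illustrative choices, not part of any provided starter code; the Bernoulli likelihood for $k$ ones out of $n$ observations is $L(\hat{\theta}) = \hat{\theta}^k (1-\hat{\theta})^{n-k}$):

import numpy as np
import matplotlib.pyplot as plt

def plot_likelihood(n, k, label):
    """Plot the Bernoulli likelihood for a data set with k ones out of n observations."""
    theta = np.linspace(0, 1, 101)                 # candidate values 0, 0.01, ..., 1.0
    likelihood = theta**k * (1 - theta)**(n - k)   # L(theta_hat) for this data set
    plt.plot(theta, likelihood, label=label)
    plt.axvline(theta[np.argmax(likelihood)], linestyle='--')  # mark the maximizer

plot_likelihood(10, 6, 'n=10, six 1s and four 0s')
plt.xlabel('theta_hat')
plt.ylabel('L(theta_hat)')
plt.legend()
plt.show()

The same function can be reused for the other data sets in part (d) by changing n and k.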

    (d) Create three more likelihood plots: one where $n = 5$ and the data set contains three 1s and two 0s; one where $n = 100$ and the data set contains sixty 1s and forty 0s; and one where $n = 10$ and there are five 1s and five 0s. Include these plots in your writeup, and describe how the likelihood functions and maximum likelihood estimates compare for the different data sets.












    • This is a common assumption for sampling data, so we will denote this assumption as iid, short for Independent and Identically Distributed, meaning that each random variable has the same distribution and is drawn independently of all the other random variables.

    • Implementation: Polynomial Regression [20 pts]

In this exercise, you will work through linear and polynomial regression. Our data consists of inputs $x_n \in \mathbb{R}$ and outputs $y_n \in \mathbb{R}$, $n \in \{1, \ldots, N\}$, which are related through a target function $y = f(x)$. Your goal is to learn a linear predictor $h_\theta(x)$ that best approximates $f(x)$. But this time, rather than using scikit-learn, we will further open the "black-box", and you will implement the regression model!



code and data

code : regression.py

data : regression_train.csv, regression_test.csv


This is likely the first time that many of you are working with numpy and matrix operations within a programming environment. For the uninitiated, you may find it useful to work through a numpy tutorial first.2 Here are some things to keep in mind as you complete this problem:

If you are seeing many errors at runtime, inspect your matrix operations to make sure that you are adding and multiplying matrices of compatible dimensions. Printing the dimensions of variables with the X.shape command will help you debug.

When working with numpy arrays, remember that numpy interprets the * operator as element-wise multiplication. This is a common source of size incompatibility errors. If you want matrix multiplication, you need to use the dot function in Python. For example, A*B does element-wise multiplication while dot(A,B) does a matrix multiply.

Be careful when handling numpy vectors (rank-1 arrays): the vector shapes $1 \times N$, $N \times 1$, and $N$ are all different things. For these dimensions, we follow the conventions of scikit-learn's LinearRegression class.3 Most importantly, unless otherwise indicated (in the code documentation), both column and row vectors are rank-1 arrays of shape $N$, not rank-2 arrays of shape $N \times 1$ or shape $1 \times N$.
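As a standalone illustration of these conventions (the variable names here are arbitrary):

import numpy as np

a = np.array([1.0, 2.0, 3.0])     # rank-1 array, shape (3,)
b = np.array([[1.0, 2.0, 3.0]])   # rank-2 row vector, shape (1, 3)
print(a.shape, b.shape)           # (3,) and (1, 3) are different objects
print(a * a)                      # element-wise multiplication: [1. 4. 9.]
print(np.dot(a, a))               # inner (matrix) product: 14.0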

Visualization [1 pt]

As we learned last week, it is often useful to understand the data through visualizations. For this data set, you can use a scatter plot to visualize the data since it has only two properties to plot (x and y).


    (a) Visualize the training and test data using the plot_data(...) function. What do you observe? For example, can you make an educated guess on the effectiveness of linear regression in predicting the data?









    • Try out SciPy’s tutorial (https://docs.scipy.org/doc/numpy/user/quickstart.html), or use your favorite search engine to find an alternative.
    • http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html


Linear Regression [12 pts]

Recall that linear regression attempts to minimize the objective function








$$J(\theta) = \sum_{n=1}^{N} \left( h_\theta(x_n) - y_n \right)^2.$$

In this problem, we will use the matrix-vector form where

$$y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{pmatrix}, \qquad X = \begin{pmatrix} x_1^T \\ x_2^T \\ \vdots \\ x_N^T \end{pmatrix}, \qquad \theta = \begin{pmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_D \end{pmatrix},$$

and each instance $x_n = \left(1, x_{n,1}, \ldots, x_{n,D}\right)^T$. In this instance, the number of input features is $D = 1$.







Rather than working with this fully generalized, multivariate case, let us start by considering a simple linear regression model:
$$h_\theta(x) = \theta^T x = \theta_0 + \theta_1 x_1.$$

regression.py contains the skeleton code for the class PolynomialRegression. Objects of this class can be instantiated as model = PolynomialRegression(m), where m is the degree of the polynomial feature vector; the feature vector for instance $n$ is $\left(1, x_{n,1}, x_{n,1}^2, \ldots, x_{n,1}^m\right)^T$. Setting m = 1 instantiates an object where the feature vector for instance $n$ is $\left(1, x_{n,1}\right)^T$.










    (b) Note that to take into account the intercept term ($\theta_0$), we can add an additional "feature" to each instance and set it to one, e.g., $x_{i,0} = 1$. This is equivalent to adding an additional first column to $X$ and setting it to all ones.

Modify PolynomialRegression.generate_polynomial_features(...) to create the matrix X for a simple linear model.
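For the simple linear case, the note above amounts to stacking a column of ones next to the raw inputs; a minimal standalone illustration (with made-up values):

import numpy as np

x = np.array([2.0, 3.0, 5.0])                 # raw inputs, shape (N,)
X = np.column_stack([np.ones_like(x), x])     # each row becomes (1, x_n); shape (N, 2)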

    (c) Before tackling the harder problem of training the regression model, complete PolynomialRegression.predict(...) to predict $y$ from $X$ and $\theta$.
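A minimal sketch of one way predict(...) could be written, assuming X is the $(N, m+1)$ feature matrix and self.coef_ holds $\theta$ as a rank-1 array (the exact calling convention depends on the provided skeleton):

import numpy as np

def predict(self, X):
    """Return y_hat = X . theta as a rank-1 array of shape (N,)."""
    return np.dot(X, self.coef_)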

    (d) One way to solve linear regression is through gradient descent (GD).

Recall that the parameters of our model are the $\theta_j$ values. These are the values we will adjust to minimize $J(\theta)$. In gradient descent, each iteration performs the update
$$\theta_j \leftarrow \theta_j - 2\eta \sum_{n=1}^{N} \left( h_\theta(x_n) - y_n \right) x_{n,j} \qquad \text{(simultaneously update $\theta_j$ for all $j$)},$$
where $\eta$ denotes the learning rate. With each step of gradient descent, we expect our updated parameters $\theta_j$ to come closer to the parameters that will achieve the lowest value of $J(\theta)$.





As we perform gradient descent, it is helpful to monitor the convergence by computing the cost, i.e., the value of the objective function $J$. Complete PolynomialRegression.cost(...) to calculate $J(\theta)$.
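One possible sketch of cost(...), consistent with the check below where it is called on the raw training inputs (self.coef_ appears in that check; the call to generate_polynomial_features is an assumption about the skeleton):

import numpy as np

def cost(self, X, y):
    """Return J(theta), the sum of squared errors over the data set."""
    Phi = self.generate_polynomial_features(X)   # assumed to map raw inputs to features
    y_hat = np.dot(Phi, self.coef_)
    return np.sum((y_hat - y) ** 2)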

If you have implemented everything correctly, then the following code snippet should return 40.234.


import numpy as np

train_data = load_data('regression_train.csv')   # load_data and PolynomialRegression are assumed available from the provided code
model = PolynomialRegression()
model.coef_ = np.zeros(2)
model.cost(train_data.X, train_data.y)


Next, implement the gradient descent step in PolynomialRegression.fit_GD(...). The loop structure has been written for you, and you only need to supply the updates to $\theta$ and the new predictions $\hat{y} = h_\theta(x)$ within each iteration.

We will use the following specifications for the gradient descent algorithm:

- We run the algorithm for 10,000 iterations.

- We terminate the algorithm earlier if the value of the objective function is unchanged across consecutive iterations.

- We will use a fixed step size.
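A rough sketch of what the inside of fit_GD(...) might look like under these specifications (the argument names eta and tmax, and the call to generate_polynomial_features, are assumptions about the skeleton; only coef_ is known from the check above):

import numpy as np

def fit_GD(self, X, y, eta=0.01, tmax=10000):
    Phi = self.generate_polynomial_features(X)           # (N, m+1) feature matrix
    self.coef_ = np.zeros(Phi.shape[1])
    prev_cost = np.inf
    for t in range(tmax):
        y_hat = np.dot(Phi, self.coef_)                   # current predictions
        grad = 2 * np.dot(Phi.T, y_hat - y)               # gradient of J(theta)
        self.coef_ = self.coef_ - eta * grad              # simultaneous update of all theta_j
        cost = np.sum((np.dot(Phi, self.coef_) - y) ** 2)
        if cost == prev_cost:                             # objective unchanged: stop early
            break
        prev_cost = cost
    return self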

So far, you have used a default learning rate (or step size) of $\eta = 0.01$. Try different values $\eta \in \{10^{-4}, 10^{-3}, 10^{-2}, 0.0407\}$, and make a table of the coefficients, the number of iterations until convergence (this number will be 10,000 if the algorithm did not converge in a smaller number of iterations), and the final value of the objective function. How do the coefficients compare? How quickly does each setting converge?

    (e) In class, we learned that the closed-form solution to linear regression is
$$\theta = (X^T X)^{-1} X^T y.$$
Using this formula, you will get an exact solution in one calculation: there is no "loop until convergence" like in gradient descent.

Implement the closed-form solution PolynomialRegression.fit(...).

What is the closed-form solution? How do the coefficients and the cost compare to those obtained by GD? How quickly does the algorithm run compared to GD?
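A sketch of the corresponding closed-form fit (np.linalg.pinv is used here as one robust way to compute the inverse; as before, the feature-map call is an assumption about the skeleton):

import numpy as np

def fit(self, X, y):
    """Closed-form least squares: theta = (X^T X)^{-1} X^T y."""
    Phi = self.generate_polynomial_features(X)
    self.coef_ = np.dot(np.linalg.pinv(np.dot(Phi.T, Phi)), np.dot(Phi.T, y))
    return self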

    (f) Finally, set a learning rate for GD that is a function of $k$ (the iteration number), using $\eta_k = \frac{1}{1+k}$, and that converges to the same solution yielded by the closed-form optimization (up to possible rounding errors). Update PolynomialRegression.fit_GD(...) with your proposed learning rate. How long does it take the algorithm to converge with your proposed learning rate?
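Inside the gradient-descent loop, this amounts to recomputing the step size from the iteration counter before each update, for example (hypothetical variable names):

for k in range(10000):
    eta_k = 1.0 / (1 + k)     # decaying learning rate eta_k = 1/(1+k)
    # ... perform the same simultaneous update as before, but with eta_k ...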



Polynomial Regression [7 pts]

Now let us consider the more complicated case of polynomial regression, where our hypothesis is

$$h_\theta(x) = \theta^T \phi(x) = \theta_0 + \theta_1 x + \theta_2 x^2 + \ldots + \theta_m x^m.$$



    (g) Recall that polynomial regression can be considered as an extension of linear regression in which we replace our input matrix X with


$$\Phi = \begin{pmatrix} \phi(x_1)^T \\ \phi(x_2)^T \\ \vdots \\ \phi(x_N)^T \end{pmatrix},$$

where $\phi(x)$ is a function such that $\phi_j(x) = x^j$ for $j = 0, \ldots, m$.

Update PolynomialRegression.generate_polynomial_features(...) to create an m + 1 dimensional feature vector for each instance.
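One possible sketch of the generalized feature map, assuming the raw inputs arrive as a rank-1 array of shape (N,) and that the degree is stored in an attribute (here called self.m_, a made-up name):

import numpy as np

def generate_polynomial_features(self, X):
    """Map each scalar x_n to (1, x_n, x_n^2, ..., x_n^m); returns an (N, m+1) matrix."""
    x = np.asarray(X).reshape(-1)                          # shape (N,)
    return np.vander(x, N=self.m_ + 1, increasing=True)    # columns x^0, x^1, ..., x^m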

(h) Given N training instances, it is always possible to obtain a "perfect fit" (a fit in which all the data points are exactly predicted) by setting the degree of the regression to $N - 1$. Of course, we would expect such a fit to generalize poorly. In the remainder of this problem, you will investigate the problem of overfitting as a function of the degree of the polynomial, $m$. To measure overfitting, we will use the Root-Mean-Square (RMS) error, defined as

$$E_{\text{RMS}} = \sqrt{J(\theta)/N},$$

where N is the number of instances.4

Why do you think we might prefer RMSE as a metric over $J(\theta)$?

Implement PolynomialRegression.rms_error(...).
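A short sketch of rms_error(...), reusing cost(...) as sketched above (again an illustration, not the official solution):

import numpy as np

def rms_error(self, X, y):
    """Root-mean-square error: E_RMS = sqrt(J(theta) / N)."""
    return np.sqrt(self.cost(X, y) / len(y))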

    (i) For $m = 0, \ldots, 10$, use the closed-form solver to determine the best-fit polynomial regression model on the training data, and with this model, calculate the RMSE on both the training data and the test data. Generate a plot depicting how RMSE varies with model complexity (polynomial degree); you should generate a single plot with both training and test error, and include this plot in your writeup. Which degree polynomial would you say best fits the data? Was there evidence of under/overfitting the data? Use your plot to justify your answer.























    • Note that the RMSE as defined here is a biased estimator. To obtain an unbiased estimator, we would have to divide by $N - k$, where $k$ is the number of parameters fitted (including the constant), so here, $k = m + 1$.

