Homework 5


    1. (PCA using MSE and population covariance matrix1) Assume that $x$ is a zero-mean $p$-dimensional random vector ($E[x] = 0$) with covariance matrix (10 pts)

$$R = E[xx^T]$$

We wish to estimate $x$ with $M \le p$ principal directions as:

$$\hat{x} = \sum_{i=1}^{M} a_i e_i$$

where the $e_i$'s are the orthonormal eigenvectors of the covariance matrix $R$ and $a = [a_1, \ldots, a_M]^T$. Show that the minimization of the squared error

$$J = \|x - \hat{x}\|^2$$

with respect to $a_1, \ldots, a_M$ yields

$$a_i = e_i^T x, \quad i = 1, 2, \ldots, M$$

as the principal component, that is, the projection of the data vector $x$ onto the eigenvector $e_i$.
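As a hint (a sketch of the first step only, using the orthonormality $e_i^T e_j = \delta_{ij}$): expanding the error gives

$$J = \|x - \hat{x}\|^2 = \|x\|^2 - 2\sum_{i=1}^{M} a_i\, e_i^T x + \sum_{i=1}^{M} a_i^2,$$

which decouples into $M$ independent one-variable minimizations over the $a_i$'s.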

    2. Let $p(x|\omega_i)$ be arbitrary densities with means $\mu_i$ and covariance matrices $\Sigma_i$ (not necessarily normal) for $i = 1, 2$. Let $y = w^T x$ be a projection, and let the induced one-dimensional densities $p(y|\omega_i)$ have means $\tilde{\mu}_i$ and variances $\tilde{\sigma}_i^2$. (15 pts)

        (a) Show that the criterion function

$$J_1(w) = \frac{(\tilde{\mu}_1 - \tilde{\mu}_2)^2}{\tilde{\sigma}_1^2 + \tilde{\sigma}_2^2}$$

is maximized by

$$w = (\Sigma_1 + \Sigma_2)^{-1}(\mu_1 - \mu_2)$$

        (b) If $P(\omega_i)$ is the prior probability for $\omega_i$, show that the criterion function

$$J_2(w) = \frac{(\tilde{\mu}_1 - \tilde{\mu}_2)^2}{P(\omega_1)\,\tilde{\sigma}_1^2 + P(\omega_2)\,\tilde{\sigma}_2^2}$$

is maximized by

$$w = \left(P(\omega_1)\Sigma_1 + P(\omega_2)\Sigma_2\right)^{-1}(\mu_1 - \mu_2)$$

    (c) Explain which of $J_1(w)$ and $J_2(w)$ is "closer" to the criterion that is used by Fisher's LDA.
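As a hint, note that $\tilde{\mu}_i = w^T \mu_i$ and $\tilde{\sigma}_i^2 = w^T \Sigma_i w$, so both criteria share the generalized Rayleigh-quotient form

$$J(w) = \frac{\left(w^T(\mu_1 - \mu_2)\right)^2}{w^T S\, w},$$

with $S = \Sigma_1 + \Sigma_2$ for $J_1$ and $S = P(\omega_1)\Sigma_1 + P(\omega_2)\Sigma_2$ for $J_2$; quotients of this form are maximized, up to a scale factor, by $w \propto S^{-1}(\mu_1 - \mu_2)$.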


1Using the population covariance matrix instead of the scatter matrix simplifies the formulation here.




    3. Time Series Classification Part 1: Feature Creation/Extraction

Important Note: You will NOT submit this part with Homework 5. It was the programming assignment of Homework 4. However, you may want to submit the code for Homework 4 with Homework 5 again, since Homework 5 might need the feature creation code.

An interesting task in machine learning is classification of time series. In this problem, we will classify the activities of humans based on time series obtained by a Wireless Sensor Network.

        (a) Download the AReM data from: https://archive.ics.uci.edu/ml/datasets/Activity+Recognition+system+based+on+Multisensor+data+fusion+%28AReM%29 . The dataset contains 7 folders that represent seven types of activities. In each folder, there are multiple files, each of which represents an instance of a human performing an activity.2 Each file contains 6 time series collected from activities of the same person, which are called avg rss12, var rss12, avg rss13, var rss13, avg rss23, and var rss23. There are 88 instances in the dataset, each of which contains 6 time series, and each time series has 480 consecutive values.

        (b) Keep datasets 1 and 2 in folders bending1 and bending2, as well as datasets 1, 2, and 3 in the other folders, as test data, and use the remaining datasets as train data.
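For concreteness, here is a minimal loading sketch in Python; the folder and file names (e.g. bending1/dataset1.csv) and the '#'-prefixed header lines are assumptions about the archive layout, so adjust them to the files you actually downloaded:

import os
import pandas as pd

# assumed folder names inside the AReM archive
ACTIVITIES = ["bending1", "bending2", "cycling", "lying",
              "sitting", "standing", "walking"]

def load_instance(path):
    # AReM files start with a few '#'-prefixed header lines
    return pd.read_csv(path, comment="#", header=None,
                       names=["time", "avg_rss12", "var_rss12", "avg_rss13",
                              "var_rss13", "avg_rss23", "var_rss23"])

train, test = [], []
for act in ACTIVITIES:
    n_test = 2 if act.startswith("bending") else 3
    # beware: lexicographic order puts dataset10 before dataset2,
    # so sort on the number embedded in the file name
    files = sorted(os.listdir(act),
                   key=lambda f: int("".join(filter(str.isdigit, f))))
    for i, f in enumerate(files, start=1):
        pair = (act, load_instance(os.path.join(act, f)))
        (test if i <= n_test else train).append(pair)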

        (c) Feature Extraction

Classification of time series usually requires extracting features from them. In this problem, we focus on time-domain features.

            i. Research what types of time-domain features are usually used in time series classification and list them (examples are minimum, maximum, mean, etc.).

            ii. Extract the time-domain features minimum, maximum, mean, median, standard deviation, first quartile, and third quartile for all of the 6 time series in each instance. You are free to normalize/standardize features or use them directly.3

Your new dataset will look like this:

Instance | min1 | max1 | mean1 | median1 | ... | 1st quart6 | 3rd quart6
---------+------+------+-------+---------+-----+------------+-----------
1        |      |      |       |         |     |            |
2        |      |      |       |         |     |            |
3        |      |      |       |         |     |            |
...      |      |      |       |         |     |            |
88       |      |      |       |         |     |            |

where, for example, 1st quart6 denotes the first quartile of the sixth time series in each of the 88 instances.
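As a sketch of how such a table can be built (assuming the train list and the column names from the loading sketch above):

import pandas as pd

def time_domain_features(df):
    # seven time-domain features for each of the 6 series of one instance
    feats = {}
    for j, col in enumerate(df.columns.drop("time"), start=1):
        s = df[col]
        feats[f"min{j}"] = s.min()
        feats[f"max{j}"] = s.max()
        feats[f"mean{j}"] = s.mean()
        feats[f"median{j}"] = s.median()
        feats[f"std{j}"] = s.std()
        feats[f"1st_quart{j}"] = s.quantile(0.25)
        feats[f"3rd_quart{j}"] = s.quantile(0.75)
    return feats

train_features = pd.DataFrame([time_domain_features(df) for _, df in train])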

    iii. Estimate the standard deviation of each of the time-domain features you extracted from the data. Then, use Python's bootstrapped package or any other method to build a 90% bootstrap confidence interval for the standard deviation of each feature.
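One way to do this without the bootstrapped package is a plain percentile bootstrap in NumPy; a sketch, assuming the train_features table built above:

import numpy as np

def bootstrap_std_ci(values, n_boot=10000, alpha=0.10, seed=0):
    # percentile bootstrap for the standard deviation
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = np.array([np.std(rng.choice(values, size=n, replace=True), ddof=1)
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

cis = {col: bootstrap_std_ci(train_features[col].to_numpy())
       for col in train_features.columns}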


2Some of the data files need very minor cleaning. You can do it with Excel or Python.
3You are welcome to experiment to see if they make a difference.




            iv. Use your judgement to select the three most important time-domain features (one option may be min, mean, and max).

            v. Assume that you want to use the training set to classify bending from other activities, i.e. you have a binary classification problem. Depict scatter plots of the features you specified in 3(c)iv, extracted from time series 1, 2, and 6 of each instance, and use color to distinguish bending vs. other activities. (See p. 129 of the ISLR textbook.)4
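A plotting sketch; the chosen features min, mean, and max are just the example option from 3(c)iv, and the column names follow the hypothetical ones built above:

import matplotlib.pyplot as plt
import pandas as pd

labels = [act.startswith("bending") for act, _ in train]
colors = ["red" if b else "blue" for b in labels]  # bending vs. other
cols = [f"{f}{j}" for j in (1, 2, 6) for f in ("min", "mean", "max")]
pd.plotting.scatter_matrix(train_features[cols], color=colors, figsize=(12, 12))
plt.show()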

    4. Time Series Classification Part 2: Binary and Multiclass Classification

        (a) Binary Classification Using Logistic Regression5

            i. Break each time series in your training set into two (approximately) equal-length time series. Now, instead of 6 time series for each of the training instances, you have 12 time series for each training instance. Repeat the experiment in 3(c)v, i.e. depict scatter plots of the features extracted from both parts of time series 1, 2, and 12. Do you see any considerable difference between these results and those of 3(c)v? (5 pts)

            ii. Break each time series in your training set into $l \in \{1, 2, \ldots, 20\}$ time series of approximately equal length and use logistic regression6 to solve the binary classification problem, using time-domain features. Remember that breaking each of the time series does not change the number of instances; it only changes the number of features for each instance. Calculate the p-values for your logistic regression parameters in each model corresponding to each value of l, and refit a logistic regression model using your pruned set of features.7 Alternatively, you can use backward selection using sklearn.feature_selection or glm in R. Use 5-fold cross-validation to determine the best value of the pair (l, p), where p is the number of features used in recursive feature elimination. Explain what the right way and the wrong way are to perform cross-validation in this problem.8 Obviously, use the right way! Also, you may encounter the problem of class imbalance, which may leave some of your folds without any instances of the rare class. In such a case, you can use stratified cross-validation. Research what it means and use it if needed. (15 pts)
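One way to organize this search is sketched below; the grid for p and the helper names are assumptions, and note that the feature elimination runs inside each fold, which is the "right way":

import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

def features_for_l(instances, l):
    # split every series into l nearly equal chunks (np.array_split
    # handles lengths not divisible by l) and extract features per chunk
    rows = []
    for _, df in instances:
        feats = {}
        for j, col in enumerate(df.columns.drop("time"), start=1):
            for k, chunk in enumerate(np.array_split(df[col].to_numpy(), l), 1):
                feats[f"min{j}_{k}"] = chunk.min()
                feats[f"max{j}_{k}"] = chunk.max()
                feats[f"mean{j}_{k}"] = chunk.mean()
                # ...median, std, and quartiles as in Part 1
        rows.append(feats)
    return pd.DataFrame(rows)

y = np.array([1 if act.startswith("bending") else 0 for act, _ in train])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for l in range(1, 21):
    X = features_for_l(train, l).to_numpy()
    for p in (1, 3, 5, 10):  # hypothetical grid for p
        rfe = RFE(LogisticRegression(C=1e6, max_iter=5000),
                  n_features_to_select=p)
        scores[(l, p)] = cross_val_score(rfe, X, y, cv=cv).mean()
best_l, best_p = max(scores, key=scores.get)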


4You are welcome to repeat this experiment with other features as well as with time series 3, 4, and 5 in each instance.

5Some logistic regression packages have a built-in L2 regularization. To remove the effect of L2 regularization, set $\lambda = 0$ or set the budget $C \to \infty$ (i.e. a very large value).

6If you encounter instability of the logistic regression problem because of linearly separable classes, modify the max_iter parameter in logistic regression to stop the algorithm prematurely and prevent instability.
7R calculates the p-values for logistic regression automatically. One way of calculating them in Python is to call R within Python. There are other ways to obtain the p-values as well.

8This is an interesting problem in which the number of features changes depending on the value of the parameter l that is selected via cross-validation. Another example of such a problem is Principal Component Regression, where the number of principal components is selected via cross-validation.






In the following, you can see an example of applying Python's Recursive Feature Elimination, which is a backward selection algorithm, to logistic regression.

# Recursive Feature Elimination
from sklearn import datasets
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# load the iris dataset
dataset = datasets.load_iris()
# create a base classifier used to evaluate a subset of attributes
model = LogisticRegression()
# create the RFE model and select 3 attributes
rfe = RFE(model, n_features_to_select=3)
rfe = rfe.fit(dataset.data, dataset.target)
# summarize the selection of the attributes
print(rfe.support_)
print(rfe.ranking_)








        iii. Report the confusion matrix and show the ROC and AUC for your classifier on the train data. Report the parameters of your logistic regression, i.e. the $\beta_i$'s, as well as the p-values associated with them. (10 pts)
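A reporting sketch on the training data, where clf stands for whatever fitted model you ended up with and X, y are as in the cross-validation sketch above:

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, roc_curve, auc

print(confusion_matrix(y, clf.predict(X)))
probs = clf.predict_proba(X)[:, 1]  # probability of the positive class
fpr, tpr, _ = roc_curve(y, probs)
plt.plot(fpr, tpr, label=f"train AUC = {auc(fpr, tpr):.3f}")
plt.plot([0, 1], [0, 1], "k--")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()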

        iv. Test the classifier on the test set. Remember to break the time series in your test set into the same number of time series into which you broke your training set. Remember that the classifier has to be tested using the features extracted from the test set. Compare the accuracy on the test set with the cross-validation accuracy you obtained previously. (10 pts)

        v. Do your classes seem to be well-separated enough to cause instability in calculating logistic regression parameters?

        vi. From the confusion matrices you obtained, do you see imbalanced classes? If yes, build a logistic regression model based on case-control sampling and adjust its parameters. Report the confusion matrix, ROC, and AUC of the model. (10 pts)
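A case-control sketch: downsample the majority class (assumed here to be the non-bending class), fit, then correct the intercept for the true prevalence; this prior-correction step is the standard adjustment for case-control sampling:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pos = np.where(y == 1)[0]  # rare class (bending)
neg = np.where(y == 0)[0]
keep = rng.choice(neg, size=len(pos), replace=False)  # downsample majority
idx = np.concatenate([pos, keep])

cc = LogisticRegression(C=1e6, max_iter=5000).fit(X[idx], y[idx])
pi = y.mean()   # prevalence in the full training set
pi_s = 0.5      # prevalence in the balanced sample
cc.intercept_ += np.log(pi / (1 - pi)) - np.log(pi_s / (1 - pi_s))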

    (b) Binary Classification Using L1-penalized Logistic Regression

        i. Repeat 4(a)ii using L1-penalized logistic regression,9 i.e. instead of using p-values for variable selection, use L1 regularization. Note that in this problem, you have to cross-validate for both l, the number of time series into which you break each of your instances, and $\lambda$, the weight of the L1 penalty in your logistic regression objective function (or C, the budget). Packages usually perform cross-validation for $\lambda$ automatically.10 (15 pts)
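A sketch with scikit-learn: LogisticRegressionCV cross-validates the budget C (the inverse of $\lambda$) automatically; repeat it for each candidate l (features_for_l from the sketch above) and keep the best CV score:

from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold

l1 = LogisticRegressionCV(
    Cs=20, penalty="l1", solver="liblinear", max_iter=5000,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
l1.fit(features_for_l(train, best_l).to_numpy(), y)
print(l1.C_)  # the selected budget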

        ii. Compare the L1-penalized model with variable selection using p-values. Which one performs better? Which one is easier to implement? (5 pts)

    (c) Multi-class Classification (The Realistic Case)


9For L1-penalized logistic regression, you may want to use normalized/standardized features.

10Using the package Liblinear is strongly recommended.




i. Find the best l in the same way as you found it in 4(b)i to build an L1-penalized multinomial regression model to classify all activities in your training set.11 Report your test error. Research how confusion matrices and ROC curves are defined for multiclass classification and show them for this problem if possible.12 (10 pts)
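A multiclass sketch; the saga solver supports the L1 penalty for multinomial regression in newer scikit-learn, and y_act (one activity label per instance) is an assumed array:

import numpy as np
from sklearn.linear_model import LogisticRegressionCV

y_act = np.array([act for act, _ in train])  # one label per instance
multi = LogisticRegressionCV(
    Cs=10, penalty="l1", solver="saga", max_iter=10000, cv=5)
multi.fit(features_for_l(train, best_l).to_numpy(), y_act)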

    ii. Repeat 4(c)i using a Naïve Bayes classifier. Use both Gaussian and Multinomial pdfs and compare the results. (10 pts)
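A sketch of the two variants; since Multinomial NB requires nonnegative inputs, one common workaround (an assumption here, not a prescribed step) is to rescale the features to [0, 1] first:

from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X_multi = features_for_l(train, best_l).to_numpy()
gnb = GaussianNB().fit(X_multi, y_act)
mnb = make_pipeline(MinMaxScaler(), MultinomialNB()).fit(X_multi, y_act)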

    iii. Create p principal components from the features you extracted from the l time series. Cross-validate on the (l, p) pair to build a Naïve Bayes classifier based on the PCA features to classify all activities in your data set. Report your test error, and plot the scatterplot of the classes in your training data based on the first and second principal components you found from the features extracted from the l time series, where l is the value you found using cross-validation. Show confusion matrices and ROC curves. (10 pts)
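A sketch of the (l, p) search; putting PCA inside a Pipeline means it is refit on each training fold, which keeps the cross-validation honest (the grid for p is an assumption):

from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import Pipeline

pipe = Pipeline([("pca", PCA()), ("nb", GaussianNB())])
results = {}
for l in range(1, 21):
    X_l = features_for_l(train, l).to_numpy()
    gs = GridSearchCV(pipe, {"pca__n_components": [2, 3, 5, 10]},
                      cv=5).fit(X_l, y_act)
    results[l] = (gs.best_score_, gs.best_params_["pca__n_components"])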

    iv. Which method is better for multi-class classification in this problem? (5 pts)

11New versions of scikit-learn allow using the L1 penalty for multinomial regression.
12For example, the pROC package in R does the job.
