Homework 6    EE 559, Instructor: Mohammad Reza Rajati


    (b) Estimate the prior probabilities based on the frequency of occurrence of the prototypes in each class.

    (c) Use the estimates you have developed above to find the decision boundaries and regions for a Bayes minimum-error classifier based on k-nearest neighbors.





    (d) Derive a classifier based on using KNN as a discriminative technique that estimates $p(\omega_i \mid x)$ directly using nearest neighbors, and compare it to the classifier you obtained in 2c. If there are ties, break them in favor of $\omega_2$.
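For part (d), the discriminative kNN estimate is simply the vote fraction $p(\omega_i \mid x) \approx k_i/k$, where $k_i$ is the number of the $k$ nearest prototypes belonging to $\omega_i$. A minimal Python sketch follows; the prototypes here are placeholders (substitute the ones given in problem 2), and classes $\omega_1, \omega_2$ are encoded as integer labels 1 and 2:

    import numpy as np

    # Placeholder prototypes and labels; substitute the data from problem 2.
    prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
    labels = np.array([1, 1, 2, 2])  # 1 -> omega_1, 2 -> omega_2

    def knn_posterior(x, k=3):
        # Estimate p(omega_i | x) as k_i / k, the fraction of the k nearest
        # prototypes that belong to class omega_i.
        dists = np.linalg.norm(prototypes - x, axis=1)
        nearest = np.argsort(dists)[:k]
        return {c: float(np.mean(labels[nearest] == c)) for c in (1, 2)}

    def classify(x, k=3):
        post = knn_posterior(x, k)
        # Ties are broken in favor of omega_2, as part (d) requires.
        return 2 if post[2] >= post[1] else 1

    x = np.array([1.5, 1.5])
    print(knn_posterior(x), classify(x))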

    3. Consider the following training data set:

$x_1 = [1, 0]^T, \quad z_1 = -1$
$x_2 = [0, 1]^T, \quad z_2 = -1$
$x_3 = [0, -1]^T, \quad z_3 = -1$
$x_4 = [-1, 0]^T, \quad z_4 = 1$
$x_5 = [0, 2]^T, \quad z_5 = 1$
$x_6 = [0, -2]^T, \quad z_6 = 1$
$x_7 = [-2, 0]^T, \quad z_7 = 1$


Use the following nonlinear transformation of the input vector $x = [x_1, x_2]^T$ to the transformed vector $u = [\varphi_1(x), \varphi_2(x)]^T$: $\varphi_1(x) = x_2^2 - 2x_1 + 3$ and $\varphi_2(x) = x_1^2 - 2x_2 - 3$. What is the equation of the optimal separating "hyperplane" in the $u$ space? (15 pts)
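Since the algebra is easy to get wrong, here is a short Python check (a sketch, under the sign reconstruction of the data above) that maps the seven points into the $u$ space. The printout shows every $z = -1$ point landing at $\varphi_1 \le 4$ and every $z = +1$ point at $\varphi_1 \ge 5$, which points toward a vertical boundary in the $u$ space:

    import numpy as np

    # Training points and labels from problem 3, minus signs as reconstructed above.
    X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0],
                  [0, 2], [0, -2], [-2, 0]], dtype=float)
    z = np.array([-1, -1, -1, 1, 1, 1, 1])

    # Transformed coordinates u = [phi_1(x), phi_2(x)]^T.
    u1 = X[:, 1] ** 2 - 2 * X[:, 0] + 3
    u2 = X[:, 0] ** 2 - 2 * X[:, 1] - 3
    for (a, b), zi in zip(zip(u1, u2), z):
        print(f"u = [{a:4.1f}, {b:4.1f}], z = {zi:+d}")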

4. Consider the following training data set: (25 pts)

$x_1 = [0, 0]^T, \quad z_1 = -1$
$x_2 = [1, 0]^T, \quad z_2 = 1$
$x_3 = [0, -1]^T, \quad z_3 = 1$
$x_4 = [-1, 0]^T, \quad z_4 = 1$



Note that in the following, you need to use equations that describe w and give rise to the dual optimization problem.
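For reference, these are the relations meant here, written as a sketch in the usual hard-margin notation (with feature map $\varphi$ and kernel $\kappa$; the course's own notation may differ slightly):

    \begin{gather*}
    w = \sum_{i=1}^{N} \alpha_i z_i \varphi(x_i),
    \qquad \sum_{i=1}^{N} \alpha_i z_i = 0, \\
    \max_{\alpha} \; \sum_{i=1}^{N} \alpha_i
      - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N}
        \alpha_i \alpha_j z_i z_j \, \kappa(x_i, x_j)
    \quad \text{subject to } \alpha_i \ge 0 .
    \end{gather*}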

    (a) Write down the dual optimization problem for training a Support Vector Machine with this data set using the polynomial kernel function

$\kappa(x_i, x_j) = (x_i^T x_j + 1)^2$

    (b) Solve the optimization problem and find the optimal $\alpha_i$'s using results about quadratic forms, and check the results with Wolfram Alpha or any software package (a scipy sketch follows part (e)).

    (c) Show that the equation of the decision boundary in a kernel SVM, $w^T u + w_0 = 0$, can be represented as $g(x) = \sum_{i=1}^{N} \alpha_i z_i \kappa(x_i, x) + w_0 = 0$.
    (d) We learned that for vectors that do not violate the margin¹ (i.e. $z_j(w^T u_j + w_0) - 1 > 0$), the Lagrange multiplier is zero, i.e. $\alpha_j = 0$. On the other hand, for vectors on the margin ($z_j(w^T u_j + w_0) - 1 = 0$), $\alpha_j \neq 0$. Show that, consequently, one can find a vector $x_j$ for which $\alpha_j \neq 0$, and calculate $w_0$ as $w_0 = 1/z_j - \sum_{i=1}^{N} \alpha_i z_i \kappa(x_i, x_j)$. (A numerical check follows part (e).)

1. For simplicity, consider kernel SVM with hard margins, i.e. no slack variables.
        (e) Sketch the decision boundary for this data set based on parts (4c) and (4d).
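For the software check suggested in part (b) and the bias recovery in part (d), here is a minimal scipy sketch (assuming the sign reconstruction of the data above; scipy's SLSQP solver handles the equality constraint and the nonnegativity bounds):

    import numpy as np
    from scipy.optimize import minimize

    # Data for problem 4, minus signs as reconstructed above.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, -1.0], [-1.0, 0.0]])
    z = np.array([-1.0, 1.0, 1.0, 1.0])

    # Gram matrix of the polynomial kernel kappa(xi, xj) = (xi . xj + 1)^2.
    K = (X @ X.T + 1.0) ** 2

    def neg_dual(a):
        # Dual objective sum_i a_i - 0.5 (a*z)^T K (a*z), negated for minimize().
        v = a * z
        return -(a.sum() - 0.5 * v @ K @ v)

    res = minimize(neg_dual, x0=np.ones(len(z)),
                   bounds=[(0.0, None)] * len(z),                       # alpha_i >= 0
                   constraints={"type": "eq", "fun": lambda a: a @ z})  # sum_i alpha_i z_i = 0
    alpha = res.x

    # Part (d): recover w0 from a support vector (any index with alpha_j > 0).
    j = int(np.argmax(alpha))
    w0 = 1.0 / z[j] - (alpha * z) @ K[:, j]

    def g(x):
        # Part (c): g(x) = sum_i alpha_i z_i kappa(x_i, x) + w0; boundary is g(x) = 0.
        return (alpha * z) @ ((X @ x + 1.0) ** 2) + w0

    print("alpha =", np.round(alpha, 4), " w0 =", round(float(w0), 4))

Evaluating the sign of g(x) on a grid of points is then an easy way to cross-check the hand-drawn boundary asked for in part (e).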

    5. In the following figure, there are different SVMs with different decision boundaries. The training data is labeled as $z_i \in \{-1, 1\}$, represented as circles and squares respectively. Support vectors are drawn as solid circles. Determine which of the scenarios described below matches one of the 6 plots (note that one of the plots does not match any scenario). Each scenario should be matched to a unique plot. Explain your reasoning for matching each figure to each scenario. (10 pts)

[Figure: six plots of SVM decision boundaries on the same training data; circles and squares mark the two classes, and support vectors are drawn as solid circles.]
    (a) A soft-margin linear SVM with $C = 0.02$

    (b) A soft-margin linear SVM with C = 20

    (c) A hard-margin kernel SVM with $\kappa(x_i, x_j) = x_i^T x_j + (x_i^T x_j)^2$

    (d) A hard-margin kernel SVM with $\kappa(x_i, x_j) = \exp(-5 \|x_i - x_j\|^2)$

    (e) A hard-margin kernel SVM with $\kappa(x_i, x_j) = \exp(-\frac{1}{5} \|x_i - x_j\|^2)$



    6. Programming Part: Multi-class and Multi-Label Classification Using Support Vector Machines

        (a) Download the Anuran Calls (MFCCs) Data Set from https://archive.ics.uci.edu/ml/datasets/Anuran+Calls+%28MFCCs%29. Choose 70% of the data randomly as the training set.

        (b) Each instance has three labels: Families, Genus, and Species. Each of the labels has multiple classes. We wish to solve a multi-class and multi-label problem. One of the most important approaches to multi-label classification is to train a classifier for each label. We first try this approach:




    i. Research exact match and Hamming score/loss methods for evaluating multi-label classification and use them in evaluating the classifiers in this problem.

    ii. Train an SVM for each of the labels, using Gaussian kernels and one-versus-all classifiers. Determine the weight of the SVM penalty and the width of the Gaussian kernel using 10-fold cross-validation.² You are welcome to try to solve the problem with both normalized³ and raw attributes and report the results (a minimal end-to-end sketch follows this list). (15 pts)

    iii. Repeat 6(b)ii with L1-penalized SVMs.⁴ Remember to normalize the attributes. (10 pts)

    iv. Repeat 6(b)iii by using SMOTE or any other method you know to remedy class imbalance. Report your conclusions about the classifiers you trained. (10 pts)

    v. Extra Practice: Study the Classifier Chain method and apply it to the above problem.

    vi. Extra Practice: Research how confusion matrices, precision, recall, ROC, and AUC are defined for multi-label classification and compute them for the classifiers you trained above.
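A minimal end-to-end sketch of parts 6a, 6(b)i, and 6(b)ii, assuming scikit-learn and pandas, the file name Frogs_MFCCs.csv, and label columns named Family, Genus, and Species (verify these against the downloaded archive). Note that sklearn's SVC is one-vs-one internally, so a strict one-versus-all scheme would wrap it in OneVsRestClassifier:

    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Assumed file/column names; check them against the extracted UCI archive.
    df = pd.read_csv("Frogs_MFCCs.csv")
    X = df.iloc[:, :22].to_numpy()                 # the 22 MFCC attributes
    Y = df[["Family", "Genus", "Species"]]

    # 70% of the data, chosen randomly, as the training set.
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

    # One Gaussian-kernel SVM per label, tuning C and gamma by 10-fold CV.
    preds = {}
    for label in Y.columns:
        grid = GridSearchCV(
            make_pipeline(StandardScaler(), SVC(kernel="rbf")),
            {"svc__C": np.logspace(-2, 3, 6), "svc__gamma": np.linspace(0.1, 2.0, 5)},
            cv=10,
        )
        grid.fit(X_tr, Y_tr[label])
        preds[label] = grid.predict(X_te)

    P = pd.DataFrame(preds, index=Y_te.index)
    exact_match = (P == Y_te).all(axis=1).mean()   # all three labels correct at once
    hamming_score = (P == Y_te).to_numpy().mean()  # label-wise accuracy, averaged
    print("exact match:", exact_match, "hamming score:", hamming_score)

The exact match ratio counts an instance as correct only when all three labels are right, while the Hamming score credits each correctly predicted label separately; both are computed above directly from the prediction table.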
























2. How to choose parameter ranges for SVMs? One can use wide ranges for the parameters and a fine grid (e.g. 1000 points) for cross-validation; however, this method may be computationally expensive. An alternative way is to train the SVM with very large and very small parameters on the whole training data and find very large and very small parameters for which the training accuracy is not below a threshold (e.g., 70%). Then one can select a fixed number of parameters (e.g., 20) between those points for cross-validation. For the penalty parameter, usually one has to consider increments in $\log(\lambda)$. For example, if one found that the accuracy of a support vector machine will not be below 70% for $\lambda = 10^{-3}$ and $\lambda = 10^{6}$, one has to choose $\log(\lambda) \in \{-3, -2, \ldots, 4, 5, 6\}$. For the Gaussian kernel parameter, one usually chooses linear increments, e.g. $\sigma \in \{0.1, 0.2, \ldots, 2\}$. When both $\lambda$ and $\sigma$ are to be chosen using cross-validation, combinations of very small and very large $\lambda$'s and $\sigma$'s that keep the accuracy above a threshold (e.g. 70%) can be used to determine the ranges for $\lambda$ and $\sigma$. Please note that these are very rough rules of thumb, not general procedures.
3. It seems that this dataset is already normalized!
4. The convention is to use the L1 penalty with a linear kernel.
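A rough sketch of the range-finding heuristic described in footnote 2, assuming scikit-learn's SVC; the helper name and its defaults are hypothetical. It trains on the whole training set at each power of 10 for the penalty and keeps the interval where training accuracy stays above the threshold:

    import numpy as np
    from sklearn.svm import SVC

    def usable_log_penalty_range(X, y, low=-5, high=7, threshold=0.70):
        # Keep the exponents p for which the training accuracy of an SVM
        # with penalty 10**p does not fall below the threshold.
        good = [p for p in range(low, high + 1)
                if SVC(C=10.0 ** p, kernel="rbf").fit(X, y).score(X, y) >= threshold]
        return (min(good), max(good)) if good else None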
