
Homework 2 Solution

[4pts] Information Theory. The goal of this question is to help you become more familiar with the basic equalities and inequalities of information theory. They appear in many contexts in machine learning and elsewhere, so having some experience with them is quite helpful. We review some concepts from information theory, and ask you a few questions.



Recall the definition of the entropy of a discrete random variable $X$ with probability mass function $p$: $H(X) = \sum_x p(x) \log_2 \frac{1}{p(x)}$. Here the summation is over all possible values of $x \in \mathcal{X}$, which (for simplicity) we assume is finite. For example, $\mathcal{X}$ might be $\{1, 2, \ldots, N\}$.




(a) [1pt] Prove that the entropy H(X) is non-negative.
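To make the definition concrete, here is a short Python sketch (not part of the assignment; the example distributions and the use of NumPy are our own assumptions) that evaluates $H(X)$ for a few probability mass functions and shows the values are non-negative, which is the fact part (a) asks you to prove in general.

\begin{verbatim}
import numpy as np

def entropy(p):
    # H(X) = sum_x p(x) * log2(1 / p(x)); terms with p(x) = 0 contribute 0.
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(1.0 / p[nz])))

# A few illustrative (assumed) distributions over a finite set.
for p in ([0.5, 0.5], [0.9, 0.1], [0.25, 0.25, 0.25, 0.25], [1.0, 0.0]):
    print(p, entropy(p))   # every printed entropy is >= 0
\end{verbatim}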




An important concept in information theory is the relative entropy or the KL-divergence of two distributions $p$ and $q$. It is defined as
\[
\mathrm{KL}(p \| q) = \sum_x p(x) \log_2 \frac{p(x)}{q(x)}.
\]









The KL-divergence is one of the most commonly used measures of difference (or divergence) between two distributions, and it regularly appears in information theory, machine learning, and statistics. For this question, you may assume $p(x) > 0$ and $q(x) > 0$ for all $x$.




If two distributions are close to each other, their KL divergence is small. If they are exactly the same, their KL divergence is zero. KL divergence is not a true distance metric (since it isn’t symmetric and doesn’t satisfy the triangle inequality), but we often use it as a measure of dissimilarity between two probability distributions.




(b) [2pt] Prove that $\mathrm{KL}(p \| q)$ is non-negative. Hint: you may want to use Jensen’s Inequality, which is described in the Appendix.
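The following Python sketch is not required for the proof; it simply evaluates $\mathrm{KL}(p \| q)$ for a few assumed distribution pairs (with strictly positive entries) to illustrate that the value is zero for identical distributions and positive otherwise.

\begin{verbatim}
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) = sum_x p(x) * log2(p(x) / q(x)), assuming p(x), q(x) > 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2(p / q)))

pairs = [([0.5, 0.5], [0.5, 0.5]),          # identical distributions -> 0
         ([0.9, 0.1], [0.5, 0.5]),
         ([0.2, 0.3, 0.5], [0.4, 0.4, 0.2])]
for p, q in pairs:
    print(kl_divergence(p, q))              # every value is >= 0
\end{verbatim}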



(c) [1pt] The Information Gain or Mutual Information between $X$ and $Y$ is $I(Y; X) = H(Y) - H(Y \mid X)$. Show that
\[
I(Y; X) = \mathrm{KL}(p(x, y) \| p(x)p(y)),
\]
where $p(x) = \sum_y p(x, y)$ is the marginal distribution of $X$.
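As a sanity check of the identity (again, not part of the required proof), the sketch below computes $H(Y) - H(Y \mid X)$ and $\mathrm{KL}(p(x,y) \| p(x)p(y))$ for an arbitrary assumed joint distribution and confirms that the two quantities agree.

\begin{verbatim}
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(np.sum(p * np.log2(1.0 / p)))

joint = np.array([[0.30, 0.10],    # assumed joint p(x, y); rows index x, columns index y
                  [0.15, 0.45]])
px = joint.sum(axis=1)             # marginal p(x)
py = joint.sum(axis=0)             # marginal p(y)

# I(Y; X) = H(Y) - H(Y | X), with H(Y | X) = sum_x p(x) H(Y | X = x)
h_y_given_x = sum(px[i] * entropy(joint[i] / px[i]) for i in range(len(px)))
mutual_info = entropy(py) - h_y_given_x

# KL divergence between the joint and the product of its marginals
kl = float(np.sum(joint * np.log2(joint / np.outer(px, py))))
print(mutual_info, kl)             # the two numbers match
\end{verbatim}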














[2pts] Benefit of Averaging. Consider $m$ estimators $h_1, \ldots, h_m$, each of which accepts an input $x$ and produces an output $y$, i.e., $y_i = h_i(x)$. These estimators might be generated through a Bagging procedure, but that is not necessary to the result that we want to prove.
Consider the squared error loss function $L(y, t) = \frac{1}{2}(y - t)^2$. Show that the loss of the average estimator
\[
\bar{h}(x) = \frac{1}{m} \sum_{i=1}^{m} h_i(x),
\]
is smaller than the average loss of the estimators. That is, for any $x$ and $t$, we have
\[
L(\bar{h}(x), t) \le \frac{1}{m} \sum_{i=1}^{m} L(h_i(x), t).
\]
Hint: you may want to use Jensen’s Inequality, which is described in the Appendix.
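Before doing the proof, it may help to see the claim numerically. The sketch below (with made-up predictions; not part of what you need to submit) checks that the loss of the averaged prediction is never larger than the average of the individual losses.

\begin{verbatim}
import numpy as np

def loss(y, t):
    # Squared error loss L(y, t) = (1/2) * (y - t)^2
    return 0.5 * (y - t) ** 2

rng = np.random.default_rng(0)
t = 1.0                                                    # arbitrary target
predictions = rng.normal(loc=1.0, scale=2.0, size=10)      # assumed y_i = h_i(x)

loss_of_average = loss(predictions.mean(), t)              # L(h_bar(x), t)
average_of_losses = np.mean([loss(y, t) for y in predictions])
print(loss_of_average <= average_of_losses)                # True
\end{verbatim}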




[3pts] AdaBoost. The goal of this question is to show that the AdaBoost algorithm changes the weights in order to force the weak learner to focus on difficult data points. Here we consider the case where the target labels are from the set $\{-1, +1\}$ and the weak learner also returns a classifier whose outputs belong to $\{-1, +1\}$ (instead of $\{0, 1\}$). Consider the $t$-th iteration of AdaBoost, where the weak learner is







\[
h_t \leftarrow \operatorname*{argmin}_{h \in \mathcal{H}} \sum_{i=1}^{N} w_i \, \mathbb{I}\{h(x^{(i)}) \neq t^{(i)}\},
\]
the $w$-weighted classification error is
\[
\mathrm{err}_t = \frac{\sum_{i=1}^{N} w_i \, \mathbb{I}\{h_t(x^{(i)}) \neq t^{(i)}\}}{\sum_{i=1}^{N} w_i},
\]

and the classifier coefficient is
\[
\alpha_t = \frac{1}{2} \log \frac{1 - \mathrm{err}_t}{\mathrm{err}_t}.
\]
(Here, $\log$ denotes the natural logarithm.)
AdaBoost changes the weight of each sample depending on whether the weak learner $h_t$ classifies it correctly or incorrectly. The updated weight for sample $i$ is denoted by $w_i'$ and is
\[
w_i' \leftarrow w_i \exp\!\left(-\alpha_t\, t^{(i)} h_t(x^{(i)})\right).
\]

Show that the error w.r.t. $(w_1', \ldots, w_N')$ is exactly $\frac{1}{2}$. That is, show that
\[
\mathrm{err}_t' = \frac{\sum_{i=1}^{N} w_i' \, \mathbb{I}\{h_t(x^{(i)}) \neq t^{(i)}\}}{\sum_{i=1}^{N} w_i'} = \frac{1}{2}.
\]
Note that here we use the weak learner of iteration $t$ and evaluate it according to the new weights, which will be used to learn the $(t+1)$-st weak learner. What is the interpretation of this result?




Tips:




Start from $\mathrm{err}_t'$ and divide the summation into the two sets $E = \{i : h_t(x^{(i)}) \neq t^{(i)}\}$ and its complement $E^c = \{i : h_t(x^{(i)}) = t^{(i)}\}$.




Note that $\dfrac{\sum_{i \in E} w_i}{\sum_{i=1}^{N} w_i} = \mathrm{err}_t$.
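The sketch below is a numeric illustration only (it uses an assumed toy weak learner and random weights, and is not a substitute for the derivation): it applies the weight update above and checks that the weighted error of $h_t$ under the new weights comes out to exactly $\frac{1}{2}$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 20
t = rng.choice([-1, 1], size=N)        # true labels in {-1, +1}
h = t.copy()
h[:6] = -h[:6]                         # assume the weak learner errs on 6 points
w = rng.random(N)                      # arbitrary positive weights

mistakes = (h != t).astype(float)
err_t = np.sum(w * mistakes) / np.sum(w)
alpha_t = 0.5 * np.log((1 - err_t) / err_t)

w_new = w * np.exp(-alpha_t * t * h)   # w'_i = w_i exp(-alpha_t t^(i) h_t(x^(i)))
err_new = np.sum(w_new * mistakes) / np.sum(w_new)
print(err_new)                         # 0.5 up to floating-point error
\end{verbatim}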














Appendix: Convexity and Jensen’s Inequality. Here, we give some background on convexity which you may find useful for some of the questions in this assignment. You may assume anything given here.




Convexity is an important concept in mathematics with many uses in machine learning. We briefly define convex sets and functions and state some of their properties here. These properties are useful for solving some of the questions in the rest of this homework. If you are interested in learning more about convexity, refer to Boyd and Vandenberghe, Convex Optimization, 2004.




A set $C$ is convex if the line segment between any two points in $C$ lies within $C$, i.e., if for any $x_1, x_2 \in C$ and for any $0 \le \lambda \le 1$, we have
\[
\lambda x_1 + (1 - \lambda) x_2 \in C.
\]




For example, a cube or a sphere in $\mathbb{R}^d$ is a convex set, but a cross (a shape like X) is not.

A function $f : \mathbb{R}^d \to \mathbb{R}$ is convex if its domain is a convex set and if for all $x_1, x_2$ in its domain, and for any $0 \le \lambda \le 1$, we have
\[
f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2).
\]




This inequality means that the line segment between $(x_1, f(x_1))$ and $(x_2, f(x_2))$ lies above the graph of $f$. A convex function looks like a cup ($\cup$). We say that $f$ is concave if $-f$ is convex. A concave function looks like a cap ($\cap$).




Some examples of convex and concave functions are (you do not need to use most of them in your homework, but knowing them is useful):




Powers: $x^p$ is convex on the set of positive real numbers when $p \ge 1$ or $p \le 0$. It is concave for $0 \le p \le 1$.

Exponential: $e^{ax}$ is convex on $\mathbb{R}$, for any $a \in \mathbb{R}$.

Logarithm: $\log(x)$ is concave on the set of positive real numbers.

Norms: Every norm on $\mathbb{R}^d$ is convex.

Max function: $f(x) = \max\{x_1, x_2, \ldots, x_d\}$ is convex on $\mathbb{R}^d$.

Log-sum-exp: The function $f(x) = \log(e^{x_1} + \cdots + e^{x_d})$ is convex on $\mathbb{R}^d$.




An important property of convex and concave functions, which you may need to use in your homework, is Jensen’s inequality. Jensen’s inequality states that if $\phi(x)$ is a convex function of $x$, we have
\[
\phi(\mathbb{E}[X]) \le \mathbb{E}[\phi(X)].
\]




In words, if we apply a convex function to the expectation of a random variable, it is less than or equal to the expected value of that convex function when its argument is the random variable. If the function is concave, the direction of the inequality is reversed.

Jensen’s inequality has a physical interpretation: Consider a set $\mathcal{X} = \{x_1, \ldots, x_N\}$ of points in $\mathbb{R}$. Corresponding to each point, we have a probability $p(x_i)$. If we interpret the probability as mass, and we put an object with mass $p(x_i)$ at location $(x_i, \phi(x_i))$, then the centre of gravity of these objects, which is in $\mathbb{R}^2$, is located at the point $(\mathbb{E}[X], \mathbb{E}[\phi(X)])$. If $\phi$ is convex (cup-shaped), the centre of gravity lies above the curve $x \mapsto \phi(x)$, and vice versa for a concave (cap-shaped) function.
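A quick numeric check of Jensen’s inequality (illustration only; the distribution and the convex function $\phi(x) = x^2$ are our own choices) is given below.

\begin{verbatim}
import numpy as np

xs = np.array([-1.0, 0.0, 2.0, 3.0])   # assumed support of X
ps = np.array([0.1, 0.4, 0.3, 0.2])    # probabilities, summing to 1

phi = lambda x: x ** 2                 # a convex function
lhs = phi(np.sum(ps * xs))             # phi(E[X])
rhs = np.sum(ps * phi(xs))             # E[phi(X)]
print(lhs, rhs, lhs <= rhs)            # the inequality holds
\end{verbatim}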










