Homework #9 Solution

[16 points] Gaussian Mixture Models & EM

Consider a Gaussian mixture model with K components (k ∈ {1, . . . , K}), each having mean µk, variance σk², and mixture weight πk. All these are parameters to be learned, and we subsume them in the set θ. Further, we are given a dataset X = {xi}, where xi ∈ R. We also use Z = {zi} to denote the latent variables, such that zi = k implies that xi is generated from the kth Gaussian.

 

(a) What is the log-likelihood of the data log p(X; θ) according to the Gaussian Mixture Model? (Use µk, σk, πk, K, xi, and X.) Don't use any abbreviations.

 

Your answer:
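For reference, the standard expression, with the Gaussian density written out in full as the question requires (N denotes the number of data points in X):

\[
\log p(X;\theta) \;=\; \sum_{i=1}^{N} \log \sum_{k=1}^{K} \pi_k \,\frac{1}{\sqrt{2\pi\sigma_k^2}}\, \exp\!\left(-\frac{(x_i-\mu_k)^2}{2\sigma_k^2}\right)
\]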

(b) For learning θ using the EM algorithm, we need the conditional distribution of the latent variables Z given the current estimate of the parameters θ(t) (we will use the superscript (t) for parameter estimates at step t). What is the posterior probability p(zi = k|xi; θ(t))? To simplify, wherever possible, use N(xi|µk, σk) to denote a Gaussian distribution over xi ∈ R having mean µk and variance σk².

Your answer:
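For reference, Bayes' rule gives the standard posterior (the "responsibility" of component k for point xi):

\[
p(z_i = k \mid x_i;\theta^{(t)}) \;=\; \frac{\pi_k^{(t)}\, \mathcal{N}\!\big(x_i \mid \mu_k^{(t)}, \sigma_k^{(t)}\big)}{\sum_{j=1}^{K} \pi_j^{(t)}\, \mathcal{N}\!\big(x_i \mid \mu_j^{(t)}, \sigma_j^{(t)}\big)}
\]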

(c) Find Ezi|xi;θ(t) [log p(xi, zi; θ)]. Denote p(zi = k|xi; θ(t)) as zik, and use all previous notation simplifications.

Your answer:
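For reference, since p(xi, zi = k; θ) = πk N(xi|µk, σk), the expectation takes the standard complete-data form:

\[
\mathbb{E}_{z_i \mid x_i;\theta^{(t)}}\big[\log p(x_i, z_i;\theta)\big]
\;=\; \sum_{k=1}^{K} z_{ik}\left[\log \pi_k \;-\; \tfrac{1}{2}\log\!\big(2\pi\sigma_k^2\big) \;-\; \frac{(x_i-\mu_k)^2}{2\sigma_k^2}\right]
\]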

(d) θ(t+1) is obtained as the maximizer of ∑_{i=1}^{N} Ezi|xi;θ(t) [log p(xi, zi; θ)]. Find µk(t+1), πk(t+1), and σk(t+1), by using your answer to the previous question.

Your answer:
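For reference, setting the derivatives to zero (with a Lagrange multiplier for the constraint ∑k πk = 1) gives the standard updates. Writing Nk = ∑_{i=1}^{N} zik:

\[
\mu_k^{(t+1)} = \frac{\sum_{i=1}^{N} z_{ik}\, x_i}{N_k}, \qquad
\big(\sigma_k^{(t+1)}\big)^2 = \frac{\sum_{i=1}^{N} z_{ik}\,\big(x_i - \mu_k^{(t+1)}\big)^2}{N_k}, \qquad
\pi_k^{(t+1)} = \frac{N_k}{N}
\]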

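Putting parts (b)-(d) together, here is a minimal NumPy sketch of one EM iteration for this 1-D mixture; the function and variable names are illustrative, not part of the assignment:

import numpy as np

def em_step(x, mu, sigma2, pi):
    """One EM iteration for a 1-D Gaussian mixture.
    x: (N,) data; mu, sigma2, pi: (K,) means, variances, weights."""
    # E-step: responsibilities zik = p(zi = k | xi; theta(t)), as in part (b)
    diff = x[:, None] - mu[None, :]                       # (N, K)
    dens = np.exp(-0.5 * diff**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    z = pi * dens
    z /= z.sum(axis=1, keepdims=True)                     # normalize over k
    # M-step: closed-form maximizers from part (d)
    Nk = z.sum(axis=0)                                    # effective counts
    mu_new = (z * x[:, None]).sum(axis=0) / Nk
    sigma2_new = (z * (x[:, None] - mu_new)**2).sum(axis=0) / Nk
    pi_new = Nk / x.size
    return mu_new, sigma2_new, pi_new

Iterating this step until log p(X; θ) stops increasing implements the full EM algorithm; each iteration is guaranteed not to decrease the log-likelihood.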
(e) How are kMeans and the Gaussian Mixture Model related? (There are three conditions.)

Your answer:
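For reference, the standard connection: EM for this mixture reduces to kMeans when (i) the mixture weights are held fixed and equal, πk = 1/K; (ii) all components share a single common variance, σk = σ for every k; and (iii) σ → 0, in which limit the responsibilities from part (b) collapse to hard one-hot assignments of each xi to its nearest mean, so the µk update from part (d) becomes exactly the cluster-mean update of kMeans.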
