Homework 4 Solution




Submission Instructions




It is recommended that you complete these exercises in Python 3 and submit your solutions as a Jupyter notebook.




You may use any other language, as long as you include a README with simple, clear instructions on how to run (and if necessary compile) your code.




Please upload all files (code, README, written answers, etc.) to Blackboard in a single zip file named {firstname}_{lastname}_DS5230/DS4220_HW_4.zip.










Exercise 1 : Gaussian Mixture Model (70pts)







In expectation maximization for the Gaussian Mixture Model, we iterate between the E-step and the M-step:




E-step:

\[
\gamma_{nk} := p(z_n = k \mid x_n; \theta)
= \frac{p(x_n \mid z_n = k; \theta)\, p(z_n = k)}{\sum_{k'=1}^{K} p(x_n \mid z_n = k'; \theta)\, p(z_n = k')}
\tag{1}
\]
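For reference, a minimal sketch of this E-step in Python (names such as pis, mus, and vars_ are my own, and diagonal covariances are assumed, as permitted later in the exercise; this is not a prescribed implementation):

    import numpy as np
    from scipy.stats import multivariate_normal

    def e_step(X, pis, mus, vars_):
        """Compute responsibilities gamma[n, k] = p(z_n = k | x_n; theta).

        X: (N, D) data, pis: (K,) mixing weights,
        mus: (K, D) means, vars_: (K, D) diagonal variances (assumption).
        """
        N, K = X.shape[0], len(pis)
        gamma = np.zeros((N, K))
        for k in range(K):
            # p(x_n | z_n = k; theta) * p(z_n = k) for every n
            gamma[:, k] = pis[k] * multivariate_normal.pdf(X, mean=mus[k], cov=np.diag(vars_[k]))
        # Normalize over k so each row sums to one, as in equation (1)
        gamma /= gamma.sum(axis=1, keepdims=True)
        return gamma

In practice you would compute this in log space, as described in the hint at the end of this exercise.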
M-step:

\[
N_k = \sum_{n=1}^{N} \gamma_{nk}
\tag{2}
\]

\[
\mu_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma_{nk}\, x_n
\tag{3}
\]

\[
\Sigma_k = \frac{1}{N_k} \sum_{n=1}^{N} \gamma_{nk}\, (x_n - \mu_k)(x_n - \mu_k)^{T}
\tag{4}
\]
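Similarly, a minimal sketch of the M-step updates (2)-(4) under the same diagonal-covariance assumption; the mixing-weight update pi_k = N_k / N is the standard one and is included here for completeness, since it is not among the numbered equations:

    import numpy as np

    def m_step(X, gamma):
        """Update pis, mus, and diagonal variances from responsibilities gamma (N, K)."""
        N, D = X.shape
        Nk = gamma.sum(axis=0)                 # equation (2): N_k = sum_n gamma_nk
        pis = Nk / N                           # standard mixing-weight update (assumption)
        mus = (gamma.T @ X) / Nk[:, None]      # equation (3): responsibility-weighted means
        # equation (4), kept diagonal: weighted variance along each dimension
        vars_ = np.zeros((len(Nk), D))
        for k in range(len(Nk)):
            diff = X - mus[k]
            vars_[k] = (gamma[:, k, None] * diff ** 2).sum(axis=0) / Nk[k]
        return pis, mus, vars_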

















































































1. (10pts) Explain how cluster assignment differs between the cluster assignment step (in K-means) and the E-step (in GMM).









2. (10pts) Explain how the M-step in GMM is different from the centroid update step in K-means.



3. (50pts) Now implement Expectation Maximization and apply it to 3 datasets: dataset1.txt, dataset2.txt, and dataset3.txt. Each dataset contains two columns of features and a third column with labels. To evaluate your clustering results, consider 3 different metrics (a short scikit-learn sketch for computing them follows the list below):



Note that you may use functions from scikit-learn to compute the metrics, but you need to implement the GMM algorithm from scratch and are NOT allowed to directly call the GMM functions from scikit-learn.




The normalized mutual information (NMI)

\[
\mathrm{NMI}(Y; Z) = \frac{I(Y; Z)}{\sqrt{H(Y)\, H(Z)}}
\]

scikit-learn: sklearn.metrics.normalized_mutual_info_score




The Calinski-Harabaz (CH) index $\mathrm{tr}(S_B)/\mathrm{tr}(S_W)$. scikit-learn: sklearn.metrics.calinski_harabaz_score




The Silhouette coefficient (SC) (see attached note). scikit-learn: sklearn.metrics.silhouette_score
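A possible way to compute these three metrics with scikit-learn (labels_true, labels_pred, and X are placeholders for your own data; note that newer scikit-learn versions spell the CH function calinski_harabasz_score):

    from sklearn.metrics import normalized_mutual_info_score, silhouette_score
    try:
        # older scikit-learn versions, as referenced in this assignment
        from sklearn.metrics import calinski_harabaz_score as ch_score
    except ImportError:
        # newer versions renamed the function
        from sklearn.metrics import calinski_harabasz_score as ch_score

    def evaluate_clustering(X, labels_true, labels_pred):
        """Return (NMI, CH index, Silhouette coefficient) for one clustering result."""
        nmi = normalized_mutual_info_score(labels_true, labels_pred)
        ch = ch_score(X, labels_pred)
        sc = silhouette_score(X, labels_pred)
        return nmi, ch, sc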




As a simplifying assumption, you may take the covariance matrix for each cluster to be diagonal. Once again, your implementation should perform multiple restarts and accept initial values for the mean and variance as inputs. Initialize the mean for each of your clusters by sampling from a Gaussian distribution centered on a random point in your data. Initialize the variance along each dimension to a random fraction of the total variance in the data.
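One possible reading of this initialization scheme as code (the spread of the Gaussian around the chosen data point and the range of the random fraction are my own assumptions, since the exercise leaves them open):

    import numpy as np

    def init_params(X, K, rng=np.random.default_rng()):
        """Means sampled around random data points; diagonal variances set to a
        random fraction of the total data variance; uniform mixing weights."""
        N, D = X.shape
        total_var = X.var(axis=0)                          # total variance per dimension
        centers = X[rng.choice(N, size=K, replace=False)]  # random points from the data
        # Gaussian around each chosen point; scale = sqrt(total variance) is an assumption
        mus = centers + rng.normal(scale=np.sqrt(total_var), size=(K, D))
        vars_ = rng.uniform(0.1, 1.0, size=(K, D)) * total_var  # random fraction (assumed range)
        pis = np.full(K, 1.0 / K)
        return pis, mus, vars_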




For each dataset, plot the log likelihood, the CH index, the SC, and the NMI as a function of $K = 2, 3, 4, 5$. Also include the scatter plots that you made for k-means, using $z_n = \arg\max_k \gamma_{nk}$ to determine the cluster assignments.
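A sketch of the outer experiment loop, assuming run_em is your own EM driver returning the final log likelihood and responsibilities, evaluate_clustering is the metric helper sketched above, and X and labels_true come from the loaded dataset:

    import matplotlib.pyplot as plt

    results = {"loglik": [], "ch": [], "sc": [], "nmi": []}
    Ks = [2, 3, 4, 5]
    for K in Ks:
        loglik, gamma = run_em(X, K)        # your EM implementation (assumed interface)
        z = gamma.argmax(axis=1)            # hard assignments z_n = argmax_k gamma_nk
        nmi, ch, sc = evaluate_clustering(X, labels_true, z)
        results["loglik"].append(loglik)
        results["ch"].append(ch)
        results["sc"].append(sc)
        results["nmi"].append(nmi)
        plt.figure()
        plt.scatter(X[:, 0], X[:, 1], c=z, s=10)   # scatter plot colored by cluster
        plt.title(f"GMM hard assignments, K={K}")

    for name, vals in results.items():
        plt.figure()
        plt.plot(Ks, vals, marker="o")
        plt.xlabel("K")
        plt.ylabel(name)
    plt.show()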



















































Hint: When implementing your E-step, you need to be careful about numerical underflow and overflow when calculating




\[
\gamma_{nk} = p(x_n, z_n = k \mid \theta) \Big/ \sum_{l} p(x_n, z_n = l \mid \theta).
\]




A common trick is to first calculate the log probabilities

\[
\omega_{nk} = \log p(x_n, z_n = k \mid \theta),
\qquad
\omega_n = \max_k \omega_{nk}.
\]


Before exponentiating you can now first subtract $\omega_n$, which is equivalent to dividing both the numerator and the denominator by $\exp \omega_n$:

\[
\gamma_{nk} = \exp(\omega_{nk} - \omega_n) \Big/ \sum_{l} \exp(\omega_{nl} - \omega_n).
\]




Explain in your code comments why this helps prevent underflow/overflow.
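A sketch of this trick in code, assuming log_joint[n, k] holds $\omega_{nk} = \log p(x_n, z_n = k \mid \theta)$:

    import numpy as np

    def responsibilities_from_log_joint(log_joint):
        """Numerically stable normalization over k (the log-sum-exp trick).

        Subtracting the per-row maximum before exponentiating keeps every argument
        of exp() at or below zero, so nothing overflows, and the largest term
        becomes exp(0) = 1, so the denominator cannot underflow to zero.
        """
        omega_n = log_joint.max(axis=1, keepdims=True)   # omega_n = max_k omega_nk
        shifted = np.exp(log_joint - omega_n)            # exp(omega_nk - omega_n)
        return shifted / shifted.sum(axis=1, keepdims=True)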













Exercise 3 : Learning Title Topics using Latent Dirichlet Allocation (40pts)







In this section you will apply Latent Dirichlet Allocation (LDA) to the academic publication titles dataset (publications.txt) and compare the results with what you got when doing Principal Component Analysis (PCA). Through this comparison, you will hopefully see the similarity between them, in the sense that LDA can also be regarded as a form of matrix factorization.




For PCA and LDA, you may use packages from scikit-learn and do not need to implement them from scratch. You are going to use scikit-learn for both algorithms, including converting the texts to word count vectors.







1. (0pt) First, you need to clean up the raw dataset. I wrote a script prep.py which does tokenization, stemming, filtering of non-alphanumeric characters, numeric replacement, and stop word removal. When it finishes, the texts are written into a file called titles prep.txt, so you only need to do this preprocessing once and can import that file as the actual dataset.



In this question you only need to run the script prep.py. You can either import the function in a notebook or directly execute the Python script in a terminal. The purpose is to get you familiar with basic text preprocessing if you are not already. For the following questions, please load titles prep.txt as the dataset.




2. (20pts) Apply LDA to the dataset.









Step 1. Load the dataset and convert each title to a word count vector using the function CountVectorizer in scikit-learn. Set the parameter min_df to 800; your vocabulary size will then be 1023. (You can also try other methods to truncate the vocabulary and get a size of roughly 1000.)
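A possible Step 1, assuming each line of titles prep.txt is one preprocessed title:

    from sklearn.feature_extraction.text import CountVectorizer

    with open("titles prep.txt") as f:           # preprocessed titles, one per line (assumed layout)
        titles = [line.strip() for line in f if line.strip()]

    vectorizer = CountVectorizer(min_df=800)     # keep only words appearing in >= 800 titles
    counts = vectorizer.fit_transform(titles)    # sparse (n_titles, vocab_size) count matrix
    print(len(vectorizer.vocabulary_))           # vocabulary size (expected around 1023)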







Step 2. Fit the word count vectors with LDA, setting the number of topics to 10, 20, and 50, respectively. For each experiment, print out the top 10 words for each topic in descending order. Explain the trend and the changes as you vary the number of topics.
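A possible Step 2 using scikit-learn's LatentDirichletAllocation; counts and vectorizer are assumed from Step 1:

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # words ordered by their column index in the count matrix
    vocab = np.array(sorted(vectorizer.vocabulary_, key=vectorizer.vocabulary_.get))

    for n_topics in (10, 20, 50):
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        lda.fit(counts)
        print(f"--- {n_topics} topics ---")
        for k, topic in enumerate(lda.components_):
            top = topic.argsort()[::-1][:10]          # indices of the 10 largest weights
            print(f"topic {k}:", " ".join(vocab[top]))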




3. (5pts) Now apply PCA to the dataset.




Step 1. Similarly, you need to convert the texts to word count vectors.




Step 2. By setting the number of principal components to 10, 20, and 50, also list the top 10 words for each component (topic) in descending order. Explain the trend and the changes as you vary the number of principal components.




To be more specific, normalize each component eigenvector and sort its elements by absolute value. Then you can retrieve the top words with the largest values using your vocabulary.
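A possible implementation of this procedure; counts and vocab are assumed to be the count matrix and the index-ordered vocabulary from the LDA part:

    import numpy as np
    from sklearn.decomposition import PCA

    for n_comp in (10, 20, 50):
        pca = PCA(n_components=n_comp)
        pca.fit(counts.toarray())                      # PCA needs a dense matrix
        print(f"--- {n_comp} components ---")
        for k, comp in enumerate(pca.components_):
            comp = comp / np.linalg.norm(comp)         # normalize the component eigenvector
            top = np.abs(comp).argsort()[::-1][:10]    # top 10 words by absolute weight
            print(f"component {k}:", " ".join(vocab[top]))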




4. (5pts) Based on the results, do you think LDA outperforms PCA in the topic modeling task? Explain why or why not.






















