CSC411 Homework 1

Computing: To install Python and required libraries, see the instructions on the course web page.




Weekly homeworks are individual work. See the Course Information handout [2] for detailed policies.




1. [4pts] Nearest Neighbours and the Curse of Dimensionality. In this question, you will verify the claim from lecture that "most" points in a high-dimensional space are far away from each other, and also approximately the same distance. There is a very neat proof of this fact which uses the properties of expectation and variance. If it's been a long time since you've studied these, you may wish to review the Tutorial 1 slides [3], or the Metacademy resources [4].



(a) [2pts] First, consider two independent univariate random variables X and Y sampled uniformly from the unit interval [0, 1]. Determine the expectation and variance of the random variable Z, defined as the squared distance Z = (X - Y)^2. You are allowed to evaluate integrals numerically (e.g. using scipy.integrate.quad or scipy.integrate.dblquad), but you should explain what integral(s) you are evaluating, and why.
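A minimal sketch of the numerical approach mentioned above, assuming SciPy is installed: since X and Y are independent and uniform on [0, 1], E[Z] and E[Z^2] can be written as double integrals of (x - y)^2 and (x - y)^4 over the unit square, and Var[Z] = E[Z^2] - E[Z]^2.

# Sketch using scipy.integrate.dblquad, as suggested in the question.
# dblquad expects the integrand as f(y, x), with numeric outer limits for x
# and callables giving the inner limits for y.
from scipy.integrate import dblquad

# E[Z]   = integral over [0,1]^2 of (x - y)^2 dy dx
E_Z, _ = dblquad(lambda y, x: (x - y) ** 2, 0, 1, lambda x: 0, lambda x: 1)

# E[Z^2] = integral over [0,1]^2 of (x - y)^4 dy dx
E_Z2, _ = dblquad(lambda y, x: (x - y) ** 4, 0, 1, lambda x: 0, lambda x: 1)

Var_Z = E_Z2 - E_Z ** 2
print(E_Z, Var_Z)  # should come out to roughly 1/6 and 7/180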




(b) [2pts] Now suppose we sample two points independently from a unit cube in d dimensions. Observe that each coordinate is sampled independently from [0, 1], i.e. we can view this as sampling random variables X_1, ..., X_d, Y_1, ..., Y_d independently from [0, 1]. The squared Euclidean distance can be written as R = Z_1 + ... + Z_d, where Z_i = (X_i - Y_i)^2. Using the properties of expectation and variance, determine E[R] and Var[R]. You may give your answer in terms of the dimension d, and E[Z] and Var[Z] (the answers from part (a)).
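If you want to double-check whatever closed-form answer you derive, a quick Monte Carlo estimate is easy to write. The sketch below assumes NumPy and simply samples many pairs of points from the unit cube.

import numpy as np

def squared_distance_stats(d, n_samples=100000, seed=0):
    # Sample n_samples pairs of points uniformly from the d-dimensional unit
    # cube and return the empirical mean and variance of the squared distance R.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_samples, d))
    Y = rng.uniform(0.0, 1.0, size=(n_samples, d))
    R = np.sum((X - Y) ** 2, axis=1)  # R = Z_1 + ... + Z_d
    return R.mean(), R.var()

# Compare these empirical values against your answer from part (b), e.g. for d = 10.
print(squared_distance_stats(d=10))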




(c) [for your own benefit, not to be handed in] Based on your answer to part (b), compare the mean and standard deviation of R to the maximum possible squared Euclidean distance (i.e. the distance between opposite corners of the cube). Why does this support the claim that in high dimensions, "most points are far away, and approximately the same distance"?






Footnotes:
[1] https://markus.teach.cs.toronto.edu/csc411-2018-09
[2] http://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/syllabus.pdf
[3] https://www.cs.toronto.edu/~rgrosse/courses/csc411_f18/tutorials/tut1.pdf
[4] https://metacademy.org/graphs/concepts/expectation_and_variance













2. [5pts] Decision Trees. This question is taken from a project by Lisa Zhang and Michael Guerzhoy.



In this question, you will use the scikit-learn decision tree classifier to classify real vs. fake news headlines. The aim of this question is for you to read the scikit-learn API and get comfortable with training/validation splits.




We will use a dataset of 1298 "fake news" headlines (which mostly include headlines of articles classified as biased, etc.) and 1968 "real" news headlines, where the "fake news" headlines are from https://www.kaggle.com/mrisdal/fake-news/data and the "real news" headlines are from https://www.kaggle.com/therohk/million-headlines. The data were cleaned by removing words from fake news titles that are not a part of the headline, removing special characters from the headlines, and restricting real news headlines to those after October 2016 containing the word "trump". For your interest, the cleaning script is available as clean_script.py on the course web page, but you do not need to run it. The cleaned-up data are available as clean_real.txt and clean_fake.txt on the course web page.




Each headline appears as a single line in the data file. Words in the headline are separated by spaces, so just use str.split() in Python to split the headlines into words.
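For example (the headline here is made up, not taken from the dataset):

# str.split() with no arguments splits on runs of whitespace.
headline = "some made-up headline about trump"
words = headline.split()
# words == ['some', 'made-up', 'headline', 'about', 'trump']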




You will build a decision tree to classify real vs. fake news headlines. Instead of coding the decision trees yourself, you will do what we normally do in practice: use an existing implementation. You should use the DecisionTreeClassifier included in sklearn. Note that figuring out how to use this implementation is a part of the assignment.




Here's a link to the documentation of sklearn.tree.DecisionTreeClassifier: http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html




All code should be included in the file hw1_code.py, which you submit through MarkUs [1].




(a) [1pt] Write a function load_data which loads the data, preprocesses it using a vectorizer (http://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_extraction.text), and splits the entire dataset randomly into 70% training, 15% validation, and 15% test examples.
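A possible skeleton for load_data is sketched below. It assumes the two cleaned data files from the course web page are in the working directory, uses CountVectorizer as the vectorizer (TfidfVectorizer would also be a reasonable choice), and obtains the 70/15/15 split by calling sklearn's train_test_split twice; none of these specific choices are mandated by the question.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

def load_data(real_path="clean_real.txt", fake_path="clean_fake.txt"):
    with open(real_path) as f:
        real = [line.strip() for line in f]
    with open(fake_path) as f:
        fake = [line.strip() for line in f]

    headlines = real + fake
    labels = np.array([1] * len(real) + [0] * len(fake))  # 1 = real, 0 = fake

    # Turn each headline into a vector of word counts.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(headlines)

    # First carve off 70% for training, then split the remaining 30% in half
    # to get 15% validation and 15% test.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, labels, train_size=0.7, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=0)

    return X_train, y_train, X_val, y_val, X_test, y_test, vectorizer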




(b) [1pt] Write a function select_model which trains the decision tree classifier using at least 5 different values of max_depth, as well as two different split criteria (information gain and Gini coefficient), evaluates the performance of each one on the validation set, and prints the resulting accuracies of each model. You should use DecisionTreeClassifier, but you should write the validation code yourself. Include the output of this function in your solution PDF (hw1_writeup.pdf).
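One possible shape for select_model, assuming the split returned by the load_data sketch above. The particular max_depth values below are arbitrary choices; criterion="entropy" corresponds to information gain and criterion="gini" to the Gini criterion.

from sklearn.tree import DecisionTreeClassifier

def select_model(X_train, y_train, X_val, y_val):
    best = None
    for criterion in ["entropy", "gini"]:
        for max_depth in [2, 5, 10, 20, 50]:
            clf = DecisionTreeClassifier(criterion=criterion,
                                         max_depth=max_depth,
                                         random_state=0)
            clf.fit(X_train, y_train)
            # Validation accuracy computed by hand, as required.
            acc = (clf.predict(X_val) == y_val).mean()
            print(f"criterion={criterion}, max_depth={max_depth}: "
                  f"validation accuracy = {acc:.4f}")
            if best is None or acc > best[0]:
                best = (acc, clf)
    return best[1]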



(c) [1pt] Now let's stick with the hyperparameters which achieved the highest validation accuracy. Extract and visualize the first two layers of the tree. Your visualization may look something like what is shown below, but it does not have to be an image: it is perfectly fine to display text. It may be hand-drawn. Include your visualization in your solution PDF (hw1_writeup.pdf).
[Example image of the first two layers of a decision tree omitted.]

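If you'd rather not draw the tree by hand, one option (a sketch, assuming scikit-learn 0.21 or later, and assuming best_tree and vectorizer are the objects produced by the earlier sketches) is scikit-learn's built-in text exporter:

from sklearn.tree import export_text

# On older scikit-learn versions, use vectorizer.get_feature_names() instead.
feature_names = list(vectorizer.get_feature_names_out())

# max_depth=2 limits the printout to the first two layers of splits.
print(export_text(best_tree, feature_names=feature_names, max_depth=2))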



























































































(d) [2pts] Write a function compute_information_gain which computes the information gain of a split on the training data. That is, compute I(Y; x_i), where Y is the random variable signifying whether the headline is real or fake, and x_i is the keyword chosen for the split.



Report the outputs of this function for the topmost split from the previous part, and for several other keywords.
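A minimal sketch of one way to do this, assuming the split is on presence vs. absence of the keyword in a headline, and assuming the X_train, y_train, and vectorizer objects from the earlier load_data sketch:

import numpy as np

def entropy(y):
    # Entropy (in bits) of a vector of labels.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def compute_information_gain(X_train, y_train, vectorizer, keyword):
    # I(Y; x_i) = H(Y) - H(Y | x_i), where the split x_i is
    # "does the headline contain `keyword`?"
    col = vectorizer.vocabulary_[keyword]
    present = np.asarray(X_train[:, col].todense()).ravel() > 0

    def weighted_entropy(mask):
        return 0.0 if not mask.any() else mask.mean() * entropy(y_train[mask])

    return entropy(y_train) - weighted_entropy(present) - weighted_entropy(~present)

# e.g. compute_information_gain(X_train, y_train, vectorizer, "the")  # hypothetical keyword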





























































































