Homework Assignment 1

Red team, blue team: Identifying political persuasion on Reddit




Introduction




This assignment will give you experience with a social media corpus (i.e., a collection of posts from Reddit), Python programming, part-of-speech (PoS) tags, sentiment analysis, and machine learning with scikit-learn.




Your task is to split posts into sentences, tag them with a PoS tagger that we will provide, gather some feature information from each post, learn models, and use these to classify political persuasion. Sentiment analysis is an important topic in computational linguistics in which we quantify subjective aspects of language. These aspects can range from biases in social media for marketing, to a spectrum of cognitive behaviours for disease diagnosis.




Please check the course bulletin board for announcements and discussion pertaining to this assignment.




Reddit Corpus




We have curated data from Reddit by scraping subreddits, using Pushshift, by perceived political affiliation. Table 1 shows the subreddits assigned to each of four categories: left-leaning, right-leaning, center/neutral, and 'alternative facts'. Although the first three (at least) are often viewed as ordinal segments on a unidimensional spectrum, here we treat these categories as nominal classes. Here, we use the terms 'post' and 'comment' interchangeably to mean 'datum', i.e., a short segment of user-produced text.




Left (598,944): twoXChromosomes (7,720,661), occupyWallStreet (397,538), lateStageCapitalism (634,962),
    progressive (246,435), socialism (1,082,305), demsocialist (5,269), Liberal (151,350)

Center (599,872): news (27,829,911), politics (60,354,767), energy (416,926), canada (7,225,005),
    worldnews (38,851,904), law (464,236)

Right (600,002): theNewRight (19,466), whiteRights (118,008), Libertarian (3,886,156),
    AskTrumpSupporters (1,007,590), The_Donald (21,792,999), new_right (25,166),
    Conservative (1,929,977), tea_party (1,976)

Alt (200,272): conspiracy (6,767,099), 911truth (79,868)

Table 1: Subreddits assigned to each category, with the total posts in each. Since there are over 181M posts, we sample randomly within each category; the resulting number of available posts for each category in this assignment is shown next to each category name.




These data are stored on the teach.cs servers under /u/cs401/A1/data/. To save space, these files should only be accessed from that directory (and not copied). All data are in the JSON format.
















Copyright © 2020, Frank Rudzicz, Chloe Pou-Prom. All rights reserved.




Each datum has several fields of interest, including:

ups: the integer number of upvotes.
downs: the integer number of downvotes.
score: [ups - downs].
controversiality: a combination of the popularity of a post and the ratio between ups and downs.
subreddit: the subreddit from which the post was sampled.
author: the author ID.
body: the main textual message of the post, and our primary interest.
id: the unique identifier of the comment.
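
For orientation, here is a minimal sketch of inspecting one category file. It assumes each file (e.g., Left) parses to a list whose entries can be decoded into the fields above; the entries may themselves be JSON strings, so verify against the actual files rather than treating this as the required code:

import json
import os

data_dir = '/u/cs401/A1/data/'            # one file per category: Left, Center, Right, Alt
with open(os.path.join(data_dir, 'Left')) as f:
    data = json.load(f)

print(len(data))                          # number of comments in this category
first = data[0]
if isinstance(first, str):                # entries may themselves be JSON strings
    first = json.loads(first)
print(first['id'], first['subreddit'], first['body'][:80])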
Your tasks




Pre-processing, tokenizing, and tagging [18 marks]



The comments, as given, are not in a form amenable to feature extraction for classification; there is too much 'noise'. Therefore, the first step is to complete a Python program named a1_preproc.py, in accordance with Section 5, that will read subsets of JSON files and, for each comment, perform the following steps, in order, on the 'body' field of each selected comment:




1. Replace all newline characters with spaces.

2. Replace HTML character codes (i.e., &...;) with their ASCII equivalent (see http://www.asciitable.com).

3. Remove all URLs (i.e., tokens beginning with http or www).

4. Remove duplicate spaces between tokens. Each token must now be separated by a single space.

5. Apply the following steps using spaCy (see below):

   Tagging: Tag each token with its part-of-speech. A tagged token consists of a word, the '/' symbol, and the tag (e.g., dog/NN). See below for information on how to use the tagging module. The tagger can make mistakes.

   Lemmatization: Replace the token itself with token.lemma_. E.g., words/NNS becomes word/NNS. If the lemma begins with a dash ('-') when the token doesn't (e.g., -PRON- for I), just keep the token.

   Sentence segmentation: Add a newline between each sentence. For this assignment, we will use spaCy's sentencizer component to segment sentences in a post. Remember to also mark the end of the post with a newline (watch out for duplicates!).







spaCy: spaCy is a Python library for natural language processing tasks, especially information extraction. Here, we only use its ability to obtain part-of-speech tags and lemmas, along with sentence segmentation. For example:







import spacy

nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
sentencizer = nlp.create_pipe("sentencizer")
nlp.add_pipe(sentencizer)

utt = nlp(u"I know words. I have the best words")
for sent in utt.sents:
    print(sent.text)
    for token in sent:
        print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
              token.shape_, token.is_alpha, token.is_stop)
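
For the non-spaCy steps, simple regular expressions usually suffice. The following is a minimal sketch of Steps 1-4 only; the function name and exact patterns are illustrative assumptions, not the official template code, and html.unescape is just one way to handle Step 2 (it maps codes to Unicode characters rather than strictly ASCII):

import re
from html import unescape

def preproc_text(comment):
    # Illustrative sketch of Steps 1-4 only; names and patterns are assumptions.
    modComm = comment.replace('\n', ' ')                # Step 1: newlines -> spaces
    modComm = unescape(modComm)                         # Step 2: HTML character codes -> characters
    modComm = re.sub(r'(?:http|www)\S+', '', modComm)   # Step 3: remove URL tokens
    modComm = re.sub(r'\s+', ' ', modComm).strip()      # Step 4: single spaces between tokens
    return modComm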
Functionality: The a1_preproc.py program reads a subset of the (static) input JSON files, retains the fields you care about, including 'id', which you'll soon use as a key to obtain pre-computed features, and 'body', which is text that you preprocess and replace before saving the result to an output file. To each comment, also add a cat field, with the name of the file from which the comment was retrieved (e.g., 'Left', 'Alt', ...).







The program takes three arguments: your student ID (mandatory), the output file (mandatory), and the maximum number of lines to sample from each category file (optional; default=10,000). For example, if you are student 999123456 and want to create preproc.json, you'd run:




python a1_preproc.py 999123456 -o preproc.json







The output of a1_preproc.py will be used in Task 2.







Your task: Copy the template from /u/cs401/A1/code/a1_preproc.py. There are two functions you need to modify:




In preproc1, fill out each if statement with the associated preprocessing step above.

In main, replace the lines marked with TODO with the code they describe. By default, args.a1_dir points to /u/cs401/A1/, so load the data from args.a1_dir's data subdirectory.







For this section, you may only use standard Python libraries, except for Step 5. For debugging, you are advised to either use a different input folder with your own JSON data, or pass strings directly to preproc1.




Subsampling: By default, you should only sample 10,000 lines from each of the Left, Center, Right, and Alt files, for a total of 40,000 lines. From each file, start sampling lines at index [ID % len(X)], where ID is your student ID, % is the modulo arithmetic operator, and len(X) is the number of comments in the given input file (i.e., len(data), once the JSON parse is done). Use circular list indexing if your start index is too close to the 'end'.
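
A minimal sketch of the circular sampling logic, assuming each category file parses to a list of comments (the function name is an illustrative assumption, not the official template code):

import json

def sample_lines(json_path, student_id, max_lines=10000):
    # Illustrative sketch of the subsampling rule.
    with open(json_path) as f:
        data = json.load(f)
    start = student_id % len(data)                      # start index: ID % len(X)
    # circular list indexing, in case start + max_lines runs past the end
    return [data[(start + i) % len(data)] for i in range(min(max_lines, len(data)))]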










































































































Feature extraction [22 marks]



The second step is to complete a Python program named a1_extractFeatures.py, in accordance with Section 5, that takes the preprocessed comments from Task 1, extracts features that are relevant to bias detection, and builds an npz data file that will be used to train models and classify comments in Task 3.







For each comment, you need to extract 173 features and write these, along with the category, to a single NumPy array. These features are listed below. Several of these features involve counting tokens based on their tags; for example, counting the number of adverbs in a comment involves counting the number of tokens that have been tagged as RB, RBR, or RBS (a short counting sketch appears after the note below). Table 4 explicitly defines some of these features (many of which we have provided as constants in the template); other definitions are available on CDF in /u/cs401/Wordlists/. You may copy and modify these files, but do not change their filenames.







1. Number of words in uppercase (at least 3 letters long)
2. Number of first-person pronouns
3. Number of second-person pronouns
4. Number of third-person pronouns
5. Number of coordinating conjunctions
6. Number of past-tense verbs
7. Number of future-tense verbs
8. Number of commas
9. Number of multi-character punctuation tokens
10. Number of common nouns
11. Number of proper nouns
12. Number of adverbs
13. Number of wh- words
14. Number of slang acronyms
15. Average length of sentences, in tokens
16. Average length of tokens, excluding punctuation-only tokens, in characters
17. Number of sentences
18. Average of AoA (100-700) from Bristol, Gilhooly, and Logie norms
19. Average of IMG from Bristol, Gilhooly, and Logie norms
20. Average of FAM from Bristol, Gilhooly, and Logie norms
21. Standard deviation of AoA (100-700) from Bristol, Gilhooly, and Logie norms
22. Standard deviation of IMG from Bristol, Gilhooly, and Logie norms
23. Standard deviation of FAM from Bristol, Gilhooly, and Logie norms
24. Average of V.Mean.Sum from Warringer norms
25. Average of A.Mean.Sum from Warringer norms
26. Average of D.Mean.Sum from Warringer norms
27. Standard deviation of V.Mean.Sum from Warringer norms
28. Standard deviation of A.Mean.Sum from Warringer norms
29. Standard deviation of D.Mean.Sum from Warringer norms
30-173. LIWC/Receptiviti features




Note: All of the provided wordlists contain tokens in lowercase. After extracting feature 1 above, you may convert the tokens in the comment to lowercase as well. Take care to modify only the text and not the PoS tags; i.e., Dog/NN should become dog/NN and not dog/nn.
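
As a concrete illustration of the tag-based counts, here is a minimal sketch for feature 12 (adverbs) on a preprocessed comment whose tokens look like word/TAG; the helper name is an assumption, not part of the template:

def count_adverbs(comment):
    # comment is a preprocessed body whose tokens look like word/TAG,
    # e.g. "he/PRP run/VBD quickly/RB ./."
    adverb_tags = {'RB', 'RBR', 'RBS'}
    count = 0
    for token in comment.split():
        # split on the last '/' so that words containing '/' are still handled
        word, _, tag = token.rpartition('/')
        if tag in adverb_tags:
            count += 1
    return count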

























Functionality: The a1_extractFeatures.py program reads a preprocessed JSON file and extracts features for each comment therein, producing and saving a D x 174 NumPy array, where the ith row is the features for the ith comment, followed by an integer for the class (0: Left, 1: Center, 2: Right, 3: Alt), as per the cat JSON field.




The program takes two arguments: the input filename (i.e., the output of a1_preproc), and the output filename. For example, given input preproc.json and the desired output feats.npz, you'd run:







python a1_extractFeatures.py -i preproc.json -o feats.npz







The output of a1_extractFeatures.py will be used in Task 3.







Your task: Copy the template from /u/cs401/A1/code/a1_extractFeatures.py. There are two functions you need to modify:







In extract1, extract the first 29 of the aforementioned features from the input string. Features 30-173 should be extracted in extract2.

In main, call extract1 and extract2 on each datum and add the results (+ the class) to the feats array.




When your feature extractor works to your satisfaction, build feats.npz from all input data.




Norms: Lexical norms are aggregate subjective scores given to words by a large group of individuals.




Each type of norm assigns a numerical value to each word. Here, we use two sets of norms:




Bristol+GilhoolyLogie: These are found in /u/cs401/Wordlists/BristolNorms+GilhoolyLogie.csv, specifically the fourth, fifth, and sixth columns. These measure the Age-of-acquisition (AoA), imageability (IMG), and familiarity (FAM) of each word, which we can use to measure lexical complexity. More information can be found, for example, here.




Warringer: These are found in /u/cs401/Wordlists/Ratings_Warriner_et_al.csv, specifically the third, sixth, and ninth columns. These norms measure the valence (V), arousal (A), and dominance (D) of each word, according to the VAD model of human affect and emotion. More information on this particular data set can be found here.



When you compute features 18-29, only consider those words that exist in the respective norms file. Assume the default value for all of the above features to be zero. Treat the mean and standard deviation of zero words to be zero.
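
A minimal sketch of the norm-based statistics for one comment, assuming the Bristol CSV has already been loaded into a dict mapping lowercase word to its (AoA, IMG, FAM) values; the dict and function names are illustrative assumptions:

import numpy as np

def bristol_stats(tokens, bristol):
    # tokens: lowercase words from the comment (PoS tags stripped)
    # bristol: dict mapping word -> (AoA, IMG, FAM), built from the CSV
    scores = np.array([bristol[t] for t in tokens if t in bristol], dtype=float)
    if scores.size == 0:
        return np.zeros(6)                 # no words found in the norms file: default to zero
    means = scores.mean(axis=0)            # features 18-20: mean AoA, IMG, FAM
    sds = scores.std(axis=0)               # features 21-23: SD of AoA, IMG, FAM
    return np.concatenate([means, sds])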




LIWC/Receptiviti: The Linguistic Inquiry & Word Count (LIWC) tool has been a standard in a variety of NLP research, especially around authorship and sentiment analysis. This tool provides 85 measures mostly related to word choice; more information can be found here. The company Receptiviti provides a superset of these features, which also includes 59 measures of personality derived from text. The company has graciously donated access to its API for the purposes of this course.







To simplify things, we have already extracted these 144 features for you. Simply copy the pre-computed features from the appropriate uncompressed npy files stored in /u/cs401/A1/feats/. Specifically:




Comment IDs are stored in IDs.txt files (e.g., Alt_IDs.txt). When processing a comment, find the index (row) i of the ID in the appropriate ID text file for the category, and copy the 144 elements of that row from the associated feats.dat.npy file.

The file feats.txt provides the names of these features, in the order provided. For this assignment, these names will suffice as to their meaning, but you are welcome to obtain your own API license from Receptiviti in order to get access to their documentation.
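
A minimal sketch of that lookup; the per-category feats file name used here is an assumption, so check the actual file names in /u/cs401/A1/feats/ before relying on it:

import numpy as np

def liwc_features(comment_id, cat, feats_dir='/u/cs401/A1/feats/'):
    # Illustrative sketch; file naming per category is assumed, not confirmed.
    ids = open(feats_dir + cat + '_IDs.txt').read().split()
    liwc = np.load(feats_dir + cat + '_feats.dat.npy')   # shape: (len(ids), 144)
    row = ids.index(comment_id)                          # row i of this comment's ID
    return liwc[row]                                     # the 144 LIWC/Receptiviti values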





















Experiments and classification [30 marks]



The third step is to use the features extracted in Task 2 to classify comments using the scikit-learn machine learning package. Here, you will modify various hyper-parameters and interpret the results analytically. As everyone has different slices of the data, there are no expectations on overall accuracy, but you are expected to discuss your findings with scientific rigour. Copy the template from /u/cs401/A1/code/a1_classify.py and complete the main body and the functions for the following experiments according to the specifications therein.







The program takes two arguments: the input feature file (the output of a1_extractFeatures), and an output directory.







python a1_classify.py -i feats.npz -o .







You should create the output directory if it doesn't already exist. In main, you are expected to load the data from the input file, partition the input into a train and test set, and call the experiment functions in order. Use the train_test_split method to split the data into a random 80% for training and 20% for testing. For Part 3.3, use the entire loaded data set.
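
A minimal sketch of that setup, assuming the feature array was saved under NumPy's default key 'arr_0' in feats.npz (the key is an assumption; use whatever your a1_extractFeatures.py actually saved):

import numpy as np
from sklearn.model_selection import train_test_split

data = np.load('feats.npz')['arr_0']      # shape: (N, 174); last column is the class
X, y = data[:, :173], data[:, 173]        # first 173 columns: features; last column: class
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)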










3.1 Classifiers




Train the following 5 classifiers (see hyperlinks for API) with fit(X_train, y_train):




SGDClassifier: support vector machine with a linear kernel.

GaussianNB: a Gaussian naive Bayes classifier.

RandomForestClassifier: with a maximum depth of 5, and 10 estimators.

MLPClassifier: a feed-forward neural network, with alpha = 0.05.

AdaBoostClassifier: with the default hyper-parameters.



Here, X_train is the first 173 columns of your training data, and y_train is the last column. Obtain predicted labels with these classifiers using predict(X_test), where X_test is the first 173 columns of your testing data. Obtain the 4 x 4 confusion matrix C using confusion_matrix. Given that the element at row i, column j in C (i.e., c_{i,j}) is the number of instances belonging to class i that were classified as class j, compute the following manually, using the associated function templates:










Accuracy: the total number of correctly classified instances over all classifications: A = \sum_i c_{i,i} / \sum_{i,j} c_{i,j}.

Recall: for each class k, the fraction of cases that are truly class k that were classified as k: R(k) = c_{k,k} / \sum_j c_{k,j}.

Precision: for each class k, the fraction of cases classified as k that truly are k: P(k) = c_{k,k} / \sum_i c_{i,k}.
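
A minimal sketch of these manual computations from a confusion matrix C; it mirrors the definitions above but is not necessarily the template's exact function signatures:

import numpy as np

def accuracy(C):
    # fraction of all instances that lie on the diagonal of the confusion matrix
    return np.diag(C).sum() / C.sum()

def recall(C):
    # per-class recall: each diagonal element over its row sum (true instances of that class)
    return np.diag(C) / C.sum(axis=1)

def precision(C):
    # per-class precision: each diagonal element over its column sum (instances classified as that class)
    return np.diag(C) / C.sum(axis=0)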




Write the results to the text file a1_3.1.txt in the output directory. You must write to file using the format strings provided in the template. If you do not follow the format, you may receive a mark of zero. For each classifier, you will print the accuracy, recall, precision, and confusion matrix. You may include a written analysis if you are so inclined, but only after the results.







3.2 Amount of training data




Many researchers attribute the success of modern machine learning to the sheer volume of data that is now available. Modify the amount of data that is used to train your preferred classifier from above in five increments: 1K, 5K, 10K, 15K, and 20K. These can be sampled arbitrarily from the training set in Section 3.1. Using only the classification algorithm with the highest accuracy from Section 3.1, report the accuracies of the classifier to the file a1_3.2.txt using the format string provided in the template. On one or more lines following the reported accuracies, comment on the changes to accuracy as the number of training samples increases, including at least two sentences on a possible explanation. Is there an expected trend? Do you see such a trend? Hypothesize as to why or why not.
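
A minimal sketch of the loop, where best_clf is a fresh instance of your best classifier from 3.1 and the split comes from Section 3.1; the function and variable names are illustrative assumptions:

from sklearn.base import clone
from sklearn.metrics import accuracy_score

def training_size_accuracies(best_clf, X_train, y_train, X_test, y_test,
                             sizes=(1000, 5000, 10000, 15000, 20000)):
    # Illustrative sketch: retrain a fresh copy of the best 3.1 classifier per increment.
    accs = []
    for n in sizes:
        clf = clone(best_clf)                # untrained copy with the same hyper-parameters
        clf.fit(X_train[:n], y_train[:n])    # any n samples from the training set will do
        accs.append(accuracy_score(y_test, clf.predict(X_test)))
    return accs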







3.3 Feature analysis




Certain features may be more or less useful for classification, and too many can lead to overfitting or other problems. Here, you will select the best features for classification using SelectKBest according to the f_classif metric, as in:







from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif

selector = SelectKBest(f_classif, you_figure_it_out)
X_new = selector.fit_transform(X_train, y_train)
pp = selector.pvalues_




In the example above, pp stores the p-values associated with running a statistical test (for f_classif, an ANOVA F-test) on each feature. A smaller value means the associated feature better separates the classes. Do this:




1. For the 32K training set and each number of features k = {5, 50}, find the best k features according to this approach. Write the associated p-values to a1_3.3.txt using the format strings provided.

2. Train the best classifier from Section 3.1 for each of the 1K training set and the 32K training set, using only the best k = 5 features. Write the accuracies on the full test set of both classifiers to a1_3.3.txt using the format strings provided.

3. Extract the indices of the top k = 5 features using the 1K training set and take the intersection with the top k = 5 features from the 32K training set. Write using the format strings provided.

4. Format the top k = 5 feature indices extracted from the 32K training set to file using the format string provided.

Following the above, answer the following questions:

- Provide names for the features found in the above intersection of the top k = 5 features. If any, provide a possible explanation as to why these features may be especially useful.

- Are p-values generally higher or lower given more or less data? Why or why not?

- Name the top 5 features chosen for the 32K training case. Hypothesize as to why those particular features might differentiate the classes.






3.4 Cross-validation




Many papers in machine learning stick with a single subset of data for training and another for testing (occasionally with a third for validation). This may not be the most honest approach. Is the best classifier from Section 3.1 really the best? For each of the classifiers in Section 3.1, run 5-fold cross-validation given all the initially available data. Specifically, use KFold. Set the shuffle argument to true.
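
A minimal sketch of that loop for one classifier; the function name and the choice to collect only accuracy here are illustrative, not the template's required structure:

import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

def kfold_accuracies(clf, X, y, n_splits=5):
    # 5-fold cross-validation over all available data, with shuffling as required
    kf = KFold(n_splits=n_splits, shuffle=True)
    accs = []
    for train_idx, test_idx in kf.split(X):
        fold_clf = clone(clf)                            # fresh, untrained copy per fold
        fold_clf.fit(X[train_idx], y[train_idx])
        accs.append(accuracy_score(y[test_idx], fold_clf.predict(X[test_idx])))
    return np.array(accs)                                # one accuracy per fold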




For each fold, obtain accuracy on the test partition after training on the rest for each classifier. Report the mean accuracy of each classifier across the 5 folds, in the order specified in 3.1, to a1_3.4.txt using the format strings provided. Next, determine whether the accuracy of your best classifier, across the 5 folds, is significantly better than that of any other. I.e., given vectors a and b, one for each classifier, containing the accuracy values for each of the respective 5 folds, obtain the p-value from the output S, below:







from scipy import stats

S = stats.ttest_rel(a, b)
print(S.pvalue)




You should have 4 p-values. Report them using the provided format string in the same order as the accuracies, excluding the self-comparison. For example, if the best classifier from 3.1 was the RandomForestClassifier, then the p-values should be reported in the order: 1 vs. 3, 2 vs. 3, 4 vs. 3, 5 vs. 3.










Bonus [15 marks]



We will give up to 15 bonus marks for innovative work going substantially beyond the minimal requirements. These marks can make up for marks lost in other sections of the assignment, but your overall mark for this assignment cannot exceed 100%. The obtainable bonus marks will depend on the complexity of the undertaking, and are at the discretion of the marker. Importantly, your bonus work should not affect our ability to mark the main body of the assignment in any way.




You may decide to pursue any number of tasks of your own design related to this assignment, although you should consult with the instructor or the TA before embarking on such exploration. Certainly, the rest of the assignment takes higher priority. Some ideas:




Identify words that the PoS tagger tags incorrectly and add code that fixes those mistakes. Does this code introduce new errors elsewhere? E.g., if you always tag dog as a noun to correct a mistake, you will encounter errors when dog should be a verb. How can you mitigate such errors?




Explore alternative features to those extracted in Task 2. What other kinds of variables would be useful in distinguishing affect? Consider, for example, the Stanford Deep Learning for Sentiment Analysis. Test your features empirically as you did in Task 3 and discuss your findings.




Explore alternative classification methods to those used in Task 3. Explore different hyper-parameters. Which hyper-parameters give the best empirical performance, and why?




Learn about topic modelling as in latent Dirichlet allocation. Are there topics that have an effect on the accuracy of the system? E.g., is it easier to tell how someone feels about politicians or about events? People or companies? As there may be class imbalances in the groups, how would you go about evaluating this? Go about evaluating this.




General specifications



As part of grading your assignment, the grader may run your programs and/or Python files on test data and configurations that you have not previously seen. This may be partially done automatically by scripts. It is therefore important that each of your programs precisely meets all the specifications, including its name and the names of the files and functions that it uses. A program that cannot be evaluated because it varies from specifications will receive zero marks on the relevant sections.




The flag --a1_dir can be used to specify an alternate location than /u/cs401/A1 to load data from, though the submitted files should be generated on the CDF machines. Do not hardwire the absolute address of your home directory within the program; the grader does not have access to this directory.




All your programs must contain adequate internal documentation to be clear to the graders.




We use Python version 3.7.




















































Submission requirements



This assignment is submitted electronically. You should submit:




All your code for a1_preproc.py, a1_extractFeatures.py, and a1_classify.py (including helper files, if any).



a1_3.1.txt: Report on classifiers.

a1_3.2.txt: Report on the amount of training data.

a1_3.3.txt: Report on feature analysis.

a1_3.4.txt: Report on 5-fold cross-validation.



Any lists of words that you modified from the original version.






In another file called ID (use the template on the course web page), provide the following information:




your first and last name.



your student number.



your CDF/teach.cs login id.



your preferred contact email address.



whether you are an undergraduate or graduate.



this statement: By submitting this file, I declare that my electronic submission is my own work, and is in accordance with the University of Toronto Code of Behaviour on Academic Matters and the Code of Student Conduct, as well as the collaboration policies of this course.



You do not need to hand in any files other than those specified above. Submit your assignment on MarkUs. Do not tar or compress your files, and do not place your files in subdirectories. Do not format your discussion as a PDF or Word document; use plain text only.




Working outside the lab



If you want to do some or all of this assignment on your laptop or home computer, for example, you will have to do the extra work of downloading and installing the requisite software and data. If you take this route, you take on all associated risks. You are strongly advised to upload regular backups of your work to CDF/teach.cs, so that if your home machine fails or proves to be inadequate, you can immediately continue working on the assignment at CDF/teach.cs. When you have completed the assignment, you should try your programs out on CDF/teach.cs to make sure that they run correctly there. Any component that does not work on CDF will get zero marks.
































































Appendix: Tables







Tag    Name                                       Example
CC     Coordinating conjunction                   and
CD     Cardinal number                            three
DT     Determiner                                 the
EX     Existential there                          there [is]
FW     Foreign word                               d'oeuvre
IN     Preposition or subordinating conjunction   in, of, like
JJ     Adjective                                  green, good
JJR    Adjective, comparative                     greener, better
JJS    Adjective, superlative                     greenest, best
LS     List item marker                           (1)
MD     Modal                                      could, will
NN     Noun, singular or mass                     table
NNS    Noun, plural                               tables
NNP    Proper noun, singular                      John
NNPS   Proper noun, plural                        Vikings
PDT    Predeterminer                              both [the boys]
POS    Possessive ending                          's, '
PRP    Personal pronoun                           I, he, it
PRP$   Possessive pronoun                         my, his, its
RB     Adverb                                     however, usually, naturally, here, good
RBR    Adverb, comparative                        better
RBS    Adverb, superlative                        best
RP     Particle                                   [give] up
SYM    Symbol (mathematical or scientific)        +
TO     to                                         to [go] to [him]
UH     Interjection                               uh-huh
VB     Verb, base form                            take
VBD    Verb, past tense                           took
VBG    Verb, gerund or present participle         taking
VBN    Verb, past participle                      taken
VBP    Verb, non-3rd-person singular present      take
VBZ    Verb, 3rd-person singular present          takes
WDT    wh-determiner                              which
WP     wh-pronoun                                 who, what
WP$    Possessive wh-pronoun                      whose
WRB    wh-adverb                                  where, when

Table 2: The Penn part-of-speech tagset (words).





















































































Tag    Name                            Example
#      Pound sign                      #
$      Dollar sign                     $
.      Sentence-final punctuation      !, ?, .
,      Comma                           ,
:      Colon, semi-colon, ellipsis     :, ;, ...
(      Left bracket character          (, [, {
)      Right bracket character         ), ], }
"      Straight double quote           "
`      Left open single quote          `
``     Left open double quote          ``
'      Right close single quote        '
''     Right close double quote        ''

Table 3: The Penn part-of-speech tagset (punctuation).






















First person: I, me, my, mine, we, us, our, ours

Second person: you, your, yours, u, ur, urs

Third person: he, him, his, she, her, hers, it, its, they, them, their, theirs

Future tense: 'll, will, gonna, going+to+VB

Common nouns: NN, NNS

Proper nouns: NNP, NNPS

Adverbs: RB, RBR, RBS

wh-words: WDT, WP, WP$, WRB

Modern slang acronyms: smh, fwb, lmfao, lmao, lms, tbh, rofl, wtf, bff, wyd, lylc, brb, atm, imao, sml, btw, bw, imho, fyi, ppl, sob, ttyl, imo, ltr, thx, kk, omg, omfg, ttys, afn, bbs, cya, ez, f2f, gtr, ic, jk, k, ly, ya, nm, np, plz, ru, so, tc, tmi, ym, ur, u, sol, fml. Consider also https://www.netlingo.com/acronyms.php, if you want, for no-bonus completion.

Table 4: Miscellaneous feature category specifications.









