
CS7643: Deep Learning Assignment 2




Convolutional Neural Networks (CNNs) are one of the major advancements in computer vision over the past decade. In this assignment, you will complete a simple CNN architecture from scratch and learn how to implement CNNs with PyTorch, one of the most commonly used deep learning frameworks. You will also run different experiments on imbalanced datasets to evaluate your model and techniques to deal with imbalanced data.




Python and dependencies




In this assignment, we will work with Python 3. If you do not have a Python distribution installed yet, we recommend installing Anaconda (or Miniconda) with Python 3. We provide environment.yaml (present under ./part1-convnet), which contains a list of libraries needed to set up the environment for this assignment. You can use it to create a copy of the conda environment. Refer to the conda user manual for more details.




$ conda env create -f environment.yaml







Please note that our environment does NOT include a PyTorch installation for you, because you may use CPU/GPU or a different version of CUDA. To install PyTorch, please refer to the official documentation and select the options based on your local OS.




If you already have your own Python development environment, please refer to environment.yaml to find the necessary libraries, which are used to set up the same coding/grading environment.




Code Test




There are two ways to test your implementation:




Python Unit Tests: Some public unit tests are provided in tests/ in the assignment repository. You can test each part of your implementation with these test cases by:



$ python -m unittest tests.<name_of_tests>







However, passing all local tests neither means your code is free of bugs nor guarantees that you will receive full credit for the coding section. Your code will be graded by the Gradescope autograder (see below for more details). There will be additional tests on Gradescope which are not present in your local unit tests.




Gradescope Autograder: You may also submit your code as specified in Section 6 for testing. Gradescope will only reveal the results of the public tests. Your final grade for the coding section is based on both public and hidden private tests. However, we do not recommend using Gradescope as your primary testing method during development, because the private test cases will NOT be available to you at any time.



1 Implementing CNN from Scratch




You will work in ./part1-convnet for this part of the assignment.




1.1 Module Implementation




You will now learn how to build a CNN from scratch. Typically, a convolutional neural network is composed of several different modules, and these modules work together to make the network effective. For each module, you will implement a forward pass (computing the module's output) and a backward pass (computing gradients). Therefore, your tasks are as follows:




Follow the instructions in the code to complete each module in ./modules. Specifically, the modules to be implemented are 2D Convolution, 2D Max Pooling, ReLU, and Linear. These will be the building blocks of the full network. The file ./modules/conv_classifier.py ties each of the aforementioned modules together and is the subject of the next section.
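To make the forward/backward pattern concrete, here is a minimal ReLU module in NumPy. This is only an illustrative sketch; the actual class interfaces and method signatures are dictated by the starter code in ./modules.

```python
import numpy as np

class ReLU:
    def forward(self, x):
        # Cache the input so the backward pass knows where x > 0.
        self.cache = x
        return np.maximum(0, x)

    def backward(self, dout):
        # Gradient flows only through positions where the input was positive.
        return dout * (self.cache > 0)

# Toy usage
relu = ReLU()
out = relu.forward(np.array([-1.0, 2.0]))
dx = relu.backward(np.array([1.0, 1.0]))
```

The same shape applies to the other modules: forward caches whatever backward will need, and backward turns an upstream gradient into a gradient with respect to the module's input.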



1.2 Network Implementation




After finishing each module, it’s time to put things together to form a real convolutional neural network. Your task is:




Follow the instructions in the code to complete a CNN network in ./modules/conv_classifier.py. The network is constructed from a list of module definitions in order and should handle both forward and backward communication between modules.
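The chaining logic can be sketched as follows, using a toy stand-in module; the real conv_classifier.py defines its own constructor and bookkeeping.

```python
class Scale:
    """Toy module: multiplies by a constant (stands in for conv/ReLU/linear)."""
    def __init__(self, a):
        self.a = a
    def forward(self, x):
        return self.a * x
    def backward(self, dout):
        return self.a * dout

def net_forward(modules, x):
    # Pass data through the modules in order.
    for m in modules:
        x = m.forward(x)
    return x

def net_backward(modules, dout):
    # Propagate gradients in reverse module order.
    for m in reversed(modules):
        dout = m.backward(dout)
    return dout

net = [Scale(2.0), Scale(3.0)]
y = net_forward(net, 1.0)     # 1 * 2 * 3 = 6
dx = net_backward(net, 1.0)   # 1 * 3 * 2 = 6
```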

 














1.3 Optimizer




You implemented a simple SGD optimizer in Assignment 1. In practice, it is common to add a momentum term to SGD for better convergence. Specifically, we introduce a new velocity term v_t, and the update rule is as follows:




v_t = β · v_{t-1} − η · ∂L/∂w
w = w + v_t

where β denotes the momentum coefficient and η denotes the learning rate.




Follow the instructions in the code to complete SGD with momentum in ./optimizer/sgd.py.
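As a sanity check, the update rule can be sketched in NumPy; the variable names beta and lr are illustrative, and the actual attribute names come from the starter code in ./optimizer/sgd.py.

```python
import numpy as np

def sgd_momentum_step(w, dw, v, beta=0.9, lr=0.01):
    """One SGD-with-momentum update: v <- beta*v - lr*dw, then w <- w + v."""
    v = beta * v - lr * dw
    w = w + v
    return w, v

# Toy usage: starting from zero velocity, the first step is plain SGD.
w = np.array([1.0, 2.0])
dw = np.array([0.5, -0.5])
v = np.zeros_like(w)
w, v = sgd_momentum_step(w, dw, v)
```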



You might have noticed that the training process of your implementation can be extremely slow. Therefore, we will deliberately overfit the model on a small portion of the data to verify whether the model is learning something. First, you should download the dataset by




$ cd data
$ sh get_data.sh
$ cd ../



Microsoft Windows 10 Only




C:\assignment_folder> cd data
C:\assignment_folder\data> get_data.bat
C:\assignment_folder\data> cd ..

You can then simply run:





$ python train.py







which trains a small CNN with only 50 samples from the CIFAR-10 dataset. The script will produce a plot on the training data only; be sure to include the plot in your report. Your final accuracy should be slightly under 0.9 with the given network in the script.




2 PyTorch




You will work in ./part2-pytorch for this part of the assignment. The main function in main.py contains the major logic of the code, and you can run it by




$ python main.py --config configs/<name_of_config_file>.yaml




 














2.1 Training




The first step in working with PyTorch is to familiarize yourself with its basic training loop.




1. Complete train and validate functions in main.py.
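The skeleton of a PyTorch training epoch typically looks like the sketch below. The function name and signature are illustrative; main.py defines the actual train and validate interfaces you must complete.

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, criterion, optimizer):
    model.train()
    total_loss = 0.0
    for data, target in loader:
        optimizer.zero_grad()             # clear gradients from the last step
        output = model(data)              # forward pass
        loss = criterion(output, target)  # compute the loss
        loss.backward()                   # backpropagate
        optimizer.step()                  # update parameters
        total_loss += loss.item()
    return total_loss / len(loader)

# Toy usage with a fake one-batch "loader"
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(8, 4), torch.randint(0, 2, (8,)))]
avg_loss = train_one_epoch(model, loader, nn.CrossEntropyLoss(), opt)
```

Validation follows the same loop without zero_grad/backward/step, wrapped in `torch.no_grad()` and with `model.eval()`.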




2.2 PyTorch Model




You will now implement some actual networks with PyTorch. We provide some starter files for you in ./models. The models for you to implement are as follows:




Two-Layer Network. This is the same network you implemented from scratch in Assignment 1. You will build the model with two fully connected layers and a sigmoid activation function between them. Please implement the model as instructed in ./models/twolayer.py.



Vanilla Convolutional Neural Network. You will build the model with a convolution layer, a ReLU activation, a max-pooling layer, followed by a fully connected layer for classification. Your convolution layer should use 32 output channels, a kernel size of 7 with stride 1 and zero padding. Your max-pooling should use a kernel size of 2 and stride of 2. The fully connected layer should have 10 output features. Please implement the model as instructed in ./models/cnn.py.



Your Own Network. You are now free to build your own model. Notice that it is okay to borrow insights from existing well-known networks; however, directly using those networks as-is is NOT allowed. In other words, you have to build your model from scratch, which also means that using any sort of pre-trained weights is NOT allowed. Please implement your model in ./models/my_model.py.
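For reference, the vanilla CNN spec above could be realized roughly as follows. The class name is hypothetical and the starter file ./models/cnn.py prescribes the actual structure; this sketch assumes 3-channel 32x32 CIFAR-10 inputs and padding of 0.

```python
import torch
import torch.nn as nn

class VanillaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 32, kernel_size=7, stride=1, padding=0)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # 32x32 input -> conv (k=7, s=1, p=0) -> 26x26 -> pool (2,2) -> 13x13
        self.fc = nn.Linear(32 * 13 * 13, 10)

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))
        return self.fc(torch.flatten(x, 1))

out = VanillaCNN()(torch.randn(2, 3, 32, 32))
```

Tracking the spatial dimensions (26 after the 7x7 convolution, 13 after pooling) is the key step: the fully connected layer's input size must match the flattened feature map.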



We provide configuration files for these three models respectively. For the Two-Layer Network and Vanilla CNN, you need to train the model without modifying the configuration file. The script automatically saves the weights of the best model at the end of training. We will evaluate your implementation by loading your model weights and evaluating the model on CIFAR-10 test data. You should expect the accuracy of the Two-Layer Network and Vanilla CNN to be around 0.3 and 0.4 respectively.




For your own network, you are free to tune any hyper-parameters to obtain better accuracy. Your final accuracy must be above 0.5 to receive at least partial credit. Please refer to the Gradescope auto-test results for the requirements for full credit. Finally, please make sure the checkpoints of each model are saved into ./checkpoints.
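Checkpoint saving and loading in PyTorch typically looks like the sketch below. The stand-in model and filename are illustrative; the starter script handles the real paths under ./checkpoints.

```python
import torch
import torch.nn as nn

# Stand-in model; in the assignment the real network comes from ./models.
model = nn.Linear(4, 2)

# Save only the weights (state_dict), not the whole module object.
torch.save(model.state_dict(), "model_best.pth")

# Later, for evaluation: rebuild the architecture, load weights, switch
# to eval mode so layers like dropout/batchnorm behave deterministically.
model2 = nn.Linear(4, 2)
model2.load_state_dict(torch.load("model_best.pth"))
model2.eval()
```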




3 Data Wrangling




So far we have worked with well-balanced datasets (samples of each class are evenly distributed). However, in practice, datasets are often not balanced. In this section, you will explore the limitations of the standard training strategy on this type of dataset. Since this is an exploration, it is up to you to design experiments or tests to validate that these methods are correct and effective.




You will work with an unbalanced version of CIFAR-10 in this section, and you should use the ResNet-32 model in ./models/resnet.py.




3.1 Class-Balanced Focal Loss




You will implement one possible solution to the imbalance problem: Class-Balanced Focal Loss, from this CVPR 2019 paper: Class-Balanced Loss Based on Effective Number of Samples. You may also refer to the original Focal Loss paper for more details if you are interested. Please implement CB Focal Loss in ./losses/focal_loss.py.




NOTE: the CVPR 2019 paper uses sigmoid CB focal loss (Section 4). Softmax CB focal loss is not described in the paper, but it is easy to derive from the papers mentioned. You are free to use sigmoid or softmax, but be careful with the implementation (there are differences).
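As one possible starting point, a softmax variant can be sketched as follows. This is an unofficial illustration derived from the paper's formulas: the per-class weight is (1−β)/(1−β^{n_y}) (the "effective number" reweighting), and the focal term (1−p_t)^γ down-weights well-classified examples. The function name and the weight normalization are assumptions, not the required implementation.

```python
import torch
import torch.nn.functional as F

def cb_focal_loss(logits, target, samples_per_class, beta=0.9999, gamma=2.0):
    """Softmax class-balanced focal loss (illustrative sketch)."""
    n = torch.tensor(samples_per_class, dtype=torch.float32)
    weights = (1.0 - beta) / (1.0 - beta ** n)   # effective-number weights
    weights = weights / weights.sum() * len(n)   # normalize to sum to C
    log_p = F.log_softmax(logits, dim=1)
    # Probability assigned to each sample's true class.
    log_p_t = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
    p_t = log_p_t.exp()
    # Focal modulation times the per-class balance weight.
    loss = -weights[target] * (1.0 - p_t) ** gamma * log_p_t
    return loss.mean()

# Toy usage: 4 samples, 3 classes with imbalanced counts
logits = torch.randn(4, 3)
target = torch.tensor([0, 1, 2, 0])
loss = cb_focal_loss(logits, target, [500, 200, 50])
```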




4 Deliverables




4.1 Code Submission




4.1.1 Part-1 ConvNet




Simply run bash collect_submission.sh (or collect_submission.bat if running MS Windows 10) in part1-convnet, and upload the zip file to Gradescope (part1-code). The zip file should contain: modules/, optimizer/, trainer.py, and train.py.




4.1.2 Part-2 PyTorch




Simply run bash collect_submission.sh (or collect_submission.bat if running MS Windows 10) in part2-pytorch, and upload the zip file to Gradescope (part2-code). The zip file should contain: configs/, losses/, checkpoints/, models/, and main.py.




4.2 Write-up




Please follow the report template in the starter folder to complete your write-up. You will need to explain your design of the network from Section 2.2 and explain your experimental results from the data wrangling section.




Note: Explanations should go into why things work the way they do with proper deep learning theory. If you need more than one slide for a question, you are free to create new slides right after the given one.




You will need to export your report in PDF format and submit it under "Assignment 2 Writeup" on Gradescope. When submitting to Gradescope, make sure you select ALL corresponding slides for each question. Failing to do so will result in -1 point for each incorrectly tagged question, with more severe penalties in future assignments.






































































































