1    Problem statement


The goal of the mini-projects is to implement a Noise2Noise model (Section 7.3 of the course). A Noise2Noise model is an image denoising network trained without a clean reference image. The original paper can be found at https://arxiv.org/abs/1803.04189.

The project has two parts, focusing on two different facets of deep learning. The first one is to build a network that denoises using the PyTorch framework, in particular the torch.nn modules and autograd. The second one is to understand and build a framework, with its constituent modules that are the standard building blocks of deep networks, without PyTorch's autograd.


1.1    Submission instructions that apply to both projects

We will evaluate your code with an automated testing suite, so your submission should adhere to the following structure.

Proj_SCIPER1_SCIPER2_SCIPER3/
    Miniproject_1/
        __init__.py
        model.py
        bestmodel.pth
        Report_1.pdf
        others/
            otherfile1.py
            otherfile2.py
    Miniproject_2/
        __init__.py
        model.py
        bestmodel.pth
        others/
            otherfile1.py
            otherfile2.py
        Report_2.pdf

Do not submit any IPython notebooks, as they will not be checked. The others folder refers to all the code that you want to include in your submission; you may choose to structure it differently. What must not change are the two folders named Miniproject_1 and Miniproject_2, each containing a model.py with the specific structure described above.


A zip file named Proj_SCIPER1_SCIPER2_SCIPER3.zip must be uploaded to the Moodle of the course before Friday May 27th, 2022, 23:59. This zip file should uncompress into two directories named Miniproject_1 and Miniproject_2, with the directory structure given above.


When needed, you should add comments to your source code to facilitate its understanding.

The code must work in the VM provided for the course; in particular, it should not require additional software or libraries.


Exchange of code or report snippets between groups is forbidden. Using code, or anything else that is not your original creation, without citing it is plagiarism; it is wrong and will be treated very seriously. Every student should have a clear understanding of their group's entire source code and report. This will be checked during the oral presentation.


2    Mini-project 1: Using the standard PyTorch framework


You have, at your disposal, the whole PyTorch framework, with all its powerful modules to implement the denoiser. You are free to explore the various modules available and build any network that you deem necessary. It should be implemented with PyTorch only, in particular, without using other external libraries such as scikit-learn or NumPy.

You should explore various parts of the training pipeline like data augmentation strategies, optimization methods, loss functions, etc.
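For instance, geometric data augmentation has to be applied identically to both noisy versions of a pair, since they depict the same underlying image. A minimal sketch of a random horizontal flip (the helper name and flip probability are illustrative assumptions, not part of the assignment):

import torch

def augment_pair(noisy_in, noisy_target, p_flip=0.5):
    # Apply the same random horizontal flip to both noisy versions,
    # so the pair still shows the identical underlying content.
    if torch.rand(1).item() < p_flip:
        noisy_in = noisy_in.flip(-1)        # flip along the width dimension
        noisy_target = noisy_target.flip(-1)
    return noisy_in, noisy_target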


Training

You have been provided a train_data.pkl file with two tensors of size 50000 × 3 × H × W. You can load this file with a command like:


import torch
noisy_imgs_1, noisy_imgs_2 = torch.load('train_data.pkl')

This constitutes the training data and corresponds to 50000 pairs of noisy images. Each of the 50000 pairs corresponds to downsampled, pixelated images. Train a network that uses these two tensors to denoise, i.e. reduce the effects of downsampling, on unseen images.
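As an illustration only, a minimal sketch of a Noise2Noise-style training loop with a tiny convolutional network; the architecture, learning rate, batch size, and the normalization to [0, 1] are assumptions, not requirements:

import torch
from torch import nn

noisy_imgs_1, noisy_imgs_2 = torch.load('train_data.pkl')

# Tiny placeholder denoiser; the actual architecture is up to you.
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.MSELoss()

batch_size, num_epochs = 100, 1
for epoch in range(num_epochs):
    for inp, tgt in zip(noisy_imgs_1.split(batch_size),
                        noisy_imgs_2.split(batch_size)):
        # Train one noisy version to predict the other one (no clean target).
        out = net(inp.float() / 255.0)
        loss = criterion(out, tgt.float() / 255.0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()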

An additional val_data.pkl file is provided so that you can track your progress. You can load the validation file with a command like:


noisy_imgs, clean_imgs = torch.load('val_data.pkl')

Your proposed method and network will be evaluated on a different set of images than the ones given here. They will also be of size 3 × H × W, but their noise characteristics may vary.


Evaluation and Submission

The final version of your code should contain a model.py that will be imported by the testing pipeline. This file should contain a class:


### For mini-project 1
class Model():
    def __init__(self) -> None:
        ## instantiate model + optimizer + loss function + any other stuff you need
        pass

    def load_pretrained_model(self) -> None:
        ## This loads the parameters saved in bestmodel.pth into the model
        pass

    def train(self, train_input, train_target, num_epochs) -> None:
        #:train_input: tensor of size (N, C, H, W) containing a noisy version of the images.
        #:train_target: tensor of size (N, C, H, W) containing another noisy version of the
        #  same images, which only differs from the input by their noise.
        pass

    def predict(self, test_input) -> torch.Tensor:
        #:test_input: tensor of size (N1, C, H, W) with values in range 0-255 that has to
        #  be denoised by the trained or the loaded network.
        #:returns a tensor of the size (N1, C, H, W) with values in range 0-255.
        pass
The Model class will serve as the entry point to your code for evaluation. Its constructor should not take any inputs. The required interface of the Model class is shown above, with comments about what each required function should do. In the Model class, the predict method takes as input a Tensor with values in range 0-255 and should return a Tensor with values in range 0-255.

Using this interface, a model will be trained, and will be evaluated on a test set. Additionally, you should provide your "best" model, i.e. the one that you think will perform best on an unseen test set. You should save this model under "bestmodel.pth", and the model class should provide a function load_pretrained_model that loads this model. For that purpose, you can have a look at the model serialization methods for PyTorch at https://pytorch.org/docs/stable/notes/serialization.html#saving-and-loading-torch-nn-modules.
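For example, a minimal sketch of saving and restoring parameters through a state_dict; the placeholder network is an assumption, and map_location='cpu' simply lets the load work on CPU-only systems:

import torch
from torch import nn

net = nn.Conv2d(3, 3, 3, padding=1)   # placeholder network

# Save only the parameters, e.g. at the end of training:
torch.save(net.state_dict(), 'bestmodel.pth')

# Restore them later, e.g. inside load_pretrained_model():
net.load_state_dict(torch.load('bestmodel.pth', map_location='cpu'))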


Ensure that your code can run on GPUs and on CPU-only systems, without any changes to the code. The easiest way to make this happen is to define a device as


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

and use this to define the storage location of each of your tensors. For example,


all_ones = torch.ones(2, 3).to(device)

will use the GPU if PyTorch detects one, and otherwise works seamlessly on the CPU.
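The same call also moves a whole network; a small self-contained sketch (the module and tensor shapes here are placeholders):

import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

net = nn.Conv2d(3, 3, 3, padding=1).to(device)     # placeholder network
batch = torch.rand(16, 3, 32, 32, device=device)   # placeholder batch
out = net(batch)                                    # runs on the GPU if one is available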

Evaluation of the submitted models will be based on the Peak Signal-to-Noise Ratio (PSNR) metric. An extremely simplified implementation of PSNR for a single input image, with pixel values in [0, 1], looks like


def psnr(denoised, ground_truth):
    # Peak Signal to Noise Ratio: denoised and ground_truth have range [0, 1]
    mse = torch.mean((denoised - ground_truth) ** 2)
    return -10 * torch.log10(mse + 10**-8)

A network with 8 convolutional layers achieves 24 dB PSNR on the validation data provided (the function above returns PSNR in dB). An extremely simple benchmark achieves 23 dB, so if you are not reaching this, assume that there is a problem with your implementation.
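For reference, a minimal sketch of how a per-image validation PSNR could be computed with the psnr function above, assuming the script sits next to Miniproject_1/model.py and that predict works on values in 0-255:

import torch
from model import Model   # assumes the Miniproject_1 model.py interface

model = Model()
model.load_pretrained_model()             # or call model.train(...) first

noisy_imgs, clean_imgs = torch.load('val_data.pkl')
denoised = model.predict(noisy_imgs).cpu()  # values in 0-255

# psnr() expects values in [0, 1], so rescale before comparing.
scores = torch.stack([psnr(d.float() / 255.0, c.float() / 255.0)
                      for d, c in zip(denoised, clean_imgs)])
print(f'mean validation PSNR: {scores.mean():.2f} dB')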

A suggestion: consider experimenting with a subset of the training data, and train on the full dataset only after you have settled on your architecture, training procedure, etc. You can use services like Google Colab for more computing power.
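For example, slicing the provided tensors gives such a subset (the size 5000 is arbitrary):

subset_size = 5000
small_imgs_1 = noisy_imgs_1[:subset_size]
small_imgs_2 = noisy_imgs_2[:subset_size]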

We provide a test.py that uses an interface similar to the one we will use for grading; it trains your model and then evaluates it on a validation set. The training and testing should take at most 10 min on a computer with a small GPU (at most 45 min on a CPU). If your code does not follow the interface or crashes during evaluation, you will not get any points for the code submission.
