Programming Project


Introduction




In this project, you are going to experience parallel programming in C/C++ using the MPI library. You will implement a parallel algorithm for image denoising with the Ising model using the Metropolis-Hastings algorithm. Throughout the document you will probably encounter models and methods you have never seen before. There will be things related to probability. But don't be afraid; embrace it. Because, surprisingly, we will end up with a very simple algorithm with a single-line equation.




 
The Ising Model




The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or -1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors.




Did you understand how the Ising model works? No, me neither. But there are two important things to note in the definition, which we wrote in bold:




- The Ising model models things that have two states (+1 or -1).
- Things interact with their neighbors.




Not understanding how the Ising model works in physics won't stop us from applying it to images. If we assume that a black and white image is generated using the Ising model, then if we take a random black pixel from the image, it is more likely that this pixel is surrounded by black pixels (and the same for white pixels). This is all we need to know about the Ising model. Now let's apply this to image denoising.




 
Image Denoising




Let Z be an I × J binary matrix that describes an image, where each Z_ij is either +1 or -1 (i.e., white or black). By randomly flipping some of the pixels of Z, we obtain the noisy image X, which is what we observe. We assume that the noise-free image Z is generated from an Ising model (parametrized with β) and that X is generated by flipping each pixel of Z with a small probability π:













$$Z \sim p(Z \mid \beta) \tag{1}$$
$$F_{ij} \sim \mathrm{Be}(\pi) \tag{2}$$
$$X_{ij} = (-1)^{F_{ij}} Z_{ij} \tag{3}$$

where

$$p(Z \mid \beta) \propto e^{-E(Z \mid \beta)} \tag{4}$$
$$E(Z \mid \beta) = -\beta \sum_{(i,j) \sim (k,l)} Z_{ij} Z_{kl} \tag{5}$$

Then we want to find the posterior distribution of Z:

$$p(Z \mid X, \beta, \pi) = \frac{p(Z, X \mid \beta, \pi)}{p(X \mid \beta, \pi)} \tag{6}$$
$$\propto p(Z, X \mid \beta, \pi) \tag{7}$$
$$= p(Z \mid \beta)\, p(X \mid Z, \pi) \tag{8}$$
$$\propto \exp\Big( \gamma \sum_{ij} Z_{ij} X_{ij} + \beta \sum_{(i,j) \sim (k,l)} Z_{ij} Z_{kl} \Big) \tag{9}$$

where γ = (1/2) log((1 - π)/π). In this formulation, a higher γ implies lower noise on X. Similarly, we expect more consistency between the neighbouring pixels if β is higher.







"OK, what's going on", you thought probably. But, you really think that I'm on your side. Sorry, but these are the equations that you have to work with. ... ... ... Just kidding. Last part was true but let's go step by step and understand what is going on:




1. There is a black and white image Z whose pixels we represent with +1 and -1 (i.e., black or white). We don't have this image; this is the image we want to recover.




2. There is another black and white image X. This is the noisy version of Z. By noisy we mean that some of the pixels of the image Z are flipped. Equations 2 and 3 model this noise relation. π is the probability of a random flip. If π = 0.1, then ten percent of the pixels of Z are flipped to obtain the noisy image X. We start with this noisy image X and try to reach the noise-free image Z.




3. We assume that the image Z is generated from an Ising model.




4. By combining these three pieces of information, we can statistically express our model as p(Z | X, β, π). It can be read as the probability of Z given X, β, and π. It returns a score for a candidate image Z that we propose. The score is higher if the candidate image is more similar to the original image (or fits the model better). So it is a distribution over all possible images Z, and we want to find the most probable image Z, in other words the Z that gets the highest score from the posterior probability. However, this is not a straightforward process, because the posterior probability doesn't have a closed form like a Gaussian. Thus we can't find the most probable image easily. We would have to try all possible images Z, but this is not efficient, since there are 2^(I×J) possibilities. Do we give up then? Of course not! Instead of finding the original image in an instant, we will approach it step by step.
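Before moving on, note that the noise model of Equations 2 and 3 is easy to simulate. Here is a minimal C++ sketch (the function name and the flat `std::vector` pixel layout are our own choices, not part of the project skeleton): each pixel of Z is flipped independently with probability π.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Apply the noise model of Eqs. (2)-(3): each pixel of Z (+1/-1 values)
// is flipped independently with probability pi to produce the observed X.
std::vector<int> add_noise(const std::vector<int>& Z, double pi, unsigned seed) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution flip(pi);   // F_ij ~ Be(pi)
    std::vector<int> X(Z.size());
    for (std::size_t k = 0; k < Z.size(); ++k)
        X[k] = flip(rng) ? -Z[k] : Z[k];    // X_ij = (-1)^{F_ij} * Z_ij
    return X;
}
```

With π = 0 the image is returned unchanged; with π = 1 every pixel is inverted.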










 
Metropolis-Hastings Markov Chain Monte Carlo




We have a function (the posterior distribution) that tells us which candidate image Z is more similar to the original image. Instead of choosing random images, it is obvious that we should start with the image X, which is the only thing that is given to us. To reach the noise-free image Z we need to make some modifications to X. The Metropolis-Hastings algorithm gives us a very simple approach to do this:




1. Choose a random pixel from the image X.




2. Calculate an acceptance probability of flipping the pixel or not.




3. Flip this pixel with the acceptance probability calculated in the second step.




4. Repeat this process until it converges.




That is basically all you need to do. Simple, as I promised, right? Just one last equation that shows how to calculate the acceptance probability. And actually this will be the only equation you are going to use. So at every step we will only calculate how flipping a pixel affects our outcome. As you may guess, we will do it by dividing the posterior distributions of the two possibilities (flipping or not):




Assume a bit flip is proposed at pixel (i, j) at time step t (a bit flip on Z_ij^(t), i.e. Z'_ij = -Z_ij^(t)); then

$$\alpha(t) = \frac{p(Z' \mid X, \beta, \pi)}{p(Z^{(t)} \mid X, \beta, \pi)} \tag{10}$$
$$= \frac{\exp\Big( \gamma Z'_{ij} X_{ij} + \beta \sum_{(i,j) \sim (k,l)} Z'_{ij} Z'_{kl} \Big)}{\exp\Big( \gamma Z^{(t)}_{ij} X_{ij} + \beta \sum_{(i,j) \sim (k,l)} Z^{(t)}_{ij} Z^{(t)}_{kl} \Big)} \tag{11}$$
$$= \exp\Big( \gamma \big(Z'_{ij} - Z^{(t)}_{ij}\big) X_{ij} + \beta \sum_{(i,j) \sim (k,l)} \big(Z'_{ij} - Z^{(t)}_{ij}\big) Z^{(t)}_{kl} \Big) \tag{12}$$
$$= \exp\Big( -2 \gamma Z^{(t)}_{ij} X_{ij} - 2 \beta \sum_{(i,j) \sim (k,l)} Z^{(t)}_{ij} Z^{(t)}_{kl} \Big) \tag{13}$$

(In Equation 11 all posterior terms not involving pixel (i, j) cancel, and Z'_kl = Z^(t)_kl for every neighbour, which gives Equations 12 and 13.)


Notice that the summation is over all the pixels Z_kl that are connected to a fixed Z_ij. For example, if the randomly chosen pixel is (i, j) = (3, 4), then (k, l) ∈ {(2, 3), (2, 4), (2, 5), (3, 3), (3, 5), (4, 3), (4, 4), (4, 5)}.
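This 8-connected neighbourhood can be enumerated mechanically. A possible helper (our own naming; we assume coordinates that fall outside the image border are simply skipped, which the assignment does not spell out) is:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Enumerate the 8-connected neighbours (k,l) of pixel (i,j) inside an
// I x J image, skipping coordinates that fall outside the image.
std::vector<std::pair<int,int>> neighbours(int i, int j, int I, int J) {
    std::vector<std::pair<int,int>> out;
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;      // (i,j) itself is excluded
            int k = i + di, l = j + dj;
            if (k >= 0 && k < I && l >= 0 && l < J)
                out.emplace_back(k, l);
        }
    return out;
}
```

An interior pixel such as (3, 4) has 8 neighbours; a corner pixel such as (0, 0) has only 3.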




The pseudocode for the whole process is as follows, and this is actually the only part you need to understand to do this project:




1. Initialize your β and π priors. Then γ = (1/2) log((1 - π)/π).







2. Initialize Z^(0) ← X.




3. At time step t:




3.1. Randomly choose a pixel (i, j).
3.2. Propose a bit flip on Z^(t)_ij, i.e. Z'_ij ← -Z^(t)_ij.
3.3. Calculate the acceptance probability α(t) = min{ 1, p(Z' | X, β, π) / p(Z^(t) | X, β, π) }.
3.4. Z^(t+1) ← Z', with probability α(t).
3.5. Z^(t+1) ← Z^(t), otherwise.
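The steps above can be sketched in C++ as follows. This is only an illustrative sequential version under our own naming conventions; it uses the simplified acceptance ratio of Equation 13, so the full posterior never has to be evaluated.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// One Metropolis-Hastings update (steps 3.1-3.5) on a flat n x n image of
// +1/-1 pixels. gamma = 0.5 * log((1 - pi) / pi). Sequential sketch only;
// the parallel version distributes rows of Z among the slave processors.
void metropolis_step(std::vector<int>& Z, const std::vector<int>& X,
                     int n, double beta, double gamma, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, n - 1);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    int i = pick(rng), j = pick(rng);            // 3.1 choose a random pixel
    double neigh_sum = 0.0;                      // sum of Z_kl over neighbours
    for (int di = -1; di <= 1; ++di)
        for (int dj = -1; dj <= 1; ++dj) {
            if (di == 0 && dj == 0) continue;
            int k = i + di, l = j + dj;
            if (k >= 0 && k < n && l >= 0 && l < n)
                neigh_sum += Z[k * n + l];
        }
    // 3.3 acceptance probability via Eq. (13):
    // alpha = exp(-2*gamma*Z_ij*X_ij - 2*beta*Z_ij*sum_kl Z_kl)
    double z = Z[i * n + j];
    double alpha = std::exp(-2.0 * gamma * z * X[i * n + j]
                            - 2.0 * beta * z * neigh_sum);
    if (unif(rng) < alpha)                       // 3.4 / 3.5 accept or reject
        Z[i * n + j] = -Z[i * n + j];            // 3.2 the proposed flip
    // min(1, alpha) is implicit: if alpha >= 1 the flip is always accepted.
}
```

Note how the `min` of step 3.3 needs no explicit call: drawing u ~ Uniform(0, 1) and flipping when u < α handles α ≥ 1 automatically.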
















Remarks:




- We don't know the true values of β and π. Based on our prior knowledge (by looking at the image, or via our spidey sense) we just make educated guesses. To remind you what these values represent:




We expect more consistency between the neighbouring pixels if β is higher.




π is our original prior representing the noise probability, so a higher π implies more noise on X. γ is like the inverse of π; we introduced it just to make the equations more readable.




- A probability can't be higher than 1. That is why we have the min function in step 3.3 when calculating the acceptance probability.




- Steps 3.4 and 3.5 demonstrate accepting a flip with the acceptance probability. The simplest method to do that is to draw a number from the uniform distribution between 0 and 1. If the drawn number is less than the acceptance probability, then flip.




- The method is guaranteed to converge if it runs for enough iterations. But "enough iterations" can be huge, depending on your priors and on how noisy and how big the image is.











 
Parallel Image Denoising




In this project, our aim is to implement parallel image denoising using MPI. Two different parallel approaches will be discussed. We will assume that the image is a square of size n × n. There will be 1 master and p slave processors, and we will assume that n is divisible by the number of slave processors p.




- In the first approach, each processor is responsible for a group of n/p adjacent rows. Each processor works on (n/p × n) pixels, and these pixels are stored locally by the processor. When a pixel is inspected, the processor should inspect all of its neighbors. When the pixel is not on the boundary between two adjacent processors, information about the neighbor pixels is already present in the processor. When a pixel on the boundary is inspected, the processor needs to obtain information from the adjacent processor. Consider the black pixel in the figure below: Processor 1 should communicate with Processor 2 in order to learn the status of the south neighbor of the black pixel. Therefore, each processor should communicate with its adjacent processors at every iteration. Information about the neighbor pixels on the boundary can be stored locally and updated at every iteration.
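One way to prototype this first approach is to give each worker its strip of rows plus two "ghost" rows. The sketch below is deliberately MPI-free so it can be tested alone: the two `std::copy` calls mark exactly where matched sends and receives (e.g. `MPI_Sendrecv` between adjacent ranks) would go in the real program. All names and the layout are our own choices.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Each worker stores its n/p x n strip plus one "ghost" row above and one
// below, so every locally owned pixel can see all 8 neighbours. Layout:
// row 0 is the top ghost row, rows 1..rows are owned, row rows+1 is the
// bottom ghost row.
struct Strip {
    int rows, n;                 // number of owned rows, image width
    std::vector<int> data;       // (rows + 2) * n pixels
    Strip(int rows_, int n_) : rows(rows_), n(n_), data((rows_ + 2) * n_, 1) {}
    int* row(int r) { return &data[r * n]; }   // r in [0, rows + 1]
};

// Exchange boundary rows between two vertically adjacent strips. In the MPI
// version each std::copy becomes a message between neighbouring ranks
// (e.g. one MPI_Sendrecv pair), executed once per iteration.
void exchange_boundary(Strip& upper, Strip& lower) {
    // upper's last owned row becomes lower's top ghost row
    std::copy(upper.row(upper.rows), upper.row(upper.rows) + upper.n,
              lower.row(0));
    // lower's first owned row becomes upper's bottom ghost row
    std::copy(lower.row(1), lower.row(1) + lower.n,
              upper.row(upper.rows + 1));
}
```

Getting this data movement right sequentially first makes the later switch to real MPI messages much less error-prone.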













- In the second approach, the grid is divided into (n/p × n/p) blocks and each process is responsible for a block. Now the updates are trickier, since each process has more than one adjacent process.
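For this second approach, each worker first needs to know which other workers touch its block. Assuming the workers form a q × q grid (a layout choice on our part; the assignment does not fix one), the adjacent ranks can be computed as:

```cpp
#include <cassert>
#include <vector>

// In a checkerboard layout the workers form a q x q grid (q*q workers in
// total). Worker `rank` owns block (rank / q, rank % q) and must exchange
// boundary pixels with up to 8 adjacent workers (edges and corners, since
// the Ising neighbourhood here is 8-connected).
std::vector<int> adjacent_ranks(int rank, int q) {
    int br = rank / q, bc = rank % q;    // this worker's block coordinates
    std::vector<int> out;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;
            int r = br + dr, c = bc + dc;
            if (r >= 0 && r < q && c >= 0 && c < q)
                out.push_back(r * q + c);
        }
    return out;
}
```

A corner block has 3 adjacent workers, an edge block 5, and an interior block the full 8, which is exactly why the communication pattern is trickier than in the row-wise approach.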





 
Implementation




- For the MPI environment and tutorials please visit the web page. You can find many other tutorials on the web.




- You are expected to implement the first approach. There will be bonus points for those who also implement the second approach.




- The program will read the input from a text file and print the result to another text file. The input text file will be a 2D array representation of a black and white noisy image. The input file must only be read by the master processor and distributed to the slave (rest of the) processors by the master processor. The whole array should not be stored locally in every processor.




- Start by distributing the input among the processors and let each processor work on its pixels without any communication. Once you accomplish that, add the communication.




- Any functionality concerning the whole program, such as printing the output, should be done by the master processor.





- When two processors are exchanging information about the boundary cells, be careful to avoid deadlock. Before processing a pixel, you need to share the boundary pixels between the processors. Also, before moving on to the next iteration, make sure that all processors have finished their jobs.




- The names of the input and output files and the values of the β and π priors will be given on the command line, as well as the number of processes. An example execution for a Windows user that runs on 4 processors and uses the files input.txt and output.txt with β = 0.6 and π = 0.1 would be:




mpiexec -n 4 project.exe input.txt output.txt 0.6 0.1
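Command-line handling can be isolated in a small helper. The sketch below (the struct and function names are our own convention, not mandated by the assignment) parses the four positional arguments; in the real program, `argc`/`argv` come from `main` after `MPI_Init(&argc, &argv)` has been called.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Expected invocation (after mpiexec strips its own flags):
//   project.exe <input_file> <output_file> <beta> <pi>
struct Options {
    std::string input, output;
    double beta = 0.0, pi = 0.0;
};

// Parse the four positional arguments; returns defaults if too few are given.
Options parse_args(int argc, char** argv) {
    Options o;
    if (argc < 5) return o;          // real code should print a usage message
    o.input  = argv[1];
    o.output = argv[2];
    o.beta   = std::atof(argv[3]);   // beta: neighbour-consistency prior
    o.pi     = std::atof(argv[4]);   // pi: assumed noise probability
    return o;
}
```

Keeping parsing separate makes it easy to validate the arguments on the master processor only, before broadcasting them to the slaves.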




 
Important Nice Things




- We prepared three Python scripts for you:




image_to_text.py: Converts your image into a noise-free, project-ready text input file.

python image_to_text.py input_image output_file




make_noise.py: Converts your noise-free, project-ready text input file into a noisy, project-ready text input file. The script takes pi as a parameter that determines the noise rate. Try it with your own images.




python make_noise.py input_file pi_value output_file




text_to_image.py: Converts your project-ready text file into an image, so you can see your noisy image or the output of your code.




python text_to_image.py input_file output_image




- If you say "What is this wall of text with some fancy images? I need code!", then don't worry, we've got you covered too. We prepared a Jupyter Notebook for you which includes a REAL WORKING SEQUENTIAL VERSION of THIS METHOD. Hoorraay. Go check it out: https://github.com/suyunu/Markov-Chain-Monte-Carlo




- You need to run the main loop for a fairly long time. In the example in the Jupyter Notebook, to completely denoise a 15% (π = 0.15) noisy image with 640 × 640 pixels, we needed to iterate nearly 5 million times.




- Because this is a somewhat random algorithm, you may not end up with the same output every time. Also, more importantly, since our assumption that the real image comes from the Ising model is not really true, you will not end up with exactly the same image as the original, but a very close one.




- Most importantly, I highly recommend you take the course CmpE 548 Monte Carlo Methods to learn more about stuff like this. It is fun.







 
Submission




- The deadline for submitting the project is 26.12.2018 - 23:59. The deadline is strict. We will have demo sessions in the following days. In the demo session, you are required to bring your own laptop and explain how you implemented the project.




- This is an individual project. Your code should be original. Any similarity between submitted projects or to a source from the web will be treated as cheating.




- Your code should be sufficiently commented and indented.





- You should write a header comment in the source code file and list the following information:

/*
Student Name: Ali Veli
Student Number: 2008123456
Compile Status: Compiling/Not Compiling
Program Status: Working/Not Working
Notes: Anything you want to say about your code that will be helpful in the grading process.
*/




- You must prepare a document about the project; the documentation is an important part of the project. You should explain your approach to solving the problem. Follow the guidelines given in the "Programming Project Documentation" link on https://www.cmpe.boun.edu.tr/~gungort/informationstudents.htm




- Submit your project and document as a compressed archive (zip or rar) to Moodle. Your archive file should be named in the following format: "name_surname.ext". You don't need to submit any hard copies.




- If you have any further questions, send an e-mail to burak.suyunu@boun.edu.tr





































































