
Project 2: Face Morphing and Blending




Instructions




This is an individual project. ’Individual’ means each student must hand in their own answers, and each student must write their own code for the homework. It is admissible for students to collaborate in solving problems, but to help you actually learn the material, what you write down must be your own work, not copied from any other individual. You must also list the names of the students (maximum two) you collaborated with.



You must submit your code online on Canvas. We recommend that you include a README.txt file to help us execute your code correctly. Please place your code, resulting images and videos at the top level of a single folder (no subfolders please!) named <Pennkey>_Project2.zip.



Your submission folder should include the following:



– your .m or .py scripts for the required functions.




– .m or .py scripts for generating the face morphing video.




– any additional .m or .py files with helper functions you write.




– the images you used.




– .avi files generated for each of the morph methods in face morphing.




This handout provides instructions for two versions of the code: MATLAB and Python. You are free to select either one of them for this project.



Feel free to create your own functions as needed to modularize the code. For MATLAB, ensure that each function is in a separate file and that all files are in the same directory. For Python, add all helper functions to a helper.py file and import it in all the required scripts.



Start early! If you get stuck, please post your questions on Piazza or come to office hours!



Overview




This project focuses on image morphing techniques. You will produce a "morph" animation of your face into another person’s face. This is mandatory. You can also morph your face into anything else that you wish. Use your creativity here!




You will need to generate 60 frames of animation. You can convert these image frames into a .avi movie.
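For reference, a minimal sketch of writing a frame sequence to an .avi file in Python with OpenCV; the `frames` list and its size here are hypothetical placeholders, not part of the required interface:

```python
import cv2
import numpy as np

# Hypothetical example: 60 RGB uint8 frames of size (H, W, 3) stored in `frames`.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(60)]

height, width = frames[0].shape[:2]
fourcc = cv2.VideoWriter_fourcc(*'MJPG')              # motion-JPEG codec, works for .avi
writer = cv2.VideoWriter('morph.avi', fourcc, 30, (width, height))
for frame in frames:
    # OpenCV expects BGR channel order, so convert from RGB before writing.
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
writer.release()
```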




A morph is a simultaneous warp of the image shape and a cross-dissolve of the image colors. The cross-dissolve is the easier part; controlling and doing the warp is the harder part. The warp is controlled by defining a correspondence between the two pictures. The correspondence should map eyes to eyes, mouth to mouth, chin to chin, ears to ears, etc., to get the smoothest transformations possible.




The triangulation used in Task 2 can be computed in any way you like, or can even be defined by hand. A Delaunay triangulation is a good choice. Recall that you need to generate only one triangulation and use it on both sets of points. We recommend computing the triangulation at the midway shape (i.e., the mean of the two point sets) to lessen potential triangle deformations.
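As an illustration, one way to compute a single Delaunay triangulation at the midway shape in Python; `im1_pts` and `im2_pts` are assumed to be N × 2 arrays of corresponding points and are placeholder names:

```python
import numpy as np
from scipy.spatial import Delaunay

# im1_pts, im2_pts: N x 2 arrays of corresponding (x, y) feature points (assumed given).
mid_pts = (im1_pts + im2_pts) / 2.0     # midway shape: mean of the two point sets
tri = Delaunay(mid_pts)                 # triangulate once, at the midway shape
triangles = tri.simplices               # T x 3 array of point indices; reuse for both point sets
```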







Compute the barycentric coordinates for each pixel in the corresponding triangle. Recall, the computation involves solving the following equation:

\[
\begin{pmatrix} a_x & b_x & c_x \\ a_y & b_y & c_y \\ 1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix}
=
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
\tag{1}
\]

where a, b, c are the three corners of the triangle, (x, y) is the pixel position, and (α, β, γ) are its barycentric coordinates. Note that you should compute the matrix
\[
\begin{pmatrix} a_x & b_x & c_x \\ a_y & b_y & c_y \\ 1 & 1 & 1 \end{pmatrix}
\]
and its inverse only once per triangle.


Compute the corresponding pixel position in the source image: use the barycentric equation (eq. 1), but with the three corners of the same triangle in the source image,
\[
\begin{pmatrix} a_{sx} & b_{sx} & c_{sx} \\ a_{sy} & b_{sy} & c_{sy} \\ 1 & 1 & 1 \end{pmatrix},
\]
and plug in the same barycentric coordinates (α, β, γ) to compute the pixel position (x_s, y_s, z_s). You need to convert the homogeneous coordinate (x_s, y_s, z_s) to pixel coordinates by taking x_s = x_s/z_s, y_s = y_s/z_s.




Copy the pixel value at (x_s, y_s) in the original (source) image back to the target (intermediate) image. You can round the pixel location (x_s, y_s) or use bilinear interpolation.
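Putting the three steps above together, here is a non-authoritative Python sketch of the inverse warp for a single triangle. The arrays `tri_target`, `tri_source`, `pixels` and `src_img` are hypothetical placeholders for the triangle corners in the intermediate and source shapes, the pixel positions inside the triangle, and the source image:

```python
import numpy as np

# tri_target, tri_source: 3 x 2 arrays of triangle corners (x, y) in the intermediate
# and source shapes; pixels: M x 2 array of (x, y) positions inside the triangle;
# src_img: source image array. All are assumed given.

# Build the 3 x 3 corner matrices of equation (1): each column is (x, y, 1) of a corner.
A_target = np.vstack([tri_target.T, np.ones(3)])     # maps barycentric -> intermediate coords
A_source = np.vstack([tri_source.T, np.ones(3)])     # maps barycentric -> source coords
A_target_inv = np.linalg.inv(A_target)               # computed once per triangle

# Homogeneous pixel coordinates (3 x M), then barycentric coordinates (alpha, beta, gamma).
pts_h = np.vstack([pixels.T, np.ones(len(pixels))])
bary = A_target_inv @ pts_h

# Corresponding homogeneous source positions; divide by the last row for pixel coords.
src_h = A_source @ bary
xs = src_h[0] / src_h[2]
ys = src_h[1] / src_h[2]

# Nearest-neighbor copy (bilinear interpolation would be smoother); clip to image bounds.
rows = np.clip(np.round(ys).astype(int), 0, src_img.shape[0] - 1)
cols = np.clip(np.round(xs).astype(int), 0, src_img.shape[1] - 1)
warped_values = src_img[rows, cols]
```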



3 Gradient Domain Blending




For this part of the project, you will be blending images in the gradient domain as described in the paper Poisson Image Editing by Patrick Pérez, Michel Gangnet and Andrew Blake. It is a gradient-domain processing technique with numerous applications such as blending, non-photorealistic rendering, contrast enhancement, texture flattening and tone mapping; here it is used to automatically and seamlessly blend two images together. The paper discusses this technique in Section 3, named Seamless Cloning, which you are strongly advised to read thoroughly and understand before starting this project.




The goal is to seamlessly blend an object from a source image into a target image. The simplest method would be to just copy and paste the pixels from one image directly into the other. However, this creates apparent seams, even if the backgrounds are alike. We need to get rid of these seams without visually tampering with the source image.




Human vision is found to be more sensitive to gradients than absolute image intensities. We formulate this problem as finding values for the output pixels that maximally preserve the gradient of the source region without altering any of the background pixels.




To begin with, we define the image that we are changing as the target image, the image region that we cut and want to clone as the source region, and the pixels in the target image that will be seamlessly cloned with the source image as the replacement pixels.




Image Interpolation using a Guidance Vector Field



\[
\min_{f} \iint_{\Omega} \left| \nabla f - \mathbf{v} \right|^2 \quad \text{with} \quad f\big|_{\partial\Omega} = f^*\big|_{\partial\Omega} \tag{2}
\]

where ∇ = [∂/∂x, ∂/∂y] is the gradient operator, f is the function of the blending image, f* is the function of the target image, v is the vector field (the gradient field of the source image), Ω is the region of blending and ∂Ω is the boundary of the blending region.




We solve this interpolation problem (Poisson equations) for each color channel independently.




Discrete Poisson Solver



The variational problem in equation (2) is discretized to obtain a quadratic optimization problem.




\[
\min_{f|_{\Omega}} \; \sum_{\langle p,q \rangle \cap \Omega \neq \emptyset} \left( f_p - f_q - v_{pq} \right)^2, \quad \text{with } f_p = f_p^* \text{ for all } p \in \partial\Omega \tag{3}
\]

























where N_p is the set of 4-connected neighbors of pixel p, ⟨p, q⟩ denotes a pixel pair such that q ∈ N_p, f_p is the value of f at p, and v_pq = g_p − g_q for all ⟨p, q⟩.
The solution satisfies the following simultaneous linear equations:




\[
\text{for all } p \in \Omega: \quad |N_p|\, f_p - \sum_{q \in N_p \cap \Omega} f_q = \sum_{q \in N_p \cap \partial\Omega} f_q^* + \sum_{q \in N_p} v_{pq} \tag{4}
\]

\[
|N_p|\, f_p - \sum_{q \in N_p} f_q = \sum_{q \in N_p} v_{pq} \quad \text{for pixels } p \text{ interior to } \Omega, \text{ i.e. } N_p \subset \Omega \tag{5}
\]






















We need to solve for f_p from the given set of simultaneous linear equations. If we collect all the f_p into a vector x, then the given set of equations can be converted into a linear system Ax = b. Please note that not all of the f_q are unknown: it is possible that q ∈ N_p and also q ∈ ∂Ω, in which case f_q = f_q^* and it becomes a known parameter.




3.1 Align the Source Image and Create its Mask




First, you need to align the source image and the target image. Please use any image editor to adjust the size and position of the source image, ensuring that the region of the target image you want to replace is well aligned with the source image. Then, save the resized source image and the coordinates of its top-left corner as an offset. From now on, source image will refer to the resized source image.




Complete the following function to create an image mask - a logical matrix representing the pixels you want to replace in the source image. A value of 1 means that the pixel will be used, whereas a value of 0 means that the pixel will not be used.




We recommend using MATLAB’s functions imfreehand and createMask.




[mask] = maskImage(img)




(INPUT) img: h × w × 3 matrix representing the source image.

(OUTPUT) mask: h × w matrix representing the logical mask.
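For Python, where imfreehand is not available, one possible (purely illustrative) sketch uses matplotlib's ginput to click a polygon and rasterizes it with matplotlib.path.Path:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path

def maskImage(img):
    """Interactively select a polygonal region and return an h x w boolean mask (sketch)."""
    h, w = img.shape[:2]
    plt.imshow(img)
    plt.title('Click the polygon corners, then press Enter')
    pts = plt.ginput(n=-1, timeout=0)            # list of clicked (x, y) vertices
    plt.close()

    # Rasterize the polygon: test every pixel center against the clicked path.
    yy, xx = np.mgrid[0:h, 0:w]
    pixel_centers = np.column_stack([xx.ravel(), yy.ravel()])
    mask = Path(pts).contains_points(pixel_centers).reshape(h, w)
    return mask
```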






3.2 Index the Pixels




The intensities of the replacement pixels in the target image can be found by solving the linear system Ax = b. However, not all pixels need to be computed: only the pixels masked as 1 in the logical mask will be blended. In order to reduce the number of calculations, you need to index the replacement pixels such that each element in x represents one replacement pixel. As shown in Figure 3, the yellow locations are the replacement pixels (indexed from left to right).




Complete the following function to obtain the indexes of the replacement pixels:




[indexes] = getIndexes(mask, targetH, targetW, offsetX, offsetY)




(INPUT) mask: h × w logical matrix representing the replacement region.

(INPUT) targetH: The height of the target image, h′.

(INPUT) targetW: The width of the target image, w′.



(INPUT) offsetX: The x-axis offset of the source image with respect to the target image.



(INPUT) offsetY: The y-axis offset of the source image with respect to the target image.



(OUTPUT) indexes: h′ × w′ matrix representing the index of each replacement pixel. A value of 0 means the pixel is not a replacement pixel.
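A minimal sketch of this indexing in Python is shown below; the offset convention (offsetX shifting columns, offsetY shifting rows) is an assumption you should match to the rest of your code:

```python
import numpy as np

def getIndexes(mask, targetH, targetW, offsetX, offsetY):
    """Number the replacement pixels 1..N inside an h' x w' index matrix (sketch)."""
    indexes = np.zeros((targetH, targetW), dtype=int)
    h, w = mask.shape
    # Assumed convention: offsetX shifts columns, offsetY shifts rows.
    region = indexes[offsetY:offsetY + h, offsetX:offsetX + w]   # view into the target frame
    n = int(mask.sum())
    region[mask.astype(bool)] = np.arange(1, n + 1)   # row-major: left to right, top to bottom
    return indexes
```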



[Figure: (a) Source Image, (b) Target Image, (c) Blended Image]




3.3 Compute the Coefficient Matrix




As described in Section 3.2, the intensities of the replacement pixels are obtained by solving Ax = b. In this section, you need to generate the Coefficient Matrix A. Please note that the Coefficient Matrix is of size N × N, where N is the number of replacement pixels. In order to reduce the memory footprint of this matrix, you will have to use a sparse matrix.




Complete the following function to compute the Coefficient Matrix:




[coeffA] = getCoefficientMatrix(indexes)




(INPUT) indexes: h′ × w′ matrix representing the indices of each replacement pixel.

(OUTPUT) coeffA: an N × N sparse matrix representing the Coefficient Matrix, where N is the number of replacement pixels.
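One possible sketch with scipy.sparse, assuming the indexing convention above (values 1..N, with 0 for non-replacement pixels); it is an illustration, not the required implementation:

```python
import numpy as np
from scipy.sparse import lil_matrix

def getCoefficientMatrix(indexes):
    """Build the N x N sparse coefficient matrix A for the discrete Poisson system (sketch)."""
    N = int(indexes.max())
    H, W = indexes.shape
    A = lil_matrix((N, N))
    rows, cols = np.nonzero(indexes)
    for y, x in zip(rows, cols):
        i = indexes[y, x] - 1                     # 0-based row in A for this replacement pixel
        A[i, i] = 4                               # |N_p| for 4-connected neighbors
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and indexes[ny, nx] > 0:
                A[i, indexes[ny, nx] - 1] = -1    # neighbor is also an unknown
    return A.tocsr()
```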






3.4 Compute the Solution Vector




Complete the following function to generate the solution vector b in the linear system Ax = b:

[solVectorb] = getSolutionVect(indexes, source, target, offsetX, offsetY)




(INPUT) indexes: h′ × w′ matrix representing the indices of each replacement pixel.

(INPUT) source: h × w matrix representing one color channel of the source image.

(INPUT) target: h′ × w′ matrix representing one color channel of the target image.



(INPUT) offsetX: The x-axis offset of the source image with respect to the target image.










(INPUT) offsetY: The y-axis offset of the source image with respect to the target image.



(OUTPUT) solVectorb: 1 × N vector representing the solution vector.
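A hedged sketch of this step, following equation (4): the discrete Laplacian of the source supplies the guidance terms, and known boundary values of the target are added to b. It assumes the same offset convention as above and that the masked region does not touch the source image border:

```python
import numpy as np

def getSolutionVect(indexes, source, target, offsetX, offsetY):
    """Return the length-N solution vector b for one color channel (sketch)."""
    N = int(indexes.max())
    H, W = indexes.shape
    b = np.zeros(N)
    rows, cols = np.nonzero(indexes)
    for y, x in zip(rows, cols):
        i = indexes[y, x] - 1
        sy, sx = y - offsetY, x - offsetX            # position in the (resized) source image
        # Guidance term: sum of v_pq = 4*g_p - sum of the four neighbors g_q.
        b[i] = 4.0 * source[sy, sx] \
               - source[sy - 1, sx] - source[sy + 1, sx] \
               - source[sy, sx - 1] - source[sy, sx + 1]
        # Known boundary values f*_q of the target move to the right-hand side.
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and indexes[ny, nx] == 0:
                b[i] += target[ny, nx]
    return b
```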



3.5 Seamlessly Clone the Image




Once you have obtained A and b as stated above, solve for the vector x. You will need to replace the pixels in question with the updated intensities, i.e. clone the image and obtain the resulting image. Complete the following function to obtain the composite image:




[resultImg] = reconstructImg(indexes, red, green, blue, targetImg)




(INPUT) indexes: h′ × w′ matrix representing the indices of each replacement pixel.

(INPUT) red: 1 × N vector representing the intensities of the red-channel replacement pixels.

(INPUT) green: 1 × N vector representing the intensities of the green-channel replacement pixels.

(INPUT) blue: 1 × N vector representing the intensities of the blue-channel replacement pixels.

(INPUT) targetImg: h′ × w′ × 3 matrix representing the target image.

(OUTPUT) resultImg: h′ × w′ × 3 matrix representing the resulting cloned image.
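A short sketch of the reconstruction step; clipping to the 0-255 range is an extra assumption on top of the spec:

```python
import numpy as np

def reconstructImg(indexes, red, green, blue, targetImg):
    """Write the solved intensities back into a copy of the target image (sketch)."""
    resultImg = targetImg.copy().astype(float)
    mask = indexes > 0
    order = indexes[mask] - 1                     # map each replacement pixel to its entry in x
    for c, channel in enumerate((red, green, blue)):
        plane = resultImg[:, :, c]                # view, so assignment writes into resultImg
        plane[mask] = np.asarray(channel).ravel()[order]
    # Assumes 0-255 intensities; adjust if you work with floats in [0, 1].
    return np.clip(resultImg, 0, 255).astype(np.uint8)
```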



3.6 Wrapper Function




After you complete all the above functions, you will need to write a wrapper function and a demo script. In this function, named seamlessCloningPoisson.m, call getIndexes.m, getCoefficientMatrix.m, getSolutionVect.m and reconstructImg.m, and solve the linear system. We recommend the MATLAB function mldivide.




[resultImg] = seamlessCloningPoisson(sourceImg, targetImg, mask, offsetX, offsetY)




(INPUT) sourceImg: h × w × 3 matrix representing the source image.

(INPUT) targetImg: h′ × w′ × 3 matrix representing the target image.

(INPUT) mask: h × w logical matrix representing the replacement region.



(INPUT) offsetX: The x-axis offset of the source image with respect to the target image.



(INPUT) offsetY: The y-axis offset of the source image with respect to the target image.



(OUTPUT) resultImg: h′ × w′ × 3 matrix representing the resulting cloned image.
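A minimal Python sketch of the wrapper, assuming the helper sketches above and scipy's sparse solver (the MATLAB route would use mldivide instead):

```python
from scipy.sparse.linalg import spsolve

def seamlessCloningPoisson(sourceImg, targetImg, mask, offsetX, offsetY):
    """Blend sourceImg into targetImg over the masked region (sketch)."""
    targetH, targetW = targetImg.shape[:2]
    indexes = getIndexes(mask, targetH, targetW, offsetX, offsetY)
    A = getCoefficientMatrix(indexes)

    channels = []
    for c in range(3):                                 # solve each color channel independently
        b = getSolutionVect(indexes, sourceImg[:, :, c].astype(float),
                            targetImg[:, :, c].astype(float), offsetX, offsetY)
        channels.append(spsolve(A, b))                 # x = A \ b

    return reconstructImg(indexes, channels[0], channels[1], channels[2], targetImg)
```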



Finally, write a script to generate your blended image using seamlessCloningPoisson.m and maskImage.m.

Please use your creativity while creating cloned images.




4 Extra Credit Challenges:




The following challenges are for extra credit; their implementation is optional.




NOTE: The following challenges are the ones that we have in mind so far. We may release more before the deadline. The latest potential release date is a week before the deadline. Stay tuned on Piazza!




4.1 Image Morphing Via Thin Plate Spline



Goal: Implement the same function as in Task 2 except using the Thin Plate Spline model.




For this part, you need to compute a thin plate spline (TPS) that maps the feature points in the intermediate shape (B) to the corresponding points in the original image (A). Recall that you need two of them: one for the x coordinate and one for the y coordinate. A thin plate spline has the following form:






\[
f(x, y) = a_1 + a_x x + a_y y + \sum_{i=1}^{p} w_i\, U\!\left( \left\| (x_i, y_i) - (x, y) \right\| \right), \tag{6}
\]

where U(r) = −r² log(r²).




We know there is some thin plate spline (TPS) transform that can map the corresponding feature points in image (B) back to image (A). Using the same TPS transform, we will transform all of the pixels of image (B) into image (A) and copy back their pixel values.




You need to implement three functions:




– Thin-plate parameter estimation:




[a1,ax,ay,w] = est_tps(interim_pts, source_pts)




* (INPUT) interim_pts: N × 2 matrix, each row representing the corresponding point position (x, y) in the second image.

* (INPUT) source_pts: N × 1 vector representing the corresponding point position x or y in the first image.

* (OUTPUT) a1: double, TPS parameter.

* (OUTPUT) ax: double, TPS parameter.

* (OUTPUT) ay: double, TPS parameter.

* (OUTPUT) w: N × 1 vector, TPS parameters.




Recall the solution of the TPS model requires solving the following equation:










\[
\begin{pmatrix} K & P \\ P^{T} & 0 \end{pmatrix}
\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \\ a_x \\ a_y \\ a_1 \end{pmatrix}
=
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_p \\ 0 \\ 0 \\ 0 \end{pmatrix}
\tag{7}
\]

where

\[
K_{ij} = U\!\left( \left\| (x_i, y_i) - (x_j, y_j) \right\| \right), \tag{8}
\]

v_i = f(x_i, y_i), and the i-th row of P is (x_i, y_i, 1). K is a matrix of size p × p, and P is a matrix of size p × 3. In order to have a stable solution, you need to compute the solution using

\[
\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \\ a_x \\ a_y \\ a_1 \end{pmatrix}
=
\left( \begin{pmatrix} K & P \\ P^{T} & 0 \end{pmatrix} + \lambda\, I(p+3,\, p+3) \right)^{-1}
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_p \\ 0 \\ 0 \\ 0 \end{pmatrix}
\tag{9}
\]

where I(p+3, p+3) is an identity matrix of size p+3 and λ ≥ 0 (usually close to zero).




NOTE: You need to compute two TPS models: one by plugging in the x coordinates as v_i, and one by plugging in the y coordinates as v_i.
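A sketch of the parameter estimation following equation (9), with U(r) = −r² log(r²); the λ default and the handling of r = 0 are assumptions, not part of the spec:

```python
import numpy as np

def est_tps(interim_pts, source_pts, lam=1e-8):
    """Solve equation (9) for the TPS parameters of one coordinate (x or y) - sketch."""
    p = interim_pts.shape[0]
    # Pairwise squared distances between control points, then U(r) = -r^2 log(r^2).
    diff = interim_pts[:, None, :] - interim_pts[None, :, :]
    r2 = np.sum(diff ** 2, axis=2)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = np.where(r2 > 0, -r2 * np.log(r2), 0.0)
    P = np.hstack([interim_pts, np.ones((p, 1))])      # i-th row is (x_i, y_i, 1)

    # Assemble the (p+3) x (p+3) block matrix [[K, P], [P^T, 0]] of equation (7).
    M = np.zeros((p + 3, p + 3))
    M[:p, :p] = K
    M[:p, p:] = P
    M[p:, :p] = P.T
    v = np.concatenate([np.asarray(source_pts, dtype=float).ravel(), np.zeros(3)])

    params = np.linalg.solve(M + lam * np.eye(p + 3), v)
    w, ax, ay, a1 = params[:p], params[p], params[p + 1], params[p + 2]
    return a1, ax, ay, w
```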




– morphed_im = obtain_morphed_tps(im_source, a1_x, ax_x, ay_x, w_x, a1_y, ax_y, ay_y, w_y, interim_pts, sz)




* (INPUT) im_source: Hs × Ws × 3 matrix representing the source image.

* (INPUT) a1_x, ax_x, ay_x, w_x: the parameters solved for when running est_tps in the x direction.

* (INPUT) a1_y, ax_y, ay_y, w_y: the parameters solved for when running est_tps in the y direction.

* (INPUT) interim_pts: N × 2 matrix, each row representing the corresponding point position (x, y) in the target image.

* (INPUT) sz: 1 × 2 vector representing the target image size (Ht, Wt).

* (OUTPUT) morphed_im: Ht × Wt × 3 matrix representing the morphed image.




– morphed_im = morph_tps(im1, im2, im1_pts, im2_pts, warp_frac, dissolve_frac)




* (INPUT) im1: target image.

* (INPUT) im2: source image.

* (INPUT) im1_pts: correspondence coordinates in the target image.

* (INPUT) im2_pts: correspondence coordinates in the source image.

* (INPUT) warp_frac: a vector containing warping parameters.

* (INPUT) dissolve_frac: a vector containing cross-dissolve parameters.

* (OUTPUT) morphed_im: a set of morphed images obtained from different warp and dissolve parameters. The size should be [number of images, image height, image width, number of color channels].







In this step, you need to transform all the pixels in image (B) using the TPS model, and read back the pixel values in image (A) directly. The positions of the pixels in image (A) are generated using the TPS model of equation (6).
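As a sketch, the per-pixel TPS evaluation for one coordinate could look like the following (run once with the x parameters and once with the y parameters); the helper name `apply_tps` and the rounding and clipping to the source bounds are added assumptions:

```python
import numpy as np

def apply_tps(xs, ys, a1, ax, ay, w, ctrl_pts):
    """Evaluate f(x, y) = a1 + ax*x + ay*y + sum_i w_i U(||(x_i, y_i) - (x, y)||) per pixel."""
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)   # M x 2 pixel positions
    diff = ctrl_pts[None, :, :] - pts[:, None, :]                   # M x p x 2
    r2 = np.sum(diff ** 2, axis=2)
    with np.errstate(divide='ignore', invalid='ignore'):
        U = np.where(r2 > 0, -r2 * np.log(r2), 0.0)                 # U(r) = -r^2 log(r^2)
    vals = a1 + ax * pts[:, 0] + ay * pts[:, 1] + U @ w
    return vals.reshape(xs.shape)

# Hypothetical usage inside obtain_morphed_tps: build a grid over the target size,
# map every pixel back to the source, round, clip, and copy pixel values.
# Ht, Wt = sz; xs, ys = np.meshgrid(np.arange(Wt), np.arange(Ht))
# src_x = np.clip(np.round(apply_tps(xs, ys, a1_x, ax_x, ay_x, w_x, interim_pts)), 0, Ws - 1).astype(int)
# src_y = np.clip(np.round(apply_tps(xs, ys, a1_y, ax_y, ay_y, w_y, interim_pts)), 0, Hs - 1).astype(int)
# morphed_im = im_source[src_y, src_x]
```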




4.2 Face Blending:



Using the sequence of images during image morphing, seamlessly clone a source face onto a target face and create a video. The recommended format is .avi.



Test and Submission



Use different kinds of images. Face to face morphing is mandatory but get creative! Morph objects to objects, even face to objects. Creative morphs will receive the honor of public recognition on Piazza.



We have provided a test script for MATLAB and Python for this project. Extract the contents of the test script to the same directory as your functions and run Test_script.m/py in MATLAB/Python. When grading, we will call your functions in the same manner, so make sure they work as you would expect on the samples in the test script.



Collect all your source code files and test images into a folder named <Pennkey>_Project2. Zip this folder and submit it to Canvas. Any deviation from this structure will cause the test script to fail. Only submit code pertaining to your language of implementation. For example, if you choose to do the project in Python, do not submit the MATLAB folder containing the MATLAB starter code.
