Computer Vision II HW5 Solved




1.2    Problem 1 (Math): Point on Line Closest to the Origin (5 points)
Given a line l = (a, b, c)⊤, show that the point on l that is closest to the origin is the point x = (−ac, −bc, a² + b²)⊤ (hint: this calculation is needed in the two-view optimal triangulation method used below).

"""Write your solution here."""
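For reference, a sketch of one standard argument (the closest point is the foot of the perpendicular from the origin to l), in LaTeX:

% The line l = (a,b,c)^T is the set of inhomogeneous points (x,y) with ax + by + c = 0.
% The closest point to the origin lies on the perpendicular through the origin,
% which has direction (a,b)^T; parameterize it as t(a,b)^T and substitute:
\[
  a(ta) + b(tb) + c = 0 \quad\Longrightarrow\quad t = \frac{-c}{a^2 + b^2},
\]
\[
  \mathbf{x} = \frac{-c}{a^2 + b^2}\begin{pmatrix} a \\ b \end{pmatrix}
  \;\sim\; (-ac,\; -bc,\; a^2 + b^2)^\top
  \quad\text{in homogeneous coordinates.}
\]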






1.3    Problem 2 (Programming): Feature Detection (20 points)

Download input data from the course website. The file IMG_5030.jpeg contains image 1 and the file IMG_5031.jpeg contains image 2.

For each input image, calculate an image where each pixel value is the minor eigenvalue of the gradient matrix
N = [ ∑_w Ix²     ∑_w Ix Iy ]
    [ ∑_w Ix Iy   ∑_w Iy²   ]
where w is the window about the pixel, and Ix and Iy are the gradient images in the x and y direction, respectively. Calculate the gradient images using the five-point central difference operator. Set resulting values that are below a specified threshold value to zero (hint: calculating the mean instead of the sum in N allows for adjusting the size of the window without changing the threshold value). Apply an operation that suppresses (sets to 0) local (i.e., about a window) non-maximum pixel values in the minor eigenvalue image. Vary these parameters such that 1350–1400 features are detected in each image. For resulting nonzero pixel values, determine the subpixel feature coordinate using the Forstner corner point operator.

Report your final values for:

    • the size of the feature detection window (i.e. the size of the window used to calculate the elements in the gradient matrix N)
    • the minor eigenvalue threshold value

    • the size of the local nonmaximum suppression window

    • the resulting number of features detected (i.e. corners) in each image.

Display figures for:

    • original images with detected features, where the detected features are indicated by a square window (the size of the detection window) about the features

A typical implementation takes around 30 seconds. If yours takes more than 60 seconds, you may lose points.


[ ]: %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from scipy.signal import convolve2d as conv2d
import scipy.ndimage

def ImageGradient(I):
    # inputs:
    # I is the input image (may be mxn for Grayscale or mxnx3 for RGB)
    #
    # outputs:
    # Ix is the derivative of the magnitude of the image w.r.t. x
    # Iy is the derivative of the magnitude of the image w.r.t. y

    m, n = I.shape[:2]

    """your code here"""

    return Ix, Iy
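For orientation, a minimal sketch of the five-point central difference (not the required implementation; the RGB-to-grayscale reduction by channel averaging and the symmetric boundary handling are assumptions):

# Sketch only. Five-point central difference:
# f'(x) ~ (f(x-2) - 8 f(x-1) + 8 f(x+1) - f(x+2)) / 12.
# conv2d performs true convolution (kernel flipped), so this row kernel
# realizes the stencil above.
def ImageGradientSketch(I):
    if I.ndim == 3:                      # assumption: average the RGB channels
        I = I.mean(axis=2)
    k = np.array([[-1., 8., 0., -8., 1.]]) / 12.
    Ix = conv2d(I, k, mode='same', boundary='symm')
    Iy = conv2d(I, k.T, mode='same', boundary='symm')
    return Ix, Iy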


def MinorEigenvalueImage(Ix, Iy, w):
    # Calculate the minor eigenvalue image J
    #
    # inputs:
    # Ix is the derivative of the magnitude of the image w.r.t. x
    # Iy is the derivative of the magnitude of the image w.r.t. y
    # w is the size of the window used to compute the gradient matrix N
    #
    # outputs:
    # J0 is the mxn minor eigenvalue image of N before thresholding

    m, n = Ix.shape[:2]
    J0 = np.zeros((m,n))

    # Calculate your minor eigenvalue image J0.
    """your code here"""

    return J0
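Since N is a 2×2 symmetric matrix, its minor eigenvalue has the closed form λ_min = (tr N − √((tr N)² − 4 det N)) / 2, so no per-pixel eigensolver is needed. A minimal sketch using windowed means (per the hint), with scipy.ndimage.uniform_filter assumed as the averaging primitive:

# Sketch only: closed-form minor eigenvalue of the 2x2 gradient matrix N.
def MinorEigenvalueImageSketch(Ix, Iy, w):
    A = scipy.ndimage.uniform_filter(Ix * Ix, size=w)  # mean of Ix^2 over window
    B = scipy.ndimage.uniform_filter(Ix * Iy, size=w)  # mean of Ix*Iy
    C = scipy.ndimage.uniform_filter(Iy * Iy, size=w)  # mean of Iy^2
    tr, det = A + C, A * C - B * B
    disc = np.sqrt(np.maximum(tr * tr - 4.0 * det, 0.0))  # clamp tiny negatives
    return 0.5 * (tr - disc)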

def NMS(J, w_nms):
    # Apply nonmaximum suppression to J using window w_nms
    #
    # inputs:
    # J is the minor eigenvalue image after thresholding
    # w_nms is the size of the local nonmaximum suppression window
    #
    # outputs:
    # J2 is the mxn resulting image after applying nonmaximum suppression

    J2 = J.copy()

    """your code here"""

    return J2
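A minimal sketch of one common approach (assuming scipy.ndimage.maximum_filter): a pixel survives only if it equals the maximum over its w_nms × w_nms neighborhood; note that exact ties within a window all survive under this rule.

# Sketch only: zero out pixels that are not the local maximum in a w_nms window.
def NMSSketch(J, w_nms):
    J2 = J.copy()
    local_max = scipy.ndimage.maximum_filter(J, size=w_nms)
    J2[J2 < local_max] = 0
    return J2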






def ForstnerCornerDetector(Ix, Iy, w, t, w_nms):
    # Calculate the minor eigenvalue image J
    # Threshold J
    # Run nonmaximum suppression on the thresholded J
    # Gather the coordinates of the nonzero pixels in J
    # Then compute the subpixel location of each point using the Forstner operator
    #
    # inputs:
    # Ix is the derivative of the magnitude of the image w.r.t. x
    # Iy is the derivative of the magnitude of the image w.r.t. y
    # w is the size of the window used to compute the gradient matrix N
    # t is the minor eigenvalue threshold
    # w_nms is the size of the local nonmaximum suppression window
    #
    # outputs:
    # C is the number of corners detected in each image
    # pts is the 2xC array of coordinates of subpixel accurate corners
    #     found using the Forstner corner detector
    # J0 is the mxn minor eigenvalue image of N before thresholding
    # J1 is the mxn minor eigenvalue image of N after thresholding
    # J2 is the mxn minor eigenvalue image of N after thresholding and NMS

    m, n = Ix.shape[:2]
    J0 = np.zeros((m,n))
    J1 = np.zeros((m,n))

    # Calculate your minor eigenvalue image J0 and its thresholded version J1.
    """your code here"""

    # Run nonmaximum suppression on your thresholded minor eigenvalue image.
    J2 = NMS(J1, w_nms)

    # Detect corners.
    """your code here"""

    return C, pts, J0, J1, J2
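For the subpixel step, the Forstner corner is the least-squares intersection of the gradient lines in the window: solve N x̂ = b, where b stacks the coordinate-weighted gradient products. A minimal sketch for a single integer corner (cx, cy), without the boundary checks a full implementation needs:

# Sketch only: Forstner subpixel solve at one integer corner (cx, cy).
def ForstnerSubpixelSketch(Ix, Iy, cx, cy, w):
    r = w // 2
    ys, xs = np.mgrid[cy - r:cy + r + 1, cx - r:cx + r + 1]
    gx = Ix[cy - r:cy + r + 1, cx - r:cx + r + 1]
    gy = Iy[cy - r:cy + r + 1, cx - r:cx + r + 1]
    N = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * gx * xs + gx * gy * ys),
                  np.sum(gx * gy * xs + gy * gy * ys)])
    return np.linalg.solve(N, b)  # subpixel (x, y); assumes N is well conditioned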


# feature detection
def RunFeatureDetection(I, w, t, w_nms):
    Ix, Iy = ImageGradient(I)
    C, pts, J0, J1, J2 = ForstnerCornerDetector(Ix, Iy, w, t, w_nms)
    return C, pts, J0, J1, J2





[ ]: from PIL import Image
import time

# input images
I1 = np.array(Image.open('IMG_5030.jpeg'), dtype='float')/255.
I2 = np.array(Image.open('IMG_5031.jpeg'), dtype='float')/255.

# parameters to tune
w = 15
t = 1
w_nms = 1

tic = time.time()
# run feature detection algorithm on input images
C1, pts1, J1_0, J1_1, J1_2 = RunFeatureDetection(I1, w, t, w_nms)
C2, pts2, J2_0, J2_1, J2_2 = RunFeatureDetection(I2, w, t, w_nms)
toc = time.time() - tic

print('took %f secs'%toc)

# display results
plt.figure(figsize=(14,24))

# show corners on original images
ax = plt.subplot(1,2,1)
plt.imshow(I1)
for i in range(C1): # draw rectangles of size w around corners
    x,y = pts1[:,i]
    ax.add_patch(patches.Rectangle((x-w/2,y-w/2),w,w, fill=False))
plt.plot(pts1[0,:], pts1[1,:], '.b') # display subpixel corners
plt.title('Found %d Corners'%C1)

ax = plt.subplot(1,2,2)
plt.imshow(I2)
for i in range(C2):
    x,y = pts2[:,i]
    ax.add_patch(patches.Rectangle((x-w/2,y-w/2),w,w, fill=False))
plt.plot(pts2[0,:], pts2[1,:], '.b') # display subpixel corners
plt.title('Found %d Corners'%C2)

plt.show()

Final values for parameters

    • w =

    • t =

    • w_nms =


    • C1 =

    • C2 =

1.4    Problem 3 (Programming): Feature matching (15 points)

Determine the set of one-to-one putative feature correspondences by performing a brute-force search for the greatest correlation coefficient value (in the range [-1, 1]) between the detected features in image 1 and the detected features in image 2. Only allow matches that are above a specified correlation coefficient threshold value (note that calculating the correlation coefficient allows for adjusting the size of the matching window without changing the threshold value). Further, only allow matches that are above a specified distance ratio threshold value, where distance is measured to the next best match for a given feature. Vary these parameters such that 225–275 putative feature correspondences are established. Optional: constrain the search to coordinates in image 2 that are within a proximity of the detected feature coordinates in image 1.

Note: You must center each window at the subpixel corner coordinates while computing normalized cross correlation; otherwise, you will lose points.

Report your final values for:

    • the size of the matching window

    • the correlation coefficient threshold

    • the distance ratio threshold

    • the size of the proximity window (if used)

    • the resulting number of putative feature correspondences (i.e. matched features)

Display figures for:

    • pair of images, where the matched features are indicated by a square window (the size of the matching window) about the feature and a line segment is drawn from the feature to the coordinates of the corresponding feature in the other image

A typical implementation takes around 40 seconds. If yours takes more than 80 seconds, you may lose points.


[ ]: def NCC(I1, I2, pts1, pts2, w, p):
    # compute the normalized cross correlation between image patches I1, I2
    # result should be in the range [-1,1]
    #
    # Do ensure that windows are centered at the subpixel coordinates
    # while computing normalized cross correlation.
    #
    # inputs:
    # I1, I2 are the input images
    # pts1, pts2 are the points to be matched
    # w is the size of the matching window to compute correlation coefficients
    # p is the size of the proximity window
    #
    # output:
    # scores is the normalized cross correlation matrix of scores between all
    # windows in image 1 and all windows in image 2

    """your code here"""

    return scores
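Centering a window at subpixel coordinates requires interpolation. A minimal sketch of the two pieces (patch extraction via bilinear interpolation, assuming scipy.ndimage.map_coordinates and a grayscale image, plus the correlation coefficient of two patches):

# Sketch only: w-by-w patch centered at subpixel (x, y) via bilinear interpolation.
def SubpixelPatchSketch(I, x, y, w):
    r = (w - 1) / 2.0
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    # map_coordinates takes (row, col) = (y, x); order=1 is bilinear
    return scipy.ndimage.map_coordinates(I, [y + dy, x + dx], order=1)

# Sketch only: correlation coefficient of two patches, in [-1, 1].
def NCCPairSketch(patch1, patch2):
    a = patch1 - patch1.mean()
    b = patch2 - patch2.mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)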


def Match(scores, t, d):
    # perform the one-to-one correspondence matching on the correlation
    # coefficient matrix
    #
    # inputs:
    # scores is the NCC matrix
    # t is the correlation coefficient threshold
    # d is the distance ratio threshold
    #
    # output:
    # inds is a 2xM array of match indices into pts1 (row 0) and pts2 (row 1),
    # where M is the number of matches

    """your code here"""

    inds = []

    return inds
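A minimal sketch of one greedy scheme (a design choice, not the only valid one): repeatedly take the best remaining score, test it against the correlation threshold and against the next-best score sharing its row or column (the ratio test, with distance taken as 1 − NCC), then invalidate that row and column to enforce one-to-one matching.

# Sketch only: greedy one-to-one matching with threshold t and distance ratio d.
def MatchSketch(scores, t, d):
    S = scores.astype(float).copy()
    inds = []
    while True:
        i, j = np.unravel_index(np.argmax(S), S.shape)
        best = S[i, j]
        if best < t:
            break
        S[i, j] = -np.inf
        second = max(S[i, :].max(), S[:, j].max())   # next-best competitor
        if (1.0 - best) < d * (1.0 - second):        # ratio test on distances
            inds.append((i, j))
        S[i, :] = -np.inf                            # enforce one-to-one
        S[:, j] = -np.inf
    return np.array(inds, dtype=int).T if inds else np.zeros((2, 0), dtype=int)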




def RunFeatureMatching(I1, I2, pts1, pts2, w, t, d, p):
    # inputs:
    # I1, I2 are the input images
    # pts1, pts2 are the points to be matched
    # w is the size of the matching window to compute correlation coefficients
    # t is the correlation coefficient threshold
    # d is the distance ratio threshold
    # p is the size of the proximity window
    #
    # outputs:
    # inds is a 2xk matrix of matches where inds[0,i] indexes a point in pts1
    # and inds[1,i] indexes a point in pts2, where k is the number of matches

    scores = NCC(I1, I2, pts1, pts2, w, p)
    inds = Match(scores, t, d)
    return inds


[ ]: # parameters to tune
w = 15
t = 1
d = 1
p = np.inf

tic = time.time()
# run the feature matching algorithm on the input images and detected features
inds = RunFeatureMatching(I1, I2, pts1, pts2, w, t, d, p)
toc = time.time() - tic

print('took %f secs'%toc)

# create new matrices of points which contain only the matched features
match1 = pts1[:,inds[0,:]]
match2 = pts2[:,inds[1,:]]

# display the results
plt.figure(figsize=(14,24))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
ax1.imshow(I1)
ax2.imshow(I2)
plt.title('Found %d Putative Matches'%match1.shape[1])
for i in range(match1.shape[1]):
    x1,y1 = match1[:,i]
    x2,y2 = match2[:,i]
    ax1.plot([x1, x2],[y1, y2],'-r')
    ax1.add_patch(patches.Rectangle((x1-w/2,y1-w/2),w,w, fill=False))
    ax2.plot([x2, x1],[y2, y1],'-r')
    ax2.add_patch(patches.Rectangle((x2-w/2,y2-w/2),w,w, fill=False))
plt.show()

print('unique points in image 1: %d'%np.unique(inds[0,:]).shape[0])
print('unique points in image 2: %d'%np.unique(inds[1,:]).shape[0])


Final values for parameters

    • w =

    • t =

    • d =

    • p =

    • num_matches =

1.5    Problem 4 (Programming): Outlier Rejection (20 points)

The resulting set of putative point correspondences should contain both inlier and outlier correspondences (i.e., false matches). Determine the set of inlier point correspondences using the M-estimator Sample Consensus (MSAC) algorithm, where the maximum number of attempts to find a consensus set is determined adaptively. For each trial, you must use the 7-point algorithm (as described in lecture) to estimate the fundamental matrix, resulting in 1 or 3 solutions. Calculate the (squared) Sampson error as a first-order approximation to the geometric error.

Hint: this problem has codimension 1

Also: fix a random seed in your MSAC. If I cannot reproduce your results, you will lose points. But don’t try to find a good seed as I will run your code using different seeds. Try to get 170-220 inliers every time you run MSAC.
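Two MSAC ingredients follow directly from the lecture material; a minimal sketch (the full loop, the 7-point solver, and the seed handling are left to you). The adaptive trial count uses the standard formula with sample size s = 7, and the inlier tolerance is typically derived from the chi-square distribution with 1 degree of freedom (the codimension in the hint):

# Sketch only: adaptive number of MSAC trials for confidence p, inlier ratio alpha.
def MaxTrialsSketch(p, alpha, s=7):
    return int(np.ceil(np.log(1 - p) / np.log(1 - alpha**s)))

# Sketch only: squared Sampson error of homogeneous correspondences x1, x2 (3xN).
def SampsonErrorSketch(F, x1, x2):
    Fx1 = F @ x1
    Ftx2 = F.T @ x2
    num = np.sum(x2 * Fx1, axis=0)**2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den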

Report your values for:

    • the probability p that at least one of the random samples does not contain any outliers

    • the probability α that a given point is an inlier

    • the resulting number of inliers

    • the number of attempts to find the consensus set

    • the tolerance for inliers

    • the cost threshold

    • random seed

Display figures for:

    • pair of images, where the inlier features in each of the images are indicated by a square window about the feature and a line segment is drawn from the feature to the coordinates of the corresponding feature in the other image


[ ]: from scipy.stats import chi2

def DisplayResults(F, title):
    print(title+' =')
    print(F/np.linalg.norm(F)*np.sign(F[-1,-1]))

def MSAC(pts1, pts2, thresh, tol, p):
    # Inputs:
    #    pts1 - matched feature correspondences in image 1
    #    pts2 - matched feature correspondences in image 2
    #    thresh - cost threshold
    #    tol - reprojection error tolerance
    #    p - probability that at least one of the random samples does not
    #        contain any outliers
    #
    # Output:
    #    consensus_min_cost - final cost from MSAC
    #    consensus_min_cost_model - fundamental matrix F
    #    inliers - list of indices of the inliers corresponding to input data
    #    trials - number of attempts taken to find consensus set

    """your code here"""

    trials = 0
    max_trials = np.inf
    consensus_min_cost = np.inf
    consensus_min_cost_model = np.zeros((3,3))
    inliers = np.random.randint(0, 200, size=100)

    return consensus_min_cost, consensus_min_cost_model, inliers, trials


# MSAC parameters
thresh = 0
tol = 0
p = 0
alpha = 0

tic=time.time()
cost_MSAC, F_MSAC, inliers, trials = MSAC(match1, match2, thresh, tol, p)

# choose just the inliers
xin1 = match1[:,inliers]
xin2 = match2[:,inliers]

toc=time.time()
time_total=toc-tic

# display the results
print('took %f secs'%time_total)
print('%d iterations'%trials)
print('inlier count: ',len(inliers))
print('inliers: ',inliers)
print('MSAC Cost = %.9f'%cost_MSAC)
DisplayResults(F_MSAC, 'F_MSAC')

# display the figures
plt.figure(figsize=(14,8))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
ax1.imshow(I1)
ax2.imshow(I2)
for i in range(xin1.shape[1]):
    x1,y1 = xin1[:,i]
    x2,y2 = xin2[:,i]
    ax1.plot([x1, x2],[y1, y2],'-r')
    ax1.add_patch(patches.Rectangle((x1-w/2,y1-w/2),w,w, fill=False))
    ax2.plot([x2, x1],[y2, y1],'-r')
    ax2.add_patch(patches.Rectangle((x2-w/2,y2-w/2),w,w, fill=False))
plt.show()

Final values for parameters

    • random seed =

    • p =

    • α =

    • tolerance =

    • threshold =

    • num_inliers =

    • num_attempts =

    • consensus_min_cost =

1.6    Problem 5 (Programming): Linear Estimation of the Fundamental Matrix (15 points)

Estimate the fundamental matrix F_DLT from the resulting set of inlier correspondences using the direct linear transformation (DLT) algorithm (with data normalization). Include the numerical values of the resulting F_DLT, scaled such that ||F_DLT||_Fro = 1.


[ ]: def Homogenize(x):
    # converts points from inhomogeneous to homogeneous coordinates
    return np.vstack((x,np.ones((1,x.shape[1]))))

def Dehomogenize(x):
    # converts points from homogeneous to inhomogeneous coordinates
    return x[:-1]/x[-1]

def DLT(x1, x2, normalize=True):
    # Inputs:
    #    x1 - inhomogeneous inlier correspondences in image 1
    #    x2 - inhomogeneous inlier correspondences in image 2
    #    normalize - if True, apply data normalization to x1 and x2
    #
    # Outputs:
    #    F - the DLT estimate of the fundamental matrix

    """your code here"""

    # data normalization
    if normalize:
        x1 = x1
        x2 = x2

    # data denormalization
    if normalize:
        x1 = x1
        x2 = x2

    F = np.eye(3)
    return F


# compute the linear estimate with data normalization
print ('DLT with Data Normalization')
time_start=time.time()
F_DLT = DLT(xin1, xin2, normalize=True)
time_total=time.time()-time_start

# display the resulting F_DLT, scaled with its frobenius norm
DisplayResults(F_DLT, 'F_DLT')
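The data-normalization step is usually the standard similarity transform: translate the centroid of the points to the origin and scale so the mean distance from the origin is √2; the estimate is then denormalized as F = T2ᵀ F̃ T1. A minimal sketch of the transform (a common convention, sketched here, not prescribed by the assignment):

# Sketch only: similarity transform for normalizing 2xN inhomogeneous points.
def NormalizationTransformSketch(x):
    mu = x.mean(axis=1, keepdims=True)
    d = np.sqrt(((x - mu)**2).sum(axis=0)).mean()   # mean distance from centroid
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * mu[0, 0]],
                  [0, s, -s * mu[1, 0]],
                  [0, 0, 1]])
    return T   # apply as T @ Homogenize(x)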

1.7    Problem 6 (Programming): Nonlinear Estimation of the Fundamental Matrix (70 points)

Retrieve the camera projection matrices P = [I | 0] and P′ = [M | v], where M is full rank, from F_DLT. Use the resulting camera projection matrix P′ associated with the second image and the triangulated 3D points as an initial estimate to an iterative estimation method, specifically the sparse Levenberg-Marquardt algorithm, to determine the Maximum Likelihood estimate of the fundamental matrix F = [v]× M that minimizes the reprojection error. The initial estimate of the 3D points must be determined using the two-view optimal triangulation method described in lecture (algorithm 12.1 in the Hartley & Zisserman book, but use the ray-plane intersection method for the final step instead of the homogeneous method). Additionally, you must parameterize the camera projection matrix P′ associated with the second image and the homogeneous 3D scene points that are being adjusted using the parameterization of homogeneous vectors.

Report the initial cost (i.e. cost at iteration 0) and the cost at the end of each successive iteration. Show the numerical values for the final estimate of the fundamental matrix F_LM, scaled such that ||F_LM||_Fro = 1.
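The relation F = [v]× M is a single matrix product once P′ = [M | v] is split; a minimal sketch (P2 is a hypothetical name for your 3×4 estimate of P′):

# Sketch only: build F = [v]x M from P' = [M | v].
def SkewSketch(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# M, v = P2[:, :3], P2[:, 3]   # hypothetical split of the 3x4 matrix P'
# F = SkewSketch(v) @ M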


[ ]: from scipy.linalg import block_diag

def LM(F, x1, x2, max_iters, lam):
    # Input:
    #    F - DLT estimate of the fundamental matrix
    #    x1 - inhomogeneous inlier points in image 1
    #    x2 - inhomogeneous inlier points in image 2
    #    max_iters - maximum number of iterations
    #    lam - lambda parameter
    # Output:
    #    F - Final fundamental matrix obtained after convergence

    """your code here"""

    cost = np.inf
    print ('iter %03d Cost %.9f'%(0, cost))
    for i in range(max_iters):
        print ('iter %03d Cost %.9f'%(i+1, cost))

    return F

# LM hyperparameters
lam = .001
max_iters = 10

# Run LM initialized by DLT estimate
print ('Sparse LM')
time_start=time.time()
F_LM = LM(F_DLT, xin1, xin2, max_iters, lam)
time_total=time.time()-time_start
print('took %f secs'%time_total)

# display the resulting F_LM, scaled with its frobenius norm
DisplayResults(F_LM, 'F_LM')

1.8    Problem 7 (Programming): Point to Line Mapping (10 points)

Qualitatively determine the accuracy of F_LM by mapping points in image 1 to epipolar lines in image 2. Identify three distinct corners distributed in image 1 that are not in the set of inlier correspondences, visually approximate their pixel coordinates x_{1,2,3}, and map them to epipolar lines l′_{1,2,3} = F_LM x_{1,2,3} in the second image under the fundamental matrix F_LM.

Include a figure containing the pair of images, where the three points in image 1 are indicated by a square (or circle) about the feature and the corresponding epipolar lines are drawn in image 2. Comment on the qualitative accuracy of the mapping. (Hint: each line l′_i should pass through the point x′_i in image 2 that corresponds to the point x_i in image 1.)
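Mapping the chosen points to epipolar lines is a single matrix product; a minimal sketch (the coordinates below are placeholders, not answers):

# Sketch only: each column of epi_lines is l' = F_LM x for a homogenized point x.
xchosen1 = np.array([[100., 200., 300.],
                     [150., 250., 350.]])   # placeholder pixel coordinates
epi_lines = F_LM @ Homogenize(xchosen1)     # 3x3, one line per column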

[ ]: # Store your three points in image 1 in variable xchosen1
# Store the corresponding epipolar lines in variable epi_lines
# You can modify the code to display the figures, to highlight the
# corresponding point in image 2.
# You will have to find the pixel coordinates of the corresponding point
# in image 2 manually, as we are explicitly choosing outliers (find the
# real matching point and not the one your code outputs). The epipolar
# lines should pass close by or through these points.

"""your code here"""


# display the figures
plt.figure(figsize=(28,16))
ax1 = plt.subplot(1,2,1)
ax2 = plt.subplot(1,2,2)
ax1.imshow(I1)
ax2.imshow(I2)

im_height, im_width = I1.shape[:2]
x_ax = np.linspace(0, im_width, im_width*10)
colors = ['red', 'blue', 'yellow']
for i in range(xchosen1.shape[1]):
    a, b, c = epi_lines[:, i]
    xx, yy = [], []
    for xval in x_ax:
        yval = -(a/b)*xval - c/b
        if yval > 0 and yval < im_height:   # keep only points inside the image
            xx.append(xval)
            yy.append(yval)
    x1,y1 = xchosen1[:,i]
    ax1.add_patch(patches.Rectangle((x1-w/2,y1-w/2),w,w, fill=True, color=colors[i]))
    ax2.plot(xx, yy, '-', color=colors[i])
plt.show()

"""Comment on your results here."""

[ ]: 






















