Project 3: Visual-Inertial SLAM

Problems

In square brackets are the points assigned to each part.

    1. Implement visual-inertial simultaneous localization and mapping (SLAM) using an extended Kalman filter (EKF) in Python. You are provided with synchronized measurements from an inertial measurement unit (IMU) and a stereo camera, as well as the intrinsic camera calibration and the extrinsic calibration between the two sensors, specifying the transformation from the left camera frame to the IMU frame. The data includes:

        ◦ IMU Measurements: linear velocity $v_t \in \mathbb{R}^3$ and angular velocity $\omega_t \in \mathbb{R}^3$ measured in the body frame of the IMU.

        ◦ Time Stamps: time stamps $\tau_t$ in UNIX standard seconds since the epoch (January 1, 1970).

        ◦ Intrinsic Calibration: stereo baseline $b$ (in meters) and camera calibration matrix:

$$K = \begin{bmatrix} f s_u & 0 & c_u \\ 0 & f s_v & c_v \\ 0 & 0 & 1 \end{bmatrix}$$

        ◦ Extrinsic Calibration: the transformation ${}_I T_C \in SE(3)$ from the left camera frame to the IMU frame.
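
In the standard rectified-stereo projection model, the intrinsics $K$ and baseline $b$ are commonly combined into a single $4 \times 4$ stereo calibration matrix that maps a point in the left-camera frame to the stacked left/right pixel coordinates. A minimal sketch of that construction (the function name and layout are illustrative assumptions, not part of the provided data):

```python
import numpy as np

def stereo_calibration_matrix(K, b):
    """Combine intrinsics K (3x3) and baseline b (meters) into the 4x4
    stereo calibration matrix Ks, so that a stereo observation
    [uL, vL, uR, vR]^T = (1/z) * Ks @ [x, y, z, 1]^T
    for a point (x, y, z) expressed in the left-camera frame."""
    fsu, fsv = K[0, 0], K[1, 1]
    cu, cv = K[0, 2], K[1, 2]
    return np.array([
        [fsu, 0.0, cu, 0.0],
        [0.0, fsv, cv, 0.0],
        [fsu, 0.0, cu, -fsu * b],   # right camera: shifted by the baseline
        [0.0, fsv, cv, 0.0],
    ])
```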

Implement an EKF prediction step based on SE(3) kinematics with IMU measurements and an EKF update step based on the stereo-camera observation model with feature observations to perform localization and mapping. In detail, you should complete the following tasks:

    (a) [15 pts] IMU Localization via EKF Prediction: Implement the EKF prediction step based on the SE(3) kinematics and the linear and angular velocity measurements to estimate the pose $T_t \in SE(3)$ of the IMU over time $t$.
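
A minimal sketch of such a prediction step, using the nominal kinematics $T_{t+1} = T_t \exp(\tau \hat{u}_t)$ with twist $u_t = [v_t; \omega_t]$ and propagating a $6 \times 6$ pose covariance via the adjoint; the function names and the motion-noise covariance $W$ are illustrative assumptions, not part of the provided data:

```python
import numpy as np
from scipy.linalg import expm

def hat(x):
    """3x3 skew-symmetric matrix such that hat(x) @ y = np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def twist_hat(v, w):
    """4x4 se(3) matrix of the twist u = [v; w]."""
    xi = np.zeros((4, 4))
    xi[:3, :3] = hat(w)
    xi[:3, 3] = v
    return xi

def twist_curly(v, w):
    """6x6 'curly hat' (adjoint) matrix of the twist u = [v; w]."""
    ad = np.zeros((6, 6))
    ad[:3, :3] = hat(w)
    ad[:3, 3:] = hat(v)
    ad[3:, 3:] = hat(w)
    return ad

def ekf_predict(T, Sigma, v, w, tau, W):
    """One EKF prediction step on SE(3).
    T: 4x4 IMU pose, Sigma: 6x6 pose covariance,
    v, w: measured linear/angular velocity, tau: time step,
    W: 6x6 motion-noise covariance (a tuning assumption)."""
    T_next = T @ expm(tau * twist_hat(v, w))
    F = expm(-tau * twist_curly(v, w))   # linearized error dynamics
    Sigma_next = F @ Sigma @ F.T + W
    return T_next, Sigma_next
```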

    (b) [15 pts] Landmark Mapping via EKF Update: assume that the predicted IMU trajectory from part (a) above is correct and focus on estimating the landmark positions $m \in \mathbb{R}^{3 \times M}$ of visual features observed in the images. The pixel coordinates $z_t \in \mathbb{R}^{4 \times M}$ of detected visual features with correspondences between the left and the right camera frames are provided in the data (see Fig. 1). The landmarks $m_i$ that are not observable at time $t$ have an associated measurement of:

$$z_{t,i} = \begin{bmatrix} -1 & -1 & -1 & -1 \end{bmatrix}^\top$$




Figure 1: Visual features matched across the left-right camera frames (left) and across time (right).


There are many features in the dataset, but you do not need to use all of them to obtain good results; think of ways to keep the computational complexity manageable. You should implement an EKF with the unknown landmark positions $m \in \mathbb{R}^{3 \times M}$ as its state and perform an EKF update step after every visual observation $z_t$ in order to keep track of the mean and covariance of $m$. Note that we assume the landmarks are static, so it is not necessary to implement a prediction step for the landmarks. Since the sensor does not move sufficiently along the z-axis, the estimate of the z coordinate of the landmarks will not be very good. This is expected and you should not worry about it; focus on estimating the landmark xy coordinates well.
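
One common form of the stereo observation model here is $z_{t,i} = K_s \, \pi({}_C T_I \, T_t^{-1} \, \underline{m}_i) + v_{t,i}$, where $\pi(q) = q/q_3$, $\underline{m}_i$ is the landmark in homogeneous coordinates, and ${}_C T_I = ({}_I T_C)^{-1}$. A minimal per-landmark update sketch under that assumption (function names and the measurement-noise covariance $V$ are illustrative; Ks is the stereo calibration matrix from the earlier sketch):

```python
import numpy as np

def pi_fn(q):
    """Projection: divide by the third coordinate."""
    return q / q[2]

def dpi_dq(q):
    """4x4 Jacobian of the projection pi evaluated at q."""
    return np.array([
        [1.0, 0.0, -q[0] / q[2], 0.0],
        [0.0, 1.0, -q[1] / q[2], 0.0],
        [0.0, 0.0, 0.0,          0.0],
        [0.0, 0.0, -q[3] / q[2], 1.0],
    ]) / q[2]

def landmark_update(m, Sigma_m, z, T_imu, C_T_I, Ks, V):
    """EKF update of one landmark mean m (3,) and covariance Sigma_m (3x3)
    from one stereo observation z (4,), treating the IMU pose T_imu as known.
    C_T_I: IMU-to-camera transform, i.e. the inverse of the provided I_T_C."""
    P = np.hstack([np.eye(3), np.zeros((3, 1))])  # projection R^4 -> R^3
    m_h = np.append(m, 1.0)                       # homogeneous landmark
    T_inv = np.linalg.inv(T_imu)
    q = C_T_I @ T_inv @ m_h                       # landmark in camera frame
    z_hat = Ks @ pi_fn(q)                         # predicted stereo pixels
    H = Ks @ dpi_dq(q) @ C_T_I @ T_inv @ P.T      # 4x3 measurement Jacobian
    S = H @ Sigma_m @ H.T + V
    Kg = Sigma_m @ H.T @ np.linalg.solve(S, np.eye(4))  # Kalman gain
    m_new = m + Kg @ (z - z_hat)
    Sigma_new = (np.eye(3) - Kg @ H) @ Sigma_m
    return m_new, Sigma_new
```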

        (c) [20 pts] Visual-Inertial SLAM: combine the IMU prediction step from part (a) with the landmark update step from part (b) and implement an IMU update step based on the stereo-camera observation model to obtain a complete visual-inertial SLAM algorithm.
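
For the IMU update in part (c), a standard approach is to differentiate the same observation model with respect to a small pose perturbation $T_t \leftarrow T_t \exp(\hat{\delta})$, which gives the Jacobian $H = -K_s \, d\pi(q) \, {}_C T_I \, (T_t^{-1}\underline{m})^{\odot}$. A minimal sketch reusing the helpers from the sketches above (again an illustrative assumption, not the only valid formulation):

```python
import numpy as np
from scipy.linalg import expm
# reuses hat, twist_hat, pi_fn, dpi_dq from the sketches above

def circle_dot(s):
    """4x6 'circle dot' operator for homogeneous s = [x, y, z, 1],
    satisfying twist_hat(d[:3], d[3:]) @ s = circle_dot(s) @ d for d in R^6."""
    out = np.zeros((4, 6))
    out[:3, :3] = np.eye(3)
    out[:3, 3:] = -hat(s[:3])
    return out

def imu_update(T, Sigma, z, m, C_T_I, Ks, V):
    """EKF update of the IMU pose T (4x4) and covariance Sigma (6x6)
    from one stereo observation z (4,) of a landmark with mean m (3,)."""
    m_h = np.append(m, 1.0)
    p = np.linalg.inv(T) @ m_h                    # landmark in the IMU frame
    q = C_T_I @ p                                 # landmark in the camera frame
    z_hat = Ks @ pi_fn(q)                         # predicted stereo pixels
    H = -Ks @ dpi_dq(q) @ C_T_I @ circle_dot(p)   # 4x6 Jacobian w.r.t. pose
    S = H @ Sigma @ H.T + V
    Kg = Sigma @ H.T @ np.linalg.solve(S, np.eye(4))
    delta = Kg @ (z - z_hat)                      # 6-dim pose correction
    T_new = T @ expm(twist_hat(delta[:3], delta[3:]))
    Sigma_new = (np.eye(6) - Kg @ H) @ Sigma
    return T_new, Sigma_new
```

For the full SLAM filter, the pose and landmark means and covariances would be maintained jointly (a $(6 + 3M)$-dimensional state), with the two Jacobians above stacked into one measurement matrix.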

    2. Write a project report describing your approach to the visual-inertial SLAM problem. Please use the IEEE conference template (https://www.ieee.org/conferences_events/conferences/publishing/templates.html). Your report should include the following sections:

            ▪ [5 pts] Introduction: discuss what the problem is, why it is important, and present a brief overview of your approach.

            ▪ [10 pts] Problem Formulation: state the problem you are trying to solve in mathematical terms. This section should be short and clear and should rigorously define the quantities you are interested in, but should not present your solution.

            ▪ [20 pts] Technical Approach: describe your approach to visual-inertial localization and mapping.

            ▪ [15 pts] Results: present your results and discuss them: what worked, what did not, and why. Make sure your results include plots clearly showing the estimated robot trajectory as well as the estimated 2-D positions of the visual features. If you have videos, do include them in the zip file and refer to them in your report!
