Objectives
In this lab, students will localize a mobile robot using three different localization algorithms:
• Dead reckoning
• Extended Kalman filter
• Particle filter
This lab is designed to give some understanding of how localization works and how various factors can contribute to the difficulty of this task. Using various onboard sensors and a known map, the goal is to estimate the robot's pose (x, y, θ). Accurate pose estimation is critical for most mobile robot tasks, and choosing the "best" (or most appropriate) localization algorithm depends on the specific application.
Required Code
• feedbackLin.m
• depthPredict.m
• readStoreSensorData.m
• intersectPoint.m (if used for depthPredict)
• limitCmds.m
• hGPS.m
• backupBump.m
• HjacGPS.m
• HjacDepth.m
• visitWaypoints.m
• EKF.m
• integrateOdom.m
• PF.m
• GjacDiffdrive.m
• motionControl.m
Required plots (to be shown to the TAs at the beginning of the lab)
• A plot of the truthPose, deadReck and ekfMu trajectories for a run of the simulator with GPS data (can be the same plot as in 4(d) in HW 4)
• A plot of the truthPose, deadReck and ekfMu trajectories for a run of the simulator with depth data (can be the same plot as in 5(a) in HW 4)
• A plot of the truthPose and the "best" particle trajectories for a run of the simulator with depth data (can be the same plot as in 6(d) in HW 4)
• Lab Manual
1.1 Set-up - Remote student
(a) Join the lab Zoom session (link on Canvas).
(b) Open the Twitch stream: https://www.twitch.tv/cu_mae_amr
(c) (One remote group member) Create and share a Google Drive or Box folder with the rest of the group. Add subfolders for each group member to put their code in.
(d) Put all the required files in the shared folder.
(e) Put the file lab2WallMap.mat from Canvas in the shared folder.
1.2 Station Set-up - In-Person student
(a) Join the lab Zoom session (link on Canvas) on the lab computer and share your screen.
(b) Open the shared folder created by one of the online group members. Add your files.
(c) Create a local folder on the lab computer and copy all the files there.
(d) Open MATLAB and change its working directory to the folder that contains your files. Make sure to periodically save data to your online folder.
(e) Unplug your iRobot Create. Make sure to only pick up the robot from the bottom and not the sensor platform. Be careful not to hit the markers on the top of the robot. Put it on the floor next to your lab station.
(f) Take note of the name of your robot.
(g) Turn on the robot by pressing the power button. The Raspberry Pi takes about 20-30 seconds to boot.
(h) In your MATLAB Command Window, run Robot = CreatePiInit('robotName') where robotName is the robot name from step (f). The initialization process creates the variable Robot, which contains the robot name and the port configurations for interfacing with the Create and the added sensors; it has five elements:
◦ Robot.Name contains the robot name.
◦ Robot.OL_Client is used for getting the robot pose ground truth from the OptiTrack system.
◦ Robot.CreatePort is used for sending commands to and reading sensors from the robot.
◦ Robot.DistPort is used for getting depth information from the RealSense camera.
◦ Robot.TagPort is used for getting tag information from the RealSense camera.
(i) Check that you have connected to the robot properly by running BeepRoomba(Robot.CreatePort). You should hear your robot beep.
(j) Put your robot on the field. Make sure you can access the robot's localization information by running [x,y,theta] = OverheadLocalizationCreate(Robot).
(k) Put the robot in front of a tag, at least one foot away. Run RealSenseTag(Robot.TagPort) and RealSenseDist(Robot.DistPort). Make sure you can get tag and depth information.
(l) If any of steps (i)-(k) fail, disconnect from the robot (run CreatePiShutdown(Robot)), shut it down, close and restart MATLAB, turn the robot on, and go back to step (h).
(m) Assume that the RealSense offset (the location of the sensor in the robot-fixed frame) is (0, 8 cm), i.e. the x-axis offset is 0 and the y-axis offset is 8 cm.
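For concreteness, here is a minimal sketch of how this offset could be applied when converting a point measured in the sensor frame into the global frame. It assumes the sensor frame is aligned with the robot frame; the variable names and example values are illustrative.

    % Transform a point from the sensor frame into the global frame.
    % Assumes the sensor frame is aligned with the robot-fixed frame and
    % offset by (0, 0.08 m) in robot coordinates, per step (m).
    pose    = [1.0; 0.5; pi/4];             % example robot pose [x; y; theta]
    pSensor = [2.0; 0.0];                   % example point in the sensor frame
    offset  = [0; 0.08];                    % RealSense offset in the robot frame
    Rot     = [cos(pose(3)) -sin(pose(3));
               sin(pose(3))  cos(pose(3))]; % robot-to-global rotation
    pGlobal = pose(1:2) + Rot*(offset + pSensor);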
Important:
• If the control function exits with an error, make sure to stop the robot by typing in the command window: SetFwdVelAngVelCreate(Robot.CreatePort, 0, 0)
• When you are done working with the robot, or you wish to restart MATLAB or the connection to the robot, first run CreatePiShutdown(Robot) to disconnect properly from the robot.
1.3 Extended Kalman Filter
(a) Load the map lab2WallMap.mat into the MATLAB workspace. The variable map contains the coordinates of the walls, where each row specifies the endpoints of a wall: [x1, y1, x2, y2].
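As a quick sanity check before driving the robot, you can plot the walls directly from this format (a minimal sketch):

    % Plot each wall segment; each row of map is [x1, y1, x2, y2].
    load('lab2WallMap.mat');                % loads the variable map
    figure; hold on; axis equal;
    for i = 1:size(map, 1)
        plot(map(i, [1 3]), map(i, [2 4]), 'k-', 'LineWidth', 2);
    end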
(b) Configure your control program in the following way:
◦ Utilize deterministic controls that are independent of the robot pose (backupBump.m, for example; NOT visitWaypoints.m).
◦ Compute a dead reckoning pose estimate.
◦ Compute an extended Kalman filter pose estimate using depth sensors. How are you dealing with the delays in the depth information?
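One possible way to handle the delay (offered as an assumption, not a prescribed method) is to update the filter with the depth reading whose timestamp is closest to the latest odometry timestamp. The column layout shown here (time in column 1) is also an assumption; adjust it to how your readStoreSensorData.m stores data.

    % Pick the depth measurement closest in time to the latest odometry.
    global dataStore;
    tOdo     = dataStore.odometry(end, 1);        % assumed: time in column 1
    [~, idx] = min(abs(dataStore.rsdepth(:, 1) - tOdo));
    zDepth   = dataStore.rsdepth(idx, 2:end)';    % assumed: depths after time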
(c) Place your robot on the field: place it such that it is at least one foot away from all walls. Initialize both the dead reckoning pose (dataStore.deadReck) and the filter estimate (dataStore.ekfMu) to the true robot pose (by calling overhead localization). Set the initial extended Kalman filter covariance (dataStore.ekfSigma) to [0.05, 0, 0; 0, 0.05, 0; 0, 0, 0.1].
(d) Set R (process noise covariance matrix) and Q (measurement noise covariance matrix) to "reasonable" values. (For example, for depth, you could use the standard deviation σ you estimated for your Lab #1 post-lab assignment and assign the diagonal elements of Q to be σ².)
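To make the prediction/update structure concrete, here is a minimal sketch of one generic EKF iteration. This is not the required EKF.m interface; the handles g, Gjac, h, and Hjac stand in for your motion model, measurement model, and their Jacobians (e.g. integrateOdom.m, GjacDiffdrive.m, depthPredict.m, and HjacDepth.m).

    function [mu, Sigma] = ekfStep(mu, Sigma, u, z, g, Gjac, h, Hjac, R, Q)
    % One extended Kalman filter iteration (sketch).
    %   mu, Sigma : prior mean (3x1) and covariance (3x3)
    %   u, z      : control input and measurement vector
    %   g, h      : process and measurement models (function handles)
    %   Gjac,Hjac : their Jacobians; R, Q : noise covariances
    muBar    = g(mu, u);                          % predict mean
    G        = Gjac(mu, u);                       % linearize the dynamics
    SigmaBar = G*Sigma*G' + R;                    % predict covariance
    H        = Hjac(muBar);                       % linearize the measurement model
    K        = SigmaBar*H' / (H*SigmaBar*H' + Q); % Kalman gain
    mu       = muBar + K*(z - h(muBar));          % correct mean with innovation
    Sigma    = (eye(3) - K*H)*SigmaBar;           % correct covariance
    end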
(e) We highly recommend you plot the robot's pose estimate from dataStore.ekfMu and the 1σ ellipse from dataStore.ekfSigma in real time, to make sure your code is working as expected (although it's not required).
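One way to draw that ellipse (a sketch; the helper name is illustrative, and mu is taken as a column vector):

    function plotCovEllipse(mu, Sigma)
    % Plot the 1-sigma ellipse of the x/y block of a pose covariance.
    %   mu    : pose mean, at least [x; y]
    %   Sigma : pose covariance; only the upper-left 2x2 block is used
    th   = linspace(0, 2*pi, 60);
    circ = [cos(th); sin(th)];              % unit circle
    A    = chol(Sigma(1:2, 1:2), 'lower');  % maps the unit circle to the ellipse
    pts  = A*circ + mu(1:2);                % scale, rotate, and translate
    plot(pts(1,:), pts(2,:), 'r-');
    end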
(f) Run your control program for 1-2 minutes. Make sure to use the limitCmds.m function with a reasonable value. If you are planning to do the extra-credit Beacon EKF, make sure the robot sees a few beacons as it is moving.
(g) Access the sensor data in the MATLAB Command Window by typing global dataStore. Make sure all the sensor data has been properly recorded, and save it (make sure to save R and Q as well). In dataStore, you should have non-empty fields: truthPose, rsdepth, odometry, bump, beacon, deadReck, ekfMu, and ekfSigma.
(h) Repeat (b)-(g) for each member of your group, choosing a different value of Q (measurement noise covariance matrix) for each member.
(i) EKF with "GPS" data: reconfigure your control program such that the EKF uses GPS (noisy truthPose) instead of the depth sensor. Repeat (b)-(g) to collect data for a run with GPS data. You only need to run this filter once.
1.4 Particle Filter
(a) Configure your control program in the following way:
◦ Utilize deterministic controls that are independent of the robot pose (backupBump.m, for example; NOT visitWaypoints.m).
◦ Compute a dead reckoning pose estimate.
◦ Compute a particle filter pose estimate using depth sensors and 50 particles. How are you dealing with the delays in the depth information?
(b) Place your robot on the field: place it such that it is at least one foot away from all walls. Initialize both pose estimates (dead reckoning and particle filter) to the true robot pose. Initialize every particle in the filter to the same pose.
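For reference, here is a minimal sketch of one particle filter cycle with low-variance resampling. This is not the required PF.m interface; g stands in for your motion model (e.g. integrateOdom.m, assumed here to take and return a 1x3 pose), and likelihood is a placeholder for your depth-based measurement likelihood.

    function [P, w] = pfStep(P, w, u, z, g, likelihood, R)
    % One particle filter cycle: propagate, reweight, resample (sketch).
    %   P : N x 3 particle poses, w : N x 1 normalized weights
    N = size(P, 1);
    L = chol(R, 'lower');                         % for sampling process noise
    for i = 1:N
        P(i,:) = g(P(i,:), u) + (L*randn(3,1))';  % propagate with noise
        w(i)   = w(i) * likelihood(z, P(i,:));    % weight by measurement fit
    end
    w = w / sum(w);                               % normalize weights
    % Low-variance resampling
    newP = P;  r = rand/N;  c = w(1);  idx = 1;
    for m = 1:N
        U = r + (m-1)/N;                          % evenly spaced sample points
        while U > c
            idx = idx + 1;  c = c + w(idx);
        end
        newP(m,:) = P(idx,:);
    end
    P = newP;  w = ones(N,1)/N;                   % reset to uniform weights
    end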
(c) We highly recommend you plot the particles in real-time to make sure your code is working the way you think it should (although it’s not required).
(d) Run your control program for 2 minutes.
(e) Access the sensor data in the MATLAB Command Window by typing global dataStore. Make sure at least 60 time stamps have been recorded for the particle filter (i.e. the filter ran at least 60 times). Depending on how efficiently your particle filter runs, you may need to run the program for a longer period of time.
(f) Make sure all the sensor data has been properly recorded, and save it to file (be sure to save R and Q as well). In dataStore, you should have non-empty fields truthPose, rsdepth, odometry, bump, deadReck, and particles.
(g) Repeat for each member of your group with a different particle set size (e.g. 100, 200, 500).
1.5 Navigate while Localizing - Optional
In the final competition, you will need to localize while moving. If you have time at the end of the lab, try running one of the filters while trying to reach points in the workspace.
(a) Configure your control program in the following way:
◦ Change your control function to visitWaypoints.m.
◦ Decide on two waypoints the robot should reach.
◦ Compute a localization solution based on one of the previous sections (i.e. EKF with depth or PF with depth; you can also try EKF with GPS, but that information will not be available at the final competition).
◦ Make sure your calculation of the control uses the localization solution from the filter and NOT truthPose (see the sketch below)!
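A minimal sketch of that control step is below. The visitWaypoints.m and limitCmds.m signatures and the dataStore column layout are assumptions here; adapt them to your own implementations.

    % Drive toward waypoints using the filter estimate, NOT truthPose.
    global dataStore;
    pose = dataStore.ekfMu(end, 2:4);                 % assumed layout [time x y theta]
    [cmdV, cmdW] = visitWaypoints(pose, waypoints);   % assumed signature
    [cmdV, cmdW] = limitCmds(cmdV, cmdW, 0.2, 0.13);  % example limit values
    SetFwdVelAngVelCreate(Robot.CreatePort, cmdV, cmdW);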
(b) Place your robot on the field: place it such that it is at least one foot away from all walls. Initialize both the dead reckoning pose (dataStore.deadReck) and the filter estimate. Choose appropriate noise parameters.
(c) We highly recommend you plot the robot’s pose estimate in real-time, to make sure your code is working as expected.
(d) Run your control program until you reach the waypoints, or until your robot looks hopelessly lost. Make sure to use the limitCmds.m function with a reasonable value.
1.6 Clean up
When you are done collecting all the data and after you make sure you have everything that you need for the lab report:
(a) Run CreatePiShutdown(Robot) to disconnect from the robot.
(b) Turn off the robot, return it to the lab station, and plug it in.
(c) Make sure to leave your lab station clean, without any papers or other items. Follow COVID-19 protocols for sanitizing your station.
• Post-Lab Assignment
Remember to submit your assignment as a group on Gradescope. To form a group:
1. One individual from the group submits the PDF on Gradescope.
2. When you click on your submitted assignment, there will be a section denoted by "Group" below the assignment name. Click on "View or edit group."
3. Add student(s) by selecting their name(s) from the drop-down menu under "Add Student."
2.1 Time delays (10 points)
1. In the lab, what was the longest delay you encountered with the depth information?
2. In the following sections you will plot and analyze your localization solutions. How are you taking the delay in depth information into account when performing pose estimation?
2.2 Extended Kalman Filter (80 points)
(a) Plot each run from Part 1.3 (one run for GPS, one run per group member for depth, each run in a different figure). On the same plot, display the environment walls, the true robot pose (from dataStore.truthPose), the integrated robot pose (from dataStore.deadReck), and the estimated pose (from dataStore.ekfMu). Also plot the 1σ ellipse (for the x/y pose only) from dataStore.ekfSigma; you can plot this at only a few points (rather than at every point) so it is easier to visualize.
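A minimal plotting sketch follows; the column layout of the dataStore fields and the row-flattened 3x3 storage of ekfSigma are assumptions, and plotCovEllipse is the helper sketched in Section 1.3(e).

    % Overlay walls, trajectories, and a few 1-sigma ellipses.
    figure; hold on; axis equal;
    for i = 1:size(map, 1)
        plot(map(i, [1 3]), map(i, [2 4]), 'k-');     % environment walls
    end
    plot(dataStore.truthPose(:,2), dataStore.truthPose(:,3), 'g-');  % truth
    plot(dataStore.deadReck(:,2),  dataStore.deadReck(:,3),  'b--'); % dead reckoning
    plot(dataStore.ekfMu(:,2),     dataStore.ekfMu(:,3),     'r-');  % EKF estimate
    for k = 1:20:size(dataStore.ekfMu, 1)             % every 20th time step
        Sig = reshape(dataStore.ekfSigma(k,:), 3, 3); % assumed row-flattened 3x3
        plotCovEllipse(dataStore.ekfMu(k, 2:3)', Sig);
    end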
(b) For the runs using depth information, what did you observe? Was the EKF able to maintain a reasonable pose estimate? Why or why not?
(c) For the runs using GPS, what did you observe? Was the EKF able to maintain a reasonable pose estimate? Why or why not?
(d) Choose one run with the depth information. Using the same initialization, R, and Q for each team member, estimate the pose of the robot over time using each member's EKF code (i.e. run the same data through the different EKF implementations). Plot the estimated trajectories and the truth pose on the same figure. Were all trajectories the same? Why or why not?
(e) Choose one run with the depth information. Initialize the EKF estimate to an incorrect initial pose. Run the EKF again with the recorded data. What do you observe? Was the robot ever able to recover its true pose? Why or why not?
(f) For the run with the GPS information, initialize the EKF estimate to an incorrect initial pose. Run the EKF again with the recorded data. What do you observe? Was the robot ever able to recover its true pose? Why or why not?
(g) How did your choice of Q affect your pose estimate? Was this expected?
(h) How does the EKF estimate compare to dead reckoning? Is this what you expected?
(i) What can you say about the strengths and weaknesses of dead reckoning? Under what circumstances might dead reckoning be the best choice for localization?
(j) What can you say about the strengths and weaknesses of the EKF? Under what circumstances might an EKF be the best choice for localization?
(k) What differences did you observe between the simulated results from the homework and those from the physical robot? Is this what you expected?
2.3 Particle Filter (40 points)
(a) For each member's data, plot the initial particle set, final particle set, true robot trajectory, and the environment walls.
(b) For each member's data, choose the final particle with the highest weight and plot its trajectory. How does it compare to the true trajectory?
(c) In the homework, you were asked to describe a method for extracting a single pose estimate from a particle filter other than simply taking the highest-weighted particle. Describe your method and plot the corresponding trajectory. How does this compare to the true trajectory? Does this method perform better than choosing the highest-weighted particle?
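For example, one common alternative (offered as an illustration, not as the method you must use) is the weighted mean of the particle set, with a circular mean for the heading:

    % Weighted-mean pose from particles P (N x 3) and normalized weights w (N x 1).
    xyEst   = sum(w .* P(:, 1:2), 1);                    % weighted mean of x, y
    thEst   = atan2(w' * sin(P(:,3)), w' * cos(P(:,3))); % circular mean of theta
    poseEst = [xyEst, thEst];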
(d) Assume that the final particle set can be represented by a single Gaussian distribution (just x and y; ignore θ). Plot the particles and the Gaussian pdf (using mesh.m, contour.m, or surf.m). How well does a single Gaussian represent the final particle set?
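A minimal sketch of the fit and plot, where P is the final N x 3 particle array (the pdf is evaluated by hand to avoid a toolbox dependency):

    % Fit a 2-D Gaussian to the final x/y particles and contour its pdf.
    xy   = P(:, 1:2);
    muG  = mean(xy, 1);                        % sample mean (1 x 2)
    SigG = cov(xy);                            % sample covariance (2 x 2)
    [X, Y] = meshgrid(linspace(min(xy(:,1))-0.5, max(xy(:,1))+0.5, 80), ...
                      linspace(min(xy(:,2))-0.5, max(xy(:,2))+0.5, 80));
    D = [X(:) Y(:)] - muG;                     % deviations from the mean
    Z = exp(-0.5*sum((D/SigG).*D, 2)) / (2*pi*sqrt(det(SigG)));
    contour(X, Y, reshape(Z, size(X))); hold on;
    plot(xy(:,1), xy(:,2), 'k.');              % overlay the particles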
(e) What are the strengths and weaknesses of using a particle lter? Under what circumstances might a particle lter be the best choice for localization?
(f) How well did the particle filter estimate the true robot trajectory, compared to the EKF? Why?
(g) What differences did you observe between the simulated results from the homework and those from the physical robot? Were these differences expected?
2.4 Beacon EKF (Extra Credit - 50 points)
(a) Adjust your EKF function to use beacons as the measurement. This will involve writing a function hBeacon.m to predict the location of a beacon, given a position in the map and a set of known beacon positions. The beacon positions are provided in lab2Beacon.mat as the variable beaconLoc (each row gives [x, y, tagNum] for a single beacon). You will also need to find the Jacobian H_t^beacon at a particular x (either analytically or using finite differences). Explain how your group calculated h(x_t) and H_t^beacon.
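If you take the finite-difference route, a generic forward-difference sketch looks like the following (the function name and step size are illustrative; h is assumed to return a column vector):

    function H = numJac(h, x, delta)
    % Forward-difference Jacobian of a measurement function h at state x.
    if nargin < 3, delta = 1e-6; end
    z0 = h(x);                                 % nominal measurement
    H  = zeros(numel(z0), numel(x));
    for j = 1:numel(x)
        xp    = x;
        xp(j) = xp(j) + delta;                 % perturb one state component
        H(:, j) = (h(xp) - z0) / delta;        % column j of the Jacobian
    end
    end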
(b) Using the stored data (specifically, the odometry and beacon fields) from one of your EKF runs in the lab, initialize the estimated position to the truth pose and run your beacon EKF on the stored data to obtain a series of estimated pose values. What values did you use for Q and R? Justify your choices.
(c) Plot your estimated pose from the beacon EKF, along with the estimated pose from your original EKF (using either GPS or depth, depending on which data set you chose), and the "true" pose of the robot. For the final pose estimate, plot the 1σ ellipse for the beacon EKF and for the original EKF (only do this for the final time step).
(d) Comment on the differences in performance for the two different EKFs. Which performed better, and why? How would changing your Q matrix affect the results?
2.5 Offline Particle Filter (Extra Credit - 15 points)
(a) Using one of the data sets obtained for the particle filter in the lab (specifically, the stored data for depth and odometry), re-run your particle filter with a much larger number of particles (e.g. 5000 or 10000) and obtain a new series of particle estimates. Plot the new position estimates (based on the highest-weighted particle at each time step), along with the estimate from your original particle filter in the lab (again, based on the highest-weighted particle), and the "true" position of the robot (from dataStore.truthPose).
(b) Comment on the differences between the two runs of the particle filter. Did one run significantly slower than the other? Why or why not? Did one produce a better estimate than the other? Why or why not?
2.6 Lab preparation (mandatory, not graded)
(a) Which robot did you use?
(b) For the code you wrote as part of Homework 4, were there any coding errors that were not discovered when using the simulator?
(c) How much time did you spend debugging your code in the lab?
(d) Did you observe any behaviors in the lab that you did not expect, or that did not match the simulated behaviors?
(e) What was the contribution of each group member to the lab and the report?