Lab 3: Occupancy Grid Mapping
Objectives
In this lab, students will generate an occupancy grid map using different types of sensors. This lab is designed to give some understanding of how mapping works and how various factors can contribute to the difficulty of this task. The robot is equipped with a depth sensor that measures the distance to obstacles and bump sensors to detect collisions. The overhead localization system provides the truth pose in [x, y, θ].
Required Code
• freedriveProgram.m
• readStoreSensorData.m
• driveArrows.m
• logOddsBump.m
• driveArrows.fig
• limitCmds.m
• logOddsDepth.m
Prelab - Creating control code for the lab
Edit the provided file freedriveProgram.m. Add your occupancy grid calculations (including map initialization and necessary logging) and plotting. We highly recommend that you plot the robot trajectory as it is moving, especially given the delays in the Twitch stream.
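The sketch below shows one way to lay this out inside freedriveProgram.m: a log-odds grid initialized to the uniform prior and a trajectory plot refreshed every loop iteration. It is only a minimal sketch; the column layout assumed for datastore.truthPose ([time x y theta]) is an assumption, so adjust it to match your own readStoreSensorData.m.

% Minimal sketch (assumptions noted): initialize a log-odds occupancy grid
% and refresh a trajectory plot inside the control loop of freedriveProgram.m.
NxCells = 16;  NyCells = 28;            % grid size from Section 1.4
xLim = [-2 2]; yLim = [-3.5 3.5];       % environment boundaries [m]
logOddsGrid = zeros(NyCells, NxCells);  % l0(occ) = 0  <=>  p0(occ) = 0.5

figure(2); clf; hold on                 % separate figure so the GUI stays usable
hTraj = plot(NaN, NaN, 'b.-');          % trajectory handle, updated each loop
axis equal; xlim(xLim); ylim(yLim);
xlabel('x [m]'); ylabel('y [m]');

% ... inside the control loop, after readStoreSensorData updates datastore:
global datastore
if isfield(datastore, 'truthPose') && ~isempty(datastore.truthPose)
    % assumed layout: datastore.truthPose rows are [time x y theta]
    set(hTraj, 'XData', datastore.truthPose(:,2), ...
               'YData', datastore.truthPose(:,3));
    drawnow limitrate                   % keep the plot responsive
end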
Required plots (to be shown to the TAs at the beginning of the lab)
• A plot of the Bump occupancy map from Homework 5.
• A plot of the Depth occupancy map from Homework 5.
1 Lab Manual
1.1 Set-up - Remote student
(a) Join the lab Zoom session (link on Canvas).
(b) Open the Twitch stream: https://www.twitch.tv/cu_mae_amr (link also on Canvas)
(c) (One remote group member) create and share a Google Drive or Box folder with the rest of the group. Add subfolders for each group member to put their code in.
(d) Put all the required files in the shared folder.
(e) Put the file lab3Map.mat from Canvas in the shared folder.
1.2 Station Set-up - In-Person student
(a) Join the lab Zoom session (link on Canvas) on the lab computer and share your screen.
(b) Open the shared folder created by one of the online group members. Add your files.
(c) Create a local folder on the lab computer and copy all the files there.
(d) Open Matlab and change the working directory of Matlab to be the folder that contains your files. Make sure to periodically save data to your online folder.
(e) Unplug your iRobot Create. Make sure to only pick up the robot from the bottom and not the sensor platform. Be careful not to hit the markers on the top of the robot. Put it on the floor next to your lab station.
(f) Take note of the name of your robot.
(g) Turn on the robot by pressing the power button. The Raspberry Pi takes about 20-30 seconds to boot.
(h) In your MATLAB Command Window, run Robot = CreatePiInit('robotName') where robotName is the robot name from step (f). The initialization process creates the variable Robot, which contains the port configurations for interfacing with the Create and the added sensors, as well as the robot name; it has five elements:
◦ Robot.Name contains the robot name.
◦ Robot.OL_Client is used for getting the robot pose ground truth from the Optitrack system.
◦ Robot.CreatePort is used for sending commands to and reading sensors from the robot.
◦ Robot.DistPort is used for getting depth information from the RealSense camera.
◦ Robot.TagPort is used for getting tag information from the RealSense camera.
(i) Check that you have connected to the robot properly by running BeepRoomba(Robot.CreatePort). You should hear your robot beep.
(j) Put your robot on the field. Make sure you can access the robot’s localization information by running [x,y,theta] = OverheadLocalizationCreate(Robot).
(k) Put the robot in front of a tag, at least one foot away. Run RealSenseTag(Robot.TagPort) and RealSenseDist(Robot.DistPort). Make sure you can get tag and depth information.
(l) If any of steps (i)–(k) fail, disconnect from the robot (run CreatePiShutdown(Robot)), shut it down, close Matlab, restart Matlab, turn the robot on, and go back to step (h). (A minimal sketch of the full connection check appears after the Important notes below.)
(m) Assume that the RealSense offset (the location of the sensor in the robot-fixed frame) is (0, 8 cm), i.e. the x-axis offset is 0 and the y-axis offset is 8 cm.
Important:
◦ If the control function exits with an error, make sure to stop the robot by typing in the command window: SetFwdVelAngVelCreate(Robot.CreatePort, 0, 0)
◦ When you are done working with the robot, or you wish to restart Matlab or the connection to the robot, first run CreatePiShutdown(Robot) to disconnect properly from the robot.
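For reference, steps (h)–(l) and the notes above reduce to a short sequence of calls. This is only a sketch; 'robotName' is a placeholder for the name noted in step (f), and the output variables are arbitrary.

% Minimal connection-check sketch for steps (h)-(l).
Robot = CreatePiInit('robotName');                  % step (h): connect to the robot
BeepRoomba(Robot.CreatePort);                       % step (i): you should hear a beep
[x, y, theta] = OverheadLocalizationCreate(Robot)   % step (j): overhead truth pose
tags  = RealSenseTag(Robot.TagPort)                 % step (k): tag information
depth = RealSenseDist(Robot.DistPort)               % step (k): depth information

% If anything fails, or when you are done working with the robot:
SetFwdVelAngVelCreate(Robot.CreatePort, 0, 0);      % stop the robot
CreatePiShutdown(Robot);                            % disconnect properly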
1.3 Driving the Robot Around
(a) The function driveArrows.m opens a GUI and allows you to drive the robot around manually. If you are plotting any other figures, make sure to call figure(h) immediately after, to keep that GUI in focus. Run your control program (or freedriveProgram.m) and use the up/down/left/right arrow keys (or W/S/A/D keys) to control the robot’s forward and angular velocities. To stop the robot, press the Space bar or Enter. Practice driving the robot around the field. Notice that the robot’s velocity is limited.
(b) After driving the robot around a bit, exit the control program (Ctrl+C) and close the driveArrows GUI. Type global datastore; and make sure all the appropriate sensor data was saved (truthPose, bump, rsdepth). You may want to plot the robot’s trajectory and make sure it is the path you manually drove.
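A quick way to do this check is sketched below. The field names (truthPose, bump, rsdepth) come from the lab instructions; the [time x y theta] column layout of truthPose is an assumption to verify against your readStoreSensorData.m.

% After exiting the control program: check the logged data and plot the path.
global datastore
disp(fieldnames(datastore))                 % expect truthPose, bump, rsdepth, ...
figure;
plot(datastore.truthPose(:,2), datastore.truthPose(:,3), 'b.-');   % assumed [time x y theta]
axis equal; xlabel('x [m]'); ylabel('y [m]');
title('Manually driven trajectory (truth pose)');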
1.4 Occupancy Grid with Bump Sensor
Using the code you wrote for homework (logOddsBump.m), you will build an occupancy grid map using data from the robot’s pose and bump sensors.
(a) Initialize a 16 × 28 occupancy grid of the environment with p0(occ) = 0.5 as the prior occupancy probability for all cells (i.e. ℓ0(occ) = 0). The boundaries of the environment in the x direction are [−2, 2] and in the y direction are [−3.5, 3.5] (each grid cell is 0.25 × 0.25 m). A minimal initialization and update sketch is given after this list.
(b) Within your control program, call logOddsBump.m to update the occupancy grid at each time step with your new pose and bump measurements. You may want to add a new field to datastore for the occupancy grid, so you can plot it later.
(c) Have your program plot the updated occupancy grid in real time so you can watch how your map evolves. On the same figure, plot the robot’s full trajectory.
(d) Place your robot on the field and start your control program. Drive around the environment, frequently bumping into walls. You may even (politely) bump into other robots. Save your data to file. Make sure all the sensor data, including the depth data, is saved.
(e) Repeat for each member of your group, using a different grid size. Keep in mind that a finer grid may take longer to run.
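The sketch below ties steps (a)–(c) together. It is not a prescribed implementation: the signature shown for logOddsBump is a placeholder (use whatever interface you chose in Homework 5), and the datastore column layouts are assumptions.

% Hedged sketch of the bump-map initialization and update (steps (a)-(c)).
NxCells = 16;  NyCells = 28;
xLim = [-2 2]; yLim = [-3.5 3.5];
L = zeros(NyCells, NxCells);                   % l0(occ) = 0 everywhere

% ... each time new data arrives in the control loop:
pose = datastore.truthPose(end, 2:4);          % [x y theta] (assumed layout)
bump = datastore.bump(end, :);                 % latest bump reading (assumed layout)
L = logOddsBump(pose, bump, L, xLim, yLim);    % hypothetical signature -- use your own
datastore.occGridBump = L;                     % log the grid for post-lab plotting

% Convert log odds to probability and plot in real time:
P = 1 ./ (1 + exp(-L));
figure(2); imagesc(xLim, yLim, P); set(gca, 'YDir', 'normal');
colormap(flipud(gray)); caxis([0 1]); axis equal; drawnow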
1.5 Occupancy Grid with Depth
Using the code you wrote for homework (logOddsDepth.m), you will build an occupancy grid map using data from the robot’s pose and depth sensors.
(a) Initialize a 22 × 12 occupancy grid of the same environment with p0(occ) = 0.5 as the prior occupancy probability for all cells (i.e. ℓ0(occ) = 0). A sketch of the depth update, including the sensor offset, is given after this list.
(b) Within your control program, call logOddsDepth.m to update the occupancy grid at each time step with your new pose and depth measurements.
(c) Have your program plot the updated occupancy grid in real time so you can watch how your map evolves. (If your program runs particularly slowly, you may prefer to calculate your occupancy grid update offline after you’ve collected all the data, but at least try to run it in real time.) On the same figure, plot the robot’s full trajectory.
(d) Place your robot on the field and start your control program. Drive around the environment, and try to “see” as much of the map as possible. Save your data to file.
(e) Repeat for each member of your group, using a different grid size. Keep in mind that a finer grid may take longer to run.
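The sketch below mirrors the bump version, with the RealSense offset from step 1.2(m) applied to get the sensor position in the global frame. As before, the logOddsDepth signature and the datastore layouts are assumptions, not the required interface, and the same x-by-y grid convention as Section 1.4 is assumed.

% Hedged sketch of the depth-map update (steps (a)-(b)).
NxCells = 22;  NyCells = 12;
xLim = [-2 2]; yLim = [-3.5 3.5];
L = zeros(NyCells, NxCells);

sensorOffset = [0; 0.08];                      % [m], robot-fixed frame (step 1.2(m))

% ... each time a new depth row arrives:
pose  = datastore.truthPose(end, 2:4);         % [x y theta] (assumed layout)
depth = datastore.rsdepth(end, 2:end);         % depth ranges (assumed layout)

% Sensor position in the global frame (useful inside logOddsDepth):
Rth = [cos(pose(3)) -sin(pose(3)); sin(pose(3)) cos(pose(3))];
sensorGlobal = pose(1:2)' + Rth * sensorOffset;

L = logOddsDepth(pose, depth, L, xLim, yLim);  % hypothetical signature -- use your own
datastore.occGridDepth = L;                    % log the grid for post-lab plotting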
2 Post-Lab Assignment
Remember to submit your assignment as a group on Gradescope. To form a group:
1. One individual from the group submits the PDF on Gradescope.
2. When you click on your submitted assignment, there will be a section denoted by "Group" below the assignment name. Click on "View or edit group."
3. Add student(s) by selecting their name(s) from the drop-down menu under "Add Student."
2.1 Occupancy Grid with Bump Sensor (25 Points)
(a) Using the data you collected from Section 1.4, plot the final occupancy grid using the bump sensors (this should be the same as you saw in lab). On the same figure, plot the robot’s trajectory. The file lab3Map.mat contains the coordinates of the walls in the environment. Plot the walls on the occupancy grid. (A minimal plotting sketch is given after this list.)
(b) How “good” was the map generated by the bump sensor? Comment on how complete or incomplete the map is (compared to truth).
(c) How did the grid resolution affect the map? What are the pros and cons of using a finer grid with the bump sensor?
(d) Describe how each group member modeled the bump sensor within logOddsBump.m (e.g. what values were used for p(occ | bump) and p(occ | ¬bump))? Did anyone’s algorithm seem to work “better”? Why or why not?
(e) What happened when you bumped into another robot? (If you didn’t bump another robot, what would you expect to see in your data?) Why?
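One possible layout for the plot in 2.1(a) is sketched below. The variable name map and the [x1 y1 x2 y2] row format for the wall segments inside lab3Map.mat are assumptions for illustration only; load the file and check what it actually contains before using this.

% Hedged sketch for post-lab plot 2.1(a): final bump grid, trajectory, and walls.
load('lab3Map.mat');                            % inspect with whos('-file','lab3Map.mat')
xLim = [-2 2]; yLim = [-3.5 3.5];

P = 1 ./ (1 + exp(-datastore.occGridBump));     % log odds -> probability
figure; hold on
imagesc(xLim, yLim, P); set(gca, 'YDir', 'normal');
colormap(flipud(gray)); caxis([0 1]);
plot(datastore.truthPose(:,2), datastore.truthPose(:,3), 'b-', 'LineWidth', 1.5);
for k = 1:size(map, 1)                          % assumed wall format: [x1 y1 x2 y2] per row
    plot(map(k, [1 3]), map(k, [2 4]), 'r-', 'LineWidth', 2);
end
axis equal; xlim(xLim); ylim(yLim);
xlabel('x [m]'); ylabel('y [m]'); title('Bump occupancy grid, trajectory, and walls');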
2.2 Occupancy Grid with Depth (30 Points)
(a) Using the data you collected from Section 1.5, plot the final occupancy grid using the depth sensors (this should be the same as you saw in lab). On the same figure, plot the robot’s trajectory and the environment walls.
(b) How “good” was the map generated by the depth sensors? Comment on how complete or incomplete the map is (compared to truth).
(c) How did the grid resolution affect the map? What are the pros and cons of using a finer grid with the depth sensor?
(d) Describe how each group member modeled the depth sensors within logOddsDepth.m (e.g. what was p(occ | z_depth))? Whose algorithm seemed to work “best”? Why or why not?
(e) What happened when you “saw” another robot with your sensor? (If you didn’t see another robot, what would you expect to see in your data?) Why?
(f) What would you expect to happen to your map if there were lots of moving objects in the environment? How might you adjust your sensor model to account for this?
2.3 Lab preparation (mandatory, not graded)
(a) Which robot did you use?
(b) For the code you wrote as part of the homework, were there any coding errors that were not discovered when using the simulator?
(c) How much time did you spend debugging your code in the lab?
(d) Did you observe any behaviors in the lab that you did not expect, or that did not match the simulated behaviors?
(e) What was the contribution of each group member to the lab and the report?