1 Introduction
In this lab, we will implement a localization system for a mobile robot with a planar LIDAR range sensor. Localization is an essential component of any robot that needs to act: before a robot can choose actions, it must know where it is relative to its environment.
There are many types of localizers. Some use RGB camera sensors and attempt to recognize the viewpoint from which they are seeing a scene, which remains a difficult problem in the general case. Others look for pre-crafted features in the environment, like ArUco tags, which are easy to detect and are placed in known locations. These systems are robust and reliable in controlled environments like factories, but infeasible in uncontrolled environments (like most of the world).
We will implement a common type of localizer, called a Monte Carlo Localizer, which is based on a probabilistic filter called a particle filter. Our sensory input will be a dense planar LIDAR scan of our surroundings, and we will try to localize against a map generated with the same kind of sensor. Using this sensory information, and integrating it over time, we can gain an accurate probabilistic understanding of where our robot is relative to an existing map.
2 Theory
2.1 Localization
A localizer is a system that takes a series of actions $u_t$, observations $z_t$, and a map $m$, and tries to estimate the global pose $x_t$ of the robot relative to the map.
2.2 Particle Filters
A particle filter is a nonparametric Bayesian filter, which approximates the posterior distribution $bel(x_t)$ using a finite number of parameters, called particles. We call our collection of particles $\mathcal{X}_t = \{x_t^{(1)}, \ldots, x_t^{(n)}\}$, where each particle $x_t^{(i)}$ represents a sample drawn from the posterior. More simply, each particle is a concrete hypothesis of the global pose of the robot.
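For concreteness, a minimal way to store the particle set in Python might look like the following sketch (NumPy and the variable names here are our own choices, not part of the skeleton code):

```python
import numpy as np

n = 1000  # number of particles (an illustrative value; the skeleton defines its own)

# Each particle is a pose hypothesis (x, y, theta); store all n as an (n, 3) array.
particles = np.zeros((n, 3))

# One importance weight per particle, kept normalized to sum to 1.
weights = np.full(n, 1.0 / n)
```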
Let's examine the algorithm for a particle filter line by line, specifically focused on the case of localization.
Algorithm 1: Particle Filter
input : $\mathcal{X}_{t-1}$, $u_t$, $z_t$
output: $\mathcal{X}_t$
1  $\bar{\mathcal{X}}_t = \mathcal{X}_t = \emptyset$;
2  for $i = 1$ to $n$ do
3      Sample $x_t^{(i)} \sim p(x_t \mid u_t, x_{t-1}^{(i)})$;
4      $w_t^{(i)} = p(z_t \mid x_t^{(i)})$;
5      Add $(x_t^{(i)}, w_t^{(i)})$ to $\bar{\mathcal{X}}_t$;
6  end
7  for $i = 1$ to $n$ do
8      Draw index $i$ with probability $\propto w_t^{(i)}$;
9      Add $x_t^{(i)}$ to $\mathcal{X}_t$;
10 end
From the input and output, we can immediately see that the particle filter is a recursive Bayesian filter; that is, it takes a current estimate of the state, along with some new information, and produces a new state estimate. As discussed in class, recursive filters are often simpler to implement and more memory efficient than filters that require an entire history of sensory inputs.
Line 3 applies the motion model to each pose hypothesis $x_t^{(i)}$. Given some motion information $u_t$ from the robot, we change each pose by moving it in accordance with our understanding of how the robot moves. More information on motion models is presented in Section 2.3.
Line 4 calculates an importance weight for each particle based on the sensor model. A sensor model tells us how likely the given measurement $z_t$ is from each hypothesis $x_t^{(i)}$, given our known map $m$. More information on sensor models is presented in Section 2.4.
Lines 8 and 9 take the updated hypotheses from the motion and sensor models and probabilistically resample the particles in proportion to each sample's importance weight.
After running this algorithm once, we see that our particles $\mathcal{X}_t$ approximate the new posterior $p(x_t \mid z_t, u_t, x_{t-1})$.
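As a sketch, one update of Algorithm 1 might look like the following in Python; `sample_motion_model` and `measurement_log_likelihood` are hypothetical helpers standing in for the models developed in Sections 2.3 and 2.4 (sketches of both appear in those sections):

```python
import numpy as np

def particle_filter_update(particles, u_t, z_t, grid_map, rng):
    """One recursive update of Algorithm 1: predict, weight, resample."""
    n = len(particles)

    # Lines 2-5: propagate each hypothesis through the motion model, then
    # weight it by how well the current scan matches that hypothesis.
    propagated = np.array([sample_motion_model(p, u_t, rng) for p in particles])
    log_w = np.array([measurement_log_likelihood(z_t, p, grid_map)
                      for p in propagated])

    # Lines 7-9: resample n particles with probability proportional to weight.
    # Subtracting the max log-weight before exponentiating avoids underflow.
    weights = np.exp(log_w - log_w.max())
    indices = rng.choice(n, size=n, p=weights / weights.sum())
    return propagated[indices]
```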
For a more complete presentation of the general particle filter, including a mathematical derivation of its properties as an estimator and discussion of sampling bias and particle-deprivation problems, please refer to Thrun's Probabilistic Robotics, Section 4.3.
2.3 Motion Models
In this lab, we will consider an odometry motion model. The alternative, a velocity motion model, was presented in class. Please refer to your lecture notes for the difference between the two.
A motion model specifies the probability distribution $p(x_t \mid x_{t-1}, u_t)$. That is, given our current pose and some control input, we can predict the likelihood of ending up in various next states. For the particle filter algorithm, however, we don't need to compute the exact probabilities over the state space; we just need to be able to draw samples from this posterior.
Odometry is a measure of how far a robot has driven. Oftentimes, a rotary encoder is attached to the wheel axle of a robot, and the ticks are integrated per wheel to estimate the position of the robot. Understanding the relationship between the rotary encoder and the pose of the robot requires knowledge of the robot's kinematics, i.e., how large each wheel is and where the wheels are relative to each other.
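For example, for a differential-drive robot, integrating encoder ticks into a pose estimate might look like this sketch; the wheel radius, wheel base, and encoder resolution below are illustrative values, not parameters of our robot:

```python
import math

WHEEL_RADIUS = 0.05   # meters (illustrative)
WHEEL_BASE = 0.30     # distance between the wheels, meters (illustrative)
TICKS_PER_REV = 4096  # encoder resolution (illustrative)

def integrate_ticks(pose, left_ticks, right_ticks):
    """Advance a pose (x, y, theta) by one pair of encoder readings using
    differential-drive kinematics."""
    x, y, theta = pose
    # Arc length traveled by each wheel since the last reading.
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # First-order integration of the body motion into the world frame.
    return (x + d_center * math.cos(theta),
            y + d_center * math.sin(theta),
            theta + d_theta)
```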
Note that odometry is a sensor measurement, while a motion model integrates control input. We work around this by treating a pair of odometry-estimated poses as a control input $u_t = (x_t, x_{t-1})$. Practically, this is really the difference between these two poses, $\Delta x = (\Delta x, \Delta y, \Delta\theta)$.
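One common convention, assumed in the sketch below, is to express this difference in the frame of the earlier pose, so that the same displacement can later be replayed from any particle's hypothesized pose:

```python
import math

def odometry_control(prev_pose, curr_pose):
    """Turn a pair of odometry poses into a control u_t = (dx, dy, dtheta),
    expressed in the frame of the earlier pose."""
    dx_world = curr_pose[0] - prev_pose[0]
    dy_world = curr_pose[1] - prev_pose[1]
    theta = prev_pose[2]
    # Rotate the world-frame displacement into the previous body frame.
    dx = math.cos(theta) * dx_world + math.sin(theta) * dy_world
    dy = -math.sin(theta) * dx_world + math.cos(theta) * dy_world
    return (dx, dy, curr_pose[2] - prev_pose[2])
```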
Odometric sensors based on rotary encoders cannot observe many common types of error, such as wheel slippage. Imagine a robot with one wheel resting on ice: the wheel may turn, and we will integrate that rotation as robot motion, but in reality the robot has not moved because of the slippery surface.
To model uncertainty caused by sensor noise and possible slip, we perturb each dimension of the change in pose by a zero-mean Gaussian $\mathcal{N}(0, \sigma^2)$, where the variance $\sigma^2$ is a tunable parameter. By sampling the perturbation in each dimension, we can sample a new pose from the motion-model posterior
$$x_t \sim p(x_t \mid x_{t-1}, u_t)$$

or, more concretely,

$$x_t = x_{t-1} + \Delta x + w$$

where the noise $w = (w_x, w_y, w_\theta)$ is described as follows:

$$w_x \sim \mathcal{N}(0, \sigma_x^2), \qquad w_y \sim \mathcal{N}(0, \sigma_y^2), \qquad w_\theta \sim \mathcal{N}(0, \sigma_\theta^2)$$
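Putting this together, sampling a new pose for a single particle might look like the following sketch; the sigma values stand in for the tunable noise parameters, and the body-frame convention matches the hypothetical `odometry_control` above:

```python
import math

SIGMA_X, SIGMA_Y, SIGMA_THETA = 0.05, 0.05, 0.02  # tunable noise std. deviations

def sample_motion_model(particle, u_t, rng):
    """Apply the odometry delta u_t = (dx, dy, dtheta), perturbed by zero-mean
    Gaussian noise, to one pose hypothesis. rng is a numpy Generator."""
    x, y, theta = particle
    # Perturb each dimension of the body-frame displacement (the w terms).
    dx = u_t[0] + rng.normal(0.0, SIGMA_X)
    dy = u_t[1] + rng.normal(0.0, SIGMA_Y)
    dtheta = u_t[2] + rng.normal(0.0, SIGMA_THETA)
    # Rotate the perturbed displacement into the world frame and apply it.
    return (x + dx * math.cos(theta) - dy * math.sin(theta),
            y + dx * math.sin(theta) + dy * math.cos(theta),
            theta + dtheta)
```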
2.4 Sensor Models
In our context, each observation $z_t = \{z_t^{(1)}, z_t^{(2)}, \ldots, z_t^{(n)}\}$ is a planar LIDAR scan of $n$ rays, where each ray $z_t^{(i)} = (r_t^{(i)}, \theta_t^{(i)})$ consists of a measured distance $r_t^{(i)}$ and an angle $\theta_t^{(i)}$ in the sensor frame.
Given the pose $x_t$ and the map $m$, the individual rays are assumed conditionally independent, so the posterior can be broken apart as follows:

$$p(z_t \mid x_t, m) = \prod_{i=1}^{n} p(z_t^{(i)} \mid x_t, m)$$
For a particular ray $z_t^{(i)}$, we can simulate a corresponding ray $\hat{z}_t^{(i)} = (\hat{r}_t^{(i)}, \theta_t^{(i)})$ given a pose (or pose hypothesis) $x_t$ and a map $m$ by raycasting in the map until we hit an obstacle. We can compute this distance using Bresenham's algorithm, or a similar line-tracing algorithm, over the discretized occupancy grid that represents our map.
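As a simpler (if slower) stand-in for Bresenham's algorithm, the sketch below marches along the ray in fixed sub-cell steps until it leaves the grid or hits an occupied cell; it assumes a NumPy occupancy grid whose origin coincides with the world origin, which may not match your map's metadata:

```python
import math
import numpy as np

def raycast(grid, resolution, x, y, angle, max_range):
    """March along a ray from world point (x, y) at `angle` and return the
    distance to the first occupied cell, capped at max_range (all in meters).
    `grid` is a 2D array where nonzero means occupied."""
    step = resolution / 2.0  # sub-cell steps so we don't skip thin obstacles
    for r in np.arange(0.0, max_range, step):
        col = math.floor((x + r * math.cos(angle)) / resolution)
        row = math.floor((y + r * math.sin(angle)) / resolution)
        # Leaving the grid or hitting an occupied cell ends the ray.
        if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
            return r
        if grid[row, col] != 0:
            return r
    return max_range
```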
We will use a simplified sensor model that models the probability of a laser hit with a one-dimensional Gaussian centered on the simulated laser return. Given a particle hypothesis $x_t$, we know what each ray of a simulated scan would look like. We can thus compute, per ray, the probability of our actual measurement $z_t^{(i)}$ as

$$p(z_t^{(i)} \mid x_t, m) = \mathcal{N}(r_t^{(i)};\, \hat{r}_t^{(i)},\, \sigma_{hit}^2) = \frac{1}{\sqrt{2\pi\sigma_{hit}^2}}\, e^{-\frac{(r_t^{(i)} - \hat{r}_t^{(i)})^2}{2\sigma_{hit}^2}}$$

The variance of this distribution, $\sigma_{hit}^2$, is an intrinsic tunable noise parameter that encompasses sensor error and map error.
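In code, the per-particle weight is the product of these per-ray Gaussians; summing log-likelihoods instead avoids floating-point underflow. A sketch, reusing the hypothetical `raycast` from above with illustrative constants:

```python
import math

SIGMA_HIT = 0.1    # std. dev. of sensor-plus-map noise, meters (illustrative)
RESOLUTION = 0.05  # occupancy-grid cell size, meters (illustrative)
MAX_RANGE = 10.0   # LIDAR maximum range, meters (illustrative)

def measurement_log_likelihood(scan, particle, grid_map):
    """Sum per-ray log-Gaussians for one pose hypothesis. scan is an iterable
    of (measured range, sensor-frame angle) pairs."""
    x, y, theta = particle
    log_w = 0.0
    for r, beam_angle in scan:
        # Simulated return for this ray from the hypothesized pose.
        r_hat = raycast(grid_map, RESOLUTION, x, y, theta + beam_angle, MAX_RANGE)
        # log of N(r; r_hat, SIGMA_HIT^2)
        log_w += (-0.5 * ((r - r_hat) / SIGMA_HIT) ** 2
                  - 0.5 * math.log(2 * math.pi * SIGMA_HIT ** 2))
    return log_w
```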
In practice, beam sensors like LIDAR have more complex models. See Probabilistic Robotics, Section 6.3, for a detailed discussion of other sources of discrepancy from map-simulated returns, including unexpected objects, sensor failures, and special cases at the maximum range of the sensor.
2.5 Monte Carlo Localization
After integrating information through the particle filter, we now have almost enough information to localize a robot. Note that $\mathcal{X}_t$ now gives a collection of samples approximating the posterior $bel(x_t)$. However, to plan and act using a robot, we need a concrete guess as to the robot's state.
We can concretize the distribution in a simple way by assuming it is unimodal and taking the average pose of all particles in the filter:

$$\hat{x}_t = \left( \frac{1}{n}\sum_{i=1}^{n} x_t^{(i)},\ \frac{1}{n}\sum_{i=1}^{n} y_t^{(i)},\ \frac{1}{n}\sum_{i=1}^{n} \theta_t^{(i)} \right)$$
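A sketch of this estimate follows. One caveat worth knowing: a plain mean of angles misbehaves when particles straddle the $\pm\pi$ wrap-around, so the sketch substitutes the circular mean (average the unit vectors, then take atan2):

```python
import numpy as np

def estimate_pose(particles):
    """Concretize an (n, 3) particle array into a single (x, y, theta) pose."""
    x_mean = particles[:, 0].mean()
    y_mean = particles[:, 1].mean()
    # Circular mean for the heading: average the unit vectors, then atan2.
    theta_mean = np.arctan2(np.sin(particles[:, 2]).mean(),
                            np.cos(particles[:, 2]).mean())
    return np.array([x_mean, y_mean, theta_mean])
```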
3 Implementation
You will write a single ROS node in Python, called mcl545_node. The skeleton code opens a pre-recorded bag file and subscribes to the odometry and scan topics for you.
Search the code in catkin_ws/src/usc545mcl/bin/usc545mcl.py for three comments titled YOUR CODE HERE for implementation instructions. This is all of the code you will need to write.
Before you can execute your code, cd to the directory into which you cloned your repo, and run the following (you only need to do this once):
sudo ./setup.sh
cd catkin_ws
catkin build
Then, to execute your lab code, still in the catkin_ws directory, run:
source devel/setup.bash
roslaunch usc545mcl usc545mcl.launch
4 Output
Answer the following questions in 3-5 sentences each in a lab document.
1. How does each of the model variance parameters affect the performance of the localizer? Give possible reasons for the behavior you see. The parameters are defined at the top of the file.
2. How does the number of particles affect the behavior of the localizer? The number of particles can be modified at the top of the file.
3. Can a particle filter with a single particle perform well? Why or why not? What if it starts in the correct position?
Include the error graph generated by the localizer in your writeup document.
5 Extra Credit
The noise model for the motion model presented in Section 2.3 is overly simplistic. You can find a better noise model in Section 5.4 of Probabilistic Robotics. Explain why the noise model presented in this lab has poor properties. Implement the better noise model.
There are more components involved in an accurate beam sensor model. They can be found in Section 6.3 of Probabilistic Robotics. Implement the complete beam sensor model from this section.
The particle filter is powerful because it can model complex, multimodal posterior distributions. Averaging all particles assumes a unimodal distribution and may yield very unrealistic estimates. Use a clustering algorithm to find the particles in the largest mode, and compute their average as a more realistic estimate to concretize the posterior.