Introduction
Localization is the problem of determining and tracking a robot's
position relative to a map of its environment.
The histogram filter is a localization algorithm that
maintains an estimate of the robot's state, expressed as a collection
of probabilities for individual cells.
The particle filter is a localization algorithm that maintains
an estimate of the robot's state, expressed as a collection of
samples.
These filters are:
- passive
- local
- probabilistic

Basic idea of probabilistic localization


Probabilistic models
Recall the state transition function: \[ x_k = f(x_{k-1}, u_{k-1}, \theta_{k-1}) \] and the observation function \[ y_k = h(x_k, \psi_k). \] If errors are random, we can express the same information using conditional probabilities:
- Transition model: $P(x_k \mid x_{k-1}, u_{k-1})$
- Observation model: $P(y_k \mid x_k)$
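
For a small discrete state space, both models can be written out explicitly as tables. A minimal sketch in Python (the three-cell corridor, single "move right" action, and door sensor are illustrative assumptions, not from the notes):

```python
import numpy as np

# Hypothetical three-cell corridor; states are cell indices 0, 1, 2.
# Transition model P(x_k | x_{k-1}, u_{k-1}) for the single action
# "move right": the robot usually advances one cell, but may stay put.
# Row i gives the distribution over x_k when x_{k-1} = i.
P_trans = np.array([
    [0.2, 0.8, 0.0],
    [0.0, 0.2, 0.8],
    [0.0, 0.0, 1.0],   # wall at the right end: the robot stays
])

# Observation model P(y_k | x_k) for a binary door sensor.
# Column order: y = "door", y = "no door"; a door is at cell 1.
P_obs = np.array([
    [0.1, 0.9],
    [0.8, 0.2],
    [0.1, 0.9],
])

# Each row is a valid conditional distribution over the next state
# (or over the observation), so each row must sum to one.
assert np.allclose(P_trans.sum(axis=1), 1.0)
assert np.allclose(P_obs.sum(axis=1), 1.0)
```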
Posterior distribution
Recall that the goal of probabilistic localization is to keep track of a probability distribution over the state space, given the history: \[ P(x_k \mid u_1,\ldots,u_{k-1}, y_1,\ldots,y_k) \] This is sometimes called the posterior distribution. Knowing this distribution provides an answer to the question “Where am I?”, especially if the probability mass is concentrated around one state.
Posterior updates
We can get an update equation for the posterior distribution using Bayes' rule, along with marginalization over the previous state: \[ P(x_k \mid u_1, \ldots, u_{k-1}, y_1, \ldots, y_k) = \alpha_k\, P(y_k \mid x_k) \sum_{x_{k-1} \in X} P(x_k \mid x_{k-1}, u_{k-1})\, P(x_{k-1} \mid u_1,\ldots,u_{k-2},\, y_1,\ldots,y_{k-1}) \] where $\alpha_k$ is a normalizing constant.
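
On a finite state space, this update is a matrix-vector product (the marginalization) followed by an elementwise reweighting and a normalization that supplies $\alpha_k$. A minimal sketch with made-up numbers:

```python
import numpy as np

def posterior_update(prior, P_trans, likelihood):
    """One step of the posterior update.

    prior:      P(x_{k-1} | u_1..u_{k-2}, y_1..y_{k-1}) as a vector
    P_trans:    P_trans[i, j] = P(x_k = j | x_{k-1} = i, u_{k-1})
    likelihood: likelihood[j] = P(y_k | x_k = j)
    """
    # Marginalize over x_{k-1}: sum_i P(x_k = j | x_{k-1} = i) * prior[i].
    predicted = P_trans.T @ prior
    # Reweight by the observation, then normalize (this computes alpha_k).
    unnormalized = likelihood * predicted
    return unnormalized / unnormalized.sum()

# Toy two-state example with made-up numbers.
prior = np.array([0.5, 0.5])
P_trans = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
likelihood = np.array([0.7, 0.2])
post = posterior_update(prior, P_trans, likelihood)
assert np.isclose(post.sum(), 1.0)
```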
Missing parts
The update equation by itself doesn't tell us:
- How to represent the transition distribution.
- How to represent the observation distribution.
- How to represent the posterior distribution.
Transition distributions
Example: Assume errors occur in rotation and translation separately.
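
One common way to build such a transition model is to sample the rotation and translation errors independently. A minimal sketch for a planar robot (the Gaussian noise model and the standard deviations are illustrative assumptions):

```python
import math
import random

def sample_motion(x, y, theta, rot, trans, sigma_rot=0.05, sigma_trans=0.1):
    """Sample x_k ~ P(x_k | x_{k-1}, u_{k-1}) for a turn-then-drive action.

    The commanded action u = (rot, trans) first rotates the robot by
    `rot` radians, then drives it forward `trans` meters. The rotation
    and translation errors are drawn separately, as assumed above.
    """
    noisy_rot = rot + random.gauss(0.0, sigma_rot)
    noisy_trans = trans + random.gauss(0.0, sigma_trans)
    theta_new = theta + noisy_rot
    x_new = x + noisy_trans * math.cos(theta_new)
    y_new = y + noisy_trans * math.sin(theta_new)
    return x_new, y_new, theta_new
```

Calling this many times from the same start state traces out the transition distribution for that action.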


Observation distributions
Example: Range sensor. Sources of error: noise, crosstalk, reflections, dynamic obstacles (e.g. people), maximum range, etc.
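
These error sources are often combined into a mixture likelihood: a Gaussian around the expected range for measurement noise, a short-reading term for unexpected obstacles, a spike at maximum range for failures, and a uniform floor for everything else. A minimal sketch (the mixture weights and noise parameters are illustrative, not calibrated values):

```python
import math

def range_likelihood(z, z_expected, z_max,
                     w_hit=0.7, w_short=0.1, w_max=0.1, w_rand=0.1,
                     sigma=0.2, lam=0.5):
    """Approximate P(y_k | x_k) for one range reading z, given the
    range z_expected predicted from the map at state x_k."""
    if not (0.0 <= z <= z_max):
        return 0.0
    # Measurement noise around the expected range.
    p_hit = math.exp(-0.5 * ((z - z_expected) / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))
    # Unexpected obstacles (e.g. people) make readings come up short.
    p_short = lam * math.exp(-lam * z) if z <= z_expected else 0.0
    # Sensor failures report the maximum range.
    p_max = 1.0 if z >= z_max else 0.0
    # Unexplained noise: uniform over [0, z_max].
    p_rand = 1.0 / z_max
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```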
Posterior distributions: Histogram filter
One option: Histogram filter
- Make a finite collection of cells.
- Track the probability for each cell.
- Use the update equation directly.

Posterior distributions: Particle filtering
Example: Particle filter
- Track a finite set of samples, called particles, each with a state $x$ and a weight $\omega$.

Particle filter: Initialization
Start with a set $\chi$ of $M$ particles based on the robot's initial knowledge. Each particle is a single state.
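
With no prior knowledge (global localization), a common choice is to draw the initial particles uniformly over the map, each with equal weight. A minimal sketch for a rectangular map (the map bounds and the planar $(x, y, \theta)$ state are illustrative assumptions):

```python
import math
import random

def initialize_particles(M, x_max=10.0, y_max=10.0):
    """Draw M particles (x, y, theta) uniformly, with equal weights."""
    particles = [(random.uniform(0.0, x_max),
                  random.uniform(0.0, y_max),
                  random.uniform(-math.pi, math.pi)) for _ in range(M)]
    weights = [1.0 / M] * M
    return particles, weights
```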
Particle filter: Actions
When an action $u_k$ is executed, update $\chi$ via forward projection: for each particle $\left(x^{(m)}_{k-1}, \omega^{(m)}_{k-1}\right)$ in $\chi_{k-1}$, choose a new state at random according to \[ P(x_k \mid x^{(m)}_{k-1}, u_k). \]
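
Given a sampler for the transition model, the action update moves every particle by an independent random draw. A minimal sketch (the 1-D state and the hypothetical `sample_transition` noise model stand in for sampling $P(x_k \mid x^{(m)}_{k-1}, u_k)$):

```python
import random

def sample_transition(x, u):
    # Hypothetical 1-D transition sampler: move by u plus Gaussian noise.
    return x + u + random.gauss(0.0, 0.1)

def action_update(particles, u):
    """Forward-project each particle through the transition model."""
    return [sample_transition(x, u) for x in particles]
```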
Particle filter: Observations
When an observation $y_k$ is received, assign weights to the particles. One simple weighting approach: the weight of a particle at $x_k^{(m)}$ is proportional to \[ P(y_k \mid x_k^{(m)}). \] Key idea: particles more consistent with the observation get higher weights.
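
Weighting is then one likelihood evaluation per particle, followed by normalization so the weights form a distribution. A minimal sketch (the 1-D Gaussian `likelihood` is a hypothetical stand-in for $P(y_k \mid x_k^{(m)})$):

```python
import math

def likelihood(y, x, sigma=0.5):
    # Hypothetical observation model: y is a noisy reading of x itself.
    return math.exp(-0.5 * ((y - x) / sigma) ** 2)

def weight_particles(particles, y):
    """Assign each particle a weight proportional to P(y_k | x_k)."""
    w = [likelihood(y, x) for x in particles]
    total = sum(w)
    return [wi / total for wi in w]
```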
Particle filter: Resampling
Occasionally, we resample the particles to concentrate them in areas more likely to contain the correct state. Repeat $M$ times: randomly choose a particle (with replacement), with probability proportional to its weight.
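
With normalized weights, this is sampling $M$ indices with replacement in proportion to those weights. A minimal sketch (Python's `random.choices` performs the weighted draw):

```python
import random

def resample(particles, weights, M=None):
    """Draw a new particle set by sampling with replacement, with
    probability proportional to each particle's weight."""
    if M is None:
        M = len(particles)
    new_particles = random.choices(particles, weights=weights, k=M)
    # After resampling, all particles start with equal weight again.
    new_weights = [1.0 / M] * M
    return new_particles, new_weights
```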
Example

[Wolfram Burgard, TUN]
Particle filtering for passive global localization
