Introduction
Localization is the problem of determining and tracking the robot's
position relative to a map of its environment.
Localization is a broad topic for mobile robots:
- Several different kinds of problems share the name “localization.”
- In many cases, the details of the algorithm vary depending on what sensors the robot has.
Types of localization problems
Passive vs. active:
- Passive localization: Interpret sensor data to determine the robot's state.
- Active localization: Choose actions designed to help determine the robot's state.

Local vs. global:
- Local localization (“pose maintenance”): Start with good information about the robot's state. Keep track of this knowledge as the robot moves around.
- Global localization: Start with no information about the robot's state. Try to eliminate this uncertainty.
Today: Active Global Localization
Suppose we have a robot in a polygonal environment, equipped with a range sensor and a compass, but without sensor noise.
State space: $\mathbb{R}^2$ (position only; the compass makes the orientation known)
Today, we'll talk about the active, global version of this problem.
This is called Dudek-Romanik-Whitesides localization, after the
roboticists who first solved it.
Sensor information: Visibility polygon
We can think of the range sensor as providing the visibility polygon of the robot's state $x$ in its environment.
- The robot is located at the origin in this frame.
- Visibility polygon vertices are expressed relative to the robot's position.
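As a concrete sketch (not the paper's algorithm; the function names here are invented for illustration), a noise-free range sensor in a polygonal environment can be simulated by casting rays against the environment's edges. The hit distances, taken at bearings around the robot, sample the visibility polygon in a frame with the robot at the origin:

```python
import math

def ray_cast(origin, angle, edges):
    """Distance from origin to the nearest edge hit along the given bearing."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    best = None
    for (x1, y1), (x2, y2) in edges:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex           # cross(ray direction, edge direction)
        if abs(denom) < 1e-12:
            continue                        # ray parallel to this edge
        # Solve origin + t*(dx, dy) == (x1, y1) + s*(ex, ey).
        t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
        s = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
        if t > 1e-9 and 0.0 <= s <= 1.0:
            best = t if best is None else min(best, t)
    return best

def scan(origin, edges, num_rays):
    """Range readings at evenly spaced bearings, i.e. the robot-frame sensor."""
    return [ray_cast(origin, 2 * math.pi * k / num_rays, edges)
            for k in range(num_rays)]

# Unit-square environment; the robot sits at its center.
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
readings = scan((0.5, 0.5), square, num_rays=4)   # every reading is 0.5
```

In a convex environment like this square, the scan recovers the whole boundary; with reflex vertices, occluded edges would be replaced by "spurious" edges of the visibility polygon.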
Possible states
Q: What can the robot conclude after its first sensor reading?
A: The robot receives a visibility polygon. It knows it is at one of the positions that has this visibility polygon.
Hypothesis generation algorithm
Input:
- Environment polygon. ($n$ vertices)
- Visibility polygon. ($m$ vertices)

Output:
- Set of states in this environment that have this visibility polygon.

Note: Can be much faster if we have multiple queries in the same environment polygon. (Space-time tradeoff.)
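The published algorithm is a careful combinatorial matching; as a crude generate-and-test sketch (all names hypothetical), one necessary condition is that every visibility-polygon vertex, translated by the candidate position, must land on the environment's boundary. Pairing each visibility vertex with each environment vertex proposes positions, which are then filtered:

```python
def on_boundary(p, poly, eps=1e-9):
    """True if point p lies on an edge of polygon poly (list of vertices)."""
    px, py = p
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if abs(cross) > eps:
            continue                      # not collinear with this edge
        dot = (px - x1) * (x2 - x1) + (py - y1) * (y2 - y1)
        if -eps <= dot <= (x2 - x1) ** 2 + (y2 - y1) ** 2 + eps:
            return True                   # collinear and within the segment
    return False

def candidate_states(env, vis):
    """Propose a robot position for each (visibility vertex, environment
    vertex) pairing; keep it if every visibility vertex then lands on the
    environment boundary."""
    candidates = []
    for vx, vy in vis:                    # visibility vertex (robot frame)
        for ex, ey in env:                # environment vertex it may match
            x = (ex - vx, ey - vy)        # implied robot position
            if x not in candidates and all(
                    on_boundary((wx + x[0], wy + x[1]), env) for wx, wy in vis):
                candidates.append(x)
    return candidates

# Unit square seen from its center: the sensed polygon is the square itself,
# expressed with the robot at the origin.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sensed = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
```

This boundary test is necessary but not sufficient: it ignores occlusion and does not check that the proposal lies inside the environment, so a real implementation must also verify the visibility polygon itself.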
Hypothesis elimination
Now that we have a small number of candidates, how can we determine which candidate is the correct state?
To solve this problem, we need a decision tree.
- Internal nodes: Motion by the robot, followed by sensing the visibility polygon.
- Edges: A visibility polygon returned by the sensor. (How do we know the number of edges is finite?)
- Leaf nodes: A known final position for the robot.
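A minimal sketch of such a decision tree as a data structure (names are my own; the string-keyed observations stand in for hashable visibility polygons):

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    position: tuple                 # the robot's now-known state

@dataclass
class Node:
    motion: tuple                   # relative displacement to command
    children: dict                  # observed visibility polygon -> subtree

def execute(tree, move, sense):
    """Walk the tree: command the motion, sense, branch on the observation."""
    while isinstance(tree, Node):
        move(tree.motion)
        tree = tree.children[sense()]
    return tree.position

# Toy one-level tree: move right one unit, then branch on what is seen.
demo = Node(motion=(1.0, 0.0),
            children={"poly_A": Leaf((2.0, 0.0)), "poly_B": Leaf((0.0, 0.0))})
```

The number of children at each node is finite because, from finitely many candidate states, a given motion can produce only finitely many distinct visibility polygons.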
Bad news
The hypothesis elimination problem is NP-hard if we want the shortest path that localizes the robot.

Visibility cell decomposition
So let's forget about optimality and just think about getting some kind of solution. What kinds of motions are helpful?
Visibility cell decomposition: Divide the environment polygon by drawing rays
- outward from each pair of mutually visible vertices, and
- outward from the incident edges of each reflex vertex.
When the robot crosses one of these boundaries, its visibility
polygon gains or loses a vertex.

Environment overlays
How does the visibility cell decomposition help us?
- Imagine an overlay, with one copy of the environment for each candidate.
- Translate each copy so that the candidate state is at the origin.
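A sketch of the overlay construction (function names are my own): shifting each copy by the negative of its candidate puts every candidate at the origin.

```python
def translate(polygon, dx, dy):
    """Shift every vertex of the polygon by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in polygon]

def overlay(env, candidates):
    """One copy of the environment per candidate, each shifted so that its
    candidate state sits at the origin."""
    return [translate(env, -cx, -cy) for cx, cy in candidates]

# Two hypotheses about where the robot is in the unit square.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
layers = overlay(square, [(0.5, 0.5), (0.25, 0.75)])
```

Because every layer now has its candidate at the origin, a single point chosen in the overlay corresponds to the same commanded displacement under every hypothesis.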



Hypothesis elimination algorithm
Algorithm:
- Compute the overlay (including visibility cell decompositions) for all of the candidate states.
- Choose a destination that moves to a different cell in some, but not all, of the overlay layers.
- Sense the visibility polygon at the robot's new position.
- Eliminate candidates that are not consistent with this sensor reading.
- Repeat until only one candidate remains.
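The loop above might be sketched as follows (the destination-selection step is abstracted into a fixed list of moves, and the sensor is a stand-in; all names are illustrative):

```python
def eliminate(candidates, moves, sense_at, predict):
    """Move and sense repeatedly, dropping candidates whose predicted
    observation disagrees with the actual one."""
    offset = (0.0, 0.0)                       # net displacement so far
    for dx, dy in moves:
        if len(candidates) <= 1:
            break
        offset = (offset[0] + dx, offset[1] + dy)
        observed = sense_at(offset)           # what the robot actually senses
        candidates = [c for c in candidates if predict(c, offset) == observed]
    return candidates

# Toy stand-in "sensor": the observation is just which half-plane the
# position falls in; enough to split these two hypotheses.
def predict(c, off):
    return "left" if c[0] + off[0] < 3 else "right"

true_state = (0.0, 0.0)
survivors = eliminate([(0.0, 0.0), (5.0, 0.0)], [(1.0, 0.0)],
                      lambda off: predict(true_state, off), predict)
```

Each chosen destination crosses a cell boundary in some but not all layers, so every iteration is guaranteed to split the candidate set; the true state is never eliminated because its predictions always match the real sensor.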