CSCE 452/752 Robotics and Spatial Intelligence, Fall 2025

12. Localization 2: The Kalman Filter

Introduction

Localization is the problem of determining and tracking the robot's position relative to a map of its environment.
The Kalman Filter is a localization algorithm that maintains an estimate of the robot's state, expressed as a mean and covariance matrix.
For linear-Gaussian systems, it maintains an exact representation of the probability density over the current state. \[ \newcommand{\R}{\mathbb{R}} \]

Actions subject to error

Recall how we model actions that are subject to error.
State transition function: \[ x_{k+1} = f(x_k, u_k, \theta_k) \]
                element    space of all   intuition                  known?
State           $x$        $X$            Where is the robot?        no
Action          $u$        $U$            What did the robot do?     yes
Action Error    $\theta$   $\Theta$       How accurately?            no

Simple example

A one-dimensional system with additive actions and additive error: \[X = U = \Theta = \R\] \[x_{k+1} = x_k + u_k + \theta_k\]
[Figure: a simple 1D system with error]
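
To make this concrete, here is a minimal Python sketch simulating this transition model. The constant action (1.0) and the noise standard deviation (0.1) are illustrative assumptions, not values from the lecture.

    import random

    x = 0.0                             # true state x_0
    for k in range(5):
        u = 1.0                         # action u_k (assumed constant)
        theta = random.gauss(0.0, 0.1)  # action error theta_k (assumed std 0.1)
        x = x + u + theta               # x_{k+1} = x_k + u_k + theta_k
        print(f"x_{k+1} = {x:.3f}")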

Observations

Remember how we can model observations that provide partial information about the current state, subject to error.
Observation function: \[ y_k = h(x_k, \psi_k) \]
                     element    space of all   intuition          known?
Observation          $y$        $Y$            sensor data        yes
Observation Error    $\psi$     $\Psi$         How accurately?    no

Example continued

Same system as before, with: \[ Y = \Psi = \R \] \[ y_k = x_k + \psi_k\]
[Figure: a simple 1D system with error]
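
Continuing the sketch above, each step now also produces a noisy observation of the state. The sensor-noise standard deviation (0.2) is again an assumed value for illustration.

    import random

    x = 0.0
    for k in range(5):
        x = x + 1.0 + random.gauss(0.0, 0.1)  # transition with action error
        y = x + random.gauss(0.0, 0.2)        # y_k = x_k + psi_k
        print(f"true x = {x:.3f}   observed y = {y:.3f}")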

Probabilistic Localization

Problem: The robot does not generally know $x$!
Solution: Compute a probability density over the state space, conditioned on the actions taken and the observations received so far: \[ p(x_k \mid u_1,\ldots,u_k, y_1,\ldots,y_k) \] This is expected-case (average-case) reasoning.

Linear-Gaussian systems

One special class of robot models that is very well understood is the class of Linear-Gaussian (LG) systems.
  1. Linear transitions: $x_{k+1} = Ax_k + Bu_k + G\theta_k$
  2. Linear sensing: $y_{k} = Cx_k + H\psi_k$
  3. Gaussian noise: Both $\theta_k$ and $\psi_k$ are chosen randomly according to independent, zero-mean Gaussian densities with covariance matrices $\Sigma_\theta$ and $\Sigma_\psi$ respectively.
$A$, $B$, $C$, $G$, and $H$ are matrices with the appropriate dimensions.
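
For instance, the 1D example above is an LG system in which every model matrix is the 1x1 identity. The sketch below writes it out with numpy; the noise covariances (0.01 and 0.04) and the constant action are assumed values for illustration.

    import numpy as np

    # The simple 1D example, written as an LG system.
    A = B = C = G = H = np.array([[1.0]])
    Sigma_theta = np.array([[0.01]])  # action-noise covariance (assumed)
    Sigma_psi = np.array([[0.04]])    # sensor-noise covariance (assumed)

    rng = np.random.default_rng(0)
    x = np.array([0.0])               # state x_k
    u = np.array([1.0])               # action u_k (assumed constant)
    theta = rng.multivariate_normal(np.zeros(1), Sigma_theta)
    psi = rng.multivariate_normal(np.zeros(1), Sigma_psi)
    x = A @ x + B @ u + G @ theta     # linear transition
    y = C @ x + H @ psi               # linear sensing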

Who cares?

LG systems are special because we can prove that the probability density \[ p(x_k \mid u_1,\ldots,u_k, y_1,\ldots,y_k) \] is always a Gaussian, no matter what actions the robot takes or what observations it receives.
This distribution can therefore be represented compactly by just two quantities: its mean $\mu_k$ and its covariance matrix $\Sigma_k$.
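
In particular, for an $n$-dimensional state space, the density at every step is the familiar multivariate Gaussian, determined entirely by these two quantities: \[ p(x_k \mid u_1,\ldots,u_k, y_1,\ldots,y_k) = \frac{1}{\sqrt{(2\pi)^n \det \Sigma_k}} \exp\left( -\frac{1}{2} (x_k - \mu_k)^\top \Sigma_k^{-1} (x_k - \mu_k) \right) \]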

Kalman filter

The Kalman filter is a passive, local, probabilistic localization algorithm that computes the Gaussian probability density over states in an LG system.
Inputs:
  • $\mu_k$
  • $\Sigma_k$
  • $u_k$
  • $y_k$
  • $A$, $B$, $C$, $G$, $H$
  • $\Sigma_\theta$, $\Sigma_\psi$
  
Outputs:
  • $\mu_{k+1}$
  • $\Sigma_{k+1}$
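
These inputs and outputs suggest a function interface like the one sketched below. The name kalman_step is our own; the body is filled in after the update rule that follows.

    def kalman_step(mu_k, Sigma_k, u_k, y_k,
                    A, B, C, G, H, Sigma_theta, Sigma_psi):
        """One filter step: consume the current estimate (mu_k, Sigma_k),
        the action u_k, and the observation y_k; return the updated
        estimate (mu_{k+1}, Sigma_{k+1})."""
        ...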

Kalman update rule

Each update of the Kalman filter consists of four steps.
  1. Compute $\Sigma^\prime_{k+1}$, an intermediate covariance that accounts for $u_k$ but not $y_k$: \[ \Sigma^\prime_{k+1} = A \Sigma_k A^\top + G \Sigma_\theta G^\top \]
  2. Compute $L_{k+1}$, the Kalman gain, a partial result that makes the remaining equations simpler: \[ L_{k+1} = \Sigma^\prime_{k+1}C^\top ( C \Sigma^\prime_{k+1} C^\top + H \Sigma_\psi H^\top )^{-1} \]
  3. Compute $\mu_{k+1}$: \[ \mu_{k+1} = A \mu_k + B u_k + L_{k+1}( y_k - C(A \mu_k + B u_k) ) \]
  4. Compute $\Sigma_{k+1}$: \[\Sigma_{k+1} = (I - L_{k+1}C)\Sigma^\prime_{k+1} \]
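
Putting the four steps together, the body of the kalman_step interface sketched earlier is a direct transcription of these equations into numpy; a sketch, not code from the lecture.

    import numpy as np

    def kalman_step(mu_k, Sigma_k, u_k, y_k,
                    A, B, C, G, H, Sigma_theta, Sigma_psi):
        # Step 1: intermediate covariance, accounting for u_k but not y_k.
        Sigma_pred = A @ Sigma_k @ A.T + G @ Sigma_theta @ G.T
        # Step 2: the Kalman gain L_{k+1}.
        S = C @ Sigma_pred @ C.T + H @ Sigma_psi @ H.T
        L = Sigma_pred @ C.T @ np.linalg.inv(S)
        # Step 3: updated mean; the prediction is corrected by the
        # discrepancy between y_k and the predicted observation.
        mu_pred = A @ mu_k + B @ u_k
        mu_next = mu_pred + L @ (y_k - C @ mu_pred)
        # Step 4: updated covariance.
        I = np.eye(Sigma_pred.shape[0])
        Sigma_next = (I - L @ C) @ Sigma_pred
        return mu_next, Sigma_next

With the 1x1 matrices from the LG example above, repeated calls to this function track the 1D robot from its noisy actions and noisy measurements.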

Extended Kalman filter

What happens if we don't have a linear system? \[ x_{k+1} = f(x_k, u_k) + \theta_k \] \[ y_k = h(x_k) + \psi_k \]
Good news: If we make a linear approximation of the system (by taking partial derivatives of $f$ and $h$), then we can still use the Kalman filter updates.
This is called the Extended Kalman Filter (EKF).
Bad news: If we do this, we lose the guarantee that we are representing $ p(x_k \mid u_1,\ldots,u_k, y_1,\ldots,y_k) $ exactly. In statistical terms, we no longer have an optimal estimator.
Additional good news: In spite of this lack of guarantees, the EKF generally works well in practice, especially when the non-linearities are not too severe.
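
Under the additive-noise model above, one EKF step can be sketched as follows. The arguments f_jac and h_jac, which return the Jacobians of $f$ and $h$ evaluated at the current estimate, are our own naming; this is a sketch under those assumptions, not a definitive implementation.

    import numpy as np

    def ekf_step(mu_k, Sigma_k, u_k, y_k,
                 f, h, f_jac, h_jac, Sigma_theta, Sigma_psi):
        A = f_jac(mu_k, u_k)           # linearize f about the estimate
        mu_pred = f(mu_k, u_k)         # nonlinear prediction of the mean
        Sigma_pred = A @ Sigma_k @ A.T + Sigma_theta
        C = h_jac(mu_pred)             # linearize h about the prediction
        S = C @ Sigma_pred @ C.T + Sigma_psi
        L = Sigma_pred @ C.T @ np.linalg.inv(S)
        mu_next = mu_pred + L @ (y_k - h(mu_pred))
        I = np.eye(Sigma_pred.shape[0])
        Sigma_next = (I - L @ C) @ Sigma_pred
        return mu_next, Sigma_next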