25  Sensor Fusion Practice Exercises

In 60 Seconds

This chapter provides hands-on practice exercises covering the full spectrum of sensor fusion techniques: Kalman filter implementation for position tracking, complementary filters for IMU orientation, multi-sensor data quality assessment with outlier rejection, and hierarchical fusion architecture design. Each exercise builds practical skills for real-world IoT deployments.

Learning Objectives

After completing these exercises, you will be able to:

  • Implement Kalman filters for position tracking
  • Design complementary filters for IMU orientation
  • Build multi-sensor data quality assessments
  • Create hierarchical sensor fusion architectures

Key Concepts

  • Fusion algorithm implementation: The process of translating a mathematical fusion model (Kalman equations, complementary filter formula) into working code that processes real or simulated sensor data.
  • Simulation-based validation: Testing a fusion algorithm against synthetic sensor data with known ground truth, allowing quantitative evaluation of estimation accuracy before deploying on real hardware.
  • Monte Carlo evaluation: Running a fusion algorithm hundreds of times with randomly perturbed sensor parameters to assess robustness and identify failure modes statistically.
  • Root mean square error (RMSE): A standard metric for fusion accuracy measuring the average magnitude of estimation error across all time steps; lower RMSE indicates better fusion performance.
  • Sensor noise model: A mathematical description of how a sensor’s errors are distributed (typically Gaussian with specified mean and variance), used to tune fusion filter parameters.
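Two of these concepts – the Gaussian sensor noise model and RMSE – can be made concrete in a few lines of Python; the 22 °C ground truth and sigma = 0.5 °C are illustrative values, not from any specific sensor:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.full(1000, 22.0)                              # constant 22 C ground truth
readings = truth + rng.normal(0.0, 0.5, truth.size)      # Gaussian noise model, sigma = 0.5 C
rmse = float(np.sqrt(np.mean((readings - truth) ** 2)))  # with this much data, RMSE lands near sigma
```

For a zero-mean Gaussian noise model, RMSE converges to the noise standard deviation – which is exactly why RMSE is the natural accuracy metric for tuned fusion filters.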

Sensor fusion is like asking multiple witnesses about the same event – each one saw something slightly different, but by combining their accounts, you get a much clearer picture of what actually happened. In IoT, individual sensors are noisy and imperfect, but when you combine readings from multiple sensors using mathematical techniques, the result is far more accurate than any single sensor alone. These exercises walk you through the core techniques, starting simple and building up to more complex systems.

25.1 Practice Exercises

Exercise 1: Kalman Filter for Position Tracking

Objective: Implement a 1D Kalman filter to fuse noisy GPS position measurements with accelerometer-based velocity estimates.

Tasks:

  1. Generate synthetic data: true position following constant-velocity motion, GPS measurements with sigma = 5 m noise, accelerometer with sigma = 0.5 m/s^2 noise
  2. Implement Kalman filter with state [position, velocity]: prediction step (x_k = F x_{k-1}, P_k = F P_{k-1} F^T + Q), update step (K = P H^T (H P H^T + R)^-1, x = x + K (z - H x))
  3. Tune parameters: process noise Q (model uncertainty), measurement noise R (sensor variance), initial state x_0 and covariance P_0
  4. Compare performance: plot true position, noisy GPS, Kalman estimate; calculate RMS error for each

Expected Outcome: The Kalman filter achieves 2-3 m RMS error (40-50% better than GPS alone at 5 m). Understand that the filter smooths noise while tracking motion trends. Learn parameter tuning: too small a Q makes the filter overconfident in its model (slow to adapt to real dynamics); too large an R makes the filter distrust and ignore measurements.
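Tasks 1-4 can be sketched in NumPy as follows, simplified to GPS-only measurement updates (the accelerometer of task 1 would enter the prediction step as a control input); the trajectory, noise levels, and Q/P_0 values are the exercise's synthetic parameters, not a definitive tuning:

```python
import numpy as np

def run_kalman_demo(n_steps=200, dt=1.0, gps_sigma=5.0, seed=0):
    """Fuse noisy GPS positions with a constant-velocity model (1D Kalman filter)."""
    rng = np.random.default_rng(seed)
    true_pos = 2.0 * dt * np.arange(n_steps)            # ground truth: 2 m/s constant velocity
    gps = true_pos + rng.normal(0, gps_sigma, n_steps)  # task 1: noisy GPS measurements

    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [position, velocity]
    H = np.array([[1.0, 0.0]])             # we measure position only
    Q = np.diag([0.01, 0.01])              # process noise (model uncertainty)
    R = np.array([[gps_sigma ** 2]])       # measurement noise = GPS variance

    x = np.array([[0.0], [0.0]])           # initial state x_0
    P = np.eye(2) * 100.0                  # initial covariance P_0 (large: we know little)
    est = np.empty(n_steps)
    for k in range(n_steps):
        # Prediction: x_k = F x_{k-1}, P_k = F P_{k-1} F^T + Q
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: K = P H^T (H P H^T + R)^-1, x = x + K (z - H x)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[gps[k]]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        est[k] = x[0, 0]

    gps_rmse = float(np.sqrt(np.mean((gps - true_pos) ** 2)))
    kf_rmse = float(np.sqrt(np.mean((est - true_pos) ** 2)))
    return gps_rmse, kf_rmse
```

With the default seed, the filtered RMSE should come out well below the 5 m GPS noise floor; re-tuning Q up and down (task 3) is the quickest way to feel the overconfident-vs-noisy trade-off described above.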

Exercise 2: Complementary Filter for IMU Orientation

Objective: Implement a complementary filter to fuse gyroscope and accelerometer data for drift-free orientation estimation.

Tasks:

  1. Collect IMU data from a sensor (MPU6050 or phone): gyroscope (angular velocity w) and accelerometer (gravity direction) at 100 Hz
  2. Implement gyroscope-only integration: theta_k = theta_{k-1} + w * dt; observe drift over 60 seconds
  3. Implement accelerometer-only: theta = atan2(accel_y, accel_z); observe noise from vibrations
  4. Implement complementary filter: theta = alpha * (theta + w * dt) + (1-alpha) * theta_accel with alpha=0.98

Expected Outcome: Complementary filter achieves <1 deg steady-state error with rapid response to rotations. Understand frequency separation: gyroscope (high-pass) + accelerometer (low-pass). Learn alpha tuning: higher alpha = trust gyro more, lower alpha = trust accel more.
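Tasks 2-4 can be previewed on synthetic data before touching hardware; the constant 10° tilt, 0.5 °/s gyro bias, and 3° accelerometer noise below are stand-ins for the real IMU logging in task 1:

```python
import numpy as np

def complementary_demo(duration_s=60.0, fs=100, alpha=0.98, seed=0):
    """Tasks 2-4: gyro-only integration drifts; the complementary filter does not."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    dt = 1.0 / fs
    true_theta = 10.0                                    # constant 10 deg tilt
    gyro = rng.normal(0.5, 0.05, n)                      # deg/s: a 0.5 deg/s bias causes drift
    theta_accel = true_theta + rng.normal(0.0, 3.0, n)   # noisy tilt from the accelerometer

    theta_gyro, theta_comp = 0.0, 0.0
    comp_tail = []
    for k in range(n):
        theta_gyro += gyro[k] * dt                       # task 2: integrates the bias too
        theta_comp = alpha * (theta_comp + gyro[k] * dt) + (1 - alpha) * theta_accel[k]  # task 4
        if k >= n - fs:                                  # keep the last second of estimates
            comp_tail.append(theta_comp)

    gyro_err = abs(theta_gyro - true_theta)              # large after 60 s of drift
    comp_err = abs(float(np.mean(comp_tail)) - true_theta)
    return gyro_err, comp_err
```

Running this shows the gyro-only estimate tens of degrees off after a minute while the complementary filter stays within about a degree of truth; sweeping alpha reproduces the tuning trade-off from task 4.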

Exercise 3: Multi-Sensor Data Quality and Outlier Rejection

Objective: Implement outlier detection and sensor health monitoring to ensure fusion reliability.

Tasks:

  1. Simulate 3 temperature sensors: Sensor A (sigma = 2 °C), Sensor B (sigma = 1 °C), Sensor C (sigma = 0.5 °C, but occasional outliers)
  2. Implement inverse variance weighting: w_i = (1/sigma_i^2) / sum(1/sigma_j^2), fused = sum(w_i * measurement_i)
  3. Add Mahalanobis distance outlier detection: d^2 = (z - mu)^2 / sigma^2; reject if d^2 > 3.841 (chi-squared 95% threshold for 1 degree of freedom)
  4. Inject outliers (Sensor C reads 50 °C when truth is 22 °C) and verify the filter rejects them

Expected Outcome: Fusion weights Sensor C highest under normal conditions. When outliers occur, Mahalanobis detector rejects them, fusion uses only A and B. Understand robustness: system continues even if sensors fail.
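Tasks 2-3 combine into a few lines; here mu is taken to be the previous fused estimate (an assumption – the exercise leaves the reference value open):

```python
import numpy as np

def fuse_with_gating(measurements, sigmas, mu, gate=3.841):
    """Inverse-variance fusion after a chi-squared (95%, 1 dof) Mahalanobis gate."""
    z = np.asarray(measurements, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    d2 = (z - mu) ** 2 / s ** 2          # task 3: squared Mahalanobis distance per sensor
    keep = d2 <= gate                    # reject readings beyond the 95% threshold
    w = 1.0 / s[keep] ** 2               # task 2: inverse-variance weights
    return float(np.sum(w * z[keep]) / np.sum(w)), keep
```

Feeding it the task-4 scenario – readings [22.5, 21.8, 50.0] with sigmas [2, 1, 0.5] around a prior of 22 °C – rejects Sensor C's 50 °C spike and fuses A and B alone, exactly the robustness behavior the expected outcome describes.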

Try It: Inverse Variance Weight Calculator

Exercise 4: Hierarchical Fusion Architecture

Objective: Design and implement a multi-level fusion system combining low-level, feature-level, and decision-level fusion.

Tasks:

  1. Low-level fusion: Combine 3 redundant temperature sensors using Kalman filter -> single fused estimate
  2. Feature-level fusion: Extract features from accelerometer and gyroscope -> concatenate into feature vector [accel_mean, accel_std, gyro_mean, gyro_std]
  3. Decision-level fusion: Train classifiers on each sensor independently -> combine decisions using majority voting
  4. Compare performance: measure accuracy, latency, and computational cost for each level

Expected Outcome: Low-level fusion provides optimal accuracy but high computation. Feature-level enables ML classification with 80-90% accuracy. Decision-level is most flexible but suboptimal accuracy. Understand trade-offs: choose level based on application requirements.
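The feature-level and decision-level stages (tasks 2-3) can be sketched as below; the feature names and the "occupied"/"empty" labels are illustrative, not a prescribed API:

```python
import numpy as np
from collections import Counter

def extract_features(accel, gyro):
    """Feature-level fusion: concatenate per-sensor statistics into one vector
    [accel_mean, accel_std, gyro_mean, gyro_std] for a downstream classifier."""
    return np.array([np.mean(accel), np.std(accel), np.mean(gyro), np.std(gyro)])

def majority_vote(decisions):
    """Decision-level fusion: each per-sensor classifier casts one vote;
    the most common decision wins."""
    return Counter(decisions).most_common(1)[0][0]
```

A feature vector feeds a single classifier (feature-level), while majority_vote combines already-made decisions (decision-level) – the same split the performance comparison in task 4 is meant to quantify.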

25.2 Videos

This video explains the intuition behind Kalman filters for sensor fusion, including the predict-update cycle and optimal weighting.

Learn how particle filters handle non-linear, non-Gaussian systems for indoor localization and tracking.

This video covers machine learning approaches for sensor signal processing, including feature extraction and classification.

25.3 Resources

25.3.1 Libraries

25.3.2 Papers

  • Madgwick, S. (2010). “An efficient orientation filter for IMU and MARG sensor arrays”
  • Welch, G. & Bishop, G. (2006). “An Introduction to the Kalman Filter”

25.3.3 Books

  • Hall, D. L. & Llinas, J. (2001). “Handbook of Multisensor Data Fusion”
  • “Kalman Filtering: Theory and Practice Using MATLAB” by Grewal & Andrews
  • “Multisensor Data Fusion” by Martin Liggins et al.

Practice makes perfect – even for sensors!

25.3.4 The Sensor Squad Adventure: Training Day

It was Training Day at Sensor Squad Academy, and Coach Controller had set up FOUR challenges for the team!

Challenge 1 – The Tracking Race: Sammy the Sensor had to follow a toy car around a track. GPS Gloria shouted positions every second: “The car is at marker 10… now marker 12!” But Gloria was not very accurate. Accel Andy added, “I felt the car speed up!” By combining Gloria’s positions with Andy’s speed sensing, Sammy could track the car MUCH better than either alone. “That is a Kalman filter!” said Coach Controller.

Challenge 2 – The Balance Beam: Lila the LED had to walk across a balance beam while holding a tray. Gyro Greg said, “You just tilted left!” and Accel Annie said, “Gravity says you are leaning right!” By listening to Greg 98% and Annie 2%, Lila stayed perfectly balanced. “Complementary filter!” cheered the crowd.

Challenge 3 – The Temperature Test: Max the Microcontroller had three thermometers, but one of them sometimes showed crazy readings like 500 degrees! Max learned to check: “If a reading is WAY different from the others, it is probably wrong – throw it out!” The other two thermometers, weighted by their accuracy, gave a great answer.

Challenge 4 – The Big Picture: Bella the Battery had to combine clues from ALL the sensors in the school – temperature, motion, light, and sound – to figure out which classrooms were empty. Some sensors gave raw numbers, some gave features like “loud” or “quiet,” and some gave decisions like “occupied” or “empty.” Bella learned to combine them in LAYERS!

“Remember,” said Coach Controller, “practice these four skills and you can build ANY sensor fusion system!”

25.3.5 Key Words for Kids

  • Kalman Filter: A math trick that combines position and speed sensors for better tracking
  • Complementary Filter: Combining a fast-but-drifty sensor with a slow-but-steady sensor
  • Outlier Rejection: Throwing out readings that are obviously wrong
  • Hierarchical Fusion: Combining sensor data in layers, from raw to features to decisions

Key Takeaway

Sensor fusion skills are best learned through hands-on practice. Start with a 1D Kalman filter for position tracking to build intuition about the predict-update cycle. Then implement a complementary filter for IMU orientation to understand frequency-domain sensor complementarity. Add outlier detection to handle real-world sensor failures, and finally design hierarchical architectures for complex multi-sensor systems.

Worked Example: Drone Attitude Estimation

A quadcopter drone uses an MPU6050 IMU (accelerometer + gyroscope) for stabilization. The flight controller needs pitch and roll angles updated at 100 Hz with <1° accuracy. Implement a complementary filter combining both sensors.

Sensor Characteristics:

  • Accelerometer: Measures gravity direction → computes pitch/roll from tilt
    • Pros: No drift, absolute reference
    • Cons: Noisy from vibrations (±5° noise at 100 Hz)
  • Gyroscope: Measures angular velocity → integrates to angle
    • Pros: Fast response, smooth (±0.1° noise)
    • Cons: Drifts 2-5°/minute from bias

Complementary Filter Implementation:

How does a complementary filter balance gyroscope drift vs accelerometer noise?

The complementary filter combines two sensors with opposite error characteristics using a single parameter \(\alpha\) (trust factor):

Filter Equation: \[ \theta_{k} = \alpha(\theta_{k-1} + \omega \Delta t) + (1 - \alpha)\theta_{\text{accel}} \]

Where:

  • \(\alpha = 0.98\): trust the gyroscope 98%, the accelerometer 2%
  • \(\omega\): angular velocity from the gyroscope (rad/s)
  • \(\Delta t = 0.01\) s: the 100 Hz update rate
  • \(\theta_{\text{accel}} = \operatorname{atan2}(a_y, a_z)\): angle from the accelerometer

Frequency Response shows why this works: \[ f_{\text{cutoff}} = \frac{1 - \alpha}{2\pi \Delta t} = \frac{1 - 0.98}{2\pi \times 0.01} = 0.32 \text{ Hz} \]

  • Below 0.32 Hz (slow drift): the accelerometer dominates → corrects gyro bias
  • Above 0.32 Hz (vibration): the gyroscope dominates → filters accel noise

Drift Correction Time Constant: \[ \tau = -\frac{\Delta t}{\ln(\alpha)} = -\frac{0.01}{\ln(0.98)} = 0.50 \text{ seconds} \]

This means a 10° gyro error reduces to 3.7° after 0.5 seconds, 1.4° after 1 second, and 0.5° after 1.5 seconds – fast enough for drone stabilization while filtering vibrations.

Try It: Complementary Filter Alpha Explorer

// Global state
float pitch = 0, roll = 0;  // Estimated angles
const float alpha = 0.98;    // Trust gyro 98%, accel 2%
const float dt = 0.01;       // 100 Hz = 10 ms update rate

void update_attitude() {
  // 1. Read sensors
  float accel_x, accel_y, accel_z;  // m/s^2
  float gyro_x, gyro_y, gyro_z;      // rad/s
  read_mpu6050(&accel_x, &accel_y, &accel_z,
               &gyro_x, &gyro_y, &gyro_z);

  // 2. Compute pitch/roll from accelerometer (gravity direction)
  float pitch_accel = atan2(accel_y, accel_z) * 180 / PI;
  float roll_accel = atan2(-accel_x, sqrt(accel_y*accel_y + accel_z*accel_z)) * 180 / PI;

  // 3. Integrate gyroscope (predict angle from angular velocity)
  float pitch_gyro = pitch + gyro_x * dt * 180 / PI;
  float roll_gyro = roll + gyro_y * dt * 180 / PI;

  // 4. Complementary filter (high-pass gyro + low-pass accel)
  pitch = alpha * pitch_gyro + (1 - alpha) * pitch_accel;
  roll = alpha * roll_gyro + (1 - alpha) * roll_accel;
}

Frequency Domain Explanation:

  • High-pass filter (gyro): Trust gyroscope for fast changes (motor corrections, wind gusts)
    • Transfer function: H_gyro(s) = alpha * s / (s + omega_c) where omega_c = (1-alpha)/dt
    • Passes frequencies above cutoff (~0.32 Hz for alpha=0.98, dt=0.01)
  • Low-pass filter (accel): Trust accelerometer for slow drift correction
    • Transfer function: H_accel(s) = omega_c / (s + omega_c)
    • Passes frequencies below cutoff (~0.32 Hz)

Tuning Alpha Parameter:

  • α = 0.90 (90% gyro / 10% accel): slower response, fast drift correction – best for calm flight (quick drift correction)
  • α = 0.98 (98% gyro / 2% accel): medium response, medium drift correction – best for balanced stability
  • α = 0.995 (99.5% gyro / 0.5% accel): fast response, slow drift correction – best for racing drones (trust gyro, smooth response)

Measured Performance (100 Hz update, alpha=0.98):

  • Steady-state error: <0.5° (accel corrects gyro drift)
  • Noise: ±0.3° (gyro smooths accel vibrations)
  • Settling time: 2 seconds after 20° disturbance
  • CPU: <1% (simple arithmetic, no matrix operations)

Comparison to Kalman Filter:

  • Complementary filter: simple, near-optimal accuracy, 1-5% CPU
  • Kalman filter: optimal (under Gaussian assumptions), up to 25% CPU, requires tuning Q/R matrices

Result: Complementary filter achieves <1° accuracy requirement with minimal computational cost, making it ideal for resource-constrained embedded flight controllers.

Algorithm Selection at a Glance:

  • Weighted Average: very low complexity, 70-80% accuracy, <1% CPU, <1 KB RAM. Use for redundant sensors (same type) when static weights are acceptable.
  • Complementary Filter: low complexity, 90-95% accuracy, 1-5% CPU, <1 KB RAM. Use for 2 sensors with complementary error characteristics (fast/noisy + slow/accurate).
  • Kalman Filter: medium complexity, 95-99% accuracy, 5-20% CPU, 10-100 KB RAM. Use when optimal fusion is needed and sensors have known (Gaussian) noise models.
  • Extended Kalman Filter (EKF): high complexity, 95-99% accuracy, 20-50% CPU, 50-500 KB RAM. Use for non-linear system dynamics (e.g., GPS/IMU for UAVs).
  • Particle Filter: very high complexity, 90-98% accuracy, 50-80% CPU, 500 KB-5 MB RAM. Use for non-Gaussian noise and multi-modal distributions (indoor localization).
  • LSTM Autoencoder: extreme complexity, 85-95% accuracy, GPU required, 10-100 MB RAM. Use for collective anomalies and pattern-based fusion (predictive maintenance).

Decision Tree:

  1. Are sensors measuring the same quantity (redundant)?
    • Yes + Equal quality → Simple average
    • Yes + Different quality → Weighted average (inverse variance weighting)
    • No → Continue
  2. Do you have 2 sensors with complementary errors? (one fast/noisy, one slow/accurate)
    • Yes + Embedded system (<10% CPU budget) → Complementary Filter
    • Yes + Need optimal performance → Kalman Filter
    • No → Continue
  3. Are sensor error characteristics Gaussian and well-modeled?
    • Yes + Linear dynamics → Kalman Filter
    • Yes + Non-linear dynamics → Extended Kalman Filter (EKF)
    • No → Continue
  4. Is error distribution non-Gaussian or multi-modal?
    • Yes + Real-time constraints → Particle Filter (if GPU available)
    • Yes + Offline processing → Batch Kalman Smoother
    • No → Continue
  5. Do you need to fuse >5 heterogeneous sensors?
    • Yes + Pattern-based → LSTM/Deep Learning
    • Yes + Model-based → Multi-sensor Kalman Filter

Quick Selection Guide by Application:

  • Drone/UAV stabilization: Complementary Filter (fast, accurate enough, low CPU)
  • Autonomous vehicle positioning: EKF with GPS + IMU + wheel odometry (non-linear, requires optimal fusion)
  • Indoor robot localization: Particle Filter (multi-modal: position ambiguity in rooms)
  • Fitness tracker step counting: Weighted average of accel + gyro (simple, low power)
  • Industrial vibration monitoring: Kalman Filter (known Gaussian sensor noise)
  • Smart home temperature control: Weighted average of 3 temperature sensors (redundant, static weights)

Common Mistake: Using Kalman Filter Without Understanding Q and R Matrices

The Error: An engineer copies Kalman filter tutorial code for GPS/IMU fusion but gets worse performance than GPS alone. The filter either ignores GPS updates (trusting the IMU too much) or oscillates wildly (trusting GPS too much).

The Problem - Mistuned Process Noise (Q) and Measurement Noise (R):

Scenario: GPS/IMU fusion for vehicle position tracking

  • GPS: σ = 5 meters accuracy
  • IMU: integrates acceleration to position, accumulates drift

Default Tutorial Code:

import numpy as np

Q = np.eye(4) * 0.1  # process noise (copied from tutorial)
R = np.eye(2) * 1.0  # measurement noise (copied from tutorial)

What Happens:

  • Q too small (0.1) → Filter assumes vehicle dynamics are very predictable
    • When vehicle brakes suddenly, filter ignores it (thinks it’s noise)
    • Position estimate drifts from reality
  • R too small (1.0 when GPS variance is actually 25) → Filter over-trusts GPS
    • Treats GPS as having sigma = 1m accuracy when actual sigma = 5m
    • Filter oscillates, tracking noise instead of true position

Correct Approach - Tune Q and R Based on Physics:

import numpy as np

# Process noise Q: how much can the vehicle state change between updates?
# State: [x, y, vx, vy]
dt = 0.1  # s, filter update interval
a = 5.0   # m/s^2, maximum expected acceleration
max_position_change = 0.5 * a * dt**2   # = 0.025 m
max_velocity_change = a * dt            # = 0.5 m/s

Q = np.diag([
    max_position_change**2,
    max_position_change**2,
    max_velocity_change**2,
    max_velocity_change**2,
])  # process noise matched to vehicle dynamics

# Measurement noise R: the GPS specification sheet says sigma = 5 m
R = np.diag([
    5.0**2,   # GPS x variance = 25 m^2
    5.0**2,   # GPS y variance = 25 m^2
])  # measurement noise from the datasheet

Results After Tuning:

  • Before: RMS error = 8.5 m (worse than GPS alone at 5 m)
  • After: RMS error = 2.3 m (2x better than GPS)
  • Filter now trusts GPS appropriately and tracks sudden maneuvers

Key Lesson:

  • Q (process noise): Models how much your state can change between updates (physics-based)
    • Too small → Filter rigid, ignores real dynamics, poor tracking
    • Too large → Filter trusts measurements over model, noisy estimates
  • R (measurement noise): Models sensor accuracy (from datasheet or calibration)
    • Too small → Filter over-trusts noisy sensor, amplifies noise
    • Too large → Filter ignores sensor, drifts from reality

Tuning Guidelines:

  1. Start with R from sensor datasheets (manufacturer specs)
  2. Tune Q experimentally: Record true trajectory, try Q values from 0.01 to 10.0, pick minimum RMS error
  3. Validate: Test on unseen data, ensure filter performs consistently
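Step 2 can be automated as a simple grid search; run_filter here is a hypothetical callable that wraps your Kalman filter with a given Q scale and returns its position estimates:

```python
import numpy as np

def tune_q(run_filter, q_candidates, truth):
    """Grid-search the Q scale: run the filter at each candidate, keep the lowest RMSE."""
    best_q, best_rmse = None, np.inf
    for q in q_candidates:
        estimate = run_filter(q)                                  # filter output for this Q
        rmse = float(np.sqrt(np.mean((estimate - truth) ** 2)))   # error vs recorded truth
        if rmse < best_rmse:
            best_q, best_rmse = q, rmse
    return best_q, best_rmse
```

Sweep candidates logarithmically (e.g., 0.01 to 10.0 as suggested above) and validate the winning Q on a trajectory the search never saw.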

Never copy Q/R values from tutorials without understanding their physical meaning - they are application-specific!


25.4 What’s Next

If you want to…

  • Study the theoretical underpinnings of Kalman filtering → Kalman Filter for IoT
  • Explore the complementary filter as a simpler alternative → Complementary Filter and IMU Fusion
  • Understand fusion architectures for system design → Data Fusion Architectures
  • Apply fusion in real IoT applications → Data Fusion Applications
  • Return to the module overview → Data Fusion Introduction