This chapter provides hands-on practice exercises covering the full spectrum of sensor fusion techniques: Kalman filter implementation for position tracking, complementary filters for IMU orientation, multi-sensor data quality assessment with outlier rejection, and hierarchical fusion architecture design. Each exercise builds practical skills for real-world IoT deployments.
Learning Objectives
After completing these exercises, you will be able to:
Implement Kalman filters for position tracking
Design complementary filters for IMU orientation
Build multi-sensor data quality assessments
Create hierarchical sensor fusion architectures
Key Concepts
Fusion algorithm implementation: The process of translating a mathematical fusion model (Kalman equations, complementary filter formula) into working code that processes real or simulated sensor data.
Simulation-based validation: Testing a fusion algorithm against synthetic sensor data with known ground truth, allowing quantitative evaluation of estimation accuracy before deploying on real hardware.
Monte Carlo evaluation: Running a fusion algorithm hundreds of times with randomly perturbed sensor parameters to assess robustness and identify failure modes statistically.
Root mean square error (RMSE): A standard metric for fusion accuracy measuring the average magnitude of estimation error across all time steps; lower RMSE indicates better fusion performance.
Sensor noise model: A mathematical description of how a sensor’s errors are distributed (typically Gaussian with specified mean and variance), used to tune fusion filter parameters.
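As a concrete companion to the RMSE definition above, here is a minimal helper (the `rmse` function name is ours, not part of the exercises):

```python
import numpy as np

def rmse(estimate, truth):
    """Root mean square error between an estimated trajectory and ground truth."""
    err = np.asarray(estimate) - np.asarray(truth)
    return float(np.sqrt(np.mean(err ** 2)))

# Example: a constant 2 m offset gives an RMSE of exactly 2 m
print(rmse([12.0, 14.0, 16.0], [10.0, 12.0, 14.0]))  # → 2.0
```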
For Beginners: Sensor Fusion Practice Exercises
Sensor fusion is like asking multiple witnesses about the same event – each one saw something slightly different, but by combining their accounts, you get a much clearer picture of what actually happened. In IoT, individual sensors are noisy and imperfect, but when you combine readings from multiple sensors using mathematical techniques, the result is far more accurate than any single sensor alone. These exercises walk you through the core techniques, starting simple and building up to more complex systems.
25.1 Practice Exercises
Exercise 1: Kalman Filter Implementation for Position Tracking
Objective: Implement a 1D Kalman filter to fuse noisy GPS position measurements with accelerometer-based velocity estimates.
Tasks:
Generate synthetic data: true position following constant-velocity motion, GPS measurements with σ = 5 m noise, accelerometer with σ = 0.5 m/s² noise
Implement a Kalman filter with state [position, velocity]: prediction step (\(x_k = F x_{k-1}\), \(P_k = F P_{k-1} F^T + Q\)), update step (\(K = P H^T (H P H^T + R)^{-1}\), \(x = x + K(z - Hx)\))
Tune parameters: process noise Q (model uncertainty), measurement noise R (sensor variance), initial state x_0 and covariance P_0
Compare performance: plot true position, noisy GPS, Kalman estimate; calculate RMS error for each
Expected Outcome: The Kalman filter achieves 2-3 m RMS error (40-50% better than GPS alone at 5 m). Understand that the filter smooths noise while tracking motion trends. Learn parameter tuning: too small a Q makes the filter overconfident in its model (slow to adapt to real dynamics), while too large an R makes the filter distrust and ignore measurements.
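The exercise can be sketched as follows. This is a minimal, self-contained illustration: the Q value, random seed, and 2 m/s true velocity are our choices, and for brevity the sketch fuses only the GPS position channel (adding accelerometer-derived velocity as a second row of H is the natural extension).

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1.0, 100
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
H = np.array([[1.0, 0.0]])             # GPS observes position only
Q = np.diag([0.05, 0.05])              # process noise (tuning parameter)
R = np.array([[25.0]])                 # GPS variance: sigma = 5 m

# Synthetic truth: constant 2 m/s velocity; GPS = truth + N(0, 5^2)
truth = 2.0 * dt * np.arange(n)
gps = truth + rng.normal(0.0, 5.0, n)

x = np.array([[0.0], [0.0]])           # initial state [position, velocity]
P = np.eye(2) * 100.0                  # large initial uncertainty
est = []
for z in gps:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0, 0])

est = np.array(est)
print("GPS RMSE:   ", np.sqrt(np.mean((gps - truth) ** 2)))
print("Filter RMSE:", np.sqrt(np.mean((est - truth) ** 2)))
```

The filter RMSE should come out well below the raw GPS RMSE once the estimate converges.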
Exercise 2: Complementary Filter for IMU Orientation
Objective: Implement a complementary filter to fuse gyroscope and accelerometer data for drift-free orientation estimation.
Tasks:
Collect IMU data from sensor (MPU6050 or phone): gyroscope (angular velocity w) and accelerometer (gravity direction) at 100 Hz
Implement gyroscope-only integration: theta_k = theta_{k-1} + w * dt; observe drift over 60 seconds
Implement accelerometer-only: theta = atan2(accel_y, accel_z); observe noise from vibrations
Implement complementary filter: theta = alpha * (theta + w * dt) + (1-alpha) * theta_accel with alpha=0.98
Expected Outcome: Complementary filter achieves <1 deg steady-state error with rapid response to rotations. Understand frequency separation: gyroscope (high-pass) + accelerometer (low-pass). Learn alpha tuning: higher alpha = trust gyro more, lower alpha = trust accel more.
Exercise 3: Multi-Sensor Data Quality Assessment
Objective: Implement outlier detection and sensor health monitoring to ensure fusion reliability.
Tasks:
Simulate 3 temperature sensors: Sensor A (sigma=2C), Sensor B (sigma=1C), Sensor C (sigma=0.5C, but occasional outliers)
Add Mahalanobis distance outlier detection: d^2 = (z - mu)^2 / sigma^2; reject if d^2 > 3.841 (chi-squared 95% threshold for 1 degree of freedom)
Inject outliers (Sensor C reads 50C when truth is 22C) and verify filter rejects outliers
Expected Outcome: Fusion weights Sensor C highest under normal conditions. When outliers occur, Mahalanobis detector rejects them, fusion uses only A and B. Understand robustness: system continues even if sensors fail.
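Combining the Mahalanobis gate with inverse-variance weighting might look like the following sketch; the `fuse_with_rejection` helper and the use of a reference value (e.g. the last fused estimate) for screening are our assumptions.

```python
import numpy as np

CHI2_95_1DOF = 3.841  # 95% chi-squared threshold, 1 degree of freedom

def fuse_with_rejection(readings, sigmas, prior_mean):
    """Inverse-variance fusion after Mahalanobis outlier rejection.

    readings/sigmas: per-sensor measurement and noise std-dev.
    prior_mean: reference value used to screen outliers.
    Assumes at least one sensor passes the gate.
    """
    kept = [(z, s) for z, s in zip(readings, sigmas)
            if (z - prior_mean) ** 2 / s ** 2 <= CHI2_95_1DOF]
    w = np.array([1.0 / s ** 2 for _, s in kept])   # inverse-variance weights
    z = np.array([z for z, _ in kept])
    return float(np.sum(w * z) / np.sum(w)), len(kept)

# Normal case: all three sensors agree near 22 C; Sensor C dominates the weight
est, n_used = fuse_with_rejection([22.4, 21.8, 22.1], [2.0, 1.0, 0.5], 22.0)
print(est, n_used)

# Fault case: Sensor C spikes to 50 C, fails the gate, and only A and B are fused
est, n_used = fuse_with_rejection([22.4, 21.8, 50.0], [2.0, 1.0, 0.5], 22.0)
print(est, n_used)
```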
Try It: Inverse Variance Weight Calculator
viewof sigma_A = Inputs.range([0.1, 10], {value: 2.0, step: 0.1, label: "Sensor A noise, sigma_A (C)"})
viewof sigma_B = Inputs.range([0.1, 10], {value: 1.0, step: 0.1, label: "Sensor B noise, sigma_B (C)"})
viewof sigma_C = Inputs.range([0.1, 10], {value: 0.5, step: 0.1, label: "Sensor C noise, sigma_C (C)"})
{
  const w_A = 1 / (sigma_A * sigma_A);
  const w_B = 1 / (sigma_B * sigma_B);
  const w_C = 1 / (sigma_C * sigma_C);
  const total = w_A + w_B + w_C;
  const pct_A = (w_A / total * 100).toFixed(1);
  const pct_B = (w_B / total * 100).toFixed(1);
  const pct_C = (w_C / total * 100).toFixed(1);
  return html`<div style="background: var(--bs-light, #f8f9fa); padding: 1em; border-left: 4px solid #3498DB; border-radius: 4px; font-family: sans-serif;">
    <strong>Inverse Variance Weights:</strong><br>
    Sensor A: w = 1/${sigma_A.toFixed(1)}^2 = ${w_A.toFixed(3)} → <strong>${pct_A}%</strong><br>
    Sensor B: w = 1/${sigma_B.toFixed(1)}^2 = ${w_B.toFixed(3)} → <strong>${pct_B}%</strong><br>
    Sensor C: w = 1/${sigma_C.toFixed(1)}^2 = ${w_C.toFixed(3)} → <strong>${pct_C}%</strong><br>
    <em>Lower noise = higher weight. The most precise sensor dominates the fused estimate.</em>
  </div>`;
}
Exercise 4: Hierarchical Fusion Architecture Design
Objective: Design and implement a multi-level fusion system combining low-level, feature-level, and decision-level fusion.
Tasks:
Low-level fusion: Combine 3 redundant temperature sensors using Kalman filter -> single fused estimate
Feature-level fusion: Extract features from accelerometer and gyroscope -> concatenate into feature vector [accel_mean, accel_std, gyro_mean, gyro_std]
Decision-level fusion: Train classifiers on each sensor independently -> combine decisions using majority voting
Compare performance: measure accuracy, latency, and computational cost for each level
Expected Outcome: Low-level fusion provides optimal accuracy but high computation. Feature-level enables ML classification with 80-90% accuracy. Decision-level is most flexible but suboptimal accuracy. Understand trade-offs: choose level based on application requirements.
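The decision-level stage above can be sketched with a minimal majority-vote combiner (the activity labels are placeholders):

```python
from collections import Counter

def majority_vote(decisions):
    """Combine independent per-sensor classifier decisions by majority vote."""
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    return label

# Two of three sensor-specific classifiers say "walking", so the fused decision is "walking"
print(majority_vote(["walking", "walking", "standing"]))  # → walking
```

The same pattern extends to weighted voting by scaling each classifier's vote by its validation accuracy.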
Quiz: Sensor Fusion Concepts
Check Your Understanding: Fusion Algorithm Selection
25.2 Videos
Video: Understanding Kalman Filters
This video explains the intuition behind Kalman filters for sensor fusion, including the predict-update cycle and optimal weighting.
Video: Particle Filter Explained
Learn how particle filters handle non-linear, non-Gaussian systems for indoor localization and tracking.
Video: Signal Processing ML for Sensor Data
This video covers machine learning approaches for sensor signal processing, including feature extraction and classification.
25.3.2 Papers
Madgwick, S. (2010). “An efficient orientation filter for IMU and MARG sensor arrays”
Welch, G. & Bishop, G. (2006). “An Introduction to the Kalman Filter”
Hall, D. L. & Llinas, J. (2001). “Handbook of Multisensor Data Fusion”
25.3.3 Books
“Kalman Filtering: Theory and Practice Using MATLAB” by Grewal & Andrews
“Multisensor Data Fusion” by Martin Liggins et al.
For Kids: Meet the Sensor Squad!
Practice makes perfect – even for sensors!
25.3.4 The Sensor Squad Adventure: Training Day
It was Training Day at Sensor Squad Academy, and Coach Controller had set up FOUR challenges for the team!
Challenge 1 – The Tracking Race: Sammy the Sensor had to follow a toy car around a track. GPS Gloria shouted positions every second: “The car is at marker 10… now marker 12!” But Gloria was not very accurate. Accel Andy added, “I felt the car speed up!” By combining Gloria’s positions with Andy’s speed sensing, Sammy could track the car MUCH better than either alone. “That is a Kalman filter!” said Coach Controller.
Challenge 2 – The Balance Beam: Lila the LED had to walk across a balance beam while holding a tray. Gyro Greg said, “You just tilted left!” and Accel Annie said, “Gravity says you are leaning right!” By listening to Greg 98% and Annie 2%, Lila stayed perfectly balanced. “Complementary filter!” cheered the crowd.
Challenge 3 – The Temperature Test: Max the Microcontroller had three thermometers, but one of them sometimes showed crazy readings like 500 degrees! Max learned to check: “If a reading is WAY different from the others, it is probably wrong – throw it out!” The other two thermometers, weighted by their accuracy, gave a great answer.
Challenge 4 – The Big Picture: Bella the Battery had to combine clues from ALL the sensors in the school – temperature, motion, light, and sound – to figure out which classrooms were empty. Some sensors gave raw numbers, some gave features like “loud” or “quiet,” and some gave decisions like “occupied” or “empty.” Bella learned to combine them in LAYERS!
“Remember,” said Coach Controller, “practice these four skills and you can build ANY sensor fusion system!”
25.3.5 Key Words for Kids
Word
What It Means
Kalman Filter
A math trick that combines position and speed sensors for better tracking
Complementary Filter
Combining a fast-but-drifty sensor with a slow-but-steady sensor
Outlier Rejection
Throwing out readings that are obviously wrong
Hierarchical Fusion
Combining sensor data in layers, from raw to features to decisions
Key Takeaway
Sensor fusion skills are best learned through hands-on practice. Start with a 1D Kalman filter for position tracking to build intuition about the predict-update cycle. Then implement a complementary filter for IMU orientation to understand frequency-domain sensor complementarity. Add outlier detection to handle real-world sensor failures, and finally design hierarchical architectures for complex multi-sensor systems.
Worked Example: Implementing Complementary Filter for Drone Orientation
A quadcopter drone uses an MPU6050 IMU (accelerometer + gyroscope) for stabilization. The flight controller needs pitch and roll angles updated at 100 Hz with <1° accuracy. Implement a complementary filter combining both sensors.
Sensor Characteristics:
Accelerometer: Measures gravity direction → computes pitch/roll from tilt
Pros: No drift, absolute reference
Cons: Noisy from vibrations (±5° noise at 100 Hz)
Gyroscope: Measures angular velocity → integrates to angle
Pros: Fast response, smooth (±0.1° noise)
Cons: Drifts 2-5°/minute from bias
Complementary Filter Implementation:
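A minimal sketch of the filter for the pitch axis follows; the `update_pitch` helper name, axis conventions, and the hover example are our assumptions (a real flight controller would run the same update for roll and handle raw-sensor scaling).

```python
import math

ALPHA = 0.98  # trust factor: 98% gyroscope, 2% accelerometer
DT = 0.01     # 100 Hz update rate

def update_pitch(pitch, gyro_rate, ax, ay, az):
    """One complementary-filter step for the pitch axis (angles in degrees).

    pitch:     previous filtered pitch estimate (deg)
    gyro_rate: angular rate about the pitch axis (deg/s)
    ax,ay,az:  accelerometer components in any consistent unit (g or m/s^2)
    """
    # Absolute but vibration-noisy tilt from the gravity direction
    accel_pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    # Smooth but drifting gyro integration, nudged toward accel_pitch
    return ALPHA * (pitch + gyro_rate * DT) + (1.0 - ALPHA) * accel_pitch

# Hover example: level attitude, 5 deg initial error, constant 1 deg/s gyro bias
pitch = 5.0
for _ in range(500):  # 5 seconds at 100 Hz
    pitch = update_pitch(pitch, gyro_rate=1.0, ax=0.0, ay=0.0, az=1.0)
print(round(pitch, 2))  # initial error washed out; ~0.49 deg residual from the bias
```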
Putting Numbers to It
How does a complementary filter balance gyroscope drift vs accelerometer noise?
The complementary filter combines two sensors with opposite error characteristics using a single parameter \(\alpha\) (trust factor):

\[\theta_k = \alpha\,(\theta_{k-1} + \omega_k\,\Delta t) + (1 - \alpha)\,\theta_{\text{accel},k}\]

Because each update multiplies any accumulated gyro error by \(\alpha\), the accelerometer term steadily pulls the estimate back toward the true angle. This means a 10° gyro error reduces to 3.7° after 0.5 seconds, 1.4° after 1 second, and 0.5° after 1.5 seconds – fast enough for drone stabilization while filtering vibrations.
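Those decay figures follow from the filter's effective time constant \(\tau = \Delta t / (1 - \alpha)\), which is 0.5 s at \(\alpha = 0.98\) and 100 Hz; a quick check:

```python
import math

ALPHA, DT = 0.98, 0.01       # trust factor and 100 Hz sample period
tau = DT / (1.0 - ALPHA)     # effective time constant = 0.5 s

for t in (0.5, 1.0, 1.5):
    residual = 10.0 * math.exp(-t / tau)  # remaining share of a 10 deg gyro error
    print(f"after {t} s: {residual:.1f} deg")
```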
Result: Complementary filter achieves <1° accuracy requirement with minimal computational cost, making it ideal for resource-constrained embedded flight controllers.
Common Mistake: Using Kalman Filter Without Understanding Q and R Matrices
The Error: An engineer copies Kalman filter tutorial code for GPS/IMU fusion but gets worse performance than using GPS alone. The filter either ignores GPS updates (trusting the IMU too much) or oscillates wildly (trusting GPS too much).
The Problem - Mistuned Process Noise (Q) and Measurement Noise (R):
Scenario: GPS/IMU fusion for vehicle position tracking
GPS: σ = 5 meters accuracy
IMU: integrates acceleration to position, accumulates drift
Default Tutorial Code:
Q = np.eye(4) * 0.1  # Process noise (copied from tutorial)
R = np.eye(2) * 1.0  # Measurement noise (copied from tutorial)
What Happens:
Q too small (0.1) → Filter assumes vehicle dynamics are very predictable
When vehicle brakes suddenly, filter ignores it (thinks it’s noise)
Position estimate drifts from reality
R too small (1.0 when GPS variance is actually 25) → Filter over-trusts GPS
Treats GPS as having sigma = 1m accuracy when actual sigma = 5m
Filter oscillates, tracking noise instead of true position
Correct Approach - Tune Q and R Based on Physics:
import numpy as np

# Process noise Q: how much can the vehicle state change between updates?
# State: [x, y, vx, vy]; at dt = 0.1 s, max acceleration a = 5 m/s^2
dt = 0.1
a = 5.0
max_position_change = 0.5 * a * dt**2  # = 0.025 m
max_velocity_change = a * dt           # = 0.5 m/s
Q = np.array([
    [max_position_change**2, 0, 0, 0],
    [0, max_position_change**2, 0, 0],
    [0, 0, max_velocity_change**2, 0],
    [0, 0, 0, max_velocity_change**2],
])  # Process noise matched to vehicle dynamics

# Measurement noise R: GPS specification sheet says sigma = 5 m
R = np.array([
    [5.0**2, 0],  # GPS x variance = 25 m^2
    [0, 5.0**2],  # GPS y variance = 25 m^2
])  # Measurement noise from datasheet
Results After Tuning:
Before: RMS error = 8.5 m (worse than GPS alone at 5 m)
After: RMS error = 2.3 m (2x better than GPS)
Filter now trusts GPS appropriately and tracks sudden maneuvers
Key Lesson:
Q (process noise): Models how much your state can change between updates (physics-based)
Too small → Filter rigid, ignores real dynamics, poor tracking
Too large → Filter trusts measurements over model, noisy estimates
R (measurement noise): Models sensor accuracy (from datasheet or calibration)
Too small → Filter over-trusts noisy sensor, amplifies noise
Too large → Filter ignores sensor, drifts from reality
Tuning Guidelines:
Start with R from sensor datasheets (manufacturer specs)
Tune Q experimentally: Record true trajectory, try Q values from 0.01 to 10.0, pick minimum RMS error
Validate: Test on unseen data, ensure filter performs consistently
Never copy Q/R values from tutorials without understanding their physical meaning - they are application-specific!
Common Pitfalls
1. Testing fusion algorithms only on clean, noise-free data
A Kalman filter tuned on perfect simulated data will fail in production where sensor noise, outliers, and calibration errors dominate. Always test with realistic noise models and inject known fault conditions.
2. Using the wrong noise covariance matrices in Kalman filters
The Q (process noise) and R (measurement noise) matrices are the most critical tuning parameters. Setting them incorrectly causes the filter to either ignore valid measurements (R too low) or trust them too much (R too high). Tune from real sensor data.
3. Implementing fusion without establishing a baseline
Without a single-sensor baseline to compare against, it is impossible to quantify the benefit of adding sensor fusion. Always implement and measure the simplest single-sensor approach first.
4. Not accounting for sensor temporal alignment in exercises
When working with real multi-sensor data, different sensors sample at different rates and timestamps may not align. Interpolate or re-sample to a common timebase before applying any fusion algorithm.