23  Sensor Fusion Applications

In 60 Seconds

Real-world sensor fusion applications combine multiple sensors to achieve accuracy no single sensor can provide: smartphones fuse accelerometer, gyroscope, and magnetometer data for 1-degree orientation accuracy, activity recognition uses feature-level fusion of motion sensors, and autonomous vehicles combine camera, LiDAR, and radar for safe navigation with uncertainty reduced from 0.50m to 0.04m.

Learning Objectives

After completing this chapter, you will be able to:

  • Explain how smartphones use sensor fusion for screen rotation
  • Apply feature extraction for activity recognition
  • Implement audio feature extraction (MFCC) for IoT
  • Design multi-sensor fusion for autonomous vehicles

Key Concepts

  • Autonomous navigation fusion: Combining GPS (global position), LiDAR (obstacle distance), camera (lane detection), and IMU (motion dynamics) to maintain accurate vehicle position and safe path planning even when individual sensors fail.
  • Indoor positioning: Determining device location inside buildings where GPS is unavailable by fusing Wi-Fi RSSI fingerprinting, BLE beacons, barometric pressure, and step counting.
  • Environmental monitoring fusion: Combining temperature, humidity, CO2, and particulate matter sensors to compute composite air quality indices with higher accuracy and coverage than any single sensor.
  • Smart grid sensor fusion: Integrating smart meter readings, substation telemetry, and weather data to detect grid faults, balance load, and predict demand with higher precision than single-source analytics.
  • Health monitoring fusion: Combining accelerometer, photoplethysmography (PPG), skin temperature, and galvanic skin response to infer health states (sleep quality, stress, activity) beyond what any single biosensor provides.
  • Structural health monitoring: Fusing vibration, strain, temperature, and acoustic emission sensors on bridges or buildings to detect fatigue cracks and structural damage earlier than visual inspection.

Sensor fusion is like combining your senses to understand the world better. Imagine trying to figure out if it is raining by only looking out the window (you might see wet pavement from a sprinkler) or only listening (you might hear a noisy air conditioner). But if you look AND listen AND feel the humidity, you get a much more accurate answer. Sensor fusion does the same thing with electronic sensors – combining readings from multiple sources to get answers that are far more accurate and reliable than any single sensor alone.

23.1 Smartphone Screen Rotation

When you rotate your phone from portrait to landscape, three sensors work together to make the screen flip smoothly.

23.1.1 The Sensors and Their Raw Data

Sensor        | What It Measures  | Portrait           | Landscape                | Weakness
Accelerometer | Gravity direction | X: 0, Y: 9.8, Z: 0 | X: 9.8, Y: 0, Z: 0       | Noisy (+/-0.5 m/s2)
Gyroscope     | Rotation speed    | 0 deg/s            | 90 deg/s during rotation | Drifts (+0.1 deg/s)
Magnetometer  | Magnetic north    | X: 20, Y: 40       | X: 40, Y: -20            | Metal interference

23.1.2 Without Fusion (Single Sensor Fails)

Accelerometer Only:

  • Reading: “Gravity points right -> 90 deg rotation”
  • Problem: Table bump causes vibration -> screen flickers!
  • Error: +/-15 deg jitter from hand shaking

Gyroscope Only:

  • Reading: “Rotating at 90 deg/s for 1 second -> 90 deg total”
  • Problem: Drift accumulates. After 10 minutes, gyro thinks phone rotated 60 deg!
  • Error: +6 deg/min drift

23.1.3 With Fusion (Complementary Filter)

orientation = 0.98 * (previous + gyro * dt) + 0.02 * accel_orientation

Time (s) | Gyro (deg/s) | Gyro Integrated   | Accel Reading  | Fused Result
0.0      | 0            | 0 deg             | 0 deg          | 0 deg
0.2      | 90           | 18 deg            | 15 deg (noisy) | 17.9 deg
0.4      | 90           | 36 deg            | 38 deg (noisy) | 36.0 deg
1.0      | 0            | 90 deg            | 92 deg (noisy) | 90.0 deg
10.0     | 0            | 90.6 deg (drift!) | 90 deg         | 90.0 deg (corrected)

The complementary filter is a weighted fusion: \(\theta_{fused} = \alpha \theta_{gyro} + (1-\alpha) \theta_{accel}\) where \(\alpha = 0.98\) controls trust in high-frequency (gyro) vs. low-frequency (accel) sources.

The filter’s effective time constant is:

\[\tau = \frac{\alpha \cdot dt}{1-\alpha}\]

For \(\alpha = 0.98\), \(dt = 0.01\) s (100 Hz sampling):

\[\tau = \frac{0.98 \times 0.01}{0.02} \approx 0.49 \text{ s}\]

The corresponding crossover frequency is \(f_c = \frac{1}{2\pi\tau} \approx 0.33\) Hz. This means:

  • Above 0.33 Hz (fast motions like hand tremors at 8-12 Hz): Gyro dominates, providing smooth tracking
  • Below 0.33 Hz (slow drift like gyro bias): Accel dominates, correcting accumulated drift

Example: At 5 Hz, the accelerometer’s contribution is attenuated by the low-pass portion of the filter. The gyro’s high-pass transfer function passes these fast signals cleanly, so hand tremor noise in the accelerometer is suppressed by approximately 98%.

Gyro drift at 0.01 Hz (well below \(f_c\)) is corrected by the accelerometer’s gravity reference, preventing unbounded error accumulation.
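
The time-constant and crossover formulas above take only a few lines to check (a sketch of the arithmetic, using the same alpha = 0.98 and dt = 0.01 s as the worked numbers):

```python
import math

def complementary_time_constant(alpha, dt):
    """Effective time constant tau = alpha * dt / (1 - alpha), in seconds."""
    return alpha * dt / (1 - alpha)

def crossover_frequency(tau):
    """Crossover frequency f_c = 1 / (2 * pi * tau), in Hz."""
    return 1.0 / (2.0 * math.pi * tau)

tau = complementary_time_constant(alpha=0.98, dt=0.01)  # 100 Hz sampling
fc = crossover_frequency(tau)
print(f"tau = {tau:.2f} s, f_c = {fc:.3f} Hz")  # tau = 0.49 s, f_c = 0.325 Hz
```

Changing alpha or dt here shows the design trade-off directly: a larger alpha (more gyro trust) pushes the crossover frequency lower, making drift correction slower but tremor rejection stronger.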

23.1.4 Results

Metric            | Accel Only | Gyro Only | Magnetometer Only | Fused
Accuracy          | +/-15 deg  | +/-5 deg  | +/-20 deg         | +/-1 deg
Latency           | 50 ms      | 10 ms     | 100 ms            | 15 ms
Drift over 10 min | 0 deg      | 60 deg    | 10 deg            | 0.5 deg

Real implementation (Android SensorManager):

  • Reads accelerometer + magnetometer at 50 Hz
  • Reads gyroscope at 200 Hz
  • Fuses using Extended Kalman Filter
  • Result: Smooth, accurate within 1 deg, no drift

23.2 Activity Recognition

Sensor fusion for activity recognition combines accelerometer magnitude (motion intensity) with gyroscope data (rotation patterns).

Feature Extraction Approach

Key features from fused sensors:

  • Accelerometer: Mean, std dev, max magnitude (motion intensity)
  • Gyroscope: Mean, std dev of rotation (turning/spinning)
  • Cross-sensor: Correlation between accel and gyro (coordination)

Simple activity classification (threshold-based):

Activity      | Accel Magnitude | Rotation
Stationary    | <1.0 m/s2       | <0.1 rad/s
Walking       | 1-2 m/s2        | Low-moderate
Running       | 2-4 m/s2        | Moderate
High activity | >4 m/s2         | High

Production systems: Replace thresholds with ML models (Random Forest, LSTM) trained on labeled data.
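
The feature extraction and threshold rules above can be sketched directly. This is a minimal illustration: the gravity removal is deliberately crude, and the thresholds are the illustrative values from the table, not tuned constants:

```python
import numpy as np

def extract_features(accel, gyro):
    """Fused features from one window of accel (m/s^2) and gyro (rad/s) samples.

    accel, gyro: arrays of shape (N, 3) covering one analysis window.
    """
    accel_mag = np.linalg.norm(accel, axis=1) - 9.81  # crude gravity removal
    gyro_mag = np.linalg.norm(gyro, axis=1)
    return {
        "accel_mean": float(np.abs(accel_mag).mean()),  # motion intensity
        "accel_std": float(accel_mag.std()),
        "gyro_mean": float(gyro_mag.mean()),            # rotation intensity
    }

def classify(features):
    """Threshold rules matching the table above (accel magnitude only;
    a fuller version would also use gyro_mean for rotation patterns)."""
    a = features["accel_mean"]
    if a < 1.0:
        return "stationary"
    if a < 2.0:
        return "walking"
    if a < 4.0:
        return "running"
    return "high activity"

# Phone lying still on a table: gravity only, no rotation
still_accel = np.tile([0.0, 0.0, 9.81], (100, 1))
still_gyro = np.zeros((100, 3))
print(classify(extract_features(still_accel, still_gyro)))  # stationary
```

In a production pipeline these hand-picked rules would be replaced by a trained classifier, but the feature dictionary is the same kind of input an ML model would consume.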

23.3 Audio Feature Extraction: MFCC Pipeline

Many IoT applications process audio - voice assistants, acoustic monitoring, wildlife recognition, security systems. The standard approach is Mel-Frequency Cepstral Coefficients (MFCC).

23.3.1 The 8-Stage MFCC Pipeline

Stage 1: Pre-emphasis

Boost high frequencies to flatten spectrum:

y[n] = x[n] - 0.97 * x[n-1]

Stage 2: Framing

Split into 25ms overlapping windows (400 samples at 16kHz)

Stage 3: Windowing

Apply Hamming window to reduce spectral leakage

Stage 4: FFT

Compute power spectrum via Fast Fourier Transform

Stage 5: Mel Filter Bank

Apply triangular filters spaced on mel scale (mimics human hearing)

Stage 6: Log Compression

Take logarithm (mimics human loudness perception)

Stage 7: DCT

Discrete Cosine Transform decorrelates features

Stage 8: Output

First 13 MFCC coefficients (compact representation)
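
The eight stages can be sketched end-to-end in NumPy. This is a minimal teaching version, not a production extractor (libraries such as librosa implement tuned variants); the frame length, hop, FFT size, and filter count are common defaults assumed here for concreteness:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Stage 5: triangular filters spaced evenly on the mel scale."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    return fbank

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_coeffs=13):
    # Stage 1: pre-emphasis boosts high frequencies
    x = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Stage 2: 25 ms frames (400 samples at 16 kHz) with 10 ms hop
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    # Stage 3: Hamming window reduces spectral leakage
    frames = x[idx] * np.hamming(frame_len)
    # Stage 4: power spectrum via FFT (frames zero-padded to n_fft)
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft
    # Stages 5-6: mel filter bank energies, then log compression
    log_mel = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # Stages 7-8: DCT-II decorrelates; keep the first 13 coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                  (2 * n + 1) / (2 * n_filters)))
    return log_mel @ dct.T

audio = np.random.randn(16000)  # 1 second of noise as a stand-in signal
feats = mfcc(audio)
print(feats.shape)              # (98, 13): 98 frames x 13 coefficients
```

Note the compression at work: one second of 16 kHz audio (16,000 samples) becomes 98 frames of 13 coefficients each.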

23.3.2 Why MFCC for IoT

  • Compact: 13 values per 25ms frame (vs 400 raw samples at 16 kHz)
  • Robust: Speaker-independent, noise-tolerant
  • Efficient: Edge devices can compute in real-time
  • Proven: Standard for speech recognition, acoustic event detection

23.4 Autonomous Vehicle Sensor Fusion

Autonomous vehicles fuse camera, LiDAR, and radar so that each sensor's weakness (cameras fail in darkness, radar has coarse resolution) is covered by another. The worked example below traces how this fusion reduces position uncertainty step by step.

23.5 Worked Example: Multi-Sensor Obstacle Detection

Scenario: Autonomous vehicle detects pedestrian 45m ahead using three sensors.

Sensors:

  • Camera: 30 Hz, 0-150m, 0.1m resolution, fails in darkness
  • LiDAR: 10 Hz, 0-200m, 0.03m resolution, all-weather
  • Radar: 20 Hz, 0-250m, 0.5m resolution, provides velocity

Raw Measurements:

Sensor | Position [x, y] | Uncertainty (x / y) | Notes
Camera | [44.8, 2.3]     | 1.0m / 0.45m        | Color, classification
LiDAR  | [45.1, 1.9]     | 0.3m / 0.2m         | Precise range
Radar  | [45.5, 2.1]     | 0.5m / 0.6m         | Velocity: -1.2 m/s

Fusion Process:

  1. Association: Match detections across sensors (spatial proximity)
  2. Sequential Kalman updates: Camera -> LiDAR -> Radar
  3. Covariance-weighted fusion: Each sensor contributes by reliability

Result (uncertainty values are combined position RMSE):

Stage              | Position      | Uncertainty
Prior (prediction) | [44.5, 2.0]   | 0.50m
+ Camera           | [44.8, 2.3]   | 0.36m
+ LiDAR            | [45.04, 1.96] | 0.12m
+ Radar            | [45.04, 1.98] | 0.04m

Design Considerations:

  1. Dynamic confidence: Camera weight is reduced automatically in darkness or poor visibility
  2. Graceful degradation: System continues operating even if one sensor fails
  3. Temporal alignment: Sensors at different rates (10-30 Hz) are synchronized before fusion

The covariance-weighted fusion computes the Kalman gain \(K\) that determines how much to trust each measurement:

\[K = \frac{P^-}{P^- + R}\]

where \(P^-\) is the predicted uncertainty and \(R\) is the measurement noise (both treated as scalar variances in this illustration). For the camera measurement with prior \(P^- = 0.50\) and \(R_{cam} = 1.0\):

\[K_{cam} = \frac{0.50}{0.50 + 1.0} = 0.333\]

The camera contributes 33.3% to the fused position, reducing uncertainty to 0.36m. For LiDAR with \(R_{lidar} = 0.09\) m\(^2\), i.e. \((0.3 \text{ m})^2\):

\[K_{lidar} = \frac{0.36}{0.36 + 0.09} = 0.80\]

LiDAR contributes 80% due to its higher precision, reducing uncertainty from 0.36m to 0.12m. The sequential fusion reduces uncertainty from 0.50m to 0.04m. Overall improvement:

\[\text{Improvement} = \frac{0.50 - 0.04}{0.50} = 92\% \text{ reduction in uncertainty}\]
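
The two gain calculations can be reproduced in a couple of lines. This is a scalar sketch of the gain formula only; the position/uncertainty table above comes from a full two-dimensional filter, so its uncertainty values will not fall out of this one-dimensional version:

```python
def kalman_gain(prior_uncertainty, measurement_noise):
    """Scalar Kalman gain K = P / (P + R)."""
    return prior_uncertainty / (prior_uncertainty + measurement_noise)

# Camera update: prior P = 0.50, R_cam = 1.0
k_cam = kalman_gain(0.50, 1.0)
# LiDAR update: P = 0.36 after the camera step, R_lidar = 0.09
k_lidar = kalman_gain(0.36, 0.09)
print(f"K_cam = {k_cam:.3f}, K_lidar = {k_lidar:.2f}")  # K_cam = 0.333, K_lidar = 0.80
```

The pattern to notice: the noisier the sensor relative to the current estimate, the smaller its gain, so each sensor automatically contributes in proportion to its reliability.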

Power budget: camera (5 W) + LiDAR (12 W) + radar (8 W) = 25 W total sensor power. From a 12 V vehicle electrical system this is about 2.08 A of continuous draw; over an 8-hour shift the sensor suite consumes 200 Wh, only 0.4% of a 50 kWh battery.

23.6 Implementing a Complementary Filter in Python

The smartphone screen rotation example above can be implemented in under 30 lines. This demonstrates the same principle used by Android’s SensorManager (which uses a more advanced Extended Kalman Filter):

import numpy as np

class ComplementaryFilter:
    """Fuse gyroscope and accelerometer for orientation estimation."""

    def __init__(self, alpha=0.98, dt=0.02):
        self.alpha = alpha  # Gyro trust factor (98%)
        self.dt = dt        # Sample period (50 Hz = 20ms)
        self.angle = 0.0    # Current orientation estimate (degrees)

    def update(self, gyro_rate, accel_angle):
        """
        gyro_rate: degrees/second from gyroscope
        accel_angle: absolute angle from accelerometer (via atan2 of gravity)
        Returns: fused orientation in degrees
        """
        # Gyro integration: smooth but drifts
        gyro_estimate = self.angle + gyro_rate * self.dt
        # Fuse: 98% gyro (responsive) + 2% accel (drift correction)
        self.angle = self.alpha * gyro_estimate + (1 - self.alpha) * accel_angle
        return self.angle

# Simulate a phone rotation from 0 to 90 degrees over 1 second
# What to observe: gyro-only drifts, accel-only jitters, fused is smooth
filt = ComplementaryFilter(alpha=0.98, dt=0.02)
np.random.seed(42)

for step in range(100):                  # 2 seconds at 50 Hz
    t = step * 0.02
    true_angle = min(t * 90, 90)         # Rotate at 90 deg/s for 1 s, then hold
    gyro = (90 if t < 1.0 else 0) + np.random.normal(0, 2)  # Noisy gyro
    accel = true_angle + np.random.normal(0, 8)             # Very noisy accel
    fused = filt.update(gyro, accel)

    if step in (0, 25, 50, 75):          # t = 0.0, 0.5, 1.0, 1.5 s
        print(f"t={t:.1f}s  true={true_angle:.0f}  "
              f"accel={accel:.0f}  fused={fused:.1f}")
# Typical output: the raw accel reading jitters by around 10 deg,
# while the fused estimate stays within a degree or two of the true angle.

Why alpha=0.98 works: at 50 Hz, the filter's time constant is alpha * dt / (1 - alpha) = 0.98 * 0.02 / 0.02, roughly 1 second. Accelerometer drift correction therefore takes about a second to take full effect: fast enough to correct gyro drift (which accumulates at about 6 degrees per minute) but slow enough that brief hand tremors (100 ms noise spikes) are filtered out.

23.7 CMU Sensing Systems Research Examples

23.7.1 Multi-Sensor Wearable Activity Recognition

Wearable systems combine multiple sensor channels for robust activity classification:

  • Proximity sensors: Detect face touching
  • Gyroscopes: Capture rotation during eating, drinking
  • Accelerometers: Motion intensity patterns
  • Audio spectrograms: Environmental context

Key Insight: Each activity produces a unique “fingerprint” across sensor channels. No single sensor could reliably distinguish all activities alone.

23.7.2 Smart Glasses Platform

Multi-modal sensor integration in wearable form factor:

  • Camera (vision)
  • Microphone (audio)
  • Proximity (gesture)
  • IMU (motion)

By fusing all modalities, the system understands user context far better than any single sensor. This is the hardware foundation enabling sensor fusion algorithms.

Sensor fusion is like combining your senses – you understand the world MUCH better when you use your eyes, ears, and hands together!

23.7.3 The Sensor Squad Adventure: The Three Detectives

Sammy the Sensor, Lila the LED, and Max the Microcontroller were playing detective, trying to figure out what was happening in the park.

Detective Accel (the accelerometer) said, “I can feel vibrations! Something is shaking the ground. Maybe it’s an earthquake… or maybe someone is just jumping nearby?” She wasn’t sure because her readings were noisy.

Detective Gyro (the gyroscope) said, “I can tell things are spinning! Something rotated 90 degrees. But I’ve been counting for so long, I might have lost track of a few degrees…” He was drifting off course.

Detective Mag (the magnetometer) said, “I can point to North! But that metal bench nearby is confusing my compass…”

Bella the Battery had a brilliant idea: “What if you ALL share your clues? Accel knows about shaking, Gyro knows about spinning, and Mag knows about direction. If you COMBINE your information, you’ll solve the mystery!”

And they did! By sharing their clues, they figured out that a kid on a bicycle had ridden past the park bench, turned the corner, and was heading north. No single detective could have figured that out alone!

“This is exactly how your phone knows which way is up!” explained Max. “Three sensors work together and share their clues. The result is 15 times more accurate than any sensor working alone!”

23.7.4 Key Words for Kids

Word                 | What It Means
Sensor Fusion        | Combining information from multiple sensors for a better answer
Accelerometer        | A sensor that feels movement and gravity
Gyroscope            | A sensor that measures spinning and rotation
Complementary Filter | A math trick that takes the best parts from two different sensors
MFCC                 | A way to turn sound into numbers that computers can understand

Key Takeaway

Sensor fusion dramatically improves accuracy and reliability compared to any single sensor: smartphone orientation achieves 1-degree accuracy (vs 15 degrees with accelerometer alone), and autonomous vehicle positioning reaches 0.04m uncertainty (a 92% reduction) by combining camera, LiDAR, and radar data. The key principle is that each sensor compensates for the weaknesses of others – accelerometers correct gyroscope drift while gyroscopes smooth out accelerometer noise.

23.8 Knowledge Checks

Try It Yourself

Experiment: Build Your Own Smartphone Compass

Implement the complementary filter from this chapter using a real smartphone or sensor board:

Requirements:

  • Smartphone with sensor API (Android Sensor Manager or iOS Core Motion)
  • OR hardware IMU (MPU6050, BNO055) + Arduino/ESP32
  • Programming environment (Python, JavaScript, or C++)

Steps:

  1. Read raw accelerometer (gravity direction) and gyroscope (rotation rate) at 50-100 Hz
  2. Compute pitch/roll from accelerometer using atan2
  3. Integrate gyroscope for smooth orientation
  4. Fuse with complementary filter (alpha = 0.98)
  5. Display orientation in real-time
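
Step 2 (pitch/roll from gravity) is the part that most often trips people up. Here is one common convention, assuming x forward, y left, z up; axis conventions vary between phones and IMU boards, so check your device's datasheet:

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Pitch and roll in degrees from gravity components (m/s^2)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Phone flat on a table: gravity entirely on z, so pitch and roll are ~0
print(pitch_roll_from_accel(0.0, 0.0, 9.81))
# Tilted 90 deg onto its side: gravity on y, so roll is ~90
print(pitch_roll_from_accel(0.0, 9.81, 0.0))
```

Feed these angles into the complementary filter as the `accel_angle` input; the gyroscope supplies the `gyro_rate` input directly.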

Expected Results:

  • Single sensor: +/-10° jitter (accel) or 5° drift (gyro)
  • Fused output: <1° accuracy, no drift, smooth response

Challenge Extensions:

  • Add magnetometer for full 3D orientation (yaw)
  • Tune alpha parameter for different environments
  • Compare against phone’s built-in orientation API
  • Log data and analyze frequency response

What You’ll Learn:

  • How sensor noise manifests in real measurements
  • Impact of alpha parameter on responsiveness vs stability
  • Why fusion dramatically outperforms single sensors

Common Pitfalls

Kalman filters work well for linear sensor fusion (GPS+IMU) but are inappropriate for indoor positioning with non-Gaussian multimodal distributions. Match the fusion algorithm to the statistical properties of the specific application domain.

A hospital patient monitoring fusion system that takes 5 seconds to compute a fused health estimate from multiple biosensors may miss critical deterioration events. Define the maximum acceptable fusion latency before designing the algorithm and architecture.

In real deployments, sensors fail, lose connectivity, and go offline. Fusion systems that assume all sensors are always available will produce incorrect estimates or crash when inputs are missing. Design for graceful degradation to single-sensor mode.

A smart building energy fusion system tested in a new installation with all sensors working perfectly will fail in a 10-year-old building with calibration drift, intermittent connectivity, and added sensor types. Test in representative real-world conditions.

23.9 Summary

Real-world sensor fusion applications demonstrate the power of combining multiple sensors:

  • Smartphone rotation: Gyro + accelerometer + magnetometer for 1 deg accuracy
  • Activity recognition: Feature-level fusion of motion sensors
  • Audio processing: MFCC pipeline for compact audio features
  • Autonomous vehicles: Multi-sensor fusion for safe navigation

23.10 What’s Next

If you want to…                                            | Read this
Understand the underlying fusion architectures             | Data Fusion Architectures
Study the Kalman filter used in many of these applications | Kalman Filter for IoT
Learn best practices for deploying fusion systems          | Data Fusion Best Practices
Explore the complementary IMU fusion example               | Complementary Filter and IMU Fusion
Return to the module overview                              | Data Fusion Introduction