Sensor fusion architectures define how and where data from multiple sensors is combined. The three main patterns – centralized, distributed, and hierarchical – offer different trade-offs in accuracy, scalability, and fault tolerance. The Dasarathy taxonomy classifies fusion systems by their input/output abstraction levels, from raw data combination to decision-level voting.
Learning Objectives
After completing this chapter, you will be able to:
Design centralized, distributed, and hierarchical fusion architectures
Apply the Dasarathy taxonomy for fusion classification
Choose appropriate fusion architectures for IoT applications
Evaluate trade-offs between architecture options
Key Concepts
Centralised fusion: All raw sensor data is transmitted to a central node for processing — simple to implement but creates a bandwidth bottleneck and single point of failure.
Decentralised fusion: Each sensor or node performs local processing and shares only estimates or features with neighbours, reducing communication overhead and improving fault tolerance.
Hierarchical fusion: A tree-structured architecture where low-level nodes fuse nearby sensors, intermediate nodes fuse local estimates, and a central node produces the final global estimate — balancing communication efficiency with accuracy.
Distributed fusion: A peer-to-peer architecture where nodes share estimates with neighbours and iteratively converge to a global consensus without any central coordinator.
Track-to-track fusion: Combining state estimates (tracks) from multiple independent trackers rather than raw observations, reducing communication bandwidth at the cost of some information loss.
JDL (Joint Directors of Laboratories) model: A standard five-level data fusion reference model: Level 0 (signal processing), Level 1 (object refinement), Level 2 (situation refinement), Level 3 (threat refinement), Level 4 (process refinement).
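To make "sharing only estimates" concrete, here is a minimal sketch (not from the chapter) of inverse-variance weighting, the simplest way a fusion node can combine two independent estimates of the same quantity. The sensor values and variances are illustrative.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates.

    Each input is weighted by 1/variance, so more confident sensors
    pull the fused value harder. The fused variance is always smaller
    than any individual variance.
    """
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(weights * np.asarray(estimates)) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Two thermometers reading the same room (illustrative numbers)
value, var = fuse_estimates([21.0, 22.0], [0.5, 2.0])
print(f"fused: {value:.2f} C, variance: {var:.2f}")  # fused: 21.20 C, variance: 0.40
```

This same rule underlies track-to-track fusion: nodes exchange (estimate, variance) pairs instead of raw samples, trading a little information for far less bandwidth.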
For Beginners: Sensor Fusion Architectures
Imagine you have several sensors in a room – a thermometer, a motion detector, and a light sensor. Each one gives you a piece of the puzzle about what is happening. Sensor fusion is the process of combining these readings to get a more complete and accurate picture. The key question is: where do you combine them? You could send all raw data to one powerful computer (centralized), let each sensor make its own decision and then vote (distributed), or organize sensors into teams that summarize their findings before passing them up to a coordinator (hierarchical). Each approach has trade-offs in speed, reliability, and accuracy.
19.1 Fusion Architecture Types
19.1.1 Architecture Comparison
| Architecture | Pros | Cons | Use Case |
|---|---|---|---|
| Centralized | Optimal fusion, simple | Single point of failure, high bandwidth | Small-scale systems |
| Distributed | Scalable, fault-tolerant, low bandwidth | Sub-optimal, complex | Large-scale IoT networks |
| Hierarchical | Balanced, modular | Moderate complexity | Smart buildings, smart cities |
Worked Example: Choosing a Fusion Architecture for a Smart Campus
A university deploys 800 environmental sensors across 12 buildings to monitor HVAC efficiency. Each sensor (BME680) reports temperature, humidity, pressure, and VOC every 30 seconds. The campus facilities team needs fused “comfort index” values per room, per floor, and campus-wide.
Data volume calculation:
Per sensor per reading: 4 values x 4 bytes = 16 bytes + 12 bytes metadata = 28 bytes
Per sensor per day: 28 bytes x 2,880 readings = 80.6 KB/day
Total raw data: 800 sensors x 80.6 KB = 64.5 MB/day
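The calculation above can be checked with a few lines of Python (decimal kilobytes and megabytes assumed, matching the figures in the text):

```python
# Verify the smart-campus data volume figures (values from the text)
bytes_per_reading = 4 * 4 + 12           # 4 values x 4 bytes + 12 bytes metadata
readings_per_day = 24 * 3600 // 30       # one reading every 30 seconds
per_sensor_kb = bytes_per_reading * readings_per_day / 1000
total_mb = 800 * bytes_per_reading * readings_per_day / 1e6

print(f"{bytes_per_reading} B/reading")          # 28
print(f"{per_sensor_kb:.2f} KB/day per sensor")  # 80.64
print(f"{total_mb:.3f} MB/day campus-wide")      # 64.512
```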
Architecture A: Centralized – All 800 sensors send raw data to one campus server.
| Metric | Value |
|---|---|
| Network bandwidth | 64.5 MB/day (0.75 KB/s sustained) |
| Server compute | 2,304,000 readings/day, single Kalman filter instance |
| Latency (sensor to decision) | 2-5 seconds (network + queue + processing) |
| Failure impact | Server down = zero monitoring for all 12 buildings |
| Infrastructure cost | 1 server ($3,000) + campus-wide network backhaul |
Architecture B: Distributed – Each sensor independently computes its own comfort index and broadcasts its decision.

| Metric | Value |
|---|---|
| Network bandwidth | 9.2 MB/day (each sensor sends a 4-byte decision packet instead of 28 raw bytes: 86% reduction) |
| Edge compute | Each sensor runs local threshold logic (fits in 64 KB RAM) |
| Limitation | No cross-room correlation: cannot detect that Building A's HVAC is pulling cold air into Building B through connected ductwork, because each sensor only sees its own room |

Architecture C: Hierarchical – Sensors report to room gateways, room gateways forward fused room estimates to building servers, and building servers feed a campus-level aggregator.

| Metric | Value |
|---|---|
| Failure impact | Room node fails = 1 room dark, floor/building still functional |
| Infrastructure cost | 100 room gateways ($50 each = $5,000) + 12 building servers ($500 each = $6,000) = $11,000 |
Decision: The campus chose Architecture C (Hierarchical) despite higher infrastructure cost because:
Cross-building HVAC correlation required floor-level fusion (ruled out pure distributed)
A single-server failure blacking out all 12 buildings was unacceptable (ruled out centralized)
Hierarchical allowed incremental deployment – they started with 3 buildings and added 9 more over 18 months
6-month result: Hierarchical fusion detected that Buildings 3 and 7 shared a return air plenum, causing temperature oscillations (Building 3’s heater triggered Building 7’s cooler in a feedback loop). Fixing the plenum damper saved $14,200/year in energy waste – an insight impossible with per-sensor distributed fusion and discovered within 2 weeks of enabling cross-building correlation at the campus level.
Putting Numbers to It
The bandwidth savings from hierarchical aggregation are substantial. Raw data transmission (Architecture A) totals 64.5 MB/day, or roughly 23.5 GB per year; hierarchical aggregation sends only fused estimates up each tier, cutting that to a small fraction.
At $0.09/GB for cellular IoT data plans, hierarchical fusion saves $1.76/year in bandwidth alone – modest, but the real payoff is in analytics. The $11,000 infrastructure investment pays for itself in under 10 months solely from the $14,200 annual HVAC energy savings, not counting bandwidth.
Latency budget: Room decision in <200ms. Campus decision in 3-8 seconds. The 5-second difference allows for sequential processing: Room → Floor (1s) → Building (2s) → Campus (5s total). Each level has time to apply Kalman filtering with 10 samples (200ms × 10 = 2s window) before aggregating upward.
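The tiered flow described in the latency budget can be sketched as a chain of averages. This is a minimal illustration only: the room and floor grouping and the comfort values below are invented, and a real deployment would run a Kalman filter at each tier rather than a plain mean.

```python
from statistics import mean

# Hypothetical comfort-index samples, grouped room -> floor -> building
building = {
    "floor1": {"room101": [0.82, 0.80, 0.84], "room102": [0.70, 0.72, 0.71]},
    "floor2": {"room201": [0.90, 0.88, 0.91]},
}

def fuse_room(samples):
    """Room gateway: fuse raw samples into one room estimate."""
    return mean(samples)

def fuse_floor(rooms):
    """Floor tier: fuse room estimates into one floor estimate."""
    return mean(fuse_room(s) for s in rooms.values())

def fuse_building(floors):
    """Building server: fuse floor estimates; the campus tier repeats the pattern."""
    return mean(fuse_floor(r) for r in floors.values())

print(f"building comfort index: {fuse_building(building):.3f}")  # 0.831
```

Only one number per room ever leaves the room gateway, which is where the bandwidth reduction comes from.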
19.2 Data Fusion Taxonomy (Dasarathy Classification)
The Dasarathy taxonomy provides a formal framework for classifying sensor fusion systems based on their input and output abstraction levels.
19.2.1 The Five Dasarathy Categories
| Category | Input -> Output | Description | IoT Example |
|---|---|---|---|
| DAI-DAO | Data -> Data | Raw values combined | Two temperature sensors averaged |
| DAI-FEO | Data -> Feature | Raw data to features | Accel + gyro raw -> motion features |
| FEI-FEO | Feature -> Feature | Features merged | Wi-Fi RSSI + BLE features combined |
| FEI-DEO | Feature -> Decision | Features classified | HR + motion -> "User sleeping" |
| DEI-DEO | Decision -> Decision | Decisions fused | Camera + Radar -> BRAKE! |
19.2.2 When to Use Each Category
DAI-DAO: Need single “best” raw measurement from redundant sensors
DAI-FEO: Raw data needs transformation into meaningful metrics
FEI-FEO: Combining complementary features from different sensor types
FEI-DEO: Features need classification into categorical outputs
DEI-DEO: Multiple independent classifiers vote on final decision
19.3 Smart Home Multi-Sensor Example
A smart home illustrates how the Dasarathy categories apply in practice. Temperature and humidity sensors fuse at the data level (DAI-DAO) to produce a comfort index. Motion, light, and door sensors extract occupancy features (DAI-FEO), which are then classified into states like “home,” “away,” or “sleeping” (FEI-DEO). Finally, the comfort system and security system each make independent decisions that are combined at the decision level (DEI-DEO) to determine actions like adjusting the thermostat or arming the alarm.
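A minimal sketch of that pipeline follows. The sensor values, thresholds, and rules are illustrative assumptions, not from the chapter; the point is only which Dasarathy category each function occupies.

```python
def dai_dao_comfort(temp_c, humidity_pct):
    """DAI-DAO: combine raw temperature and humidity into one raw index."""
    # Illustrative index: 1.0 at 21 C / 45 %RH, falling off linearly
    return max(0.0, 1.0 - abs(temp_c - 21) / 10 - abs(humidity_pct - 45) / 100)

def fei_deo_occupancy(motion_events, lux, door_open):
    """FEI-DEO: classify occupancy features into a state label (crude rule)."""
    if motion_events == 0 and not door_open:
        return "sleeping" if lux < 5 else "away"
    return "home"

def dei_deo_thermostat(comfort, occupancy):
    """DEI-DEO: fuse two independent decisions into one action."""
    if occupancy == "away":
        return "setback"  # save energy, nobody home
    return "heat" if comfort < 0.5 else "hold"

state = fei_deo_occupancy(motion_events=3, lux=120, door_open=False)
action = dei_deo_thermostat(dai_dao_comfort(18.0, 50.0), state)
print(state, action)  # home hold
```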
Alert if:
(LIDAR AND Camera) OR
(Ultrasonic AND Camera) OR
(LIDAR confidence > 0.95)
Result: 99.92% recall, 0.7 false alarms/hr, 155ms latency – meeting all three requirements.
Key Insight: Cascaded fusion leverages each sensor’s strengths – LIDAR for fast spatial detection, camera for visual classification, ultrasonic for close-range confirmation.
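The voting rule above translates directly into code. In this sketch the detection flags and the confidence score are hypothetical inputs; a real system would derive them from each sensor's processing chain.

```python
def should_brake(lidar_hit, camera_hit, ultrasonic_hit, lidar_confidence):
    """Cascaded decision-level fusion: alert when two sensors agree,
    or when a single sensor is highly confident."""
    return ((lidar_hit and camera_hit) or
            (ultrasonic_hit and camera_hit) or
            lidar_confidence > 0.95)

print(should_brake(True, True, False, 0.60))   # two sensors agree -> True
print(should_brake(False, False, True, 0.60))  # ultrasonic alone -> False
print(should_brake(True, False, False, 0.97))  # highly confident LIDAR -> True
```

Requiring agreement suppresses single-sensor false alarms, while the confidence override preserves recall when one sensor is unambiguous.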
19.6 Code Example: Kalman Filter for Sensor Fusion
The Kalman filter is the most widely used algorithm for combining noisy sensor data. Here is a simplified implementation fusing a GPS and an accelerometer for position tracking:
```python
import numpy as np

class SimpleKalmanFusion:
    """Fuse GPS position with accelerometer for smoother tracking.

    GPS: accurate but slow (1 Hz) and noisy (+/- 5 m)
    Accelerometer: fast (100 Hz) but drifts over time
    Kalman filter: combines both for the best of both worlds
    """

    def __init__(self):
        # State: [position, velocity]
        self.x = np.array([0.0, 0.0])
        # State covariance (uncertainty)
        self.P = np.array([[100.0, 0.0], [0.0, 10.0]])
        # Process noise (how much we trust the model)
        self.Q = np.array([[0.1, 0.0], [0.0, 1.0]])
        # GPS measurement noise (+/- 5 meters)
        self.R_gps = np.array([[25.0]])

    def predict(self, acceleration, dt):
        """Prediction step using accelerometer data."""
        # State transition: position += velocity * dt + 0.5 * a * dt^2
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * acceleration
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, gps_position):
        """Correction step using GPS measurement."""
        H = np.array([[1.0, 0.0]])           # GPS measures position only
        y = gps_position - H @ self.x        # Innovation
        S = H @ self.P @ H.T + self.R_gps    # Innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + (K @ y).flatten()
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x[0]                     # Return fused position

# Usage example
kf = SimpleKalmanFusion()

# Simulate: GPS at 1 Hz, accelerometer at 100 Hz
# Run for 5 seconds (500 steps at 100 Hz)
for t in range(500):
    accel = 0.5                 # m/s^2 constant acceleration
    kf.predict(accel, dt=0.01)  # 100 Hz accelerometer update
    if t % 100 == 0:            # GPS at 1 Hz (every 100 accel steps)
        gps_reading = kf.x[0] + np.random.normal(0, 5)  # Noisy GPS
        fused_pos = kf.update_gps(gps_reading)
        print(f"t={t/100:.0f}s  GPS: {gps_reading:.1f}m, Fused: {fused_pos:.1f}m")
```
Why Kalman filtering matters for IoT:
| Scenario | Without Fusion | With Kalman Fusion |
|---|---|---|
| Delivery drone position | GPS jumps 5-10 m between readings | Smooth trajectory, < 1 m between GPS updates |
| Fitness tracker steps | Accelerometer miscounts 15-25% | Fused with gyroscope: < 5% error |
| Indoor-outdoor transition | GPS lost, position unknown | IMU maintains position estimate until GPS recovers |
Expected Results:

| Architecture | Accuracy | Latency | Bandwidth | Fault Tolerance |
|---|---|---|---|---|
| Centralized | Best | High | High | Poor |
| Distributed | Moderate | Low | Low | Excellent |
| Hierarchical | Good | Medium | Medium | Good |
Common Pitfalls
1. Choosing centralised fusion without considering the communication cost
Centralised fusion requires transmitting all raw sensor data to a single node — in large-scale IoT deployments this communication overhead can dwarf the processing cost. Calculate the communication budget before choosing centralised vs distributed fusion.
2. Using hierarchical fusion without validating inter-tier information loss
Each tier in a hierarchical fusion system reduces information as it aggregates. Verify that the accuracy achieved at the top tier is adequate for the application after all intermediate aggregation steps, not just in the best-case scenario.
3. Designing fusion architectures without failure mode analysis
In a distributed fusion network, the failure of a node that aggregates data from 50 sensors silently removes all those sensors from the system. Design explicit failure detection and isolation so that node failures are visible and handled gracefully.
4. Not accounting for synchronisation overhead in distributed fusion
Distributed fusion architectures require time-synchronised measurements from all participating nodes. The synchronisation protocol overhead (NTP, PTP) can consume a significant fraction of the communication budget. Include it in resource budgeting.
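To see why pitfall 4 matters, here is a back-of-the-envelope sketch. All packet sizes and rates below are illustrative assumptions (a roughly 90-byte on-the-wire NTP exchange once per minute), not measured values.

```python
# Illustrative budget: a sensor node sending one 28-byte reading every 30 s
data_bytes_per_day = 28 * (24 * 3600 // 30)

# Hypothetical NTP exchange: ~90 bytes each way, once per minute
sync_bytes_per_day = 2 * 90 * (24 * 60)

overhead = sync_bytes_per_day / (data_bytes_per_day + sync_bytes_per_day)
print(f"data: {data_bytes_per_day} B/day")
print(f"sync: {sync_bytes_per_day} B/day")
print(f"synchronisation share of traffic: {overhead:.0%}")
```

Under these assumptions the synchronisation traffic exceeds the sensor data itself, which is exactly why it must appear in the resource budget.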
19.8 Summary
Sensor fusion architectures determine system performance:
Centralized: Optimal but single point of failure
Distributed: Scalable and fault-tolerant
Hierarchical: Balanced approach for complex systems
Dasarathy taxonomy: Classifies fusion by input/output abstraction levels
Trade-offs: Early vs late fusion, sensor-level vs central
For Kids: Meet the Sensor Squad!
Sensor fusion architectures are like different ways to organize a team of detectives solving a mystery together!
19.8.1 The Sensor Squad Adventure: Three Ways to Solve a Mystery
The Sensor Squad had a big mystery to solve: “Who left the window open in the Smart School?” Principal Processor gave them three plans to choose from.
Plan A – The Big Meeting (Centralized): Sammy the Sensor, Lila the LED, Max the Microcontroller, and Bella the Battery all bring their clues to ONE big meeting room. Sammy says, “I felt cold air at 3 PM!” Lila adds, “I saw extra light from outside!” Max reports, “I heard the wind sensor go crazy!” Principal Processor looks at ALL the clues together and figures it out. This works great for a small team, but what if Principal Processor gets sick? Nobody can solve the mystery!
Plan B – Neighborhood Watch (Distributed): Each squad member investigates their OWN hallway. Sammy checks the science wing, Lila checks the art room, Max checks the gym, and Bella checks the library. They each make their own guess about which window is open, then they VOTE on the answer. If Sammy gets confused, the others can still solve it!
Plan C – Team Captains (Hierarchical): The squad splits into two sub-teams. Team Temperature (Sammy and a thermometer friend) figures out WHERE it is cold. Team Light (Lila and a brightness buddy) figures out WHERE extra light is coming in. Then Team Captain Max combines the sub-team reports. This is balanced – not too slow, not too risky!
Principal Processor smiled: “Choose your architecture based on the mission! Small mission? Use Plan A. Big school? Use Plan B. Medium? Plan C!”
19.8.2 Key Words for Kids
| Word | What It Means |
|---|---|
| Centralized | Everyone sends data to ONE boss who makes the decision |
| Distributed | Everyone makes their own decision, then they vote |
| Hierarchical | Small teams decide first, then team captains combine results |
| Architecture | The plan for how a system is organized |
Key Takeaway
Fusion architecture choice determines system reliability, scalability, and accuracy. Centralized fusion provides optimal accuracy but creates a single point of failure. Distributed fusion scales well and tolerates failures but may sacrifice accuracy. Hierarchical fusion balances both concerns and is the most common choice for production IoT systems. Use the Dasarathy taxonomy to classify what level of abstraction your fusion operates at.
19.9 What’s Next
If you want to…
Read this
Study concrete fusion applications built on these architectures