The Mistake: Running five routing-protocol simulations (AODV, LEACH, Directed Diffusion, GPSR, DSR) and picking the one with the highest packet delivery ratio and lowest average hop count.
Why It’s Wrong: Protocol performance depends critically on workload characteristics. LEACH wins for periodic sensing but fails catastrophically for rare event detection. Directed Diffusion excels at event queries but wastes energy for continuous monitoring.
Real-World Example: A smart city deployment chose AODV because simulation showed “95% delivery rate, 3.2 average hops, lowest latency.” Application: air quality monitoring with 5-minute periodic reports from 500 sensors. Actual results after 3 months of deployment:
- AODV hotspot nodes died (relaying 400 packets/hour), creating coverage holes
- Route maintenance overhead: 40% of total traffic (RREQ/RREP/RERR)
- Network partitioned into 8 disconnected islands
- Worst-case battery life: 4 months (target was 3 years)
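A back-of-envelope energy model makes the hotspot failure concrete. All parameters here (battery capacity, radio power, duty cycle, per-packet energy) are hypothetical placeholders, not measurements from the deployment:

```python
def node_lifetime_days(battery_j, duty_cycle, radio_w, pkts_per_hour, pkt_j):
    """Estimate node lifetime from duty-cycled idle listening plus relay traffic."""
    idle_j_per_day = duty_cycle * radio_w * 86_400   # idle listening energy per day
    relay_j_per_day = pkts_per_hour * 24 * pkt_j     # rx + tx energy per relayed packet
    return battery_j / (idle_j_per_day + relay_j_per_day)

# Hypothetical parameters: 20 kJ battery, 0.5% radio duty cycle,
# 60 mW radio draw, 3 mJ per relayed packet.
hotspot = node_lifetime_days(20_000, 0.005, 0.06, 400, 0.003)  # relays 400 pkts/hour
leaf    = node_lifetime_days(20_000, 0.005, 0.06, 12, 0.003)   # own reports only
print(f"hotspot: {hotspot:.0f} days, leaf: {leaf:.0f} days")
```

Even under these generous assumptions, the hotspot node's lifetime is roughly half a leaf node's; the 4-month figure observed in the field reflects additional real-world costs (route maintenance floods, overhearing, retransmissions) stacked on top of relay load.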
Post-Mortem: LEACH would have extended lifetime to roughly 2.5 years for this periodic reporting application, because:
- Clustering eliminates the hotspot problem (cluster-head rotation distributes the relay burden)
- Aggregation reduces traffic by 85% (20 readings → 1 summary per cluster)
- There is no route maintenance overhead (cluster members communicate directly with their cluster head)
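The aggregation benefit can be sketched as a per-round transmission count. The cluster size (20), sensor count (500), and average hop count (3.2) come from the example above; the flat-routing model, where every reading is forwarded once per hop, is a simplifying assumption:

```python
def flat_multihop_tx(n_sensors, avg_hops):
    # Flat routing: every reading is forwarded once per hop toward the sink.
    return int(n_sensors * avg_hops)

def leach_tx(n_sensors, cluster_size):
    n_clusters = n_sensors // cluster_size
    member_tx = n_sensors - n_clusters  # one short-range tx per non-CH member
    ch_tx = n_clusters                  # one aggregated summary per cluster head
    return member_tx + ch_tx

flat = flat_multihop_tx(500, 3.2)   # transmissions per reporting round, flat routing
leach = leach_tx(500, 20)           # transmissions per round under LEACH
print(f"{1 - leach / flat:.0%} fewer transmissions per round")
```

Counting raw transmissions understates the gain: weighting cluster-head long-range sends against cheap intra-cluster sends, or counting packets that converge near the sink (500 → 25 per round, a 95% drop), moves the savings toward the 85% figure cited above.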
The Fix: Match protocol to workload, not abstract metrics:
1. Characterize YOUR workload: Periodic or event-driven? What query patterns? What latency needs?
2. Understand protocol assumptions: LEACH assumes static, periodic reporting; AODV assumes occasional any-to-any communication; Directed Diffusion assumes event queries.
3. Simulate a realistic workload: Don’t use “a random packet every 10 seconds” – use your actual application traffic pattern.
4. Measure over the full lifetime: Include node failures, battery depletion, and protocol overhead.
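The steps above can be condensed into a first-cut selection heuristic. The `Workload` fields and decision rules are illustrative (one possible encoding of the protocol assumptions in step 2), not a substitute for simulating the real traffic pattern:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    pattern: str        # "periodic", "event", or "query"
    mobile_nodes: bool
    any_to_any: bool    # traffic between arbitrary node pairs?

def suggest_protocol(w: Workload) -> str:
    # Encodes the assumptions from step 2; validate with step 3's simulation.
    if w.pattern == "periodic" and not w.mobile_nodes:
        return "LEACH"               # static periodic reporting: cluster + aggregate
    if w.pattern == "query":
        return "Directed Diffusion"  # sink-initiated event queries
    if w.any_to_any or w.mobile_nodes:
        return "AODV"                # occasional any-to-any, tolerates topology change
    return "compare GPSR/DSR on a trace of the real workload"

# The air quality deployment from the example: static nodes, periodic reports.
print(suggest_protocol(Workload("periodic", mobile_nodes=False, any_to_any=False)))
```

Run against the smart-city workload above, this heuristic points at LEACH before any simulation is run – the simulation's job then shifts from picking a winner to validating the assumptions behind that pick.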
Rule of Thumb: Protocol selection is application-driven, not performance-driven. There is no “best WSN routing protocol” – only “best protocol for THIS application’s workload and constraints.”