A production sensor behavior framework classifies nodes across 6 categories (Normal, Degraded, Failed, Dumb, Selfish, Malicious) with predictive maintenance thresholds at battery < 30%, temperature > 60C, and memory > 90%. Watchdog-based trust scoring decays from 1.0 to blacklisting below 0.3 across 5 trust levels, while adaptive duty cycling achieves 81% energy savings with sub-60-second event detection latency.
16.1 Learning Objectives
By the end of this chapter, you will be able to:
Construct a production-ready Python framework for classifying sensor node behaviors across 6 categories (Normal, Degraded, Failed, Dumb, Selfish, Malicious) using observable diagnostic criteria
Architect failure detection algorithms that diagnose 9 distinct failure modes with predictive maintenance thresholds (battery <30%, temperature >60C, memory >90%) and early-warning alerts
Assess reputation-based trust management systems using watchdog monitoring with 5 trust levels (Trusted >0.8 to Blacklisted <0.3) and configurable hysteresis thresholds
Derive adaptive duty cycle parameters that achieve 81% energy savings while maintaining sub-60-second event detection latency through staggered wake schedules
Synthesize event-driven topology reconfiguration strategies that scale duty cycles from 1% baseline to 73% during active alerts, coordinating spatial redundancy within a 100m affected radius
Contrast coordinated S-MAC synchronization overhead against asynchronous B-MAC preamble sampling and their impact on network-level coverage reliability
Minimum Viable Understanding
Six behavior classes drive all decisions: Normal, Degraded, Failed, Dumb, Selfish, and Malicious nodes each trigger different framework responses – from adaptive duty cycling (1-100%) to blacklisting (trust score < 0.3)
Spatial redundancy defeats duty cycle myths: With 3-5 overlapping sensors per point and staggered wake schedules (7-second offsets across neighbors), networks achieve near-100% detection at a 0.5% per-node duty cycle and 8x battery life extension
Sensor Squad: The Trust Patrol
Sammy the Sound Sensor was worried. “Someone in our sensor neighborhood keeps dropping messages instead of passing them along!” he said.
Lila the Light Sensor had an idea. “Let’s set up a watchdog system – I’ll secretly keep track of whether each neighbor actually forwards the messages they receive. It’s like being a hall monitor for data packets!”
Max the Motion Sensor started keeping score. “I’ll give everyone a trust score starting at 1.0 – that’s a perfect score, like getting 100% on a test. Every time someone drops a message, their score goes down. If it falls below 0.3, they get put on the ‘do not trust’ list.”
Bella the Bio Sensor noticed something else. “What about saving energy? We can’t all stay awake ALL the time – our batteries would die in 4 months! Instead, let’s take turns sleeping. I’ll be awake from 0 to 7 seconds, Sammy from 7 to 14 seconds, and Lila from 14 to 21 seconds. That way, someone is ALWAYS watching, but each of us only uses a tiny bit of battery.”
“And when something exciting happens – like a fire alarm – we ALL wake up together!” Max added. “Our duty cycle jumps from 1% to over 70% until the emergency is over.”
The lesson: A sensor network is like a neighborhood watch. Everyone takes turns being on duty, everyone keeps score of who’s trustworthy, and when there’s an emergency, the whole team springs into action!
For Beginners: How This Sensor-Behavior Framework Builds on Fundamentals
This chapter assumes you already understand what different node behaviours mean (normal, selfish, malicious, failed) and focuses on how to operationalize those ideas in code.
It builds on:
sensor-node-behaviors.qmd - taxonomy of node behaviours and failure modes.
wireless-sensor-networks.qmd - basic WSN architecture and constraints.
wsn-routing.qmd / wsn-overview-fundamentals.qmd - how routing and topology work in these networks.
As a beginner, focus on:
The printed simulation outputs (reputation tables, duty-cycle adjustments, event-driven reconfiguration) and how they reflect the underlying concepts.
Mapping each major section of the framework back to one of the behaviour dimensions you saw in the fundamentals chapter.
You can return later to experiment with the full Python implementation once you are comfortable with the theory.
16.3 Production Framework: Comprehensive Sensor Node Behavior Management
Time: ~20 min | Level: Advanced | Code: P05.C20.U01
This section provides a complete, production-ready Python framework for managing sensor node behaviors in real-world WSN deployments. The implementation covers behavior classification, failure detection, reputation-based trust management, duty cycle optimization, and event-aware topology adaptation.
16.4 How It Works
16.4.1 Five-Subsystem Architecture
The production framework integrates five coordinated subsystems:
Behavior Classifier (6 categories): Analyzes battery level, temperature, memory usage, packet forwarding ratio, and data quality to categorize each node as Normal, Degraded, Failed, Dumb, Selfish, or Malicious
Failure Detector (9 modes): Diagnoses specific failure types (battery depletion, overheating, memory corruption, sensor malfunction, radio failure, firmware crash, security breach, network partition, dead node timeout) using predictive thresholds
Trust Manager (5 levels): Maintains reputation scores (0.0-1.0) through watchdog monitoring, updating via an exponential moving average (EMA) and triggering blacklisting when scores fall below 0.3
Duty Cycle Optimizer (1%-100% range): Adjusts sampling rates based on battery levels, event detection, and social sensing signals - baseline 1% during normal periods, scaling to 73% during events
Topology Adapter: Reconfigures network structure by activating sleeping nodes within event radius (default 100m), establishing multi-path routes, and isolating blacklisted nodes
The framework processes node telemetry (battery, temperature, memory, forwarding ratio, data quality) through the Behavior Classifier, which feeds both the Failure Detector and Trust Manager. Trust Manager outputs (blacklist decisions) flow to the Topology Adapter for route table updates.
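The classifier stage of this pipeline can be sketched as a small rule-based function. The specific decision order, field names, and the 0.2/0.5/0.8 cutoffs below are illustrative assumptions layered on the chapter's stated thresholds (battery < 30%, temperature > 60C, memory > 90%), not the framework's exact API:

```python
from dataclasses import dataclass

@dataclass
class NodeTelemetry:
    battery: float           # remaining charge, 0-100 %
    temperature: float       # degrees Celsius
    memory: float            # memory usage, 0-100 %
    forwarding_ratio: float  # fraction of relay packets actually forwarded
    data_quality: float      # 0.0-1.0 plausibility score of reported readings

def classify(t: NodeTelemetry) -> str:
    """Rule-based sketch of the six-way behavior classifier."""
    if t.battery <= 0:
        return "Failed"
    if t.data_quality < 0.2 and t.forwarding_ratio > 0.8:
        return "Malicious"   # relays traffic but injects implausible data
    if t.forwarding_ratio < 0.5 and t.battery > 30:
        return "Selfish"     # healthy node refusing to relay for others
    if t.forwarding_ratio < 0.5:
        return "Dumb"        # low battery: likely comms trouble, not intent
    if t.battery < 30 or t.temperature > 60 or t.memory > 90:
        return "Degraded"
    return "Normal"
```

A healthy node (`classify(NodeTelemetry(80, 25, 40, 0.99, 0.95))`) maps to `"Normal"`, while the same node with a 30% forwarding ratio maps to `"Selfish"`; the ordering of the rules encodes the intuition that intent (malicious, selfish) must be ruled out before blaming hardware.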
Figure 16.1: Sensor Production Framework Architecture – showing the five core subsystems (Behavior Classifier, Failure Detector, Trust Manager, Duty Cycle Optimizer, Topology Adapter) and their data flow relationships. The Behavior Classifier feeds into both the Trust Manager and Failure Detector, while the Trust Manager’s blacklist decisions influence the Topology Adapter’s routing table updates.
Cross-Hub Connections
Leverage Learning Resources:
Quizzes Hub - Test your understanding of node behavior classification, duty cycling strategies, and failure detection with interactive quizzes covering normal vs degraded vs malicious behaviors
Simulations Hub - Explore interactive tools for network topology visualization (see how duty cycling affects network connectivity), power budget calculators (analyze energy tradeoffs), and sensor selection guides
Knowledge Gaps Hub - Common misconceptions about “dumb nodes” (temporary communication failure vs permanent hardware failure), trust score thresholds, and InTSeM filtering effectiveness
Videos Hub - Watch explanations of S-MAC synchronization protocols, watchdog-based reputation systems, and event-driven topology reconfiguration in real-world WSN deployments
Why These Resources Matter: Sensor behavior management spans multiple disciplines (networking protocols, security, energy optimization, failure detection). The learning hubs provide curated pathways through 70+ architecture chapters, helping you connect duty cycling fundamentals to production implementations.
Common Misconception: “Sleeping Nodes Cause Data Loss”
The Myth: Many assume that aggressive duty cycling (nodes sleeping 99% of the time) causes missed events and data loss, making it unsuitable for critical monitoring applications.
The Reality: Properly designed duty-cycled WSNs achieve both energy efficiency and detection reliability through coordinated sleep schedules and redundant coverage.
Real-World Example - Forest Fire Detection (California Wildfire Network):
Deployment: 1,200 temperature/smoke sensors across 80,000 hectares
Duty cycle: 0.5% (awake 7.2 seconds per 24 minutes)
Fire detection latency: Guaranteed < 60 seconds
Battery life: 3.2 years average (vs 4 months if always-on)
How It Works:
Spatial redundancy: Each point covered by 3-5 sensors (1,200 nodes over 800 km^2, a deployment density of 1.5 nodes/km^2)
Staggered wake schedules: Neighbor nodes wake at offset times (Node A: 0s, Node B: 7s, Node C: 14s)
Event correlation: Fire triggers multiple sensors -> automated cross-validation
Adaptive response: Detection increases duty cycle to 50% for surrounding nodes within 30 seconds
A naive reading says each node is blind 99.5% of the time, but staggered schedules mean nodes wake at different times. With 5 nodes at 7-second offsets (0s, 7s, 14s, 21s, 28s), the maximum coverage gap is 7 seconds, so an event must last under 7 seconds to evade the local cluster entirely.
Modeling wake-ups as a Poisson process for fires (detectable for minutes): \[P(detect) \approx 1 - e^{-\lambda t} \text{ where } \lambda = \frac{5}{1440} \text{ (5 wake-ups per 1440-second cycle)}\]
For a 60-second event: \(P(detect) = 1 - e^{-(5/1440) \times 60} = 1 - 0.812 = 18.8\%\) per 60-second window. Over 5 minutes: \(1 - (0.812)^5 = 64.7\%\) for a single cluster. Spatial redundancy across overlapping clusters then lifts network-level detection well above this: spatial redundancy plus temporal staggering achieves high detection despite low per-node duty cycles.
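The detection-probability arithmetic is easy to verify in a few lines; this sketch simply evaluates \(1 - e^{-\lambda t}\) with \(\lambda\) equal to 5 wake-ups per 1440-second cycle:

```python
import math

def p_detect(n_sensors: int, cycle_s: float, event_s: float) -> float:
    """Probability that at least one staggered wake-up lands inside an event,
    modeling wake-ups as a Poisson process with rate n_sensors per cycle."""
    lam = n_sensors / cycle_s          # wake-ups per second
    return 1 - math.exp(-lam * event_s)

print(round(p_detect(5, 1440, 60), 3))   # single 60-second window
print(round(p_detect(5, 1440, 300), 3))  # 5-minute event
```

Running this reproduces the single-cluster figures (about 0.188 for one minute, 0.647 over five minutes); any additional overlapping clusters multiply the miss probability down further.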
Key Insight: The misconception confuses individual node duty cycle with network-level coverage. A single node sleeping 99% of time seems unreliable, but a coordinated network of overlapping, staggered-schedule nodes provides continuous coverage. The secret is spatial redundancy (multiple sensors per area) plus temporal coordination (neighbors wake at different times).
When Sleeping Does Cause Problems: If nodes sleep synchronously (all neighbors asleep simultaneously), coverage gaps occur. Solution: Use asynchronous sleep schedules (B-MAC) or coordinated staggered wake times (S-MAC with offset phases).
16.4.2 Comprehensive Examples
16.4.2.1 Example 1: Complete Behavior Monitoring System
This production framework provides comprehensive sensor node behavior management:
Sensor Node Behavior State Machine
Figure 16.2: Sensor node behavior state machine showing transitions between operational states based on battery levels, environmental conditions, cooperation metrics, and security threats. Nodes continuously transition between states with trust scores (0.0-1.0) and duty cycles adapting accordingly. Normal nodes operate with full trust and adaptive duty cycling, while degraded nodes face reduced performance. Selfish and malicious nodes are isolated through blacklisting when trust scores drop below threshold.
Figure 16.3: Alternative View: Trust Score Evolution Timeline - This sequence diagram traces a single node’s journey from trusted to blacklisted. Starting at trust score 1.0, the node forwards all packets normally (t=0). When battery drops to 20% (t=1h), it enters selfish mode and drops 40% of relay packets - watchdog nodes detect this and reduce its score to 0.6 (SUSPICIOUS). Continued selfishness (t=2h) with 70% packet dropping triggers further score reduction to 0.25 (UNTRUSTED), and the Trust Manager blacklists the node. Routes then bypass the isolated node. This temporal view shows how the InTSeM reputation system progressively penalizes misbehavior.
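The decaying trajectory in Figure 16.3 can be reproduced with a minimal EMA reputation update. The smoothing factor (alpha = 0.6) and the observation sequence below are illustrative assumptions, not the InTSeM system's actual parameters:

```python
def update_trust(score: float, observed_forward_ratio: float,
                 alpha: float = 0.6) -> float:
    """EMA update: weight the newest watchdog observation by alpha."""
    return (1 - alpha) * score + alpha * observed_forward_ratio

BLACKLIST_THRESHOLD = 0.3

trust = 1.0  # new nodes start fully trusted
for observed in [1.0, 0.6, 0.3, 0.1]:  # node drops more packets over time
    trust = update_trust(trust, observed)
    status = "BLACKLISTED" if trust < BLACKLIST_THRESHOLD else "ok"
    print(round(trust, 3), status)
```

The score degrades gradually (1.0 → 0.76 → 0.484 → 0.254) rather than flipping on a single bad observation, which is the "gradual, proportional response" property the chapter attributes to reputation-based trust.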
The framework demonstrates production-ready implementations for robust, secure, and energy-efficient WSN deployments.
16.5 Common Pitfalls and Misconceptions
Pitfalls in Sensor Behavior Management
Treating all non-responsive nodes as failed: A node that stops transmitting might be “dumb” (temporary communication failure due to interference or buffer overflow) rather than permanently failed. Restarting or replacing a dumb node wastes maintenance resources – the framework distinguishes 6 behavior classes and 9 failure modes precisely to avoid this. Always check the failure predictor output before dispatching a field team.
Setting a single global trust threshold for blacklisting: Using one fixed threshold (e.g., 0.3) across all deployment environments ignores environmental context. In a high-interference industrial environment, even honest nodes may drop 20-30% of packets due to RF noise, causing their trust scores to fall below an aggressive threshold. Calibrate your blacklisting threshold per deployment by measuring baseline packet loss rates in the target environment first.
Assuming duty cycle savings scale linearly with sleep percentage: Reducing duty cycle from 50% to 25% does not halve energy consumption because the radio’s transition energy (sleep-to-active wake-up cost) becomes a dominant factor at very low duty cycles. Below approximately 0.5% duty cycle, wake-up overhead can consume more energy than the active listening period itself. The framework’s adaptive optimizer accounts for this non-linearity.
Ignoring the “thundering herd” problem during event-driven activation: When an event (fire, intrusion) triggers topology reconfiguration, all nodes within the affected radius (100m default) simultaneously increase their duty cycle to 50-73%. This causes a burst of concurrent transmissions that can overwhelm the channel, leading to collisions and packet loss at exactly the moment when reliable data is most critical. Production deployments must stagger event-response activation with random jitter (10-500ms) per node.
Conflating node-level duty cycle with network-level coverage: A per-node duty cycle of 0.5% sounds like the network is “blind” 99.5% of the time, but with 3-5 overlapping sensors per coverage point on staggered schedules, the network-level detection probability remains above 99.9%. Always evaluate duty cycle impact at the network level, not the individual node level.
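The non-linearity pitfall above can be made concrete with a minimal energy model that charges a fixed wake-up cost once per cycle. The 2 mJ transition energy, 5-second cycle, and 3 V supply are illustrative assumptions, not measured values from the chapter:

```python
def avg_current_ma(duty: float, active_ma: float = 25.0,
                   sleep_ma: float = 0.01, wakeup_mj: float = 2.0,
                   cycle_s: float = 5.0, supply_v: float = 3.0) -> float:
    """Average current (mA) including one sleep->active transition per cycle."""
    # steady-state listening/sleeping contribution
    steady = duty * active_ma + (1 - duty) * sleep_ma
    # wake-up energy once per cycle, converted to an average current:
    # mJ / V = mC of charge; mC / s = mA
    wakeup_ma = (wakeup_mj / supply_v) / cycle_s
    return steady + wakeup_ma

for duty in (0.50, 0.25, 0.005):
    print(f"duty {duty:.3f}: {avg_current_ma(duty):.3f} mA")
```

At 50% and 25% duty the wake-up term is negligible, but at 0.5% duty it is roughly equal to the entire listening budget, which is exactly why halving an already-tiny duty cycle does not halve consumption.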
16.6 Visual Reference Gallery
Explore these AI-generated visualizations that complement the sensor behaviors production concepts covered in this chapter. Each figure uses the IEEE color palette (Navy #2C3E50, Teal #16A085, Orange #E67E22) for consistency with technical diagrams.
Visual: Sensor Node Components
Basic sensor node building blocks
This visualization illustrates the foundational sensor node architecture referenced in the production framework, showing the components that can exhibit various behavior types.
Visual: Sensor Field Deployment
Sensor field deployment topology
This figure depicts sensor field deployments covered in the production framework, showing how node behaviors affect overall network coverage and reliability.
Visual: Sensor Network Clustering
Cluster-based sensor network
This visualization shows the cluster-based architecture underlying trust management systems, where cluster heads monitor member behavior and aggregate reputation scores.
Visual: Sensor Data Fusion Pipeline
Sensor fusion processing pipeline
This figure illustrates the data processing pipeline discussed in the production framework, showing how InTSeM filtering and anomaly detection integrate with behavior classification.
Worked Example: Calculating Energy Savings from Adaptive Duty Cycling
Scenario: A 30-node forest fire monitoring network uses adaptive duty cycling. Calculate the battery life extension compared to always-on operation.
Given:
Battery capacity: 2000 mAh
Normal duty cycle: 1% baseline (awake 14.4 min/day)
Event duty cycle: 73% during fire alerts (awake 17.5 hours/day)
Radio TX: 30 mA, RX: 20 mA, Sleep: 0.01 mA
Sensor: 0.5 mA active, 0.001 mA sleep
MCU: 5 mA active, 0.5 mA sleep
Energy Calculation (Normal Operation - 1% duty cycle):
Awake time (1% = 14.4 min/day):
Radio: (30 mA TX + 20 mA RX) / 2 * 14.4/1440 = 25 mA * 0.01 = 0.25 mA avg
Sensor: 0.5 mA * 0.01 = 0.005 mA avg
MCU: 5 mA * 0.01 = 0.05 mA avg
Sleep time (99% = 1425.6 min/day):
Radio: 0.01 mA * 0.99 = 0.0099 mA avg
Sensor: 0.001 mA * 0.99 = 0.00099 mA avg
MCU: 0.5 mA * 0.99 = 0.495 mA avg
Total average: 0.25 + 0.005 + 0.05 + 0.0099 + 0.00099 + 0.495 = 0.81 mA
Battery life: 2000 mAh / 0.81 mA = 2,469 hours = 103 days
Energy Calculation (Always-On Operation):
Radio: 25 mA continuous
Sensor: 0.5 mA continuous
MCU: 5 mA continuous
Total: 30.5 mA
Battery life: 2000 mAh / 30.5 mA = 65.6 hours = 2.7 days
Savings: 103 days vs 2.7 days = 38x longer battery life
Event Impact (assume 0.1% of time in fire alert mode at 73% duty):
Normal (99.9%): 0.81 mA * 0.999 = 0.809 mA
Event (0.1%): 22.4 mA * 0.001 = 0.022 mA
Total with events: 0.831 mA
Battery life: 2000 / 0.831 = 2,407 hours = 100 days
Key Insight: Even with aggressive event response (73% duty cycle during alerts), the network achieves 37x battery life improvement because fires occupy <0.1% of operational time.
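The worked example's arithmetic can be reproduced directly; this sketch only re-implements the duty-cycle-weighted averages from the calculation above:

```python
def avg_current(duty: float, radio_active: float = 25.0,
                radio_sleep: float = 0.01, sensor_active: float = 0.5,
                sensor_sleep: float = 0.001, mcu_active: float = 5.0,
                mcu_sleep: float = 0.5) -> float:
    """Duty-weighted average current (mA) for the component values above."""
    active = radio_active + sensor_active + mcu_active  # 30.5 mA total
    sleep = radio_sleep + sensor_sleep + mcu_sleep      # 0.511 mA total
    return duty * active + (1 - duty) * sleep

normal = avg_current(0.01)                  # 1% baseline duty
event = avg_current(0.73)                   # 73% alert duty
combined = 0.999 * normal + 0.001 * event   # alerts active 0.1% of the time
days = 2000 / combined / 24                 # 2000 mAh battery
print(f"{normal:.2f} mA normal, {event:.1f} mA event, {days:.0f} days")
```

The printed values (~0.81 mA, ~22.4 mA, ~100 days) match the example, and swapping `duty=1.0` into `avg_current` recovers the 30.5 mA always-on figure.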
16.6.1 Interactive: Adaptive Duty Cycle Battery Life Calculator
(Interactive calculator omitted in this format: it reports normal-mode and event-mode average current, the combined average, and battery life in days for adaptive versus always-on operation.)
Decision Framework: Duty Cycle Selection Based on Application Requirements
| Application Type | Event Frequency | Detection Latency | Recommended Duty Cycle | Battery Life (2000 mAh) | Reasoning |
|---|---|---|---|---|---|
| Wildfire Detection | <0.01% | <60 seconds | 0.5% baseline, 73% event | 100+ days | Rare events, can tolerate 30-60s delay |
| Industrial Vibration | 1-5% | <5 seconds | 5% baseline, 90% event | 60 days | More frequent events, faster response |
| Motion Security | 0.1-1% | <10 seconds | 2% baseline, 80% event | 80 days | Moderate frequency, moderate latency |
| Temperature Monitoring | Continuous | <30 seconds | 10% baseline, 50% event | 40 days | Slowly changing, less aggressive cycling |
| Safety-Critical | <0.001% | <1 second | 20% baseline, 100% event | 20 days | Cannot miss any event, fast response |
How to choose:
Identify event frequency: What percentage of time is an event occurring?
Rare (<0.1%): Aggressive sleep (0.5-1% duty)
Occasional (0.1-5%): Moderate sleep (5-10% duty)
Frequent (>5%): Light sleep (10-20% duty)
Define latency requirement: How quickly must you detect events?
<1 second: 20%+ duty cycle
<10 seconds: 2-5% duty cycle
<60 seconds: 0.5-2% duty cycle
Calculate spatial redundancy: How many overlapping sensors cover each point?
1-2 sensors: Higher per-node duty cycle (10%+)
3-5 sensors: Can use lower per-node duty (1-2%) with staggered wake
5+ sensors: Very low per-node duty (0.5-1%)
Balance with battery constraint: How often can you replace batteries?
2-year target: <1 mA average current
1-year target: <2.5 mA average current
6-month target: <4 mA average current
Example: Forest fire detection with 5 overlapping sensors per point, <60s latency, 2-year battery life → Use 0.5% baseline duty with staggered wake times across neighbors.
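The selection rules above can be encoded as a small helper that takes the larger (more conservative) of the frequency-based and latency-based recommendations. The function name and the tie-breaking rule are this sketch's own choices:

```python
def baseline_duty(event_freq_pct: float, latency_s: float) -> float:
    """Recommend a baseline duty cycle (%) from the decision rules above.
    Returns the more conservative (higher) of the two recommendations."""
    # Rule 1: event frequency
    if event_freq_pct < 0.1:
        by_freq = 0.5    # rare events: aggressive sleep
    elif event_freq_pct <= 5:
        by_freq = 5.0    # occasional events: moderate sleep
    else:
        by_freq = 10.0   # frequent events: light sleep
    # Rule 2: detection latency
    if latency_s < 1:
        by_latency = 20.0
    elif latency_s < 10:
        by_latency = 2.0
    else:
        by_latency = 0.5
    return max(by_freq, by_latency)

print(baseline_duty(0.01, 60))  # wildfire case: rare events, 60 s latency
```

For the forest-fire example (rare events, 60-second latency budget) this returns 0.5%, matching the table; spatial redundancy and battery constraints would then adjust the final figure.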
Common Mistake: Thundering Herd Problem During Event Activation
The Error: When a fire is detected, all 30 nodes within 100m simultaneously increase duty cycle from 1% to 73%. This causes:
30 nodes transmit at once → massive packet collisions
Channel utilization jumps from 1% to 73% instantly → congestion
Packets dropped during critical alert period
Why This Is Dangerous: The moment when reliable data is MOST critical becomes the moment when the network is LEAST reliable due to self-inflicted congestion.
The Fix: Stagger event activation with random jitter:
```python
import random
import time

def activate_event_mode(node, event_location, affected_radius=100):
    # Calculate distance to event
    distance = calculate_distance(node.position, event_location)
    if distance <= affected_radius:
        # Random jitter: 0-500 ms base, plus up to 500 ms scaled by distance
        jitter_ms = random.randint(0, 500) + (distance / affected_radius * 500)
        time.sleep(jitter_ms / 1000.0)
        # Now increase duty cycle and sampling rate
        node.duty_cycle = 0.73
        node.sampling_rate = 0.2  # seconds between samples (5 Hz)
```
Staggered Activation Pattern:
Time 0ms: Event detected by Node A
Time 100ms: Node B activates (50m away, 100ms jitter)
Time 250ms: Node C activates (75m away, 250ms jitter)
Time 400ms: Node D activates (90m away, 400ms jitter)
...
Time ~1000ms: All 30 nodes active, with activations spread across the jitter window rather than clustered at a single instant
Collision Reduction: Without jitter, 30 simultaneous transmissions = 95% collision rate. With 500ms jitter, transmissions spread over time = <10% collision rate.
Key Insight: Even critical events need orderly response. Random jitter prevents synchronized stampedes.
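A toy Monte Carlo makes the jitter effect visible: each node sends one short packet, and we count packets whose airtime overlaps a neighbor's. The 1 ms packet length and trial count are illustrative assumptions, so the exact percentages will differ from the figures quoted above, but the ordering holds:

```python
import random

def collision_fraction(n=30, jitter_ms=500, pkt_ms=1, trials=200, seed=1):
    """Toy model: fraction of n packets (pkt_ms long) whose airtime overlaps
    another's, when start times are spread uniformly over jitter_ms."""
    rng = random.Random(seed)
    collided = total = 0
    for _ in range(trials):
        starts = sorted(rng.uniform(0, jitter_ms) for _ in range(n))
        for i, s in enumerate(starts):
            total += 1
            overlaps_prev = i > 0 and starts[i - 1] + pkt_ms > s
            overlaps_next = i < n - 1 and s + pkt_ms > starts[i + 1]
            if overlaps_prev or overlaps_next:
                collided += 1
    return collided / total

print(round(collision_fraction(jitter_ms=500), 2))  # jittered: low
print(round(collision_fraction(jitter_ms=1), 2))    # simultaneous: ~1.0
```

With effectively simultaneous starts nearly every packet overlaps another; spreading starts over 500 ms drops the overlap fraction by an order of magnitude.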
16.7 Concept Check
Quick Check: Adaptive Duty Cycling Energy Calculation
Scenario: A 30-node forest fire WSN uses adaptive duty cycling. Normal baseline: 1% duty (awake 14.4 min/day). Event mode: 73% duty during fire alerts. Events occur 0.1% of the time. Calculate average energy consumption.
Normal operation (99.9% of time): 0.81 mA average (radio contributes ~0.26 mA at 1% duty). Event operation (0.1% of time): 22.4 mA average (radio ~18.3 mA at 73% duty).
Result: Battery life (2000 mAh / 0.831 mA) = 2,407 hours = 100 days. Compare to always-on (30.5 mA average) = 2.7 days. Adaptive duty cycling achieves 37x longer battery life.
16.8 Concept Relationships
The production framework integrates concepts across multiple WSN disciplines:
To Failure Prediction (Testing and Validation): Predictive maintenance thresholds (battery <30%, temperature >60C, memory >90%) enable early warning before catastrophic failure - moving from reactive “replace when dead” to proactive “schedule maintenance during downtime.”
To Trust and Security (IoT Security): Reputation-based trust management with five levels (Trusted >0.8 to Blacklisted <0.3) provides defense against selfish and malicious behaviors without requiring cryptographic overhead at every packet.
To Energy Optimization (Energy-Aware Design): Adaptive duty cycling (1%-100%) demonstrates the energy-latency tradeoff - 98% sleep during normal periods (high efficiency) vs 50-100% active during alerts (low latency).
To Event-Driven Architecture (Topology Management): Event-aware topology adaptation (activating nodes within 100m radius at 50-73% duty) shows how network density scales dynamically with event criticality while maintaining coverage guarantees.
Edge Computing - Distributed processing enabling local failure detection and trust scoring without cloud dependencies
Fog Fundamentals - Intermediate tier for cross-sensor correlation and neighborhood-level trust aggregation
Key Concepts
Sensor Production Framework: A structured methodology for deploying IoT sensors into production, covering hardware selection, calibration, firmware validation, network integration, data quality verification, and ongoing maintenance procedures
Calibration: The process of adjusting sensor output to match a known reference standard, quantifying and correcting for offset, gain error, and non-linearity before production deployment
Sensor Characterization: Measuring a sensor’s actual performance parameters (noise floor, drift rate, cross-sensitivity, response time) against datasheet specifications to verify fitness for the specific deployment environment
Production Validation: A systematic verification process confirming that each sensor unit meets performance requirements before field installation, using automated test fixtures and statistical acceptance criteria
Graceful Degradation: A sensor production design principle where partial system failures (one sensor offline, network partitioned) reduce capability without total failure – the system continues delivering value with reduced fidelity
Sensor Health Monitoring: Ongoing automated checks on deployed sensors measuring reading statistics (mean, variance, range), communication reliability, and data quality scores to detect sensor degradation before it causes mission failure
Commissioning: The structured process of installing, connecting, configuring, and verifying a sensor in its final deployment location – distinct from lab validation, accounting for real-world environmental factors
Data Sheet vs Typical Performance: The distinction between manufacturer-guaranteed minimum specifications (data sheet) and realistic average performance (typical) – production framework decisions should use typical performance for system design and data sheet limits for safety margins
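The Calibration concept above (correcting offset and gain against known references) can be sketched as a standard two-point linear calibration. The raw and reference readings below are made-up illustration values, not data from any sensor in the chapter:

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Fit (gain, offset) so that corrected = gain * raw + offset,
    using two raw readings taken at known reference conditions."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Hypothetical sensor reads 2.0 at a 0 C reference and 101.0 at 100 C:
gain, offset = two_point_calibration(2.0, 101.0, 0.0, 100.0)
corrected = gain * 51.5 + offset  # correct a mid-range raw reading
print(round(gain, 4), round(offset, 4), round(corrected, 1))
```

Two points capture offset and gain error only; the non-linearity mentioned in the definition would require additional calibration points and a polynomial or piecewise fit.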
16.10 Summary and Key Takeaways
This production framework provides comprehensive tools for real-world WSN deployments, integrating five subsystems into a unified behavior management platform.
Sensor Data Processing Pipeline
Flowchart diagram
Figure 16.4: Sensor data processing pipeline showing complete workflow from sensing to transmission with energy-aware decision points. The pipeline includes InTSeM filtering (50-90% transmission reduction by skipping low-information readings), trust-based routing participation (reputation score > 0.5 required), social signal integration for event prioritization, adaptive duty cycling based on battery levels (reduced sampling when < 30%), and event-driven topology reconfiguration (50-100% duty during alerts vs 0.1% when sleeping) for optimal energy efficiency.
Key Takeaways:
Six-Class Behavior Model – The framework classifies every sensor node into one of six categories (Normal, Degraded, Failed, Dumb, Selfish, Malicious), each triggering distinct automated responses from continued operation to immediate isolation.
Trust-Based Security – Watchdog nodes monitor packet forwarding rates, feeding reputation scores that decay with misbehavior across 5 trust levels (Trusted > 0.8 to Blacklisted < 0.3). This provides gradual, proportional response rather than binary trust decisions.
Adaptive Energy Management – Event-driven duty cycle adjustment combined with social sensing achieves 81% energy savings over always-on operation. Nodes maintain 1% baseline duty cycle during quiet periods and ramp to 73% during active events.
Failure Prediction – Nine distinct failure modes (battery depletion, overheating, memory corruption, sensor malfunction, and others) are diagnosed with predictive thresholds, enabling proactive maintenance before complete node failure.
Production Readiness – The complete Python implementation includes 6 simulation examples with expected output, demonstrating behavior monitoring, adaptive duty cycling, topology reconfiguration, failure prediction, reputation security, and integrated management across 30-node networks.
16.11 Further Reading
Node Behavior and Security:
Karlof, C., & Wagner, D. (2003). “Secure routing in wireless sensor networks: Attacks and countermeasures.” Ad Hoc Networks, 1(2-3), 293-315.
Stajano, F., & Anderson, R. (1999). “The resurrecting duckling: Security issues for ad-hoc wireless networks.” Security Protocols Workshop.
Duty Cycle and Energy Management:
Ye, W., Heidemann, J., & Estrin, D. (2002). “An energy-efficient MAC protocol for wireless sensor networks.” IEEE INFOCOM.
Tang, L., et al. (2011). “PW-MAC: An energy-efficient predictive-wakeup MAC protocol for wireless sensor networks.” IEEE INFOCOM.
Social Sensing:
Sakaki, T., et al. (2010). “Earthquake shakes Twitter users: Real-time event detection by social sensors.” WWW Conference.
Aggarwal, C. C., & Abdelzaher, T. (2013). “Social sensing.” Managing and Mining Sensor Data, Springer.