5  Selfish & Malicious Nodes

In 60 Seconds

Just 10-15% of nodes behaving selfishly can reduce WSN throughput by 40%, while a single well-placed black hole attacker can isolate 30% of a network. Selfish nodes respond to incentives (reputation systems, virtual currency) and can be rehabilitated, but malicious nodes require cryptographic defenses and must be excluded – never apply the same mitigation strategy to both behavior types.

5.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Detect selfish behavior: Implement watchdog-based reputation systems that identify nodes prioritizing self-interest over network cooperation within 50-100 observed packets
  • Calculate reputation scores: Apply the EWMA formula R(t) = alpha x observation + (1-alpha) x R(t-1) to track node cooperation levels and trigger isolation at configurable thresholds
  • Identify malicious attacks: Recognize and differentiate black hole, sinkhole, wormhole, and Sybil attack signatures in sensor network traffic patterns
  • Design mitigation strategies: Select and deploy appropriate defense mechanisms including multi-path routing, authenticated updates, and cryptographic verification for each attack type
  • Apply game theory concepts: Analyze Nash equilibrium implications of selfish behavior to design incentive-compatible protocols that reward cooperation
  • Evaluate detection accuracy: Assess false positive and false negative rates in reputation-based detection systems across different network densities and traffic conditions

MVU: Minimum Viable Understanding

Core concept: In wireless sensor networks, some nodes intentionally misbehave – either selfishly (to save energy) or maliciously (to disrupt the network). Detecting and mitigating these behaviors is critical for network reliability.

Why it matters: A single malicious node can disrupt an entire WSN deployment. Research shows that just 10-15% of nodes behaving selfishly can reduce network throughput by 40%, while a single well-placed black hole attacker can isolate 30% of a network.

Key takeaway: Selfish nodes respond to incentives (reputation systems, virtual currency) and can be rehabilitated. Malicious nodes require cryptographic defenses (authentication, secure routing) and must be excluded. Never apply the same mitigation strategy to both.

5.2 Prerequisites

Before diving into this chapter, you should be familiar with:

5.3 Introduction: Intentional Misbehavior

Key Concepts
  • Selfish Node: A node that prioritizes self-interest (energy conservation) over network cooperation
  • Malicious Node: A node deliberately disrupting network operation through active attacks
  • Reputation System: Mechanism for tracking and scoring node cooperation behavior over time
  • Black Hole Attack: Malicious node drops all packets after attracting traffic
  • Sybil Attack: Single node presents multiple fake identities to control consensus

Selfish nodes are like classmates who do their own homework but refuse to help others during group projects. They are not trying to hurt the team - they are just lazy or trying to save effort for themselves.

Malicious nodes are like bullies who offer to deliver your homework to the teacher, then throw it in the trash. They are actively trying to hurt the team.

Behavior               | Selfish Node              | Malicious Node
Goal                   | Save battery, live longer | Disrupt network, cause harm
Forwards own data      | Yes (needs network)       | Maybe not (does not care)
Forwards others’ data  | Only when watched         | Never or selectively
Response to monitoring | Improves behavior         | No change
Recovery possible      | Yes (with incentives)     | No (requires exclusion)

Detection strategy: If a node forwards more packets when neighbors are watching, it is probably selfish (responding to social pressure). If it drops packets regardless of monitoring, it is probably malicious (deliberately attacking).

Misconception: Students often assume selfish nodes and malicious nodes are the same - both drop packets, so they must be equally harmful.

Reality: Selfish and malicious nodes have fundamentally different motivations and behaviors:

Behavior               | Selfish Node                                 | Malicious Node
Motivation             | Extend own lifetime (rational self-interest) | Disrupt network (active attack)
Forwarding rate        | 40-60% (drops when unmonitored)              | 0-20% (drops always or selectively)
Response to monitoring | Forwards 95%+ when neighbors watch           | Continues dropping even when monitored
Own packets            | Always forwards own data                     | May drop own data
Recovery               | Behavior improves when reputation is low     | Never improves
Network impact         | Gradual degradation                          | Catastrophic failure

Key insight: Selfish nodes respond to incentives (reputation, exclusion threat). Malicious nodes require cryptographic defenses (authentication, encryption). Different mitigation strategies are needed.

Some sensors in a network do not play fair – let us find out why!

5.3.1 The Sensor Squad Adventure: The Lazy Letter Carrier

Imagine a neighborhood where houses pass letters to each other to get messages to the post office far away. Each house uses a bit of energy (like snacks!) every time it passes a letter along.

Sammy the Selfish Sensor is running low on snacks. He thinks: “If I stop passing other people’s letters and only send my own, my snacks will last much longer!” So when Lila asks Sammy to pass her letter, he pretends he is too tired.

Max the Malicious Sensor is different. Max actually WANTS to cause trouble! He tells everyone: “Give me your letters, I know the fastest route to the post office!” But when he gets the letters, he throws them in the trash! Max is a letter black hole.

Bella the Detective notices something strange: - Sammy only refuses to pass letters when nobody is watching. When Bella stands next to him, he passes letters just fine! (That is how you spot a selfish sensor.) - Max throws away letters whether Bella watches or not. He does not care about being caught! (That is how you spot a malicious sensor.)

5.3.2 How Do We Fix It?

Problem       | Solution
Sammy is lazy | Give Sammy a gold star every time he passes a letter. If his stars drop too low, nobody helps him either!
Max is mean   | Give every letter a secret code. If Max changes or destroys the letter, everyone will know!

The Big Lesson: Lazy sensors need encouragement (rewards and punishments). Mean sensors need locks and secret codes (security).

5.3.3 Key Words for Kids

Word           | What It Means
Selfish Node   | A sensor that is lazy and does not want to help pass messages to save its own battery
Malicious Node | A sensor that is mean and tries to mess up the network on purpose
Reputation     | A score that tracks how helpful each sensor has been – like gold stars!
Black Hole     | A mean sensor that swallows up all the messages and never delivers them
Sybil Attack   | When one mean sensor pretends to be lots of different sensors at the same time

5.4 Selfish Nodes

Time: ~12 min | Difficulty: Intermediate | Unit: P05.C14.U04

Definition:
Nodes that prioritize self-interest (energy conservation, resource preservation) over network cooperation
Decision tree showing how a selfish node decides whether to forward or drop packets based on monitoring and battery level
Figure 5.1: Selfish node packet forwarding decision tree

5.4.1 Selfish Behaviors

  1. Packet dropping: Refuse to forward others’ packets
  2. Route advertisement refusal: Do not participate in route discovery
  3. Lazy sensing: Skip sensing cycles to save power
  4. False battery reports: Claim low battery to avoid relay duty
  5. Opportunistic sleep: Sleep longer than protocol requires

This timeline shows how selfish behavior evolves over time as battery depletes:

Timeline showing gradual evolution of selfish node behavior as battery depletes from cooperative through transition to fully selfish

Key Insight: Selfish behavior is often gradual, not sudden. Reputation systems can detect the transition phase before full selfishness occurs.

5.4.2 Economic Rationality of Selfishness

Why Selfishness Makes Sense (For the Node)

From the selfish node’s perspective, cooperation has costs:

Energy Budget:

E_total = E_own_sensing + E_own_TX + E_relay_RX + E_relay_TX

Selfish strategy: Minimize E_relay to maximize lifetime
Cooperative strategy: Accept E_relay as network duty

Lifetime Calculation:

def calculate_lifetime(battery_mah, cooperative=True):
    """Calculate node lifetime with/without cooperation"""

    # Average energy consumption per hour of operation
    E_sense_TX = 50  # mAh per hour (sensing + transmitting own data)
    E_relay = 30     # mAh per hour (forwarding for others)

    if cooperative:
        hourly_consumption = E_sense_TX + E_relay
    else:
        hourly_consumption = E_sense_TX  # Selfish: no relay duty

    lifetime_hours = battery_mah / hourly_consumption
    return lifetime_hours

battery = 5000  # mAh

coop_lifetime = calculate_lifetime(battery, cooperative=True)
selfish_lifetime = calculate_lifetime(battery, cooperative=False)

print(f"Cooperative node lifetime: {coop_lifetime:.1f} hours ({coop_lifetime/24:.1f} days)")
print(f"Selfish node lifetime: {selfish_lifetime:.1f} hours ({selfish_lifetime/24:.1f} days)")
print(f"Selfish benefit: +{(selfish_lifetime/coop_lifetime - 1)*100:.1f}% lifetime")

Output:

Cooperative node lifetime: 62.5 hours (2.6 days)
Selfish node lifetime: 100.0 hours (4.2 days)
Selfish benefit: +60.0% lifetime

The Tragedy of the Commons: If all nodes act selfishly, the network collapses (no one forwards packets). But individual selfish nodes benefit at the expense of cooperative nodes.

5.4.3 Detection and Mitigation

Reputation system flowchart showing monitoring of forwarding rates, EWMA score calculation, and threshold-based exclusion
Figure 5.2: Reputation-based selfish node detection system

Reputation Calculation:

\[ Reputation_i(t) = \frac{Packets_{forwarded}}{Packets_{requested}} \cdot \alpha + Reputation_i(t-1) \cdot (1 - \alpha) \]

Where:

  • \(\alpha\) = learning rate (e.g., 0.3)
  • Reputation is in range [0, 1]
  • Low reputation leads to node being avoided by routing protocols

Track how a selfish node’s reputation degrades over time with \(\alpha = 0.3\):

Initial state: \(R_0 = 0.85\) (trusted), node forwards 60% of packets

Iteration 1: \[R_1 = 0.60 \times 0.3 + 0.85 \times 0.7 = 0.18 + 0.595 = 0.775\]

Iteration 2 (continued 60% forwarding): \[R_2 = 0.60 \times 0.3 + 0.775 \times 0.7 = 0.18 + 0.543 = 0.723\]

Iteration 3: \[R_3 = 0.60 \times 0.3 + 0.723 \times 0.7 = 0.18 + 0.506 = 0.686\]

The reputation drops from 0.85 to 0.69 after just 3 intervals (each ~1 hour), approaching the 0.5 “suspicious” threshold. If the node improves to 95% forwarding:

\[R_4 = 0.95 \times 0.3 + 0.686 \times 0.7 = 0.285 + 0.480 = 0.765\]

Recovery requires sustained cooperation: three intervals at 60% forwarding dropped the score to 0.686, while regaining trusted status (>0.8) requires \(0.95 \times 0.3 + R_{prev} \times 0.7 > 0.8\), i.e. \(R_{prev} > 0.736\). Starting from 0.686, that takes two consecutive intervals at 95% forwarding.
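The same update rule can be sketched in a few lines of Python (a minimal sketch; the function name and the 100-packet interval size are illustrative):

```python
def update_reputation(prev, forwarded, requested, alpha=0.3):
    """One EWMA step: R(t) = (forwarded/requested) * alpha + R(t-1) * (1 - alpha)."""
    return (forwarded / requested) * alpha + prev * (1 - alpha)

R = 0.85                                       # initial trusted score
for _ in range(3):                             # three intervals at 60% forwarding
    R = update_reputation(R, forwarded=60, requested=100)
print(round(R, 3))                             # 0.686 - approaching suspicion

R = update_reputation(R, forwarded=95, requested=100)
print(round(R, 3))                             # 0.765 - recovery has begun
```

Running the loop reproduces the hand-calculated sequence above (0.775, 0.723, 0.686), confirming that one cooperative interval is not enough to restore trusted status.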


Worked Example: Detecting a Selfish Node Through Reputation Monitoring

Scenario: An agricultural WSN has 50 nodes monitoring soil moisture. Node 17 has been operating for 8 months and its battery dropped to 25%. Network administrators notice packet delivery rates decreasing in the region around Node 17.

Given:

  • Node 17’s previous reputation: R(t-1) = 0.85
  • Learning rate: alpha = 0.3
  • In the last monitoring interval, Node 17 was asked to forward 80 packets
  • Node 17 actually forwarded only 48 packets
  • Exclusion threshold: R < 0.50

Steps:

  1. Calculate current forwarding ratio:
    • Forwarding ratio = 48/80 = 0.60 (60%)
  2. Apply EWMA reputation formula:
    • R(t) = (Packets_forwarded / Packets_requested) x alpha + R(t-1) x (1-alpha)
    • R(t) = 0.60 x 0.3 + 0.85 x 0.7
    • R(t) = 0.18 + 0.595 = 0.775
  3. Evaluate against thresholds:
    • New reputation: 0.775 (above 0.50 threshold)
    • Node remains in network but is flagged for monitoring
    • If this pattern continues for 3 more intervals:
      • Interval 2: R = 0.60 x 0.3 + 0.775 x 0.7 = 0.723
      • Interval 3: R = 0.60 x 0.3 + 0.723 x 0.7 = 0.686
      • Interval 4: R = 0.60 x 0.3 + 0.686 x 0.7 = 0.660

Result: Node 17’s reputation dropped from 0.85 to 0.775 in one interval. Continued monitoring will either force improved behavior or eventual exclusion.

Key Insight: The EWMA formula provides graceful degradation - a single bad interval does not cause immediate exclusion, but persistent selfish behavior accumulates penalties.

5.4.4 Incentive Mechanisms

  1. Tit-for-tat: “I forward for you only if you forward for me”
  2. Virtual currency: Nodes earn credits by forwarding, spend credits to send
  3. Reciprocity: Track bilateral cooperation ratios
  4. Exclusion threat: Selfish nodes lose network access for their own traffic
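Mechanism 2 (virtual currency) can be sketched as a toy model; the class name and the reward/cost values below are illustrative assumptions, not a standard protocol:

```python
class CreditAccount:
    """Virtual-currency incentive sketch: a node earns credits by relaying
    packets for others and must spend credits to send its own traffic."""

    def __init__(self, credits=0):
        self.credits = credits

    def relayed_packet(self, reward=1):
        self.credits += reward            # paid for cooperation

    def try_send(self, cost=2):
        if self.credits < cost:
            return False                  # broke: must relay for others first
        self.credits -= cost
        return True

node = CreditAccount()
print(node.try_send())        # False - no credits earned yet
node.relayed_packet()
node.relayed_packet()
print(node.try_send())        # True - forwarding bought the right to send
```

Setting the send cost above the relay reward forces a node to forward more packets than it originates, which is exactly the cooperation the network needs from relay-heavy positions.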

5.4.5 Game Theory Perspective: The Prisoner’s Dilemma in WSNs

Selfish behavior in WSNs maps directly to the classic Prisoner’s Dilemma from game theory. Each node decides independently whether to cooperate (forward packets) or defect (drop packets):

                  | Node B Cooperates               | Node B Defects
Node A Cooperates | Both benefit (network works)    | A wastes energy, B saves energy
Node A Defects    | A saves energy, B wastes energy | Network collapses (both lose)

Nash Equilibrium Problem: Without enforcement, the rational strategy for each node is to defect (be selfish), even though mutual cooperation produces the best collective outcome. This is why reputation systems and incentive mechanisms are not optional – they shift the equilibrium toward cooperation.

Repeated Game Dynamics: In WSNs, nodes interact repeatedly over time. This transforms the one-shot Prisoner’s Dilemma into an iterated game, where strategies like tit-for-tat become viable:

  • Round 1: Cooperate (give benefit of the doubt)
  • Round N: Copy what the neighbor did in Round N-1
  • Forgiveness: Occasionally cooperate even after defection to allow recovery

Research (Srinivasan et al., 2003) demonstrates that cooperation emerges naturally in iterated WSN games when nodes have sufficient future interaction probability (>0.6), making reputation systems effective enforcement mechanisms.
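Tit-for-tat itself is tiny to implement. In this minimal sketch, 'C'/'D' encode cooperate/defect and the helper names are illustrative:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def respond(opponent_moves):
    """Play tit-for-tat against a fixed sequence of opponent moves."""
    history, my_moves = [], []
    for move in opponent_moves:
        my_moves.append(tit_for_tat(history))
        history.append(move)
    return my_moves

print(respond(['C', 'C', 'C']))  # ['C', 'C', 'C'] - sustained cooperation
print(respond(['D', 'D', 'C']))  # ['C', 'D', 'D'] - punishment, one round late
```

The second run shows both properties the text describes: initial benefit of the doubt, then retaliation that lags the defection by exactly one round, leaving the door open for recovery.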

Context: A European smart city project deployed 500 battery-powered air quality sensors managed by different community organizations. Each organization was responsible for maintaining their own nodes.

Problem Observed: After 6 months, packet delivery rates dropped from 95% to 62%. Investigation revealed that organizations with nodes in critical relay positions had quietly reduced forwarding rates to extend battery life, since replacing batteries required volunteer effort.

Measurements:

  • 73 nodes (15%) exhibited forwarding rates below 50%
  • These nodes were concentrated in high-relay-traffic positions
  • Network partitioning occurred in 3 neighborhoods
  • Sensor data gaps exceeded regulatory reporting requirements

Solution Implemented:

  1. Reputation dashboard: Public display of each organization’s forwarding contribution
  2. Fair relay scheduling: Routing protocol adjusted to distribute relay burden more evenly
  3. Battery subsidy: Organizations with high-relay nodes received priority battery replacements
  4. Graduated penalties: Low-reputation organizations lost priority access to aggregated data

Results: Within 2 months, packet delivery recovered to 91%. The combination of social pressure (public dashboard) and material incentives (battery subsidy) proved most effective.

Lesson Learned: Selfish behavior in real WSN deployments is often rational and predictable. Design the incentive structure before deployment, not after problems emerge.

5.4.6 Tuning the EWMA Learning Rate

The learning rate \(\alpha\) in the reputation formula controls how quickly the system responds to behavior changes:

Learning Rate (\(\alpha\)) | Behavior                                                | Best For
0.1 (slow)       | Reputation changes slowly; tolerant of occasional drops | Stable networks with rare misbehavior
0.3 (moderate)   | Balanced responsiveness; standard choice                | Most WSN deployments
0.5 (fast)       | Reputation changes rapidly; detects sudden shifts       | High-security or adversarial environments
0.7 (aggressive) | Almost immediate response to current behavior           | Critical infrastructure monitoring

Trade-off: Responsiveness vs. Stability

A high \(\alpha\) catches selfish nodes faster but also produces more false positives – a cooperative node that drops a few packets due to temporary congestion may be incorrectly flagged. A low \(\alpha\) is stable but slow to detect gradually increasing selfishness. Choose \(\alpha\) based on your deployment’s tolerance for false alarms vs. detection delay.
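The trade-off is easy to see numerically. The sketch below (the `reputation_series` helper is hypothetical, and the scenario is illustrative) traces how fast each \(\alpha\) reacts when a previously perfect node suddenly stops forwarding:

```python
def reputation_series(alpha, observations, r0=1.0):
    """Trace EWMA reputation across a sequence of forwarding-ratio observations."""
    scores, r = [], r0
    for obs in observations:
        r = obs * alpha + r * (1 - alpha)
        scores.append(round(r, 3))
    return scores

# A perfect node (R = 1.0) suddenly drops every packet (observation = 0.0):
print(reputation_series(0.1, [0.0] * 5))  # slow: [0.9, 0.81, 0.729, 0.656, 0.59]
print(reputation_series(0.7, [0.0] * 5))  # fast: [0.3, 0.09, 0.027, 0.008, 0.002]
```

With \(\alpha = 0.7\) the node falls below a 0.5 exclusion threshold after a single interval, while \(\alpha = 0.1\) still rates it 0.59 after five intervals of dropping everything; the same speed that catches attackers quickly would also punish a node suffering one interval of congestion.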

5.5 Malicious Nodes

Time: ~15 min | Difficulty: Intermediate | Unit: P05.C14.U05

Definition:
Nodes deliberately disrupting network operation through active attacks
Taxonomy of malicious node attacks in WSNs: black hole, sinkhole, wormhole, Sybil, and selective forwarding
Figure 5.3: Malicious node attack taxonomy

This variant maps attack types to the network layer they target and the defense mechanisms required:

Attack types mapped to targeted network layers with corresponding defense mechanisms at each layer

Security Implementation Priority: Network layer attacks (black hole, sinkhole, wormhole) are most common in WSNs. Implement secure routing with authenticated updates as the first line of defense.

5.5.1 Black Hole Attack

Black hole attack: malicious node advertises false best route, attracts traffic, then silently drops all received packets
Figure 5.4: Black hole attack illustration

Characteristics:

  • Advertises false “best routes” to attract traffic
  • Drops all received packets silently
  • Creates routing black hole (packets disappear)

Impact:

  • Denial of service (communications fail)
  • Energy waste (sources keep retransmitting)
  • Possible data theft (attacker sees packet contents before dropping)

5.5.2 Sinkhole Attack

Attacker makes itself appear attractive as routing parent:

// Example: Malicious node advertising false routing metrics
void maliciousSinkholeAdvertisement() {
    RoutingPacket fake_ad;

    fake_ad.node_id = attacker_id;
    fake_ad.distance_to_gateway = 1;  // LIE: Claim 1-hop to gateway
    fake_ad.link_quality = 255;       // LIE: Perfect link
    fake_ad.battery = 100;            // LIE: Full battery

    // Broadcast fake advertisement
    broadcastRoutingUpdate(&fake_ad);

    // Result: Many nodes will choose attacker as parent
    // Attacker can now eavesdrop, drop, or modify their traffic
}
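The defense is to sanity-check advertised metrics against physical constraints. A minimal Python sketch (the function name, coordinates, and 100 m radio range are illustrative; it assumes each node knows its own position and the gateway's):

```python
import math

def plausible_hop_count(claimed_hops, node_pos, gateway_pos, radio_range_m=100.0):
    """A node claiming `claimed_hops` hops to the gateway cannot be farther
    from it than claimed_hops * radio_range (each hop spans at most one range)."""
    return math.dist(node_pos, gateway_pos) <= claimed_hops * radio_range_m

# Attacker 500 m from the gateway advertises a 1-hop route:
print(plausible_hop_count(1, (500.0, 0.0), (0.0, 0.0)))  # False -> reject advertisement
# An honest 6-hop claim at the same distance is physically possible:
print(plausible_hop_count(6, (500.0, 0.0), (0.0, 0.0)))  # True
```

The check cannot prove a metric is honest, but it cheaply rejects the most aggressive lies, which is exactly what makes a sinkhole attractive to its neighbors.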

5.5.3 Wormhole Attack

Two colluding attackers create “tunnel” to confuse routing:

Wormhole attack with two colluding nodes connected by out-of-band tunnel making distant network areas appear adjacent
Figure 5.5: Wormhole attack with colluding nodes

How it works:

  1. M1 and M2 collude (connected via out-of-band link: Ethernet, Wi-Fi, etc.)
  2. M1 captures packets in Area 1
  3. M1 tunnels packets to M2 via private link (appears to be 1 hop)
  4. M2 re-broadcasts in Area 2
  5. Result: Nodes think Areas 1 and 2 are adjacent (1-2 hops apart)
  6. Routing protocols converge on wormhole as “optimal” path
  7. Attackers can monitor, drop, or modify all tunneled traffic

5.5.4 Sybil Attack

Single malicious node presents multiple identities:

Sybil attack where one physical node presents multiple fake identities to control majority voting in a WSN
Figure 5.6: Sybil attack controlling voting through fake identities

Example:

  • Voting-based protocol: “90% of neighbors agree”
  • Attacker presents 9 fake identities
  • Attacker controls majority vote with single physical node

Real-World Impact: Sybil attacks are particularly dangerous in:

  • Distributed voting systems: Attacker controls consensus outcomes
  • Reputation systems: Fake identities boost malicious node’s reputation
  • Geographic routing: Fake nodes appear to cover large geographic area
  • Data aggregation: Attacker injects fabricated sensor readings from multiple “sources”
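The voting arithmetic is simple to demonstrate (a minimal sketch; the sensor readings and identity counts are illustrative):

```python
from collections import Counter

def majority_vote(votes):
    """Return the most common reading among the claimed neighbor identities."""
    return Counter(votes).most_common(1)[0][0]

honest = ['safe'] * 8                 # eight real sensors agree
sybil = ['intruder'] * 9              # one physical attacker, nine fabricated identities
print(majority_vote(honest + sybil))  # 'intruder' - consensus hijacked
```

Because the protocol counts identities rather than physical devices, one radio outvotes eight honest sensors, which is why Sybil defenses focus on making identities expensive (PKI, resource testing, physical verification).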

5.5.5 Selective Forwarding Attack

Unlike a black hole that drops all packets, selective forwarding is more subtle:

Behavior patterns:
  - Drop packets from specific source nodes (targeted denial)
  - Drop packets containing specific data types (intelligence gathering)
  - Drop packets randomly at 20-30% rate (avoid detection thresholds)
  - Forward packets from allies, drop from targets (coalition attack)

Why This Is Harder to Detect: The attacker maintains a plausible forwarding rate (70-80%), making simple threshold-based detection ineffective. Advanced detection requires:

  1. Per-source forwarding analysis: Track forwarding rates for each source separately, not just aggregate
  2. Consistency checks: Compare what the node receives (overheard by neighbors) vs. what it forwards
  3. Statistical anomaly detection: Identify non-random drop patterns using chi-squared tests
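Per-source analysis (point 1) can be sketched as follows; the function names and the 0.7 flag threshold are illustrative assumptions:

```python
def per_source_rates(observations):
    """observations: iterable of (source_id, was_forwarded) pairs
    overheard by watchdog neighbors for a single suspect relay."""
    totals = {}
    for src, forwarded in observations:
        seen, ok = totals.get(src, (0, 0))
        totals[src] = (seen + 1, ok + int(forwarded))
    return {src: ok / seen for src, (seen, ok) in totals.items()}

# Aggregate forwarding looks healthy (75%), but source 'B' is being targeted:
obs = [('A', True)] * 9 + [('A', False)] + [('B', True)] * 6 + [('B', False)] * 4
rates = per_source_rates(obs)
print(rates)                                    # {'A': 0.9, 'B': 0.6}
print([s for s, r in rates.items() if r < 0.7]) # ['B'] - flag for inspection
```

A single aggregate threshold would pass this relay at 75%; splitting the statistic per source exposes the targeted denial against 'B'.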

5.5.6 Attack Difficulty and Sophistication Comparison

Quadrant chart comparing WSN attack types by sophistication (x-axis) and detection difficulty (y-axis). Black hole attacks are low sophistication and easy to detect. Wormhole and Sybil attacks are high sophistication and hard to detect. Selective forwarding is moderate sophistication but hard to detect.

5.5.7 Mitigation Strategies

Table 5.1: Malicious Node Defenses
Attack     | Detection                                          | Mitigation
Black Hole | Monitor forwarding rates, consistency checks       | Multi-path routing, reputation systems
Sinkhole   | Verify routing metrics independently               | Authenticated routing updates
Wormhole   | Timing analysis (tunneled packets arrive too fast) | Geographic/timing constraints
Sybil      | Identity verification, resource testing            | PKI, physical verification
Jamming    | Detect high collision rates                        | Frequency hopping, spread spectrum

5.5.8 Defense-in-Depth Architecture

No single defense mechanism is sufficient against all attack types. A robust WSN requires layered defenses:

Six-layer defense-in-depth architecture for WSN security. Layer 1 (Physical Security) through Layer 6 (Network Response) shown as a sequential stack. Layers 1-2 in navy represent foundational security, Layers 3-4 in teal represent communication security, and Layers 5-6 in orange represent behavioral monitoring and response.

Implementation Priority for Resource-Constrained WSNs:

  1. Essential (deploy immediately): Layers 2-3 – device authentication and encrypted communication prevent most attacks
  2. Recommended (deploy when possible): Layers 4-5 – secure routing and behavior monitoring catch sophisticated attacks
  3. Advanced (high-security only): Layers 1 and 6 – physical security and automated response require significant investment

Scenario: A precision agriculture WSN with 200 nodes monitors soil moisture across a 50-hectare farm. The nodes are deployed in open fields (physically accessible), use multi-hop routing to 4 gateway nodes, and transmit data every 15 minutes. Recent incidents suggest someone is tampering with nodes near the property boundary.

Threat Assessment:

Threat                                | Likelihood                    | Impact                            | Priority
Physical tampering (node replacement) | HIGH (open field)             | HIGH (inject false data)          | CRITICAL
Black hole attack (compromised relay) | MEDIUM                        | HIGH (data loss in entire region) | HIGH
Selfish behavior (battery depletion)  | HIGH (year 2+)                | MEDIUM (gradual degradation)      | MEDIUM
Wormhole attack                       | LOW (requires sophistication) | HIGH (routing disruption)         | LOW

Recommended Defense Stack:

  1. Authentication: Each node has a pre-shared key (PSK) rotated monthly. New nodes require physical registration.
  2. Secure routing: Use authenticated RPL with version number verification. Nodes validate that routing metrics are physically plausible (cannot claim 1-hop to gateway if GPS shows 500m away).
  3. Reputation monitoring: EWMA with \(\alpha = 0.3\), exclusion threshold at 0.40. Nodes below threshold are routed around.
  4. Multi-path redundancy: Critical sensor zones use 2-path routing. If both paths report data loss, alert the gateway.
  5. Physical inspection schedule: Quarterly inspection of boundary nodes. Tamper-evident seals on node enclosures.

Cost-Benefit: This defense stack adds approximately 15% energy overhead (encryption + reputation monitoring) but prevents an estimated 90% of realistic attack scenarios for this deployment.

5.6 Knowledge Check

Scenario: In a WSN, Node X forwards only 40% of packets it is asked to relay. Neighbor nodes calculate X’s reputation = 0.40 (below 0.50 threshold). Node X claims its battery is at 15% and it is conserving energy for its own critical sensing tasks.

Think about:

  1. Is Node X selfish (rational energy conservation) or malicious (active attack)?
  2. How can the network verify X’s battery claim?
  3. What is the appropriate response: exclusion, reduced trust, or full cooperation?

Key Insight: Node X is likely selfish, not malicious, but verification is crucial.

Distinguishing selfish vs malicious:

  1. Selfish nodes preserve energy for self-interest but respond predictably - forward when monitored, drop when unmonitored
  2. Malicious nodes actively attack regardless of monitoring - may forward 0% or selectively drop critical packets

Verification strategies:

  1. Cross-check battery reports: Monitor X’s transmission power. Strong signal with claimed low battery = lying (malicious)
  2. Monitor duty cycle: Selfish nodes extend sleep periods. Malicious nodes maintain normal activity but drop packets
  3. Behavioral consistency: Offer cooperation incentives. Selfish nodes accept; malicious nodes refuse

Appropriate response:

  • If selfish: Gradual exclusion - route less traffic through X, reserve X’s energy for its own sensing
  • If malicious: Immediate exclusion - broadcast warning to all neighbors

5.7 Comparing Detection Approaches

Different detection and mitigation approaches have distinct strengths and trade-offs:

Approach                      | Energy Overhead   | Detection Rate           | False Positive Rate | Best For
Watchdog (passive monitoring) | Low (5-10%)       | Moderate (70-80%)        | High (15-20%)       | Selfish node detection
EWMA reputation               | Low (5-8%)        | High (85-90%)            | Low (5-10%)         | Gradual behavior change
Cryptographic authentication  | Moderate (15-25%) | Very High (95%+)         | Very Low (<2%)      | Malicious node prevention
Multi-path verification       | High (30-50%)     | High (90%+)              | Low (5%)            | Black hole / sinkhole
Geographic leashing           | Moderate (10-15%) | High for wormholes (90%) | Low (3-5%)          | Wormhole detection
Resource testing              | High (20-30%)     | Moderate (75-85%)        | Moderate (10-15%)   | Sybil detection

Design Rule: Match Defense to Threat

The most common mistake in WSN security design is applying a single defense mechanism uniformly. Instead:

  • For selfish nodes: Reputation + incentives (low overhead, high effectiveness)
  • For malicious routing attacks: Authenticated routing + multi-path (moderate overhead)
  • For identity attacks (Sybil): Hardware-based identity + resource testing (high overhead, deploy selectively)
  • For physical attacks: Tamper detection + redundant deployment (highest overhead, only for critical assets)

Budget your security energy overhead: aim for 15-25% total overhead, allocated across the most likely threats for your deployment scenario.

5.8 Knowledge Check

Test Your Understanding

Question 1: A node forwards 95% of packets when neighbors are monitoring but only 40% when unmonitored. What type of misbehavior is this?

  1. Malicious black hole attack
  2. Random hardware failure
  3. Selfish energy conservation
  4. Sybil attack

c) Selfish energy conservation. The key indicator is the behavioral change in response to monitoring. Selfish nodes respond to social pressure (reputation systems) and increase forwarding when observed. Malicious nodes maintain the same drop rate regardless of monitoring. Hardware failure would show random patterns unrelated to observation.

Question 2: In the EWMA reputation formula \(R_i(t) = \frac{Packets_{forwarded}}{Packets_{requested}} \cdot \alpha + R_i(t-1) \cdot (1-\alpha)\), what happens when \(\alpha\) is set very high (e.g., 0.9)?

  1. The system becomes more tolerant of temporary drops
  2. The system responds quickly but is more prone to false positives
  3. The system ignores recent behavior
  4. The system never detects selfish nodes

b) The system responds quickly but is more prone to false positives. A high \(\alpha\) gives heavy weight to the most recent observation, making the system highly responsive to recent behavior changes. This means a single dropped packet can sharply lower reputation (false positive risk). A lower \(\alpha\) (e.g., 0.3) smooths over temporary fluctuations but takes longer to detect sustained selfishness. The trade-off is responsiveness vs. stability.

Question 3: Which attack is MOST difficult to detect because the attacking node appears to function normally to its immediate neighbors?

  1. Black hole attack
  2. Selective forwarding attack
  3. Wormhole attack
  4. Sybil attack

c) Wormhole attack. In a wormhole attack, two colluding nodes create a tunnel that appears to be a single hop. Both nodes forward packets normally to their neighbors, so local reputation monitoring sees nothing wrong. The attack manipulates routing by making distant nodes appear adjacent, which is invisible to single-hop watchdog systems. Detection requires geographic leashes, temporal leashes, or multi-hop path analysis.

Common Pitfalls

Reputation-based trust systems assign scores to nodes based on observed behavior. A Sybil attack creates multiple fake node identities (all controlled by one adversary) to accumulate high reputation scores while a subset of identities misbehave. Without Sybil resistance (resource-constrained IDs, geographic verification, physical unclonable functions), reputation systems can be gamed – an attacker gets high trust scores on fake nodes then uses them to route or inject malicious data.

Cryptographic countermeasures against selfish nodes (MACs for message authentication, hash chains for freshness) consume energy on every message. A countermeasure that doubles energy consumption to prevent 5% packet loss from selfish nodes may shorten network lifetime more than the selfish behavior itself. Always quantify the energy overhead of security mechanisms against the realistic probability and impact of the attack.

A node that drops 20% of forwarded packets may be: (a) intentionally selfish, (b) experiencing buffer overflow from traffic bursts, (c) experiencing radio interference on specific channels, or (d) running low on battery and triggering power management. Misclassifying unintentional drops as attacks triggers exclusion of functioning (if stressed) nodes. Use multi-hypothesis detection that considers hardware failure and resource constraints alongside attack models.

5.9 Summary

5.9.1 Key Concepts Covered

This chapter covered intentional misbehavior in wireless sensor networks:

  • Selfish Nodes: Nodes that prioritize energy conservation over network cooperation, with 40-60% forwarding rates and conditional cooperation when monitored
  • Economic Rationality: Selfish behavior can extend node lifetime by 60% or more, creating a tragedy of the commons; game theory explains why cooperation requires enforcement
  • Reputation Systems: EWMA-based reputation tracking with tunable learning rate (\(\alpha\)), providing gradual degradation that prevents false positives while catching persistent selfishness
  • Incentive Mechanisms: Tit-for-tat, virtual currency, reciprocity, and exclusion threats to encourage cooperation; iterated game dynamics make these effective in WSNs
  • Malicious Attacks: Black hole, sinkhole, wormhole, Sybil, and selective forwarding attacks with their characteristics, detection difficulty, and impacts
  • Defense-in-Depth: Layered security architecture from physical hardware through behavior monitoring, with priority-based implementation for resource-constrained deployments
  • Detection Accuracy: Understanding the trade-off between responsiveness and false positive rates when tuning detection parameters

5.9.2 The Critical Distinction

Dimension                  | Selfish Nodes                   | Malicious Nodes
Motivation                 | Rational self-interest (energy) | Deliberate disruption
Response to monitoring     | Improve behavior                | No change
Detection method           | Reputation tracking             | Cryptographic verification
Mitigation strategy        | Incentives and social pressure  | Authentication and exclusion
Recovery possible?         | Yes, with proper incentives     | No, requires permanent exclusion
Energy overhead of defense | Low (5-10%)                     | Moderate to High (15-30%)

5.9.3 Key Formulas

Reputation (EWMA): \[Reputation_i(t) = \frac{Packets_{forwarded}}{Packets_{requested}} \cdot \alpha + Reputation_i(t-1) \cdot (1 - \alpha)\]

Selfish Lifetime Gain: \[Gain = \frac{E_{total}}{E_{own}} - 1 = \frac{E_{relay}}{E_{own}}\]

5.10 Concept Relationships

Prerequisites:

Builds Upon:

  • EWMA reputation formulas for tracking cooperation
  • Game theory (Prisoner’s Dilemma, Nash equilibrium)
  • Multi-layer defense-in-depth architecture

Enables:

  • Dumb Nodes and Recovery - Environmental failures
  • Secure WSN design with authenticated routing
  • Incentive-compatible protocol design

Related Concepts:

  • Selfish vs malicious behavior distinction
  • Black hole, sinkhole, wormhole, Sybil attack patterns
  • Cryptographic defenses vs reputation-based detection

5.11 See Also

Security Deep Dives:

Practical Application:

5.12 What’s Next

If you want to…                                                           | Read this
Understand how to classify and detect misbehaving nodes                   | Node Behavior Classification
Study the full taxonomy of node behaviors from cooperative to adversarial | Node Behavior Taxonomy
Implement trust mechanisms to handle selfish and malicious nodes          | Sensor Behaviors Trust Implementation
Learn recovery strategies for compromised network nodes                   | Dumb Recovery Strategies
See WSN security applied in mine safety case studies                      | Sensor Behaviors Mine Safety