23  Intrusion Detection for IoT

23.1 Intrusion Detection Systems for IoT

This chapter covers intrusion detection and prevention systems (IDS/IPS) for IoT environments, including signature-based and anomaly-based detection, deployment strategies, and practical knowledge checks.

23.2 Learning Objectives

By the end of this chapter, you will be able to:

  • Differentiate signature-based and anomaly-based detection methods
  • Evaluate network-based versus host-based IDS suitability for IoT
  • Architect IDS deployment strategies for IoT networks
  • Calibrate detection thresholds to balance alerts and false positives
  • Formulate incident response procedures for detected security events

Key Concepts

  • NIDS (Network Intrusion Detection System): A system monitoring network traffic for patterns matching known attack signatures or statistical anomalies, deployed at network boundaries or on span ports to inspect IoT device communications.
  • HIDS (Host Intrusion Detection System): Intrusion detection running on the device itself, monitoring for suspicious process activity, file system changes, or unusual system calls — limited by device resource constraints on MCU-based IoT.
  • Signature-based detection: Identifying attacks by matching network traffic or behaviour against a database of known attack patterns — fast and accurate for known attacks but blind to novel threats.
  • Anomaly-based detection: Identifying attacks by establishing a baseline of normal device behaviour and flagging deviations — capable of detecting novel attacks but requires careful tuning to avoid excessive false positives.
  • Lateral movement detection: Identifying an attacker who has compromised one device and is attempting to reach other devices, detected by unusual inter-device communication patterns not present in the normal baseline.
  • C2 (Command and Control) detection: Detecting botnet command-and-control traffic by identifying IoT devices communicating with external servers on unusual ports, at unusual times, or using obfuscated protocols.

In 60 Seconds

IoT intrusion detection identifies malicious activity in device networks by monitoring communication patterns, device behaviour, and system states for anomalies that indicate compromise — adapting techniques from enterprise network security to the unique characteristics of IoT traffic (high device counts, constrained protocols, predictable normal behaviour). The advantage of IoT is that device behaviour is highly regular, making anomalies easier to detect than in unpredictable human-driven enterprise systems.

An intrusion detection system is the burglar alarm of your IoT deployment. Think of your IoT system as a house – locks and cameras (preventive controls) keep most intruders out, but you still need an alarm that tells you when someone gets past them. This chapter shows you how to build, deploy, and tune that alarm.

“I have got TWO superpowers for catching intruders!” Sammy the Sensor announced proudly. “My first trick is like having a book of mugshots – I know what known attacks look like, and if I spot one, ALARM! That is signature-based detection.”

“But what about brand new attacks that are not in the book?” Max the Microcontroller asked.

“That is my second superpower!” Sammy grinned. “I learn what normal behavior looks like – how much data flows, when devices talk, what patterns are typical. If something seems weird, like a temperature sensor suddenly trying to access the database at midnight, I sound the alarm even though I have never seen that exact attack before. That is anomaly-based detection!”

Lila the LED jumped in. “And when Sammy spots trouble, the system can either just DETECT it – that is an IDS, like a burglar alarm that rings – or PREVENT it – that is an IPS, like a door that automatically slams shut. IDS watches and alerts; IPS watches and blocks!”

“The best part is using BOTH signature and anomaly detection together,” Bella the Battery said. “Signatures catch the known bad guys with high confidence, while anomaly detection catches the clever new tricks. Two layers of alarm are always better than one!”

23.3 IDS/IPS Fundamentals

23.3.1 Detection Methods

Method | Description | Strengths | Weaknesses
Signature-Based | Matches known attack patterns | Low false positives, fast | Cannot detect unknown attacks
Anomaly-Based | Detects deviations from baseline | Catches novel attacks | Higher false positives
Hybrid | Combines both approaches | Best coverage | More complex to tune

23.3.2 IDS vs IPS

Aspect | IDS (Detection) | IPS (Prevention)
Action | Alert only | Alert + block
Position | Passive (copy of traffic) | Inline (traffic flows through)
Latency | None | Adds processing delay
Risk | May miss attacks | May block legitimate traffic
IoT Use | Monitoring, forensics | Critical asset protection
Tradeoff: Signature-Based IDS vs Anomaly-Based IDS

Factor | Signature-Based | Anomaly-Based
Known Attacks | Excellent detection | May detect if behavior anomalous
Unknown Attacks | Cannot detect (no signature) | Can detect if behavior differs
False Positives | Low (precise matching) | Higher (baseline drift, edge cases)
Tuning Effort | Low (vendor updates signatures) | High (establish baselines, tune thresholds)
Resource Usage | Low (pattern matching) | Higher (statistical analysis, ML)
IoT Suitability | Good for known IoT malware | Better for behavioral changes

Recommendation: Use both methods. Signature-based catches known threats with high confidence; anomaly-based provides a safety net for novel attacks.

Tradeoff: Network-Based IDS vs Host-Based IDS for IoT

Factor | Network-Based (NIDS) | Host-Based (HIDS)
Visibility | All network traffic | Only host activities
Deployment | Central (span port, tap) | Per-device agent
Resource Impact | None on IoT devices | CPU/memory on constrained devices
Encrypted Traffic | Cannot inspect (without TLS termination) | Can see decrypted data on host
Scalability | One sensor monitors many devices | Agent per device (thousands)
IoT Suitability | Excellent (no device changes) | Limited (resource constraints)

Recommendation: NIDS for IoT (centralized monitoring, no device overhead). HIDS only for gateway devices with sufficient resources.

23.4 IDS Deployment Architecture for IoT

Figure 23.1: IDS deployment architecture for an IoT network. NIDS sensors at network boundaries monitor traffic between IoT device zones, and a centralized SIEM collects alerts from the signature-based and anomaly-based detection engines.

23.5 Knowledge Check Exercises

23.5.1 Exercise 1: Network Segmentation and Firewall Configuration

Scenario: You’re the security architect for a smart factory with 500 IoT sensors, 50 PLCs (Programmable Logic Controllers), and 200 employee workstations. Recent threat intelligence indicates that attackers are targeting industrial IoT networks by compromising workstations via phishing, then pivoting to PLCs to disrupt manufacturing.

Current State:

  • All devices on flat network: 192.168.0.0/16
  • No VLANs or internal firewalls
  • PLCs communicate with sensors via Modbus TCP (port 502)
  • Workstations access PLC HMI dashboards via HTTP (port 80)

Task: Design a segmented network architecture and firewall rules that prevent workstation compromises from reaching PLCs while maintaining operational functionality.

Solution:

Proposed Architecture:

VLAN | Name | Subnet | Devices
10 | Corporate | 10.10.10.0/24 | Workstations
20 | OT-DMZ | 10.10.20.0/24 | HMI servers, historians
30 | OT-Control | 10.10.30.0/24 | PLCs
40 | OT-Field | 10.10.40.0/23 | 500 IoT sensors

Firewall Rules:

# Sensors → PLCs: Modbus only
allow tcp 10.10.40.0/23 10.10.30.0/24 port 502

# PLCs → OT-DMZ: Data to historian
allow tcp 10.10.30.0/24 10.10.20.0/24 port 3000

# Workstations → OT-DMZ: HMI access only
allow tcp 10.10.10.0/24 10.10.20.0/24 port 443

# Workstations → PLCs: DENIED
deny ip 10.10.10.0/24 10.10.30.0/24

# All other inter-VLAN: DENIED
deny ip any any log

Key Security Improvements:

  1. Phishing pivot blocked: Compromised workstation cannot reach PLCs (deny rule)
  2. Defense in depth: HMI access through DMZ (never direct to PLCs)
  3. Lateral movement limited: Sensors can only reach PLCs on Modbus port
  4. Audit trail: All denied traffic logged for incident response
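First-match evaluation of these rules can be sketched in Python (a simplified illustrative evaluator, not a production firewall; the rule order and final default deny follow the solution above, and logging is omitted):

```python
import ipaddress

# (action, protocol, src, dst, port); None = any port. First match wins.
RULES = [
    ("allow", "tcp", "10.10.40.0/23", "10.10.30.0/24", 502),   # Sensors -> PLCs (Modbus)
    ("allow", "tcp", "10.10.30.0/24", "10.10.20.0/24", 3000),  # PLCs -> historian
    ("allow", "tcp", "10.10.10.0/24", "10.10.20.0/24", 443),   # Workstations -> HMI
    ("deny",  "ip",  "10.10.10.0/24", "10.10.30.0/24", None),  # Workstations -> PLCs
    ("deny",  "ip",  "0.0.0.0/0",     "0.0.0.0/0",     None),  # Default deny (logging omitted here)
]

def evaluate(proto, src_ip, dst_ip, port):
    """Return 'allow' or 'deny' for a packet using first-match semantics."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for action, r_proto, r_src, r_dst, r_port in RULES:
        if r_proto != "ip" and r_proto != proto:   # "ip" matches any protocol
            continue
        if src not in ipaddress.ip_network(r_src):
            continue
        if dst not in ipaddress.ip_network(r_dst):
            continue
        if r_port is not None and r_port != port:
            continue
        return action
    return "deny"  # implicit default deny

# A compromised workstation cannot reach a PLC, but a sensor still can:
print(evaluate("tcp", "10.10.10.5", "10.10.30.7", 502))   # deny
print(evaluate("tcp", "10.10.40.12", "10.10.30.7", 502))  # allow
```

Note that the workstation-to-PLC packet falls through the three allow rules and hits the explicit deny, which is exactly the phishing-pivot block described above.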

23.5.2 Exercise 2: IDS Deployment Strategy

Scenario: A hospital is deploying 200 medical IoT devices across 10 floors. The security team must implement intrusion detection while complying with HIPAA requirements for audit logging and incident detection.

Constraints:

  • Limited budget: Can deploy 3 IDS sensors
  • Medical devices cannot have agents installed (FDA regulations)
  • Network spans 10 VLANs (one per floor plus servers)

Question: Where should the IDS sensors be placed for maximum visibility?

Solution:

Optimal IDS Placement:

  1. Sensor 1: Core switch span port
    • Sees all inter-VLAN traffic
    • Detects lateral movement between floors
    • Catches north-south traffic to servers
  2. Sensor 2: Internet edge (behind firewall)
    • Monitors incoming threats
    • Detects C2 communication attempts
    • Catches data exfiltration
  3. Sensor 3: Medical device aggregation point
    • Monitors IoT-specific traffic patterns
    • Establishes a baseline of normal device behavior
    • Detects anomalous device activity

HIPAA Compliance:

  • All sensors log to centralized SIEM with 6-year retention
  • Alerts forwarded to 24/7 security operations
  • Quarterly review of IDS rules and baselines

23.5.3 Quiz: IDS Concepts

Question 1: A smart agriculture system monitors soil moisture across 500 sensors. The IDS uses Z-score anomaly detection with a threshold of 3 standard deviations. During the first week of deployment, the system generates 200+ alerts daily, most from sensors in a newly irrigated field showing moisture readings outside the historical baseline. How should the security team respond?


Answer: Retrain the baseline for affected sensors to reflect the new normal operating conditions after irrigation.

Anomaly detection requires accurate baselines. When operating conditions legitimately change (new irrigation patterns), the baseline must be updated. Many production systems have “learning mode” periods after environmental changes to establish new baselines. This maintains detection capability while eliminating false positives.

Question 2: An IPS protecting an IoT gateway is configured to block sources that exceed 100 requests per minute. A legitimate IoT dashboard sends 95 requests per minute during normal operation. During a security incident investigation, an analyst queries the same dashboard 10 times in one minute to review sensor data. What happens?


Answer: The dashboard's source is blocked: because the analyst's queries go through the same dashboard, the combined traffic (95 + 10 = 105 requests/minute) exceeds the threshold, and both the dashboard and the analyst lose access.

Rate limiting counts all requests from a source. 95 (dashboard) + 10 (analyst) = 105 requests/minute exceeds the 100 threshold. This illustrates a real-world challenge: security controls can block legitimate activity during incidents. Production systems often have analyst-specific exceptions or elevated limits for investigation periods.
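A minimal sliding-window rate limiter reproduces the scenario's arithmetic (an illustrative sketch; the 100-requests-per-minute limit and source names come from the question):

```python
from collections import defaultdict, deque

class RateLimiter:
    """Block a source once it exceeds `limit` requests in a sliding window."""
    def __init__(self, limit=100, window=60):
        self.limit, self.window = limit, window
        self.requests = defaultdict(deque)   # source -> request timestamps

    def allow(self, source, now):
        q = self.requests[source]
        while q and now - q[0] >= self.window:
            q.popleft()                      # drop requests outside the window
        q.append(now)
        return len(q) <= self.limit

limiter = RateLimiter(limit=100)
# Dashboard's normal load: 95 requests spread across one minute...
ok = all(limiter.allow("dashboard", t * 0.6) for t in range(95))
# ...plus 10 analyst queries arriving via the same dashboard source:
blocked = [not limiter.allow("dashboard", 59 + i * 0.01) for i in range(10)]
print(ok, sum(blocked))  # True 5 -- the five requests over the limit are blocked
```

The normal load alone never trips the limiter; the analyst's extra queries push the combined count past 100 mid-investigation, which is why production deployments carve out analyst exceptions.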

Question 3: A manufacturing plant’s IPS is configured in inline mode to block malicious traffic to protect PLCs. During a firmware update, the IPS incorrectly identifies the update traffic as a potential attack and blocks it, causing the update to fail. What architectural change would prevent this scenario?


Answer: Implement a maintenance mode or change window policy that temporarily reduces IPS sensitivity for planned updates.

Production IPS deployments require change management integration. During planned maintenance windows, IPS can operate in “detect-only” mode or with relaxed rules for known update traffic patterns. This balances security with operational requirements.

23.6 Threat Detection Techniques

23.6.1 Statistical Anomaly Detection

Z-score anomaly detection is fundamental in IoT security:

Z-score = (Value - Mean) / Standard Deviation

How it works:

  1. The system maintains a baseline of recent sensor readings
  2. Each new reading is compared to the baseline using Z-score
  3. Values with |Z-score| > 3 are flagged as anomalies (only 0.3% of normal readings would exceed this threshold)

Why it matters for IoT:

  • Detects sensor tampering (injected false readings)
  • Identifies equipment malfunctions before failure
  • Catches data exfiltration through sensor channels
  • Works without predefined attack signatures
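The computation itself is small enough to sketch directly (using Python's statistics module; the example moisture readings are invented):

```python
import statistics

def is_anomalous(baseline, value, threshold=3.0):
    """Flag `value` if it deviates more than `threshold` standard
    deviations from the baseline mean (the Z-score formula above)."""
    mean = statistics.mean(baseline)
    std = statistics.pstdev(baseline) or 1.0   # guard against zero variance
    z = abs(value - mean) / std
    return z > threshold, z

moisture = [48, 50, 52, 49, 51]          # recent soil-moisture readings
print(is_anomalous(moisture, 51))        # in-range reading: not flagged
print(is_anomalous(moisture, 100))       # injected reading: flagged
```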

23.6.2 Pattern-Based Attack Detection

Correlating multiple events to detect multi-stage attacks:

Port Scan Detection:

  • Tracks connection attempts to different ports
  • 5+ ports probed in 5 seconds = scan detected
  • Indicates reconnaissance phase of attack

Brute Force Detection:

  • Monitors authentication failure sequences
  • 3+ failures from same source = attack detected
  • Triggers account lockout

Event Correlation:

  • Severity scores assigned to each event
  • Multiple medium events = elevated threat
  • Single critical event = immediate response
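The correlation logic can be sketched as a sliding scoring window (an illustrative sketch; the severity values and the escalation threshold of 8 are assumptions, not figures from the text):

```python
from collections import deque

SEVERITY = {"port_scan": 3, "auth_fail": 2, "config_change": 4, "malware_sig": 9}
ESCALATE_AT = 8   # combined score that triggers incident response

def correlate(events, window=60):
    """Sum severity of events inside a sliding time window; escalate when
    the combined score crosses the threshold or one event is critical."""
    recent = deque()
    incidents = []
    for ts, kind in events:
        recent.append((ts, SEVERITY[kind]))
        while recent and ts - recent[0][0] > window:
            recent.popleft()                 # expire old events
        score = sum(s for _, s in recent)
        if score >= ESCALATE_AT or SEVERITY[kind] >= 9:
            incidents.append((ts, score))
    return incidents

# Two medium events alone stay quiet; a third inside the window escalates:
events = [(0, "port_scan"), (20, "auth_fail"), (40, "port_scan")]
print(correlate(events))   # [(40, 8)] -- escalates at t=40 with score 3+2+3
```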

This Python code implements a hybrid IDS with both signature-based and anomaly-based detection. Feed it simulated IoT network events and watch it catch attacks.

import random
import math
from collections import defaultdict

class IoTIntrusionDetector:
    """Hybrid IDS: signature + anomaly detection for IoT networks."""

    # Known attack signatures (pattern matching)
    SIGNATURES = {
        "port_scan": {"event": "connection", "ports_per_min": 10},
        "brute_force": {"event": "auth_fail", "attempts_per_min": 5},
        "data_exfil": {"event": "transfer", "bytes_threshold": 1_000_000},
    }

    def __init__(self, anomaly_window=30, z_threshold=2.5):
        self.anomaly_window = anomaly_window
        self.z_threshold = z_threshold
        self.traffic_history = []     # Bytes per interval
        self.auth_failures = defaultdict(list)  # IP -> timestamps
        self.port_scans = defaultdict(set)       # IP -> ports accessed
        self.alerts = []

    def _z_score_check(self, value):
        """Anomaly detection using rolling Z-score."""
        if len(self.traffic_history) < 10:
            self.traffic_history.append(value)
            return False, 0.0
        mean = sum(self.traffic_history) / len(self.traffic_history)
        variance = sum((x - mean)**2 for x in self.traffic_history) / len(self.traffic_history)
        std = math.sqrt(variance) if variance > 0 else 1.0
        z = abs(value - mean) / std
        # Add to window (only non-anomalous values)
        if z <= self.z_threshold:
            self.traffic_history.append(value)
            if len(self.traffic_history) > self.anomaly_window:
                self.traffic_history.pop(0)
        return z > self.z_threshold, z

    def process_event(self, timestamp, src_ip, event_type, details):
        """Process a single network event through both detection engines."""
        alerts = []

        # === Signature-based detection ===
        if event_type == "auth_fail":
            self.auth_failures[src_ip].append(timestamp)
            # Keep only last 60 seconds
            recent = [t for t in self.auth_failures[src_ip]
                      if timestamp - t < 60]
            self.auth_failures[src_ip] = recent
            if len(recent) >= self.SIGNATURES["brute_force"]["attempts_per_min"]:
                alerts.append({
                    "type": "SIGNATURE", "attack": "Brute Force",
                    "source": src_ip, "severity": "HIGH",
                    "detail": f"{len(recent)} auth failures in 60s"
                })

        if event_type == "connection":
            port = details.get("port", 0)
            self.port_scans[src_ip].add(port)
            if len(self.port_scans[src_ip]) >= self.SIGNATURES["port_scan"]["ports_per_min"]:
                alerts.append({
                    "type": "SIGNATURE", "attack": "Port Scan",
                    "source": src_ip, "severity": "MEDIUM",
                    "detail": f"{len(self.port_scans[src_ip])} ports probed"
                })

        # === Anomaly-based detection ===
        if event_type == "traffic":
            bytes_count = details.get("bytes", 0)
            is_anomaly, z_score = self._z_score_check(bytes_count)
            if is_anomaly:
                alerts.append({
                    "type": "ANOMALY", "attack": "Traffic Anomaly",
                    "source": src_ip, "severity": "MEDIUM",
                    "detail": f"Z-score={z_score:.1f}, "
                              f"bytes={bytes_count:,}"
                })

        self.alerts.extend(alerts)
        return alerts

# Simulate IoT network traffic
random.seed(42)
ids = IoTIntrusionDetector(anomaly_window=20, z_threshold=2.5)

print("=== IoT Intrusion Detection System ===\n")

# Phase 1: Normal traffic baseline (30 events)
print("Phase 1: Establishing baseline (normal traffic)...")
for t in range(30):
    ids.process_event(t, "sensor-01", "traffic",
                      {"bytes": random.randint(100, 500)})
print(f"  Baseline: {len(ids.traffic_history)} samples, "
      f"avg={sum(ids.traffic_history)/len(ids.traffic_history):.0f} bytes\n")

# Phase 2: Brute force attack
print("Phase 2: Simulating brute force login attack...")
for t in range(30, 36):
    alerts = ids.process_event(t, "attacker-42", "auth_fail",
                               {"user": "admin"})
    if alerts:
        for a in alerts:
            print(f"  [{a['severity']}] {a['type']}: {a['attack']} "
                  f"from {a['source']} -- {a['detail']}")

# Phase 3: Port scan
print("\nPhase 3: Simulating port scan reconnaissance...")
for port in [22, 80, 443, 502, 8883, 1883, 5683, 3000, 8080, 9090, 161]:
    alerts = ids.process_event(40, "attacker-42", "connection",
                               {"port": port})
    if alerts:
        for a in alerts:
            print(f"  [{a['severity']}] {a['type']}: {a['attack']} "
                  f"from {a['source']} -- {a['detail']}")

# Phase 4: Data exfiltration (anomalous traffic volume)
print("\nPhase 4: Simulating data exfiltration attempt...")
for t in range(50, 55):
    # Normal sensors
    ids.process_event(t, "sensor-02", "traffic",
                      {"bytes": random.randint(100, 500)})
    # Compromised sensor exfiltrating data
    alerts = ids.process_event(t, "sensor-compromised", "traffic",
                               {"bytes": random.randint(50000, 100000)})
    if alerts:
        for a in alerts:
            print(f"  [{a['severity']}] {a['type']}: {a['attack']} "
                  f"from {a['source']} -- {a['detail']}")

# Summary
print(f"\n--- Detection Summary ---")
sig_alerts = [a for a in ids.alerts if a["type"] == "SIGNATURE"]
anom_alerts = [a for a in ids.alerts if a["type"] == "ANOMALY"]
print(f"Signature-based alerts: {len(sig_alerts)}")
print(f"Anomaly-based alerts:   {len(anom_alerts)}")
print(f"Total alerts:           {len(ids.alerts)}")
print(f"\nKey insight: Signature detection caught known patterns "
      f"(brute force, port scan).\nAnomaly detection caught the "
      f"data exfiltration -- no signature needed!")

What to Observe:

  • The baseline phase establishes “normal” traffic volume (~100-500 bytes)
  • Signature detection catches brute force (5+ auth failures) and port scan (10+ ports) with zero false positives
  • Anomaly detection catches data exfiltration (50-100KB vs 100-500 byte baseline) using Z-score
  • The hybrid approach detects both known attack patterns and novel behavioral anomalies

23.7 Worked Example: Sizing an IDS for a Smart Building Network

Scenario: A commercial office building has 800 IoT devices across 5 floors: 300 environmental sensors (temp/humidity/CO2), 200 lighting controllers, 150 access control readers, 100 HVAC actuators, and 50 security cameras. Design the IDS architecture and calculate the alert processing requirements.

Step 1: Traffic Baseline Estimation

Device Class | Count | Msgs/min/device | Avg Size (bytes) | Daily Traffic
Environmental sensors | 300 | 1 | 64 | 27.6 MB
Lighting controllers | 200 | 0.5 | 48 | 6.9 MB
Access readers | 150 | 2 (events) | 128 | 55.3 MB
HVAC actuators | 100 | 0.2 | 96 | 2.8 MB
Security cameras | 50 | Continuous | 500 KB/s | 2,160 GB
Total | 800 | | | ~2,160 GB

Camera traffic dominates (99.9%), but sensor/controller traffic is where most IoT-specific attacks occur. The IDS must handle both.
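The table's daily-traffic figures can be reproduced with a few multiplications (a sanity-check sketch; decimal units, 1 MB = 10^6 bytes, are assumed to match the table):

```python
# Daily traffic = count x msgs/min x bytes/msg x 1440 minutes
periodic = {
    "environmental": (300, 1.0, 64),
    "lighting":      (200, 0.5, 48),
    "access":        (150, 2.0, 128),
    "hvac":          (100, 0.2, 96),
}
for name, (count, rate, size) in periodic.items():
    mb = count * rate * size * 1440 / 1e6
    print(f"{name:14s} {mb:6.1f} MB/day")

# Cameras stream continuously: 50 cameras x 500 KB/s x 86,400 s/day
camera_gb = 50 * 500e3 * 86_400 / 1e9
print(f"cameras        {camera_gb:,.0f} GB/day")   # ~2,160 GB dominates
```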

Step 2: Detection Method Selection

Signature-based detection:
  Signature database: ~3,000 IoT-specific rules (Suricata IoT ruleset)
  Processing: 1 us per rule x 3,000 rules = 3 ms per packet
  At 800 devices x avg 1 msg/min = 13.3 msgs/sec
  CPU load: 13.3 x 3 ms = 40 ms/sec = 4% of one core

Anomaly-based detection:
  Baseline features per device: 8 (msg rate, size, destination,
    protocol, time-of-day, payload entropy, sequence gaps, error rate)
  Model: Online gradient descent, updated every 60 seconds
  Baseline window: 7 days (10,080 samples per device)
  Memory: 800 devices x 8 features x 8 bytes = 51.2 KB
  Z-score threshold: 3.0 (99.7% of normal traffic passes)
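These sizing estimates follow from a few multiplications, which can be checked directly (a sketch; decimal KB is assumed, matching the 51.2 KB figure):

```python
# Signature engine: per-packet cost and CPU load
rules, per_rule_s = 3000, 1e-6                  # 3,000 rules at 1 us each
per_packet_s = rules * per_rule_s               # 3 ms per packet
msgs_per_s = 800 * 1 / 60                       # 800 devices at 1 msg/min
cpu_fraction = msgs_per_s * per_packet_s        # fraction of one core
print(f"per packet: {per_packet_s*1e3:.0f} ms, CPU: {cpu_fraction:.1%}")

# Anomaly engine: baseline memory footprint
devices, features, bytes_per_feature = 800, 8, 8
print(f"baseline memory: {devices * features * bytes_per_feature / 1e3:.1f} KB")
```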

Step 3: False Positive Rate Analysis

With Z-score threshold = 3.0:
  Expected false positive rate: 0.3% per device per evaluation
  Evaluations per day: 800 devices x 1,440 minutes = 1,152,000
  Expected false positives: 1,152,000 x 0.003 = 3,456 alerts/day

That is 144 alerts/hour -- alert fatigue territory.

Tuning strategy:
  1. Raise threshold to 4.0: FP rate drops to 0.006%
     Expected FPs: 1,152,000 x 0.0000634 = 73 alerts/day = 3.0/hour
  2. Add correlation: group alerts within 60s window from same device
     Reduces effective alerts by ~5x = 14 alerts/day
  3. Allow-list scheduled operations (firmware checks, daily reports)
     Reduces another ~30% = 10 alerts/day

Final: 10 actionable alerts/day (manageable for 1 security analyst)
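The tuning chain can be checked numerically (a sketch; the Gaussian tail is computed via the complementary error function, and the 5x correlation and 30% allow-list reductions are the estimates from the steps above):

```python
import math

def fpr(z):
    """Two-sided Gaussian false-positive rate: P(|Z| > z) = 2 * Phi(-z)."""
    return math.erfc(z / math.sqrt(2))

evaluations = 800 * 1440                  # device-minutes evaluated per day
raw = evaluations * fpr(4.0)              # after raising threshold 3.0 -> 4.0
after_corr = raw / 5                      # group alerts in 60 s windows (~5x)
after_allow = after_corr * 0.7            # allow-list scheduled operations (~30%)
print(f"{raw:.0f} -> {after_corr:.1f} -> {after_allow:.1f} alerts/day")
```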

Step 4: NIDS Sensor Placement

Floor plan: 5 floors, each with:
  - 1 floor switch (connects all floor devices)
  - 1 uplink to core switch
  - 1 gateway per device category (5 gateways per floor)

NIDS sensor placement:
  Option A: 1 sensor at core switch (span port)
    Cost: $0 (software on existing server)
    Coverage: All inter-floor and internet traffic
    Blind spot: Intra-floor device-to-device traffic
    Detect: 70% of attack patterns

  Option B: 5 sensors (1 per floor switch)
    Cost: 5 x $500 = $2,500 (Raspberry Pi 4 + Suricata)
    Coverage: All floor traffic including intra-floor
    Blind spot: None for wired traffic
    Detect: 95% of attack patterns

  Option C: 25 sensors (1 per gateway)
    Cost: 25 x $500 = $12,500
    Coverage: Per-device-class granularity
    Detect: 99% of attack patterns
    Problem: Diminishing returns, 5x management overhead

Decision: Option B ($2,500) provides optimal coverage.
The 25% detection improvement from Option A to B justifies $2,500.
The 4% improvement from B to C does not justify $10,000 + management.

Result: A 5-sensor NIDS deployment with hybrid detection (signature + anomaly, Z-score threshold 4.0 with correlation) produces ~10 actionable alerts per day, costing $2,500 in hardware plus one analyst reviewing alerts for 30 minutes daily. Total annual cost: $2,500 + $6,250 (30 min/day x $50/hr x 250 days) = $8,750.

Key lesson: IDS design is primarily a tuning problem, not a deployment problem. The difference between 3,456 alerts/day (unusable) and 10 alerts/day (actionable) comes from threshold adjustment, correlation, and allow-listing – not from buying more sensors.

IDS Alert Fatigue: Quantifying the Cost of False Positives

Calculate the analyst time cost and detection degradation from poorly-tuned IDS thresholds:

High False Positive Rate (Z-score threshold = 3.0):

\[\text{FP Rate} = 2 \times \Phi(-3.0) = 0.0027 \text{ (0.27%)}\]

\[\text{Daily FPs} = 14.4M \text{ events/day} \times 0.0027 = 38,880 \text{ alerts/day}\]

Analyst Processing:

  • Alert triage rate: 2 minutes/alert (automated), 5 alerts → 1 investigation (10 min)
  • Daily time: \((38,880 \times 2\text{min}) / 60 = 1,296 \text{ hours/day}\)
  • Staffing needed: \(1,296 / 8 = 162\) FTE analysts

Annual Cost: \(162 \text{ FTE} \times \$80,000/\text{year} = \$12.96M\)

Low False Positive Rate (Z-score threshold = 5.0):

\[\text{FP Rate} = 2 \times \Phi(-5.0) = 0.00000058\]

\[\text{Daily FPs} = 14.4M \times 0.00000058 = 8.4 \text{ alerts/day}\]

Staffing: 1 FTE analyst reviewing 8 alerts/day = $80,000/year

Cost Reduction: \(\$12.96M - \$80K = \$12.88M\) saved annually through threshold tuning alone. ROI: infinite (tuning is free). True positive detection rate remains 100% for 8-sigma events at both thresholds. This demonstrates that IDS success depends on tuning skill, not sensor count.
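The staffing arithmetic reduces to a short cost model (a sketch using the figures above: 2 minutes of triage per alert, 8-hour shifts, $80,000 per analyst-year):

```python
import math

def analyst_cost(alerts_per_day, triage_min=2, shift_hours=8, salary=80_000):
    """FTE analysts and annual cost to triage a given daily alert volume."""
    hours_per_day = alerts_per_day * triage_min / 60
    fte = math.ceil(hours_per_day / shift_hours)
    return fte, fte * salary

print(analyst_cost(38_880))   # z = 3.0: 162 analysts, $12.96M/year
print(analyst_cost(8.4))      # z = 5.0: 1 analyst,    $80K/year
```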

23.8 Mathematical Foundation: Anomaly Detection False Positive Rate Optimization

Formal Definition: In anomaly-based IDS using Z-score thresholds, the false positive rate \(FPR\) depends on threshold \(z\) under Gaussian assumption:

\[ FPR(z) = P(|Z| > z) = 2 \cdot \Phi(-z) \]

where \(\Phi\) is cumulative normal distribution. For non-attack baseline with \(N\) events/day:

\[ \text{False Positives/Day} = N \cdot FPR(z) \]

Worked Calculation (Smart Factory with 500 IoT Devices):

Given:

  • Baseline traffic: 500 devices × 20 events/min = 10,000 events/min = 14.4M events/day
  • Target: ≤10 false positives/day (manageable for a single analyst)
  • Anomaly detection: Z-score on packet rate, packet size, protocol mix

For threshold \(z = 3.0\) (standard “3-sigma” rule):

\[ FPR(3.0) = 2 \cdot \Phi(-3.0) = 2 \cdot 0.00135 = 0.0027 \text{ (0.27%)} \]

\[ \text{False Positives/Day} = 14,400,000 \cdot 0.0027 = 38,880 \text{ alerts/day (unusable)} \]

For threshold \(z = 4.0\):

\[ FPR(4.0) = 2 \cdot \Phi(-4.0) = 2 \cdot 0.0000317 = 0.0000634 \]

\[ \text{False Positives/Day} = 14,400,000 \cdot 0.0000634 \approx 913 \text{ alerts/day (still high)} \]

For threshold \(z = 5.0\):

\[ FPR(5.0) = 2 \cdot \Phi(-5.0) = 2 \cdot 0.00000029 = 0.00000058 \]

\[ \text{False Positives/Day} = 14,400,000 \cdot 0.00000058 \approx 8.4 \text{ alerts/day (manageable)} \]
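Numerically, \(2 \cdot \Phi(-z) = \operatorname{erfc}(z/\sqrt{2})\), so all three calculations can be checked with Python's math module (a verification sketch; \(N\) is the 14.4M events/day baseline):

```python
import math

N = 14_400_000   # baseline events/day (500 devices x 20 events/min)

def fpr(z):
    """Two-sided false-positive rate under the Gaussian baseline model."""
    return math.erfc(z / math.sqrt(2))   # = 2 * Phi(-z)

for z in (3.0, 4.0, 5.0):
    print(f"z = {z}: FPR = {fpr(z):.2e}, false positives/day = {N * fpr(z):,.1f}")
```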

Trade-off with detection rate: Higher threshold reduces false positives but increases false negatives (missed attacks). Assume attacks are 8-sigma events:

  • \(z = 3.0\): detects 100% of 8-sigma attacks, 38,880 FP/day
  • \(z = 4.0\): detects 100% of 8-sigma attacks, 913 FP/day
  • \(z = 5.0\): detects 100% of 8-sigma attacks, 8.4 FP/day

Result: Threshold \(z = 5.0\) provides 99.978% reduction in false positives vs. \(z = 3.0\) (38,880 → 8.4 alerts/day), with zero impact on true attack detection for high-severity anomalies. Requires empirical validation that real attacks exceed 5-sigma deviation.

Why This Matters for IoT: IDS with 38,880 alerts/day causes alert fatigue and analyst burnout. Optimal threshold tuning (plus correlation and allow-listing) is the difference between deployed-but-ignored and operationally-useful security monitoring.

23.9 Concept Relationships

How IDS Components Work Together
Component | Builds Upon | Enables | Production Challenge
Signature Detection | Known attack patterns | High-confidence blocking | Must update signatures weekly
Anomaly Detection | Statistical baselines | Zero-day detection | High false positive rate (requires tuning)
NIDS (Network-based) | Centralized monitoring | Scalable IoT coverage | Cannot inspect encrypted traffic
HIDS (Host-based) | On-device agents | Encrypted traffic visibility | Resource overhead on constrained devices
SIEM Integration | IDS alerts + logs | Multi-stage attack correlation | Alert fatigue if thresholds not tuned
Automated Response | IDS + firewall | Real-time blocking | Risk of blocking legitimate traffic

Optimal Architecture: Hybrid (signature + anomaly) NIDS at network boundaries + SIEM correlation + tuned thresholds (Z > 4). This catches 95% of attacks with <10 actionable alerts/day. Signature handles known threats (Mirai, SQL injection). Anomaly handles novel threats (zero-day, insider). SIEM correlates into incidents.

23.10 See Also


Common Pitfalls

Anomaly-based IDS requires a statistically validated baseline of normal device communication patterns. Deploying without a baseline produces either excessive false positives (too sensitive) or missed attacks (not sensitive enough) until the baseline is learned.

Enterprise IDS signatures target HTTP, SMB, and RDP traffic not present in typical IoT device communications. Apply IoT-specific signatures targeting MQTT, CoAP, Modbus, and BACnet protocols and common IoT attack patterns.

IDS detects attacks after they begin — it is not a preventive control. Pair IDS with preventive controls (network segmentation, strong authentication, firmware integrity) so that IDS serves as an additional layer rather than the primary defence.

An IDS that generates alerts without a documented response procedure provides the illusion of security. Before deploying IDS, define the playbook for investigating, containing, and recovering from each type of alert.

23.11 Summary

This chapter covered intrusion detection for IoT:

  • Detection Methods: Signature-based (known attacks), anomaly-based (novel attacks)
  • IDS vs IPS: Detection only vs. active blocking
  • NIDS for IoT: Centralized monitoring without device overhead
  • Deployment: Strategic sensor placement at zone boundaries
  • Tuning: Balance detection sensitivity with false positive rates

23.12 What’s Next

The next chapter provides Hands-On Labs using Wokwi ESP32 simulators to practice implementing authentication, rate limiting, and threat detection in IoT devices.
