17  Threats & Vulnerabilities

17.1 Overview

IoT systems face diverse threats from various actors using multiple attack vectors. This comprehensive module covers the threat landscape, vulnerability types, security frameworks, and practical defense strategies for IoT deployments.

Key Takeaway

In one sentence: Attackers exploit the weakest link—understand threats, attack vectors, and vulnerabilities to defend where it matters most.

Remember this rule: Threats target assets via attack vectors exploiting vulnerabilities; know your adversary’s motivation, capability, and opportunity to prioritize defenses effectively.

17.2 Learning Objectives

By completing this module, you will be able to:

  • Classify common IoT threat actors by motivation, capability, and typical attack methods
  • Differentiate between attack types targeting IoT devices, networks, and applications
  • Diagnose vulnerabilities specific to IoT devices and networks using structured assessment methods
  • Analyze attack vectors and attack surfaces in IoT deployments
  • Apply the STRIDE threat model to systematically identify threats in IoT architectures
  • Evaluate real-world IoT attack scenarios using DREAD risk scoring
  • Design defensive strategies that address the most likely threats for a given deployment
  • Conduct basic vulnerability assessments on IoT devices using standard scanning tools

In 60 Seconds

IoT attacks exploit specific vulnerabilities — default credentials, unencrypted communications, exposed debug interfaces, and missing update mechanisms — to achieve goals ranging from data theft to physical system disruption. Understanding the attack-vulnerability mapping is essential for designing controls that address actual threats rather than hypothetical ones.

IoT security threats are the various ways that connected devices and their data can be compromised. Think of your IoT system as a house – you need to understand how burglars might try to get in before you can choose the right locks, alarms, and security cameras. This chapter helps you understand the threats so you can build effective defenses.

“There are different types of attackers out there,” Sammy the Sensor whispered, looking around nervously. “Script kiddies are beginners who use ready-made tools, cybercriminals want money, hacktivists want to make a political statement, insiders are people who already work inside the organization, and nation-states have the biggest budgets and most advanced tools.”

Max the Microcontroller continued, “Each attacker has different skills and goals, which means they use different methods. The key rule to remember is: attackers always go for the weakest link. If your password is strong but your firmware is not updated, they will attack through the firmware. If your network is encrypted but your debug port is open, they will plug right in!”

“Vulnerabilities are the weak spots that attackers exploit,” Lila the LED explained. “There are physical vulnerabilities – like exposed circuit boards. Network vulnerabilities – like unencrypted data. Software vulnerabilities – like bugs in code. And even human vulnerabilities – like phishing emails that trick people into giving up passwords!”

“The STRIDE model helps us organize all these threats into six categories,” Bella the Battery said. “Spoofing is faking an identity. Tampering is changing data. Repudiation is denying you did something. Information disclosure is leaking secrets. Denial of service is crashing systems. Elevation of privilege is gaining admin access. If you check for all six, you have covered the main ways things can go wrong!”


17.3 Module Chapters

This topic is divided into the following focused chapters:

17.3.1 1. Introduction to IoT Threats

Introduction to IoT Threats and Attacks

  • Learning objectives and prerequisites
  • Why IoT devices are attractive targets
  • Real-world example: The Mirai botnet
  • Common IoT attack types explained
  • The CIA triad and what attackers target
  • IoT attack surfaces overview

17.3.2 2. Threat Landscape and STRIDE Model

Threat Landscape and STRIDE Model

  • Threat actor classification (script kiddies to nation-states)
  • The 4-quadrant security framework (People, Processes, Physical, Technology)
  • STRIDE threat modeling framework
    • Spoofing, Tampering, Repudiation
    • Information Disclosure, Denial of Service, Elevation of Privilege
  • Common security pitfalls and how to avoid them

17.3.3 3. OWASP IoT Top 10 Vulnerabilities

OWASP IoT Top 10 Vulnerabilities

  • The 10 most critical IoT security risks
  • Case study: Mirai botnet (2016)
  • Case study: LockState smart locks (2017)
  • Deep dive: IoT botnet attack patterns and defense
  • Security tradeoffs: scanning vs. penetration testing, zero trust vs. perimeter

17.3.4 4. Security Compliance Frameworks

IoT Security Compliance Frameworks

  • Framework comparison (NIST, ISO 27001, IEC 62443, ETSI EN 303 645, FDA)
  • NIST Cybersecurity Framework for IoT
  • ETSI EN 303 645 - Consumer IoT Security (13 provisions)
  • IEC 62443 - Industrial IoT Security (security levels, zones)
  • FDA Cybersecurity Guidance for Medical IoT
  • Third-party assessment and certification programs

17.3.5 5. Practice Exercises

IoT Security Practice Exercises

  • Exercise 1: Threat actor analysis and mitigation strategy
  • Exercise 2: STRIDE threat modeling workshop
  • Exercise 3: Vulnerability scanning and assessment
  • Exercise 4: Incident response simulation
  • Exercise 5: OWASP IoT Top 10 audit
  • Exercise 6: Network segmentation design

17.3.6 6. Interactive Security Tools

Interactive IoT Security Tools

  • IoT Security Risk Calculator (DREAD methodology)
  • Attack Surface Visualizer
  • Component-specific attack and mitigation analysis

17.3.7 7. Worked Examples

Worked Examples: Threat Modeling and Incident Response

  • Worked Example: Threat modeling for connected medical device (insulin pump)
    • System decomposition, STRIDE analysis, attack trees
    • Risk prioritization, mitigation design, residual risk acceptance
  • Worked Example: Incident response for IoT breach (building automation)
    • Detection, containment, eradication, recovery, lessons learned

17.4 Quick Reference: Key Concepts

| Concept | Definition |
|---|---|
| Threat Actor | Entity that might attack: script kiddie, cybercriminal, hacktivist, insider, nation-state |
| Attack Vector | Path attackers use: network exploits, phishing, physical access, firmware tampering |
| Vulnerability | Security weakness: default passwords, unpatched software, missing encryption |
| STRIDE | Threat taxonomy: Spoofing, Tampering, Repudiation, Info Disclosure, DoS, Elevation of Privilege |
| CIA Triad | Security goals: Confidentiality, Integrity, Availability |
| Defense in Depth | Layering multiple security controls for comprehensive protection |

17.6 Prerequisites

Before starting this module, you should be familiar with:

17.7 Knowledge Check

Scenario: An attacker targets a smart factory’s industrial control system to cause production disruption. The security team needs to quantify the likelihood of a successful attack to justify security investments.

Attack Goal: Gain control of PLC (Programmable Logic Controller) to halt production line

Attack Tree:

[ROOT] Control PLC to Halt Production (SUCCESS if ANY path succeeds)
├─ [Path A] Network Attack (OR)
│  ├─ Step A1: Scan network for exposed PLC (95% success)
│  ├─ Step A2: Exploit Modbus TCP vulnerability (70% success | A1 succeeds)
│  └─ Step A3: Send malicious command to PLC (90% success | A2 succeeds)
│
└─ [Path B] Physical Attack (OR)
   ├─ Step B1: Gain physical access to control room (40% success)
   ├─ Step B2: Connect laptop to PLC via Ethernet (85% success | B1 succeeds)
   └─ Step B3: Use engineering software to reprogram PLC (95% success | B2 succeeds)

Probability Calculations:

Path A (Network Attack):

  • All steps must succeed (AND logic for sequential steps)
  • P(Path A) = P(A1) × P(A2|A1) × P(A3|A2)
  • P(Path A) = 0.95 × 0.70 × 0.90 = 0.5985 = 59.85%

Path B (Physical Attack):

  • All steps must succeed (AND logic)
  • P(Path B) = P(B1) × P(B2|B1) × P(B3|B2)
  • P(Path B) = 0.40 × 0.85 × 0.95 = 0.3230 = 32.30%

Overall Attack Success (either path succeeds):

  • OR logic: At least one path succeeds
  • P(Success) = 1 - P(Both paths fail)
  • P(Both fail) = (1 - 0.5985) × (1 - 0.3230) = 0.4015 × 0.6770 = 0.2718
  • P(Success) = 1 - 0.2718 = 0.7282 = 72.82%

Interpretation: Without additional defenses, attackers have a 73% chance of successfully disrupting production.
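The attack-tree arithmetic above can be sketched in a few lines: sequential steps combine with AND logic (multiply the probabilities), and alternative paths combine with OR logic (one minus the probability that every path fails). This is a minimal sketch using the chapter's illustrative step probabilities.

```python
# Attack tree probability: sequential steps are AND (multiply conditional
# success probabilities); alternative paths are OR (1 - product of failures).

def path_success(step_probs):
    """P(path) when every sequential step must succeed."""
    p = 1.0
    for step in step_probs:
        p *= step
    return p

def any_path_success(path_probs):
    """P(at least one path succeeds) = 1 - P(all paths fail)."""
    p_all_fail = 1.0
    for p in path_probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

path_a = path_success([0.95, 0.70, 0.90])   # network attack (A1, A2, A3)
path_b = path_success([0.40, 0.85, 0.95])   # physical attack (B1, B2, B3)
overall = any_path_success([path_a, path_b])

print(f"P(Path A)  = {path_a:.4f}")   # 0.5985
print(f"P(Path B)  = {path_b:.4f}")   # 0.3230
print(f"P(Success) = {overall:.4f}")  # 0.7282
```

The same two helpers generalize to deeper trees: nest `path_success` for AND subtrees and `any_path_success` for OR subtrees.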

Mitigation Analysis:

Mitigation 1: Network Segmentation (Blocks Path A)

  • Cost: $15,000 (VLAN configuration, firewall rules)
  • Effect: Reduces P(A1) from 95% to 10% (PLC no longer on scanned network)
  • New P(Path A) = 0.10 × 0.70 × 0.90 = 6.3%

Mitigation 2: Physical Access Control (Blocks Path B)

  • Cost: $25,000 (badge readers, cameras, locks)
  • Effect: Reduces P(B1) from 40% to 5% (control room requires badge + PIN)
  • New P(Path B) = 0.05 × 0.85 × 0.95 = 4.0%

Mitigation 3: PLC Application Allowlisting (Defense-in-Depth)

  • Cost: $8,000 (configure PLC to accept commands only from authorized engineering workstation)
  • Effect: Reduces P(A3) from 90% to 10% AND P(B3) from 95% to 10%
  • New P(Path A) with Mit 1+3 = 0.10 × 0.70 × 0.10 = 0.7%
  • New P(Path B) with Mit 2+3 = 0.05 × 0.85 × 0.10 = 0.4%

Investment Scenarios:

| Mitigation Combination | Cost | Path A Success | Path B Success | Overall Success | Risk Reduction |
|---|---|---|---|---|---|
| Baseline (no mitigations) | $0 | 59.85% | 32.30% | 72.82% | 0% |
| Mit 1 (Network segmentation only) | $15K | 6.3% | 32.30% | 36.57% | 50% |
| Mit 2 (Physical access only) | $25K | 59.85% | 4.0% | 61.46% | 16% |
| Mit 1+2 (Both) | $40K | 6.3% | 4.0% | 10.05% | 86% |
| Mit 1+2+3 (All) | $48K | 0.7% | 0.4% | 1.10% | 98.5% |
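The scenario table can be regenerated by swapping the affected step probabilities and recombining the paths, a sketch using the chapter's values:

```python
# Each mitigation replaces one step probability; paths are then recombined
# with the same AND/OR logic as the baseline attack tree.

def p_path(steps):
    out = 1.0
    for s in steps:
        out *= s
    return out

def p_overall(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)

scenarios = {
    "Baseline":  ([0.95, 0.70, 0.90], [0.40, 0.85, 0.95]),
    "Mit 1":     ([0.10, 0.70, 0.90], [0.40, 0.85, 0.95]),  # segmentation: A1 95% -> 10%
    "Mit 2":     ([0.95, 0.70, 0.90], [0.05, 0.85, 0.95]),  # access control: B1 40% -> 5%
    "Mit 1+2":   ([0.10, 0.70, 0.90], [0.05, 0.85, 0.95]),
    "Mit 1+2+3": ([0.10, 0.70, 0.10], [0.05, 0.85, 0.10]),  # allowlisting: A3/B3 -> 10%
}

for name, (a_steps, b_steps) in scenarios.items():
    a, b = p_path(a_steps), p_path(b_steps)
    print(f"{name:10s} A={a:6.1%}  B={b:6.1%}  overall={p_overall(a, b):6.2%}")
```

The exact results agree with the table to within rounding; the chapter combines already-rounded path values (e.g. 0.7% and 0.4%), so the last row computes to 1.12% exactly versus 1.10% from the rounded inputs.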

ROI Calculation:

Cost of Production Halt:

  • Downtime: 8 hours to restore PLC
  • Lost production: $50,000/hour × 8 hours = $400,000
  • Incident response: $100,000 (forensics, remediation, PR)
  • Total: $500,000 per incident

Expected Annual Cost WITHOUT Mitigations:

  • Attack likelihood: 72.82%
  • If attacker attempts attack annually: 0.7282 × $500,000 = $364,100

Expected Annual Cost WITH Mit 1+2+3:

  • Attack likelihood: 1.10%
  • If attacker attempts attack annually: 0.0110 × $500,000 = $5,500

Net Savings: $364,100 - $5,500 = $358,600 per year
Investment: $48,000 (one-time)
Payback Period: ~49 days (0.13 years)
5-Year ROI: ($358,600 × 5 - $48,000) / $48,000 ≈ 3,635% return
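The ROI arithmetic above is simple enough to verify directly. This sketch uses the chapter's figures ($500K breach cost, 72.82% baseline likelihood, 1.10% mitigated likelihood, $48K one-time control cost):

```python
# Annualized loss expectancy (ALE) and ROI for the mitigation package.
breach_cost = 500_000
ale_baseline  = 0.7282 * breach_cost   # expected annual loss, no mitigations
ale_mitigated = 0.0110 * breach_cost   # expected annual loss, Mit 1+2+3
annual_savings = ale_baseline - ale_mitigated

investment = 48_000
payback_days = investment / annual_savings * 365
roi_5yr_pct = (annual_savings * 5 - investment) / investment * 100

print(f"ALE baseline:  ${ale_baseline:,.0f}")      # $364,100
print(f"ALE mitigated: ${ale_mitigated:,.0f}")     # $5,500
print(f"Annual savings: ${annual_savings:,.0f}")   # $358,600
print(f"Payback: {payback_days:.0f} days; 5-year ROI: {roi_5yr_pct:.0f}%")
```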

Key Insight: Attack tree probability calculations quantify security ROI. This example shows that $48K in security controls provides $358K in annual risk reduction – a 3,600% return over 5 years. Without quantitative risk analysis, security investments appear as “pure cost” rather than revenue protection.

When you have limited security resources and multiple vulnerable systems, use this framework to prioritize remediation:

| Factor | Weight | Measurement | Impact on Priority |
|---|---|---|---|
| CVSS Score | ×3 | 0.0-10.0 (from NIST NVD) | Higher score = higher priority |
| Exploitability | ×2 | Public exploit available? (Yes=10, No=3) | Weaponized exploits = immediate priority |
| Exposure | ×2 | Internet-facing=10, Internal=5, Air-gapped=2 | Public exposure = higher priority |
| Asset Criticality | ×2 | Production=10, Development=5, Test=2 | Business-critical = higher priority |
| Affected Population | ×1 | Number of vulnerable instances | Scale amplifies everything |

Priority Score Formula:

Priority = [(CVSS × 3) + (Exploitability × 2) + (Exposure × 2) + (Criticality × 2) + (Population × 1)] / 10

Example: Three Vulnerabilities to Prioritize

Vulnerability A: SQL Injection in Customer Portal

  • CVSS: 9.8 (Critical)
  • Exploitability: 10 (Public exploit, automated tools)
  • Exposure: 10 (Internet-facing)
  • Criticality: 10 (Production system with customer PII)
  • Population: 1 server
  • Priority Score: (9.8×3 + 10×2 + 10×2 + 10×2 + 1×1) / 10 = (29.4 + 20 + 20 + 20 + 1) / 10 = 9.04 (CRITICAL)

Vulnerability B: Outdated OpenSSL in Development Environment

  • CVSS: 7.5 (High)
  • Exploitability: 8 (Known CVE, exploit requires MITM position)
  • Exposure: 5 (Internal network only)
  • Criticality: 5 (Development, not production)
  • Population: 50 developer machines
  • Priority Score: (7.5×3 + 8×2 + 5×2 + 5×2 + 50×1) / 10 = (22.5 + 16 + 10 + 10 + 50) / 10 = 10.85 → capped at 10 (CRITICAL)

Wait – this seems wrong. Let's recalculate with population scaling:

  • Population score: min(10, log₁₀(50) × 3) = min(10, 5.1) = 5.1
  • Priority Score: (7.5×3 + 8×2 + 5×2 + 5×2 + 5.1×1) / 10 = (22.5 + 16 + 10 + 10 + 5.1) / 10 = 6.36 (HIGH)

Vulnerability C: Missing Security Patch in Air-Gapped Industrial Sensor

  • CVSS: 8.2 (High)
  • Exploitability: 3 (No public exploit, requires sophisticated attacker)
  • Exposure: 2 (Air-gapped network)
  • Criticality: 10 (Controls production line)
  • Population: 200 sensors
  • Population score: min(10, log₁₀(200) × 3) = 6.9
  • Priority Score: (8.2×3 + 3×2 + 2×2 + 10×2 + 6.9×1) / 10 = (24.6 + 6 + 4 + 20 + 6.9) / 10 = 6.15 (HIGH)

Prioritization Result:

  1. Vulnerability A (9.04) - Patch within 24 hours
  2. Vulnerability B (6.36) - Patch within 30 days
  3. Vulnerability C (6.15) - Patch during next scheduled maintenance window
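The three scores can be reproduced with a short function. Following the chapter, Vulnerability A's single server is scored as its raw count (1), while larger fleets use the log scaling min(10, log₁₀(n) × 3); function and variable names are illustrative.

```python
import math

def priority(cvss, exploitability, exposure, criticality, population):
    """Weighted priority score, normalized to a 0-10 scale."""
    return (cvss * 3 + exploitability * 2 + exposure * 2
            + criticality * 2 + population * 1) / 10

def population_score(n):
    """Log-scale large fleets so raw counts don't swamp the score."""
    return min(10, math.log10(n) * 3)

# Vulnerability A: SQL injection, internet-facing production, 1 server
a = priority(9.8, 10, 10, 10, 1)
# Vulnerability B: outdated OpenSSL, internal dev network, 50 machines
b = priority(7.5, 8, 5, 5, population_score(50))
# Vulnerability C: unpatched air-gapped sensors, 200 units
c = priority(8.2, 3, 2, 10, population_score(200))

print(f"A={a:.2f}  B={b:.2f}  C={c:.2f}")  # A=9.04  B=6.36  C=6.15
```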

Decision Tree for Edge Cases:

START: Which vulnerability to fix first?

├─ Any vulnerability CVSS ≥ 9.0 AND internet-exposed?
│  └─ YES → Fix immediately (within 24 hours) regardless of other factors
│  └─ NO → Continue
│
├─ Public exploit available AND internet-exposed?
│  └─ YES → Fix urgently (within 7 days)
│  └─ NO → Continue
│
├─ Affects production system?
│  ├─ YES → Fix within 30 days
│  └─ NO → Fix within 90 days
│
└─ Test/development environment only?
   └─ Fix during next major release (no emergency patching)
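The decision tree above reduces to a few ordered checks. This is a sketch with illustrative argument names; the test/development leaf is folded into the non-production branch.

```python
# Ordered remediation-urgency checks from the decision tree: the first
# matching condition wins, so severe internet-exposed issues short-circuit
# everything else.

def remediation_deadline(cvss, internet_exposed, public_exploit, production):
    if cvss >= 9.0 and internet_exposed:
        return "fix within 24 hours"
    if public_exploit and internet_exposed:
        return "fix within 7 days"
    if production:
        return "fix within 30 days"
    return "fix within 90 days (or next major release for test/dev only)"

print(remediation_deadline(9.8, True, True, production=True))
# fix within 24 hours
print(remediation_deadline(7.5, True, True, production=True))
# fix within 7 days
print(remediation_deadline(7.8, False, False, production=False))
# fix within 90 days (or next major release for test/dev only)
```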

Key Principle: A 9.8 CVSS vulnerability in an air-gapped test environment is lower priority than a 7.5 CVSS vulnerability on an internet-facing production server with a public exploit. Context matters more than absolute CVSS score.

Common Mistake: Treating All “High” CVEs Equally

The Mistake: A security team receives 50 vulnerability scan results, all marked “HIGH” severity (CVSS 7.0-8.9). They allocate resources alphabetically, patching CVE-2023-0001 before CVE-2023-9999, without considering context.

Example of Wrong Prioritization:

CVE-2023-0001: Buffer overflow in legacy print service

  • CVSS: 7.8 (HIGH)
  • Context: Runs on 10-year-old printer firmware, internal network only
  • Exploitation: Requires crafted PostScript file sent to printer
  • Impact: Printer crashes, needs reboot
  • Actual Priority: LOW (annoying, not dangerous)

CVE-2023-9999: Authentication bypass in IoT gateway API

  • CVSS: 7.5 (HIGH)
  • Context: Gateway controls 500 industrial sensors, internet-exposed
  • Exploitation: Simple HTTP request with no authentication required
  • Impact: Attacker gains full sensor control, can inject false data
  • Actual Priority: CRITICAL (immediate production risk)

Result: Team spent 2 weeks patching the printer firmware (CVE-2023-0001) while the gateway remained vulnerable. Attacker exploited CVE-2023-9999 on day 10, causing $500K in production downtime.

Why This Happens:

  1. Relying solely on CVSS scores without context
  2. Patching in order of CVE ID (arbitrary numbering)
  3. Not considering asset criticality
  4. Ignoring whether systems are internet-exposed
  5. Not checking if exploits are publicly available

The Correct Approach: Contextual Risk Scoring

Step 1: Enrich CVE Data with Context

| CVE ID | CVSS | System | Exposure | Exploit Available? | Asset Criticality | Real Priority |
|---|---|---|---|---|---|---|
| CVE-2023-0001 | 7.8 | Print service | Internal | No | Low (printer) | 4.2 (MEDIUM) |
| CVE-2023-9999 | 7.5 | IoT gateway | Internet | Yes (Shodan) | Critical (production) | 9.5 (CRITICAL) |
| CVE-2023-5555 | 7.2 | Dev database | Internal | Yes (Metasploit) | Low (test data) | 5.8 (MEDIUM) |
| CVE-2023-7777 | 8.1 | Sensor firmware | Internal | No | Critical (safety) | 8.3 (CRITICAL) |

Step 2: Apply Exploit Availability Multiplier

  • If exploit is in Metasploit/Exploit-DB: ×1.5
  • If exploit is on GitHub (requires minor adaptation): ×1.3
  • If proof-of-concept only (requires expertise): ×1.1
  • If no public exploit: ×1.0

CVE-2023-9999 Adjusted:

  • Base CVSS: 7.5
  • Exploit multiplier: ×1.5 (Shodan search reveals 10,000+ vulnerable instances)
  • Adjusted: 7.5 × 1.5 = 11.25 → capped at 10 (CRITICAL)

Step 3: Apply Exposure Multiplier

  • Internet-facing: ×1.4
  • Internal network (VPN required): ×1.0
  • Air-gapped: ×0.7

CVE-2023-0001 Adjusted:

  • Base CVSS: 7.8
  • Exploit multiplier: ×1.0 (no exploit)
  • Exposure multiplier: ×0.7 (isolated internal print service, treated as effectively air-gapped)
  • Adjusted: 7.8 × 1.0 × 0.7 = 5.46 (MEDIUM)
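The multiplier adjustment can be sketched as a lookup-and-cap. The multiplier values come from Steps 2-3; the dictionary keys are illustrative names, not a standard vocabulary.

```python
# Context-adjusted score: base CVSS scaled by exploit-availability and
# exposure multipliers, capped at the CVSS maximum of 10.

EXPLOIT_MULT = {"metasploit": 1.5, "github": 1.3, "poc": 1.1, "none": 1.0}
EXPOSURE_MULT = {"internet": 1.4, "internal": 1.0, "air-gapped": 0.7}

def adjusted_score(cvss, exploit, exposure):
    return min(10.0, cvss * EXPLOIT_MULT[exploit] * EXPOSURE_MULT[exposure])

# CVE-2023-9999: weaponized exploit on an internet-facing gateway
print(f"{adjusted_score(7.5, 'metasploit', 'internet'):.2f}")  # 10.00 (capped)

# CVE-2023-0001: no public exploit; the chapter applies the 0.7
# multiplier to the isolated print service
print(f"{adjusted_score(7.8, 'none', 'air-gapped'):.2f}")      # 5.46
```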

Step 4: Final Prioritization

| CVE ID | Base CVSS | Context-Adjusted Score | Priority Tier | Remediation Timeline |
|---|---|---|---|---|
| CVE-2023-9999 | 7.5 | 10.0 (CRITICAL) | P0 | 24 hours (emergency patch) |
| CVE-2023-7777 | 8.1 | 8.3 (CRITICAL) | P1 | 7 days (urgent) |
| CVE-2023-5555 | 7.2 | 6.5 (HIGH) | P2 | 30 days (scheduled) |
| CVE-2023-0001 | 7.8 | 5.5 (MEDIUM) | P3 | 90 days (regular maintenance) |


Key Takeaway: A “HIGH” CVE on an internet-facing production system with a public exploit is effectively CRITICAL. A “HIGH” CVE on an internal test system with no public exploit is effectively MEDIUM. CVSS provides a baseline, but context determines actual priority. Blindly patching by CVSS score wastes resources on low-risk vulnerabilities while high-risk ones remain exploitable.

Quantitative threat actor capability scoring based on resources, skills, and motivation to inform defense prioritization.

Threat Actor Capability Score: \[\text{Capability} = \frac{R + S + M}{30}\] where \(R\) = Resources (0-10), \(S\) = Skill level (0-10), \(M\) = Motivation (0-10)

Working through an example: Given: Assess threats to industrial IoT sensor network

| Threat Actor | Resources (0-10) | Skill (0-10) | Motivation (0-10) | Capability Score | Likelihood |
|---|---|---|---|---|---|
| Script Kiddie | 2 | 3 | 5 | \((2+3+5)/30 = 0.33\) | Low |
| Cybercriminal | 6 | 7 | 8 | \((6+7+8)/30 = 0.70\) | Medium |
| Nation-State | 10 | 10 | 9 | \((10+10+9)/30 = 0.97\) | High |
| Insider | 5 | 8 | 6 | \((5+8+6)/30 = 0.63\) | Medium |

Step 1: Identify highest-capability threat: Nation-State (0.97)
Step 2: But assess actual target value: Industrial sensors (not critical infrastructure) → Motivation = 4 (not 9)
Step 3: Recalculate: \((10 + 10 + 4)/30 = 0.80\) (still high, but lower priority than critical infrastructure)

Result: Design defenses assuming Cybercriminal-level adversary (0.70 capability) as baseline threat model
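The capability scores above follow directly from the formula. This sketch reproduces the table and the target-adjusted nation-state re-score:

```python
# Threat-actor capability: (Resources + Skill + Motivation) / 30,
# using the chapter's example ratings for an industrial sensor network.

def capability(resources, skill, motivation):
    return (resources + skill + motivation) / 30

actors = {
    "Script Kiddie": (2, 3, 5),
    "Cybercriminal": (6, 7, 8),
    "Nation-State":  (10, 10, 9),
    "Insider":       (5, 8, 6),
}

for name, (r, s, m) in actors.items():
    print(f"{name:14s} capability = {capability(r, s, m):.2f}")

# Re-score the nation-state with target-adjusted motivation (9 -> 4):
print(f"Nation-State (adjusted) = {capability(10, 10, 4):.2f}")  # 0.80
```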

Attack Success Probability vs Defense Investment: \[P(\text{Breach}) = \text{Actor Capability} \times (1 - \text{Defense Strength})\]

Given: Cybercriminal (0.70 capability) vs. IoT system

  • No defenses: \(P = 0.70 \times (1 - 0) = 0.70 = 70\%\) breach probability
  • Basic defenses (firewalls, passwords): Defense = 0.50 → \(P = 0.70 \times (1 - 0.50) = 0.35 = 35\%\)
  • Strong defenses (MFA, encryption, monitoring): Defense = 0.85 → \(P = 0.70 \times (1 - 0.85) = 0.105 = 10.5\%\)

Result: Strong defenses reduce breach probability from 70% to 10.5% (85% risk reduction)

Annualized Loss Expectancy (ALE): \[\text{ALE} = P(\text{Breach}) \times \text{Average Loss}\]

Given: 10.5% breach probability and \$500K average breach cost: \(\text{ALE} = 0.105 \times \$500{,}000 = \$52{,}500\)

In practice: Not all attackers are equal. Script kiddies use automated tools (defend with rate limiting). Nation-states use zero-days (defend with network segmentation, air gaps). Capability scoring answers "how much security is enough?": if ALE ($52.5K/year) is less than defense cost ($100K/year), you're over-investing; if ALE exceeds defense cost, invest more. Optimize defense spending to match the actual threat actor capability for your deployment.
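The breach-probability model and the ALE it feeds can be chained in a few lines, using the chapter's cybercriminal baseline (0.70 capability) and $500K average loss:

```python
# P(breach) = capability * (1 - defense strength); ALE = P(breach) * loss.

def p_breach(capability, defense_strength):
    return capability * (1 - defense_strength)

avg_loss = 500_000
for label, defense in [("none", 0.0), ("basic", 0.50), ("strong", 0.85)]:
    p = p_breach(0.70, defense)  # cybercriminal-level adversary
    print(f"{label:6s} defenses: P(breach)={p:.3f}  ALE=${p * avg_loss:,.0f}")
```

Running this reproduces the 70% → 35% → 10.5% progression and the $52,500 ALE for the strong-defense case.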

Concept Relationships

Understanding how threat and vulnerability concepts interconnect:

| Core Concept | Prerequisite Understanding | Enables | Common Misconception |
|---|---|---|---|
| Threat Actors (script kiddies → nation-states) | Motivation, capability, opportunity | Threat modeling scope, defense budget allocation | Thinking "we're too small to be targeted" (botnets target ALL devices) |
| Attack Vectors (network, physical, social, firmware) | System architecture, entry points | Attack surface mapping, defense prioritization | Confusing attack vector (path) with vulnerability (weakness) |
| STRIDE Taxonomy | CIA triad, security properties | Systematic threat identification, countermeasure mapping | Applying STRIDE once and never revisiting (threats evolve) |
| Vulnerability Lifecycle (discovery → disclosure → patch → exploitation) | CVE process, patch management | Proactive vs. reactive security, time-to-patch metrics | Thinking disclosed = exploited (disclosure gives you time to patch) |
| Defense in Depth | Layered controls, failure modes | Resilient architecture, compensating controls | Believing one strong control = defense in depth (need multiple layers) |
| Lateral Movement (compromised device → network → high-value targets) | Network segmentation, trust boundaries | Zero-trust architecture, microsegmentation | Assuming the network perimeter protects internal devices |

Key Insight: Attackers target the weakest link. A system with strong encryption but default passwords will be compromised via the passwords. Defense-in-depth ensures that even if one control fails, others provide protection.


17.8 What’s Next

| If you want to… | Read this |
|---|---|
| Apply STRIDE to categorise these attacks systematically | STRIDE Framework |
| Develop realistic attack scenarios for each vulnerability | Threat Attack Scenarios |
| Study the OWASP IoT Top 10 vulnerability framework | OWASP IoT Top 10 |
| Learn about threat assessments to prioritise remediation | Threat Assessments |
| Return to the security module overview | IoT Security Fundamentals |

Common Pitfalls

New IoT vulnerabilities are discovered daily. A security programme that only addresses the OWASP IoT Top 10 from 2018 will miss recently discovered vulnerability classes. Subscribe to IoT security advisories (ICS-CERT, MITRE CVE) and update your vulnerability inventory continuously.

Advanced persistent threat (APT) techniques are compelling to study but most IoT breaches exploit basic vulnerabilities: default passwords, no encryption, no updates. Ensure that defences against common attacks are solid before investing in advanced threat protection.

A buffer overflow vulnerability in UART-only accessible firmware requires physical device access to exploit. Rank attacks by exploitability in your specific deployment context (remote vs physical access required, attacker sophistication needed) not just by CVSS score.

A vulnerability in OpenSSL affects only devices using OpenSSL for TLS, not all devices. Maintain a software bill of materials (SBOM) for each device type so that when a new CVE is announced, you can immediately determine which devices in your fleet are affected.