Select Appropriate Mitigations: Justify security control choices for different threat types using risk scores
Synthesize Learning Resources: Connect videos and documentation to deepen threat modeling understanding
Key Concepts
Vulnerability assessment: An automated or manual scan of an IoT system to identify known vulnerabilities, misconfigurations, and security weaknesses without actively exploiting them.
Penetration test: An authorised simulation of a real attack against an IoT system that actively exploits identified vulnerabilities to demonstrate their actual impact — requires explicit written authorisation.
Security audit: A systematic review of an IoT system’s security controls, configurations, and processes against a defined standard or policy to identify compliance gaps.
Red team exercise: A realistic attack simulation conducted by an independent team operating as adversaries with no prior knowledge of the system’s defences, testing the complete detection and response capability.
Attack surface assessment: An enumeration of all interfaces through which an attacker could interact with an IoT system, used to identify underprotected entry points and prioritise hardening efforts.
Risk rating: The assessment output combining vulnerability severity (CVSS score), exploitability in the specific deployment context, and business impact to produce a prioritised remediation list.
In 60 Seconds
IoT security assessments systematically evaluate a deployed or designed system’s security posture — identifying vulnerabilities, measuring control effectiveness, and quantifying residual risk to drive prioritised remediation. Effective assessments combine automated scanning, manual penetration testing, and architecture review to cover technical vulnerabilities, configuration weaknesses, and design flaws that each method alone would miss.
For Beginners: Threat Modeling Assessment
This chapter helps you consolidate your understanding of IoT security threats through review and assessment. Think of it as a security audit of your own knowledge – identifying what you understand well and what needs more study ensures you are prepared to tackle real-world security challenges.
Systematic threat modeling provides structured methodology for proactive security through five iterative steps: acquiring comprehensive architecture knowledge of IoT components and interactions, identifying entry points across physical/network/application interfaces, mapping data flow paths with encryption and authentication checkpoints, defining trust boundaries between device/gateway/cloud tiers, and conceiving plausible attack scenarios based on threat intelligence. This disciplined approach ensures comprehensive coverage of potential vulnerabilities before deployment.
Multiple threat taxonomies guide classification and prioritization. STRIDE maps threats to security properties: Spoofing violates authentication, Tampering violates integrity, Repudiation prevents accountability, Information Disclosure violates confidentiality, Denial of Service violates availability, and Elevation of Privilege violates authorization. The Open Threat Taxonomy categorizes threats as Physical (hardware damage), Resource (power/network disruption), Personnel (social engineering), and Technical (exploitation). ENISA provides IoT-specific frameworks covering devices, communications, data, services, and stakeholders.
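The STRIDE-to-property mapping above can be captured as a simple lookup table. This is an illustrative sketch (the dictionary and function names are not from the chapter's framework code):

```python
# STRIDE category -> security property it violates (per the taxonomy above)
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation (accountability)",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def property_violated(threat_category: str) -> str:
    """Return the security property a STRIDE category violates."""
    return STRIDE[threat_category]

print(property_violated("Tampering"))  # Integrity
```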
Ten critical IoT attack scenarios demonstrate real-world exploitation paths: network eavesdropping for intelligence gathering, sensor manipulation causing safety failures, actuator sabotage for production disruption, administration system compromise enabling mass device control, protocol exploitation leveraging vulnerabilities, command injection for privilege escalation, stepping stone attacks for anonymity, DDoS botnet creation (Mirai-style), power manipulation for battery depletion, and ransomware attacks on critical infrastructure. Each scenario includes detailed attack steps, impact analysis, and targeted mitigations.
The comprehensive Python threat modeling framework implements DREAD scoring (calculating risk as average of five factors rated 1-10), attack tree analysis with probability propagation and critical path identification, automated threat identification matching asset properties to known vulnerabilities, MITRE ATT&CK phase mapping for IoT-specific techniques, residual risk calculation showing effectiveness of applied mitigations, attack surface scoring per asset based on threat count and severity, and mitigation coverage analysis identifying unprotected vulnerabilities. This framework enables data-driven security decisions with quantifiable risk metrics and prioritized remediation roadmaps.
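A minimal sketch of two of the core calculations the framework performs, DREAD averaging and residual risk. The class and function names here are illustrative, not the chapter's actual framework API:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    damage: int            # each DREAD factor scored 1-10
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def dread_score(self) -> float:
        """DREAD risk = average of the five factors."""
        factors = (self.damage, self.reproducibility, self.exploitability,
                   self.affected_users, self.discoverability)
        return sum(factors) / len(factors)

def residual_risk(initial: float, mitigation_effectiveness: float) -> float:
    """Risk remaining after a mitigation of given effectiveness (0.0-1.0)."""
    return initial * (1.0 - mitigation_effectiveness)

t = Threat("Default credentials", 9, 10, 9, 10, 8)
print(t.dread_score())                        # 9.2
print(residual_risk(t.dread_score(), 0.5))    # 4.6
```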
16.4 Videos
Threat Modeling Overview
From slides — frameworks (STRIDE, attack trees) and IoT-specific considerations.
Privacy Engineering Basics
From slides — aligning threat models with privacy-by-design principles.
Attack Trees and Misuse Cases
From slides — constructing and using attack trees for IoT.
16.5 Cross-links
Security overview and CIA fundamentals: ../privacy-compliance/security-and-privacy-overview.html
Try It: Adjust the sliders to see how different factor values affect the overall DREAD score and risk classification. Notice how high Damage or Affected Users scores can push a vulnerability into CRITICAL territory even if other factors are moderate.
Worked Example: DREAD Risk Scoring for Smart Lock Vulnerability
Scenario: A penetration tester discovers that a smart lock’s Bluetooth pairing can be sniffed and replayed within 30 seconds of legitimate pairing, allowing an attacker to clone the digital key.
Given:
Vulnerability: BLE pairing replay attack
Affected product: Consumer smart door lock ($200)
Attack requires: Proximity during pairing (within 10 meters), BLE sniffer ($50), replay tool
Impact: Unauthorized physical access to home
Step 1: Score Each DREAD Factor (0-10 scale)
| Factor | Question | Score | Justification |
|---|---|---|---|
| Damage | What’s the worst case? | 9 | Complete home access, theft of valuables, physical safety risk |
| Reproducibility | How easy to reproduce? | 7 | Requires timing (during pairing), but attack works consistently when conditions are met |
| Exploitability | How much skill needed? | 6 | Requires BLE knowledge and tools, but tutorials exist online |
| Affected Users | How many users impacted? | 8 | All users of this lock model (~500,000 sold); vulnerability is systematic |
| Discoverability | How easy to find? | 5 | Requires security research to discover; not obvious to casual attackers |
Overall DREAD score: (9 + 7 + 6 + 8 + 5) / 5 = 7.0 (HIGH)

After mitigation, a residual risk of 3.6 (LOW) is acceptable because:
- Damage potential remains (inherent to the asset being protected)
- But exploitability and reproducibility are dramatically reduced
- An OTA firmware update can patch existing devices within 30 days
Key insight: DREAD scoring shows that mitigations should target the factors with highest scores AND highest potential for reduction. Damage (9) is difficult to reduce (the lock still protects your home), but Reproducibility and Exploitability can be dramatically reduced through cryptographic fixes. Focusing mitigation on the “reducible” factors provides maximum risk reduction per engineering dollar spent.
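The smart-lock scores above, run through a plain DREAD average (a throwaway sketch, not the chapter's framework code):

```python
# Smart lock BLE replay: DREAD factor scores from the worked example
scores = {
    "Damage": 9,
    "Reproducibility": 7,
    "Exploitability": 6,
    "Affected Users": 8,
    "Discoverability": 5,
}

overall = sum(scores.values()) / len(scores)
print(f"Overall DREAD: {overall}")  # 7.0 -> HIGH (6.0-7.9 band)
```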
Worked Example: DREAD Scoring for Real-World IoT Vulnerability
Scenario: During penetration testing, a security researcher discovers that a widely-deployed industrial IoT gateway stores AWS API credentials in plaintext in a world-readable configuration file at /etc/iot-gateway/aws-config.json.
System Context:
Product: Industrial IoT gateway (50,000+ units deployed globally)
Function: Aggregates sensor data from 10-200 sensors per gateway, forwards to AWS IoT Core
If the team had scored this as:
- Damage: 5 (thinking “it’s just config file access”)
- Reproducibility: 10
- Exploitability: 8
- Affected Users: 5 (thinking “only users who let an attacker on their network”)
- Discoverability: 5
Incorrect Overall: (5+10+8+5+5)/5 = 6.6 (HIGH, not CRITICAL)
Result: Vulnerability gets 90-day remediation window instead of immediate response. Attacker discovers and exploits during those 90 days. Breach costs $25M in remediation + lawsuits + regulatory fines.
Key Insight: DREAD scoring requires understanding both technical exploitability AND business impact. This vulnerability affects AWS credentials for 50,000 devices – the cascading impact across the entire customer base pushes Damage and Affected scores into CRITICAL territory.
After mitigating threats, some residual risk always remains. Use this framework to decide whether residual risk requires additional mitigation or can be accepted:
| Residual Risk Score (After Mitigation) | Risk Level | Decision Required | Approval Authority | Documentation Required |
|---|---|---|---|---|
| 8.0-10.0 | CRITICAL | MUST mitigate further | Cannot be accepted | N/A (unacceptable) |
| 6.0-7.9 | HIGH | Additional mitigation OR executive acceptance | VP/C-level + Board notification | Risk acceptance form, annual re-review |
| 4.0-5.9 | MEDIUM | Evaluate: cost-benefit analysis | Director-level | Risk register entry |
| 2.0-3.9 | LOW | Can be accepted | Manager-level | Risk register entry |
| 0.0-1.9 | MINIMAL | Standard acceptance | No formal approval needed | Brief notation in security log |
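The acceptance bands can be encoded directly. The score boundaries below follow the table; the function name and return strings are illustrative:

```python
def classify_residual(score: float) -> tuple:
    """Map a residual risk score (0-10) to (risk level, approval authority),
    using the bands from the acceptance table."""
    if score >= 8.0:
        return ("CRITICAL", "Cannot be accepted - must mitigate further")
    if score >= 6.0:
        return ("HIGH", "VP/C-level + Board notification")
    if score >= 4.0:
        return ("MEDIUM", "Director-level")
    if score >= 2.0:
        return ("LOW", "Manager-level")
    return ("MINIMAL", "No formal approval needed")

print(classify_residual(7.2))  # ('HIGH', 'VP/C-level + Board notification')
```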
Risk Acceptance Criteria:
Accept residual risk ONLY if ALL of the following are true:
Example Decision Matrix:
Scenario 1: SQL Injection Vulnerability
Initial DREAD: 8.5 (CRITICAL)
After mitigation (parameterized queries): 2.0 (LOW)
Residual risk: Logic errors in query construction
Decision: ACCEPT - approved by Engineering Manager, noted in risk register
Scenario 2: Default Credentials
Initial DREAD: 9.2 (CRITICAL)
After mitigation (forced password change on first boot): 4.5 (MEDIUM)
Residual risk: User may set weak password despite complexity requirements
Decision: ACCEPT with compensating control - implemented account lockout after 3 failed attempts + MFA option
Approval: Director of Security
Scenario 3: Unencrypted Sensor Data
Initial DREAD: 7.0 (HIGH)
After mitigation (TLS encryption): 6.8 (HIGH)
Residual risk: TLS misconfiguration or downgrade attack
Scenario 4: Physical Device Tampering
Initial DREAD: 8.0 (CRITICAL)
After mitigation (tamper-evident seals + secure boot): 7.2 (HIGH)
Residual risk: Sophisticated attacker can bypass seals with specialized tools
Mitigation cost: $200/device for hardware security module (HSM)
Device value: $50/device
Deployment: 100,000 devices
Decision: Cannot justify $20M for HSM when device is $5M total value
Resolution: Accept 7.2 HIGH risk with executive sign-off + insurance policy + monitoring for tamper attempts
Approval: CEO + Board of Directors
Cost-Benefit Decision Tree:
START: Residual risk score after initial mitigation?
├─ Score ≥ 8.0 (CRITICAL)?
│ └─ MUST mitigate further, no acceptance allowed
│ └─ If mitigation truly impossible, system cannot be deployed
│
├─ Score 6.0-7.9 (HIGH)?
│ └─ Calculate: Mitigation Cost vs Potential Impact
│ ├─ Cost < Impact/10 → Mitigate
│ └─ Cost > Impact/10 → Escalate to executive for acceptance
│ └─ Requires: Formal risk acceptance, compensating controls, insurance
│
├─ Score 4.0-5.9 (MEDIUM)?
│ └─ Evaluate: Quick wins available?
│ ├─ YES → Implement if cost < $10K and time < 2 weeks
│ └─ NO → Accept with director approval
│
└─ Score < 4.0 (LOW)?
└─ ACCEPT - standard risk management process
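The decision tree above, expressed as a function. Thresholds come from the tree; the function name and return strings are illustrative:

```python
def acceptance_decision(score: float, mitigation_cost: float,
                        potential_impact: float) -> str:
    """Apply the cost-benefit decision tree to a residual risk score."""
    if score >= 8.0:
        return "MUST mitigate further - acceptance not allowed"
    if score >= 6.0:
        # HIGH: mitigate when cheap relative to impact, else escalate
        if mitigation_cost < potential_impact / 10:
            return "Mitigate"
        return "Escalate to executive for formal acceptance"
    if score >= 4.0:
        return "Evaluate quick wins; else accept with director approval"
    return "Accept via standard risk management process"

# HSM example from Scenario 3/4: $20M mitigation vs $5M fleet value -> escalate
print(acceptance_decision(7.2, 20_000_000, 5_000_000))
```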
Risk Acceptance Template:
## Residual Risk Acceptance Form

**Risk ID**: SEC-2024-042
**System**: Industrial IoT Gateway v3.2
**Date**: 2024-11-15

### Threat Description
Physical device tampering to extract firmware and cryptographic keys

### Initial Risk Score
8.0/10 (CRITICAL) - High damage, medium exploitability

### Mitigations Implemented
1. Tamper-evident seals on device enclosure
2. Secure boot with signed firmware
3. Encrypted storage for sensitive data
4. Physical access logging

### Residual Risk Score
7.2/10 (HIGH) - Sophisticated attacker with lab equipment can still extract keys

### Additional Mitigation Considered But Not Implemented
- Hardware Security Module (HSM) to store keys in tamper-resistant silicon
- Cost: $200/device × 100,000 devices = $20M
- Device unit cost: $50
- Total fleet value: $5M

### Justification for Acceptance
- Mitigation cost ($20M) exceeds total system value ($5M) by 4x
- Attack requires physical access + specialized equipment (>$50K)
- Attack only affects individual compromised device, not fleet-wide
- Threat actor: Nation-state or highly-funded organized crime (not opportunistic attackers)
- Compensating controls: Physical security at deployment sites, monitoring for tamper attempts
- Insurance: $10M cyber insurance policy in place

### Acceptance
- **Accepted By**: Jane Doe, Chief Information Security Officer
- **Date**: 2024-11-15
- **Review Date**: 2025-11-15 (annual re-evaluation required)
- **Board Notification**: Yes, presented at 2024-11-20 board meeting

### Monitoring & Response
- Monthly review of tamper detection logs
- Incident response plan if physical compromise detected
- Key rotation for all devices if ANY device is compromised

**Signatures**:
CISO: _________________________
CEO: __________________________
Board Chair: ___________________
Key Principle: Residual risk acceptance is not a shortcut to avoid security work. It’s a formal process for acknowledging that perfect security is impossible and documenting rational risk management decisions when mitigation costs exceed potential benefits.
Common Mistake: Averaging DREAD Scores Without Understanding Dimensions
The Mistake: A security team scores a vulnerability and gets:
- Damage: 2/10
- Reproducibility: 10/10
- Exploitability: 10/10
- Affected Users: 10/10
- Discoverability: 10/10
Average DREAD: (2+10+10+10+10)/5 = 42/5 = 8.4/10 (CRITICAL)
The team prioritizes this for immediate remediation, allocating 2 engineers for 1 week ($20K cost).
Reality Check: The vulnerability is “Information disclosure of public product documentation via HTTP instead of HTTPS.”
Damage: 2/10 (information is already public on the website)
Reproducibility: 10/10 (works every time)
Exploitability: 10/10 (no authentication required)
Affected Users: 10/10 (all users can access it)
Discoverability: 10/10 (linked from homepage)
The Problem: The DREAD average (8.4) suggests CRITICAL priority, but the actual damage is trivial (public info over HTTP vs HTTPS). The team wasted $20K on a non-issue.
Why This Happens:
Treating DREAD as purely mathematical (average of 5 numbers)
Not considering that Damage should outweigh other factors
Scoring “Can we exploit this?” instead of “Should we care?”
The Correct Approach: Weighted DREAD
Not all DREAD dimensions are equally important. Damage and Affected Users carry more business impact than Discoverability. Weighting Damage ×3 and Affected Users ×2 (the other three factors stay at ×1, for a total weight of 8):

Weighted: (2×3 + 10×1 + 10×1 + 10×2 + 10×1) / 8 = (6 + 10 + 10 + 20 + 10) / 8 = 56/8 = 7.0 (HIGH)

Still HIGH, but borderline. Let’s apply domain judgment:
Domain-Adjusted: “Information is public anyway, so actual Damage = 0” (0×3 + 10×1 + 10×1 + 10×2 + 10×1) / 8 = (0 + 10 + 10 + 20 + 10) / 8 = 50/8 = 6.25 (HIGH)
Now let’s look at Affected Users: “Users accessing public docs – is this really 10/10?”
- Users affected: Anyone accessing the docs
- Actual impact: They see an HTTP URL instead of an HTTPS URL in the browser
- Security impact: None (the data is public)
- Adjusted Affected Users: 2/10 (low impact even if many users are affected)

Re-scoring with both adjustments: (0×3 + 10×1 + 10×1 + 2×2 + 10×1) / 8 = 34/8 = 4.25 (MEDIUM)
Decision: Fix during regular maintenance, not emergency response. Cost: $2K instead of $20K.
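The weighted calculation above can be sketched as a small helper (the function name and default weights of ×3 for Damage and ×2 for Affected Users follow the example; everything else is illustrative):

```python
def weighted_dread(d, r, e, a, di, w_damage=3, w_affected=2):
    """Weighted DREAD: Damage and Affected Users count more than the
    other three factors (which keep weight 1)."""
    total_weight = w_damage + w_affected + 3
    return (d * w_damage + r + e + a * w_affected + di) / total_weight

print(weighted_dread(2, 10, 10, 10, 10))  # 7.0  (HIGH, borderline)
print(weighted_dread(0, 10, 10, 2, 10))   # 4.25 (MEDIUM, after domain adjustment)
```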
Key Insight Comparison:
| Factor | Unweighted Average | Weighted Formula | Domain-Adjusted |
|---|---|---|---|
| DREAD Score | 8.4 (CRITICAL) | 7.0 (HIGH) | 4.25 (MEDIUM) |
| Response | Immediate | Urgent (1 week) | Standard (30 days) |
| Allocated Resources | 2 engineers × 1 week | 1 engineer × 2 weeks | 1 engineer × 1 day |
| Cost | $20,000 | $10,000 | $2,000 |
Guidelines for DREAD Dimension Weighting:
When to Weight Damage Higher (×3):
Safety-critical systems (medical, automotive, industrial control)
Financial systems (payment processing, banking)
Personally identifiable information (PII) at risk
When to Weight Affected Users Higher (×2):
Consumer-facing products with millions of users
Supply chain components affecting multiple downstream systems
Single point of failure in critical infrastructure
When to Reduce Reproducibility/Exploitability/Discoverability Weight (×1):
If damage is low, who cares if it’s easy to exploit?
Example: Easily-exploitable bug that prints “hello” to console = low priority
Red Flags for “Fake CRITICAL” Scores:
Verification Checklist Before Finalizing DREAD:
Key Takeaway: DREAD is a tool, not a religion. The average provides a starting point, but human judgment must adjust for domain context. A 10/10 Exploitability on a 2/10 Damage vulnerability is still low priority. Always sanity-check: “If this is CRITICAL, would we wake up the CEO at 2am?” If not, it’s not CRITICAL.
Interactive: Residual Risk Calculator
Calculate how much risk remains after applying mitigation controls.
Try It: See how mitigation effectiveness affects residual risk. Notice that even 85% effective mitigation on an 80/100 risk still leaves 12 points of residual risk.
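The calculator's arithmetic is just initial risk times (1 − effectiveness); a one-line sketch reproducing the numbers above:

```python
def residual(initial_risk: float, effectiveness: float) -> float:
    """Residual risk after a mitigation that blocks `effectiveness`
    (0.0-1.0) of the initial risk."""
    return initial_risk * (1.0 - effectiveness)

# 85% effective mitigation on an 80/100 risk still leaves 12 points
print(round(residual(80, 0.85), 2))  # 12.0
```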
Interactive: Attack Tree Probability Calculator
Calculate the overall success probability of a multi-step attack chain.
Try It: Adjust individual step probabilities to see how they affect overall attack success. Notice how the probabilities multiply (AND gate logic), not average.
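Attack-chain success multiplies the per-step probabilities (AND-gate logic). A minimal sketch with hypothetical step values:

```python
from math import prod

def chain_success(step_probs):
    """P(attack succeeds) = product of per-step success probabilities."""
    return prod(step_probs)

steps = [0.9, 0.8, 0.7]                        # hypothetical per-step probabilities
print(round(chain_success(steps), 3))          # 0.504

# Adding one 50%-effective defense layer halves the overall probability:
print(round(chain_success(steps + [0.5]), 3))  # 0.252
```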
Putting Numbers to It: Vulnerability Assessment Metrics
Quantitative vulnerability severity scoring using industry-standard metrics (CVSS, DREAD) to prioritize remediation.
CVSS v3.1 Base Score Formula (simplified): \[\text{CVSS} = \min\left(1.08 \times (\text{Impact} + \text{Exploitability}), 10\right)\]
Working through an example: Given: IoT gateway SQL injection vulnerability - Exploitability = 3.9 (network accessible, low complexity, no privileges required) - Impact = 5.9 (high confidentiality impact, high integrity impact, low availability impact)
Step 1: Sum components: \(3.9 + 5.9 = 9.8\) Step 2: Apply multiplier: \(1.08 \times 9.8 = 10.584\) Step 3: Cap at 10: \(\min(10.584, 10) = 10.0\)
Result: CVSS 10.0 (CRITICAL). A separate fleet-level metric, the finding rate, works out to 0.235 vulnerabilities per device (industry benchmark: <0.1 for mature IoT deployments)
In practice: CVSS scoring transforms subjective “this bug seems bad” into objective “this is CVSS 10.0 CRITICAL.” MTTR calculation shows that fixing 47 vulnerabilities takes ~6 months on average - cannot all be fixed simultaneously, so CVSS prioritization is mandatory. Finding rate benchmarking: 0.235 > 0.1 indicates security debt requiring architectural fixes, not just patching individual bugs.
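The simplified base-score formula from above, applied to the gateway SQL-injection numbers. This is a sketch of the simplified formula only, not the full CVSS v3.1 scoring algorithm:

```python
def cvss_base(impact: float, exploitability: float) -> float:
    """Simplified CVSS v3.1-style base score: 1.08 x (sum), capped at 10."""
    return min(1.08 * (impact + exploitability), 10.0)

print(cvss_base(5.9, 3.9))  # 10.0 (uncapped value would be 10.584)
```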
Common Pitfalls
1. Confusing a vulnerability scan with a penetration test
Automated vulnerability scanners identify potential weaknesses based on version numbers and configuration patterns; penetration testers validate whether those weaknesses are actually exploitable and what an attacker could achieve. Both are necessary but neither alone is sufficient.
2. Conducting assessments on lab systems instead of production configurations
A security assessment of a hardened lab environment will miss the misconfigured VLAN, the default credential left on one production gateway, and the unpatched device that was added without going through the standard provisioning process.
3. Not scoping the assessment before beginning
An IoT security assessment without a defined scope will either be too broad (months of work without focus) or miss critical components (because they were assumed to be out of scope). Define scope, objectives, and success criteria before beginning.
4. Filing assessment reports without tracking remediation
Assessment findings are only valuable if they drive remediation. Track each finding in a risk register with an assigned owner, target remediation date, and verification evidence — and schedule a follow-up assessment to verify fixes.
16.7 Summary
STRIDE Classification: Each IoT threat maps to a specific STRIDE category – Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, or Elevation of Privilege – guiding the selection of appropriate countermeasures
DREAD Risk Scoring: Quantify threat severity by averaging five factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability) on a 1-10 scale, with scores above 8.0 classified as CRITICAL and requiring immediate remediation
Attack Tree Probability: Sequential attack steps multiply (AND gate logic), so defense-in-depth dramatically reduces overall attack success – adding even a 50% defense layer halves the probability
Residual Risk Management: After mitigation, residual risk equals initial risk multiplied by (1 - mitigation effectiveness); unmitigated threats must be formally documented as accepted risk with management approval
Prioritization Strategy: Always address CRITICAL threats first regardless of budget constraints, then move to HIGH threats – fixing easy MEDIUM threats while leaving CRITICAL ones active is never acceptable
Concept Relationships
Understanding how threat assessment concepts interconnect:
| Assessment Method | Input Requirements | Output | Best Used For | Limitation |
|---|---|---|---|---|
| STRIDE Analysis | System architecture, data flows | Categorized threat list (S/T/R/I/D/E) | Comprehensive threat discovery | Generates 30-50+ threats (need DREAD to prioritize) |
| DREAD Scoring | Identified threats (from STRIDE) | Risk score 0-10 per threat | Remediation prioritization | Subjective (two assessors may score differently) |
| Attack Trees | Attack goal, sequential steps | Probability of success | Quantifying defense-in-depth ROI | Time-intensive for complex systems |
| Residual Risk Calculation | Initial risk, mitigation effectiveness | Remaining risk after fixes | Acceptance decisions, insurance estimates | Assumes mitigation works as designed |
| Risk Acceptance Framework | Residual risk score, management authority | Go/no-go decision | Documenting unmitigated threats | Doesn’t reduce risk, only documents it |
Key Insight: Combine methods for complete coverage. STRIDE identifies threats, DREAD prioritizes them, Attack Trees quantify defense effectiveness, Residual Risk shows what’s left after fixes, Risk Acceptance documents intentionally unmitigated threats.