16  Threat Modeling Assessment

16.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Demonstrate Threat Modeling Mastery: Apply STRIDE, DREAD, and attack tree concepts to assessment questions
  • Analyze Complex Scenarios: Evaluate multi-layer IoT security scenarios and identify root causes
  • Select Appropriate Mitigations: Justify security control choices for different threat types using risk scores
  • Synthesize Learning Resources: Connect videos and documentation to deepen threat modeling understanding

Key Concepts

  • Vulnerability assessment: An automated or manual scan of an IoT system to identify known vulnerabilities, misconfigurations, and security weaknesses without actively exploiting them.
  • Penetration test: An authorised simulation of a real attack against an IoT system that actively exploits identified vulnerabilities to demonstrate their actual impact — requires explicit written authorisation.
  • Security audit: A systematic review of an IoT system’s security controls, configurations, and processes against a defined standard or policy to identify compliance gaps.
  • Red team exercise: A realistic attack simulation conducted by an independent team operating as adversaries with no prior knowledge of the system’s defences, testing the complete detection and response capability.
  • Attack surface assessment: An enumeration of all interfaces through which an attacker could interact with an IoT system, used to identify underprotected entry points and prioritise hardening efforts.
  • Risk rating: The assessment output combining vulnerability severity (CVSS score), exploitability in the specific deployment context, and business impact to produce a prioritised remediation list.
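
The risk-rating concept above can be sketched in a few lines. This is an illustrative calculation, not a standardised formula: the multiplicative weighting and the finding names are assumptions for the example.

```python
def risk_rating(cvss_base: float, exploitability: float, impact: float) -> float:
    """Combine a CVSS base score (0-10) with deployment-specific
    exploitability and business impact (each 0.0-1.0) into a single
    prioritisation score. The weighting here is illustrative."""
    return round(cvss_base * exploitability * impact, 2)

# Hypothetical findings from an assessment of an IoT deployment
findings = {
    "default-credentials-gateway": risk_rating(9.8, 0.9, 1.0),
    "verbose-mqtt-logging":        risk_rating(5.3, 0.4, 0.3),
}

# Sort highest-risk first to build the prioritised remediation list
priority = sorted(findings.items(), key=lambda kv: kv[1], reverse=True)
```

Note how context matters: a high CVSS score alone does not top the list unless the vulnerability is also exploitable and impactful in this specific deployment.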

In 60 Seconds

IoT security assessments systematically evaluate a deployed or designed system’s security posture — identifying vulnerabilities, measuring control effectiveness, and quantifying residual risk to drive prioritised remediation. Effective assessments combine automated scanning, manual penetration testing, and architecture review to cover technical vulnerabilities, configuration weaknesses, and design flaws that each method alone would miss.

This chapter helps you consolidate your understanding of IoT security threats through review and assessment. Think of it as a security audit of your own knowledge – identifying what you understand well and what needs more study ensures you are prepared to tackle real-world security challenges.

16.2 Quiz 1: Critical Attack Scenario Analysis

16.3 Chapter Summary

Systematic threat modeling provides a structured methodology for proactive security through five iterative steps: acquiring comprehensive architecture knowledge of IoT components and interactions, identifying entry points across physical/network/application interfaces, mapping data flow paths with encryption and authentication checkpoints, defining trust boundaries between device/gateway/cloud tiers, and conceiving plausible attack scenarios based on threat intelligence. This disciplined approach ensures comprehensive coverage of potential vulnerabilities before deployment.
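
The five steps above are iterative, so it helps to treat them as a working checklist. A minimal sketch (the step wording paraphrases the chapter; the helper function is mine):

```python
# The five threat-modeling steps, as an iterable checklist
STEPS = (
    "Acquire architecture knowledge (components and interactions)",
    "Identify entry points (physical, network, application)",
    "Map data flow paths (encryption and authentication checkpoints)",
    "Define trust boundaries (device / gateway / cloud tiers)",
    "Conceive attack scenarios (based on threat intelligence)",
)

def next_incomplete(done):
    """Return the index of the first step not yet completed,
    or None once a full iteration of the model is finished."""
    return next((i for i in range(len(STEPS)) if i not in done), None)

# After finishing the first two steps, step 3 (data flows) is next
print(STEPS[next_incomplete({0, 1})])
```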

Multiple threat taxonomies guide classification and prioritization. STRIDE maps threats to security properties: Spoofing violates authentication, Tampering violates integrity, Repudiation prevents accountability, Information Disclosure violates confidentiality, Denial of Service violates availability, and Elevation of Privilege violates authorization. The Open Threat Taxonomy categorizes threats as Physical (hardware damage), Resource (power/network disruption), Personnel (social engineering), and Technical (exploitation). ENISA provides IoT-specific frameworks covering devices, communications, data, services, and stakeholders.
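
The STRIDE-to-property mapping above is small enough to encode directly, which makes classification mechanical during a review. A sketch (the dictionary mirrors the chapter's mapping; the `classify` helper is mine):

```python
# Each STRIDE category maps to the security property it violates
STRIDE = {
    "Spoofing": "authentication",
    "Tampering": "integrity",
    "Repudiation": "accountability (non-repudiation)",
    "Information Disclosure": "confidentiality",
    "Denial of Service": "availability",
    "Elevation of Privilege": "authorization",
}

def classify(threat: str, category: str) -> str:
    """One-line classification linking a threat to the violated property."""
    return f"{threat}: violates {STRIDE[category]}"

print(classify("Forged firmware pushed to sensor", "Tampering"))
```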

Ten critical IoT attack scenarios demonstrate real-world exploitation paths: network eavesdropping for intelligence gathering, sensor manipulation causing safety failures, actuator sabotage for production disruption, administration system compromise enabling mass device control, protocol exploitation leveraging vulnerabilities, command injection for privilege escalation, stepping stone attacks for anonymity, DDoS botnet creation (Mirai-style), power manipulation for battery depletion, and ransomware attacks on critical infrastructure. Each scenario includes detailed attack steps, impact analysis, and targeted mitigations.

The comprehensive Python threat modeling framework implements DREAD scoring (calculating risk as average of five factors rated 1-10), attack tree analysis with probability propagation and critical path identification, automated threat identification matching asset properties to known vulnerabilities, MITRE ATT&CK phase mapping for IoT-specific techniques, residual risk calculation showing effectiveness of applied mitigations, attack surface scoring per asset based on threat count and severity, and mitigation coverage analysis identifying unprotected vulnerabilities. This framework enables data-driven security decisions with quantifiable risk metrics and prioritized remediation roadmaps.
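
The DREAD calculation at the heart of that framework can be sketched in a few lines. This is an illustrative re-implementation, not the chapter's actual framework code; only the >8.0 CRITICAL threshold comes from the text, and the lower severity bands are assumptions.

```python
from dataclasses import dataclass, astuple

@dataclass
class DreadScore:
    damage: int            # each factor rated 1-10
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def risk(self) -> float:
        # Overall risk is the plain average of the five factors
        factors = astuple(self)
        return sum(factors) / len(factors)

    def severity(self) -> str:
        # Only the >8.0 CRITICAL cutoff is from the chapter;
        # the bands below it are illustrative
        r = self.risk()
        return "CRITICAL" if r > 8.0 else "HIGH" if r >= 6.0 else "MEDIUM/LOW"

# Hypothetical scoring of a Mirai-style mass-compromise threat
botnet = DreadScore(damage=9, reproducibility=9, exploitability=8,
                    affected_users=10, discoverability=7)
print(botnet.risk(), botnet.severity())   # → 8.6 CRITICAL
```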

16.4 Videos

Threat Modeling Overview
From slides — frameworks (STRIDE, attack trees) and IoT-specific considerations.

Privacy Engineering Basics
From slides — aligning threat models with privacy-by-design principles.

Attack Trees and Misuse Cases
From slides — constructing and using attack trees for IoT.

Common Pitfalls

Automated vulnerability scanners identify potential weaknesses based on version numbers and configuration patterns; penetration testers validate whether those weaknesses are actually exploitable and what an attacker could achieve. Both are necessary but neither alone is sufficient.

Assessing only a hardened lab environment rather than the production deployment will miss the misconfigured VLAN, the default credential left on one production gateway, and the unpatched device that was added without going through the standard provisioning process. Assess the system as it actually runs.

An IoT security assessment without a defined scope will either be too broad (months of work without focus) or miss critical components (because they were assumed to be out of scope). Define scope, objectives, and success criteria before beginning.

Assessment findings are only valuable if they drive remediation. Track each finding in a risk register with an assigned owner, target remediation date, and verification evidence — and schedule a follow-up assessment to verify fixes.
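
The risk-register discipline above is easy to enforce with a small record type. A minimal sketch, with illustrative field names (no specific tracking tool is implied):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    """Minimal risk-register entry: every finding gets an owner,
    a target date, and verification evidence before it is closed."""
    title: str
    owner: str
    target_date: date
    verified: bool = False
    evidence: str = ""

    def close(self, evidence: str) -> None:
        # A finding closes only when a follow-up check supplies evidence
        self.evidence = evidence
        self.verified = True

register = [Finding("Default credential on production gateway",
                    owner="ops-team", target_date=date(2025, 3, 1))]
register[0].close("Re-scan 2025-02-20: credential rotated, login rejected")

# Anything still open feeds the follow-up assessment's scope
open_findings = [f for f in register if not f.verified]
```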

16.7 Summary

  • STRIDE Classification: Each IoT threat maps to a specific STRIDE category – Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, or Elevation of Privilege – guiding the selection of appropriate countermeasures
  • DREAD Risk Scoring: Quantify threat severity by averaging five factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability) on a 1-10 scale, with scores above 8.0 classified as CRITICAL and requiring immediate remediation
  • Attack Tree Probability: Sequential attack steps multiply (AND gate logic), so defense-in-depth dramatically reduces overall attack success – adding even a 50% defense layer halves the probability
  • Residual Risk Management: After mitigation, residual risk equals initial risk multiplied by (1 - mitigation effectiveness); unmitigated threats must be formally documented as accepted risk with management approval
  • Prioritization Strategy: Always address CRITICAL threats first regardless of budget constraints, then move to HIGH threats – fixing easy MEDIUM threats while leaving CRITICAL ones active is never acceptable
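
The attack-tree and residual-risk bullets above translate directly into arithmetic. A minimal sketch (the function names are mine, not the chapter framework's API):

```python
from math import prod

def and_path_probability(step_probs):
    """AND-gate logic: every sequential step must succeed, so the
    path's success probability is the product of the step probabilities."""
    return prod(step_probs)

def residual_risk(initial_risk, mitigation_effectiveness):
    """Residual risk = initial risk * (1 - mitigation effectiveness)."""
    return initial_risk * (1 - mitigation_effectiveness)

# A three-step attack path, then the same path with one extra
# 50%-effective defense layer: the success probability is halved.
baseline = and_path_probability([0.8, 0.6, 0.9])       # ~0.432
defended = and_path_probability([0.8, 0.6, 0.9, 0.5])  # ~0.216

# A 9.0 initial risk with a 70%-effective mitigation leaves 2.7,
# which must be formally accepted if it is not reduced further.
remaining = residual_risk(9.0, 0.7)
```
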

Concept Relationships

Understanding how threat assessment concepts interconnect:

| Assessment Method | Input Requirements | Output | Best Used For | Limitation |
|---|---|---|---|---|
| STRIDE Analysis | System architecture, data flows | Categorized threat list (S/T/R/I/D/E) | Comprehensive threat discovery | Generates 30-50+ threats (needs DREAD to prioritize) |
| DREAD Scoring | Identified threats (from STRIDE) | Risk score 1-10 per threat | Remediation prioritization | Subjective (two assessors may score differently) |
| Attack Trees | Attack goal, sequential steps | Probability of success | Quantifying defense-in-depth ROI | Time-intensive for complex systems |
| Residual Risk Calculation | Initial risk, mitigation effectiveness | Remaining risk after fixes | Acceptance decisions, insurance estimates | Assumes mitigation works as designed |
| Risk Acceptance Framework | Residual risk score, management authority | Go/no-go decision | Documenting unmitigated threats | Doesn't reduce risk, only documents it |

Key Insight: Combine methods for complete coverage. STRIDE identifies threats, DREAD prioritizes them, Attack Trees quantify defense effectiveness, Residual Risk shows what’s left after fixes, Risk Acceptance documents intentionally unmitigated threats.

16.8 What’s Next

| If you want to… | Read this |
|---|---|
| Understand threat modelling to scope assessments | Threat Modelling and Mitigation |
| Apply STRIDE to systematically identify what to assess | STRIDE Framework |
| Study the attacks your assessment should detect | Threats Attacks and Vulnerabilities |
| Explore compliance requirements driving assessment scope | Threats Compliance |
| Return to the security module overview | IoT Security Fundamentals |