42  Interactive IoT Security Tools

Learning Objectives

After using these interactive tools, you will be able to:

  • Calculate IoT device security risk scores using the DREAD methodology (Damage, Reproducibility, Exploitability, Affected users, Discoverability)
  • Evaluate how device design choices such as connectivity, update capability, and authentication strength impact overall security risk
  • Diagnose high-risk configurations and generate prioritized mitigation recommendations
  • Compare security postures across different IoT device types using quantitative risk scoring

Key Concepts

  • Threat modelling tool: Software (OWASP Threat Dragon, Microsoft Threat Modeling Tool) that guides the creation of Data Flow Diagrams and automatically suggests STRIDE threats for each element and data flow.
  • Network scanner: A tool (Nmap, Shodan) that discovers IoT devices on a network, identifies open ports and services, and reports version information enabling vulnerability assessment.
  • Protocol analyser: A tool (Wireshark, tcpdump) that captures and decodes network traffic, enabling verification of encryption, protocol conformance, and anomaly detection in IoT communications.
  • Fuzzer: A tool that generates malformed or random inputs to IoT device interfaces to discover crashes, hangs, or unexpected behaviour indicating exploitable vulnerabilities.
  • Risk calculator: A tool applying CVSS or DREAD scoring to enumerate and prioritise security risks, helping teams focus remediation effort on the highest-impact vulnerabilities.

In 60 Seconds

Interactive IoT security tools — threat model generators, attack simulators, vulnerability scanners, and risk calculators — accelerate security learning and analysis by making abstract concepts tangible and immediately testable. The most valuable tools are those that allow you to vary inputs and observe the resulting changes in threat coverage or risk scores, developing intuition for the sensitivity of security decisions.

IoT security threats are the various ways that connected devices and their data can be compromised. Think of your IoT system as a house – you need to understand how burglars might try to get in before you can choose the right locks, alarms, and security cameras. This chapter helps you understand the threats so you can build effective defenses.

“This is so cool!” Lila the LED exclaimed. “This chapter has an interactive tool where you can calculate the security risk of any IoT device! You plug in details about your device – like what type of connectivity it has, whether it can be updated, and how strong its authentication is – and the calculator gives you a risk score.”

Sammy the Sensor tried it out. “I put in my specs: Wi-Fi connected, firmware update capable, certificate-based authentication, and encrypted storage. My DREAD score came back moderate – 5.2 out of 10. Not bad! But it showed that my biggest weakness is that I am always connected to the internet, which increases the Exploitability score.”

“The tool uses the DREAD methodology,” Max the Microcontroller explained. “It checks four threat categories for your device and scores each one. High-risk devices – like those with no update mechanism or default passwords – get scores above 7, which means URGENT action needed. Low-risk devices below 3 just need regular monitoring.”

“The best part is the recommendations!” Bella the Battery said. “After calculating your score, the tool tells you exactly what to fix first. It might say ‘add TLS encryption’ or ‘enable secure boot.’ It takes the guesswork out of security planning. Try it yourself with different device configurations and see how each change affects the risk score!”

42.1 Interactive Security Assessment Tools

This chapter provides interactive tools for assessing IoT security risks and understanding attack surfaces.

How It Works: DREAD Risk Calculator

The interactive DREAD risk calculator uses a methodical scoring process:

Step 1: Input Device Characteristics

  • Network connectivity type (determines exploitability)
  • Data sensitivity level (affects damage potential)
  • Update capability (impacts reproducibility)
  • Physical access controls (influences discoverability)
  • Authentication strength (modifies affected users)

Step 2: Calculate Individual Threat Scores

Each threat category (Physical, Network, Software, Data) is scored on five DREAD factors:

  • Damage (D): Potential harm if exploited (0-10 scale)
  • Reproducibility (R): How easily the attack can be repeated
  • Exploitability (E): Skill level required to execute
  • Affected Users (A): Scope of impact
  • Discoverability (D): How easily the vulnerability can be found

Step 3: Generate Risk Profile

  • Average DREAD factors for each threat category
  • Weight categories by deployment context (network threats weighted higher for internet-exposed devices)
  • Calculate overall risk score (0-10 scale)

Step 4: Prioritize Mitigations

  • Compare risk scores across categories
  • Generate ranked mitigation recommendations
  • Flag critical vulnerabilities (score ≥ 8.0) for immediate action

Why This Approach Works: DREAD provides quantitative risk assessment that converts subjective security concerns into comparable numeric scores, enabling data-driven prioritization of limited security budgets.
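The four steps above can be sketched in a few lines of Python. The factor values and category weights here are illustrative assumptions, not the exact coefficients of the chapter's interactive calculator.

```python
def dread_score(d, r, e, a, disc):
    """Step 2: average the five DREAD factors (each 0-10)."""
    return (d + r + e + a + disc) / 5

def overall_risk(category_scores, weights):
    """Step 3: weighted average across threat categories."""
    total = sum(weights[c] for c in category_scores)
    return sum(category_scores[c] * weights[c] for c in category_scores) / total

def priority(score):
    """Step 4: flag critical scores (>= 8.0) for immediate action."""
    return "CRITICAL" if score >= 8.0 else "HIGH" if score >= 7.0 else "MODERATE"

# Step 1 inputs condensed into per-category factor estimates (illustrative).
categories = {
    "physical": dread_score(7, 7, 8, 2, 9),   # 6.6
    "network":  dread_score(7, 9, 6, 3, 7),   # 6.4
    "software": dread_score(8, 9, 7, 3, 8),   # 7.0
    "data":     dread_score(6, 8, 7, 3, 6),   # 6.0
}
# Internet-exposed device: weight network threats higher than the rest.
risk = overall_risk(categories, {"physical": 1, "network": 2, "software": 1, "data": 1})
```

Doubling the network weight nudges the overall score toward the network category, which is the behaviour Step 3 describes for internet-exposed devices.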

This interactive calculator uses the DREAD methodology (Damage, Reproducibility, Exploitability, Affected users, Discoverability) to assess security risk for IoT devices. It evaluates four threat categories based on device characteristics you specify.

How to use this tool:

  1. Adjust the device characteristics sliders below
  2. Observe how risk scores change for each threat category
  3. Review the overall risk level and mitigation recommendations
  4. Experiment with different configurations to understand risk factors

Learning objective: Understand how device design choices (connectivity, update capability, data sensitivity) directly impact security risk.

This tool visualizes the attack surface of a typical IoT system. Select each component to explore common attacks, risk levels, and mitigation strategies.

42.2 CVSS Score Calculator

Understanding how to categorize vulnerabilities helps prioritize remediation efforts:

| Category | CVSS Range | Response Time | Example |
|---|---|---|---|
| Critical | 9.0 - 10.0 | Immediate (24-48 hours) | Remote code execution, default credentials |
| High | 7.0 - 8.9 | Urgent (1 week) | SQL injection, privilege escalation |
| Medium | 4.0 - 6.9 | Standard (30 days) | XSS, information disclosure |
| Low | 0.1 - 3.9 | Scheduled (90 days) | Minor information leak |

CVSS Components:

  • Attack Vector (AV): Network, Adjacent, Local, Physical
  • Attack Complexity (AC): Low, High
  • Privileges Required (PR): None, Low, High
  • User Interaction (UI): None, Required
  • Impact (C/I/A): None, Low, High for Confidentiality, Integrity, Availability
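As a sketch, the severity bands from the table can be expressed directly in code. The bands follow the table above; computing the base score itself from the component metrics (AV, AC, PR, UI, C/I/A) is more involved and is omitted here.

```python
def categorize(score):
    """Map a CVSS base score (0.0-10.0) to severity and response time,
    using the bands from the table above."""
    if score >= 9.0:
        return ("Critical", "Immediate (24-48 hours)")
    if score >= 7.0:
        return ("High", "Urgent (1 week)")
    if score >= 4.0:
        return ("Medium", "Standard (30 days)")
    if score > 0.0:
        return ("Low", "Scheduled (90 days)")
    return ("None", "No action required")
```

A remote-code-execution finding scored 9.8 would land in the Critical band, triggering the 24-48 hour response window.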

Concept Relationships

| Concept | Relates To | Nature of Relationship |
|---|---|---|
| DREAD Methodology | Risk Quantification | DREAD converts qualitative threats to numeric scores |
| Attack Surface | Entry Point Identification | Attack surface maps all potential vulnerability points |
| Device Characteristics | Risk Profile | Physical/network attributes determine exposure level |
| Threshold Tuning | False Positive Balance | Thresholds trade detection sensitivity vs operational impact |
| Mitigation Recommendations | Priority Ranking | Risk scores drive resource allocation decisions |
| Security Tools | Team Maturity | Tool selection must match organizational capability |

42.3 Knowledge Check

Scenario: AgriTech Solutions deploys 1,000 soil moisture sensors across 50 farms. Each sensor (STM32L0 MCU + LoRa radio) transmits moisture readings every 6 hours to a LoRaWAN gateway. Security audit identified 8 vulnerabilities. Limited €25K security budget requires prioritization.

Device Configuration (Input to DREAD Calculator):

  • Network Connectivity: LoRaWAN (LPWAN)
  • Data Sensitivity: 6/10 (soil moisture reveals crop type and irrigation patterns - moderate commercial sensitivity)
  • Update Capability: Manual updates (requires field technician visit)
  • Physical Access: Public (sensors in open fields, accessible to anyone)
  • Authentication Strength: 4/10 (LoRaWAN AppKey shared across all 1,000 devices)
  • Deployment Scale: 1,000 devices

DREAD Calculator Output:

Overall Risk Score: 6.8/10 (High)

| Threat Category | Damage | Reproducibility | Exploitability | Affected Users | Discoverability | Total Risk |
|---|---|---|---|---|---|---|
| Physical Threats | 7 | 7 | 8 (public access) | 2 (1,000 devices) | 9 (visible in fields) | 6.6 |
| Network Threats | 7 | 9 | 6 (LoRaWAN requires SDR) | 3 | 7 (LoRaWAN scannable) | 6.4 |
| Software Threats | 8 | 9 | 7 (manual updates = stale firmware) | 3 | 8 | 7.0 |
| Data Threats | 6 | 8 | 7 (weak auth) | 3 | 6 | 6.0 |

Mitigation Recommendations (from Calculator):

  1. CRITICAL: Network Exposure - LoRaWAN uses shared AppKey (all devices decrypt with same key)
    • Risk: Attacker with €200 LoRa SDR (HackRF, LimeSDR) can eavesdrop on all transmissions
    • Action: Migrate to LoRaWAN 1.1 with per-device session keys (NwkSKey, AppSKey unique per device)
  2. CRITICAL: Authentication - Shared AppKey means compromising one device compromises all
    • Risk: Attacker steals one sensor from field, extracts AppKey from flash memory, joins network as any device
    • Action: Provision unique AppKeys per device using secure OTAA (Over-The-Air Activation)
  3. HIGH: Physical Security - Sensors in open fields with no tamper detection
    • Risk: Attacker opens case, connects UART, reflashes firmware
    • Action: Add tamper-evident seals (€0.50/device), epoxy-fill case screws (€0.10/device)
  4. HIGH: Software Updates - Manual updates require technician visit (€50/device) → updates never happen
    • Risk: Known vulnerabilities remain unpatched for years
    • Action: Implement LoRaWAN FUOTA (Firmware Update Over-The-Air) using LoRa Alliance TS006 spec

Detailed Vulnerability Analysis (Mapped to DREAD Components):

42.3.1 Vulnerability 1: Shared LoRaWAN AppKey Across All 1,000 Devices

DREAD Calculation:

  • Damage: 9/10 (Compromise of one device compromises all 1,000)
  • Reproducibility: 10/10 (Works every time once AppKey extracted)
  • Exploitability: 6/10 (Requires LoRa SDR + knowledge of LoRaWAN protocol)
  • Affected Users: 10/10 (All 1,000 devices + all 50 farms)
  • Discoverability: 8/10 (LoRaWAN traffic visible on spectrum analyzer)
  • Total DREAD: (9+10+6+10+8)/5 = 8.6/10 → CRITICAL

Expected Loss Calculation:

  • Probability of exploitation: 40% (publicly known vulnerability, €200 tools)
  • Impact per farm: €5,000 (competitor learns irrigation strategy, steals crop timing data)
  • Farms affected: 50
  • Expected loss: 0.4 × 50 × €5,000 = €100,000

Mitigation Cost:

  • Re-provision 1,000 devices with unique AppKeys: 10 hours scripting + 1 week field deployment = €8,000
  • ROI: €100K expected loss vs €8K mitigation = 12.5x return

Decision: FIX IMMEDIATELY (P0 priority)
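The expected-loss and ROI arithmetic for this vulnerability can be checked with a short sketch (figures taken from the scenario above):

```python
def expected_loss(probability, impact_per_unit, units=1):
    """Expected loss = probability of exploitation x impact x units affected."""
    return probability * impact_per_unit * units

def roi(loss_prevented, mitigation_cost):
    """Return on mitigation spend: euros of loss prevented per euro spent."""
    return loss_prevented / mitigation_cost

loss = expected_loss(0.4, 5_000, units=50)   # 40% probability x EUR 5,000 x 50 farms
payoff = roi(loss, 8_000)                    # EUR 100K prevented vs EUR 8K spent
```

The same two functions apply to the remaining vulnerabilities; only the probability, impact, and cost inputs change.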

42.3.2 Vulnerability 2: No Firmware Signature Verification

DREAD Calculation:

  • Damage: 10/10 (Attacker can upload malicious firmware, full device control)
  • Reproducibility: 9/10 (Works if attacker has physical access)
  • Exploitability: 7/10 (Requires opening device, connecting UART)
  • Affected Users: 2/10 (Only physically accessed devices, unlikely to scale)
  • Discoverability: 5/10 (Requires source code or firmware analysis)
  • Total DREAD: (10+9+7+2+5)/5 = 6.6/10 → HIGH

Expected Loss:

  • Probability: 10% (requires physical access to unattended sensors)
  • Impact: €2,000/device (replacement + investigation)
  • Devices likely compromised: 10 (targeted attacks on high-value farms)
  • Expected loss: 0.1 × 10 × €2,000 = €2,000

Mitigation Cost:

  • Implement bootloader signature verification: 3 weeks development + testing = €12,000
  • ROI: €2K expected loss vs €12K cost = negative ROI (cost is 6× the prevented loss) → DEFER

Decision: P2 (High priority but not urgent) - High DREAD but low expected loss due to low probability

42.3.3 Vulnerability 3: Verbose Error Messages Expose Device Details

DREAD Calculation:

  • Damage: 4/10 (Information disclosure, not direct compromise)
  • Reproducibility: 10/10 (Error messages always visible)
  • Exploitability: 9/10 (Anyone with LoRa receiver can see)
  • Affected Users: 8/10 (All devices leak info)
  • Discoverability: 7/10 (Requires LoRa SDR but messages plainly visible)
  • Total DREAD: (4+10+9+8+7)/5 = 7.6/10 → HIGH

Expected Loss:

  • Probability: 60% (low barrier to exploitation)
  • Impact: €500 (competitor learns device model, firmware version, gateway location)
  • Expected loss: 0.6 × €500 = €300

Mitigation Cost:

  • Remove verbose errors, log server-side only: 2 hours development = €400
  • ROI: €300 loss vs €400 cost = negative ROI (cost is 1.3× the prevented loss)

Decision: P3 (Medium priority) - High DREAD but very low expected loss → Fix if budget allows

42.3.4 Vulnerability 4: No Rate Limiting on LoRaWAN Join Requests

DREAD Calculation:

  • Damage: 6/10 (DoS attack prevents new devices from joining network)
  • Reproducibility: 9/10
  • Exploitability: 5/10 (Requires LoRa transmitter + knowledge of join protocol)
  • Affected Users: 4/10 (Only affects new device deployment)
  • Discoverability: 6/10
  • Total DREAD: (6+9+5+4+6)/5 = 6.0/10 → MEDIUM

Expected Loss: €1,000 (delayed deployment during attack)

Mitigation Cost: €3,000 (implement join request rate limiting on gateway)

Decision: P3 - Expected loss < mitigation cost → ACCEPT RISK (defer indefinitely)

Final Prioritization (Based on DREAD + Expected Loss):

| Priority | Vulnerability | DREAD | Expected Loss | Mitigation Cost | Fix? |
|---|---|---|---|---|---|
| P0 | Shared AppKey | 8.6 | €100,000 | €8,000 | ✅ YES (12.5x ROI) |
| P1 | No FUOTA updates | 7.0 | €50,000 (long-term) | €10,000 | ✅ YES (5x ROI) |
| P2 | No tamper detection | 6.6 | €20,000 | €600 (seals only) | ✅ YES (33x ROI) |
| P2 | No firmware signing | 6.6 | €2,000 | €12,000 | ❌ DEFER (negative ROI) |
| P3 | Verbose error messages | 7.6 | €300 | €400 | ❌ DEFER (negative ROI) |
| P3 | No rate limiting | 6.0 | €1,000 | €3,000 | ❌ ACCEPT RISK |

Budget Allocation (€25K total):

  • P0: Unique AppKeys (€8K) ✅
  • P1: FUOTA implementation (€10K) ✅
  • P2: Tamper-evident seals (€0.60 × 1,000 = €600) ✅
  • Total spent: €18,600
  • Remaining budget: €6,400 → allocate to security monitoring (LoRaWAN gateway intrusion detection)
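Putting the audit results and the budget together: a minimal greedy sketch that funds positive-ROI fixes in descending ROI order until the €25K budget runs out. The figures mirror the scenario; the greedy policy itself is an illustrative assumption, not the calculator's actual algorithm.

```python
# (name, DREAD, expected loss EUR, mitigation cost EUR) from the audit.
vulns = [
    ("Shared AppKey",        8.6, 100_000,  8_000),
    ("No FUOTA updates",     7.0,  50_000, 10_000),
    ("No tamper detection",  6.6,  20_000,    600),
    ("No firmware signing",  6.6,   2_000, 12_000),
    ("Verbose errors",       7.6,     300,    400),
    ("No rate limiting",     6.0,   1_000,  3_000),
]

def plan(vulns, budget):
    """Fund only fixes whose prevented loss exceeds their cost,
    highest ROI first, while budget remains."""
    funded = []
    positive_roi = [v for v in vulns if v[2] > v[3]]
    for name, dread, loss, cost in sorted(
            positive_roi, key=lambda v: v[2] / v[3], reverse=True):
        if cost <= budget:
            funded.append(name)
            budget -= cost
    return funded, budget

funded, remaining = plan(vulns, 25_000)
```

Run against the scenario, this funds the tamper seals, unique AppKeys, and FUOTA, leaving €6,400 for monitoring, matching the allocation above.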

Key Insight from DREAD Calculator:

The calculator highlighted Software Threats (7.0) and Network Threats (6.4) as the highest-risk categories. This matches the prioritized fixes:

  • Network Threats → Shared AppKey (network-layer vulnerability)
  • Software Threats → No FUOTA updates (software cannot be patched)

Without the calculator, the team might have prioritized firmware signing (sounds critical, high DREAD 6.6) over tamper seals (sounds minor, DREAD 6.6). But expected loss analysis shows tamper seals have 33x ROI (€20K loss prevented for €600 cost) while firmware signing has negative ROI (€2K loss vs €12K cost). The calculator’s recommendations prevented a €11.4K budget waste (€12K firmware signing cost - €600 optimal spend).

Lessons Learned:

  1. DREAD score alone is insufficient - Must combine with expected loss (probability × impact)
  2. Physical security (tamper seals) had best ROI - €0.60/device prevented €20K in targeted attacks
  3. Shared credentials are highest risk - DREAD 8.6, affects all devices, trivial to exploit once one device compromised
  4. Defer low-probability high-impact threats - Firmware signing (DREAD 6.6) deferred because physical access attacks are rare in agriculture (open fields, low-value targets compared to critical infrastructure)

Post-Implementation DREAD Re-Calculation (After Fixes):

  • Authentication Strength: 4/10 → 8/10 (unique AppKeys)
  • Update Capability: Manual → Automatic (FUOTA)
  • Physical Access: Public + tamper seals (partial mitigation)

New Overall Risk Score: 4.2/10 (Medium) - down from 6.8 (High)

Risk reduction: 38% decrease in overall risk score by addressing top 3 vulnerabilities within budget.

Security tools range from simple risk calculators to enterprise SIEM platforms. This framework helps select tools appropriate for your team’s skill level and IoT deployment scale.

| Decision Criteria | Basic Tools | Intermediate Tools | Advanced Tools | Enterprise Tools |
|---|---|---|---|---|
| Team Size | 1-5 people | 5-20 people | 20-100 people | 100+ people |
| IoT Device Count | 1-100 devices | 100-10,000 devices | 10,000-1M devices | 1M+ devices |
| Security Expertise | Generalists (embedded developers) | 1 dedicated security person | Security team (3-5 people) | Security Operations Center (SOC) |
| Annual Security Budget | €0-€10K | €10K-€100K | €100K-€500K | €500K+ |
| Risk Calculator | ✅ DREAD spreadsheet (free) | ✅ Interactive web calculator | ⚠️ Too basic for scale | ❌ Not scalable |
| Threat Modeling | ✅ Manual STRIDE (whiteboard) | ✅ Microsoft Threat Modeling Tool (free) | ✅ IriusRisk, ThreatModeler | ✅ Integrated with SDLC (Jira, GitHub) |
| Vulnerability Scanning | ✅ Nmap, OpenVAS (free) | ✅ Nessus Essentials (free, 16 IPs) | ✅ Nessus Professional, Qualys | ✅ Tenable.io, Rapid7 InsightVM |
| SIEM / Log Analysis | ❌ Not needed (manual log review) | ✅ Graylog, ELK Stack (self-hosted) | ✅ Splunk, Sumo Logic | ✅ Splunk Enterprise Security, QRadar |
| Penetration Testing | ✅ Self-service (Metasploit, Burp) | ✅ Annual 3rd-party pentest (€5K-€15K) | ✅ Quarterly pentest + red team | ✅ Continuous automated + manual |
| Attack Surface Visualizer | ✅ Manual diagram (draw.io) | ✅ Interactive dashboards (custom built) | ✅ Commercial ASM tools (CyCognito, Censys) | ✅ Integrated with CMDB, full automation |
| Firmware Analysis | ✅ Binwalk, strings (manual) | ✅ Firmware Analysis Toolkit (FAT) | ✅ EMBA, Firmadyne | ✅ IoT Inspector, Finite State |
| Code Analysis (SAST) | ✅ Compiler warnings, cppcheck (free) | ✅ SonarQube Community (free) | ✅ Checkmarx, Veracode | ✅ Synopsys Coverity, Fortify |
| Dynamic Testing (DAST) | ✅ Manual testing (Burp Free) | ✅ OWASP ZAP (free) | ✅ Burp Suite Pro, Acunetix | ✅ Veracode DAST, Rapid7 AppSpider |
| Compliance Tracking | ✅ Excel checklist | ✅ Compliance management spreadsheet | ✅ OneTrust, Vanta | ✅ ServiceNow GRC, Archer |

Quick Selection Guide:

42.3.5 Scenario 1: IoT Startup (10 Devices, 3-Person Team, €5K Budget)

Tools to Use:

  • Risk Calculator: Build a simple DREAD spreadsheet (this chapter’s OJS calculator as template)
  • Threat Modeling: Manual STRIDE on whiteboard (2 hours per sprint)
  • Vulnerability Scanning: Nmap + OpenVAS (free, open-source)
  • Penetration Testing: Self-service with Metasploit + Burp Suite Community (free)
  • Code Analysis: Enable compiler warnings (-Wall -Wextra), run cppcheck (free)
  • Compliance: ETSI EN 303 645 Excel checklist (13 provisions)

Total Cost: €0 (all free tools) + 8 hours/month team time

What NOT to Use: Enterprise SIEM (overkill), commercial pentesting (too expensive), Attack Surface Management platforms (not needed at this scale)

42.3.6 Scenario 2: Growing IoT Company (5,000 Devices, 15-Person Team, €50K Budget)

Tools to Use:

  • Risk Calculator: Interactive web dashboard (custom-built based on this chapter’s OJS code)
  • Threat Modeling: Microsoft Threat Modeling Tool (free, structured diagrams)
  • Vulnerability Scanning: Nessus Professional (€3K/year for 256 IPs)
  • SIEM: Graylog self-hosted (free) or Splunk Free (500 MB/day)
  • Penetration Testing: Annual 3rd-party pentest (€10K)
  • Firmware Analysis: Firmware Analysis Toolkit (FAT) + Binwalk automation scripts
  • Code Analysis: SonarQube Community (free)
  • Compliance: Vanta or Drata (€10K-€20K/year for automated compliance tracking)

Total Cost: €30K-€40K (tools + services) + 1 dedicated security person

What NOT to Use: Splunk Enterprise Security (€100K+/year, overkill), continuous pentesting (€200K+/year), IoT Inspector (€50K+/year, only justified for safety-critical devices)

42.3.7 Scenario 3: Enterprise IoT (500K Devices, 50-Person Team, €500K Budget)

Tools to Use:

  • Risk Calculator: Integrated into Jira/ServiceNow (automated risk scoring per vulnerability)
  • Threat Modeling: IriusRisk or ThreatModeler (€20K-€50K/year, integrates with SDLC)
  • Vulnerability Scanning: Tenable.io (€50K/year for cloud-based scanning)
  • SIEM: Splunk Enterprise Security (€100K-€200K/year)
  • Penetration Testing: Quarterly manual pentest (€40K/year) + HackerOne bug bounty (€50K/year)
  • Attack Surface Management: CyCognito or Censys (€30K-€100K/year)
  • Firmware Analysis: IoT Inspector or Finite State (€50K-€150K/year, automated at scale)
  • Code Analysis: Synopsys Coverity + Fortify (€100K/year)
  • Compliance: ServiceNow GRC (€150K/year, full governance/risk/compliance platform)

Total Cost: €500K+ (tools + SOC team + services)

What NOT to Use: Free tools (don’t scale to 500K devices), manual processes (not sustainable), self-hosted SIEM (operational burden)

Decision Tree:

  1. How many devices?
    • <100: Manual tools (spreadsheets, whiteboards, free scanners)
    • 100-10,000: Scripted automation (custom dashboards, scheduled scans)
    • 10,000-1M: Commercial tools (Nessus, SonarQube, managed SIEM)
    • >1M: Enterprise platforms (Tenable, Splunk ES, continuous pentesting)

  2. Do you have dedicated security staff?
    • No: Use free tools only (Nmap, OpenVAS, cppcheck, Excel compliance checklists)
    • 1 person: Add commercial scanners (Nessus €3K/year) + annual pentest (€10K)
    • 3-5 people: Add SIEM (Splunk €50K/year) + quarterly pentest + compliance automation (Vanta €20K)
    • 10+ people: Full SOC with enterprise tools (€500K+ annual security budget)
  3. What’s your compliance requirement?
    • None (internal use only): Basic tools sufficient
    • ETSI EN 303 645 (consumer IoT): Add annual 3rd-party audit (€15K)
    • IEC 62443 (industrial IoT): Add certified SAST/DAST tools + penetration testing
    • FDA (medical IoT): Add IoT-specific firmware analysis (IoT Inspector) + continuous monitoring
  4. What’s your risk profile?
    • Low-risk (environmental sensors): Free tools + manual processes
    • Medium-risk (smart building): Commercial scanners + annual pentest
    • High-risk (industrial control): Quarterly pentest + red team exercises
    • Safety-critical (medical, automotive): Continuous automated testing + monthly pentest + bug bounty
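The first branch of the decision tree maps directly to code. The tier names are illustrative labels, not product categories.

```python
def tool_tier(device_count):
    """Pick a tool tier from deployment scale (thresholds from the decision tree)."""
    if device_count < 100:
        return "basic"         # manual tools: spreadsheets, whiteboards, free scanners
    if device_count < 10_000:
        return "intermediate"  # scripted automation: custom dashboards, scheduled scans
    if device_count < 1_000_000:
        return "advanced"      # commercial tools: Nessus, SonarQube, managed SIEM
    return "enterprise"        # enterprise platforms: Tenable, Splunk ES
```

The staffing, compliance, and risk-profile questions would layer further conditions on top; this sketch covers only the device-count question.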

Best For:

  • Basic Tools: Startups, prototypes, low-risk IoT, learning environments
  • Intermediate Tools: Commercial IoT products, 100-10,000 devices, small security teams
  • Advanced Tools: Large-scale IoT (>10K devices), industrial IoT, security teams of 5+
  • Enterprise Tools: Critical infrastructure, medical devices, automotive, financial IoT, >1M devices

The “Good Enough” Baseline (Works for 80% of IoT Companies):

  • Nmap (network scanning) - Free
  • Nessus Essentials (vulnerability scanning, 16 IPs) - Free
  • OWASP ZAP (web app testing) - Free
  • Binwalk + FAT (firmware analysis) - Free
  • SonarQube Community (code analysis) - Free
  • Excel (compliance checklist) - Free
  • Annual pentest (3rd-party validation) - €10K

Total: €10K/year (just the pentest) + free tools. This baseline catches 80% of vulnerabilities and costs about €10 per device per year for a 1,000-device deployment, falling further at larger scale.

Common Mistake: Using Risk Calculators to Justify Ignoring High-Risk Issues

The Mistake: A team uses a DREAD risk calculator and discovers a vulnerability with a calculated score of 6.5/10 (Medium-High). Because it’s not 9.0+ (Critical), they deprioritize it, arguing “the tool says it’s only medium risk, so we’ll fix it next quarter.” Meanwhile, the vulnerability (e.g., missing authentication on firmware update endpoint) gets exploited within 2 weeks.

Why This Fails:

  1. Risk Scores Are Estimates, Not Certainties: DREAD scoring involves subjective judgments. One engineer rates Exploitability as 5/10 (requires network access), another rates it 8/10 (automated tools available). The score range is 5.0-8.0 depending on who calculates it. Using a single numeric score to defer mitigation ignores this uncertainty.

  2. Attackers Don’t Follow Your Risk Model: The calculator says “Discoverability: 4/10” because the vulnerability isn’t indexed by Shodan. But a security researcher publishes a blog post titled “How to Hack [Your Product] in 5 Minutes.” Overnight, Discoverability jumps from 4/10 to 10/10. Your 3-month-old risk assessment is now dangerously outdated.

  3. Low-Probability, High-Impact Events Still Happen: A vulnerability with 10% probability (Exploitability: 3/10) and catastrophic impact (Damage: 10/10) might score 6.0/10 average. The team defers it. Three months later, the 10% event occurs. A single successful exploit causes €1M in damages. The “low probability” didn’t prevent the incident - it just made the team complacent.

  4. Risk Calculators Don’t Account for Attacker Motivation: The calculator assumes rational attackers targeting high-value assets. But what if you’re targeted by a disgruntled ex-employee (Insider threat) or a researcher trying to make a name for themselves (Proof-of-concept exploit for a conference talk)? Motivation changes the risk profile completely. The calculator doesn’t model this.

  5. “Accept Risk” Becomes “Ignore Risk”: The team marks a 5.0/10 vulnerability as “Risk Accepted” based on calculator output. Six months later, no one remembers the accepted risk. It’s not tracked, not re-assessed, and not revisited when the threat landscape changes (e.g., a new automated exploit tool is released). “Accept risk” becomes permanent neglect.

Real-World Example: Ubiquiti UniFi SSRF Vulnerability (CVE-2020-8124)

In 2020, a Server-Side Request Forgery (SSRF) vulnerability was discovered in Ubiquiti’s UniFi network management software (widely used for enterprise Wi-Fi deployments).

Initial Risk Assessment (Hypothetical DREAD Scoring):

  • Damage: 6/10 (Could access internal network resources, but not full compromise)
  • Reproducibility: 9/10 (Works reliably)
  • Exploitability: 5/10 (Requires network access to UniFi controller)
  • Affected Users: 4/10 (Only affects deployments with internet-exposed controllers)
  • Discoverability: 4/10 (Requires source code analysis or deep packet inspection)
  • Average DREAD: (6+9+5+4+4)/5 = 5.6/10 → Medium Risk

What Happened:

  • A security researcher published a detailed exploit on GitHub with working proof-of-concept code
  • Discoverability jumped from 4/10 to 10/10 (publicly documented)
  • Exploitability jumped from 5/10 to 9/10 (exploit code available)
  • New DREAD: (6+9+9+4+10)/5 = 7.6/10 → High Risk

Result: Within weeks, automated scans began exploiting UniFi controllers globally. Attackers accessed internal networks, pivoted to other systems, exfiltrated data. Organizations that deprioritized the fix based on the initial 5.6/10 score suffered breaches. The risk model was correct at time T, but obsolete at time T+30 days.
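Re-running the DREAD average with the post-publication factor values makes the shift concrete:

```python
def dread(d, r, e, a, disc):
    """Average of the five DREAD factors (each 0-10)."""
    return (d + r + e + a + disc) / 5

# Before: exploit known only from source analysis (time T).
before = dread(6, 9, 5, 4, 4)
# After: proof-of-concept published; Exploitability and Discoverability jump.
after = dread(6, 9, 9, 4, 10)
```

Two factor changes moved the score across a severity band, which is exactly why a score computed at time T cannot be treated as valid indefinitely.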

The Correct Approach (Risk Calculators as Input, Not Decision):

Rule 1: Risk Scores Guide, They Don’t Decide

  • Calculator output is one input to the decision process, not the sole decision
  • Combine DREAD score with:
    • Regulatory requirements (ETSI Provision 1 = mandatory regardless of DREAD)
    • Business context (investor demo next week = fix cosmetic issues too)
    • Threat intelligence (zero-day just published = DREAD score is now stale)
    • Qualitative factors (team gut feeling: “this feels exploitable”)

Rule 2: Continuous Re-Assessment

  • Re-calculate risk scores when:
    • New exploit tool published (Exploitability increases)
    • Vulnerability appears in CISA KEV catalog (Known Exploited Vulnerabilities)
    • Similar vulnerability exploited in related products (increases probability)
    • Deployment scale changes (100 devices → 10,000 devices = Affected Users increases)
  • Cadence: Review “Accepted Risk” vulnerabilities quarterly (not never)

Rule 3: Risk Acceptance Requires Formal Process

  • Don’t just defer low-score vulnerabilities - formally document risk acceptance:
    • Who accepted the risk: CISO or security lead (named accountability)
    • Why accepted: “Expected loss €2K vs mitigation cost €10K” (quantitative justification)
    • Compensating controls: “Web Application Firewall blocks SSRF attacks” (partial mitigation)
    • Re-assessment date: “Revisit Q3 2025 or if exploit published” (not permanent)
    • Approval: Signed by executive (legal protection)

Rule 4: Treat “Medium” Risk as “High” for Certain Contexts

  • Safety-critical systems (medical, automotive): Medium (5.0-6.9) → treat as High (fix urgently)
  • Regulated industries (finance, healthcare): Compliance failure → treat as Critical regardless of DREAD
  • High-value targets (defense, critical infrastructure): Assume nation-state attackers → all risks elevated

Rule 5: Override the Calculator When Necessary

  • Example: Calculator says 5.8/10 (Medium) for missing rate limiting on login endpoint
    • Team Lead: “This is used by 50,000 customers. Credential stuffing attacks are automated and widespread. DREAD doesn’t capture the real-world threat. Override to High (7.5/10) and fix this sprint.”
    • Reason: Calculator uses generic assumptions. Domain knowledge reveals higher real-world risk.

Example of Proper Risk Acceptance Documentation:

  • Vulnerability ID: IOT-2024-042
  • DREAD Score: 5.2/10 (Medium)
  • Mitigation Cost: EUR 12,000 for firmware signing
  • Expected Loss: EUR 2,000 because exploitation requires physical access and the probability is low
  • Decision: ACCEPT RISK
  • Justification: Expected loss is lower than mitigation cost, and the devices stay in locked facilities
  • Compensating Controls: Tamper-evident seals (EUR 600) plus camera-based physical access monitoring
  • Re-Assessment Date: Q1 2025 or immediately if a physical attack is observed
  • Approver: Jane Doe, CISO (signed 2024-12-15)
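A risk-acceptance record like the one above is easy to keep machine-readable and auditable. This dataclass sketch uses illustrative field names, not a standardized schema.

```python
from dataclasses import dataclass

@dataclass
class RiskAcceptance:
    """One formally accepted risk, with named accountability and a re-assessment trigger."""
    vuln_id: str
    dread_score: float
    mitigation_cost_eur: int
    expected_loss_eur: int
    decision: str
    justification: str
    compensating_controls: str
    reassess_by: str
    approver: str

record = RiskAcceptance(
    vuln_id="IOT-2024-042",
    dread_score=5.2,
    mitigation_cost_eur=12_000,
    expected_loss_eur=2_000,
    decision="ACCEPT",
    justification="Expected loss below mitigation cost; devices kept in locked facilities",
    compensating_controls="Tamper-evident seals (EUR 600) + camera-based access monitoring",
    reassess_by="Q1 2025 or immediately on observed physical attack",
    approver="Jane Doe, CISO (signed 2024-12-15)",
)
```

Keeping the record structured means a quarterly job can list every acceptance whose `reassess_by` date has passed, preventing "accept risk" from quietly becoming "ignore risk".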

Contrast with Wrong Approach:

  • Vulnerability ID: IOT-2024-042
  • DREAD Score: 5.2/10 (Medium)
  • Decision: DEFER
  • Justification: “Medium risk, will fix next quarter”

What’s Missing:

  • No expected loss calculation
  • No named approver
  • No compensating controls
  • No re-assessment trigger
  • No formal acceptance - just indefinite delay

Six months later: No one remembers why this was deferred. It’s not tracked. A physical attack occurs (attacker tailgates into facility). Device is compromised. Company is liable because they had no documented risk acceptance or compensating controls.

The “Override the Calculator” Checklist:

Before accepting a medium-risk vulnerability, ask:

  1. ☑ Is this vulnerability in a safety-critical component? (Yes → override to High)
  2. ☑ Is there a publicly available exploit? (Yes → override to Critical)
  3. ☑ Does this violate compliance requirements? (Yes → override to Critical)
  4. ☑ Are we a high-value target (critical infrastructure, defense, finance)? (Yes → override to High)
  5. ☑ Has a similar vulnerability been exploited in related products? (Yes → increase Exploitability score)

If any checkbox is ticked, don’t blindly follow the calculator - escalate the risk manually.
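The checklist can be encoded as a severity-override function; the trigger names are illustrative.

```python
def effective_severity(calculated, *, safety_critical=False,
                       public_exploit=False, compliance_violation=False,
                       high_value_target=False):
    """Escalate the calculator's severity when an override trigger applies."""
    if public_exploit or compliance_violation:
        return "Critical"                       # checklist items 2 and 3
    if safety_critical or high_value_target:    # checklist items 1 and 4
        return "High" if calculated in ("Low", "Medium") else calculated
    return calculated
```

Item 5 (a similar vulnerability exploited elsewhere) adjusts the Exploitability input rather than the output severity, so it belongs upstream in the DREAD scoring, not in this function.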

The Lesson: Risk calculators are useful decision support tools, not decision replacement tools. They provide a structured framework for risk assessment, but human judgment, threat intelligence, and business context must override the calculator when necessary. Using a 5.6/10 DREAD score to justify ignoring a vulnerability that later causes a breach is negligent. The calculator didn’t fail - the decision-making process failed by treating the calculator as infallible.

DREAD quantifies threat severity by averaging five independent risk factors, each scored 0-10.

\[\text{DREAD Score} = \frac{D + R + E + A + D_{\text{discover}}}{5}\]

where:

  • \(D\) = Damage potential (0=negligible, 10=catastrophic)
  • \(R\) = Reproducibility (0=nearly impossible, 10=trivial)
  • \(E\) = Exploitability (0=PhD cryptographer, 10=script kiddie)
  • \(A\) = Affected users (0=single device, 10=entire fleet)
  • \(D_{\text{discover}}\) = Discoverability (0=requires source audit, 10=Shodan visible)

Worked Calculation: Brute Force Attack on Smart Door Lock

Given realistic IoT scenario with 10,000 deployed smart locks:

Step 1: Assess damage potential - Unauthorized physical access = 8/10 (property theft, privacy violation)

Step 2: Assess reproducibility - Attack works every time on unpatched devices = 9/10

Step 3: Assess exploitability - Automated tools available (Hydra, Medusa) = 8/10

Step 4: Assess affected users - 10,000 locks × 40% vulnerable = 4,000 affected = 7/10

Step 5: Assess discoverability - Shodan query reveals exposed devices = 10/10

Calculate DREAD: \[\text{Score} = \frac{8 + 9 + 8 + 7 + 10}{5} = \frac{42}{5} = 8.4\]

Result: 8.4/10 = CRITICAL priority (threshold: >8.0 requires immediate remediation).

Step 6: Calculate risk reduction after mitigation (account lockout after 5 failed attempts) \[R_{\text{new}} = 3/10, \quad E_{\text{new}} = 4/10\] \[\text{Score}_{\text{residual}} = \frac{8 + 3 + 4 + 7 + 10}{5} = 6.4\]

Risk reduction: \((8.4 - 6.4)/8.4 = 23.8\%\) (still HIGH, needs additional controls)
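The worked calculation, including the residual score after adding account lockout, in code:

```python
def dread(d, r, e, a, disc):
    """Average of the five DREAD factors (each 0-10)."""
    return (d + r + e + a + disc) / 5

# Steps 1-5: brute force on the smart door lock.
baseline = dread(8, 9, 8, 7, 10)   # CRITICAL (> 8.0 threshold)

# Step 6: lockout after 5 failed attempts lowers R and E only.
residual = dread(8, 3, 4, 7, 10)

reduction = (baseline - residual) / baseline
```

Note that Damage, Affected users, and Discoverability are unchanged by the lockout, which is why the residual score stays HIGH and further controls are needed.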

In practice: DREAD converts subjective threat assessment into comparable numeric scores, enabling data-driven prioritization when budget constraints force choosing between fixing 100 identified vulnerabilities.

42.4 What’s Next

| If you want to… | Read this |
|---|---|
| Apply threat modelling tools to the STRIDE framework | STRIDE Framework |
| Use protocol analysis tools in hands-on security labs | IoT Security Hands-On Labs |
| Apply tools to vulnerability and attack analysis | Threats Attacks and Vulnerabilities |
| Use risk calculators in security assessments | Threats Assessments |
| Return to the security module overview | IoT Security Fundamentals |

Common Pitfalls

Running a tool and accepting its output without understanding what it is testing and why provides false assurance. Understand the methodology behind each tool before trusting its results in security decisions.

Network scanners, fuzzers, and penetration testing tools can disrupt IoT device communications and trigger emergency responses on production systems. Always obtain explicit written authorisation before running any security tool on a production network.

Automated tools produce false positives (flagging secure configurations as vulnerabilities) and false negatives (missing vulnerabilities requiring manual discovery). Treat tool output as a starting point for manual validation, not a final assessment.

Vulnerability scanners rely on databases of known CVEs and signatures. Running an outdated scanner will miss vulnerabilities discovered since the last database update. Update all tool databases immediately before each assessment.