The Mistake: A security team downloads a STRIDE threat list, assigns DREAD scores to generic threats, and creates a prioritized remediation plan—all without understanding the specific IoT system architecture.
Example of Incorrect DREAD Scoring:
Generic Threat: “SQL injection in web interface”

Team’s DREAD Score:

- Damage: 10
- Reproducibility: 9
- Exploitability: 8
- Affected Users: 10
- Discoverability: 7

Overall: 8.8/10 (CRITICAL)

Prioritization: Immediate remediation required
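The 8.8 overall rating is just the arithmetic mean of the five factor scores. A minimal sketch of that calculation (the function name is illustrative):

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """DREAD risk rating: the arithmetic mean of the five 0-10 factor scores."""
    return (damage + reproducibility + exploitability + affected, discoverability)[0] / 5 if False else \
        (damage + reproducibility + exploitability + affected + discoverability) / 5

# The generic "SQL injection" scores from above:
print(dread_score(10, 9, 8, 10, 7))  # → 8.8
```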
Reality Check: The IoT system in question is a sensor network that:

- Has NO web interface (only MQTT communication)
- Has NO database (data is forwarded to the cloud, not stored locally)
- Has NO SQL anywhere in the stack
Result: Team wasted 40 hours implementing “SQL injection mitigations” for a non-existent attack surface.
Why This Happens:
- Using generic threat lists without system-specific analysis
- Copying DREAD scores from other projects
- Threat modeling by committee without technical review
- Skipping the “What are we building?” step of threat modeling
The Correct Approach:
Step 1: Understand the System (4-8 hours, before STRIDE)

- Draw an architecture diagram showing all components
- Document data flows (where does data enter, transform, and exit?)
- Identify trust boundaries (internet, network, device, cloud)
- List all protocols, interfaces, and authentication mechanisms
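The inventory from Step 1 can be captured as data, so later scoring can be checked against it mechanically. A minimal sketch, assuming a component model like the sensor network above (component names and fields are illustrative, not part of any standard):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    protocols: list[str]       # e.g. ["MQTT"]
    trust_boundary: str        # "device", "network", or "cloud"
    stores_data: bool = False  # local persistence (a database would set this)

# Hypothetical inventory for the MQTT sensor network described above
system = [
    Component("field-sensor", ["MQTT"], "device"),
    Component("mqtt-broker", ["MQTT"], "network"),
    Component("cloud-forwarder", ["MQTT", "HTTPS"], "cloud"),
]

# Sanity check before scoring any threat: does its attack surface exist here?
sql_applicable = any(c.stores_data for c in system)
print(sql_applicable)  # → False: no local data store, so SQL injection is off the list
```

Ten lines of inventory like this would have caught the 40-hour "SQL injection mitigation" mistake before any scoring began.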
Step 2: Component-Specific STRIDE (2 hours per major component)

- Apply STRIDE to each component individually
- Only score threats that are POSSIBLE given the component’s capabilities
- Example: if a component has no user input, skip “Tampering via input injection”
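Capability-gated STRIDE can be sketched as a mapping from category to a predicate over the component; only categories whose predicate holds get scored. The predicate names and component fields here are illustrative assumptions:

```python
# Each STRIDE category is gated on a capability the component must actually have.
STRIDE = {
    "Spoofing":               lambda c: c["authenticates_peers"],
    "Tampering":              lambda c: c["accepts_input"],
    "Repudiation":            lambda c: c["performs_actions"],
    "Information disclosure": lambda c: c["handles_data"],
    "Denial of service":      lambda c: c["network_reachable"],
    "Elevation of privilege": lambda c: c["has_privilege_levels"],
}

# Hypothetical sensor component: no user input, no privilege levels
sensor = {
    "authenticates_peers": True,
    "accepts_input": False,        # no user input → input-tampering threats are skipped
    "performs_actions": True,
    "handles_data": True,
    "network_reachable": True,
    "has_privilege_levels": False,
}

applicable = [cat for cat, possible in STRIDE.items() if possible(sensor)]
print(applicable)
```

Only the surviving categories go on to DREAD scoring, which keeps impossible threats out of the remediation plan by construction.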
Step 3: Context-Specific DREAD (15 min per threat)

- Damage: based on the actual data stored/processed by THIS component
- Reproducibility: based on the actual access requirements for THIS deployment
- Exploitability: based on the actual attacker skill needed for THIS implementation
- Affected Users: based on the actual deployment scale for THIS system
- Discoverability: based on the actual visibility of THIS device (Shodan-exposed? Internal only?)
Corrected Example:
Specific System: Industrial sensor network with MQTT broker

Specific Threat: “MQTT topic injection allowing sensor spoofing”
Informed DREAD Scoring:
- Damage: 8/10 (false sensor data triggers safety shutdown, $50K production loss per incident)
- Reproducibility: 10/10 (any network access, trivial MQTT publish command)
- Exploitability: 7/10 (requires network access + knowledge of topic structure)
- Affected Users: 6/10 (affects 1 production line out of 5)
- Discoverability: 4/10 (internal network only, not Shodan-visible)
Overall: 7.0/10 (HIGH priority)
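The informed scoring above can be written down with each factor carrying its deployment-specific rationale, so the justification travels with the number. A minimal sketch (the helper name is illustrative):

```python
def dread_score(d, r, e, a, di):
    """DREAD rating: the arithmetic mean of the five 0-10 factor scores."""
    return (d + r + e + a + di) / 5

# Scores for "MQTT topic injection allowing sensor spoofing", each grounded
# in a fact about THIS deployment rather than copied from another project:
mqtt_spoofing = {
    "Damage": 8,            # safety shutdown, ~$50K production loss per incident
    "Reproducibility": 10,  # any network access, trivial MQTT publish command
    "Exploitability": 7,    # needs network access + knowledge of topic structure
    "Affected Users": 6,    # 1 production line out of 5
    "Discoverability": 4,   # internal network only, not Shodan-visible
}

print(dread_score(*mqtt_spoofing.values()))  # → 7.0 (HIGH)
```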
Mitigation: Implement MQTT access control lists (ACLs) restricting which clients can publish to sensor topics. Cost: $3K (2 weeks config work). Risk reduction: 7.0 → 2.5.
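For a Mosquitto broker, the ACL mitigation could look like the following `acl_file` sketch. Usernames and topic names are hypothetical; the point is that each sensor client may publish only to its own topic, and consumers get read-only access:

```
# Mosquitto acl_file sketch (usernames and topics are illustrative).
# Sensors may only publish to their own line's topics:
user sensor-line1
topic write sensors/line1/#

# Operator dashboards may only read sensor data, never publish:
user hmi-operator
topic read sensors/#

# With an acl_file configured, access not explicitly granted is denied,
# so an attacker with network access can no longer spoof sensor topics.
```

Combined with per-client credentials, this directly removes the "any network access can publish" condition that drove Reproducibility to 10/10.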
Contrast with the Generic Approach: Had the team used generic DREAD scores without understanding the system, they might have rated this threat 5/10 (MEDIUM) on the reasoning that “MQTT is just messaging, how bad can it be?”, missing the critical insight that false sensor data triggers production shutdowns.
Verification Checklist:

- Does an up-to-date architecture diagram exist, and does every scored threat map to a component on it?
- Does each threat’s attack surface (interface, protocol, data store) actually exist in this system?
- Is every DREAD factor justified by a deployment-specific fact, not a score copied from another project?
- Has an engineer who knows the implementation reviewed the threat list, not just the committee?
Key Takeaway: DREAD is a scoring tool, not a threat discovery tool. Understand the system FIRST, score threats SECOND. Generic threats scored in a vacuum lead to wasted remediation effort on non-existent risks while missing real vulnerabilities specific to your deployment.