The Mistake: A team uses a DREAD risk calculator and discovers a vulnerability with a calculated score of 6.5/10 (Medium-High). Because it’s not 9.0+ (Critical), they deprioritize it, arguing “the tool says it’s only medium risk, so we’ll fix it next quarter.” Meanwhile, the vulnerability (e.g., missing authentication on firmware update endpoint) gets exploited within 2 weeks.
Why This Fails:
Risk Scores Are Estimates, Not Certainties: DREAD scoring involves subjective judgments. One engineer rates Exploitability as 5/10 (requires network access); another rates it 8/10 (automated tools available). That single factor spans 5.0-8.0 depending on who scores it, which shifts the overall average. Using a single numeric score to defer mitigation ignores this uncertainty.
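The uncertainty can be made explicit by scoring the same vulnerability once per rater and reporting a range instead of a point value. A minimal sketch; the Exploitability scores (5 vs 8) come from the scenario above, while the other factor scores are assumed for illustration:

```python
# Two engineers score the same vulnerability; only Exploitability differs
# (5/10 "requires network access" vs 8/10 "automated tools available").
# All non-Exploitability scores below are assumed for illustration.

def dread_average(scores):
    """Plain DREAD: arithmetic mean of the five factor scores."""
    return sum(scores.values()) / len(scores)

engineer_a = {"Damage": 7, "Reproducibility": 6, "Exploitability": 5,
              "AffectedUsers": 6, "Discoverability": 4}
engineer_b = {**engineer_a, "Exploitability": 8}  # same, but "tools exist"

low, high = dread_average(engineer_a), dread_average(engineer_b)
print(f"DREAD range: {low:.1f}-{high:.1f}")  # a range, not one number
```

Reporting "5.6-6.2 depending on rater" instead of a single 5.6 makes the deferral decision visibly uncertain rather than falsely precise.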
Attackers Don’t Follow Your Risk Model: The calculator says “Discoverability: 4/10” because the vulnerability isn’t indexed by Shodan. But a security researcher publishes a blog post titled “How to Hack [Your Product] in 5 Minutes.” Overnight, Discoverability jumps from 4/10 to 10/10. Your 3-month-old risk assessment is now dangerously outdated.
Low-Probability, High-Impact Events Still Happen: A vulnerability with a 10% chance of exploitation (Exploitability: 3/10) and catastrophic impact (Damage: 10/10) might average around 6.0/10, depending on the remaining factors. The team defers it. Three months later, the 10% event occurs: a single successful exploit causes €1M in damages. The "low probability" didn't prevent the incident - it just made the team complacent.
Risk Calculators Don’t Account for Attacker Motivation: The calculator assumes rational attackers targeting high-value assets. But what if you’re targeted by a disgruntled ex-employee (Insider threat) or a researcher trying to make a name for themselves (Proof-of-concept exploit for a conference talk)? Motivation changes the risk profile completely. The calculator doesn’t model this.
“Accept Risk” Becomes “Ignore Risk”: The team marks a 5.0/10 vulnerability as “Risk Accepted” based on calculator output. Six months later, no one remembers the accepted risk. It’s not tracked, not re-assessed, and not revisited when the threat landscape changes (e.g., a new automated exploit tool is released). “Accept risk” becomes permanent neglect.
Real-World Example: Ubiquiti UniFi SSRF Vulnerability (CVE-2020-8124)
In 2020, a Server-Side Request Forgery (SSRF) vulnerability was discovered in Ubiquiti’s UniFi network management software (widely used for enterprise Wi-Fi deployments).
Initial Risk Assessment (Hypothetical DREAD Scoring):
- Damage: 6/10 (Could access internal network resources, but not full compromise)
- Reproducibility: 9/10 (Works reliably)
- Exploitability: 5/10 (Requires network access to UniFi controller)
- Affected Users: 4/10 (Only affects deployments with internet-exposed controllers)
- Discoverability: 4/10 (Requires source code analysis or deep packet inspection)
- Average DREAD: (6+9+5+4+4)/5 = 5.6/10 → Medium Risk
What Happened:
- A security researcher published a detailed exploit on GitHub with working proof-of-concept code
- Discoverability jumped from 4/10 to 10/10 (publicly documented)
- Exploitability jumped from 5/10 to 9/10 (exploit code available)
- New DREAD: (6+9+9+4+10)/5 = 7.6/10 → High Risk
Result: Within weeks, automated scans began exploiting UniFi controllers globally. Attackers accessed internal networks, pivoted to other systems, exfiltrated data. Organizations that deprioritized the fix based on the initial 5.6/10 score suffered breaches. The risk model was correct at time T, but obsolete at time T+30 days.
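The score drift in this example is easy to reproduce: recompute the same mean after the two factor changes, using the figures from the assessment above.

```python
# UniFi example from the text: the same DREAD mean, recomputed after the
# public exploit raised Exploitability (5 -> 9) and Discoverability (4 -> 10).

initial = {"Damage": 6, "Reproducibility": 9, "Exploitability": 5,
           "AffectedUsers": 4, "Discoverability": 4}
after_disclosure = {**initial, "Exploitability": 9, "Discoverability": 10}

dread = lambda s: sum(s.values()) / len(s)
print(dread(initial))           # 5.6 -> Medium
print(dread(after_disclosure))  # 7.6 -> High
```

Nothing about the vulnerability itself changed between the two runs; only the world around it did, which is why a stored score without a recompute trigger is a liability.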
The Correct Approach (Risk Calculators as Input, Not Decision):
Rule 1: Risk Scores Guide, They Don’t Decide
- Calculator output is one input to the decision process, not the sole decision
- Combine DREAD score with:
- Regulatory requirements (ETSI Provision 1 = mandatory regardless of DREAD)
- Business context (investor demo next week = fix cosmetic issues too)
- Threat intelligence (zero-day just published = DREAD score is now stale)
- Qualitative factors (team gut feeling: “this feels exploitable”)
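One way to encode "the score guides, context decides" is a triage function in which the non-score inputs can short-circuit the calculator entirely. A hypothetical sketch; the names (`RiskInputs`, `decide_priority`) and the 7.0 threshold are illustrative, not a real API or standard cutoff:

```python
from dataclasses import dataclass

@dataclass
class RiskInputs:
    dread_score: float            # calculator output: one input, not the verdict
    regulatory_mandate: bool      # e.g. a mandatory ETSI provision applies
    exploit_published: bool       # threat intelligence: the score is now stale
    team_flags_exploitable: bool  # qualitative gut check

def decide_priority(r: RiskInputs) -> str:
    """DREAD is consulted last; mandates and threat intel override it."""
    if r.regulatory_mandate:
        return "fix now: mandatory regardless of DREAD"
    if r.exploit_published:
        return "fix now: score is stale"
    if r.dread_score >= 7.0 or r.team_flags_exploitable:
        return "fix this sprint"
    return "formal risk-acceptance process"  # never a silent deferral
```

For example, `decide_priority(RiskInputs(5.6, False, True, False))` escalates despite the Medium score, because a published exploit invalidates the old assessment.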
Rule 2: Continuous Re-Assessment
- Re-calculate risk scores when:
- New exploit tool published (Exploitability increases)
- Vulnerability appears in CISA KEV catalog (Known Exploited Vulnerabilities)
- Similar vulnerability exploited in related products (increases probability)
- Deployment scale changes (100 devices → 10,000 devices = Affected Users increases)
- Cadence: Review “Accepted Risk” vulnerabilities quarterly (not never)
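The triggers above can be turned into a single predicate that review tooling evaluates per accepted risk. A sketch under stated assumptions: the trigger flags are supplied by the caller (e.g. from a KEV feed check), and the 90-day cadence stands in for "quarterly":

```python
from datetime import date

def needs_reassessment(accepted_on, today, *,
                       new_exploit_tool=False,
                       listed_in_kev=False,
                       similar_product_exploited=False,
                       deployment_scale_jump=False):
    """True if an accepted risk is due for review: quarterly, or on any trigger."""
    quarterly_due = (today - accepted_on).days >= 90
    return (quarterly_due or new_exploit_tool or listed_in_kev
            or similar_product_exploited or deployment_scale_jump)
```

Running this over the accepted-risk register on every threat-intel update is what keeps "accepted" from quietly becoming "forgotten".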
Rule 3: Risk Acceptance Requires Formal Process
- Don’t just defer low-score vulnerabilities - formally document risk acceptance:
- Who accepted the risk: CISO or security lead (named accountability)
- Why accepted: “Expected loss €2K vs mitigation cost €10K” (quantitative justification)
- Compensating controls: “Web Application Firewall blocks SSRF attacks” (partial mitigation)
- Re-assessment date: “Revisit Q3 2025 or if exploit published” (not permanent)
- Approval: Signed by executive (legal protection)
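The required fields map naturally onto a record type, which makes an acceptance without an approver or a re-assessment date impossible to construct. A sketch; `RiskAcceptance` is an illustrative name, and the example values echo the documentation sample later in this section:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAcceptance:
    vulnerability_id: str
    dread_score: float
    accepted_by: str              # named accountability (CISO or security lead)
    justification: str            # quantitative: expected loss vs mitigation cost
    compensating_controls: list   # partial mitigations actually in place
    reassess_by: date             # acceptance is never permanent
    reassess_triggers: list = field(default_factory=list)

    def review_due(self, today):
        return today >= self.reassess_by

record = RiskAcceptance(
    "IOT-2024-042", 5.2, "Jane Doe, CISO",
    "Expected loss EUR 2,000 < mitigation cost EUR 12,000; devices in locked facilities",
    ["tamper-evident seals", "camera-based access monitoring"],
    reassess_by=date(2025, 3, 31),
    reassess_triggers=["physical attack observed", "exploit published"])
```

Because every field is mandatory except the trigger list, "we accepted it" always comes with who, why, what compensates, and when it gets looked at again.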
Rule 4: Treat “Medium” Risk as “High” for Certain Contexts
- Safety-critical systems (medical, automotive): Medium (5.0-6.9) → treat as High (fix urgently)
- Regulated industries (finance, healthcare): Compliance failure → treat as Critical regardless of DREAD
- High-value targets (defense, critical infrastructure): Assume nation-state attackers → all risks elevated
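These context rules compose into a small severity-adjustment function. A sketch; the band thresholds (9/7/5) are assumed for illustration, as DREAD band boundaries vary by organization:

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

def effective_severity(dread, *, safety_critical=False,
                       compliance_failure=False, high_value_target=False):
    """Map a DREAD score to a band, then apply the context overrides above."""
    idx = 3 if dread >= 9 else 2 if dread >= 7 else 1 if dread >= 5 else 0
    if compliance_failure:
        return "Critical"          # regulated industries: regardless of DREAD
    if safety_critical and idx == 1:
        idx = 2                    # Medium treated as High: fix urgently
    if high_value_target:
        idx = min(idx + 1, 3)      # assume nation-state attackers: elevate
    return LEVELS[idx]
```

So a 5.6 scores "Medium" in isolation but "High" in a medical device, and "Critical" wherever it constitutes a compliance failure.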
Rule 5: Override the Calculator When Necessary
- Example: Calculator says 5.8/10 (Medium) for missing rate limiting on login endpoint
- Team Lead: “This is used by 50,000 customers. Credential stuffing attacks are automated and widespread. DREAD doesn’t capture the real-world threat. Override to High (7.5/10) and fix this sprint.”
- Reason: Calculator uses generic assumptions. Domain knowledge reveals higher real-world risk.
Example of Proper Risk Acceptance Documentation:
- Vulnerability ID: IOT-2024-042
- DREAD Score: 5.2/10 (Medium)
- Mitigation Cost: EUR 12,000 for firmware signing
- Expected Loss: EUR 2,000 (exploitation requires physical access; probability is low)
- Decision: ACCEPT RISK
- Justification: Expected loss is lower than mitigation cost, and the devices stay in locked facilities
- Compensating Controls: Tamper-evident seals (EUR 600) plus camera-based physical access monitoring
- Re-Assessment Date: Q1 2025, or immediately if a physical attack is observed
- Approver: Jane Doe, CISO (signed 2024-12-15)
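The "Expected Loss" line is an annualized estimate. A sketch of the arithmetic behind it; only the EUR 2,000 and EUR 12,000 figures appear in the record, so the probability and per-incident damage below are assumed for illustration:

```python
annual_attack_probability = 0.02   # assumed: physical-access attack is rare
damage_if_exploited = 100_000      # EUR, assumed single-incident impact

expected_loss = annual_attack_probability * damage_if_exploited   # EUR 2,000
mitigation_cost = 12_000           # EUR, firmware signing (from the record)

# Acceptance is only defensible alongside compensating controls,
# a named approver, and a re-assessment date, as documented above.
accept = expected_loss < mitigation_cost
print(expected_loss, accept)
```

Writing the decomposition down matters: if the deployment moves out of locked facilities, the probability assumption, and with it the whole decision, visibly breaks.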
Contrast with Wrong Approach:
- Vulnerability ID: IOT-2024-042
- DREAD Score: 5.2/10 (Medium)
- Decision: DEFER
- Justification: "Medium risk, will fix next quarter"
What’s Missing:
- No expected loss calculation
- No named approver
- No compensating controls
- No re-assessment trigger
- No formal acceptance - just indefinite delay
Six months later, no one remembers why this was deferred. It isn't tracked. A physical attack occurs (an attacker tailgates into the facility) and the device is compromised. The company is liable because it had no documented risk acceptance and no compensating controls.
The “Override the Calculator” Checklist:
Before accepting a medium-risk vulnerability, ask:
1. ☑ Is this vulnerability in a safety-critical component? (Yes → override to High)
2. ☑ Is there a publicly available exploit? (Yes → override to Critical)
3. ☑ Does this violate compliance requirements? (Yes → override to Critical)
4. ☑ Are we a high-value target (critical infrastructure, defense, finance)? (Yes → override to High)
5. ☑ Has a similar vulnerability been exploited in related products? (Yes → increase Exploitability score)
If any checkbox is ticked, don’t blindly follow the calculator - escalate the risk manually.
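The checklist can be mechanized so it runs inside triage tooling instead of relying on memory. A sketch; the function name and return strings are illustrative:

```python
def checklist_override(*, safety_critical=False, public_exploit=False,
                       compliance_violation=False, high_value_target=False,
                       similar_product_exploited=False):
    """Return the manual escalation the checklist demands, or None."""
    if public_exploit or compliance_violation:
        return "override to Critical"
    if safety_critical or high_value_target:
        return "override to High"
    if similar_product_exploited:
        return "increase Exploitability score"
    return None  # no box ticked: the calculator score may stand
```

A non-None result means the numeric score is overruled before anyone argues "the tool says it's only medium".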
The Lesson: Risk calculators are useful decision support tools, not decision replacement tools. They provide a structured framework for risk assessment, but human judgment, threat intelligence, and business context must override the calculator when necessary. Using a 5.6/10 DREAD score to justify ignoring a vulnerability that later causes a breach is negligent. The calculator didn’t fail - the decision-making process failed by treating the calculator as infallible.