36  IoT Security Practice Exercises

Learning Objectives

After completing these exercises, you will be able to:

  • Classify IoT threat actors by capability, motivation, and likely attack vectors for a given deployment scenario
  • Apply the STRIDE framework systematically to enumerate threats across all trust boundaries in IoT architectures
  • Design layered defense strategies that prioritize mitigations for the most probable threat actors
  • Justify security investment decisions using quantitative risk assessment and cost-benefit analysis

Key Concepts

  • Security design review exercise: An exercise presenting an IoT architecture diagram and asking the participant to identify security weaknesses, missing controls, and misplaced trust boundaries.
  • STRIDE application exercise: A structured exercise applying each STRIDE category to a specific IoT component or data flow, producing a comprehensive threat list with corresponding mitigations.
  • Incident response tabletop: A scenario-based exercise simulating a security incident (device compromise, data breach, ransomware) to test whether the team can detect, contain, investigate, and recover from the incident using defined procedures.
  • Control mapping exercise: An exercise mapping identified threats to specific security controls from a framework (NIST CSF, IEC 62443), identifying gaps where threats lack adequate mitigations.
  • Risk prioritisation exercise: An exercise rating identified risks by impact and likelihood to produce a prioritised remediation list, practising the trade-off reasoning required in real security programmes.

In 60 Seconds

IoT security exercises — structured application of threat modelling, attack analysis, and control selection to realistic scenarios — build the applied judgment that distinguishes security engineers who can design secure systems from those who can only recite security principles. The most effective exercises present an incomplete or flawed security design and require you to identify what is missing and why it matters.

This hands-on chapter lets you practice IoT security concepts through exercises and scenarios. Think of it like a fire drill – practicing your response to threats in a safe environment prepares you to handle real incidents effectively. Working through these exercises builds the practical skills needed to secure real IoT deployments.

“Attention, team!” Max the Microcontroller blew an imaginary whistle. “Today we run security drills! Just like firefighters practice before a real fire, we practice handling security threats before a real attack hits.”

“The first exercise is threat actor analysis,” Sammy the Sensor said. “You look at a smart building with 500 IoT devices and figure out: Who might attack it? A script kiddie looking for fun? A cybercriminal wanting to steal data? A disgruntled employee who already has access? Each attacker type needs different defenses!”

“Then you apply STRIDE,” Lila the LED continued. “For each component – every sensor, every gateway, every cloud connection – you ask the six STRIDE questions. Can someone spoof its identity? Tamper with its data? Deny their actions? Steal information? Crash the service? Gain extra privileges? It is like a security checklist for every single part of the system.”

“Finally, you design layered defenses,” Bella the Battery wrapped up. “The goal is not to build one giant wall but many smaller walls. If the firewall fails, encryption still protects data. If encryption fails, access controls still limit damage. Practice these exercises carefully – the skills you build here are exactly what real security professionals use every day!”

36.1 Practice Exercises

Apply your knowledge of IoT threats and vulnerabilities with these hands-on exercises.

Concept Relationships

| Concept | Relates To | Nature of Relationship |
| --- | --- | --- |
| Threat Actor Classification | STRIDE Framework | STRIDE enumerates the attack categories that threat actors employ |
| DREAD Scoring | Risk Prioritization | DREAD quantifies threat severity to guide mitigation priority |
| Attack Vectors | Defense in Depth | Each attack vector requires specific defensive controls |
| Vulnerability Assessment | Incident Response | Discovered vulnerabilities inform response procedures |
| Default Credentials | Authentication Controls | Weak defaults are a primary entry point for attackers |
| Network Segmentation | Lateral Movement Prevention | Segmentation limits attacker reach after a breach |

Objective: Learn to identify threat actors, understand their motivations, and design appropriate defenses for different attacker profiles.

Scenario: You’re deploying a smart building system with 500 IoT devices controlling HVAC, lighting, and access control.

Tasks:

  1. Identify potential threat actors for this deployment using the threat actor classification (script kiddies, cybercriminals, hacktivists, insiders, nation-states)
  2. Analyze capabilities and motivations: For each threat actor, determine what they might target and why
  3. Map attack vectors: Identify which attack vectors each threat actor would likely use (network, physical, web/API, firmware, side-channel)
  4. Design layered defenses: Create a defense strategy that addresses the top 3 most likely threat actors

Expected Outcome: A threat assessment document identifying:

  • 3-5 relevant threat actors with capability scores (1-5)
  • Attack scenarios for each actor
  • Prioritized mitigation controls (e.g., “Deploy network segmentation to prevent lateral movement from compromised HVAC controller to access control system”)

Practical Application: Understanding threat actors helps you allocate security budget appropriately—defending against nation-states requires different investments than defending against script kiddies.
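One way to make the capability scores and prioritization concrete is a small ranking model. The sketch below is illustrative only: the actor names, 1-5 scores, and attack vectors are assumptions for the smart-building scenario, not authoritative ratings.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatActor:
    name: str
    capability: int          # 1-5 (script kiddie = 1 ... nation-state = 5)
    likelihood: int          # 1-5 for this specific deployment
    vectors: list = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple heuristic: defend first against actors who are both
        # likely to attack AND capable of doing damage.
        return self.capability * self.likelihood

# Illustrative scores for the 500-device smart building (assumptions)
actors = [
    ThreatActor("Script kiddie", capability=1, likelihood=5, vectors=["network", "web/API"]),
    ThreatActor("Cybercriminal", capability=3, likelihood=4, vectors=["network", "firmware"]),
    ThreatActor("Insider", capability=2, likelihood=3, vectors=["physical", "web/API"]),
    ThreatActor("Nation-state", capability=5, likelihood=1, vectors=["firmware", "side-channel"]),
]

# The exercise asks for a defense strategy covering the top 3 most likely actors
top3 = sorted(actors, key=lambda a: a.priority, reverse=True)[:3]
```

With these example numbers, the cybercriminal (capability 3 × likelihood 4 = 12) outranks the nation-state (5 × 1 = 5), which is exactly the budget-allocation point made above.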

Objective: Apply the STRIDE framework to systematically identify threats in an IoT system.

Scenario: Smart door lock system with: mobile app -> cloud API -> Wi-Fi router -> smart lock device

Tasks:

  1. Draw a data flow diagram showing all components and communication paths
  2. Apply STRIDE to each component:
    • Spoofing: Can attacker impersonate mobile app, cloud server, or lock?
    • Tampering: Can unlock commands, firmware, or stored keys be modified?
    • Repudiation: Can users deny unlocking the door? Is there an audit trail?
    • Information Disclosure: Can unlock codes, Wi-Fi credentials, or user data leak?
    • Denial of Service: Can attacker prevent legitimate users from unlocking?
    • Elevation of Privilege: Can guest user gain admin access?
  3. Document 10+ threats (at least one per STRIDE category, with additional threats in the higher-risk categories)
  4. Prioritize using DREAD (Damage, Reproducibility, Exploitability, Affected users, Discoverability)
  5. Recommend mitigations for the top 5 highest-risk threats

Expected Outcome: Threat model document with:

  • Data flow diagram
  • Threat table with STRIDE category, description, DREAD score, and mitigation
  • Top 5 priority threats with detailed mitigation plans

Example Threat:

  • Category: Tampering (Replay Attack)
  • Description: Attacker captures “unlock” Wi-Fi command and replays it 3 hours later
  • DREAD Score: 8/10 (High damage, easy reproducibility, medium exploitability, high affected users, high discoverability)
  • Mitigation: Add timestamp + nonce to unlock commands; reject commands >30 seconds old

Objective: Conduct a practical vulnerability assessment on an IoT device to identify security weaknesses.

Scenario: You have a smart camera that you want to test for vulnerabilities before deploying 100 units in your facility.

Tasks:

  1. Network scanning: Use nmap to identify open ports and services:

    nmap -sV -p- 192.168.1.100
    • Document all open ports (common IoT: 22-SSH, 23-Telnet, 80-HTTP, 443-HTTPS, 554-RTSP, 8080-Web)
    • Check for unnecessary services that should be disabled
  2. Default credential testing: Try common default passwords:

    • admin/admin, admin/password, root/root, admin/12345
    • Document if any work (CRITICAL vulnerability if yes)
  3. Firmware analysis: Download firmware and analyze:

    binwalk -e firmware.bin
    strings firmware.bin | grep -i "password\|key\|secret"
    • Look for hardcoded credentials, API keys, or encryption keys
    • Check if firmware is encrypted/obfuscated
  4. Web interface testing: Access device web interface:

    • Test for SQL injection: admin' OR '1'='1
    • Test for XSS: <script>alert('XSS')</script>
    • Check if HTTPS is enforced or if HTTP is allowed

Expected Outcome: Vulnerability assessment report with:

  • List of open ports and services (with risk ratings)
  • Default credentials test results
  • Hardcoded secrets found in firmware
  • Web vulnerabilities discovered
  • Risk score (1-10) and remediation recommendations

Safety Note: Only test devices you own. Unauthorized testing is illegal.
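The open-port inventory from step 1 can be turned into the report’s risk ratings programmatically. The service-to-risk mapping below is an illustrative assumption for this exercise, not an official scale.

```python
# Map ports commonly open on IoT devices to coarse risk ratings.
# The ratings are illustrative assumptions for the exercise.
SERVICE_RISK = {
    23: ("Telnet", "CRITICAL"),   # unencrypted remote shell
    22: ("SSH", "MEDIUM"),
    80: ("HTTP", "HIGH"),         # unencrypted web interface
    443: ("HTTPS", "LOW"),
    554: ("RTSP", "MEDIUM"),
    8080: ("Web admin", "HIGH"),
}

def rate_ports(open_ports):
    """Turn an nmap open-port list into findings with risk ratings."""
    findings = []
    for port in open_ports:
        service, risk = SERVICE_RISK.get(port, ("Unknown", "MEDIUM"))
        findings.append({"port": port, "service": service, "risk": risk})
    return findings
```

Feeding in the ports nmap reported (e.g. `rate_ports([23, 443, 8080])`) yields the first section of the assessment report.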

Objective: Practice responding to a real-world IoT security incident using established incident response procedures.

Scenario: Your security monitoring detects unusual activity:

  • Smart sensor #42 (normally sends 10KB/hour) suddenly sends 10MB in 5 minutes
  • Traffic analysis shows sensor is communicating with unknown external IP 185.220.101.x (known botnet C&C server)
  • 50 other sensors are showing similar suspicious patterns
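The detection signals above can be checked programmatically during the Identification step. The baseline and C&C prefix come from the scenario; the 10x alert multiplier is an assumption for illustration.

```python
# Scenario baseline: sensor #42 normally sends ~10 KB/hour.
BASELINE_BYTES_PER_HOUR = 10 * 1024
ALERT_MULTIPLIER = 10                     # assumption: flag traffic >10x baseline
KNOWN_C2_PREFIXES = ("185.220.101.",)     # known botnet C&C range from the scenario

def is_anomalous(bytes_sent: float, window_hours: float) -> bool:
    """Flag traffic far above the sensor's historical baseline."""
    rate = bytes_sent / window_hours
    return rate > BASELINE_BYTES_PER_HOUR * ALERT_MULTIPLIER

def talks_to_c2(dest_ip: str) -> bool:
    """Flag communication with a known command-and-control address."""
    return dest_ip.startswith(KNOWN_C2_PREFIXES)
```

Sensor #42 sending 10 MB in 5 minutes (`is_anomalous(10 * 1024 * 1024, 5 / 60)`) trips the rate check, and a destination of 185.220.101.x trips the C&C check, which together confirm the incident is not a false positive.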

Tasks:

  1. Identification (5 minutes):
    • Confirm the incident is real (not false positive)
    • Classify severity: Low/Medium/High/Critical
    • Identify affected systems and data
  2. Containment (10 minutes):
    • Immediate: Network-isolate all 51 affected sensors (VLAN quarantine or firewall block)
    • Short-term: Disable sensor accounts/credentials to prevent further spread
    • Document all containment actions with timestamps
  3. Eradication (15 minutes):
    • Analyze one compromised sensor: what malware is running? How did it get there?
    • Identify root cause: Default password? Firmware vulnerability? Physical tampering?
    • Develop eradication plan: Firmware reflash? Password reset? Replace hardware?
  4. Recovery (10 minutes):
    • Restore sensors from clean firmware backup
    • Reset all credentials (unique per device)
    • Gradually return sensors to production with enhanced monitoring
  5. Lessons Learned (10 minutes):
    • Document timeline of events
    • Identify security control gaps that allowed compromise
    • Update security policies and implement new controls
    • Calculate cost of incident (downtime, labor, reputation)

Expected Outcome: Incident response report with:

  • Incident timeline (detection -> containment -> eradication -> recovery)
  • Root cause analysis (how did attacker compromise 51 sensors?)
  • Containment actions taken and their effectiveness
  • 5+ recommendations to prevent recurrence

Real-World Learning: This simulates the 2016 Mirai botnet incident. Understanding incident response procedures minimizes damage when (not if) breaches occur.

Objective: Conduct a systematic security audit using the OWASP IoT Top 10 checklist.

Scenario: You’re auditing a smart home hub device before recommending it for enterprise deployment.

Audit Checklist (OWASP IoT Top 10, 2014 edition):

| # | Vulnerability | Test Method | Pass/Fail | Notes |
| --- | --- | --- | --- | --- |
| I1 | Insecure Web Interface | Test for XSS, CSRF, and session management issues | | |
| I2 | Insufficient Authentication/Authorization | Try default credentials; check for MFA support | | |
| I3 | Insecure Network Services | Port scan; check for Telnet and unencrypted services | | |
| I4 | Lack of Transport Encryption | Check whether HTTPS/TLS is enforced or plaintext HTTP is allowed | | |
| I5 | Privacy Concerns | Check what data is collected and where it is stored; review the privacy policy | | |
| I6 | Insecure Cloud Interface | Test API authentication; check for credential exposure | | |
| I7 | Insecure Mobile Interface | Test mobile app authentication and local secret storage | | |
| I8 | Insufficient Security Configurability | Check whether passwords can be changed and complexity is enforced; review factory defaults and UPnP status | | |
| I9 | Insecure Software/Firmware | Check the update mechanism; verify firmware signatures | | |
| I10 | Poor Physical Security | Inspect for JTAG, UART, and other debug ports | | |

Deliverable: Complete audit report with findings, severity ratings, and remediation recommendations.

Objective: Design network segmentation for an IoT deployment using IEC 62443 zone principles.

Scenario: Manufacturing facility with:

  • 200 production sensors (temperature, pressure, flow)
  • 50 industrial cameras
  • 20 PLCs controlling machinery
  • 10 HMI workstations
  • Enterprise network with 500 employees

Tasks:

  1. Define zones based on security level requirements (SL1-SL4)
  2. Design conduits (controlled data flows between zones)
  3. Specify firewall rules for each zone boundary
  4. Plan monitoring for each zone

Expected Output:

  • Zone diagram with security levels
  • Firewall rule matrix
  • Monitoring strategy per zone

36.2 Additional Knowledge Checks

36.3 Knowledge Check

System Architecture: Smart lock with mobile app → cloud API → Wi-Fi router → Bluetooth-enabled smart lock device on front door.

Components:

  1. Mobile App (iOS/Android) - User interface for lock control
  2. Cloud API (AWS Lambda + API Gateway) - Authentication, authorization, command routing
  3. Wi-Fi Router (home network) - Network connectivity
  4. Smart Lock Device (ESP32 + motorized deadbolt) - Physical lock mechanism

Data Flow Diagram:

[User] → [Mobile App] → HTTPS → [Cloud API] → MQTT → [Wi-Fi Router] → BLE → [Smart Lock]

STRIDE Analysis (Systematic Threat Enumeration):

36.3.1 Component 1: Mobile App

| STRIDE | Threat | Description | DREAD Score | Mitigation |
| --- | --- | --- | --- | --- |
| S (Spoofing) | Fake app impersonation | Attacker publishes a look-alike app to the app store, steals user credentials | 7.2 | Code signing, TLS certificate pinning, user education |
| T (Tampering) | Reverse engineering | Attacker decompiles the APK, extracts API keys, discovers backend endpoints | 8.0 | Code obfuscation (ProGuard), store secrets in the OS keychain, API key rotation |
| R (Repudiation) | No audit trail | User unlocks the door but claims they didn’t; no proof of action | 4.5 | Server-side logging of all unlock commands with timestamp + user ID |
| I (Info Disclosure) | Hardcoded credentials | Developer hardcodes a test API key in the app source, visible in Git history | 9.0 | Never hardcode secrets; use environment variables; scan Git history for secrets |
| D (DoS) | App crash via malformed input | Attacker sends malformed JSON to the app, causing a crash loop | 3.5 | Input validation, exception handling, rate limiting |
| E (Elevation of Privilege) | Debug mode left enabled | Production build has debug logs enabled, exposing session tokens | 6.8 | Disable debug mode in release builds; use build variants |

Top Priority: Info Disclosure (9.0) - Hardcoded API keys allow attackers to impersonate the entire app, sending unlock commands to all users’ locks.

Mitigation: Use AWS Secrets Manager to fetch API keys at runtime. Never commit secrets to Git. Use git-secrets pre-commit hook.
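A minimal sketch of runtime secret loading. The environment variable name is hypothetical; in production the lookup would call a secrets service such as AWS Secrets Manager instead of the process environment.

```python
import os

def get_api_key() -> str:
    """Fetch the API key at runtime; it is never committed to source control.

    SMARTLOCK_API_KEY is an illustrative variable name for this sketch.
    A production build could replace this lookup with an AWS Secrets
    Manager call and cache the result.
    """
    key = os.environ.get("SMARTLOCK_API_KEY")
    if key is None:
        # Fail loudly at startup rather than shipping a hardcoded fallback
        raise RuntimeError("SMARTLOCK_API_KEY is not configured")
    return key
```

Because the key lives outside the repository, decompiling the app or scanning Git history yields nothing, which directly addresses the 9.0-scored threat above.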

36.3.2 Component 2: Cloud API

| STRIDE | Threat | Description | DREAD Score | Mitigation |
| --- | --- | --- | --- | --- |
| S (Spoofing) | Session hijacking | Attacker steals a JWT token and impersonates a legitimate user | 8.5 | Short token expiry (15 min), refresh tokens, IP binding |
| T (Tampering) | API injection | SQL injection: user_id = 1 OR 1=1 returns all locks | 9.5 | Parameterized queries, ORM (prevents SQL injection), input sanitization |
| R (Repudiation) | Log tampering | Attacker deletes CloudWatch logs to hide an unauthorized unlock | 5.0 | Immutable logs (write-once storage), SIEM integration |
| I (Info Disclosure) | Verbose error messages | Error message reveals database schema: Column 'user_password' not found | 6.0 | Generic error messages in production; detailed logs server-side only |
| D (DoS) | Rate limiting bypass | Attacker floods the API with unlock requests, exhausting AWS Lambda concurrency | 7.0 | API Gateway throttling (100 req/sec/user), WAF rate limiting |
| E (Elevation of Privilege) | Broken access control | User A can unlock User B’s door by changing the lock_id parameter | 9.8 | Authorization check: verify lock_id belongs to the authenticated user before executing the command |

Top Priority: Elevation of Privilege (9.8) - Broken access control allows any user to control any lock. This is the most critical vulnerability.

Mitigation:

# BEFORE (vulnerable)
def unlock_door(request):
    lock_id = request.params['lock_id']
    execute_unlock(lock_id)  # No authorization check!

# AFTER (secure)
def unlock_door(request):
    lock_id = request.params['lock_id']
    user_id = get_authenticated_user(request)

    # Authorization: verify lock belongs to user
    if not db.query("SELECT 1 FROM user_locks WHERE user_id=? AND lock_id=?", user_id, lock_id):
        return error(403, "Forbidden: Lock does not belong to you")

    execute_unlock(lock_id)

36.3.3 Component 3: Wi-Fi Network

| STRIDE | Threat | Description | DREAD Score | Mitigation |
| --- | --- | --- | --- | --- |
| S (Spoofing) | Rogue access point | Attacker sets up a fake “HOME-WIFI” network to capture credentials | 7.5 | WPA3-Enterprise (certificate-based); educate users about rogue APs |
| T (Tampering) | Man-in-the-Middle | Attacker on the same Wi-Fi intercepts an unlock command and modifies it to “lock” instead | 8.2 | End-to-end TLS (mobile app to cloud), MQTT over TLS (cloud to device) |
| R (Repudiation) | N/A | The network layer does not handle repudiation | 0.0 | — |
| I (Info Disclosure) | Wi-Fi eavesdropping | Attacker captures unencrypted MQTT messages revealing when the door unlocks | 8.0 | WPA2/WPA3 encryption plus MQTT over TLS (double encryption) |
| D (DoS) | Wi-Fi jamming | Attacker uses a 2.4 GHz jammer to block all Wi-Fi communication | 6.0 | Fallback to BLE mesh network, offline unlock codes |
| E (Elevation of Privilege) | ARP spoofing | Attacker ARP-poisons the network to become the gateway and intercepts traffic | 7.8 | Static ARP entries (not scalable), network segmentation (IoT VLAN) |

Top Priority: Man-in-the-Middle (8.2) - Without end-to-end encryption, attacker on Wi-Fi can modify unlock commands.

Mitigation: Implement MQTT over TLS with certificate pinning. Verify TLS certificates on device (reject self-signed certs).

36.3.4 Component 4: Smart Lock Device

| STRIDE | Threat | Description | DREAD Score | Mitigation |
| --- | --- | --- | --- | --- |
| S (Spoofing) | BLE impersonation | Attacker pretends to be the mobile app over BLE and sends an unlock command | 9.0 | BLE pairing with PIN code; whitelist trusted device MAC addresses |
| T (Tampering) | Firmware replacement | Attacker opens the lock, connects to UART, flashes malicious firmware | 9.5 | Secure Boot (ESP32 eFuse), epoxy-filled screws, tamper detection |
| R (Repudiation) | No local logging | Lock has no storage for audit logs of physical unlock events | 5.5 | Add an SD card for a local event log; sync to cloud when Wi-Fi is available |
| I (Info Disclosure) | Hardcoded Wi-Fi password | Wi-Fi credentials stored in plaintext flash memory | 8.5 | ESP32 flash encryption (AES-256), unique per-device credentials |
| D (DoS) | Battery exhaustion | Attacker sends 10,000 BLE unlock attempts, draining the battery in 2 hours | 6.5 | Rate limiting (max 5 unlock attempts/minute), sleep mode between requests |
| E (Elevation of Privilege) | Debug port enabled | JTAG enabled in production; attacker dumps firmware and keys | 9.2 | Disable JTAG via eFuse, use Secure Boot, remove debug headers from the PCB |

Top Priority: Tampering (9.5) - Physical access to UART allows firmware replacement, bypassing all software security.

Mitigation:

  1. Enable ESP32 Secure Boot (RSA-3072 signature verification)
  2. Burn eFuse to disable JTAG permanently: espefuse.py burn_efuse JTAG_DISABLE
  3. Tamper detection: Hall effect sensor detects case opening, triggers alarm + log entry

36.3.5 Cross-Component Threats

| STRIDE | Threat | Description | DREAD Score | Mitigation |
| --- | --- | --- | --- | --- |
| T (Tampering) | Replay attack | Attacker captures a BLE unlock command and replays it 5 hours later | 8.8 | Add timestamp + nonce to unlock commands; reject commands >30 seconds old |
| I (Info Disclosure) | Cloud data breach | Misconfigured AWS S3 bucket (public read) exposes all unlock history | 9.0 | S3 bucket policies (block public access), encryption at rest, access auditing |
| E (Elevation of Privilege) | Shared credentials | Family members share one account; no per-user access tracking | 6.0 | Multi-user support with individual accounts; role-based permissions (admin/guest) |

DREAD Scoring Methodology:

Each threat is scored 1-10 on five factors; the average of the five is the total risk score:

  • Damage: How bad is the impact? (1 = minor inconvenience, 10 = complete compromise)
  • Reproducibility: How easy is it to reproduce? (1 = almost impossible, 10 = works every time)
  • Exploitability: What skill is required? (1 = nation-state, 10 = script kiddie)
  • Affected users: How many are impacted? (1 = a single user, 10 = all users)
  • Discoverability: How easy is it to find? (1 = requires a source code audit, 10 = visible on Shodan)


Interpretation:

  • CRITICAL (8.0-10.0): Fix immediately (P0 priority)
  • HIGH (6.0-7.9): Schedule for next sprint (P1 priority)
  • MEDIUM (4.0-5.9): Plan for upcoming release (P2 priority)
  • LOW (0-3.9): Backlog or risk acceptance (P3/P4 priority)
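The scoring methodology and interpretation bands above can be expressed directly in code:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Average of the five 1-10 DREAD factors."""
    return (damage + reproducibility + exploitability + affected + discoverability) / 5

def risk_level(score):
    """Map a DREAD score to the priority bands defined above."""
    if score >= 8.0:
        return "CRITICAL"   # P0: fix immediately
    if score >= 6.0:
        return "HIGH"       # P1: next sprint
    if score >= 4.0:
        return "MEDIUM"     # P2: upcoming release
    return "LOW"            # P3/P4: backlog or risk acceptance
```

For instance, missing rate limiting scored D=6, R=8, E=7, A=9, D=6 averages to 7.2, which falls in the HIGH band.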

Example Scenarios to Try:

  • Default Telnet credentials: D=10, R=10, E=10, A=10, D=10 → DREAD 10.0 (CRITICAL)
  • Missing rate limiting: D=6, R=8, E=7, A=9, D=6 → DREAD 7.2 (HIGH)
  • Verbose error messages: D=5, R=7, E=6, A=8, D=4 → DREAD 6.0 (HIGH)
  • Missing audit logs: D=3, R=5, E=4, A=3, D=2 → DREAD 3.4 (LOW)

Top 5 Threats (Prioritized by DREAD):

  1. API Elevation of Privilege (9.8) - Broken access control allows user A to unlock user B’s door
    • Fix: Authorization check before executing commands
    • Timeline: CRITICAL - fix within 24 hours
  2. Device Firmware Tampering (9.5) - Physical access allows firmware replacement
    • Fix: Enable Secure Boot + disable JTAG
    • Timeline: URGENT - fix in next firmware release (1 week)
  3. API SQL Injection (9.5) - Tamper with SQL queries to access all locks
    • Fix: Migrate to parameterized queries
    • Timeline: CRITICAL - fix within 48 hours
  4. Device BLE Spoofing (9.0) - Attacker impersonates mobile app over BLE
    • Fix: BLE pairing with PIN, device whitelisting
    • Timeline: HIGH - fix in 2 weeks
  5. Cloud Data Breach (9.0) - Misconfigured S3 bucket exposes unlock history
    • Fix: S3 bucket policy audit, enable encryption
    • Timeline: CRITICAL - audit within 24 hours

Implementation Cost (Security Fixes):

  • API authorization fix: 4 hours development + 2 hours testing = €1,200
  • SQL injection fix: 8 hours refactoring + 4 hours testing = €2,400
  • Secure Boot enablement: 16 hours firmware + 8 hours testing = €4,800
  • BLE pairing: 12 hours development + 4 hours testing = €3,200
  • S3 audit: 2 hours audit + 1 hour fixes = €600
  • Total: €12,200 for security remediation

ROI: Cost of a single security breach (compromised smart lock used in burglary, lawsuit): €500,000+. Security investment: €12,200. Return: 40x.

This systematic STRIDE analysis ensures no threat category is overlooked and provides quantitative prioritization for remediation.

After completing a STRIDE threat model, you’ll have 20-50 identified threats. Limited budget and time require prioritization. This framework helps decide what to fix first.

| Priority Level | DREAD Score | Exploitability | Affected Users | Implementation Cost | Fix Timeline | Examples |
| --- | --- | --- | --- | --- | --- | --- |
| P0: Critical (Drop Everything) | 9.0-10.0 | High (script kiddie can exploit) | All users | Any cost justified | 24-48 hours | Default credentials, SQL injection, broken access control, RCE |
| P1: Urgent (Next Sprint) | 7.0-8.9 | Medium (requires network access) | >50% of users | <€20K | 1-2 weeks | Insecure firmware updates, missing authentication, XSS |
| P2: High (Planned Release) | 5.0-6.9 | Low (requires physical access) | 10-50% of users | <€10K | 1-2 months | Missing secure boot, weak encryption (DES), verbose error messages |
| P3: Medium (Backlog) | 3.0-4.9 | Very low (requires insider access) | <10% of users | <€5K | 3-6 months | Missing audit logs, weak password policy (6 chars), information disclosure |
| P4: Low (Nice-to-Have) | 0.1-2.9 | Extremely low (theoretical) | Individual users | <€2K | 12+ months | Missing rate limiting (already has WAF), verbose headers |

Decision Rules:

Rule 1: High DREAD + High Exploitability = P0

  • If DREAD ≥ 9.0 AND exploitability ≥ 7 (script kiddie can exploit), fix immediately
  • Example: SQL injection (DREAD 9.5, Exploitability 9) → P0 (fix within 24 hours)

Rule 2: Safety-Critical = P0 (Regardless of DREAD)

  • If exploitation could cause physical harm or death, fix immediately
  • Example: Smart lock firmware tampering (DREAD 9.5) → P0 even though requires physical access
  • Example: Insulin pump command injection (DREAD 10.0) → P0

Rule 3: Compliance-Required = P1

  • If mitigation required for regulatory compliance, schedule for next release
  • Example: ETSI EN 303 645 Provision 1 (no default passwords) → P1 (required for EU sale)

Rule 4: Cost-Benefit Analysis for P2/P3

  • Calculate expected loss: (DREAD/10) × (Affected Users) × (Average Breach Cost per User)
  • Compare to mitigation cost. If expected loss > mitigation cost, fix. Otherwise, accept risk.
  • Example: Verbose error messages (DREAD 6.0, 100 users affected, €500 breach cost/user)
    • Expected loss: (6.0/10) × 100 × €500 = €30,000
    • Mitigation cost: €2,000 (code cleanup)
    • Decision: Fix (€30K loss vs €2K cost)
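Rule 4’s arithmetic can be captured in a couple of helper functions (names are illustrative):

```python
def expected_loss(dread, affected_users, breach_cost_per_user):
    """Rule 4: (DREAD/10) x affected users x average breach cost per user."""
    return (dread / 10) * affected_users * breach_cost_per_user

def should_fix(dread, affected_users, breach_cost_per_user, mitigation_cost):
    """Fix when the expected loss exceeds the mitigation cost; otherwise accept the risk."""
    return expected_loss(dread, affected_users, breach_cost_per_user) > mitigation_cost
```

The verbose-error example works out as `expected_loss(6.0, 100, 500)` = €30,000 against a €2,000 mitigation cost, so the fix is justified.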

Rule 5: Defense-in-Depth Principle

  • If a single exploit can bypass all controls, prioritize higher
  • Example: No network segmentation + No device authentication + No encryption = P0
    • Fixing any one control breaks the exploit chain → all become P1

Rule 6: Publicly Known Vulnerability = +2 Priority Levels

  • If vulnerability has a CVE or public exploit code, increase urgency
  • Example: Heartbleed (OpenSSL) in IoT device (original DREAD 7.5) → P0 (public exploit available)

Cost-Constrained Scenarios:

36.3.6 Scenario A: Startup with €10K Security Budget

Identified Threats:

  1. SQL Injection (DREAD 9.5, fix cost €2K)
  2. No TLS (DREAD 8.0, fix cost €3K)
  3. Firmware not signed (DREAD 9.0, fix cost €6K)
  4. Verbose errors (DREAD 6.0, fix cost €1K)
  5. No rate limiting (DREAD 5.5, fix cost €2K)

Prioritization:

  • SQL Injection (P0) → Fix (€2K) - High DREAD, high exploitability
  • Firmware signing (P0) → Fix (€6K) - High DREAD, safety-critical
  • No TLS (P1) → Fix (€3K) - Compliance required (ETSI Provision 5)
  • Total spent: €11K (over budget by €1K)
  • Verbose errors (P2) → Defer (accept risk for now)
  • No rate limiting (P2) → Defer (WAF provides partial mitigation)

Budget Adjustment: Request additional €1K or find cheaper TLS solution (use AWS Certificate Manager - free).

36.3.7 Scenario B: Enterprise with €100K Security Budget

Identified Threats (same as above + more):

  • Fix all P0/P1/P2 threats immediately (€50K total)
  • Add security monitoring (€20K)
  • Penetration testing (€15K)
  • Security training for developers (€10K)
  • Remaining budget: €5K → allocate to P3 threats or security tooling (SAST/DAST)

Risk Acceptance Template (for Deferred Threats):

For threats marked P3/P4 (deferred due to budget), document risk acceptance:

| Threat ID | Description | DREAD | Mitigation Cost | Risk Acceptance |
| --- | --- | --- | --- | --- |
| T-042 | Verbose error messages expose database schema | 6.0 | €1,000 | Accepted: partial mitigation via WAF (blocks SQL injection). Expected loss €5K vs mitigation cost €1K. Fix in Q3 2025. Reviewed by: [CISO Name] |

Best For:

  • P0: Threats exploitable by unskilled attackers (default credentials, SQL injection, broken access control)
  • P1: Threats requiring network access (MITM, session hijacking, firmware tampering with physical access)
  • P2: Threats requiring insider access or physical access (debug ports, missing audit logs)
  • P3: Theoretical threats or low-impact issues (information disclosure, weak password policy)
  • P4: Nice-to-have improvements (additional logging, cosmetic security headers)

The “Good Enough” Rule: Perfect security is impossible and infinitely expensive. The goal is risk-proportional security: allocate budget based on actual risk. A €50 smart bulb doesn’t need €10K of security engineering. A €5,000 insulin pump does. Use DREAD scoring to quantify risk and allocate budget accordingly.

Common Mistake: Focusing Only on High-Tech Attacks While Ignoring Simple Vulnerabilities

The Mistake: Security teams spend months implementing advanced mitigations (AI-based anomaly detection, blockchain for audit trails, quantum-resistant cryptography) while overlooking basic vulnerabilities like default credentials, missing input validation, or unencrypted communications. The reasoning is: “We need cutting-edge security to protect against sophisticated attackers.”

Why This Fails:

  1. Attackers Follow the Path of Least Resistance: 90% of real-world IoT breaches exploit basic vulnerabilities, not sophisticated zero-days. The 2016 Mirai botnet compromised 600,000 devices using default credentials (admin/admin) - a vulnerability detectable in 5 seconds with nmap. No zero-day exploit needed.

  2. Advanced Mitigations Depend on Basic Controls: AI-based anomaly detection is useless if the attacker logs in with “admin/admin” - the system sees legitimate authenticated traffic. Blockchain audit trails don’t prevent SQL injection. Quantum-resistant crypto doesn’t help if TLS is disabled entirely. Advanced security assumes basic security is already in place.

  3. Budget Misallocation: Implementing an AI-based intrusion detection system costs €50K-€200K. Fixing default credentials costs €500 (change one configuration file). The AI system provides marginal benefit (detects 2-5% more attacks). Fixing default credentials prevents 60% of attacks. The €50K is better spent on 100 smaller security fixes.

  4. Compliance Requires Basics First: ETSI EN 303 645 Provision 1 (no default passwords) is mandatory. No amount of advanced security compensates for failing Provision 1. Regulators check basics first. Products with AI-powered security but default credentials still fail certification.

Real-World Example: Ring Doorbell Camera Credential Stuffing (2019)

Ring doorbells had advanced features (AI-powered motion detection, cloud video analytics) but lacked basic security:

  • No rate limiting on login attempts
  • No mandatory 2FA (it was optional)
  • No breach notification when credentials were used from new IP addresses

Attack: Credential stuffing (attackers used username/password pairs leaked from other sites like LinkedIn, Adobe). Attackers tried millions of credentials against Ring accounts.

Result:

  • 3,600+ Ring accounts compromised
  • Attackers watched live camera feeds, talked to children through doorbell speakers (extremely disturbing)
  • FTC settlement: $5.8 million fine + mandatory security improvements

The Irony: Ring’s AI-powered person detection (advanced feature) continued working perfectly during the breaches. The AI detected people at the door. It just didn’t matter because attackers had valid credentials. The basic security failure (no rate limiting, optional 2FA) negated all advanced features.

The Correct Prioritization (Basics Before Advanced):

Tier 1: Foundational Security (Fix These FIRST):

  1. No default credentials (ETSI Provision 1)
  2. Input validation (prevent SQL injection, XSS, command injection)
  3. TLS encryption for all network communications
  4. Authentication required for all interfaces
  5. Authorization checks (verify the user owns the resource)
  6. Secure update mechanism (signed firmware)

Cost: €10K-€30K (basic security hygiene). Prevents: 80% of real-world attacks.

Tier 2: Hardening (After Tier 1 is Complete):

  7. Network segmentation (IoT VLAN)
  8. Rate limiting (prevent brute force)
  9. Audit logging (SIEM integration)
  10. Secure boot (firmware signature verification)
  11. Tamper detection (hardware security)

Cost: €30K-€100K. Prevents: an additional 15% of attacks.

Tier 3: Advanced Security (After Tiers 1 & 2 are Complete):

  12. AI-based anomaly detection
  13. Behavioral analytics
  14. Threat intelligence feeds
  15. Quantum-resistant cryptography (future-proofing)
  16. Blockchain audit trails

Cost: €100K-€500K+. Prevents: an additional 3-5% of attacks (diminishing returns).

The “Fix the Door Before Installing the Alarm” Principle:

Imagine a house with:

  • No door lock (anyone can walk in)
  • Windows left open
  • A spare key under the mat
  • BUT: AI-powered facial recognition security cameras

Would the AI cameras prevent burglary? No. The burglar walks in through the unlocked door. The camera records it, but the theft still happens.

IoT Security Equivalent:

  • No authentication (unlocked door)
  • No encryption (open windows)
  • Default credentials (key under mat)
  • BUT: AI-powered anomaly detection

The AI system logs the intrusion, but the attacker still controls your devices because basic authentication was missing.

Implementation Order (Real-World Example):

Smart Building System (500 sensors, €20K security budget):

Wrong Approach (Advanced-First):

  • €15K: Deploy AI-based anomaly detection system
  • €5K: Blockchain audit trail for tamper-proof logs
  • €0: Authentication (no budget left)
  • Result: Attackers use default credentials to log in. The AI system sees “legitimate” authenticated traffic. A breach occurs. The advanced systems were useless.

Right Approach (Basics-First):

  • €2K: Eliminate default credentials (unique passwords per device)
  • €3K: Enable TLS encryption (MQTT over TLS)
  • €4K: Implement topic-level ACLs (authorization)
  • €2K: Add input validation (prevent injection attacks)
  • €3K: Secure update mechanism (signed firmware)
  • €2K: Rate limiting (prevent brute force)
  • €4K: Security audit (verify controls work)
  • Total: €20K (all spent on basics)
  • Result: 80% of attack vectors blocked. Remaining budget next year can fund advanced features.
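The basics-first allocation can be sanity-checked in a few lines; the line items and amounts are the ones from the text.

```python
# Basics-first budget for the 500-sensor smart building (amounts from the text).
budget_eur = {
    "eliminate default credentials": 2_000,
    "TLS encryption (MQTT over TLS)": 3_000,
    "topic-level ACLs": 4_000,
    "input validation": 2_000,
    "secure update mechanism": 3_000,
    "rate limiting": 2_000,
    "security audit": 4_000,
}

total = sum(budget_eur.values())
assert total == 20_000  # every euro goes to Tier 1 controls
print(f"€{total:,} allocated across {len(budget_eur)} controls")
```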

The Data-Driven Argument:

According to Verizon’s 2023 Data Breach Investigations Report:

  • 86% of breaches exploited known vulnerabilities (not zero-days)
  • 61% of breaches involved credential misuse (stolen, default, or weak passwords)
  • 13% of breaches exploited novel or unknown vulnerabilities

Interpretation: Fixing the basics (credentials, known vulnerabilities, access control) prevents 86% of breaches. Advanced security (zero-day detection, AI systems) addresses the remaining 14%. Spending 80% of your budget on advanced security that addresses 14% of threats is a misallocation.

The Exception (When to Invest in Advanced Security First):

Only invest in advanced security before basics if:

  1. You’re a high-value target (critical infrastructure, defense, financial sector) where nation-state attackers will use zero-days
  2. The basics are already implemented and audited (you’ve completed Tiers 1 & 2)
  3. Compliance explicitly requires it (e.g., NIST SP 800-53 Rev 5 SI-4 requires advanced monitoring for federal systems)

For 95% of IoT deployments, basic security is both necessary and sufficient. Master the fundamentals before pursuing advanced techniques. The Ring doorbell breach, Mirai botnet, and countless other incidents prove that basic security failures cause far more damage than advanced attack techniques.

Security risk is the expected loss from a threat, calculated as the product of attack probability, impact magnitude, and vulnerability severity.

\[\text{Risk} = P(\text{Attack}) \times I(\text{Impact}) \times V(\text{Vulnerability})\]

where each factor is normalized to the [0,1] scale.

Worked Calculation: Smart Building HVAC Compromise

Given:

  • Attack probability (Mirai-style credential stuffing): \(P = 0.65\) (65%, based on Shodan exposure)
  • Impact (downtime + equipment damage): \(I = 0.80\) (80% of $500K = $400K)
  • Vulnerability severity (default credentials + Telnet): \(V = 0.90\) (CVSS 9.0/10)

Step 1: Calculate base risk \[R_{\text{base}} = 0.65 \times 0.80 \times 0.90 = 0.468\]

Step 2: Apply mitigation effectiveness (unique passwords + disabled Telnet = 95% reduction) \[R_{\text{residual}} = R_{\text{base}} \times (1 - 0.95) = 0.468 \times 0.05 = 0.0234\]

Step 3: Convert to expected annual loss \[\text{Expected Loss} = R_{\text{residual}} \times I_{\text{max}} = 0.0234 \times \$500\text{K} = \$11.7\text{K}\]

Result: Mitigations reduce expected annual loss from $234K to $11.7K (95% reduction). At $222.3K in annual savings, the $25K mitigation cost pays for itself in about 1.3 months.
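The three steps can be reproduced directly; all inputs come from the worked example, and the payback period falls out of the annual savings.

```python
# Reproduce the worked HVAC risk calculation (inputs from the example above).
P, I, V = 0.65, 0.80, 0.90   # attack probability, impact, vulnerability severity
max_loss = 500_000           # worst-case annual loss, in dollars
effectiveness = 0.95         # unique passwords + Telnet disabled
mitigation_cost = 25_000

base_risk = P * I * V                             # Step 1: 0.468
residual_risk = base_risk * (1 - effectiveness)   # Step 2: 0.0234
loss_before = base_risk * max_loss                # $234,000 expected annual loss
loss_after = residual_risk * max_loss             # Step 3: $11,700
annual_savings = loss_before - loss_after         # $222,300
payback_months = mitigation_cost / (annual_savings / 12)

print(f"Residual risk: {residual_risk:.4f}")
print(f"Expected loss after mitigation: ${loss_after:,.0f}")
print(f"Payback: {payback_months:.1f} months")
```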

In practice: Traditional IT risk models assume data-only impact. IoT adds physical consequences (HVAC failure in a hospital becomes a patient-safety incident), requiring cyber-physical risk quantification.

Interactive Risk Quantification Calculator

Calculate expected annual loss and ROI for security mitigations using the Risk = P × I × V formula.

The calculator reports the following metrics:

  • Base Risk and Residual Risk (as percentages)
  • Risk Reduction
  • Expected Loss (Before) and Expected Loss (After)
  • Annual Savings
  • Return on Investment (ROI)
  • Payback Period (in months)
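The same arithmetic can be sketched in plain Python. One caveat: the ROI convention used here, (first-year savings − cost) / cost, is an assumption on my part; the text does not pin down a definition.

```python
def risk_calculator(p, i, v, max_loss, effectiveness, cost):
    """Metrics analogous to the calculator's outputs.

    Assumption: ROI = (first-year savings - cost) / cost. This is one
    plausible convention, not one stated by the text.
    """
    base = p * i * v                       # Risk = P x I x V, each in [0,1]
    residual = base * (1 - effectiveness)
    loss_before = base * max_loss
    loss_after = residual * max_loss
    savings = loss_before - loss_after     # first-year savings
    roi = (savings - cost) / cost
    payback_months = float("inf") if savings <= 0 else cost / (savings / 12)
    return {
        "base_risk": base, "residual_risk": residual,
        "risk_reduction": effectiveness,
        "loss_before": loss_before, "loss_after": loss_after,
        "annual_savings": savings, "roi": roi,
        "payback_months": payback_months,
    }

# Scenario 1 below: fixing default credentials.
m = risk_calculator(0.65, 0.80, 0.90, 500_000, 0.95, 2_000)
print(f"ROI {m['roi']:.0%}, payback {m['payback_months']:.2f} months")
```

With these inputs the ROI lands near the ~11,000% quoted for the default-credentials scenario; the other scenarios are sensitive to which ROI convention is applied.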

Interpretation:

  • ROI > 100%: Excellent investment - mitigation pays for itself and saves money
  • ROI 0-100%: Good investment - mitigation reduces risk cost-effectively
  • ROI < 0%: Poor investment - mitigation costs more than expected savings (may still be required for compliance)
  • Payback < 12 months: Fast return on investment
  • Payback > 24 months: Slow return - consider alternatives or accept risk
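The interpretation bands can be encoded as small helpers. Thresholds are taken from the list above; the "moderate" label for the 12-24 month gap the list leaves open is my own placeholder.

```python
def classify_roi(roi_pct: float) -> str:
    """Map an ROI percentage to the interpretation bands listed above."""
    if roi_pct > 100:
        return "excellent"
    if roi_pct >= 0:
        return "good"
    return "poor"  # may still be required for compliance

def classify_payback(months: float) -> str:
    """Payback bands; 'moderate' for 12-24 months is an assumed label."""
    if months < 12:
        return "fast"
    if months > 24:
        return "slow"
    return "moderate"

print(classify_roi(11_000), classify_payback(1.3))  # → excellent fast
```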

Try These Scenarios:

  1. Default credentials fix: P=0.65, I=0.80, V=0.90, Max=$500K, Effectiveness=0.95, Cost=$2K → ROI ~11,000%
  2. Firmware signing: P=0.30, I=0.85, V=0.85, Max=$400K, Effectiveness=0.90, Cost=$25K → ROI ~280%
  3. Advanced AI IDS: P=0.10, I=0.40, V=0.50, Max=$200K, Effectiveness=0.30, Cost=$50K → ROI ~-82% (negative)

36.4 What’s Next

If you want to…  →  Read this

Review the threat modelling concepts applied in exercises  →  Threat Modelling and Mitigation
Study the STRIDE framework used in analysis exercises  →  STRIDE Framework
Apply exercise skills in hands-on labs  →  IoT Security Hands-On Labs
Practise additional scenarios and self-assessment  →  Security Practice
Return to the security module overview  →  IoT Security Fundamentals

Common Pitfalls

Security exercises are not about producing the correct output — they are about building the analytical reasoning process. If you can complete an exercise but cannot explain your reasoning for each decision, repeat it with the focus on justification rather than answers.

The tendency to practise only scenarios where you feel confident leaves critical skill gaps unaddressed. Deliberately select exercises in areas that feel difficult — these are exactly the areas where practice provides the most benefit.

Open-ended exercises completed without time pressure do not prepare you for timed assessments or real project decisions made under deadline. Practise with realistic time constraints to develop the discipline of efficient security reasoning.

Real security problems rarely have a single root cause. Practise finding multiple vulnerabilities in each exercise scenario before stopping, developing the habit of systematic rather than opportunistic security analysis.