32  IoT Security Practice and Assessment

Learning Objectives

After completing this practice section, you will be able to:

  • Apply STRIDE threat modeling systematically to real-world IoT system architectures
  • Implement and test security controls on ESP32 including secure boot simulation, network defense, and access control
  • Calculate and interpret DREAD risk scores for IoT security scenarios
  • Evaluate and compare defense-in-depth strategies across device, network, and application layers

In 60 Seconds

Practising IoT security through structured exercises, scenario analysis, and self-assessment questions consolidates conceptual knowledge into the applied judgment needed for real security decisions. The most effective practice tasks are scenario-based: given a specific IoT deployment with described constraints, identify the most critical threats, select the appropriate controls, and justify the trade-offs.

This hands-on chapter lets you practice IoT security concepts through exercises and scenarios. Think of it like a fire drill – practicing your response to threats in a safe environment prepares you to handle real incidents effectively. Working through these exercises builds the practical skills needed to secure real IoT deployments.

“Time to practice!” Max the Microcontroller put on his coach hat. “The best way to learn security is by DOING it. These exercises let you try out STRIDE threat modeling, set up security controls on real ESP32 boards, and analyze attack scenarios – all in a safe practice environment.”

Sammy the Sensor agreed. “It is like a fire drill for cybersecurity! You do not want the first time you handle a security incident to be during an actual attack. By practicing with realistic scenarios – like a smart building with 500 IoT devices – you develop the instincts to spot and stop threats quickly.”

“DREAD risk assessment is one of my favorite tools,” Lila the LED said. “You score each threat on five factors: Damage, Reproducibility, Exploitability, Affected users, and Discoverability. Each gets a score from 0 to 10, and the total tells you how urgently you need to fix it. A threat that is easy to exploit and affects many users gets fixed first!”

“The visual reference gallery here is also super helpful,” Bella the Battery noted. “Real security professionals use these kinds of diagrams every day to communicate risks to their teams. Being able to read and create security diagrams is a skill that will serve you well throughout your career. Now go practice – your future secure IoT systems depend on it!”

32.1 Introduction

This section provides comprehensive practical resources for IoT security including hands-on labs, exam preparation materials, and advanced technical concepts. The content has been organized into focused chapters for effective learning.

32.3 Chapter Structure

This practice section is organized into three focused chapters:

32.3.1 Security Practice Labs

Time: ~60 min | Level: Intermediate-Advanced

Hands-on security assessment labs:

  • Lab 1: IoT device security audit checklist (physical, network, authentication, firmware, privacy)
  • Lab 2: Network segmentation for IoT devices (guest network and VLAN configuration)
  • Lab 3: HTTPS certificate verification using OpenSSL and browser tools

Key Takeaway: Practical skills developed through hands-on labs translate directly to real-world security assessments.
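
Lab 3's certificate verification can also be scripted rather than done in a browser. Below is a minimal Python sketch using the standard-library ssl module; it assumes outbound network access and the system trust store, and the host name passed in is whatever device or service you are auditing:

```python
import socket
import ssl
from datetime import datetime, timezone

def check_certificate(host: str, port: int = 443) -> dict:
    """Connect to host:port over TLS, verify the certificate chain
    against the system trust store, and return key certificate fields.
    Raises ssl.SSLError if verification fails."""
    # create_default_context() enables chain verification and hostname checking
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # only populated after successful verification
    # Parse the expiry date, e.g. "Jun  1 12:00:00 2026 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    days_left = (not_after.replace(tzinfo=timezone.utc)
                 - datetime.now(timezone.utc)).days
    return {"subject": cert["subject"],
            "issuer": cert["issuer"],
            "days_until_expiry": days_left}
```

Calling `check_certificate("your-gateway.example")` on a device with a self-signed or expired certificate raises an error instead of returning data, which is exactly the failure the lab is designed to surface.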

32.3.2 Exam Preparation Guide

Time: ~45 min | Level: Advanced

Comprehensive study materials:

  • Key concepts to master (CIA triad, OWASP Top 10, defense-in-depth)
  • Memory aids and mnemonics for exam recall
  • Practice problems with detailed solutions
  • Time management strategies for different question types
  • Common mistakes and red flags to avoid

Key Takeaway: Structured exam preparation with practice problems builds confidence and ensures concept mastery.

32.3.3 Advanced Security Concepts

Time: ~50 min | Level: Advanced

Deep technical knowledge:

  • Cryptographic strength and brute-force analysis
  • Secure boot chain of trust implementation
  • TLS 1.3 vs DTLS performance comparison
  • STRIDE threat modeling methodology
  • Side-channel attacks and mitigations

Key Takeaway: Understanding security at a technical level enables design of production-grade secure systems.

32.4 Knowledge Check

Scenario: A commercial building deploys 200 IoT-connected HVAC controllers to manage heating, cooling, and air quality across 30 floors. Each controller communicates with a central gateway via BACnet protocol, and the gateway exposes a web interface for building managers.

System Components:

  1. 200 HVAC controllers (temperature sensors + actuators for dampers/fans)
  2. 1 central gateway (aggregates controller data, hosts web UI)
  3. BACnet protocol (unencrypted, unauthenticated by default)
  4. Web management interface (HTTPS with username/password authentication)

Step-by-Step STRIDE Analysis:

Calculate DREAD scores for threat prioritization by rating each of the five risk factors:

Example: BACnet Protocol Vulnerability

Threat: Spoofing Attack (false temperature reading injection)

Scoring with realistic values:

  • Damage: 8/10 (incorrect HVAC response causes discomfort, potential $50K production loss in adjacent data center)
  • Reproducibility: 10/10 (trivial BACnet command, no authentication required)
  • Exploitability: 7/10 (requires network access + knowledge of BACnet protocol structure)
  • Affected Users: 6/10 (affects 5 floors out of 30, approximately 200 occupants)
  • Discoverability: 4/10 (internal network only, not internet-exposed or Shodan-visible)

DREAD Score: (8 + 10 + 7 + 6 + 4) / 5 = 7.0 (HIGH priority)

Risk reduction with mitigation: After implementing BACnet Secure Connect (BACnet/SC) with device authentication:

  • Exploitability drops: 7 → 2 (now requires compromised authenticated device)
  • Affected Users drops: 6 → 1 (authentication limits scope to single controller if compromised)

DREAD after mitigation: (8 + 10 + 2 + 1 + 4) / 5 = 5.0 (MEDIUM)

ROI calculation: Mitigation cost $25K (BACnet/SC deployment), reduces risk score by 28.6%, preventing $50K annual expected loss → 100% ROI in first year.
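
The arithmetic above is easy to encode and reuse. A minimal Python sketch of the DREAD scoring plus the first-year ROI figure, using the values from this scenario:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """DREAD score = mean of the five factor ratings, each 0-10."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(0 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor must be rated 0-10")
    return sum(factors) / len(factors)

# BACnet spoofing threat, before and after the BACnet/SC mitigation
before = dread_score(8, 10, 7, 6, 4)   # 7.0 -> HIGH priority
after = dread_score(8, 10, 2, 1, 4)    # 5.0 -> MEDIUM priority

# First-year ROI of the mitigation, using the scenario's figures
mitigation_cost = 25_000
annual_loss_prevented = 50_000
roi = (annual_loss_prevented - mitigation_cost) / mitigation_cost  # 1.0 = 100%
```

The same function scores every threat in the tables that follow, which is what makes DREAD useful for ranking a long threat list consistently.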

Component 1: HVAC Controllers

| STRIDE Category | Threat | Likelihood | Impact | Mitigation |
|---|---|---|---|---|
| Spoofing | Attacker impersonates legitimate controller, sends false temperature readings | HIGH (no authentication) | MEDIUM (false data triggers incorrect HVAC actions) | Implement controller certificates, mutual TLS |
| Tampering | Attacker modifies controller firmware to alter temperature setpoints | MEDIUM (requires network access) | HIGH (comfort + energy waste) | Code signing, secure boot |
| Repudiation | Controller denies sending command that caused system failure | LOW (logs exist) | LOW (attribution issue only) | Tamper-proof logging with timestamps |
| Information Disclosure | Attacker eavesdrops on BACnet traffic, learns building occupancy patterns | HIGH (unencrypted protocol) | MEDIUM (privacy, security planning intel) | Encrypt BACnet with TLS wrapper |
| Denial of Service | Attacker floods controllers with requests, crashes HVAC system | MEDIUM (network accessible) | HIGH (building evacuation, production loss) | Rate limiting, input validation |
| Elevation of Privilege | Attacker gains admin access to controller, reprograms behavior | MEDIUM (default credentials) | HIGH (full system control) | Unique credentials, RBAC |

Component 2: Central Gateway

| STRIDE Category | Threat | Likelihood | Impact | Mitigation |
|---|---|---|---|---|
| Spoofing | Attacker spoofs gateway IP, controllers send data to attacker | MEDIUM (requires ARP poisoning) | HIGH (data exfiltration) | Static ARP entries, switch port security |
| Tampering | Attacker modifies gateway config to disable safety limits | HIGH (web interface vulnerable) | CRITICAL (safety incident) | Config integrity checks, change auditing |
| Repudiation | Admin denies making temperature changes that caused complaint | LOW (audit logs present) | LOW (internal dispute) | Non-repudiation via signed audit logs |
| Information Disclosure | Attacker steals building manager credentials from gateway | HIGH (weak session management) | HIGH (full building control) | MFA, encrypted credential storage |
| Denial of Service | Attacker crashes gateway with malformed web requests | MEDIUM (public-facing) | CRITICAL (all 200 controllers lose management) | WAF, input validation, redundant gateway |
| Elevation of Privilege | Guest user escalates to admin via web UI exploit | MEDIUM (unpatched software) | CRITICAL (full system takeover) | Patch management, least privilege |

Prioritization Using DREAD Scores:

Top 5 Threats (Highest Risk):

  1. Gateway Tampering (config modification): DREAD 8.6 (Damage=9, Repro=10, Exploit=7, Affected=9, Discover=8)
  2. Controller Spoofing (false temp data): DREAD 7.8 (Damage=8, Repro=9, Exploit=8, Affected=7, Discover=7)
  3. Information Disclosure (BACnet eavesdrop): DREAD 7.2 (Damage=7, Repro=10, Exploit=8, Affected=6, Discover=5)
  4. Gateway Elevation of Privilege: DREAD 7.0 (Damage=9, Repro=8, Exploit=6, Affected=8, Discover=4)
  5. Controller Denial of Service: DREAD 6.8 (Damage=8, Repro=7, Exploit=6, Affected=7, Discover=6)
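
Ranking becomes mechanical once the factor ratings exist. A short Python sketch that recomputes and sorts the first four threats above by DREAD score:

```python
# threat name -> (Damage, Reproducibility, Exploitability, Affected, Discoverability)
threats = {
    "Gateway tampering (config modification)": (9, 10, 7, 9, 8),
    "Controller spoofing (false temp data)": (8, 9, 8, 7, 7),
    "BACnet eavesdropping": (7, 10, 8, 6, 5),
    "Gateway elevation of privilege": (9, 8, 6, 8, 4),
}

def dread(factors):
    """DREAD score = mean of the five factor ratings."""
    return sum(factors) / 5

# Sort threats from highest to lowest risk
ranked = sorted(threats, key=lambda name: dread(threats[name]), reverse=True)
for name in ranked:
    print(f"{dread(threats[name]):.1f}  {name}")
```

Keeping the factor ratings in data rather than in prose also makes it trivial to re-rank after a mitigation changes one factor.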

Mitigation Implementation Plan:

| Priority | Threat | Mitigation Action | Cost | Timeline | Risk Reduction |
|---|---|---|---|---|---|
| 1 | Gateway config tampering | Implement config integrity checks (hash verification on startup) + RBAC with approval workflow | $8K | 2 weeks | 8.6 → 2.1 (75% reduction) |
| 2 | Controller spoofing | Deploy BACnet Secure Connect (BACnet/SC) with TLS + device certificates | $25K | 6 weeks | 7.8 → 3.2 (59% reduction) |
| 3 | BACnet eavesdropping | Enable encryption via BACnet/SC (same as #2) | Included | 6 weeks | 7.2 → 2.8 (61% reduction) |
| 4 | Gateway privilege escalation | Patch web UI software + enable MFA for admin accounts | $3K | 1 week | 7.0 → 3.5 (50% reduction) |
| 5 | Controller DoS | Implement rate limiting (50 requests/second per controller) + input validation | $5K | 2 weeks | 6.8 → 4.0 (41% reduction) |

Total Investment: $41K over 6 weeks
Risk Reduction: combined DREAD score 37.4 → 15.6 (58% overall reduction)

Key Insight: STRIDE analysis systematically identifies 30+ potential threats. DREAD scoring prioritizes the 5 highest-risk issues for limited budget. Addressing these 5 threats provides 58% risk reduction for $41K investment, compared to addressing all 30 threats for $200K+ investment.

Choosing between different security assessment approaches depends on your resources, expertise, and risk tolerance:

| Assessment Type | Cost | Expertise Required | Depth | Speed | Best For |
|---|---|---|---|---|---|
| Automated Vulnerability Scanner (Nessus, Qualys, OpenVAS) | $5-15K/year | Low (point-and-click) | Shallow (known CVEs only) | Fast (1000 devices/hour) | Large IoT fleets, continuous monitoring, compliance scanning |
| Penetration Testing (third-party security firm) | $15-50K per engagement | N/A (outsourced) | Deep (custom exploits, chaining) | Slow (2-4 weeks per engagement) | Critical systems, pre-deployment validation, compliance requirements |
| Red Team Exercise (adversary simulation) | $50-200K per engagement | N/A (specialized firm) | Very deep (full attack kill chain) | Very slow (8-12 weeks) | High-value targets, nation-state threat model, board-level security validation |
| Manual Code Review (source code audit) | $10-30K per 10K LOC | High (security engineers) | Very deep (logic flaws, backdoors) | Slow (500-1000 LOC/day) | Custom firmware, open-source components, supply chain verification |
| Threat Modeling Workshop (STRIDE/DREAD) | $5-10K per system | Medium (facilitated session) | Medium (design-level flaws) | Medium (2-3 days per system) | New system designs, major architecture changes, compliance |

Decision Matrix:

START: What is your primary goal?

├─ Continuous monitoring of known vulnerabilities?
│  └─ Use: Automated Scanner ($5-15K/year)
│     └─ Examples: Tenable.io, Qualys VMDR, Rapid7 InsightVM
│
├─ Validate security before production deployment?
│  └─ Budget > $50K?
│     ├─ YES → Use: Penetration Test + Code Review
│     └─ NO → Use: Threat Modeling + Automated Scanner
│
├─ Prove security to board/investors/regulators?
│  └─ Use: Red Team Exercise + Third-party attestation
│     └─ Examples: SOC 2 Type II, ISO 27001 certification
│
├─ Understand security of acquired/open-source code?
│  └─ Use: Manual Code Review + Binary analysis
│     └─ Tools: IDA Pro, Ghidra, Binwalk
│
└─ Limited budget (<$5K)?
   └─ Use: Open-source scanner + in-house threat modeling
      └─ Tools: OpenVAS, OWASP ZAP, free STRIDE workshops
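
The decision matrix above can be expressed as a small function. The sketch below mirrors the branches; the goal strings and budget thresholds are illustrative labels taken from this chapter, not a standard taxonomy:

```python
def recommend_assessment(goal: str, budget_usd: int = 0) -> str:
    """Map a primary goal and budget to an assessment approach,
    following the decision matrix in this chapter."""
    # Budget constraint overrides everything else
    if budget_usd < 5_000:
        return "Open-source scanner + in-house threat modeling"
    if goal == "continuous monitoring":
        return "Automated scanner"
    if goal == "pre-deployment validation":
        # Deeper (and costlier) methods only when the budget allows
        return ("Penetration test + code review" if budget_usd > 50_000
                else "Threat modeling + automated scanner")
    if goal == "board/regulator assurance":
        return "Red team exercise + third-party attestation"
    if goal == "acquired/open-source code":
        return "Manual code review + binary analysis"
    raise ValueError(f"unknown goal: {goal}")
```

Encoding the matrix this way also forces the branches to be mutually exclusive, which is a useful sanity check on any decision tree you draw.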

Hybrid Approach for Maximum Coverage:

Tier 1 (Every system, every quarter): Automated vulnerability scanning

  • Cost: $10K/year
  • Coverage: 100% of internet-facing and internal IoT devices
  • Detects: Known CVEs, misconfigurations, missing patches

Tier 2 (Critical systems, annually): Penetration testing

  • Cost: $25K/year
  • Coverage: 20% of fleet (high-value targets)
  • Detects: Exploitable vulnerability chains, logic flaws

Tier 3 (New systems, at design phase): Threat modeling

  • Cost: $8K per new system
  • Coverage: All new deployments before production
  • Detects: Architecture-level security gaps

Tier 4 (On-demand, as needed): Code review + Red team

  • Cost: $50K budget reserved
  • Triggered by: Major incidents, M&A due diligence, regulatory audit

Total Annual Security Assessment Budget: $93K + on-demand reserves
Expected Detection: 95%+ of security issues before exploitation

Key Principle: Layer assessment methods to achieve breadth (scanners) AND depth (pentesting) within budget constraints.

Common Mistake: Performing DREAD Scoring Before Understanding the System

The Mistake: A security team downloads a STRIDE threat list, assigns DREAD scores to generic threats, and creates a prioritized remediation plan—all without understanding the specific IoT system architecture.

Example of Incorrect DREAD Scoring:

Generic Threat: “SQL injection in web interface”
Team’s DREAD Score: Damage=10, Reproducibility=9, Exploitability=8, Affected=10, Discoverability=7
Overall: 8.8/10 (CRITICAL)
Prioritization: Immediate remediation required

Reality Check: The IoT system in question is a sensor network that:

  • Has NO web interface (only MQTT communication)
  • Has NO database (data forwarded to cloud, not stored locally)
  • Has NO SQL anywhere in the stack

Result: Team wasted 40 hours implementing “SQL injection mitigations” for a non-existent attack surface.

Why This Happens:

  1. Using generic threat lists without system-specific analysis
  2. Copying DREAD scores from other projects
  3. Threat modeling by committee without technical review
  4. Skipping the “What are we building?” step of threat modeling

The Correct Approach:

Step 1: Understand the System (4-8 hours before STRIDE)

  • Draw an architecture diagram showing all components
  • Document data flows (where does data enter, transform, exit?)
  • Identify trust boundaries (internet, network, device, cloud)
  • List all protocols, interfaces, and authentication mechanisms

Step 2: Component-Specific STRIDE (2 hours per major component)

  • Apply STRIDE to each component individually
  • Only score threats that are POSSIBLE given the component’s capabilities
  • Example: If a component has no user input, skip “Tampering via input injection”

Step 3: Context-Specific DREAD (15 min per threat)

  • Damage: based on actual data stored/processed by THIS component
  • Reproducibility: based on actual access requirements for THIS deployment
  • Exploitability: based on actual attacker skill needed for THIS implementation
  • Affected Users: based on actual deployment scale for THIS system
  • Discoverability: based on actual visibility of THIS device (Shodan? Internal only?)
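
Step 2's rule that you only score possible threats can be automated as a simple precondition filter. A minimal Python sketch, where the component capabilities and the threats' preconditions are illustrative assumptions:

```python
# Capabilities of the system under analysis (illustrative; fill in from your
# Step 1 architecture review)
system = {
    "web_interface": False,   # MQTT-only sensor network
    "sql_database": False,    # data forwarded to cloud, not stored locally
    "mqtt_broker": True,
    "user_input": False,
}

# Candidate threats with the capabilities they require to be possible at all
candidate_threats = [
    {"name": "SQL injection in web interface",
     "requires": ["web_interface", "sql_database"]},
    {"name": "MQTT topic injection (sensor spoofing)",
     "requires": ["mqtt_broker"]},
    {"name": "Tampering via input injection",
     "requires": ["user_input"]},
]

# Keep only threats whose preconditions the system actually meets
applicable = [t["name"] for t in candidate_threats
              if all(system.get(cap) for cap in t["requires"])]
print(applicable)
```

For this system only the MQTT threat survives the filter, which is exactly the outcome the corrected example below reaches by hand: DREAD effort goes to the one threat that is actually possible.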

Corrected Example:

Specific System: Industrial sensor network with MQTT broker
Specific Threat: “MQTT topic injection allowing sensor spoofing”

Informed DREAD Scoring:

  • Damage: 8/10 (false sensor data triggers safety shutdown, $50K production loss per incident)
  • Reproducibility: 10/10 (any network access, trivial MQTT publish command)
  • Exploitability: 7/10 (requires network access + knowledge of topic structure)
  • Affected Users: 6/10 (affects 1 production line out of 5)
  • Discoverability: 4/10 (internal network only, not Shodan-visible)

Overall: 7.0/10 (HIGH priority)

Mitigation: Implement MQTT access control lists (ACLs) restricting which clients can publish to sensor topics. Cost: $3K (2 weeks config work). Risk reduction: 7.0 → 2.5.

Contrast with Generic Approach: If the team had used generic DREAD scores without understanding the system, they might have scored this threat as 5/10 (MEDIUM), reasoning that “MQTT is just messaging, how bad can it be?” and missing the critical insight that false sensor data triggers production shutdowns.

Key Takeaway: DREAD is a scoring tool, not a threat discovery tool. Understand the system FIRST, score threats SECOND. Generic threats scored in a vacuum lead to wasted remediation effort on non-existent risks while missing real vulnerabilities specific to your deployment.

32.5 What’s Next

| If you want to… | Read this |
|---|---|
| Review security foundations to fill knowledge gaps | Security Foundations |
| Study advanced concepts for more challenging practice | Security Advanced Concepts |
| Prepare systematically for formal examination | Security Exam Preparation |
| Deepen understanding through security labs | Security Labs |
| Return to the security module overview | IoT Security Fundamentals |

Common Pitfalls

Re-reading notes feels productive but produces minimal long-term retention. Active practice — writing answers to security scenario questions from memory, then checking — is far more effective. Use spaced repetition for key security definitions and frameworks.

Real IoT security decisions involve resource constraints, budget limits, regulatory requirements, and competing stakeholder priorities. Practice with scenarios that include these constraints rather than idealised cases where the ‘correct’ answer is obvious.

Incorrect answers on practice questions are the most valuable learning data. For every wrong answer, identify whether the error was factual (wrong definition), reasoning (correct facts, wrong application), or context (right answer for wrong scenario).

The natural tendency to practise familiar material reinforces what is already known while leaving gaps in difficult areas. Deliberately weight practice towards the topics and question types that feel most challenging.

32.6 Summary

This practice section provides the hands-on component of IoT security learning:

  • Labs: Develop practical skills through device audits, network segmentation, and certificate verification
  • Exam Prep: Build confidence with structured study materials and practice problems
  • Advanced Concepts: Understand security at a technical level for production system design

Complete these chapters to transform security knowledge into practical competence.

Concept Relationships

Understanding how security practice concepts interconnect:

| Practice Area | Theory Foundation | Validates Understanding Of | Enables Real-World Skill |
|---|---|---|---|
| STRIDE Analysis | Threat taxonomy (S/T/R/I/D/E categories) | Attack vector identification, security properties mapping | Systematic threat modeling for new IoT systems |
| DREAD Scoring | Risk = Likelihood × Impact | Threat prioritization, resource allocation | Data-driven security investment decisions |
| Attack Trees | Probability propagation, AND/OR logic | Defense-in-depth effectiveness, weakest link analysis | Quantifying ROI of security controls |
| Network Security Analyzer | OSI layer threats, protocol vulnerabilities | MITM detection, encryption verification | Real-time security monitoring configuration |
| IoT Security Posture Assessment | CIA triad context, risk factors | Asset criticality, exposure assessment | Executive security dashboards |

Key Insight: Practice exercises simulate real security decisions you’ll face: Which vulnerability to fix first? How much to spend on encryption? What security level for this device? Quantitative methods (DREAD, attack trees) turn subjective “security is important” into objective “fix this threat for $X to avoid $Y loss.”

See Also

Interactive Tools:

  • IoT Security Posture Assessment (in this chapter) - Risk calculator
  • Network Security Analyzer (in this chapter) - Protocol vulnerability detector
  • Simulations Hub - Additional security simulators
