26  Simulation-Driven Development

26.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Apply simulation-driven development workflows across project phases
  • Implement the IoT testing pyramid with appropriate test coverage
  • Configure hardware-in-the-loop (HIL) testing environments
  • Follow best practices for simulation-to-hardware transitions
  • Integrate simulation testing into CI/CD pipelines

Key Concepts

  • Hardware-in-the-Loop (HIL): A testing technique where physical IoT hardware runs production firmware while external simulation hardware injects realistic sensor inputs and monitors outputs
  • Software-in-the-Loop (SIL): Testing where firmware compiled for the target MCU runs in a PC-based simulator with simulated peripherals; faster than HIL but less representative of real hardware timing
  • Test Coverage: The fraction of code paths or requirements exercised by a test suite; 100% line coverage does not guarantee correctness — test cases must also be meaningful
  • Regression Test: A test that verifies a previously working function still works after code changes; essential for catching unintended side effects of firmware modifications
  • Protocol Emulator: A device or software that mimics the behavior of a communication partner (cloud server, BLE central, Modbus master) for testing without a real counterpart
  • Fault Injection: Deliberately introducing hardware faults (low voltage, temperature extremes, bit errors) during testing to verify the firmware handles failure conditions gracefully
  • Test Automation: Using scripts or frameworks to execute tests automatically and compare results to expected values, enabling CI/CD workflows for embedded firmware

In 60 Seconds

Simulation-based testing and validation bridges the gap between unit tests on developer machines and full field testing by providing controlled, reproducible environments — from hardware-in-the-loop rigs that inject real sensor signals to network emulators that test firmware behavior under specific packet loss and latency conditions.

Simulation-driven development gives you a structured, proven process for creating IoT systems from initial concept to finished product. Think of it like following a recipe when cooking a complex meal – the workflow tells you what to do first, how to handle each step, and how to bring everything together into a successful final result.

“Simulation-driven development means you test in the virtual world FIRST, then move to real hardware,” explained Max the Microcontroller. “It is like a testing pyramid – lots of small, fast unit tests at the bottom, some integration tests in the middle, and a few full hardware tests at the top.”

Sammy the Sensor described Hardware-in-the-Loop testing: “That is when you connect REAL hardware to a simulated environment. For example, a real ESP32 board connected to a virtual sensor that sends fake temperature data. This way you test the real microcontroller without needing the actual sensor. It catches bugs that pure simulation misses.”

“The transition from simulation to hardware is where many teams stumble,” warned Lila the LED. “Code that works perfectly in the simulator sometimes fails on real hardware because of timing differences, electrical noise, or memory constraints. Always plan for a testing phase where you run the same tests on both.” Bella the Battery added, “Good testing saves you from shipping broken devices to customers. Test early, test often, test on real hardware before shipping!”

26.2 Prerequisites

Before diving into this chapter, you should be familiar with:

26.3 Simulation-Driven Development Workflow

Estimated time: ~15 min | Intermediate | P13.C03.U07

Figure 26.1: Hardware simulation workflow showing development progression from virtual prototyping (Phase 1) through hardware validation (Phase 2), optimization (Phase 3), and production deployment (Phase 4). The teal phase represents cost-free simulation enabling 80-90% of development work, orange represents initial hardware validation, navy represents optimization on physical hardware, and gray represents production scaling. Iteration loops in Phase 1 and 2 enable rapid debugging before costly production investment.

26.3.1 Phase 1: Design and Prototype

  1. Circuit Design: Build circuit in simulator (Wokwi, Tinkercad)
  2. Firmware Development: Write and test code in simulation
  3. Debugging: Use simulator debugging tools
  4. Iteration: Rapidly test design variations

Duration: Days to weeks

Cost: $0 (time only)

26.3.2 Phase 2: Hardware Validation

  1. Assemble Breadboard: Build physical circuit matching simulation
  2. Flash Firmware: Upload simulated code to real hardware
  3. Initial Testing: Verify basic functionality
  4. Debug Differences: Address any simulation vs. reality gaps

Duration: Days

Cost: $20-200 (components)

26.3.3 Phase 3: Optimization

  1. Performance Tuning: Optimize on real hardware
  2. Edge Case Testing: Test failure modes
  3. Environmental Testing: Temperature, power, interference
  4. Long-Term Stability: Multi-day/week tests

Duration: Weeks to months

Cost: $50-500 (additional components, test equipment)

26.3.4 Phase 4: Production

  1. PCB Design: Create custom PCB from proven design
  2. Manufacturing: Produce boards
  3. Flashing and Testing: Automated test fixtures
  4. Deployment: Field installation

Duration: Months

Cost: $500-$10,000+ (depends on quantity)

Key Insight: Simulation enables 80-90% of development without hardware, reserving expensive physical testing for validation and optimization.

26.4 Best Practices

Estimated time: ~10 min | Intermediate | P13.C03.U08

26.4.1 Start with Simulation

  • Design circuits in simulator first
  • Validate logic before hardware investment
  • Share designs with team/community for review

26.4.2 Modular Design

  • Write testable functions (pure logic)
  • Separate hardware abstraction layer
  • Enable unit testing in simulation
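
The "pure logic" idea above can be sketched as a decision function that never touches hardware, so the same code runs in unit tests, in simulation, and (ported) on the target. The names here (`heater_command`, `FakeRelay`) and the hysteresis band are illustrative, not from any specific framework.

```python
# Pure control logic: no hardware calls, so it behaves identically
# in unit tests, SIL simulation, and on-target firmware.
def heater_command(temp_c, setpoint_c, hysteresis_c, heater_on):
    """Return the desired heater state for the current reading."""
    if temp_c <= setpoint_c - hysteresis_c:
        return True          # too cold: turn heater on
    if temp_c >= setpoint_c:
        return False         # at or above setpoint: turn heater off
    return heater_on         # inside the band: keep current state

# The hardware abstraction layer is the only place that touches pins;
# in simulation it is replaced by a stub like this one.
class FakeRelay:
    def __init__(self):
        self.state = False
    def set(self, on):
        self.state = on

relay = FakeRelay()
relay.set(heater_command(18.0, 22.0, 1.0, relay.state))
print(relay.state)  # True: heater turns on below the hysteresis band
```

Because `heater_command` is pure, every branch can be unit-tested in milliseconds, which is what makes the 70% unit-test layer of the pyramid achievable.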

26.4.3 Document Assumptions

  • Note differences between simulation and reality
  • Document unsimulated features
  • Plan physical testing for critical aspects

26.4.4 Version Control

  • Save simulation projects in git
  • Track firmware changes alongside circuit design
  • Enable collaboration

26.4.5 Continuous Integration

Integrate simulation into CI/CD:

# GitHub Actions example
name: Firmware Test

on: [push]

jobs:
  simulate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Renode
        run: |
          wget https://builds.renode.io/renode-latest.linux-portable.tar.gz
          tar xzf renode-latest.linux-portable.tar.gz
      - name: Run tests
        run: renode-test firmware.resc

26.4.6 Transition Planning

When moving from simulation to physical hardware, plan systematically to catch differences between virtual and real environments. A comprehensive transition checklist is provided in the Testing and Validation Guide section below, covering hardware-specific validation, timing, resource management, and production readiness checks.

26.5 Testing and Validation Guide

Comprehensive testing strategies ensure simulated designs translate successfully to production hardware.

26.5.1 Testing Pyramid for IoT

Effective IoT testing follows a layered approach, balancing automation, cost, and real-world validation:

| Level | Scope | Tools | Automation | Execution Time |
| --- | --- | --- | --- | --- |
| Unit Tests | Individual functions | PlatformIO, Unity | High (95%+) | Seconds |
| Integration | Component interaction | HIL rigs | Medium (60-80%) | Minutes |
| System | End-to-end flow | Testbeds | Medium (40-60%) | Hours |
| Field | Real environment | Pilot deployment | Low (10-20%) | Days-Weeks |

Pyramid Strategy:

  • 70% Unit Tests: Fast, cheap, catches logic bugs early in simulation
  • 20% Integration Tests: Validates component interactions with hardware-in-the-loop
  • 9% System Tests: Full system validation on physical testbeds
  • 1% Field Tests: Real-world environmental validation with pilot deployments

Calculate the cost-effectiveness of the IoT testing pyramid for a 1,000-unit production run:

Example Calculation (default values):

Unit tests (70% of test effort): \[Cost_{unit} = N_{tests} \times t_{exec} \times rate_{dev} = 500 \times 2 \text{ sec} \times \$0/\text{hour (automated)} = \$0\]

Integration tests (20% of effort, HIL rig required): \[Cost_{integ} = N_{HIL} \times cost_{rig} + t_{setup} \times rate_{dev} = 5 \times \$200 + 20 \text{ hours} \times \$50/\text{hour} = \$2,000\]

System tests (9% of effort, full physical testbeds): \[Cost_{system} = N_{testbeds} \times cost_{hw} + t_{testing} \times rate_{dev} = 10 \times \$500 + 40 \text{ hours} \times \$50/\text{hour} = \$7,000\]

Field tests (1% of effort, pilot deployment): \[Cost_{field} = N_{pilots} \times cost_{deploy} = 50 \times \$100 = \$5,000\]

\[Cost_{total} = \$0 + \$2,000 + \$7,000 + \$5,000 = \$14,000 \text{ for 1,000-unit validation}\]

Bug cost comparison: Finding a bug in unit tests costs $0 (automated). Finding the same bug after 1,000 units ship costs $50 (labor) × 1,000 = $50,000 in field support. The pyramid prevents $36,000 in losses per critical bug ($50,000 field cost - $14,000 testing investment = $36,000 saved).
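
For readers who want to vary the assumptions, the worked example above can be expressed as a short script. The defaults are the example's own figures (5 HIL rigs at $200, 10 testbeds at $500, a $50/hour developer rate, and so on), not fixed industry constants.

```python
def pyramid_cost(n_hil_rigs=5, rig_cost=200, integ_setup_h=20,
                 n_testbeds=10, testbed_cost=500, system_h=40,
                 n_pilots=50, deploy_cost=100, dev_rate=50):
    """Total validation cost for the example 1,000-unit run.

    Unit tests are fully automated, so their marginal cost is $0.
    """
    unit = 0
    integration = n_hil_rigs * rig_cost + integ_setup_h * dev_rate
    system = n_testbeds * testbed_cost + system_h * dev_rate
    field = n_pilots * deploy_cost
    return unit + integration + system + field

print(pyramid_cost())  # 14000, matching the worked example
```

Changing any one parameter (say, dropping the pilot deployment) immediately shows its share of the $14,000 total.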

26.5.2 Hardware-in-the-Loop (HIL) Testing

Bridge simulation and physical hardware for comprehensive validation:

| Component | Purpose | Example | Setup Cost |
| --- | --- | --- | --- |
| DUT (Device Under Test) | Target hardware | ESP32 development board | $10-50 |
| Sensor Simulator | Generate test inputs | DAC + signal generator software | $20-100 |
| Network Simulator | Control connectivity | Raspberry Pi with traffic shaping | $50-150 |
| Power Monitor | Measure consumption | INA219 current sensor | $10-30 |
| Test Controller | Orchestrate tests | Python scripts on PC | $0 (software) |
| Environmental Chamber | Temperature/humidity | Programmable chamber (optional) | $500-5000 |

HIL Architecture:

Test Controller (PC running Python)
    |
    +-> Sensor Simulator (DAC outputs fake sensor signals)
    +-> Network Simulator (Raspberry Pi controls Wi-Fi/MQTT)
    +-> Power Monitor (INA219 measures current draw)
    +-> DUT (ESP32 firmware under test)
            |
        Serial Monitor (capture logs, responses)

Example HIL Test Script (Python):

import serial
import time

# Setup -- initialize_dac(), INA219, and read_mqtt_publish() are
# project-specific test-harness helpers, not library functions.
dut = serial.Serial('/dev/ttyUSB0', 115200)  # DUT serial console
sensor_sim = initialize_dac()                # wraps the DAC sensor simulator
power_monitor = INA219()                     # wraps the INA219 current monitor

# Test case: temperature threshold trigger.
# Assumes the firmware maps 1.5 V -> 25 C and 2.0 V -> 30 C.
sensor_sim.set_voltage(1.5)  # simulate 25 C
time.sleep(2)                # allow one publish cycle
assert read_mqtt_publish() == "25.0", "Expected temp 25C"

sensor_sim.set_voltage(2.0)  # simulate 30 C
time.sleep(2)
assert read_mqtt_publish() == "30.0", "Expected temp 30C"

# Validate power consumption stays within budget
current_mA = power_monitor.read_current()
assert current_mA < 150, f"Excessive current: {current_mA}mA"

26.5.3 Test Cases Checklist

Systematically validate all critical functionality before production deployment:

Functional Tests:

Stress Tests:

Environmental Tests:

Security Tests:

Power Consumption Tests:

26.5.4 Test Report Template

Document every test execution for traceability and debugging:

# IoT Device Test Report

**Test:** [Test Name - e.g., "Temperature Sensor Accuracy Validation"]
**Date:** [YYYY-MM-DD]
**Tester:** [Name]
**Device:** [Model, Hardware Revision, Firmware Version]
**Result:** [PASS / FAIL / INCONCLUSIVE]

## Test Environment
- Temperature: [C]
- Humidity: [%]
- Power Supply: [Voltage, Source]
- Network: [Wi-Fi SSID, MQTT Broker URL]

## Test Steps
1. [Action taken - e.g., "Set DHT22 to read 25.0C using calibrated reference"]
2. [Action taken - e.g., "Wait 5 seconds for sensor stabilization"]
3. [Action taken - e.g., "Read value from serial monitor"]
4. [Action taken - e.g., "Compare reading to expected value +/-0.5C"]

## Expected Result
[Detailed description of expected behavior]

## Actual Result
[Detailed description of observed behavior]

## Pass/Fail Criteria
- Reading accuracy: +/-0.5C -> PASS/FAIL
- Response time: <2 seconds -> PASS/FAIL
- MQTT topic: 'sensors/temp' -> PASS/FAIL

## Evidence
- Screenshot: `test_screenshots/temp_accuracy_001.png`
- Serial log: `logs/temp_test_2025-12-12_14-30.txt`
- MQTT capture: `pcap/mqtt_publish_temp.pcap`

## Notes
- Sensor showed slight drift after 1-hour operation
- Recommended: Add periodic calibration check in production firmware

## Follow-Up Actions
- [ ] Investigate long-term drift (schedule 24-hour stability test)
- [ ] Document calibration procedure in user manual

26.5.5 Automated Testing with CI/CD

Integrate simulation testing into continuous integration pipelines:

GitHub Actions Example (PlatformIO + Wokwi):

name: IoT Firmware Test

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install PlatformIO
        run: |
          pip install platformio

      - name: Run Unit Tests
        run: |
          cd firmware
          pio test -e native

      - name: Build Firmware
        run: |
          cd firmware
          pio run -e esp32dev

      - name: Run Wokwi Simulation Tests
        run: |
          npm install -g @wokwi/cli
          wokwi-cli simulate --timeout 30s wokwi.toml

      - name: Upload Test Results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-results
          path: firmware/.pio/test/

Benefits of Automated Testing:

  • Catch regressions immediately (every code commit tested)
  • Consistent test environment (reproducible results)
  • Fast feedback loop (results in <5 minutes)
  • Documentation of test history (pass/fail trends over time)
  • Confidence for code reviews (tests must pass before merge)

26.5.6 Simulation-to-Hardware Transition Checklist

Before deploying firmware validated in simulation to physical hardware, verify these critical differences:

Hardware-Specific Validation:

Timing and Performance:

Resource Management:

Production Readiness:

26.6 Knowledge Check

Test your understanding of simulation-driven development concepts.

Scenario: Testing ESP32 thermostat firmware with simulated temperature sensor inputs and real relay outputs.

Hardware Setup:

  • DUT: ESP32 running thermostat firmware
  • Sensor Simulator: DAC (MCP4725) generating voltage = simulated temperature
  • Power Monitor: INA219 measuring current
  • Test Controller: Python script on PC

Test Case: Heating Cycle Validation

# test_thermostat_heating.py
import serial
import smbus
import time

# I2C devices
bus = smbus.SMBus(1)
DAC_ADDR = 0x62  # MCP4725
INA219_ADDR = 0x40

# Serial to ESP32
esp32 = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

def set_simulated_temp(celsius):
    """Convert temperature to DAC voltage (100 mV per °C)"""
    voltage = celsius * 0.1
    dac_value = round((voltage / 3.3) * 4096)  # 12-bit MCP4725 code
    bus.write_i2c_block_data(DAC_ADDR, 0x40, [(dac_value >> 4) & 0xFF, (dac_value << 4) & 0xFF])
    print(f"Simulated temp: {celsius}°C (DAC: {dac_value})")

def read_power():
    """Read current from INA219"""
    raw = bus.read_word_data(INA219_ADDR, 0x04)
    current_mA = (raw >> 8 | (raw & 0xFF) << 8) * 0.1
    return current_mA

def read_esp32_state():
    """Parse ESP32 serial output"""
    line = esp32.readline().decode().strip()
    if "RELAY:" in line:
        return "ON" if "ON" in line else "OFF"
    return None

# Test Sequence
print("=== Starting HIL Test ===")

# Set thermostat setpoint to 22°C
esp32.write(b"SET 22\n")
time.sleep(1)

# Test 1: Cold start (18°C → should turn heater ON)
print("\nTest 1: Cold Start")
set_simulated_temp(18.0)
time.sleep(5)

relay_state = read_esp32_state()
assert relay_state == "ON", f"Expected relay ON, got {relay_state}"
current = read_power()
assert current > 100, f"Expected >100mA (relay active), got {current:.1f}mA"
print("✓ Heater activated correctly")

# Test 2: Heat up to setpoint (18→22°C)
print("\nTest 2: Gradual Warm-up")
for temp in range(18, 23):
    set_simulated_temp(float(temp))
    time.sleep(10)
    print(f"  Temperature: {temp}°C, Relay: {read_esp32_state()}")

# At 22°C, relay should turn OFF
relay_state = read_esp32_state()
assert relay_state == "OFF", f"Expected relay OFF at setpoint, got {relay_state}"
print("✓ Heater deactivated at setpoint")

# Test 3: Overshoot (22→24°C) - ensure no overheat
print("\nTest 3: Overshoot Protection")
set_simulated_temp(24.0)
time.sleep(5)
relay_state = read_esp32_state()
assert relay_state == "OFF", "Heater should remain OFF above setpoint"
print("✓ Overshoot protection working")

# Test 4: Rapid temperature drop (simulating door open)
print("\nTest 4: Rapid Drop Response")
set_simulated_temp(16.0)  # Door opened, cold air rushes in
time.sleep(2)
relay_state = read_esp32_state()
assert relay_state == "ON", "Heater should respond quickly to sudden drop"
print("✓ Fast response to temperature drop")

print("\n=== All HIL Tests PASSED ===")

Results:

=== Starting HIL Test ===

Test 1: Cold Start
Simulated temp: 18.0°C (DAC: 2234)
✓ Heater activated correctly

Test 2: Gradual Warm-up
  Temperature: 18°C, Relay: ON
  Temperature: 19°C, Relay: ON
  Temperature: 20°C, Relay: ON
  Temperature: 21°C, Relay: ON
  Temperature: 22°C, Relay: OFF
✓ Heater deactivated at setpoint

Test 3: Overshoot Protection
Simulated temp: 24.0°C (DAC: 2979)
✓ Overshoot protection working

Test 4: Rapid Drop Response
Simulated temp: 16.0°C (DAC: 1986)
✓ Fast response to temperature drop

=== All HIL Tests PASSED ===

Bug Found: During testing, discovered firmware had 5-second delay before checking temperature again after relay state change. This caused 10-second response time to sudden drops. Fixed by reducing delay to 1 second.

Value: HIL testing revealed timing issue that pure software testing couldn’t catch. Real sensor would take hours to naturally vary temperature; HIL completed in 2 minutes.
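
As a back-of-the-envelope model of that timing bug (my reading of the report above, not the actual firmware logic): if a temperature step can land just after a sensor read, the worst case is one full poll interval plus the post-relay-change delay before the firmware reacts. Assuming a 5-second poll interval, that reproduces the observed 10-second response.

```python
def worst_case_response_s(poll_interval_s, post_change_delay_s):
    """Upper bound on time from a sudden temperature step to relay action:
    the step can occur just after a read (one full poll interval passes),
    and the firmware then waits post_change_delay_s before re-checking."""
    return poll_interval_s + post_change_delay_s

print(worst_case_response_s(5, 5))  # 10 s: the behavior seen before the fix
print(worst_case_response_s(5, 1))  # 6 s: bound after reducing the delay
```

The point of the model is that HIL testing measures this bound directly, while pure software tests with instant virtual sensors never exercise it.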

How to allocate testing effort across the IoT testing pyramid:

| Project Type | Unit Tests | Integration Tests | System Tests | Field Tests | Rationale |
| --- | --- | --- | --- | --- | --- |
| Prototype/MVP | 40% | 30% | 20% | 10% | Fast iteration, basic validation |
| Consumer Product | 70% | 20% | 8% | 2% | High volume, preventative bug catching |
| Industrial IoT | 60% | 25% | 10% | 5% | Reliability critical, controlled environment |
| Safety-Critical (Medical/Automotive) | 50% | 30% | 15% | 5% | Regulatory compliance, extensive validation |
| Research/Academic | 30% | 40% | 20% | 10% | Exploration, protocol development |

Decision Matrix:

Answer these questions to determine your distribution:

Q1: What is the cost of field failure?

  • Low (hobbyist project, easy to update) → More field testing acceptable
  • Medium (consumer product, OTA updates) → Standard pyramid (70/20/8/2)
  • High (industrial, hard to access) → More integration/system testing
  • Critical (safety, lives at risk) → Maximum test coverage at all levels

Q2: How mature is your technology stack?

  • Proven libraries, well-tested protocols → Standard pyramid
  • New protocols, custom hardware → Increase integration testing (35%)
  • Bleeding edge, unproven → Increase all testing levels

Q3: Team size and expertise?

  • Solo developer → Focus on unit tests (fast feedback)
  • Small team (2-5) → Standard pyramid with CI/CD
  • Large team (10+) → Can afford more system/field testing

Example: Smart Irrigation System (1,000 units deployed)

Project Context:

  • Consumer product (not safety-critical)
  • Uses proven ESP32 + standard sensors
  • Team of 3 developers
  • Field failures cost $50/unit (service call)

Default Allocation (70/20/8/2 pyramid):

  • Unit Tests: 70% × 200 = 140 hours
    • Test all sensor reading logic
    • Test water scheduling algorithms
    • Test MQTT message formatting
    • Target: 85% code coverage
  • Integration Tests: 20% × 200 = 40 hours
    • Test ESP32 + sensor communication (I2C)
    • Test MQTT connection/reconnection
    • Test valve control (relay switching)
    • Target: All interfaces validated
  • System Tests: 8% × 200 = 16 hours
    • End-to-end: Sensor → Cloud → Control
    • 48-hour soak test
    • Power consumption verification
  • Field Tests: 2% × 200 = 4 hours
    • Beta deployment to 10 users
    • Monitor for 2 weeks
    • Collect crash logs and feedback
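
The allocation above is simply the total budget times each level's share; a tiny helper (hypothetical, using the consumer-product split from the table) makes that explicit.

```python
def allocate_hours(total_hours, shares):
    """Split a testing budget across pyramid levels.
    shares: mapping of level name -> fraction (must sum to 1.0)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {level: round(total_hours * frac, 1)
            for level, frac in shares.items()}

consumer_pyramid = {"unit": 0.70, "integration": 0.20,
                    "system": 0.08, "field": 0.02}
hours = allocate_hours(200, consumer_pyramid)
print(hours)  # {'unit': 140.0, 'integration': 40.0, 'system': 16.0, 'field': 4.0}
```

Swapping in a different row of the table (e.g. the 40/30/20/10 prototype split) re-budgets the same 200 hours in one line.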

ROI Calculation:

  • 200 hours @ $50/hour = $10,000 testing cost
  • Catches 95% of bugs before deployment
  • Prevents 50 service calls × $50 = $2,500 savings in first month
  • Breaks even after 4 months

Common Mistake: Testing Only the Happy Path

The Mistake: A developer tests an MQTT-based IoT sensor with perfect Wi-Fi and continuous cloud connectivity. Firmware passes all tests. In production, devices disconnect randomly and never reconnect, requiring power cycles.

Why It Happens:

Developers test what they expect to work, not what can go wrong:

Happy Path Testing (What Beginners Test):

void loop() {
  float temp = readSensor();
  mqttPublish("sensors/temp", temp);
  delay(60000);
}

Test case:

  1. Turn on device ✓
  2. Verify MQTT messages arrive ✓
  3. Declare “works!” ✓

What Wasn’t Tested:

  • Wi-Fi disconnection (router reboot)
  • MQTT broker downtime
  • Network congestion (packet loss)
  • Power brownout (voltage sag)
  • Memory leaks (long-running)
  • Clock drift (NTP sync failure)
  • Sensor read failure
  • Full flash storage

Real-World Failure Modes:

| Scenario | Frequency | Impact if Untested |
| --- | --- | --- |
| Wi-Fi disconnect | Daily | Device stuck, no data |
| MQTT broker down | Weekly | Queue overflow, crash |
| Sensor returns NaN | 0.1% of reads | Publish invalid data |
| Flash write fails | After 100k cycles | Config lost, factory reset needed |
| Memory leak | After 72 hours | Watchdog reset loop |
| Clock not synced | After power loss | Timestamps wrong |

The Fix: Chaos Engineering for IoT

Test failure modes explicitly:

# Chaos test script -- connect_to_device(), wifi_router, firewall, and
# mqtt_subscriber below are test-harness fixtures, not library APIs.
import math
import pytest
import time

def test_wifi_disconnect():
    """Simulate Wi-Fi drop and recovery"""
    device = connect_to_device()

    # Normal operation
    assert device.is_connected()
    assert device.mqtt_status() == "Connected"

    # Kill Wi-Fi for 30 seconds
    wifi_router.disable()
    time.sleep(30)

    # Device should buffer data locally
    assert device.buffer_size() > 0

    # Restore Wi-Fi
    wifi_router.enable()
    time.sleep(60)  # Allow reconnection

    # Device should reconnect and flush buffer
    assert device.is_connected()
    assert device.mqtt_status() == "Connected"
    assert device.buffer_size() == 0  # Buffer cleared

def test_mqtt_broker_down():
    """MQTT broker unreachable"""
    device = connect_to_device()

    # Block MQTT port with firewall
    firewall.block_port(1883)

    # Wait for keepalive timeout
    time.sleep(120)

    # Device should detect failure and retry with exponential backoff
    assert device.mqtt_status() == "Reconnecting"
    assert device.retry_count() > 0
    assert device.retry_delay() >= 30  # Backoff to 30+ seconds

    # Restore connectivity
    firewall.unblock_port(1883)
    time.sleep(60)

    # Should eventually reconnect
    assert device.mqtt_status() == "Connected"

def test_sensor_failure():
    """Sensor returns NaN"""
    device = connect_to_device()

    # Disconnect sensor physically (or inject NaN)
    device.inject_sensor_error()

    # Device should NOT publish NaN
    messages = mqtt_subscriber.get_last_messages(5)
    for msg in messages:
        assert not math.isnan(msg['temperature'])

    # Device should log error
    assert "Sensor read failed" in device.get_logs()

Production-Ready Code (Handling Failures):

// State machine for robust MQTT handling
enum State { WIFI_CONNECTING, MQTT_CONNECTING, CONNECTED, ERROR };
State currentState = WIFI_CONNECTING;
int retryCount = 0;
unsigned long lastRetry = 0;

void loop() {
  switch(currentState) {
    case WIFI_CONNECTING:
      if (WiFi.status() == WL_CONNECTED) {
        currentState = MQTT_CONNECTING;
        retryCount = 0;
      } else if (millis() - lastRetry > 30000) {
        WiFi.reconnect();
        lastRetry = millis();
        retryCount++;
        if (retryCount > 10) {
          ESP.restart();  // Reboot after 10 failures
        }
      }
      break;

    case MQTT_CONNECTING:
      if (client.connect("sensor-01")) {
        currentState = CONNECTED;
        retryCount = 0;
      } else if (millis() - lastRetry > (5000 << retryCount)) {  // Exponential backoff
        lastRetry = millis();
        retryCount++;
      }
      break;

    case CONNECTED:
      if (!client.connected()) {
        currentState = MQTT_CONNECTING;
      } else {
        float temp = readSensor();
        if (!isnan(temp)) {
          client.publish("sensors/temp", String(temp).c_str());
        } else {
          Serial.println("ERROR: Sensor returned NaN");
        }
      }
      client.loop();
      break;
  }
  delay(100);
}
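
The `5000 << retryCount` expression in the sketch above doubles the MQTT retry interval on every failure. A quick model in Python shows the resulting schedule; the cap is an addition of mine (the C snippet has none), which avoids the shift growing without bound during a long outage.

```python
def backoff_schedule_ms(base_ms=5000, retries=6, cap_ms=300_000):
    """Delays produced by base_ms << retry_count, with an explicit cap.
    The Arduino sketch above omits the cap; adding one prevents the
    interval from overflowing during multi-hour broker outages."""
    return [min(base_ms << n, cap_ms) for n in range(retries)]

print(backoff_schedule_ms())  # [5000, 10000, 20000, 40000, 80000, 160000]
```

After six failures the device is retrying every ~2.7 minutes instead of hammering the broker every 5 seconds, which is the whole point of exponential backoff.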

Failure Mode Checklist:

Remember: Murphy’s Law governs IoT. If something CAN fail, it WILL fail—usually at 3 AM on a Sunday. Test failure modes, not just success.

26.8 Summary

  • Simulation-driven workflow enables 80-90% of development without hardware through four phases: design, validation, optimization, and production
  • Testing pyramid balances automation and real-world validation: 70% unit tests, 20% integration, 9% system, 1% field tests
  • Hardware-in-the-loop (HIL) testing bridges simulation and physical hardware for comprehensive validation
  • Best practices include starting with simulation, modular design, documenting assumptions, version control, and CI/CD integration
  • Transition checklists ensure successful migration from simulation to physical hardware deployment

26.10 Concept Relationships

Prerequisites:

Builds Toward:

Complements:

26.11 See Also

Common Pitfalls

Writing tests to match existing code produces tests that pass the implementation, not tests that verify requirements. Tests written after implementation tend to miss edge cases the developer didn’t consider when writing the code. Write test cases from requirements before writing implementation code.

100% line coverage can be achieved by calling every function once without checking the results. Tests that don’t assert correctness provide false confidence. Every test must have at least one assertion that fails if the behavior is wrong.

Firmware that passes all tests on a development board with 2× the production MCU’s flash and RAM, or running at room temperature, may fail on the constrained production hardware at -20°C or +70°C. Always run the final test suite on actual production hardware under the full rated environmental range.

IoT devices run for years. A 24-hour burn-in test catches obvious failures but misses slow degradation from repeated flash erase cycles, capacitor aging, connector fretting corrosion, and firmware memory leaks. Plan for HALT (Highly Accelerated Life Test) and HASS testing before volume production.

26.12 What’s Next

The next section covers Programming Paradigms and Tools, which explores the various approaches and utilities for organizing embedded software. Understanding different programming paradigms helps you choose the right architecture for your specific IoT application.
