1569  Test Automation and CI/CD for IoT

1569.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Design CI/CD Pipelines: Build automated testing pipelines for IoT firmware
  • Set Up Device Farms: Create infrastructure for automated hardware testing
  • Implement Test Metrics: Track coverage, quality, and testing effectiveness
  • Maintain Test Documentation: Create traceability matrices for compliance

1569.2 Prerequisites

Before diving into this chapter, you should be familiar with the testing fundamentals covered in the preceding chapters.

Note: Key Takeaway

In one sentence: Automated testing catches bugs before they reach production and enables rapid iteration.

Remember this rule: If it’s not automated, it’s not tested. Manual testing doesn’t scale and doesn’t prevent regressions.


1569.3 Continuous Integration for IoT

Continuous integration runs your test suite automatically on every code commit, so regressions surface minutes after they are introduced rather than weeks later in the field.

1569.3.1 CI Pipeline Stages

Figure 1569.1: CI/CD pipeline for IoT firmware: build, unit tests, QEMU simulation, hardware validation, and security scan before release

1569.3.2 GitHub Actions CI Pipeline

# .github/workflows/firmware-ci.yml
name: Firmware CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install PlatformIO
        run: |
          pip install platformio

      - name: Build firmware
        run: |
          pio run -e esp32dev

      - name: Upload build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: firmware.bin
          path: .pio/build/esp32dev/firmware.bin

  unit-tests:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3

      - name: Install Unity test framework
        run: |
          git clone https://github.com/ThrowTheSwitch/Unity.git

      - name: Run unit tests
        run: |
          # --coverage instruments the binary so gcov/lcov can report later;
          # -I Unity/src is needed to resolve #include "unity.h"
          gcc --coverage -I Unity/src test/*.c Unity/src/unity.c -o test_runner
          ./test_runner

      - name: Generate coverage report
        run: |
          sudo apt-get update && sudo apt-get install -y lcov
          lcov --capture --directory . --output-file coverage.info
          genhtml coverage.info --output-directory coverage_html

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: coverage.info

  qemu-simulation:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3

      - name: Download firmware
        uses: actions/download-artifact@v3
        with:
          name: firmware.bin

      - name: Install QEMU
        run: |
          # Ubuntu ships qemu-system-xtensa in the qemu-system-misc package.
          # CAVEAT: the stock build lacks the esp32 machine type; in practice
          # you would install Espressif's QEMU fork (github.com/espressif/qemu),
          # which adds -M esp32.
          sudo apt-get update && sudo apt-get install -y qemu-system-misc

      - name: Run firmware in QEMU
        run: |
          # Boot headless and capture serial output; Espressif's QEMU
          # normally boots a full flash image via -drive, so -kernel with
          # a bare app image is a simplification.
          timeout 60s qemu-system-xtensa \
            -M esp32 -kernel firmware.bin \
            -serial stdio > qemu_output.txt || true

      - name: Validate QEMU output
        run: |
          grep "Boot successful" qemu_output.txt
          grep "Wi-Fi connected" qemu_output.txt

  hardware-test:
    runs-on: self-hosted  # Requires device farm
    needs: build
    steps:
      - uses: actions/checkout@v3

      - name: Download firmware
        uses: actions/download-artifact@v3
        with:
          name: firmware.bin

      - name: Flash to test device
        run: |
          # 0x10000 is the default ESP32 application partition offset
          esptool.py --port /dev/ttyUSB0 write_flash 0x10000 firmware.bin

      - name: Run integration tests
        run: |
          # --device is a custom option defined in the test suite's conftest.py
          pytest tests/integration/ -v --device /dev/ttyUSB0

  security-scan:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v3

      - name: Download firmware
        uses: actions/download-artifact@v3
        with:
          name: firmware.bin

      - name: Scan for secrets
        run: |
          # TruffleHog v3 (which provides the `filesystem` subcommand) is a
          # Go binary; install it via the project's release script
          curl -sSfL https://raw.githubusercontent.com/trufflesecurity/trufflehog/main/scripts/install.sh | sh -s -- -b /usr/local/bin
          trufflehog filesystem . --json > secrets_report.json

      - name: Scan for vulnerabilities
        run: |
          sudo apt-get update && sudo apt-get install -y binwalk
          binwalk -e firmware.bin   # extracts to _firmware.bin.extracted/ by default
          # firmwalker is a shell script; clone it and point it at the
          # extracted filesystem
          git clone https://github.com/craigz28/firmwalker.git
          ./firmwalker/firmwalker.sh _firmware.bin.extracted/ > vulnerabilities.txt

      - name: Fail if secrets found
        run: |
          if [ -s secrets_report.json ]; then
            echo "Hardcoded secrets detected!"
            cat secrets_report.json
            exit 1
          fi

1569.4 Device Farm for Hardware Testing

Problem: Firmware must ultimately be validated on real hardware, but standard CI runners are virtual machines with no devices attached.

Solution: A device farm: racks of real devices wired into the CI/CD infrastructure.

1569.4.1 Commercial Device Farms

| Service                 | Focus                               | Pricing             |
|-------------------------|-------------------------------------|---------------------|
| AWS Device Farm         | Cloud-based testing on real devices | Pay per minute      |
| Firebase Test Lab       | Android/iOS app testing             | Free tier available |
| Golioth Device Test Lab | IoT-specific testing                | Enterprise          |

1569.4.2 DIY Device Farm Setup

| Component              | Purpose                  | Example                    |
|------------------------|--------------------------|----------------------------|
| USB hubs               | Connect multiple devices | 20-port powered USB hub    |
| Power relays           | Reboot devices remotely  | USB-controlled relay board |
| UART adapters          | Serial console access    | FTDI FT232 (x10 devices)   |
| Wi-Fi access point     | Isolated test network    | Raspberry Pi 4 as AP       |
| Test automation server | Run tests in parallel    | Jenkins on Ubuntu server   |
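
Tying these components together is mostly glue code. Below is a minimal sketch of the two primitives a farm needs per device: power-cycling a board through a USB relay and watching its UART console for a boot banner. It assumes the pyserial package; the usbrelay command and the RELAY_1 name are placeholders for whatever your relay board ships with, and "Boot successful" stands in for whatever banner your firmware prints at startup.

# farm_glue.py - DIY farm primitives (sketch; relay CLI is a placeholder)
import subprocess
import time

import serial  # pip install pyserial

def power_cycle(relay_id: str) -> None:
    """Hard-reboot a board by toggling its USB-controlled relay."""
    subprocess.run(["usbrelay", f"{relay_id}=0"], check=True)  # power off
    time.sleep(1)
    subprocess.run(["usbrelay", f"{relay_id}=1"], check=True)  # power on

def wait_for_boot(port: str, banner: str = "Boot successful",
                  timeout: float = 30.0) -> bool:
    """Read the UART console until the boot banner appears or time runs out."""
    deadline = time.monotonic() + timeout
    with serial.Serial(port, 115200, timeout=1) as console:
        while time.monotonic() < deadline:
            line = console.readline().decode(errors="replace")
            if banner in line:
                return True
    return False

if __name__ == "__main__":
    power_cycle("RELAY_1")
    assert wait_for_boot("/dev/ttyUSB0"), "Device did not boot"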

1569.4.3 Device Farm Test Example

# device_farm_test.py
import pytest
from device_farm import DeviceFarm

@pytest.fixture(scope="module")
def farm():
    return DeviceFarm(config="farm_config.yml")

def test_firmware_on_all_devices(farm):
    """Flash and test firmware on all available devices"""

    devices = farm.get_available_devices()  # Returns 10 ESP32 dev boards
    assert len(devices) >= 10, "Not enough devices in farm"

    results = []
    for device in devices:
        # Flash firmware
        farm.flash_device(device, "firmware.bin")

        # Reboot
        farm.reboot_device(device)

        # Run test suite
        test_result = farm.run_tests(device, timeout=300)
        results.append({
            'device_id': device.id,
            'passed': test_result.passed,
            'failed': test_result.failed
        })

    # All devices must pass
    failures = [r for r in results if r['failed'] > 0]
    assert len(failures) == 0, f"Devices failed: {failures}"
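
The loop above handles boards one at a time. Because the devices are independent, a thread pool can run them concurrently and cut wall-clock time roughly by the number of boards. A sketch, assuming the same hypothetical DeviceFarm API and that its methods are safe to call from multiple threads:

# Parallel variant (sketch): one worker per board
from concurrent.futures import ThreadPoolExecutor

def flash_and_test(farm, device):
    farm.flash_device(device, "firmware.bin")
    farm.reboot_device(device)
    result = farm.run_tests(device, timeout=300)
    return {'device_id': device.id, 'passed': result.passed,
            'failed': result.failed}

def test_firmware_on_all_devices_parallel(farm):
    devices = farm.get_available_devices()
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        results = list(pool.map(lambda d: flash_and_test(farm, d), devices))

    failures = [r for r in results if r['failed'] > 0]
    assert not failures, f"Devices failed: {failures}"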

1569.5 Test Metrics and Documentation

1569.5.1 Key Test Metrics

Track these metrics to measure test effectiveness:

| Metric                     | Formula                                        | Target            | Purpose                      |
|----------------------------|------------------------------------------------|-------------------|------------------------------|
| Code Coverage              | (Lines executed / Total lines) × 100%          | 80%+              | Ensure adequate test breadth |
| Defect Density             | Bugs found / 1,000 lines of code               | <5 per KLOC       | Measure code quality         |
| Mean Time to Detect (MTTD) | Average time from bug introduction to detection | <1 week          | Measure test effectiveness   |
| Test Pass Rate             | (Passed tests / Total tests) × 100%            | >95%              | Identify flaky tests         |
| Field Failure Rate         | Field failures / Devices deployed              | <1% in first year | Validate pre-release testing |
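
These formulas are simple enough to compute directly from CI artifacts. A minimal sketch, using illustrative counts that match the dashboard below:

# metrics.py - compute test metrics from raw counts (illustrative values)
def code_coverage(lines_executed: int, total_lines: int) -> float:
    return 100.0 * lines_executed / total_lines

def defect_density(bugs_found: int, total_loc: int) -> float:
    return bugs_found / (total_loc / 1000)  # bugs per KLOC

def pass_rate(passed: int, total: int) -> float:
    return 100.0 * passed / total

print(f"Coverage:       {code_coverage(13050, 15000):.0f}%")    # 87%
print(f"Defect density: {defect_density(23, 15000):.1f}/KLOC")  # 1.5
print(f"Pass rate:      {pass_rate(84, 89):.0f}%")              # 94%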

1569.5.2 Metrics Dashboard Example

Firmware Version 2.3.1 - Test Metrics

Code Coverage:
  Unit tests: 87% (target: 80%) PASS
  Integration tests: 42% (additional coverage)
  Total coverage: 92%

Defect Density:
  Total LOC: 15,000
  Bugs found in testing: 23
  Density: 1.5 bugs/KLOC PASS (target: <5)

Test Execution:
  Unit tests: 1,247 tests, 1,247 passed (100%) PASS
  Integration tests: 89 tests, 84 passed (94%) WARNING
  End-to-end tests: 12 tests, 10 passed (83%) FAIL

MTTD:
  Average: 4.2 days PASS (target: <7 days)
  Longest: 18 days (memory leak in sleep mode)

Field Metrics (from v2.3.0):
  Deployed devices: 10,000
  Field failures: 47 (0.47%) PASS
  Top failure: Wi-Fi reconnection timeout (18 devices)

1569.6 Requirements Traceability

Why traceability matters:

  • Regulatory compliance: FDA and automotive (ISO 26262) standards require proof of testing
  • Audit trail: Understand why tests exist and what they validate
  • Regression prevention: Ensure tests cover all requirements

1569.6.1 Traceability Matrix

| Requirement ID | Requirement                       | Test ID          | Test Type          | Status  |
|----------------|-----------------------------------|------------------|--------------------|---------|
| REQ-001        | Device boots in <5 s              | UT-015, IT-003   | Unit, Integration  | Pass    |
| REQ-002        | Wi-Fi reconnects after dropout    | IT-022, E2E-005  | Integration, E2E   | Pass    |
| REQ-003        | Battery life >2 years             | IT-030, SOAK-001 | Integration, Field | Pending |
| REQ-004        | Firmware signed with RSA-2048     | ST-012           | Security           | Pass    |
| REQ-005        | Temperature range: -40°C to +85°C | ENV-001, ENV-002 | Environmental      | Pass    |
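
One lightweight way to keep a matrix like this current is to tag each test with the requirement IDs it covers and generate the matrix from the test suite itself. A sketch using a custom pytest marker (the requirement marker is our own convention, not a pytest built-in; register it in pytest.ini to avoid unknown-marker warnings):

# Tag tests with the requirements they verify...
import pytest

@pytest.mark.requirement("REQ-001")
def test_boot_time_under_5s():
    ...

@pytest.mark.requirement("REQ-002")
def test_wifi_reconnect_after_dropout():
    ...

# ...then, in conftest.py, dump a requirement -> tests mapping
def pytest_collection_finish(session):
    matrix = {}
    for item in session.items:
        for mark in item.iter_markers(name="requirement"):
            matrix.setdefault(mark.args[0], []).append(item.nodeid)
    for req_id, tests in sorted(matrix.items()):
        print(f"{req_id}: {', '.join(tests)}")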

1569.6.2 Traceability Tools

| Tool        | Purpose                                    | Integration     |
|-------------|--------------------------------------------|-----------------|
| JIRA + Xray | Link requirements to test cases            | CI/CD reporting |
| TestRail    | Test management with requirements linking  | Automation APIs |
| Polarion    | Full ALM (Application Lifecycle Management) | Enterprise     |

1569.7 Test Strategy Optimization

1569.7.1 Tiered Testing Approach

Not all tests should run on every commit:

| Tier              | Trigger           | Duration  | Tests                       |
|-------------------|-------------------|-----------|-----------------------------|
| Tier 1 (Commit)   | Every commit      | <10 min   | Unit tests, lint, build     |
| Tier 2 (PR)       | Pull request      | <30 min   | Integration, security scan  |
| Tier 3 (Nightly)  | Scheduled         | <4 hours  | HIL, extended integration   |
| Tier 4 (Release)  | Release candidate | <24 hours | Soak test, full regression  |
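
With pytest, the tiers can be expressed as markers so each pipeline stage selects only its own tests. A sketch (the tier marker names are our convention; register them in pytest.ini):

# Tag tests by tier, then let each CI stage pick what it runs:
#   every commit:  pytest -m tier1
#   pull request:  pytest -m "tier1 or tier2"
import pytest

@pytest.mark.tier1
def test_checksum_fast():
    # stand-in for a fast Tier 1 unit test
    assert sum(b"\x01\x02\x03") % 256 == 6

@pytest.mark.tier3
def test_extended_soak():
    # stand-in for a slow nightly HIL test
    assert True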

1569.7.2 Test Selection Optimization

# Intelligent test selection based on changed files
def select_tests(changed_files):
    tests_to_run = set()

    for file in changed_files:
        if file.startswith("src/wifi/"):
            tests_to_run.add("tests/unit/test_wifi.py")
            tests_to_run.add("tests/integration/test_wifi_connectivity.py")
        elif file.startswith("src/sensor/"):
            tests_to_run.add("tests/unit/test_sensor.py")
            tests_to_run.add("tests/integration/test_sensor_accuracy.py")
        elif file.startswith("src/mqtt/"):
            tests_to_run.add("tests/unit/test_mqtt.py")
            tests_to_run.add("tests/integration/test_mqtt_publish.py")
            tests_to_run.add("tests/integration/test_mqtt_subscribe.py")

    # Always run critical path tests
    tests_to_run.add("tests/unit/test_boot.py")
    tests_to_run.add("tests/integration/test_critical_path.py")

    return list(tests_to_run)
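
In a CI job, changed_files would typically come from git. A usage sketch, assuming the PR base branch is origin/main:

# Feed select_tests from git, then hand the result to pytest
import subprocess

changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

subprocess.run(["pytest", "-v", *select_tests(changed)], check=True)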

1569.8 Knowledge Check


1569.9 Summary

Test automation enables quality at scale:

  • CI/CD Pipelines: Automated build, test, and deployment on every commit
  • Device Farms: Real hardware testing integrated into CI/CD
  • Tiered Testing: Fast tests per-commit, comprehensive tests nightly/weekly
  • Metrics: Track coverage, defect density, MTTD, and field failure rate
  • Traceability: Link requirements to tests for regulatory compliance

1569.10 What’s Next?

Continue your testing journey with the chapters that follow.