void test_temperature_alert_triggered(void) {
    // Arrange: Set mock temperature to 35°C (above threshold)
    set_mock_temperature(35.0);

    // Act: Check if alert should trigger
    bool alert = check_temperature_alert(&mock_sensor, 30.0);

    // Assert: Alert should be triggered
    TEST_ASSERT_TRUE(alert);
}

void test_sensor_unavailable_returns_error(void) {
    // Arrange: Sensor not available
    set_mock_availability(false);

    // Act: Attempt to read temperature
    SensorStatus status = read_sensor_safe(&mock_sensor);

    // Assert: Should return error status
    TEST_ASSERT_EQUAL(SENSOR_ERROR, status);
}
Benefits:
- Tests run on your laptop (no hardware required)
- Tests run in milliseconds (no I2C delays)
- Tests are deterministic (no environmental noise)
- Tests can simulate sensor failures and edge cases on demand
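These tests work because the sensor sits behind an abstraction that the test build can swap out. Below is a minimal sketch of what that mock layer might look like; the Sensor struct fields and the internal logic are illustrative assumptions, since the chapter only shows the test side of the interface.

#include <stdbool.h>

// Sensor abstraction: on target hardware this struct would be backed by the
// real I2C driver; in the test build it is backed by this in-memory mock.
typedef struct {
    float temperature_c;   // last reading, in degrees Celsius
    bool  available;       // false simulates a disconnected or failed sensor
} Sensor;

typedef enum { SENSOR_OK, SENSOR_ERROR } SensorStatus;

static Sensor mock_sensor = { .temperature_c = 0.0f, .available = true };

// Test helpers: inject whatever sensor state a test needs, instantly.
void set_mock_temperature(float celsius) { mock_sensor.temperature_c = celsius; }
void set_mock_availability(bool ok)      { mock_sensor.available = ok; }

// Code under test: pure logic with no I2C traffic, so it runs unchanged on a laptop.
bool check_temperature_alert(const Sensor *sensor, float threshold_c) {
    return sensor->temperature_c > threshold_c;
}

SensorStatus read_sensor_safe(const Sensor *sensor) {
    return sensor->available ? SENSOR_OK : SENSOR_ERROR;
}

In the firmware build, check_temperature_alert() and read_sensor_safe() link against the real driver instead of the mock, so the logic under test is identical in both environments.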
1576.6 Code Coverage Targets
What is code coverage? Percentage of code lines executed during testing.
Coverage does not equal quality: 100% coverage doesn't mean bug-free code. It means every line was executed at least once, not that every edge case was tested.
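A hypothetical example of the gap: the function below reaches 100% line coverage with a single test, yet its division-by-zero edge case is never exercised.

#include "unity.h"

// Every line of this function is executed by the single test below, yet the
// sample_count == 0 case (division by zero) is never tested.
static int average_reading(int sum, int sample_count) {
    return sum / sample_count;
}

void test_average_reading(void) {
    TEST_ASSERT_EQUAL_INT(21, average_reading(42, 2));  // 100% line coverage, one untested edge case
}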
1576.7 Worked Example: Achieving Coverage Targets
Scenario: Developing firmware for a connected insulin pump. The FDA requires documented evidence that safety-critical code paths have been thoroughly tested.
Given:
- Total firmware: 45,000 lines of C code
- Safety-critical modules: 8,200 lines (glucose calculation, dosing algorithm, alert system)
- Target: 100% branch coverage for safety-critical code, 85% for business logic
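In concrete terms, these targets mean every branch in the 8,200 safety-critical lines must be exercised, and at least 85% of the remaining 36,800 business-logic lines (roughly 31,300 lines) must be covered.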
Analysis of uncovered branches:
$ gcov dosing_algorithm.c --branch-probabilities

Uncovered branches analysis:
- Line 234: else branch (invalid sensor reading) - never executed
- Line 456: boundary case (glucose < 20 mg/dL) - never tested
- Line 512: timeout path (sensor response > 5s) - never tested
- Line 678: dual-sensor disagreement (>15% difference) - never tested
- Line 823: battery critical during dose (<5%) - never tested
Finding: 73% of uncovered branches are error handling and edge cases
Test categories needed:
// Category 1: Boundary conditions (28% of new tests)
void test_glucose_at_lower_boundary(void) {
    // Test glucose = 20 mg/dL (minimum valid)
    set_mock_glucose(20);
    assert(calculate_dose() >= 0);
}

// Category 2: Error injection (35% of new tests)
void test_sensor_timeout_triggers_alert(void) {
    configure_mock_sensor_delay(6000);  // 6 second delay
    SensorResult result = read_glucose_with_timeout(5000);
    assert(result.status == SENSOR_TIMEOUT);
    assert(alert_triggered(ALERT_SENSOR_FAILURE));
}

// Category 3: State transitions (22% of new tests)
void test_dose_abort_on_battery_critical(void) {
    set_battery_level(4);       // 4% battery
    start_dose_delivery(5.0);   // 5 units
    assert(dose_state() == DOSE_ABORTED);
    assert(units_delivered() == 0);
}

// Category 4: Concurrent conditions (15% of new tests)
void test_dual_sensor_disagreement(void) {
    set_sensor_a_reading(120);
    set_sensor_b_reading(145);  // 20.8% difference
    GlucoseResult result = get_calibrated_glucose();
    assert(result.confidence == LOW);
    assert(result.requires_fingerstick == true);
}
Key Insight: High coverage in safety-critical systems requires intentional testing of failure modes, not just happy paths.
1576.8 Knowledge Check
InlineKnowledgeCheck({questionId:"kc-testing-unit-1",question:"You're building a smart door lock with Wi-Fi connectivity. Your firmware has 15,000 lines of code. You have time to write 200 unit tests. How should you allocate testing effort to maximize defect detection?",options: ["Write 200 tests evenly distributed across all modules to achieve uniform coverage percentage","Focus all 200 tests on the lock mechanism control code since it's the most critical function","Use risk-based allocation: 50% on lock control, 30% on Wi-Fi connectivity, 20% on crypto/security","Write tests only for code that has caused bugs in the past, ignoring untested new features" ],correctAnswer:2,feedback: ["Incorrect. Uniform coverage treats all code as equally important. A bug in the LED blink function is not as critical as a bug in lock authentication.","Incorrect. While lock control is critical, ignoring Wi-Fi connectivity and security would leave major attack surfaces untested.","Correct! Risk-based test allocation focuses effort on: 1) Safety-critical functions (lock control - life/property), 2) High-failure-rate components (Wi-Fi - connectivity bugs dominate support), 3) Security-sensitive code (authentication bypass = recall).","Incorrect. While historical data is valuable, new features often contain the most bugs (untested code paths)." ],hint:"Think about the impact and likelihood of different failure types."})
InlineKnowledgeCheck({questionId:"kc-testing-unit-2",question:"Your team implements mutation testing on firmware for a battery-powered IoT sensor. The test suite has 87% line coverage. Mutation testing introduces 140 mutations (change > to >=, flip true/false). Results: 98 mutations killed, 42 survived. What does this reveal?",options: ["70% mutation score is excellent - the test suite is ready for production","42 surviving mutations indicate weak or missing assertions - tests execute code but don't validate correctness","Mutation testing is flawed - surviving mutations are in non-critical code","Increase line coverage to 100% to kill all mutations" ],correctAnswer:1,feedback: ["Incorrect. A 70% mutation score means 30% of bugs are NOT caught - that's 1 in 3 bugs escaping to production.","Correct! Mutation testing reveals the difference between 'code executed' vs 'behavior validated'. Tests that pass regardless of code changes have weak assertions.","Incorrect. Surviving mutations indicate missing tests, not non-critical code. If code is critical enough to write, it's critical enough to test.","Incorrect. Line coverage doesn't improve mutation score - the problem isn't executing code, it's validating correctness." ],hint:"Think about the difference between 'this line ran' vs 'this line produced the correct result'."})
1576.9 Summary
Unit testing forms the foundation of IoT quality assurance:
Test pure logic: Data processing, business logic, protocol parsing
Mock hardware: Abstract interfaces allow testing without physical devices
Set risk-based coverage: 100% for safety-critical, 85%+ for core logic
Measure quality, not just coverage: Use mutation testing to validate assertion strength
Run fast: Unit tests should execute in seconds, enabling frequent runs
1576.10 What’s Next?
Continue your testing journey with these chapters: