25  Domain Knowledge Checks

25.1 Knowledge Checks and Exercises

Estimated Time: 45 min | Complexity: Intermediate


Test your understanding of IoT application domains with these quizzes, scenario-based exercises, and hands-on design challenges. Each section targets a different application domain, building from factual recall to applied problem-solving.

MVU: Minimum Viable Understanding

If you remember only 3 things from these exercises:

  1. Domain Requirements Drive Everything: Each IoT application domain has a unique requirement profile – healthcare demands sub-second latency with 99.99% reliability and regulatory compliance, while agriculture tolerates hourly updates but requires 5-10 year battery life across vast areas. Choosing the wrong profile wastes resources or creates dangerous gaps.

  2. Coverage Thresholds Determine Success or Failure: Smart city deployments (parking, waste, lighting) consistently show that below 80% sensor coverage, users lose trust and abandon the system entirely. Partial coverage is often worse than no coverage because it creates false confidence.

  3. False Positive Cost Varies by Domain: A false alarm in consumer fitness tracking is a minor annoyance, but in healthcare it can trigger unnecessary medical procedures, and in industrial settings it causes costly shutdowns. Always calculate Positive Predictive Value using real-world prevalence, not just sensitivity and specificity.

Quick Self-Test: Before starting, can you name the top connectivity technology for agricultural IoT and explain why? (Answer: LoRaWAN – long range covers entire farms, battery lasts 5-10 years, no carrier fees.)

25.2 Learning Objectives

By completing these exercises, you will:

  • Assess your understanding of domain requirements across latency, reliability, scale, and power
  • Apply domain selection frameworks to real-world scenarios
  • Identify appropriate technologies for specific application needs
  • Diagnose common deployment pitfalls and justify appropriate solutions
  • Calculate key metrics including ROI payback period, Positive Predictive Value, and coverage thresholds

These exercises test what you have learned about different IoT application areas – healthcare, agriculture, smart cities, and more. Do not worry if you cannot answer every question; the point is to discover which topics you understand well and which ones need more study. Think of it as a practice test that helps you focus your revision on the areas where it will have the most impact.

Testing what you know is like a treasure hunt – you find out what you already have and what you still need to discover!

25.2.1 The Sensor Squad Adventure: The Big IoT Quiz Show

The Sensor Squad was invited to participate in the famous IoT Quiz Show! Each team member had to answer questions about their favorite topic.

Round 1 – Thermo the Temperature Sensor went first. The host asked: “Where would you rather work, Thermo – a hospital or a farm?”

Thermo thought carefully. “In a hospital, I need to be SUPER precise and report every single second because a patient’s life might depend on me. But on a farm, I can relax a bit and report every hour – the crops won’t mind waiting! Different places need different speeds.”

Round 2 – Connectivity Carl was next. The host asked: “How would you connect 25,000 garbage bins across a whole city?”

Carl laughed. “You can’t use Wi-Fi – you’d need thousands of routers! And Bluetooth can only reach across a room. I’d use LoRaWAN – one antenna on a tall building can reach bins for miles around, and the batteries last for YEARS!”

Round 3 – Pixel the Camera Sensor got the trickiest question: “A parking app says a spot is empty, but when the driver arrives, it’s taken. What went wrong?”

Pixel explained: “If you only put sensors on HALF the parking spots, the app can’t see the other half. It’s like trying to do a jigsaw puzzle with half the pieces missing – you can’t trust the picture! You need sensors on at least 80% of spots.”

Final Round – The whole Squad worked together on this one: “What’s the most important thing about IoT in different places?”

They answered together: “Every place is different! A hospital needs speed and accuracy. A farm needs long battery life and wide coverage. A city needs everything to work together. The secret is matching the right sensor to the right job!”

25.2.2 Key Words for Kids

Word                         What It Means
Domain Requirements          The special rules each place has for how sensors should work
Coverage Threshold           The minimum number of sensors you need before the system actually helps people
False Alarm                  When a sensor says something is wrong but everything is actually fine
ROI (Return on Investment)   How quickly the money you spend on sensors pays for itself by saving money

25.3 How IoT Requirements Vary by Domain

Before diving into the quizzes, this diagram shows how different IoT application domains map to different requirement priorities. Understanding these trade-offs is the foundation for every question that follows.

Quadrant diagram showing IoT application domains mapped by latency requirements (vertical axis from seconds to milliseconds) and reliability requirements (horizontal axis from 95% to 99.999%). Healthcare and autonomous vehicles appear in the high-reliability, low-latency quadrant. Agriculture and smart home appear in the lower-reliability, higher-latency quadrant. Smart cities and manufacturing span the middle. Each domain also shows its primary connectivity technology and regulatory framework.

IoT Application Domain Requirements Matrix – Each domain prioritizes different technical requirements based on its operational context and safety criticality

25.4 Quiz 1: Domain Requirements

25.5 Quiz 2: Smart Cities

25.6 Quiz 3: Transportation and V2X

25.7 Quiz 4: Agriculture and Healthcare

25.8 Quiz 5: Manufacturing and Smart Home

25.9 Interactive Calculators

Use these calculators to explore how different parameters affect IoT deployment decisions.

25.9.1 PPV Calculator for Healthcare Wearables

Understanding how sensitivity, specificity, and disease prevalence interact to determine Positive Predictive Value is critical for healthcare IoT deployment decisions.

Key Insight: Notice how PPV drops dramatically at low prevalence even with high sensitivity and specificity. A 92% sensitive, 88% specific device has 51% PPV at 12% prevalence but only about 7% PPV at 1% prevalence.
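This insight can be checked with a few lines of Python, a minimal sketch using Bayes' rule (the function name is illustrative, the figures are the ones quoted above):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive Predictive Value: P(disease | positive test), via Bayes' rule."""
    tp_rate = sensitivity * prevalence                 # true positives per capita
    fp_rate = (1 - specificity) * (1 - prevalence)     # false positives per capita
    return tp_rate / (tp_rate + fp_rate)

print(round(ppv(0.92, 0.88, 0.12), 3))  # 0.511 -> ~51% PPV at 12% prevalence
print(round(ppv(0.92, 0.88, 0.01), 3))  # 0.072 -> PPV collapses at 1% prevalence
```

Both sensor parameters stay fixed; only prevalence changes, yet PPV falls from roughly one-in-two to one-in-fourteen.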

25.9.2 ROI Payback Calculator for Smart City Deployments

Calculate how quickly an IoT deployment pays for itself through operational savings.

Key Insight: Payback period under 2 years indicates strong ROI. Smart city deployments often save 10-30% of operational costs, making them financially attractive despite high upfront investment.
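The payback calculation itself is a one-liner; the sketch below uses hypothetical figures (a $12M deployment saving 20% of a $45M operating budget) purely for illustration:

```python
def payback_years(deployment_cost: float, annual_savings: float) -> float:
    """Simple payback period, ignoring discounting and ongoing maintenance."""
    return deployment_cost / annual_savings

# Hypothetical example: $12M deployment, 20% savings on a $45M budget.
print(round(payback_years(12e6, 0.20 * 45e6), 2))  # 1.33 -> under the 2-year threshold
```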

25.9.3 OEE Calculator for Manufacturing IoT

Calculate Overall Equipment Effectiveness from availability, performance, and quality metrics.

Key Insight: OEE multiplies three factors, so even small losses in each category compound into significant total losses. An 85% x 90% x 95% machine operates at only 72.7% effectiveness, wasting over a quarter of potential output.
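The compounding effect is easy to verify in code (a minimal sketch; the three factors are the ones quoted above):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness: the product of the three factors."""
    return availability * performance * quality

# Three individually respectable metrics compound into a 27% loss.
print(round(oee(0.85, 0.90, 0.95), 3))  # 0.727 -> only ~72.7% effective
```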

25.10 Scenario Exercises

The following scenarios require you to apply knowledge from multiple domains simultaneously. Use this decision framework to structure your thinking:

Decision tree flowchart for IoT deployment planning. Starting from 'Define Domain Requirements', the flow branches into three parallel evaluation paths: Technical Feasibility (connectivity, power, data volume), Economic Viability (ROI calculation, payback period, total cost of ownership), and Regulatory Compliance (industry standards, data privacy, safety certification). All three paths converge at a 'Go/No-Go Decision' node, which leads to either 'Proceed with Pilot' or 'Redesign Approach' based on whether all three evaluations pass.

IoT Deployment Decision Framework – Systematic approach to evaluating IoT project feasibility across technical, economic, and regulatory dimensions

25.10.1 Scenario 1: Smart City Waste Management

Context: A city currently spends $45M annually on garbage collection with trucks following fixed routes regardless of bin fill levels. Analysis shows 40% of stops find bins less than 50% full, wasting $18M in fuel and labor.

Question: The city considers deploying fill-level sensors to 25,000 bins across 500 square miles.

  1. Domain Fit: Would “Smart Environment” (air quality) or “Smart Waste Management” (part of Smart Cities) be the correct domain classification?

  2. Connectivity Choice: Which technology - Wi-Fi, cellular (NB-IoT), or LoRaWAN - makes the most sense for 25,000 bins spread across 500 square miles?

  3. ROI Calculation: If sensors cost $3M to deploy and save $10M annually through route optimization, what is the payback period?

  4. Coverage Threshold: At what percentage of bin coverage would the system provide reliable route optimization?

Answers:

  1. Smart Waste Management falls under Smart Cities infrastructure (not Smart Environment which focuses on air quality, fires, earthquakes)
  2. LoRaWAN or NB-IoT - both provide city-scale coverage with 10-year battery life. Wi-Fi would require 5,000+ access points.
  3. Payback = $3M / $10M = 0.3 years (3.6 months) - extremely fast ROI
  4. 80%+ coverage required for reliable route optimization; below this, drivers can’t trust the system

25.10.2 Scenario 2: Healthcare Remote Patient Monitoring

Context: A hospital system wants to implement remote monitoring for 2,000 heart failure patients post-discharge to reduce 30-day readmissions (currently 25% readmission rate, costing $50M annually).

Question: Design the monitoring approach.

  1. Sensor Selection: What vital signs should be monitored for heart failure patients?

  2. Alert Threshold Design: How would you balance sensitivity (catching deterioration) vs. specificity (avoiding alert fatigue)?

  3. Connectivity: Should devices use Wi-Fi (patient home), cellular, or Bluetooth to smartphone?

  4. Regulatory Considerations: What FDA and HIPAA requirements apply?

Considerations:

  1. Weight (daily), blood pressure, heart rate, SpO2, and symptoms questionnaire. Weight gain of >2 lbs/day indicates fluid retention.
  2. Use trending (weight increase over 3 days) rather than single readings. Multi-parameter algorithms (weight + BP + symptoms) improve specificity.
  3. Cellular or Bluetooth-to-smartphone with cellular backup. Wi-Fi only works if patient has reliable home internet.
  4. FDA Class II for devices making clinical claims; HIPAA for all data transmission and storage; need BAA with cloud providers.
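The trending logic from consideration 2 could be sketched as follows. The 3-day window and the 2 lbs/day limit come from the considerations above; the function name and exact window handling are assumptions:

```python
def weight_trend_alert(daily_weights_lbs: list[float], per_day_limit: float = 2.0) -> bool:
    """Alert on sustained weight gain over a 3-day window, not a single reading.

    Average gain above ~2 lbs/day sustained over days suggests fluid retention,
    while one-off scale noise is averaged away.
    """
    if len(daily_weights_lbs) < 4:
        return False                              # need today plus the 3 prior days
    window = daily_weights_lbs[-4:]
    avg_daily_gain = (window[-1] - window[0]) / 3
    return avg_daily_gain > per_day_limit

print(weight_trend_alert([180.0, 180.5, 181.0, 180.8]))  # False: normal fluctuation
print(weight_trend_alert([180.0, 182.5, 185.2, 187.1]))  # True: sustained gain
```

A production system would combine this signal with blood pressure and symptom data, as consideration 2 suggests, before alerting a clinician.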

25.10.3 Scenario 3: Wearable Fitness Tracker Accuracy

Context: A user notices their fitness tracker shows heart rate of 175 BPM during a casual 3 mph walk (actual: ~100 BPM).

Question: Explain what’s happening and how to fix it.

  1. Root Cause: What causes PPG optical heart rate sensors to produce wildly inaccurate readings during movement?

  2. Cadence Confusion: If the user is walking at 170 steps per minute, how does this affect the heart rate reading?

  3. Solutions: What can the user do to get more accurate readings during exercise?

  4. Design Implications: How should fitness apps communicate heart rate data quality to users?

Answers:

  1. Motion artifacts - arm movement causes the sensor to shift against skin, creating light intensity variations interpreted as pulse beats.
  2. Walking cadence (170 steps/min) matches typical exercise heart rates (170 BPM), making it impossible for the algorithm to distinguish motion from pulse.
  3. Wear device tighter; use chest strap for exercise; use arm bands instead of wrist; trust average HR over instantaneous readings.
  4. Show confidence indicators; flag data as “motion-affected”; recommend chest strap for serious training; use accelerometer to detect high-motion periods and reduce HR display confidence.
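One way to implement the accelerometer cross-check from answer 4 is to compare optical HR against step cadence (a hypothetical heuristic; the 5 BPM lock band is an assumed tuning parameter):

```python
def hr_confidence(heart_rate_bpm: float, step_cadence_spm: float,
                  lock_band_bpm: float = 5.0) -> str:
    """Downgrade confidence when optical HR sits within a few BPM of step cadence,
    a classic sign the PPG algorithm has locked onto motion instead of pulse."""
    if abs(heart_rate_bpm - step_cadence_spm) <= lock_band_bpm:
        return "low (possible cadence lock)"
    return "normal"

print(hr_confidence(172, 170))  # low: HR suspiciously matches 170 steps/min
print(hr_confidence(132, 170))  # normal: well separated from cadence
```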

25.11 Understanding Check: Building Automation

Scenario: A 200,000 sq ft office building has 180 VAV (Variable Air Volume) boxes. The building engineer suspects many are malfunctioning, causing simultaneous heating and cooling in the same zones.

Questions to Consider:

  1. What sensors would you deploy to detect VAV box faults?
  2. How would you identify “simultaneous heating and cooling” from sensor data?
  3. What is the energy impact of this fault?
  4. How would you prioritize repairs across 180 boxes?

Key Insights:

  • Supply air temperature + zone temperature + damper position + reheat valve position sensors per zone
  • Fault signature: reheat valve open (>10%) while damper fully open AND zone temperature below setpoint
  • Energy impact: 15-30% zone energy waste from the affected boxes
  • Prioritize by: (1) energy waste magnitude, (2) comfort complaints, (3) repair cost
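The fault signature above translates directly into a rule. In this sketch the >10% reheat threshold comes from the checklist, while treating >= 95% damper position as "fully open" is an assumption:

```python
def vav_simultaneous_heat_cool(reheat_valve_pct: float, damper_pct: float,
                               zone_temp_c: float, setpoint_c: float) -> bool:
    """Fault signature: reheat valve open (>10%) while the damper is (nearly)
    fully open AND the zone temperature is still below setpoint."""
    return reheat_valve_pct > 10 and damper_pct >= 95 and zone_temp_c < setpoint_c

print(vav_simultaneous_heat_cool(35, 100, 20.5, 22.0))  # True: faulty box
print(vav_simultaneous_heat_cool(0, 60, 22.1, 22.0))    # False: normal operation
```

Running this rule across all 180 boxes each polling interval, then ranking hits by estimated energy waste, implements the prioritisation in the insights above.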

Scenario: A hospital system is evaluating a consumer wearable that detects atrial fibrillation (AFib) for remote patient monitoring of 10,000 post-discharge cardiology patients. The device costs $150 per patient and promises to reduce emergency room visits by detecting AFib early.

Given:

  • Wearable sensitivity: 92% (detects 92% of actual AFib episodes)
  • Wearable specificity: 88% (correctly identifies 88% of non-AFib as normal)
  • AFib prevalence in patient population: 12% (1,200 of 10,000 patients have AFib)
  • Cost per false positive: $850 (unnecessary cardiology follow-up, ECG, maybe cardioversion)
  • Cost per true positive: $200 (planned cardiology visit, medication adjustment)
  • Cost per false negative: $12,000 (missed AFib leads to stroke, ER visit, hospitalization)
  • Cost per true negative: $0 (no action needed)

Steps:

Step 1: Calculate Expected Outcomes Over 1 Year

Build a confusion matrix for 10,000 patients:

                     Actually Has AFib        Actually No AFib      Total
Device Says AFib     True Positive (TP)       False Positive (FP)   Positive Tests
Device Says Normal   False Negative (FN)      True Negative (TN)    Negative Tests
Total                1,200 (12% prevalence)   8,800                 10,000

Calculate each cell:

  • TP (sensitivity x positives): 0.92 x 1,200 = 1,104 true AFib detections
  • FN (missed AFib): 1,200 - 1,104 = 96 missed AFib cases
  • TN (specificity x negatives): 0.88 x 8,800 = 7,744 correctly identified as normal
  • FP (false alarms): 8,800 - 7,744 = 1,056 false AFib alerts
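The same confusion matrix can be built programmatically (a minimal sketch mirroring the arithmetic above):

```python
patients, prevalence = 10_000, 0.12
sensitivity, specificity = 0.92, 0.88

positives = round(patients * prevalence)   # 1,200 patients with AFib
negatives = patients - positives           # 8,800 without

tp = round(sensitivity * positives)        # 1,104 detected AFib episodes
fn = positives - tp                        # 96 missed cases
tn = round(specificity * negatives)        # 7,744 correctly cleared
fp = negatives - tn                        # 1,056 false alarms

print(tp, fn, tn, fp)                      # 1104 96 7744 1056
print(f"PPV = {tp / (tp + fp):.1%}")       # PPV = 51.1%
```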

Step 2: Calculate Positive Predictive Value (PPV)

PPV = TP / (TP + FP) = 1,104 / (1,104 + 1,056) = 1,104 / 2,160 = 51.1%

Translation: When the wearable alerts “AFib detected,” only 51% of the time is it actually AFib. The other 49% are false positives.

Step 3: Calculate Financial Impact

Annual costs:

  • TP cost: 1,104 x $200 = $220,800 (planned interventions)
  • FP cost: 1,056 x $850 = $897,600 (unnecessary follow-ups)
  • FN cost: 96 x $12,000 = $1,152,000 (missed strokes)
  • TN cost: 7,744 x $0 = $0
  • Wearable hardware: 10,000 x $150 = $1,500,000

Total annual cost: $3,770,400

Step 4: Compare to Baseline (No Wearable)

Without wearables, assume all AFib is detected only when symptoms occur:

  • Detected in ER (symptomatic): 40% x 1,200 x $12,000 = $5,760,000
  • Undetected (asymptomatic): 60% x 1,200 x $0 (no immediate cost, but future stroke risk)

Baseline cost: ~$5,760,000 (conservative, ignores future complications)

Result: Wearable deployment saves $1,989,600 annually despite 49% false positive rate.

Step 5: Sensitivity Analysis on PPV

What if we could improve specificity from 88% to 95% with better algorithms?

Recalculate with 95% specificity:

  • FP drops to (1 - 0.95) x 8,800 = 440 false positives
  • New PPV: 1,104 / (1,104 + 440) = 71.5%

New FP cost: 440 x $850 = $374,000 (saves $523,600/year)
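The specificity comparison can be reproduced in a short loop (a sketch reusing the figures above):

```python
tp, negatives = 1104, 8800            # from the confusion matrix above
for specificity in (0.88, 0.95):
    fp = round((1 - specificity) * negatives)
    ppv = tp / (tp + fp)
    print(f"specificity {specificity:.0%}: FP = {fp}, "
          f"PPV = {ppv:.1%}, FP cost = ${fp * 850:,}")
# specificity 88%: FP = 1056, PPV = 51.1%, FP cost = $897,600
# specificity 95%: FP = 440,  PPV = 71.5%, FP cost = $374,000
```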

Key Insights:

1. PPV depends on prevalence, not just sensor accuracy: The same 92% sensitivity / 88% specificity sensor would have:

  • PPV = 51% at 12% prevalence (this case)
  • PPV = 19% at 3% prevalence (screening the general population)
  • PPV = 77% at 30% prevalence (high-risk ICU patients)
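Recomputing PPV across prevalence levels makes this dependence explicit (a minimal sketch; same sensor parameters throughout):

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """P(disease | positive test) from sensitivity, specificity, prevalence."""
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

for prev in (0.12, 0.03, 0.30):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.92, 0.88, prev):.0%}")
# prevalence 12%: PPV = 51%
# prevalence 3%: PPV = 19%
# prevalence 30%: PPV = 77%
```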

2. Specificity matters more than sensitivity for rare conditions: Improving specificity from 88% to 95% (7 points) has bigger impact than improving sensitivity from 92% to 99% (7 points) when prevalence is low.

3. False positives have hidden costs: Beyond the $850 direct cost, false alarms cause:

  • Patient anxiety and loss of trust
  • Alert fatigue (clinicians ignore future alerts)
  • Decreased app usage (patients disable notifications)

4. Deployment decision depends on cost ratios: This deployment works because:

  • False positive cost ($850) << false negative cost ($12,000)
  • True positive intervention ($200) is cheap compared to the cost of an ER visit
  • The 14x cost ratio between FN and FP justifies accepting a 49% false positive rate

Decision Rule: Deploy only if the total programme cost falls below the baseline:

(TP_cost + FP_cost + FN_cost + Hardware_cost) < Baseline_cost

In this case: $221K + $898K + $1,152K + $1,500K = $3,770K < $5,760K → Deploy

If false positive cost were $2,000 instead of $850:

  • New FP cost: 1,056 x $2,000 = $2,112,000
  • New total: $221K + $2,112K + $1,152K + $1,500K = $4,985K, still below the $5,760K baseline → still worth it

If prevalence were only 3% (general population screening):

  • TP drops to 276 (92% of 300 actual AFib cases), costing 276 x $200 = $55K
  • FN rises to 24 missed cases, costing 24 x $12,000 = $288K
  • FP jumps to 1,164 (12% of 9,700 non-AFib), costing 1,164 x $850 = $989K
  • PPV drops to 19% (4 out of 5 alerts are false!)
  • Total: $55K (TP) + $989K (FP) + $288K (FN) + $1,500K (hardware) = $2,833K
  • Baseline: 40% x 300 x $12,000 = $1,440K
  • Do NOT deploy (the programme costs roughly twice what it saves)

Key Takeaway: Always calculate PPV for your specific patient population before deploying diagnostic IoT. Sensitivity and specificity alone don’t tell you whether the system is cost-effective. Prevalence, cost ratios, and false positive tolerance determine deployment viability.

25.12 Self-Assessment Checklist

Before moving to the next chapter, verify you can:

25.13 Summary

Mind map diagram summarizing key takeaways from the domain knowledge checks. The central node 'IoT Domain Knowledge' branches into six domains: Healthcare (regulatory compliance, PPV calculations, clinical-grade accuracy), Smart Cities (80% coverage threshold, unified platforms, cross-domain data), Transportation (DSRC for V2V, 70-80% crash prevention, sub-10ms latency), Agriculture (per-animal baselines, LoRaWAN connectivity, precision over uniform), Manufacturing (edge analytics, OEE monitoring, predictive maintenance), and Smart Home (multi-sensor fusion, false trigger reduction, schedule learning).

Key Takeaways by Domain – Summary of critical success factors and common pitfalls for each IoT application domain covered in these exercises

These exercises tested your understanding across all IoT application domains. Here are the essential takeaways:

Core Principles Validated:

  • Domain requirements vary dramatically – from 10ms latency and 99.999% reliability for autonomous vehicles to hourly updates at 95% reliability for agriculture. There is no universal IoT solution.
  • Technology selection must match domain constraints – power budget, range, data volume, and regulatory environment all constrain which connectivity, processing, and sensor technologies are viable.
  • Coverage thresholds determine adoption – smart city systems consistently fail below 80% sensor coverage because partial data creates user distrust. Plan for full coverage from the start, even if phased.

Domain-Specific Insights:

  • Healthcare IoT requires clinical validation and regulatory compliance (FDA, HIPAA), adding 12-24 months and 3-5x cost. Always calculate PPV using real population prevalence, not just sensitivity/specificity.
  • Smart cities succeed with unified platforms (like Barcelona’s Sentilo) that enable cross-domain data sharing, generating 30-50% additional value beyond siloed deployments.
  • V2X communication prioritizes reliability and immediate, connectionless message exchange (no session handshake) over raw data rate. DSRC (802.11p) was purpose-built for this constraint.
  • Agriculture benefits from per-animal and per-zone calibration rather than uniform thresholds. LoRaWAN provides the optimal connectivity trade-off for vast, low-power deployments.
  • Manufacturing IoT generates massive data volumes that require edge analytics to reduce by 95-99%. OEE calculations reveal that seemingly good individual metrics (85%, 90%, 95%) compound into significant losses (72.7%).
  • Smart home automation achieves reliability through multi-sensor fusion rather than single-sensor triggers, reducing false automations by 90%+.

In 60 Seconds

This chapter covers domain knowledge checks, explaining the core concepts, practical design decisions, and common pitfalls that IoT practitioners need to build effective, reliable connected systems.

25.14 See Also

Explore related topics across modules:

25.15 What’s Next

Chapter          Description
Smart Cities     Urban infrastructure at scale
Transportation   V2X and connected vehicles
Healthcare       Clinical-grade monitoring
Agriculture      Precision farming
Manufacturing    Industry 4.0
Overview         Return to domain landscape