What Practitioners Do Wrong: Creating IoT devices optimized for ideal lab conditions (stable Wi-Fi, clean hands, quiet environment, full attention) that fail when deployed in messy, unpredictable real-world contexts.
The Problem: Lab testing reveals technical functionality (“Does the button work?”) but misses context-specific failures (“Does the button work with wet, soapy hands while multitasking in a noisy kitchen?”).
Real-World Example: A smart kitchen scale designed for recipe portion control tested perfectly in the lab with engineers carefully placing ingredients. After launch, real-world failure modes emerged:
| Lab Assumption | Real-World Condition | Observed Failure |
|---|---|---|
| Clean, dry surface | Countertop wet from washing vegetables | Capacitive touch buttons didn't respond when wet |
| Quiet room | Kitchen exhaust fan + running water + conversation | Audio feedback (beeps) inaudible 80% of the time |
| Full attention | User multitasking (stirring pot, reading recipe, answering phone) | Timeout after 10 s of inactivity reset the tare weight; users didn't notice |
| Strong Wi-Fi | Home Wi-Fi congested (3 roommates streaming) | Recipe sync from app took 8-15 seconds, blocking scale use |
| Clean hands | Flour-covered hands from kneading dough | Touchscreen required 3-4 taps to register, leaving flour smudges |
Measured Impact (6 Months Post-Launch):
| Metric | Lab (Projected) | Real World (Actual) | Change |
|---|---|---|---|
| Button press success rate | 99.2% | 73% (wet hands, distractions) | -26% |
| Recipe sync completion | 98% (< 2 s on lab Wi-Fi) | 64% (timeouts, congestion) | -34% |
| User satisfaction (SUS) | 85 (excellent) | 52 (failing) | -39% |
| Product returns | < 3% (projected) | 23% (actual) | +667% |
| Support tickets | < 5/day (projected) | 78/day (actual) | +1460% |
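The SUS figures above (85 in the lab, 52 in the field) use the standard System Usability Scale scoring: ten Likert items rated 1-5, where odd-numbered items are positively worded and even-numbered items negatively worded, with the total scaled to 0-100. A minimal sketch of that computation (the function name is illustrative, not from any product code):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (score - 1); even-numbered items are negatively worded
    and contribute (5 - score). The sum (0-40) is scaled by 2.5 to 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent answering 4 on every positive item and 2 on every
# negative item yields (3 * 5 + 3 * 5) * 2.5:
print(sus_score([4, 2] * 5))  # 75.0
```

Scores near 52, as measured post-launch, indicate many users disagreed with the positive items or agreed with the negative ones.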
Root Cause: The design team never tested in a real kitchen during meal preparation. They used lab equipment on a clean desk with full attention. All usability testing was in-office with engineers, not home cooks.
The Correct Approach: Context-Driven Testing

Phase 1: Contextual Inquiry (Before Design)
- Observe 10 home cooks preparing meals in their own kitchens
- Document conditions: wet hands 62% of the time, loud noise 78%, multitasking 91%
- Insight: a touchscreen-only interface won't work; physical buttons and visual feedback are needed

Phase 2: Prototype Testing in Real Context (During Design)
- Test the breadboard prototype in actual kitchens during cooking
- Discovered: flour smudges made the screen unreadable; wet hands disabled capacitive touch
- Redesign: physical buttons with tactile feedback, a waterproof membrane, and a high-contrast LED display

Phase 3: Field Beta (Before Launch)
- Deploy 50 units to real users for 4 weeks
- Tracked: actual button press success (94%) and real-world Wi-Fi performance (85% sync success)
- Caught: the inactivity timeout was too short (extended from 10 s to 45 s)
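Field-beta metrics such as the 94% button press success rate can be aggregated from simple per-device telemetry. A hypothetical sketch, assuming each unit reports whether the firmware registered a press (the `PressEvent` schema and `success_rate` helper are illustrative, not from the actual product):

```python
from dataclasses import dataclass

@dataclass
class PressEvent:
    """One logged button interaction from a beta unit (hypothetical schema)."""
    device_id: str
    registered: bool  # did the firmware register the press?

def success_rate(events):
    """Fraction of attempted presses the firmware registered, across all units."""
    if not events:
        return 0.0
    return sum(e.registered for e in events) / len(events)

# 94 registered presses and 6 missed presses across the beta fleet:
events = ([PressEvent("scale-01", True)] * 94
          + [PressEvent("scale-01", False)] * 6)
print(f"{success_rate(events):.0%}")  # 94%
```

The same pattern applies to sync success: log each attempt with a success flag, then aggregate per condition (wet hands, congested Wi-Fi) rather than only overall.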
Redesigned Solution Based on Real Context:
| Original Design | Context-Driven Redesign |
|---|---|
| Capacitive touchscreen | Physical membrane buttons (work when wet) |
| Audio feedback only | Audio + bright LED ring (visible while stirring a pot) |
| 10-second timeout | 45-second timeout (accounts for multitasking) |
| Wi-Fi required | Bluetooth to phone as fallback (more reliable at close range) |
| Smooth glass surface | Textured silicone grip (doesn't slide on a wet counter) |
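The timeout change is worth making concrete: the original firmware discarded the tare (zero) reference after 10 s of inactivity, which multitasking users never noticed. A hedged sketch of the redesigned behavior, using an injectable clock so the logic is testable; the `TareSession` class and its method names are hypothetical, not the scale's real firmware:

```python
import time

TARE_TIMEOUT_S = 45  # extended from 10 s after the field beta

class TareSession:
    """Hold the tare (zero) reference, resetting only after prolonged inactivity."""

    def __init__(self, timeout_s=TARE_TIMEOUT_S, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock          # injectable for testing
        self.tare_g = 0.0
        self.last_activity = clock()

    def interact(self, weight_g=None):
        """Any button press or weight change counts as activity and keeps the tare."""
        self.last_activity = self.clock()
        if weight_g is not None:
            self.tare_g = weight_g

    def current_tare(self):
        """Return the tare, resetting it only once the timeout has elapsed."""
        if self.clock() - self.last_activity > self.timeout_s:
            self.tare_g = 0.0  # with 45 s, a 30 s distraction no longer resets it
        return self.tare_g
```

With the original 10 s timeout, stepping away to stir a pot for 30 seconds silently zeroed the scale; at 45 s the same interruption preserves the user's tare.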
Measured Results After Redesign (6 Months Post-Relaunch):
| Metric | Before Redesign | After Redesign | Change |
|---|---|---|---|
| Button success rate (wet hands) | 73% | 96% | +31% |
| User satisfaction (SUS) | 52 | 79 | +52% |
| Product returns | 23% | 6% | 74% reduction |
| Support tickets | 78/day | 9/day | 88% reduction |
Key Lesson: Always test IoT prototypes in the actual environment where they'll be used (real kitchens, real offices, real factories), with representative users (home cooks, not engineers), performing realistic tasks (making dinner while tired, not carefully following test scripts). Context-of-use analysis is not optional; it is the difference between a product that works in theory and one that works in practice.
12.4 Social Context
Social context considers who else is present and social dynamics.
12.4.1 Key Factors
12.4.2 Example: Smart Speaker in Living Room