Worked example: For a 15-question quiz, target correct answers are \(\lceil 0.8 \times 15 \rceil = 12\). If a learner moves from 8/15 to 12/15, score rises from 53.3% to 80%, crossing mastery with four additional correct answers.
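The arithmetic above can be sketched as two small helpers (hypothetical function names; the 0.8 mastery threshold is the one used in the example):

```javascript
// Mastery target: 80% of questions, rounded up (hypothetical helper names).
function masteryTarget(totalQuestions, threshold = 0.8) {
  return Math.ceil(totalQuestions * threshold);
}

// Score as a percentage of total questions.
function scorePercent(correct, totalQuestions) {
  return (correct / totalQuestions) * 100;
}

const target = masteryTarget(15);   // ceil(0.8 * 15) = 12
const before = scorePercent(8, 15); // ≈ 53.3
const after = scorePercent(12, 15); // 80
console.log(target, before.toFixed(1), after, target - 8); // 4 additional correct answers needed
```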
Formative Assessment: Low-stakes evaluation during learning to identify gaps before summative assessment.
Knowledge Check: Short inline quiz verifying understanding of a specific concept immediately after it is introduced.
Spaced Repetition: Learning technique reviewing material at increasing intervals to improve long-term retention.
Retrieval Practice: Actively recalling information from memory, which strengthens retention more than re-reading.
Metacognition: Awareness and understanding of one’s own thought processes and knowledge gaps, critical for self-directed learning.
Application Question: Assessment item requiring learners to apply knowledge to a novel scenario rather than recall facts.
Misconception: Incorrect mental model a learner holds that interferes with understanding correct concepts.
For Beginners: People & Context Assessment
This chapter reviews UX design principles for IoT and helps you assess your understanding. Think of it as a portfolio review where you check that your design skills cover all the essential areas. Strong UX fundamentals are what separate IoT products that people love from ones they abandon after a week.
By the end of this chapter, you will be able to:
Distinguish between effective and ineffective user research approaches for IoT design
Evaluate persona quality and identify when personas represent mainstream versus niche users
Apply context-of-use analysis across all five dimensions to assess IoT design decisions
Assess ethical compliance in IoT user research scenarios involving vulnerable populations
14.2 Quick Quiz
1. What is the “curse of knowledge” in design?
   - A) Designers know too much technical detail
   - B) Once you understand something, you can’t imagine not understanding it
   - C) Users reject technology they don’t understand
   - D) Knowledge transfer is difficult

2. What is contextual inquiry?
   - A) Asking users about their context
   - B) Shadowing users during normal activities in their environment (2-4 hours)
   - C) Survey about context of use
   - D) Testing in different contexts

3. How many participants are typically sufficient for qualitative interviews?
   - A) 3-5
   - B) 8-15
   - C) 50-100
   - D) 500+

4. What makes an effective persona?
   - A) Marketing demographics only
   - B) Based on single real user
   - C) Fictional but realistic, based on patterns across multiple users
   - D) Aspirational ideal user

5. How many personas should a project typically have?
   - A) 1 (focus!)
   - B) 3-5 total (primary + secondary + anti)
   - C) 10+ (cover all users)
   - D) As many as there are user types

6. What is the primary purpose of journey mapping?
   - A) Document user’s physical journey
   - B) Visualize experience over time, identify pain points and opportunities
   - C) Map product features
   - D) Create marketing materials

7. What are the five dimensions of context analysis?
   - A) Who, What, When, Where, Why
   - B) Physical, Social, Temporal, Technical, Cultural
   - C) Hardware, Software, Network, Users, Environment
   - D) Indoor, Outdoor, Mobile, Stationary, Shared

8. Why is it problematic to recruit only early adopters for research?
   - A) They’re too expensive
   - B) They’re not representative of mass market (tolerate complexity, forgive bugs)
   - C) They don’t provide useful feedback
   - D) Regulations prohibit it

Quiz Answers

1. B - The curse of knowledge means once you understand something, you can’t imagine not understanding it
2. B - Contextual inquiry is shadowing users for 2-4 hours in their natural environment
3. B - 8-15 participants typically achieves saturation for qualitative research
4. C - Effective personas are fictional but realistic, synthesized from multiple research participants
5. B - 3-5 total personas (1 primary, 2-3 secondary, possibly an anti-persona)
6. B - Journey mapping visualizes experience over time to identify pain points
7. B - Physical, Social, Temporal, Technical, and Cultural
8. B - Early adopters tolerate complexity that mainstream users won’t accept
14.3 Interactive Knowledge Checks
14.4 Comprehensive Review Quiz
Quiz: User Research Fundamentals
14.5 Interactive Quiz: Test Your Knowledge
```js
// Shared renderer for per-question feedback panels (the four question cells
// below pass their answer, the correct option, and the two feedback messages).
feedback = (answer, correct, correctMsg, incorrectMsg) => {
  if (answer === null) return "";
  const ok = answer === correct;
  return html`<div style="padding: 15px; margin: 10px 0; border-radius: 5px;
      background-color: ${ok ? '#d4edda' : '#f8d7da'};
      border: 1px solid ${ok ? '#c3e6cb' : '#f5c6cb'};
      color: ${ok ? '#155724' : '#721c24'}">
    <strong>${ok ? 'Correct!' : 'Incorrect'}</strong><br/>
    ${ok ? correctMsg : incorrectMsg}
  </div>`;
}

viewof uq1_answer = Inputs.radio(
  ["A) They didn't add enough features",
   "B) Curse of knowledge - designed for themselves",
   "C) Too much user research",
   "D) Security systems are inherently complex"],
  {label: "Question 1: Engineers design a security system that users find confusing. What trap did they fall into?",
   value: null})

uq1_feedback = feedback(
  uq1_answer,
  "B) Curse of knowledge - designed for themselves",
  "The curse of knowledge means once you understand something, you cannot imagine not understanding it. Engineers design systems that make sense to themselves but baffle users.",
  "The curse of knowledge is the trap here. Engineers assume users understand concepts they find obvious. The solution is user research with non-technical people.")

viewof uq2_answer = Inputs.radio(
  ["A) Users are lying",
   "B) Analytics features are useless",
   "C) Stated preferences differ from revealed behavior",
   "D) Interview data is unreliable"],
  {label: "Question 2: Users say they want analytics but 95% never use them. What principle is this?",
   value: null})

uq2_feedback = feedback(
  uq2_answer,
  "C) Stated preferences differ from revealed behavior",
  `People are poor at predicting their own behavior. Users report aspirational desires ("I want analytics") but actual behavior reveals true priorities. Watch what users DO, not what they SAY.`,
  "The principle is stated vs. revealed behavior. Users say they want features but don't actually use them. Observe actual behavior, not just stated preferences.")

viewof uq3_answer = Inputs.radio(
  ["A) 8-15 participants",
   "B) 3-5 participants",
   "C) 50-100 participants",
   "D) 500+ participants"],
  {label: "Question 3: How many participants are sufficient for qualitative research?",
   value: null})

uq3_feedback = feedback(
  uq3_answer,
  "A) 8-15 participants",
  "Qualitative research achieves saturation around 8-15 participants. 5 participants find ~85% of usability issues, 8-10 capture most user needs, 12-15 near complete saturation. Quality matters more than quantity.",
  "8-15 participants is typically sufficient for qualitative research to reach saturation. More than 15 shows diminishing returns. Save large samples for quantitative surveys.")

viewof uq4_answer = Inputs.radio(
  ["A) Physical, Social, Temporal, Technical, Cultural",
   "B) Who, What, When, Where, Why",
   "C) Hardware, Software, Network, Users, Environment",
   "D) Indoor, Outdoor, Mobile, Stationary, Shared"],
  {label: "Question 4: What are the five dimensions of context analysis?",
   value: null})

uq4_feedback = feedback(
  uq4_answer,
  "A) Physical, Social, Temporal, Technical, Cultural",
  "The five dimensions are: Physical (environment, space, sensory), Social (presence, privacy, norms), Temporal (time, urgency, patterns), Technical (connectivity, ecosystem, skills), and Cultural (language, values, accessibility).",
  "The five dimensions are Physical, Social, Temporal, Technical, and Cultural. These cover everything from environment and who else is present to connectivity and language/accessibility.")

viewof uscore = {
  const answers = [uq1_answer, uq2_answer, uq3_answer, uq4_answer];
  const correct_answers = [
    "B) Curse of knowledge - designed for themselves",
    "C) Stated preferences differ from revealed behavior",
    "A) 8-15 participants",
    "A) Physical, Social, Temporal, Technical, Cultural"];
  const answered = answers.filter(a => a !== null).length;
  const correct_count = answers.filter((a, i) => a === correct_answers[i]).length;
  if (answered === 0) {
    return html`<div style="padding: 20px; margin: 20px 0; background-color: #e7f3ff; border-radius: 8px; border-left: 5px solid #2196F3;">
      <h3 style="margin-top: 0; color: #1976D2;">Quiz Progress</h3>
      <p>Answer all 4 questions to see your score!</p>
    </div>`;
  }
  if (answered < 4) {
    return html`<div style="padding: 20px; margin: 20px 0; background-color: #fff3cd; border-radius: 8px; border-left: 5px solid #ffc107;">
      <h3 style="margin-top: 0; color: #856404;">Quiz Progress</h3>
      <p>You've answered ${answered} out of 4 questions. ${correct_count} correct so far!</p>
    </div>`;
  }
  const percentage = (correct_count / 4) * 100;
  let grade, message, color, bgColor;
  if (percentage >= 75) {
    grade = "A"; message = "Excellent! You understand user research principles well.";
    color = "#155724"; bgColor = "#d4edda";
  } else if (percentage >= 50) {
    grade = "B"; message = "Good work! Review the feedback to strengthen understanding.";
    color = "#856404"; bgColor = "#fff3cd";
  } else {
    grade = "C"; message = "Keep learning! Review the chapter materials.";
    color = "#721c24"; bgColor = "#f8d7da";
  }
  return html`<div style="padding: 20px; margin: 20px 0; background-color: ${bgColor}; border-radius: 8px; border-left: 5px solid ${color};">
    <h3 style="margin-top: 0; color: ${color};">Final Score: ${correct_count}/4 (${percentage}%) - Grade: ${grade}</h3>
    <p style="font-size: 1.1em;">${message}</p>
  </div>`;
}
```
Interactive Quiz: Match Concepts
Interactive Quiz: Sequence the Steps
14.6 Resources for Further Learning
User Research Books:
“Interviewing Users” by Steve Portigal
“Observing the User Experience” by Elizabeth Goodman, Mike Kuniavsky, and Andrea Moed
“Research Methods for User Experience” by Elizabeth Lawson
Worked Example: Conducting a Contextual Inquiry for Smart Kitchen Design
Scenario: Your team is designing a smart kitchen assistant to help busy parents prepare weeknight dinners. You’ve conducted a 3-hour contextual inquiry with Sarah, a working mother of two.
Observation Notes (excerpt from 3-hour session):
5:45 PM - Sarah arrives home, immediately checks phone for missed calls while unlocking door
5:47 PM - Opens refrigerator, stares for 30 seconds: “I forgot what I have”
5:50 PM - Kids interrupt 3 times asking for snacks while Sarah searches for recipe on iPad
6:05 PM - Starts cooking, but hands are covered in flour—shouts “Hey Siri, set timer 15 minutes” (Siri mishears: “50 minutes”)
6:12 PM - Realizes wrong timer, washes hands, corrects manually
6:25 PM - Oil starts smoking on stove while Sarah helps daughter with homework
6:40 PM - Dinner ready, family eats, Sarah looks exhausted
Key Insights Extracted:
Context: Divided Attention
Observation: Sarah interrupted 8 times in 60 minutes (kids, doorbell, phone)
Insight: Kitchen assistant must handle interruptions gracefully, allow pausing mid-recipe
Design implication: Voice interaction for hands-free operation, visual progress indicator showing “where you left off”
Context: Dirty Hands
Observation: Sarah’s hands were wet/dirty 40% of the time
Insight: Touchscreens are unusable in real cooking contexts
Design implication: Primary interface = voice, secondary = large physical buttons, NOT touchscreen tablet
Context: Decision Fatigue
Observation: Sarah spent 15 minutes deciding what to cook
Insight: Don’t need more recipes—need decision support based on available ingredients
Design implication: “What can I make with chicken, broccoli, and rice?” → suggest 3 recipes, not 300
Social: Family members interrupting, need to supervise kids
Temporal: 5:30-7pm weeknights, rushed, exhausted from work
Technical: Has smartphone, smart speaker, basic appliances (no sous vide machine)
Skill: Intermediate cook, knows basics but not advanced techniques
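The “decision support, not more recipes” insight can be sketched as a simple inventory filter (hypothetical data and function names; the cap of three suggestions comes from the design implication above):

```javascript
// Hypothetical recipe catalog for illustration.
const recipes = [
  { name: "Chicken stir-fry",   ingredients: ["chicken", "broccoli", "rice"] },
  { name: "Beef tacos",         ingredients: ["beef", "tortillas", "cheese"] },
  { name: "Chicken fried rice", ingredients: ["chicken", "rice", "egg"] },
  { name: "Broccoli rice bowl", ingredients: ["broccoli", "rice"] }
];

// Suggest at most `limit` recipes whose ingredients are all on hand,
// instead of showing a 300-recipe browse list.
function suggestRecipes(inventory, allRecipes, limit = 3) {
  const have = new Set(inventory);
  return allRecipes
    .filter(r => r.ingredients.every(i => have.has(i)))
    .slice(0, limit)
    .map(r => r.name);
}

console.log(suggestRecipes(["chicken", "broccoli", "rice", "egg"], recipes));
// → ["Chicken stir-fry", "Chicken fried rice", "Broccoli rice bowl"]
```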
Quantitative Data:
Interruptions: 8 per hour
Hands dirty/wet: 40% of cooking time
Decision time: 15 minutes (what to cook)
Actual vs. estimated time gap: +40% over recipe estimate
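The +40% gap suggests correcting a recipe’s stated time before showing it; a minimal sketch (hypothetical function name; the 0.4 factor is the observed gap above):

```javascript
// Scale a recipe's stated time by the observed actual-vs-estimated gap (+40%).
function realisticMinutes(statedMinutes, observedGap = 0.4) {
  return Math.round(statedMinutes * (1 + observedGap));
}

console.log(realisticMinutes(30)); // a "30-minute" recipe realistically takes ~42 minutes
```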
Design Decisions Driven by This Inquiry:
| Feature | Why (Based on Observation) | How |
|---|---|---|
| Voice-first interface | Hands dirty 40% of time, multitasking | “Hey Kitchen, next step” advances recipe |
| Inventory-based suggestions | Sarah stares at fridge, forgets contents | “3 recipes with your chicken and broccoli” |
| Interruptible recipes | 8 interruptions/hour | Visual progress bar, “You’re on Step 4 of 7” |
| Stove temp monitoring | Oil smoking incident | Temperature sensor alerts via voice + light |
| Realistic time estimates | Recipes underestimate by 40% | Show “Active time” + “Total time” separately |
Key Insight: Contextual inquiry revealed that the core problem ISN’T lack of recipes (Sarah has Pinterest boards with 500+ recipes). The problem is decision paralysis (what to make with available ingredients?), divided attention (interruptions), and unrealistic expectations (recipe times). A smart kitchen assistant that solves these problems succeeds where recipe apps fail.
Decision Framework: Choosing Research Methods for IoT Projects
Use this framework to select appropriate user research methods based on project stage, resources, and research questions:
| Research Question | Best Method | Sample Size | Duration | Cost | When to Use |
|---|---|---|---|---|---|
| “What problems do users have?” | Contextual inquiry (observe in context) | 8-12 users | 2-4 hours/user | $ | Early discovery, before design |
| “What do users say they want?” | Interviews | 12-15 users | 30-60 min/user | $ | |

At the top end, full discovery plus iterative testing plus survey validation (comprehensive, mixed methods) runs $50k+ and typically means agency-led research with large-sample quantitative validation at professional quality.
Resource Constraints:
If you have limited time (2 weeks):

- 5 user interviews (3 days)
- 5 usability tests (3 days)
- Persona + journey map synthesis (4 days)

If you have limited budget ($0):

- Guerrilla testing (test strangers at coffee shops)
- Remote unmoderated usability tests (UsabilityHub, free tier)
- Analytics review (existing data)

If you have limited access to users:

- Expert heuristic evaluation (internal team reviews design against usability principles)
- Cognitive walkthrough (step through tasks, identify friction)
- Competitive analysis (learn from similar products)
Common Mistakes:
❌ Skipping research entirely: “We’re engineers, we know what users need” → 60-70% of features are never used. Research prevents waste.
❌ Only doing surveys: “We asked 500 users what they want” → Stated preferences ≠ revealed behavior. Surveys miss context.
❌ Testing with wrong users: “We tested with our engineering team” → Engineers aren’t representative users. Test with target demographics.
❌ Only testing once: “We did usability testing before launch” → Iterative testing (test → fix → test again) finds 3x more issues than single-pass testing.
Key Insight: Match research method to research question. Contextual inquiry reveals problems users didn’t know they had. Interviews reveal stated needs. Usability testing reveals whether designs work. Surveys validate hypotheses at scale. Use multiple methods for comprehensive understanding.
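The “match method to question” rule can be sketched as a small lookup (hypothetical mapping and names; it includes only the pairings stated in this chapter):

```javascript
// Map a research goal to the method this chapter recommends for it.
const methodGuide = {
  "discover problems":  { method: "Contextual inquiry", sample: "8-12 users" },
  "stated needs":       { method: "Interviews",         sample: "12-15 users" },
  "validate design":    { method: "Usability testing",  sample: "5+ users per round" },
  "validate at scale":  { method: "Survey",             sample: "large sample" }
};

function chooseMethod(goal) {
  // Default to discovery when the goal is unclear: it comes first anyway.
  return methodGuide[goal] ?? { method: "Unknown goal - start with contextual inquiry" };
}

console.log(chooseMethod("discover problems").method); // "Contextual inquiry"
```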
Common Mistake: Confusing User Research with Usability Testing
The Mistake: Conducting usability testing (users try your prototype) and calling it “user research,” missing the critical discovery phase where you learn what problems to solve before designing solutions.
Why It Fails:
User research and usability testing serve different purposes at different stages:
| Research Type | When | Goal | Output | Example Question |
|---|---|---|---|---|
| User Research (Discovery) | BEFORE design | Understand problems, contexts, needs | Personas, journey maps, problem statements | “What makes cooking stressful for busy parents?” |
| Usability Testing (Validation) | DURING/AFTER design | Evaluate whether solution works | Usability issues, task success rates | “Can users set a timer using this interface?” |
Real-World Consequences:
Company A (Skipped Discovery Research):

- Jumped straight to building smart home hub with 50 features
- Usability testing: “Can users configure automations?” → Yes, 80% success rate
- Result: Product launched with good usability but solved problems no one had
- Outcome: 12% adoption rate, product discontinued after 18 months
- Root cause: Built the wrong thing (well)

Company B (Did Discovery Research First):

- Spent 6 weeks on contextual inquiry: What problems do users have?
- Discovered: Users don’t want 50 features—they want 3 things that “just work”
- Built simple solution focused on 3 core problems
- Usability testing validated design worked
- Result: 58% adoption rate, profitable in 12 months
- Root cause: Built the right thing

In 60 Seconds

This chapter covers people & context assessment, explaining the core concepts, practical design decisions, and common pitfalls that IoT practitioners need to build effective, reliable connected systems.
How to Avoid This Mistake:
Phase 1: Discovery Research (Weeks 1-6)
Purpose: Understand the problem space BEFORE designing solutions
Methods:
Contextual inquiry (observe users in their environment)
Phase 2: Validation Testing

Purpose: Evaluate whether your solution solves the discovered problems
Methods:
Usability testing with prototypes
A/B testing of design alternatives
Heuristic evaluation (expert review)
Questions:
Can users complete critical tasks?
Do they understand the interface?
Are there friction points?
Does it solve the problems identified in Phase 1?
Output:
Validated design ready for implementation
List of usability issues to fix
Task success rates, time-on-task metrics
Red Flags You Skipped Discovery:
❌ First user interaction is “try this prototype”
❌ No personas or journey maps exist
❌ Can’t articulate what problem you’re solving in one sentence
❌ Feature list driven by engineering brainstorming, not user needs
❌ Testing focuses on “Can they use it?” not “Does it solve their problem?”
Success Indicators:
✅ Spent significant time observing users BEFORE designing
✅ Can explain user goals, frustrations, and contexts in detail
✅ Design decisions trace back to specific research insights
✅ Usability testing validates solutions to known problems
Key Insight: Usability testing tells you if users CAN use your product. Discovery research tells you if users WANT to use your product. Skipping discovery leads to building the wrong thing efficiently. Both are essential, but discovery must come first.
Common Pitfalls
1. Over-Engineering the Initial Prototype
Adding too many features before validating core user needs wastes weeks of effort on a direction that user testing reveals is wrong. IoT projects frequently discover that users want simpler interactions than engineers assumed. Define and test a minimum viable version first, then add complexity only in response to validated user requirements.
2. Neglecting Security During Development
Treating security as a phase-2 concern results in architectures (hardcoded credentials, unencrypted channels, no firmware signing) that are expensive to remediate after deployment. Include security requirements in the initial design review, even for prototypes, because prototype patterns become production patterns.
3. Ignoring Failure Modes and Recovery Paths
Designing only for the happy path leaves a system that cannot recover gracefully from sensor failures, connectivity outages, or cloud unavailability. Explicitly design and test the behaviour for each failure mode and ensure devices fall back to a safe, locally functional state during outages.
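The safe-fallback behaviour described above can be sketched as a tiny mode selector (hypothetical states and names; a sketch of the principle, not a production implementation):

```javascript
// Pick an operating mode from health checks, preferring a locally
// functional safe state over total failure during outages.
function operatingMode({ sensorOk, networkOk, cloudOk }) {
  if (!sensorOk) return "safe-shutdown";      // untrustworthy readings: fail safe
  if (!networkOk || !cloudOk) return "local"; // keep core functions running offline
  return "normal";
}

console.log(operatingMode({ sensorOk: true, networkOk: false, cloudOk: false })); // "local"
console.log(operatingMode({ sensorOk: false, networkOk: true, cloudOk: true }));  // "safe-shutdown"
```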