14  People & Context Assessment

Quiz mastery targets are easiest to plan with threshold math:

\[ C_{\text{target}} = \left\lceil 0.8 \times N_{\text{questions}} \right\rceil \]

Worked example: For a 15-question quiz, target correct answers are \(\lceil 0.8 \times 15 \rceil = 12\). If a learner moves from 8/15 to 12/15, score rises from 53.3% to 80%, crossing mastery with four additional correct answers.
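
A minimal Python sketch of the same threshold math; the function names are illustrative. Exact rational arithmetic avoids floating-point rounding at the boundary:

```python
import math
from fractions import Fraction

def mastery_target(n_questions: int, threshold: Fraction = Fraction(4, 5)) -> int:
    """Smallest number of correct answers that meets the mastery threshold."""
    # Fraction keeps 0.8 * N exact, so ceil never mis-rounds at the boundary.
    return math.ceil(threshold * n_questions)

def is_mastered(correct: int, n_questions: int) -> bool:
    return correct >= mastery_target(n_questions)

print(mastery_target(15))    # 12
print(is_mastered(8, 15))    # False (8/15 ≈ 53.3%)
print(is_mastered(12, 15))   # True  (12/15 = 80%)
```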

14.1 Learning Objectives

Key Concepts

  • Formative Assessment: Low-stakes evaluation during learning to identify gaps before summative assessment.
  • Knowledge Check: Short inline quiz verifying understanding of a specific concept immediately after it is introduced.
  • Spaced Repetition: Learning technique reviewing material at increasing intervals to improve long-term retention.
  • Retrieval Practice: Actively recalling information from memory, which strengthens retention more than re-reading.
  • Metacognition: Awareness and understanding of one’s own thought processes and knowledge gaps, critical for self-directed learning.
  • Application Question: Assessment item requiring learners to apply knowledge to a novel scenario rather than recall facts.
  • Misconception: Incorrect mental model a learner holds that interferes with understanding correct concepts.

This chapter reviews UX design principles for IoT and helps you assess your understanding. Think of it as a portfolio review where you check that your design skills cover all the essential areas. Strong UX fundamentals are what separate IoT products that people love from ones they abandon after a week.

By the end of this chapter, you will be able to:

  • Distinguish between effective and ineffective user research approaches for IoT design
  • Evaluate persona quality and identify when personas represent mainstream versus niche users
  • Apply context-of-use analysis across all five dimensions to assess IoT design decisions
  • Assess ethical compliance in IoT user research scenarios involving vulnerable populations

14.2 Quick Quiz

  1. What is the “curse of knowledge” in design?
      A. Designers know too much technical detail
      B. Once you understand something, you can’t imagine not understanding it
      C. Users reject technology they don’t understand
      D. Knowledge transfer is difficult
  2. What is contextual inquiry?
      A. Asking users about their context
      B. Shadowing users during normal activities in their environment (2-4 hours)
      C. Survey about context of use
      D. Testing in different contexts
  3. How many participants are typically sufficient for qualitative interviews?
      A. 3-5
      B. 8-15
      C. 50-100
      D. 500+
  4. What makes an effective persona?
      A. Marketing demographics only
      B. Based on single real user
      C. Fictional but realistic, based on patterns across multiple users
      D. Aspirational ideal user
  5. How many personas should a project typically have?
      A. 1 (focus!)
      B. 3-5 total (primary + secondary + anti)
      C. 10+ (cover all users)
      D. As many as there are user types
  6. What is the primary purpose of journey mapping?
      A. Document user’s physical journey
      B. Visualize experience over time, identify pain points and opportunities
      C. Map product features
      D. Create marketing materials
  7. What are the five dimensions of context analysis?
      A. Who, What, When, Where, Why
      B. Physical, Social, Temporal, Technical, Cultural
      C. Hardware, Software, Network, Users, Environment
      D. Indoor, Outdoor, Mobile, Stationary, Shared
  8. Why is it problematic to recruit only early adopters for research?
      A. They’re too expensive
      B. They’re not representative of mass market (tolerate complexity, forgive bugs)
      C. They don’t provide useful feedback
      D. Regulations prohibit it

Answers:

  1. B - The curse of knowledge means once you understand something, you can’t imagine not understanding it
  2. B - Contextual inquiry is shadowing users for 2-4 hours in their natural environment
  3. B - 8-15 participants typically achieves saturation for qualitative research
  4. C - Effective personas are fictional but realistic, synthesized from multiple research participants
  5. B - 3-5 total personas (1 primary, 2-3 secondary, possibly an anti-persona)
  6. B - Journey mapping visualizes experience over time to identify pain points
  7. B - Physical, Social, Temporal, Technical, and Cultural
  8. B - Early adopters tolerate complexity that mainstream users won’t accept

14.3 Resources for Further Learning

User Research Books:

  • “Interviewing Users” by Steve Portigal
  • “Observing the User Experience” by Elizabeth Goodman, Mike Kuniavsky, and Andrea Moed
  • “Research Methods for User Experience” by Elizabeth Lawson

Methods and Frameworks:

  • Contextual Design methodology (Holtzblatt & Jones)
  • Persona development guidelines
  • Customer Journey Mapping techniques

Tools:

  • UserTesting.com - Remote user testing
  • Maze - Prototype testing
  • Dovetail - Research analysis
  • Optimal Workshop - Card sorting and tree testing

14.4 Concept Relationships

This assessment chapter synthesizes concepts from the entire series:

  • User Research Fundamentals → Curse of knowledge and stated vs. revealed behavior form the foundation for understanding why research is essential
  • Research Methods → Contextual inquiry, interviews, and sample sizes determine the quality of insights gathered
  • Personas and Journey Maps → Transform research data into actionable design tools that guide team decisions
  • Context Analysis → Five dimensions (Physical, Social, Temporal, Technical, Cultural) shape how users interact with IoT devices
  • Pitfalls and Ethics → Privacy, sampling bias, and informed consent ensure research is conducted responsibly

14.5 See Also

Chapter Navigation

This is part of the Understanding People and Context series:

  1. User Research Fundamentals
  2. Research Methods
  3. Personas and Journey Maps
  4. Context Analysis
  5. Pitfalls and Ethics
  6. Quizzes and Assessment (this chapter)


Worked Example: Contextual Inquiry for a Smart Kitchen Assistant

Scenario: Your team is designing a smart kitchen assistant to help busy parents prepare weeknight dinners. You’ve conducted a 3-hour contextual inquiry with Sarah, a working mother of two.

Observation Notes (excerpt from 3-hour session):

  • 5:45 PM - Sarah arrives home, immediately checks phone for missed calls while unlocking door
  • 5:47 PM - Opens refrigerator, stares for 30 seconds: “I forgot what I have”
  • 5:50 PM - Kids interrupt 3 times asking for snacks while Sarah searches for recipe on iPad
  • 6:05 PM - Starts cooking, but hands are covered in flour—shouts “Hey Siri, set timer 15 minutes” (Siri mishears: “50 minutes”)
  • 6:12 PM - Realizes wrong timer, washes hands, corrects manually
  • 6:25 PM - Oil starts smoking on stove while Sarah helps daughter with homework
  • 6:40 PM - Dinner ready, family eats, Sarah looks exhausted

Key Insights Extracted:

  1. Context: Divided Attention
    • Observation: Sarah interrupted 8 times in 60 minutes (kids, doorbell, phone)
    • Insight: Kitchen assistant must handle interruptions gracefully, allow pausing mid-recipe
    • Design implication: Voice interaction for hands-free operation, visual progress indicator showing “where you left off”
  2. Context: Dirty Hands
    • Observation: Sarah’s hands were wet/dirty 40% of the time
    • Insight: Touchscreens are unusable in real cooking contexts
    • Design implication: Primary interface = voice, secondary = large physical buttons, NOT touchscreen tablet
  3. Context: Decision Fatigue
    • Observation: Sarah spent 15 minutes deciding what to cook
    • Insight: Don’t need more recipes—need decision support based on available ingredients
    • Design implication: “What can I make with chicken, broccoli, and rice?” → suggest 3 recipes, not 300
  4. Context: Safety Monitoring
    • Observation: Oil smoking while Sarah distracted
    • Insight: Multitasking parents can’t constantly monitor stovetop
    • Design implication: Temperature sensor alerts “Stove too hot, check pan!” (audible + visual; see the sketch after this list)
  5. Context: Time Pressure
    • Observation: Target = 45-minute dinner, actual = 65 minutes
    • Insight: Time estimates must be realistic for beginners, account for cleanup/interruptions
    • Design implication: Recipe time = “Active: 20 min, Total: 45 min” (don’t hide the truth)
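
As a concrete illustration of the safety-monitoring implication in insight 4, here is a minimal Python sketch of a threshold alert loop. The 230 °C threshold, the simulated sensor reading, and all names are assumptions for illustration, not a production design:

```python
import random
import time

SMOKE_POINT_C = 230.0  # assumed threshold; common cooking oils smoke near 200-250 °C

def read_stove_temp_c() -> float:
    # Stand-in for a real temperature sensor driver; returns a simulated reading.
    return random.uniform(120.0, 260.0)

def alert(message: str) -> None:
    # In the design above this would be audible + visual; here we just print.
    print(f"ALERT: {message}")

def monitor_stove(polls: int = 5, poll_seconds: float = 0.5) -> None:
    """Poll the sensor and alert when the pan nears its smoke point."""
    for _ in range(polls):
        temp = read_stove_temp_c()
        if temp >= SMOKE_POINT_C:
            alert(f"Stove too hot ({temp:.0f} °C), check pan!")
        time.sleep(poll_seconds)

monitor_stove()
```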

Persona Developed from Contextual Inquiry:

Sarah Thompson, 38, Marketing Manager & Parent

Goals:

  • Cook healthy family dinners on weeknights (5-6 days/week)
  • Spend <60 minutes from arriving home to serving dinner
  • Reduce food waste by using existing ingredients

Frustrations:

  • Recipe apps assume focused attention (reality: constant interruptions)
  • Recipes require specialty ingredients she doesn’t have
  • Time estimates unrealistic (“20-minute meals” actually take 50 minutes for beginners)
  • Touchscreens unusable with cooking hands

Context:

  • Physical: Noisy kitchen, messy hands, multitasking (cooking + homework help)
  • Social: Family members interrupting, need to supervise kids
  • Temporal: 5:30-7pm weeknights, rushed, exhausted from work
  • Technical: Has smartphone, smart speaker, basic appliances (no sous vide machine)
  • Skill: Intermediate cook, knows basics but not advanced techniques

Quantitative Data:

  • Interruptions: 8 per hour
  • Hands dirty/wet: 40% of cooking time
  • Decision time: 15 minutes (what to cook)
  • Actual vs. estimated time gap: roughly +40% over recipe estimate (65 min actual vs. 45 min target; see the calculation below)
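
The gap follows directly from the observed times in insight 5 (about 44%, reported above as roughly +40%):

\[ \frac{t_{\text{actual}} - t_{\text{target}}}{t_{\text{target}}} = \frac{65 - 45}{45} \approx 44\% \]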

Design Decisions Driven by This Inquiry:

| Feature | Why (Based on Observation) | How |
| --- | --- | --- |
| Voice-first interface | Hands dirty 40% of time, multitasking | “Hey Kitchen, next step” advances recipe |
| Inventory-based suggestions | Sarah stares at fridge, forgets contents | “3 recipes with your chicken and broccoli” |
| Interruptible recipes | 8 interruptions/hour | Visual progress bar, “You’re on Step 4 of 7” (sketched below) |
| Stove temp monitoring | Oil smoking incident | Temperature sensor alerts via voice + light |
| Realistic time estimates | Recipes underestimate by 40% | Show “Active time” + “Total time” separately |
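
To make the “Interruptible recipes” row concrete, here is a minimal Python sketch of the session state such an assistant might keep so a cook can pause mid-recipe and resume where they left off. The class and method names are hypothetical, not from any product:

```python
from dataclasses import dataclass

@dataclass
class RecipeSession:
    """Tracks where a cook is in a recipe so interruptions are recoverable."""
    recipe_name: str
    steps: list[str]
    current_step: int = 0
    paused: bool = False

    def progress(self) -> str:
        # Drives the "You're on Step 4 of 7" indicator.
        return f"You're on Step {self.current_step + 1} of {len(self.steps)}"

    def next_step(self) -> str:
        # A voice command like "Hey Kitchen, next step" would call this.
        if self.current_step < len(self.steps) - 1:
            self.current_step += 1
        return self.steps[self.current_step]

    def pause(self) -> None:
        self.paused = True  # state survives the interruption

    def resume(self) -> str:
        self.paused = False
        return f"Welcome back. {self.progress()}: {self.steps[self.current_step]}"

session = RecipeSession("Chicken stir-fry",
                        ["Chop vegetables", "Heat oil", "Stir-fry chicken", "Add sauce"])
session.next_step()      # now on "Heat oil"
session.pause()          # kid needs homework help
print(session.resume())  # Welcome back. You're on Step 2 of 4: Heat oil
```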

Key Insight: Contextual inquiry revealed that the core problem ISN’T lack of recipes (Sarah has Pinterest boards with 500+ recipes). The problem is decision paralysis (what to make with available ingredients?), divided attention (interruptions), and unrealistic expectations (recipe times). A smart kitchen assistant that solves these problems succeeds where recipe apps fail.

Method Selection Framework

Use this framework to select appropriate user research methods based on project stage, resources, and research questions:

| Research Question | Best Method | Sample Size | Duration | Cost | When to Use |
| --- | --- | --- | --- | --- | --- |
| “What problems do users have?” | Contextual inquiry (observe in context) | 8-12 users | 2-4 hours/user | $$$ | Early discovery, before design |
| “What do users say they want?” | Interviews | 12-15 users | 30-60 min/user | $$ | Early requirements gathering |
| “What do users actually do?” | Diary studies (log activities over time) | 15-20 users | 1-2 weeks | $$$ | Understand real-world usage patterns |
| “Can users complete tasks?” | Usability testing | 5-8 users | 60 min/session | $$ | Validate prototypes, find issues |
| “How many users prefer X vs Y?” | A/B testing, surveys | 100-1,000 users | Continuous | $ | Validate hypotheses with large samples |
| “Why did users abandon the app?” | Exit interviews, analytics | 10-15 churned users | 20-30 min/user | $$ | Diagnose retention problems |

Decision Tree:

Stage 1: Discovery (What to build?) → Contextual inquiry (8-12 users) + Interviews (12-15 users) → Outputs: Problem statement, persona, journey map

Stage 2: Design (How to build it?) → Usability testing with prototypes (5-8 users per iteration) → Outputs: Validated design, identified usability issues

Stage 3: Validation (Does it work at scale?) → Beta testing + surveys (100+ users) → Outputs: Quantitative validation, usage metrics

Stage 4: Optimization (How to improve?) → A/B testing + analytics (1,000+ users) → Outputs: Data-driven feature improvements
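
A minimal Python sketch that encodes this stage-to-method mapping as a lookup; the RESEARCH_PLAN table and recommend_methods function are illustrative names, with figures copied from the decision tree above:

```python
# Stage -> (recommended methods, expected outputs), per the decision tree.
RESEARCH_PLAN = {
    "discovery": ("Contextual inquiry (8-12 users) + interviews (12-15 users)",
                  "Problem statement, persona, journey map"),
    "design": ("Usability testing with prototypes (5-8 users per iteration)",
               "Validated design, identified usability issues"),
    "validation": ("Beta testing + surveys (100+ users)",
                   "Quantitative validation, usage metrics"),
    "optimization": ("A/B testing + analytics (1,000+ users)",
                     "Data-driven feature improvements"),
}

def recommend_methods(stage: str) -> str:
    """Look up the recommended methods and outputs for a project stage."""
    methods, outputs = RESEARCH_PLAN[stage.lower()]
    return f"Methods: {methods}\nExpected outputs: {outputs}"

print(recommend_methods("Discovery"))
```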

Budget Constraints:

| Budget | Recommended Approach | Trade-offs |
| --- | --- | --- |
| <$5k | 5 user interviews + 5 usability tests | Qualitative only, small sample |
| $5k-$20k | 8 contextual inquiries + 8 usability tests | Good qualitative insights |
| $20k-$50k | Full discovery + iterative testing + survey validation | Comprehensive, mixed methods |
| $50k+ | Agency-led research, large-sample quantitative validation | Professional quality |

Resource Constraints:

If you have limited time (2 weeks):

  • 5 user interviews (3 days)
  • 5 usability tests (3 days)
  • Persona + journey map synthesis (4 days)

If you have limited budget ($0):

  • Guerrilla testing (test strangers at coffee shops)
  • Remote unmoderated usability tests (UsabilityHub, free tier)
  • Analytics review (existing data)

If you have limited access to users:

  • Expert heuristic evaluation (internal team reviews design against usability principles)
  • Cognitive walkthrough (step through tasks, identify friction)
  • Competitive analysis (learn from similar products)

Common Mistakes:

  1. Skipping research entirely: “We’re engineers, we know what users need” → 60-70% of features are never used. Research prevents waste.

  2. Only doing surveys: “We asked 500 users what they want” → Stated preferences ≠ revealed behavior. Surveys miss context.

  3. Testing with wrong users: “We tested with our engineering team” → Engineers aren’t representative users. Test with target demographics.

  4. Only testing once: “We did usability testing before launch” → Iterative testing (test → fix → test again) finds 3x more issues than single-pass testing.

Key Insight: Match research method to research question. Contextual inquiry reveals problems users didn’t know they had. Interviews reveal stated needs. Usability testing reveals whether designs work. Surveys validate hypotheses at scale. Use multiple methods for comprehensive understanding.

Common Mistake: Confusing User Research with Usability Testing

The Mistake: Conducting usability testing (users try your prototype) and calling it “user research,” missing the critical discovery phase where you learn what problems to solve before designing solutions.

Why It Fails:

User research and usability testing serve different purposes at different stages:

| Research Type | When | Goal | Output | Example Question |
| --- | --- | --- | --- | --- |
| User Research (Discovery) | BEFORE design | Understand problems, contexts, needs | Personas, journey maps, problem statements | “What makes cooking stressful for busy parents?” |
| Usability Testing (Validation) | DURING/AFTER design | Evaluate whether solution works | Usability issues, task success rates | “Can users set a timer using this interface?” |

Real-World Consequences:

Company A (Skipped Discovery Research):

  • Jumped straight to building smart home hub with 50 features
  • Usability testing: “Can users configure automations?” → Yes, 80% success rate
  • Result: Product launched with good usability but solved problems no one had
  • Outcome: 12% adoption rate, product discontinued after 18 months
  • Root cause: Built the wrong thing (well)

Company B (Did Discovery Research First):

  • Spent 6 weeks on contextual inquiry: What problems do users have?
  • Discovered: Users don’t want 50 features—they want 3 things that “just work”
  • Built simple solution focused on 3 core problems
  • Usability testing validated design worked
  • Result: 58% adoption rate, profitable in 12 months
  • Root cause: Built the right thing


How to Avoid This Mistake:

Phase 1: Discovery Research (Weeks 1-6)

Purpose: Understand the problem space BEFORE designing solutions

Methods:

  • Contextual inquiry (observe users in their environment)
  • Interviews (understand goals, frustrations, workarounds)
  • Diary studies (track real-world usage over time)

Questions:

  • What are users trying to accomplish?
  • What frustrations do they experience?
  • What workarounds have they invented?
  • What context factors affect their behavior?

Output:

  • Personas (who are the users?)
  • Journey maps (what’s their experience?)
  • Problem statements (what needs solving?)

Phase 2: Design & Validation (Weeks 7-12)

Purpose: Evaluate whether your solution solves the discovered problems

Methods:

  • Usability testing with prototypes
  • A/B testing of design alternatives
  • Heuristic evaluation (expert review)

Questions:

  • Can users complete critical tasks?
  • Do they understand the interface?
  • Are there friction points?
  • Does it solve the problems identified in Phase 1?

Output:

  • Validated design ready for implementation
  • List of usability issues to fix
  • Task success rates, time-on-task metrics

Red Flags You Skipped Discovery:

  • ❌ First user interaction is “try this prototype”
  • ❌ No personas or journey maps exist
  • ❌ Can’t articulate what problem you’re solving in one sentence
  • ❌ Feature list driven by engineering brainstorming, not user needs
  • ❌ Testing focuses on “Can they use it?” not “Does it solve their problem?”

Success Indicators:

  • ✅ Spent significant time observing users BEFORE designing
  • ✅ Can explain user goals, frustrations, and contexts in detail
  • ✅ Design decisions trace back to specific research insights
  • ✅ Usability testing validates solutions to known problems

Key Insight: Usability testing tells you if users CAN use your product. Discovery research tells you if users WANT to use your product. Skipping discovery leads to building the wrong thing efficiently. Both are essential, but discovery must come first.

Common Pitfalls

Adding too many features before validating core user needs wastes weeks of effort on a direction that user testing reveals is wrong. IoT projects frequently discover that users want simpler interactions than engineers assumed. Define and test a minimum viable version first, then add complexity only in response to validated user requirements.

Treating security as a phase-2 concern results in architectures (hardcoded credentials, unencrypted channels, no firmware signing) that are expensive to remediate after deployment. Include security requirements in the initial design review, even for prototypes, because prototype patterns become production patterns.

Designing only for the happy path leaves a system that cannot recover gracefully from sensor failures, connectivity outages, or cloud unavailability. Explicitly design and test the behavior for each failure mode and ensure devices fall back to a safe, locally functional state during outages.

14.6 What’s Next

  • Recommended Next: User Experience Design – Apply UX principles and Nielsen’s heuristics to IoT systems
  • Previous Chapter: Pitfalls and Ethics – Common design pitfalls and ethical research principles
  • Series Overview: Understanding People and Context – Full chapter series overview