9  User Research Fundamentals for IoT

9.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Explain the importance of user research for IoT product success
  • Identify common design pitfalls including the curse of knowledge and assumption-based design
  • Distinguish between stated preferences and revealed behavior in user research
  • Analyze how context shapes user interaction with IoT devices

MVU: Minimum Viable Understanding

Core concept: IoT success depends on observing how real people behave in real contexts - not on designer assumptions about user needs or technical elegance.

Why it matters: The “curse of knowledge” causes engineers to overestimate user sophistication, leading to products that work technically but fail practically because they ignore physical, social, and situational factors.

Key takeaway: If you haven’t watched real users struggle with your device in their actual environment (kitchen, factory floor, hospital), you don’t understand your users yet - observation reveals what interviews cannot.

Key Concepts

  • User Research: Systematic investigation of users’ goals, behaviors, constraints, and contexts through observation and questioning rather than speculation or projection.
  • Curse of Knowledge: Cognitive bias in which designers who understand a system cannot imagine not understanding it, causing them to overestimate user sophistication.
  • Stated vs. Revealed Behavior: The gap between what users say they want and what they actually do; observed behavior is the more reliable signal.
  • Context of Use: The physical, social, temporal, and technical factors that shape how someone interacts with a device.
  • Contextual Inquiry: A research method combining observation and interviewing in the user’s real environment over an extended session.
  • Persona: A composite character, synthesized from research findings, that represents a real type of user.
  • Edge Case: A condition designers treat as rare (e.g., a Wi-Fi outage) that turns out to be common in real-world deployment.

9.2 Introduction

The most sophisticated IoT technology will fail if it doesn’t align with how people actually live, work, and think. Understanding people–their goals, behaviors, constraints, and contexts–is the foundation of successful IoT design.

This understanding cannot be achieved through speculation or projection; it requires systematic investigation using rigorous user research methodologies.

Figure 9.1: Diagram showing consequences of assumption-based design versus user-centered design

Think of it like this: You wouldn’t design a kitchen without asking who’s cooking.

A chef needs different tools than someone who only microwaves meals. Similarly, an IoT system designed for tech-savvy engineers will confuse elderly users who just want their home to be warmer.

The problem with designing for yourself:

| Designer Thinks… | Reality |
| --- | --- |
| “Everyone knows what MQTT means” | Most users have never heard of it |
| “Just check the app for status” | User’s phone is in another room |
| “The manual explains it” | No one reads manuals |

What is “context”?

Context is everything about WHERE, WHEN, and HOW someone uses your IoT device:

| Context Factor | Questions to Ask |
| --- | --- |
| Physical | Bright or dark? Noisy or quiet? Wet hands? |
| Social | Alone or with family? Public or private? |
| Temporal | Morning rush? Relaxing evening? Emergency? |
| Technical | Strong Wi-Fi? Old phone? Multiple devices? |

Key insight: Good IoT design starts with understanding people first, technology second. The best sensor in the world is useless if people can’t figure out how to use it.

Understanding People is like being a detective who figures out what people REALLY need, not just what they SAY they need!

9.2.1 The Sensor Squad Adventure: The Mystery of the Unused Smart Light

The Sensor Squad had built an amazing smart light system for Grandma Rose’s house. It had every feature they could imagine: voice control, color changing, motion sensing, app controls, and even a dance party mode! But when they visited Grandma Rose two weeks later, she was sitting in the dark, using her old lamp instead.

“Why aren’t you using our awesome smart light?” asked Sunny the Light Sensor, feeling puzzled. Grandma Rose smiled kindly: “Well, I tried saying ‘turn on the lights’ but I felt silly talking to my ceiling. And the app has so many buttons, I couldn’t find the simple ON switch. I just wanted light to read my book!”

Motion Mo the Motion Detector had an idea: “Let’s WATCH how Grandma Rose actually uses her old lamp instead of guessing what she wants.” They observed that Grandma Rose always sat in the same chair, always read at the same time, and always reached for her lamp switch without looking. “I’ve got it!” said Mo. “What if I sense when Grandma sits in her reading chair and automatically turn on the light at just the right brightness for reading?”

Power Pete nodded wisely: “The lesson is this: before you build anything, you need to become a detective. Watch real people. Ask questions. Understand their world. The best IoT device isn’t the one with the most features - it’s the one that solves a real problem for a real person in their real life!”

9.2.2 Key Words for Kids

| Word | What It Means |
| --- | --- |
| User Research | Being a detective who watches and asks questions to understand what people truly need |
| Context | Everything about WHERE, WHEN, and HOW someone uses a device |
| Persona | A pretend character you create that represents a real type of user |

9.3 Why Understanding Users Matters

Designers and engineers often fall into the trap of designing for themselves–creating systems that make sense to technically-minded creators but baffle actual users.

Figure 9.2: Diagram showing four common user research mistakes

9.3.1 The Curse of Knowledge

Problem: Once you understand how something works, you cannot imagine not understanding it.

Example:

  • Engineer knows: Smart thermostat learns from manual adjustments
  • User sees: Random temperature changes and thinks device is broken

Solution: Observe users who encounter system fresh, without designer’s insider knowledge.

9.3.2 Edge Cases Are Common Cases

Designer assumption: Wi-Fi is always available

Reality: Many users have spotty connectivity, visitors have no network access, power outages occur

Example: Smart lock that won’t open during Wi-Fi outage creates emergency situation.

Edge case frequency: If 5% of users experience Wi-Fi outages weekly, and your smart lock requires internet for unlock, then for 10,000 deployed locks: \(N_{lockouts} = 10,000 \times 0.05 \times 52 \text{ weeks} = 26,000\) annual lockout events. At \(\$150\) per emergency locksmith call, that’s \(\$3.9M\) in user costs—vastly exceeding the device revenue.
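The lockout arithmetic above can be sketched as a tiny cost model (the figures are the chapter's illustrative assumptions, not real product data):

```python
def annual_lockout_cost(deployed_units, weekly_outage_rate, cost_per_event):
    """Expected yearly lockout events and total user cost for a
    cloud-dependent smart lock (simple multiplicative model)."""
    events = deployed_units * weekly_outage_rate * 52  # events per year
    return events, events * cost_per_event

events, cost = annual_lockout_cost(10_000, 0.05, 150)
print(f"{events:,.0f} lockouts/year, ${cost:,.0f} in user costs")
# prints: 26,000 lockouts/year, $3,900,000 in user costs
```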

Stated vs. revealed behavior gap: Users claim they want features with average rating \(\bar{x}_{stated} = 4.2/5\), but actual usage logs show \(p_{use} = 0.15\) (15% actually use the feature). The correlation between stated preference and revealed behavior is typically \(r < 0.3\), meaning stated preferences explain less than 9% of actual usage variance: \(r^2 < 0.3^2 = 0.09\).

Curse of knowledge cost: Engineers design for themselves (95% setup success) but real users achieve only 61% success. The cognitive distance is \(\Delta_{knowledge} = \log_2\left(\frac{0.95}{0.61}\right) = 0.64\) bits of information gap. Bridging this costs ~$35K in UX research but saves \(\$280K\) in returns: ROI = \(\frac{280-35}{35} = 700\%\).
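The three back-of-envelope figures above can be checked in a few lines (all inputs are the chapter's assumed numbers):

```python
import math

# Stated vs. revealed: r < 0.3 caps variance explained below 0.3^2 = 9%.
r_max = 0.3
print(f"variance explained < {r_max ** 2:.0%}")  # < 9%

# "Cognitive distance" between 95% engineer and 61% user setup success.
gap_bits = math.log2(0.95 / 0.61)
print(f"knowledge gap = {gap_bits:.2f} bits")  # 0.64 bits

# ROI of $35K of UX research that avoids $280K in return costs.
roi = (280_000 - 35_000) / 35_000
print(f"ROI = {roi:.0%}")  # 700%
```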


9.3.4 Stated Preferences vs. Revealed Behavior

Figure 9.3: Comparison diagram showing disconnect between what users say they want versus what they actually do

Lesson: Observe actual behavior, don’t just ask what users want.

9.3.5 Context Shapes Interaction

Same person behaves differently in different contexts:

| Context | Behavior |
| --- | --- |
| Sitting at desk | Carefully reads instructions, explores features |
| Rushing to leave | Skips all text, wants immediate action |
| After work, tired | Low patience for complexity |
| Weekend, relaxed | Willing to tinker and experiment |

Implication: IoT systems must work across varied contexts, moods, and time pressures.

Common Misconception: “We Don’t Need User Research–We Are the Users”

The Myth: “Our team uses smart home devices, so we understand what users want. User research would just confirm what we already know.”

The Reality: This assumption causes 60-70% of IoT product failures in the first year post-launch.

Quantified Evidence:

  1. Representative sampling gap: When products are designed by engineers for engineers without user research, adoption rates among mainstream consumers average 12-15% vs. 40-60% for user-research-informed products.

  2. Demographic blindness: A 2022 study of 47 smart home products found that 83% were tested only with employees aged 25-40, yet 65% of the target market was 45+ years old with different physical abilities and technology comfort levels.

  3. Context invisibility: Lab testing misses 78% of real-world failure modes. Example: Smart lock tested in quiet office works perfectly, but fails when user arrives home with groceries, kids, and 10% phone battery.

Cost of assumption-based design:

  • $2.5-5M average wasted on features users don’t use
  • 40-60% higher customer support costs
  • 25-40% customer return rates vs. 8-12% for user-research-informed products

The fix: Systematic user research with representative participants (8-15 per user segment), contextual observation (2-4 hours in real environments), and behavioral data analysis. Investment: 3-5% of product budget. Return: 40-60% reduction in post-launch redesign costs.

9.4 Case Study: Philips Hue Bridge vs Direct Wi-Fi Setup

Philips Hue originally required a dedicated “Bridge” device (a small box connected to the router via Ethernet) before any smart bulbs would work. This created a UX dilemma:

The problem: Users expected to screw in a bulb and control it from their phone immediately. The Bridge added $60 to the cost, required an Ethernet cable, and added 15-20 minutes to first-time setup. User research showed that 23% of buyers returned the product within 7 days, citing “too complicated” – most never got past Bridge setup.

The trade-off: The Bridge provided reliable Zigbee communication (no Wi-Fi congestion), supported up to 50 bulbs, and enabled local automation without cloud dependency. Engineers knew the Bridge was technically superior, but users experienced it as a barrier.

The user research insight: Contextual observation of 40 first-time users revealed three critical findings:

  1. Mental model mismatch: Users expected “install bulb, download app, done” (3 steps). The Bridge added 6 additional steps (unbox bridge, find Ethernet cable, connect to router, wait for LED, open app, discover bridge).
  2. Physical context: 65% of users did not have their router in an accessible location – it was in a closet, basement, or behind furniture. Running an Ethernet cable was a significant physical barrier.
  3. Abandonment trigger: The single largest drop-off point was “connect Ethernet cable to router.” Users who completed this step had 94% completion rate for the remaining setup. But 31% abandoned at this step.

Resolution (2020+): Philips introduced Bluetooth-enabled Hue bulbs that work without a Bridge for basic control (up to 10 bulbs). The Bridge remains available for advanced users who need larger networks and automation. This tiered approach reduced returns by 58% while preserving the technical advantages for power users.

Lesson: User research revealed that the “correct” technical architecture (Bridge-based Zigbee) conflicted with user mental models and physical contexts. The solution was not abandoning the Bridge but providing a simpler entry point that matched user expectations.

9.5 Practical Framework: Conducting a 2-Hour Contextual Inquiry

For teams new to user research, the following framework walks through a single contextual inquiry session: 30 minutes of preparation, 90 minutes of observation, and 30 minutes of synthesis.

Before the visit (30 min preparation):

  • Define 3-5 specific questions you want to answer (not “is our product good?” but “how do users discover the scheduling feature?”)
  • Prepare a one-page observation guide listing behaviors to watch for
  • Bring: notebook, camera (with permission), and your prototype or product
  • Do NOT bring: presentation slides, feature lists, or marketing materials

During the visit (90 min observation):

| Phase | Duration | Activity | Key Technique |
| --- | --- | --- | --- |
| Rapport | 10 min | Chat, explain purpose, get consent | “I’m here to learn from you, not test you” |
| Tour | 15 min | Ask user to show their current setup | “Walk me through how you use this today” |
| Observation | 40 min | Watch user perform real tasks | Stay quiet, take notes on BEHAVIOR not opinions |
| Probing | 15 min | Ask about observed behaviors | “I noticed you checked the app twice – what were you looking for?” |
| Wrap-up | 10 min | Summary, open questions, thank user | “What would make this easier?” |

After the visit (30 min synthesis):

  • Write up 3-5 key observations within 1 hour (memory fades fast)
  • Separate observations (what happened) from interpretations (why it happened)
  • Note surprises – anything that contradicted your assumptions is valuable data
  • Identify quotes that capture user frustrations or workarounds

Sample size guidance: 5-8 users per user segment reveals approximately 80% of usability issues. Diminishing returns begin after 8 users. For IoT products with multiple user types (installer, daily user, administrator), recruit 5 participants per type.
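The "5-8 users per segment" guidance follows the standard problem-discovery model, \(P(\text{found}) = 1 - (1 - p)^n\), where \(p\) is the chance a single participant encounters a given issue (p ≈ 0.31 is the commonly cited average; treat both numbers as assumptions, not constants):

```python
def discovery_rate(n_users, p_single=0.31):
    """Fraction of usability issues expected to surface with n users,
    assuming each user independently hits an issue with probability p."""
    return 1 - (1 - p_single) ** n_users

for n in (1, 3, 5, 8, 12):
    print(f"{n:2d} users -> {discovery_rate(n):.0%} of issues found")
# 5 users find roughly 84%, and the gains flatten sharply after 8
```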

9.6 Common IoT User Research Mistakes

| Mistake | Example | Why It Fails | Better Approach |
| --- | --- | --- | --- |
| Testing in the lab only | Smart thermostat tested at desk | Misses context: gloved hands, distance viewing, interruptions | Test in actual homes at varying times of day |
| Leading questions | “Don’t you think this feature is useful?” | Social desirability bias – users agree to be polite | “Tell me about a time you adjusted the temperature” |
| Engineer-only testing | Team of 25-35 year old developers test a senior care device | Misses motor limitations, vision needs, technology anxiety | Recruit users matching actual demographics (age 65+) |
| Ignoring non-users | Only talking to smart home enthusiasts | Misses the 80% who find smart homes intimidating | Include technology-skeptical participants |
| Single session research | One 30-minute test before launch | Misses longitudinal effects: novelty wear-off, habit formation | Week-long diary studies reveal real usage patterns |

Worked Example: Smart Thermostat Redesign

Initial Design (assumption-based, by engineers):

  • 7-inch touchscreen with detailed energy analytics dashboard
  • 15+ configuration screens for advanced scheduling
  • Mobile app required for all settings
  • Assumes users will study energy reports and optimize manually

Deployment Result: 38% of users never complete setup, 72% never open energy analytics, support calls average 12 minutes.

Contextual Research (8 home visits, 3 hours each):

Visit 1: Elderly couple, 2-bedroom home

  • Observation: Wife adjusts thermostat 3x during observation, husband never touches it
  • Wife squints at touchscreen: “The numbers are too small”
  • She presses “Schedule” button by accident, gets lost in menus, gives up
  • Insight: Touchscreen overwhelms non-technical users; they want simple up/down control

Visit 2: Young professional, busy schedule

  • Observation: Sets thermostat to 68°F in morning, never touches it again all day
  • When asked about scheduling: “I tried setting it up once, took 20 minutes, I gave up”
  • Phone app has 47 unopened notifications from thermostat
  • Insight: Users want “set it and forget it,” NOT complex scheduling workflows

Visit 3: Family with young children

  • Observation: Kids constantly adjust temperature (playing with buttons)
  • Dad: “I wish it would just LEARN what we want and do it automatically”
  • House temperature swings 8°F throughout day from random adjustments
  • Insight: Need auto-learning + child lock, not more manual controls

Visit 4: Tech-savvy early adopter

  • Observation: This user DID set up complex schedules and reads energy reports
  • BUT: Represents <5% of user base (not primary persona)
  • Insight: Advanced features should exist but be hidden from mainstream users

Research Synthesis (patterns across all 8 visits):

| Observation | Frequency | Implication |
| --- | --- | --- |
| Users adjust temperature 2-4x per day | 8/8 users | Need simple up/down control |
| Users never open energy analytics | 7/8 users | Hide analytics in “Advanced” section |
| Users struggle with scheduling UI | 6/8 users | Auto-learning better than manual scheduling |
| Users want voice control | 5/8 users | Add voice as primary interface option |
| Users accidentally press wrong buttons | 6/8 users | Large, clearly labeled buttons only |

Redesigned Thermostat:

Primary Interface (used by 95% of users):

  • Physical dial for temperature adjustment (tactile, simple)
  • Large LED display showing current temp (readable from 10 feet)
  • Three modes: “Home” (occupied), “Away” (vacant), “Sleep” (night)
  • Auto-learning: thermostat observes manual adjustments for 1 week, then suggests schedule

Secondary Interface (used by 5% power users):

  • Mobile app with detailed analytics (but defaults to simple view)
  • Advanced scheduling (7-day, different zones, geofencing)
  • Energy reports (monthly summary, not daily spam)

Voice Interface (used by 40% of users occasionally):

  • “Hey Google, set temperature to 70”
  • “What’s the temperature?”
  • No complex voice commands (studies show users won’t use them)
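The auto-learning behavior in the primary interface can be sketched as a simple observe-then-suggest loop (the event format, time blocks, and one-week window are illustrative assumptions):

```python
from collections import defaultdict
from statistics import median

# One week of logged manual adjustments: (hour_of_day, setpoint_F)
events = [(7, 68), (7, 69), (8, 68), (18, 71), (19, 70), (22, 65), (23, 64)]

def suggest_schedule(events):
    """Group a week of manual setpoint changes into coarse time blocks
    and suggest the median setpoint for each block the user touched."""
    blocks = {"morning": range(5, 11), "day": range(11, 17),
              "evening": range(17, 22), "night": range(22, 24)}
    by_block = defaultdict(list)
    for hour, temp in events:
        for name, hours in blocks.items():
            if hour in hours:
                by_block[name].append(temp)
    return {name: median(temps) for name, temps in by_block.items()}

print(suggest_schedule(events))
# {'morning': 68, 'evening': 70.5, 'night': 64.5} - no daytime data, so no daytime suggestion
```

A production thermostat would also need the child lock and a confidence threshold before acting on a suggested schedule, per the research findings above.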

Results After Redesign:

  • Setup completion: 38% → 89%
  • Support calls: 12 min average → 3 min average
  • User satisfaction (SUS): 52 → 78
  • Energy analytics usage: 2% → 15% (but optional, not forced)

Key Research Insights Applied:

  1. Curse of Knowledge: Engineers assumed users would love detailed energy data. Reality: users just want comfort.
  2. Stated vs. Revealed: Users said they wanted scheduling features. Observation: they never used them. Solution: auto-learning eliminates manual scheduling.
  3. Context Matters: Elderly users need large text. Families need child locks. Busy professionals need “set and forget.”

Cost of Assumption-Based Design:

  • Original: $2.5M engineering cost + $1.2M support costs (Year 1) + poor reviews
  • Redesign: $800K research + redesign cost, but saved $1M/year in support + 40% increase in positive reviews

Key Insight: Contextual research revealed that users wanted simplicity, not features. The original design optimized for the wrong metric (feature count) instead of the right metric (ease of use). Spending $800K on research and redesign saved $1M+/year in support costs and transformed a product with mediocre reviews into one with strong customer satisfaction.

Decision Framework: When Is User Research Justified?

Use this framework to determine whether user research is justified for your IoT project:

Conduct User Research When (Green Light):

| Scenario | Why Research Matters | ROI |
| --- | --- | --- |
| New product category | No existing data on user behavior | High - prevents building wrong product |
| Multiple user types | Engineers can’t represent diverse users | High - ensures accessibility |
| High deployment cost | Mistakes expensive to fix post-launch | Critical - hardware can’t be patched |
| Safety-critical | Usability errors could cause harm | Critical - regulatory/legal requirement |
| Competitive market | UX is key differentiator | High - better UX → market share |
| B2C product | Serves diverse consumer population | High - can’t assume user sophistication |

Consider Skipping Research When (Yellow Light):

| Scenario | Alternative Approach | Risk |
| --- | --- | --- |
| Incremental improvement | A/B test with existing users | Low - data already exists |
| Internal tool | Survey actual users directly | Low - users accessible |
| Tight deadline | Expert heuristic evaluation | Medium - may miss novel insights |
| Very limited budget | Guerrilla testing (5 users) | Medium - small sample |

Never Skip Research When (Red Light):

  • Physical device with long replacement cycle (users stuck with mistakes for years)
  • Medical/safety applications (usability errors could cause harm)
  • Serving vulnerable populations (elderly, disabled, children)
  • No prior IoT product experience on team (curse of knowledge unchecked)

Budget-Conscious Research Strategies:

| Budget | Approach | What You Learn |
| --- | --- | --- |
| $0 | Guerrilla testing (recruit at coffee shops), analytics review | Basic usability issues |
| <$5,000 | 5 user interviews + 5 usability tests (recruit via social media) | Core pain points, major usability issues |
| $5k-$20k | 8 contextual inquiries + iterative usability testing | Deep understanding, validated design |
| $20k+ | Full discovery research + quantitative validation | Professional-grade insights |

ROI Calculation:

Example: Smart lock product

Scenario A: Skip research, rush to market

  • Cost: $0 research
  • Result: 35% return rate (confusing setup), $150 support cost per return, 200,000 units sold
  • Total cost: $10.5M in returns and support

Scenario B: Invest in research

  • Cost: $50K research (8 contextual inquiries + 16 usability tests)
  • Result: 8% return rate (simplified setup based on research insights), $50 support cost per return, 200,000 units sold
  • Total cost: $50K research + $800K returns = $850K
  • Savings: $9.65M (a 92% reduction in total cost; roughly a 190x return on the $50K research investment)
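The two scenarios can be compared with a one-line cost model (all figures are the chapter's illustrative assumptions):

```python
def total_cost(units, return_rate, cost_per_return, research_cost=0):
    """Research spend plus expected return-handling cost."""
    return research_cost + units * return_rate * cost_per_return

skip   = total_cost(200_000, 0.35, 150)           # Scenario A: no research
invest = total_cost(200_000, 0.08, 50, 50_000)    # Scenario B: $50K research
print(f"A: ${skip:,.0f}  B: ${invest:,.0f}  saved: ${skip - invest:,.0f}")
# A: $10,500,000  B: $850,000  saved: $9,650,000
```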


Time-Boxed Research for Agile Teams:

If you only have 2 weeks:

  • Week 1: 5 contextual inquiries (3 hours each = 15 hours observation + 15 hours synthesis)
  • Week 2: 5 usability tests with prototype (5 hours testing + 10 hours iteration)
  • Output: Core insights + validated design direction

If you only have 1 week:

  • Days 1-3: 3 contextual inquiries (condensed to 2 hours each)
  • Days 4-5: 5 usability tests with paper prototypes
  • Output: Major usability issues identified and fixed

Decision Tree:

  1. Is this a brand new product? → YES: Research required | NO: Continue
  2. Do you understand your users deeply? → NO: Research required | YES: Continue
  3. Is budget >$50K? → YES: Research affordable | NO: Continue
  4. Is post-launch fix cost >$10K? → YES: Research justified by ROI | NO: Consider skipping
  5. Is it safety-critical? → YES: Research legally/ethically required | NO: Optional
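The decision tree above maps directly onto a short function (a sketch; the thresholds come from the text, the parameter names are hypothetical):

```python
def research_decision(new_product, understand_users_deeply, budget_usd,
                      post_launch_fix_cost_usd, safety_critical):
    """Walk the five-step decision tree in order and return the verdict."""
    if new_product:
        return "research required"
    if not understand_users_deeply:
        return "research required"
    if budget_usd > 50_000:
        return "research affordable"
    if post_launch_fix_cost_usd > 10_000:
        return "research justified by ROI"
    if safety_critical:
        return "research legally/ethically required"
    return "optional"

print(research_decision(False, True, 20_000, 50_000, False))
# -> research justified by ROI
```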

Key Insight: User research ROI is highest when mistakes are expensive to fix. Physical IoT devices cost 10-100x more to fix post-launch than software apps (can’t push patches to hardware). Spending 2-5% of budget on research can prevent 50-90% of post-launch support costs.

Common Mistake: Testing with Non-Representative Users

The Mistake: Recruiting users who don’t match your target demographic—most commonly, testing only with engineers, early adopters, or young tech-savvy users when the target market is mainstream consumers.

Why It Fails:

User research requires representative participants who reflect your actual users’ abilities, contexts, and technology sophistication.

Real-World Example: Smart Pill Dispenser

Target Users: Elderly adults (65-85 years old) managing multiple medications, many with arthritis, vision impairment, or cognitive decline.

Who They Tested With: Engineering team (25-40 years old, tech-savvy), company executives, a few “tech-forward seniors” recruited from a smart home enthusiast forum.

Test Results: 92% task success rate, average setup time 8 minutes, positive feedback.

Actual Launch Results: 73% return rate within 30 days, customer reviews: “Too complicated,” “Can’t read the screen,” “Buttons too small.”

What Went Wrong:

| Test Users | Actual Users | Impact |
| --- | --- | --- |
| Age 25-40 | Age 65-85 | Missed vision impairments (need 18pt+ font, 4.5:1 contrast) |
| Tech-savvy | Low tech literacy | Missed confusion over Wi-Fi setup, app pairing |
| Good dexterity | Arthritis (40% of target) | Missed difficulty pressing small 8mm buttons |
| Early adopters | Mainstream/skeptical | Missed anxiety about “another gadget to learn” |
| Enthusiast mindset | Pragmatic/tired | Missed “I just want it to work” expectation |

The Cost:

  • 200,000 units shipped
  • 146,000 returns ($150 refund + $30 restocking = $26.3M cost)
  • Engineering redesign: $1.2M
  • Brand damage: -35% customer trust score

How to Fix It:

Step 1: Define Representative Users

Create recruitment screener based on actual user demographics:

| Criteria | Target | Why It Matters |
| --- | --- | --- |
| Age | 65-85 (match target) | Vision, dexterity, tech familiarity differ by age |
| Tech skills | “I can send email but struggle with apps” | Mainstream users, not early adopters |
| Health | Managing 3+ medications | Understand real complexity |
| Living situation | 50% live alone, 50% with caregiver | Different use contexts |
| Vision | 40% wear reading glasses | Test readability |
| Dexterity | 30% report arthritis | Test button size, lid opening |

Step 2: Recruit Correctly

Good Recruitment Sources:

  • Senior centers (mainstream seniors, not just tech-forward)
  • Pharmacies (real medication users)
  • Caregiver support groups
  • Healthcare providers (ethical approval needed)

Bad Recruitment Sources:

  • Smart home enthusiast forums (selects for tech-forward)
  • Employee friends/family (familiarity bias)
  • Online panels offering $100 incentive (attracts professional testers)

Step 3: Screen Rigorously

Sample Screener Questions:

  1. “How comfortable are you with smartphones?” → Reject if “Very comfortable”
  2. “Do you currently manage multiple daily medications?” → Require “Yes”
  3. “Have you participated in user research in the past 6 months?” → Reject if “Yes” (avoid professional participants)
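The screener above can be expressed as a simple filter (the criteria follow the three sample questions; the candidate field names are hypothetical):

```python
def passes_screener(candidate):
    """Apply the three sample screener rules; True means recruit."""
    if candidate["smartphone_comfort"] == "very comfortable":
        return False  # rejects early adopters / tech-forward users
    if not candidate["manages_multiple_medications"]:
        return False  # must be a real multi-medication user
    if candidate["recent_research_participant"]:
        return False  # avoids professional test participants
    return True

candidates = [
    {"smartphone_comfort": "somewhat comfortable",
     "manages_multiple_medications": True,
     "recent_research_participant": False},
    {"smartphone_comfort": "very comfortable",
     "manages_multiple_medications": True,
     "recent_research_participant": False},
]
print([passes_screener(c) for c in candidates])  # [True, False]
```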

Step 4: Test in Context

  • Test in participant’s home (not lab) to see real environment
  • Test at realistic times (morning medication routine, not mid-afternoon)
  • Allow distractions (phone calls, caregiver interruptions)

Correct Sample Composition:

For a target market of elderly medication users:

  • 8-12 participants total
  • Age: 65-85 (average 73)
  • 60% women (reflect actual demographics of chronic medication users)
  • 40% with arthritis or tremor
  • 40% with vision impairment (corrected with glasses)
  • Mix of living situations (alone vs. with caregiver)
  • Tech skill range: 2 beginners, 4 intermediate, 2 advanced

Red Flags You Have Wrong Users:

  • ❌ Participants say “I love gadgets!”
  • ❌ Average age 20 years younger than target
  • ❌ No participants struggle with technology
  • ❌ All participants complete tasks easily (95%+ success)
  • ❌ Participants offer unsolicited technical suggestions

Success Indicators:

  • ✅ Participants demographically match target market
  • ✅ Some participants struggle (reveals real issues)
  • ✅ Diverse technology comfort levels represented
  • ✅ Contextual factors (vision, dexterity) observed
  • ✅ Test uncovers issues your team didn’t anticipate

Key Insight: Testing with the wrong users produces misleading results. Engineers testing an elderly medication dispenser learn nothing about readability for vision-impaired users. Spending $15K on research with non-representative users wastes money—better to spend $10K on 8 representative users than $15K on 12 wrong users. Get the participants right or don’t bother.

9.7 How It Works

User research for IoT follows a systematic process:

  1. Recognize the Problem: Designers suffer from the curse of knowledge (cannot imagine not understanding the system)
  2. Plan Research: Identify what to learn (user goals, pain points, contexts), recruit representative users (not early adopters)
  3. Conduct Observation: Watch users in natural environments (2-4 hours contextual inquiry reveals real behavior)
  4. Separate Stated vs. Revealed: Note what users SAY they want vs. what they actually DO
  5. Analyze Context: Document physical, social, temporal, and technical factors that shape interaction
  6. Synthesize Insights: Identify patterns across users, separate observations from interpretations
  7. Validate Assumptions: Test designs with real users in real contexts before launch

9.8 Concept Relationships

User research fundamentals connect to the entire UX design process:

  • Research Methods → provides specific techniques (contextual inquiry, interviews) to gather the insights this chapter explains are essential
  • Personas and Journey Maps → personas synthesize research findings into actionable design tools that prevent assumption-based design
  • Context-of-Use Analysis → expands on how physical, social, temporal, technical, and cultural factors shape user behavior
  • Pitfalls and Ethics → addresses how to avoid sampling bias and conduct research ethically
  • UX Design → research insights inform User Experience Design decisions throughout the product lifecycle

9.9 See Also

  • Research Methods – specific techniques such as contextual inquiry, interviews, and observation
  • Personas and Journey Maps – turning research findings into actionable design tools
  • Pitfalls and Ethics – avoiding sampling bias and conducting research ethically

Common Pitfalls


Testing with technically sophisticated internal users systematically misses the challenges faced by mainstream users. Recruit from a screener matching the target demographic distribution including users with limited technical experience.

Assuming you understand user needs because you are also a potential user leads to building features users do not want and missing pain points obvious only in retrospect. Budget at least 5 user interviews before committing to any feature; 5 representative users typically surface 85% of usability issues.

Delivering a research report describing user behaviour without translating it into specific design implications leaves the product team unsure how to act. For every observed pain point, provide at least one corresponding design recommendation with a rationale linking it back to the research data.

9.10 Summary

This chapter introduced the fundamental principles of user research for IoT design:

  • The Curse of Knowledge prevents designers from seeing their products through novice eyes
  • Edge Cases are Common Cases in real-world deployment
  • Stated Preferences vs. Revealed Behavior means you must observe what users do, not just ask what they want
  • Context Shapes Interaction across physical, social, temporal, and technical dimensions

In 60 Seconds

User research for IoT uncovers the real contexts, mental models, and pain points that users bring to connected devices—insights that prevent building technically impressive products that nobody wants to use.

9.11 What’s Next

Next Chapter: Research Methods – specific techniques like contextual inquiry, interviews, and observation
Previous Chapter: Understanding People and Context – series overview and prerequisites
Series Overview: Understanding People and Context – full chapter series overview