1635  Ideate, Prototype, and Test

1635.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Apply Ideation Techniques: Use brainstorming, Crazy 8s, mind mapping, and SCAMPER to generate diverse solution concepts
  • Prioritize Ideas: Evaluate concepts using the Impact vs Effort 2x2 matrix framework
  • Select Prototype Fidelity: Choose appropriate prototype levels (paper, breadboard, Wizard of Oz, functional) for different learning goals
  • Conduct User Testing: Execute think-aloud protocols, A/B tests, and usability measurements
  • Interpret Test Results: Analyze user feedback to determine pivot, iterate, or proceed decisions

1635.2 Prerequisites

1635.3 Stage 3: Ideate

1635.3.1 Brainstorming Techniques

After defining the problem, generate diverse potential solutions. The goal is quantity—evaluation comes later.

Tip: Ideation Techniques

1. Classic Brainstorming
  • Set a timer (15-30 minutes)
  • No criticism during generation
  • Build on others’ ideas (“Yes, and…”)
  • Aim for quantity (50+ ideas)
  • Wild ideas welcome

2. Crazy 8s
  • Fold a paper into 8 sections
  • Sketch 8 different solutions in 8 minutes
  • Forces rapid idea generation
  • Great for visual concepts

3. Mind Mapping
  • Central problem in the middle
  • Branch out to related concepts
  • Sub-branches for variations
  • Reveals unexpected connections

4. SCAMPER: systematic modification of existing solutions
  • Substitute: What if we use different materials/sensors?
  • Combine: What if we merge with another device?
  • Adapt: What solutions to similar problems can we borrow?
  • Modify: What if we make it bigger/smaller/stronger?
  • Put to other use: What else could this device do?
  • Eliminate: What features can we remove?
  • Reverse: What if we flip the user’s role?

1635.3.2 Evaluating Ideas

Use the 2×2 Impact vs Effort matrix to prioritize ideas:

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#fff', 'fontSize': '13px'}}}%%
quadrantChart
    title Impact vs Effort Matrix for IoT Solution Ideas
    x-axis Low Effort --> High Effort
    y-axis Low Impact --> High Impact
    quadrant-1 Major Projects - Plan carefully
    quadrant-2 Quick Wins - Do first!
    quadrant-3 Fill-ins - If time permits
    quadrant-4 Time Sinks - Avoid!

    Bluetooth reminder: [0.3, 0.8]
    Auto-dispenser robot: [0.85, 0.9]
    Simple LED indicator: [0.2, 0.7]
    Voice-activated pills: [0.7, 0.5]
    Pill tracking app: [0.5, 0.6]
    AI health predictor: [0.9, 0.75]
    Basic timer alarm: [0.15, 0.4]
    Pharmacy integration: [0.8, 0.65]

Figure 1635.1: Impact vs Effort Quadrant Analysis: Prioritizing Smart Pill Bottle Features

Impact vs Effort Matrix: Ideas in the upper-left quadrant (high impact, low effort) are “Quick Wins” - prioritize these first. “Simple LED indicator” (shows pill taken/not taken) offers the best value. “Auto-dispenser robot” has high impact but requires significant effort - plan it carefully. “Basic timer alarm” is easy but low impact - users already have phone alarms.

When you have 5-10 specific ideas to compare, a table format enables more detailed evaluation across multiple criteria.

| Solution Idea | Impact | Effort | Risk | Score |
|---------------|--------|--------|------|-------|
| Simple LED indicator | High (direct feedback) | Low (1 week) | Low | 9/10 |
| Bluetooth + app reminder | High | Medium (3 weeks) | Medium | 7/10 |
| Voice-activated pills | Medium | High (8 weeks) | High | 4/10 |
| AI health predictor | Medium | Very High (16+ weeks) | Very High | 2/10 |
| Auto-dispenser robot | High | Very High (20+ weeks) | Very High | 3/10 |

Figure 1635.2: Table format for comparing IoT solution ideas when detailed scoring is needed. Score = (Impact × 3) + (10 - Effort) + (10 - Risk), normalized to 10.
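To make the caption’s formula concrete, here is a small standalone C++ sketch. The numeric ratings (0-10 scales for impact, effort, and risk) are illustrative assumptions; the table above uses qualitative labels, so these numbers only approximate its scores:

```cpp
#include <iostream>
#include <string>

// Weighted score from the caption: (Impact x 3) + (10 - Effort) + (10 - Risk).
// The raw range is 0..50, so dividing by 5 normalizes it to 0..10.
double ideaScore(double impact, double effort, double risk) {
    return ((impact * 3.0) + (10.0 - effort) + (10.0 - risk)) / 5.0;
}

int main() {
    // Illustrative 0-10 ratings (assumptions, not the chapter's data).
    struct Idea { std::string name; double impact, effort, risk; };
    const Idea ideas[] = {
        {"Simple LED indicator", 9, 2, 2},
        {"Bluetooth + app reminder", 8, 5, 5},
        {"AI health predictor", 4, 10, 10},
    };
    for (const Idea& i : ideas) {
        std::cout << i.name << ": "
                  << ideaScore(i.impact, i.effort, i.risk) << "/10\n";
    }
    return 0;
}
```

Under these assumed ratings the output (8.6, 6.8, and 2.4) lands near the table’s 9/10, 7/10, and 2/10.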

1635.4 Stage 4: Prototype

1635.4.1 Prototype Fidelity Levels

Match prototype complexity to what you’re trying to learn:

Tip: Prototype Levels for IoT

Level 1: Paper Prototype
  • Sketches, storyboards, paper interfaces
  • Time: Hours
  • Cost: $0-10
  • Tests: Concept understanding, workflow, layout

Level 2: Breadboard Prototype
  • Arduino/ESP32 + sensors on a breadboard
  • Time: Days
  • Cost: $20-100
  • Tests: Technical feasibility, sensor accuracy

Level 3: Wizard of Oz Prototype
  • Fake “smart” behavior controlled by a hidden human
  • Time: Hours to days
  • Cost: Low
  • Tests: Interaction patterns, user expectations

Level 4: Functional Prototype
  • Working device with real connectivity
  • Time: Weeks
  • Cost: $100-500+
  • Tests: Complete user experience, real-world conditions

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#fff', 'fontSize': '14px'}}}%%
flowchart TD
    Q1{What are you<br/>testing?} --> |Concept & flow| P1[Paper Prototype<br/>Hours, $0]
    Q1 --> |Technical feasibility| P2[Breadboard<br/>Days, $50]
    Q1 --> |User interaction| P3[Wizard of Oz<br/>Hours, $20]
    Q1 --> |Full experience| P4[Functional<br/>Weeks, $200+]

    P1 --> T1[Learn: Do users<br/>understand the concept?]
    P2 --> T2[Learn: Can sensors<br/>detect what we need?]
    P3 --> T3[Learn: How do users<br/>expect it to respond?]
    P4 --> T4[Learn: Does it work<br/>in real conditions?]

    T1 -->|No| R1[Revise concept]
    T2 -->|No| R2[Try different sensors]
    T3 -->|No| R3[Adjust interaction model]
    T4 -->|No| R4[Refine engineering]

    R1 --> P1
    R2 --> P2
    R3 --> P3
    R4 --> P4

    style P1 fill:#16A085,stroke:#2C3E50,color:#fff
    style P2 fill:#2C3E50,stroke:#16A085,color:#fff
    style P3 fill:#E67E22,stroke:#2C3E50,color:#fff
    style P4 fill:#7F8C8D,stroke:#2C3E50,color:#fff

Figure 1635.3: Prototype Selection Decision Tree: Matching Fidelity to Learning Goals

Prototype Selection Decision Tree: Start with the cheapest prototype that answers your current question. Testing “do users understand the concept?” doesn’t require working electronics - paper prototypes suffice. Only build functional prototypes when you need to test real-world performance.

1635.4.2 Smart Pill Bottle Prototype Journey

Prototype v1: Paper (Testing Concept)
  • Paper cylinder with a drawn LED ring
  • “Press” a paper button to simulate interaction
  • Test question: “Do users understand what the light means?”
  • Result: Users were confused; a green light could mean “taken” or “ready to take”
  • Insight: Need clearer visual language (pulsing vs solid)

Prototype v2: Breadboard (Testing Hardware)
  • ESP32 + NeoPixel LED ring + piezo buzzer
  • Tracks time since the bottle was last opened
  • Test question: “Can we detect bottle opening reliably?”
  • Result: A hall-effect sensor with a magnet in the cap works better than a light sensor
  • Insight: Orientation-independent detection needed
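As a sketch of what v2’s detection firmware might look like: an Arduino-style loop for an ESP32 reading a digital hall-effect module, with the cap magnet holding the output LOW while closed. The pin number, debounce window, and active-low behavior are assumptions for illustration, not the chapter’s actual code:

```cpp
// Hypothetical ESP32 sketch: detect bottle opening with a hall-effect sensor.
// Assumes a digital sensor module whose output reads LOW while the cap magnet
// is near (cap closed) and HIGH when the cap is removed. Pin is illustrative.
const int HALL_PIN = 27;
const unsigned long DEBOUNCE_MS = 200;  // ignore brief magnetic glitches

bool capWasOn = true;
unsigned long lastChange = 0;
unsigned long lastOpenedAt = 0;         // "time since last bottle open"

void setup() {
  Serial.begin(115200);
  pinMode(HALL_PIN, INPUT_PULLUP);
}

void loop() {
  bool capOn = (digitalRead(HALL_PIN) == LOW);  // magnet present = closed
  unsigned long now = millis();

  if (capOn != capWasOn && (now - lastChange) > DEBOUNCE_MS) {
    lastChange = now;
    capWasOn = capOn;
    if (!capOn) {               // transition: closed -> open
      lastOpenedAt = now;
      Serial.println("Bottle opened");
    }
  }
}
```

Because the magnet-and-sensor pair works regardless of how the bottle sits, this addresses the orientation-independence insight that the light sensor could not.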

Prototype v3: Wizard of Oz (Testing Interaction)
  • Real bottle shell; a researcher controls the LEDs remotely
  • Test question: “What notification sequence works best?”
  • Result: A single long beep was ignored; short beeps every 2 minutes until the user responds worked
  • Insight: Users want the reminder to stop automatically once the pill is detected, not via manual dismissal
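The v3 finding is essentially a small notification state machine: chirp every 2 minutes until the bottle opens, then stand down without any user action. A hedged Arduino-style sketch; the pins, the 150 ms chirp, and starting in the reminding state are illustrative assumptions:

```cpp
// Hypothetical reminder state machine from the v3 finding: a short beep
// every 2 minutes until the bottle is opened, then stop automatically
// (no manual dismiss). Pins and timings are illustrative.
const int BUZZER_PIN = 14;   // active piezo buzzer
const int HALL_PIN   = 27;   // cap sensor, as in the v2 sketch
const unsigned long BEEP_INTERVAL_MS = 2UL * 60UL * 1000UL;

bool reminderActive = false;
unsigned long lastBeep = 0;

bool bottleOpened() {
  return digitalRead(HALL_PIN) == HIGH;  // magnet gone = cap off
}

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(HALL_PIN, INPUT_PULLUP);
  reminderActive = true;  // demo: start in the reminding state
}

void loop() {
  if (!reminderActive) return;

  if (bottleOpened()) {          // pill taken: dismiss automatically
    reminderActive = false;
    return;
  }
  unsigned long now = millis();
  if (lastBeep == 0 || now - lastBeep >= BEEP_INTERVAL_MS) {
    digitalWrite(BUZZER_PIN, HIGH);  // 150 ms chirp
    delay(150);
    digitalWrite(BUZZER_PIN, LOW);
    lastBeep = now;
  }
}
```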

Prototype v4: Functional (Testing Real Use)
  • Complete device with battery, Bluetooth, and app
  • Deployed to 5 users for 2 weeks
  • Test questions: “Does it actually improve adherence? What breaks?”
  • Results: Adherence improved from 78% to 94%; the button was too small for arthritic hands; the battery lasted only 3 weeks (target: 3 months)
  • Insights: Larger touch surface; optimize power sleep modes
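The battery result (3 weeks measured against a 3-month target) is the classic symptom of a device that never sleeps. A minimal sketch of the “optimize power sleep modes” insight on an ESP32, assuming the device only needs to wake hourly or when the cap comes off; the pin, interval, and wiring are illustrative, and the sensor module is assumed to drive its output line itself:

```cpp
// Hypothetical power fix for the v4 battery finding: deep-sleep between
// events instead of polling. An ESP32 in deep sleep draws on the order
// of 10 uA versus tens of mA when awake.
#include "esp_sleep.h"

const gpio_num_t HALL_PIN = GPIO_NUM_27;  // must be an RTC-capable pin
const uint64_t WAKE_INTERVAL_US = 60ULL * 60ULL * 1000000ULL;  // 1 hour

void setup() {
  Serial.begin(115200);

  // The chip restarts from setup() on every wake; branch on the cause.
  if (esp_sleep_get_wakeup_cause() == ESP_SLEEP_WAKEUP_EXT0) {
    Serial.println("Woke: bottle opened");   // log dose, update LED state...
  } else {
    Serial.println("Woke: periodic check");  // evaluate reminder schedule...
  }

  // Re-arm both wake sources, then sleep again.
  esp_sleep_enable_timer_wakeup(WAKE_INTERVAL_US);
  esp_sleep_enable_ext0_wakeup(HALL_PIN, HIGH);  // HIGH = cap removed
  esp_deep_sleep_start();                        // does not return
}

void loop() {}  // never reached; execution restarts at setup() on wake
```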

1635.5 Stage 5: Test

1635.5.1 User Testing Methods

Tip: Testing Approaches

1. Think-Aloud Protocol: Users verbalize thoughts while interacting.
  • “I see a light… I think that means… let me try pressing this…”
  • Reveals mental models and confusion points
  • Best for: Usability, learnability

2. A/B Testing: Compare two versions with different users.
  • Version A: Reminder at a fixed time (8am)
  • Version B: Reminder after wake-up motion is detected
  • Measure: Adherence rate, user preference
  • Best for: Feature decisions, optimization

3. Usability Metrics: Quantify the user experience (see the sketch after this list).
  • Task success rate: Can they complete the goal?
  • Time on task: How long does it take?
  • Error rate: How often do they make mistakes?
  • Satisfaction score: How do they rate the experience?

4. Field Testing: Deploy in a real environment for an extended period.
  • Natural context reveals issues that lab testing misses
  • Longer duration exposes reliability problems
  • Best for: Real-world validation, durability
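The four usability metrics fall out directly from logged test sessions. A minimal, self-contained C++ sketch; the Session fields, the sample numbers, and the 0-5 satisfaction scale are all illustrative assumptions:

```cpp
#include <iostream>
#include <vector>

// One logged test session per participant (illustrative structure).
struct Session {
    bool completedTask;    // task success
    double secondsOnTask;  // time on task
    int errors;            // mis-taps, wrong screens, etc.
    double satisfaction;   // post-task rating, assumed 0-5 scale
};

int main() {
    // Made-up data for five participants, not real results.
    std::vector<Session> sessions = {
        {true, 95, 1, 4.5}, {true, 140, 3, 3.5}, {false, 300, 7, 2.0},
        {true, 80, 0, 5.0}, {true, 120, 2, 4.0},
    };

    int successes = 0, totalErrors = 0;
    double totalTime = 0, totalSat = 0;
    for (const Session& s : sessions) {
        successes += s.completedTask ? 1 : 0;
        totalTime += s.secondsOnTask;
        totalErrors += s.errors;
        totalSat += s.satisfaction;
    }
    double n = static_cast<double>(sessions.size());
    std::cout << "Task success rate:  " << 100.0 * successes / n << "%\n"
              << "Mean time on task:  " << totalTime / n << " s\n"
              << "Mean errors:        " << totalErrors / n << "\n"
              << "Mean satisfaction:  " << totalSat / n << " / 5\n";
    return 0;
}
```

Aggregates like these feed directly into pass/fail rows of the kind shown in the results table below.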

1635.5.2 Interpreting Test Results

Validation vs Invalidation:

| Finding | Interpretation | Next Step |
|---------|----------------|-----------|
| Users complete task successfully | Feature validated | Proceed to implementation |
| Users struggle but eventually succeed | Usability issue | Iterate on design |
| Users can’t complete task | Concept invalidated | Return to ideation or definition |
| Users don’t want the feature | Wrong problem | Return to empathy |

Example Test Results: Smart Pill Bottle

| Test | Target | Result | Decision |
|------|--------|--------|----------|
| Understand LED meaning | >90% correct | 65% correct | ❌ Revise visual design |
| Set reminder time | <2 minutes | 4.5 minutes | ❌ Simplify setup flow |
| Take pill when reminded | >85% adherence | 94% adherence | ✅ Core value validated |
| Would recommend to others | >4.0/5 | 4.3/5 | ✅ Pass |
| Battery life | >30 days | 25 days | ❌ Needs improvement |

Key Insights:
1. Success: The audio reminder works; users hear it and respond
2. Failure: Setting the timer is too complex; a simpler UI is needed
3. Failure: Battery life is insufficient; optimize sleep mode
4. Pivot decision: Simplify timer setup and extend battery life before manufacturing

1635.6 Knowledge Check

Question 1: During ideation for an IoT pet feeder, your team generates 47 ideas in 30 minutes. Some ideas seem impractical. What should you do next?

Explanation: Ideation separates generation from evaluation. Eliminating ideas too early kills creativity and may remove ideas that seem wild but contain valuable seeds. The Impact/Effort matrix provides an objective prioritization framework. Voting or stakeholder preference introduces bias - a “wild” idea that seems impractical might be a Quick Win once properly evaluated.

Question 2: You want to test whether users understand how to interact with your smart doorbell. Which prototype fidelity should you use FIRST?

Explanation: Understanding interaction doesn’t require working electronics. A paper prototype tests whether users understand the concept, where to press, and what to expect - all without any cost or development time. Save functional prototypes for testing real-world performance after interaction patterns are validated.

Question 3: Your smart thermostat A/B test shows: Version A (manual scheduling) has 60% user satisfaction; Version B (AI-predicted) has 75% satisfaction. However, Version B users override predictions 40% of the time. What should you conclude?

Explanation: The data tells a nuanced story: Users LIKE AI predictions (higher satisfaction) but still want CONTROL (high override rate). Neither version alone is optimal. The insight is to combine approaches: AI-suggested schedules that users can easily adjust. This reveals a common IoT design principle: automation should augment human control, not replace it entirely.

Question 4: During think-aloud testing of your smart lock app, a user says: “I think I tap here to unlock… no wait… maybe this button? I’m confused.” What type of issue does this reveal?

Explanation: The user’s verbalized confusion (“I think… no wait… maybe…”) reveals they cannot identify the correct action. This is a usability/learnability issue - the interface doesn’t clearly communicate its affordances. The button might work perfectly (no technical bug), the app might be fast (no performance issue), and the feature exists (no gap) - but users can’t figure out how to use it.

1635.7 Common Pitfalls

Caution: Pitfall: Falling in Love with Your First Idea

The Mistake: During ideation, one idea seems obviously perfect. The team stops generating alternatives, builds prototypes for only that idea, and ignores test feedback that suggests problems.

Why It Happens: First ideas feel like breakthroughs. Generating more ideas seems wasteful when “we already have the answer.” Sunk cost fallacy kicks in after building the first prototype.

The Fix: Enforce rules: Generate at least 20 ideas before discussing any. Build at least 2-3 different concept prototypes. Assign a “Devil’s Advocate” role to challenge the favorite idea. Set explicit criteria for “kill” decisions before testing begins.

Caution: Pitfall: Over-Building Prototypes (Polishing Before Validating)

The Mistake: Building a polished, expensive prototype before validating the core concept. Teams spend weeks on PCB design and 3D-printed enclosures when a paper mockup would have revealed the concept doesn’t work.

Why It Happens: Engineers want to build “real” things. Stakeholders want impressive demos. Teams confuse prototype fidelity with project progress. Fear that “rough” prototypes won’t get fair feedback.

The Fix: Match fidelity to learning goal. Ask: “What’s the cheapest thing we can build to answer our current question?” Rule: No functional prototype until paper prototype validates concept with 5+ users. Budget prototyping phases separately to prevent over-spending early.

1635.8 Summary

  • Ideation Techniques: Brainstorming (quantity), Crazy 8s (rapid visual), mind mapping (connections), and SCAMPER (systematic modification) generate diverse solution concepts
  • Impact/Effort Matrix: Prioritize “Quick Wins” (high impact, low effort) first; avoid “Time Sinks” (low impact, high effort)
  • Prototype Fidelity: Paper (concept), breadboard (technical), Wizard of Oz (interaction), functional (full experience) - start with the cheapest level that answers your current question
  • User Testing Methods: Think-aloud (mental models), A/B testing (comparisons), usability metrics (quantification), field testing (real-world validation)
  • Test Interpretation: Task success validates features; struggle indicates usability issues; failure suggests wrong problem; rejection means return to empathy
  • Iteration Mindset: Prototypes exist to learn and fail cheaply - building expensive prototypes before validating concepts wastes resources

1635.9 What’s Next

Continue to Implement and Iterate to learn MVP development approaches, iterative sprints, analytics monitoring, and continuous improvement strategies for IoT products.