7  Agile and Risk Management

7.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Identify IoT Project Risks: Categorize technical, business, regulatory, and supply chain risks
  • Apply Risk Assessment: Use probability-impact matrices to prioritize risk mitigation efforts
  • Choose Development Methodology: Evaluate Agile vs Waterfall tradeoffs for IoT projects
  • Adapt Scrum for Hardware: Apply longer sprint cycles and hardware-specific ceremonies
  • Use Kanban Boards: Manage hardware development with work-in-progress limits
  • Execute Design Sprints: Compress months of validation into focused five-day sprints

In 60 Seconds

Agile methodologies adapted for IoT hardware development use short sprints to validate assumptions early, while risk management with probability-impact matrices ensures that the unique risks of embedded systems — hardware failures, supply chain delays, regulatory compliance — are identified and mitigated before they derail the project.

Agile development for IoT means building your system in small, iterative cycles rather than trying to plan everything upfront. Think of it like cooking a new recipe by tasting and adjusting as you go, rather than following rigid instructions and hoping for the best. Each short cycle produces a working increment that you can test and get feedback on.

“Every IoT project has risks – things that might go wrong,” said Max the Microcontroller. “The sensor might not work in extreme cold. The battery might die too fast. The supplier might run out of chips. The trick is to FIND those risks early and have backup plans before they become disasters.”

Sammy the Sensor explained Agile: “Instead of spending a year planning the perfect device, Agile says work in short sprints – maybe two to four weeks each. Each sprint, you build a small piece, test it, and learn. If something is wrong, you catch it in weeks, not months! It is like checking your homework after each problem instead of waiting until the whole test is done.”

Lila the LED added, “And there is a cool technique called a Design Sprint – five days to go from problem to tested solution. Monday: understand. Tuesday: sketch ideas. Wednesday: decide the best one. Thursday: build a prototype. Friday: test with real users. One week, real answers!” Bella the Battery warned, “The biggest risk of all? Not testing early enough. The longer you wait, the more expensive mistakes become!”

7.2 Prerequisites

7.3 Introduction

Every IoT project faces uncertainty. Hardware components may become unavailable, battery life may fall short of targets, or regulatory requirements may change mid-development. The combination of physical constraints (long PCB fabrication times, component lead times) and evolving software needs creates unique challenges that traditional project management approaches struggle to address.

This chapter explores two complementary strategies for managing IoT project complexity: risk management (identifying and mitigating potential problems before they become crises) and Agile methodologies (iterative development that adapts to change). You’ll learn how to quantify risks using probability-impact matrices, adapt Scrum and Kanban for hardware development, and use Design Sprints to validate concepts in just five days.

The worked examples and calculators throughout this chapter use real IoT scenarios—from smart building sensors to component supply chain challenges—to demonstrate how these techniques prevent costly delays and ensure project success.

7.4 Risk Management

7.4.1 Identifying Risks

Technical Risks:

  • Unproven technology
  • Integration challenges
  • Performance limitations
  • Scalability concerns

Business Risks:

  • Market readiness
  • Competitive threats
  • Pricing pressure
  • Channel access

Regulatory Risks:

  • Certification delays
  • Changing regulations
  • Privacy requirements
  • Safety standards

Supply Chain Risks:

  • Component availability
  • Supplier reliability
  • Lead time variations
  • Cost fluctuations

7.4.2 Risk Mitigation Strategies

Prototyping: Build proof-of-concept to validate technical feasibility early.

Modular Design: Create independent modules to isolate failures and enable parallel development.

Vendor Diversification: Qualify multiple suppliers for critical components.

Regulatory Engagement: Consult certification bodies early to understand requirements.

Iterative Development: Develop in sprints with regular testing to catch issues early.

Contingency Planning: Maintain backup plans for critical dependencies.

Risk mitigation ROI calculation: A critical, single-sourced component (a proprietary sensor costing $12) has a 20% chance of a 4-month shortage that would cause a $50K project delay.

Expected loss (unmitigated): \[E[Loss] = P_{shortage} \times Cost_{delay} = 0.20 \times \$50,000 = \$10,000\]

Mitigation option: Qualify second supplier

  • Qualification cost: $3,000 (engineering time + samples)
  • Reduces shortage probability to 4% (both suppliers failing simultaneously = 0.2 × 0.2)
  • Residual expected loss: 0.04 × $50,000 = $2,000

Net savings: \[ROI = \frac{\$10,000 - \$2,000 - \$3,000}{\$3,000} = \frac{\$5,000}{\$3,000} = 1.67 \text{ (167\% return)}\]

Spending $3K to reduce expected loss by $8K (from $10K to $2K) yields a 1.67× ROI. If the shortage risk were only 5%, the unmitigated expected loss drops to $2.5K and mitigation is no longer justified: the $3K cost exceeds the roughly $2.4K saved ($2.5K minus the ~$0.1K residual).

Interactive: Risk Mitigation ROI Calculator

Explore how probability, impact, and mitigation cost affect the return on investment for risk mitigation strategies.

Key insight: Mitigation is justified when the reduction in expected loss exceeds the mitigation cost. As you adjust the sliders, notice how small changes in probability can dramatically affect ROI. This is why accurate risk assessment is critical.
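The calculator's core logic is small enough to sketch directly. This follows the expected-loss formulas from the worked example above (the function name is mine):

```python
def mitigation_roi(p_event, impact_cost, mitigation_cost, p_residual):
    """ROI of a risk mitigation, using expected loss = probability * impact."""
    expected_loss = p_event * impact_cost        # unmitigated E[Loss]
    residual_loss = p_residual * impact_cost     # E[Loss] after mitigation
    net_savings = expected_loss - residual_loss - mitigation_cost
    return net_savings / mitigation_cost         # ROI multiple

# Worked example: 20% chance of a $50K delay; $3K qualifies a second
# supplier, dropping shortage probability to 0.2 * 0.2 = 4%.
roi = mitigation_roi(0.20, 50_000, 3_000, 0.04)
print(f"ROI: {roi:.2f}x")  # 1.67x, matching the calculation above

# Sensitivity check: at 5% shortage risk, ROI goes negative (not justified).
print(f"ROI at 5%: {mitigation_roi(0.05, 50_000, 3_000, 0.0025):.2f}x")
```

Note how a modest change in the event probability (20% to 5%) flips the ROI from positive to negative, which is the key insight above in numeric form.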

Risk assessment matrix showing four quadrants plotting probability versus impact, with mitigation strategies assigned to each quadrant. Critical risks (high probability and high impact) require immediate mitigation, while low probability and low impact risks can be accepted and monitored.
Figure 7.1: Risk Assessment Matrix: Probability-Impact Analysis with Mitigation Strategies

Risk Assessment and Mitigation Framework: Systematic approach to evaluating IoT project risks by probability and impact, then assigning appropriate mitigation strategies. Critical risks (high probability + high impact) like “battery life doesn’t meet specs” require immediate action, while low-probability/low-impact risks like “preferred sensor color unavailable” can be accepted.

This layered variant organizes common IoT project risks by system layer (Hardware, Firmware, Connectivity, Cloud, Application), helping teams assign risk ownership and identify cross-layer dependencies.

Layered IoT system architecture diagram showing five layers from bottom to top: Hardware, Firmware, Connectivity, Cloud, and Application. Each layer lists common risks and responsible team members, illustrating how risks cascade upward through dependent layers.
Figure 7.2: Layered view organizing IoT risks by system component, showing typical risk owner for each layer. Risks in lower layers cascade upward - a hardware delay affects all layers above.

7.5 Worked Example: Risk Assessment for a Smart Building Sensor Startup

Scenario: A 12-person startup is developing a wireless air quality sensor (CO2, PM2.5, temperature, humidity) for commercial buildings. The product uses a Nordic nRF52840 SoC with BLE and 802.15.4 (Thread) connectivity, powered by 2x AA batteries with a 3-year target life. The team has 18 months and $1.2M seed funding to reach first production shipment.

Step 1: Identify and Quantify Risks

| Risk | Category | Probability | Impact | Expected Loss | Priority |
|------|----------|-------------|--------|---------------|----------|
| PM2.5 sensor accuracy fails certification | Technical | 30% | $180K (6-month delay + re-engineering) | $54K | CRITICAL |
| Thread protocol stack exceeds RAM budget | Technical | 40% | $80K (MCU upgrade + PCB respin) | $32K | HIGH |
| Battery life reaches only 18 months (not 36) | Technical | 50% | $120K (redesign + lost customer confidence) | $60K | CRITICAL |
| Key component (SPS30 PM sensor) goes end-of-life | Supply Chain | 20% | $150K (qualification of alternative + 4-month delay) | $30K | HIGH |
| Building codes change requiring additional certifications | Regulatory | 15% | $100K (compliance testing + delay) | $15K | MEDIUM |
| Competitor launches similar product at lower price | Business | 35% | $200K (market share loss, must reduce margin) | $70K | CRITICAL |
| 3 of 12 engineers leave mid-project | Business | 25% | $160K (recruiting + 3-month productivity loss) | $40K | HIGH |
Total Expected Loss (unmitigated): $301K (25% of budget)
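The register above can be reproduced in a few lines: expected loss is probability × impact, and priority follows a probability-impact rule (the numeric cutoffs in `priority` are my illustrative reading of the table, not part of the source framework):

```python
# Risk register from Step 1: (name, probability, impact in dollars)
risks = [
    ("PM2.5 sensor fails certification", 0.30, 180_000),
    ("Thread stack exceeds RAM budget",  0.40,  80_000),
    ("Battery life shortfall",           0.50, 120_000),
    ("SPS30 PM sensor end-of-life",      0.20, 150_000),
    ("Building code changes",            0.15, 100_000),
    ("Competitor undercuts price",       0.35, 200_000),
    ("Engineer attrition",               0.25, 160_000),
]

def priority(p, impact):
    # Illustrative cutoffs chosen to match the table; tune to your matrix.
    if p >= 0.30 and impact >= 100_000:
        return "CRITICAL"
    if p >= 0.20 or impact >= 150_000:
        return "HIGH"
    return "MEDIUM"

# Sort by expected loss, largest first, so mitigation effort goes where
# the money is.
register = sorted(((p * i, name, priority(p, i)) for name, p, i in risks),
                  reverse=True)
for e_loss, name, prio in register:
    print(f"{prio:8s} ${e_loss/1000:>5.0f}K  {name}")
print(f"Total expected loss: ${sum(e for e, _, _ in register)/1000:.0f}K")
```

Summing the expected losses reproduces the $301K total quoted above.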

Step 2: Apply Mitigation Strategies with Cost-Benefit

| Risk | Mitigation | Cost | Risk Reduced To | ROI |
|------|------------|------|-----------------|-----|
| PM2.5 certification | Pre-compliance testing at Month 3 (before PCB finalized) | $8K | 10% probability (saves $36K expected) | 4.5x |
| Thread RAM overflow | Prototype on nRF5340 (dual-core, 256KB+64KB) as backup | $3K (dev kit + porting) | 10% probability (saves $24K expected) | 8x |
| Battery life shortfall | Power profiling in Sprint 2 (not Sprint 10) | $2K (test equipment) | 20% probability (saves $36K expected) | 18x |
| PM sensor EOL | Qualify Sensirion SEN5x as second source | $12K (evaluation + testing) | 5% probability (saves $22.5K expected) | 1.9x |
| Engineer attrition | Key-person documentation policy + retention bonuses | $30K | 10% probability (saves $24K expected) | 0.8x |

  • Total mitigation investment: $55K
  • Total expected loss (mitigated): $95K (vs $301K unmitigated)
  • Net savings: $151K (a 2.7x return on the $55K investment)

Step 3: Risk Review Cadence

| Frequency | Activity | Attendees |
|-----------|----------|-----------|
| Weekly | Quick risk status in standup (2 min) | All engineers |
| Bi-weekly | Risk register review in sprint retro | PM + leads |
| Monthly | Quantified risk report to board | CEO + investors |
| Quarterly | External risk audit (supply chain + regulatory) | PM + consultant |

Key lesson: The single highest-ROI mitigation was power profiling in Sprint 2 at $2K (18x return). Without early measurement, the team would have discovered the 18-month battery life shortfall at Month 12 – requiring a costly PCB redesign with a larger battery and enclosure, adding 4 months and $120K. The $2K oscilloscope-based power measurement caught the issue when firmware changes alone could fix it.
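The battery-life check that Sprint 2 power profiling enables is a back-of-envelope duty-cycle calculation. The sketch below uses entirely illustrative numbers (2500 mAh usable from 2x AA, an 8 µA sleep current, a 15 mA active radio), not the startup's real measurements:

```python
CAPACITY_MAH = 2500  # assumed usable capacity of 2x AA alkaline

def battery_life_months(sleep_ua, active_ma, active_s, period_s):
    """Average current from a sleep/wake duty cycle, then lifetime in months."""
    duty = active_s / period_s
    avg_ma = active_ma * duty + (sleep_ua / 1000) * (1 - duty)
    hours = CAPACITY_MAH / avg_ma
    return hours / (24 * 30)

# Hypothetical firmware bug: radio awake 0.75 s per 60 s cycle -> ~18 months.
print(f"buggy firmware: {battery_life_months(8, 15, 0.75, 60):.0f} months")
# Trimmed to a 0.35 s burst per cycle -> ~36 months, hitting the 3-year target.
print(f"fixed firmware: {battery_life_months(8, 15, 0.35, 60):.0f} months")
```

The point of the sketch: halving the active window roughly doubles battery life, which is exactly the kind of firmware-only fix that is cheap at Month 3 and a PCB redesign at Month 12.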

7.6 Agile Methodologies for IoT

Tradeoff: Agile vs Waterfall for IoT Projects

Decision context: When choosing a development methodology for IoT projects that combine hardware and software

| Factor | Agile (Scrum/Kanban) | Waterfall |
|--------|----------------------|-----------|
| Requirements Stability | Can change each sprint | Must be fixed upfront |
| Hardware Lead Times | Accommodated via longer sprints | Built into sequential phases |
| Risk Discovery | Early (iterative testing) | Late (testing at end) |
| Documentation | Lighter, evolving | Comprehensive, upfront |
| Stakeholder Involvement | Continuous | Phase gates only |
| Cost of Change | Low early, higher late | High at any stage |
| Team Structure | Cross-functional, co-located | Specialized, handoffs between phases |
| Certification Compliance | Requires discipline to document | Natural fit for audit trails |

Choose Agile when:

  • Requirements likely to evolve (new sensors, protocols)
  • Building consumer IoT with UX iteration needed
  • Startup environment with pivots expected
  • Software-heavy product (edge computing, ML)
  • Team has agile experience

Choose Waterfall when:

  • Regulatory compliance required (medical devices, automotive)
  • Fixed-price contracts with defined deliverables
  • Hardware-dominant product with long lead times
  • Requirements well-understood from similar products
  • Distributed team with timezone challenges

Default recommendation: Hybrid approach - use waterfall for hardware milestones (PCB versions) and agile for software/firmware sprints. Plan hardware in 4-week cycles, software in 2-week sprints, with integration gates every 8 weeks.
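The hybrid cadence can be laid out programmatically, which is handy for sanity-checking that the sprint boundaries actually line up. This sketch (function name and 16-week horizon are mine) assumes 2-week software sprints, 4-week hardware cycles, and an integration gate every 8 weeks, as recommended above:

```python
from datetime import date, timedelta

def hybrid_schedule(start, weeks=16):
    """List (date, label) pairs for each 2-week boundary in the plan."""
    events = []
    for w in range(0, weeks, 2):
        d = start + timedelta(weeks=w)
        labels = [f"SW sprint {w // 2 + 1}"]        # software: every 2 weeks
        if w % 4 == 0:
            labels.append(f"HW cycle {w // 4 + 1}")  # hardware: every 4 weeks
        if w % 8 == 0 and w > 0:
            labels.append("INTEGRATION GATE")        # gate: every 8 weeks
        events.append((d, " + ".join(labels)))
    return events

for d, label in hybrid_schedule(date(2025, 1, 6)):
    print(d.isoformat(), label)
```

Because 2 divides 4 and 4 divides 8, every hardware cycle and every integration gate falls on a software sprint boundary, which is why the 4-week "lowest common denominator" alignment works.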

7.6.1 Scrum for IoT

Sprint Structure: 2-week sprints with defined goals.

Ceremonies:

  • Sprint planning
  • Daily standups
  • Sprint review/demo
  • Sprint retrospective

Adaptations for Hardware:

  • Longer sprints (3-4 weeks) to accommodate PCB fabrication
  • Separate tracks for hardware and software
  • Integration sprints for combined testing

User Stories: “As a [user type], I want [functionality] so that [benefit]”

Examples:

  • “As a homeowner, I want to receive alerts when motion is detected so that I know about potential intrusions”
  • “As a building manager, I want to see energy consumption by floor so that I can identify inefficiencies”

7.6.2 Kanban for Hardware

Board Columns:

  • Backlog
  • Design
  • Prototype
  • Test
  • Validate
  • Done

WIP Limits: Limit work-in-progress to prevent bottlenecks.

Example: Maximum 3 designs in progress simultaneously.
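A minimal sketch of how a WIP limit gates pulling work. Column names come from the board above; the limit of 3 on Design matches the example, while the other limits (and the item names) are illustrative:

```python
# Per-column WIP limits; Backlog and Done are intentionally unlimited.
WIP_LIMITS = {"Design": 3, "Prototype": 2, "Test": 2, "Validate": 2}

board = {col: [] for col in ["Backlog", "Design", "Prototype",
                             "Test", "Validate", "Done"]}

def pull(item, src, dst):
    """Move an item only if the destination column has WIP headroom."""
    limit = WIP_LIMITS.get(dst)
    if limit is not None and len(board[dst]) >= limit:
        print(f"BLOCKED: {dst} at WIP limit ({limit}) - finish work there first")
        return False
    board[src].remove(item)
    board[dst].append(item)
    return True

board["Backlog"] = ["sensor PCB", "enclosure", "antenna", "power board"]
for item in list(board["Backlog"]):
    pull(item, "Backlog", "Design")  # the fourth pull is refused at limit 3
```

The refusal is the point: instead of starting a fourth design, the team's attention is redirected to finishing one of the three in flight.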

Kanban board workflow diagram showing six columns (Backlog, Design, Prototype, Test, Validate, Done) with work-in-progress limits for each stage. Orange-highlighted columns indicate bottlenecks at capacity where team should focus on completing work before pulling new items.
Figure 7.3: Kanban Board for IoT Hardware: Work-in-Progress Limits and Workflow Stages

Kanban Board for IoT Hardware Development: Visual workflow management showing work-in-progress (WIP) limits for each stage. Orange columns indicate bottlenecks at capacity—team should focus on clearing Test and Design before pulling new work. This prevents resource overload and identifies process inefficiencies.

7.7 Documentation

7.7.1 Design Documentation

Requirements Specification:

  • Functional requirements
  • Non-functional requirements (performance, reliability, etc.)
  • Use cases and user stories
  • Acceptance criteria

Architecture Documentation:

  • System block diagram
  • Component selection rationale
  • Communication protocols
  • Data flow diagrams

Design Decisions Log: Record why decisions were made for future reference.

Template:

Decision: [What was decided]
Context: [Situation requiring decision]
Options Considered: [Alternatives evaluated]
Rationale: [Why this option chosen]
Consequences: [Expected impacts]
Date: [When decided]

7.7.2 User Documentation

Quick Start Guide: Minimal steps to first successful use.

User Manual: Comprehensive coverage of all features.

Troubleshooting Guide: Common problems and solutions.

FAQ: Frequently asked questions.

Video Tutorials: Visual demonstrations of key tasks.

7.8 Best Practices

7.8.1 Start with the Problem

Anti-Pattern: “We should build a smart [device] because IoT is cool!”

Better: “Users struggle with [problem]. How might we solve this?”

7.8.2 Involve Users Early and Often

Goal: Validate assumptions every step with real user feedback.

Cadence: User touchpoints every 2-4 weeks during development.

7.8.3 Prototype Fast, Fail Fast

Mantra: “Build to learn, not to finish”

Approach: Quick, disposable prototypes that answer specific questions.

7.8.4 Design for Edge Cases

Consider:

  • Poor connectivity
  • Low battery
  • Sensor failures
  • Incorrect user input
  • Environmental extremes

7.8.5 Plan for Iteration

Reality: Version 1 will have flaws. Plan for v1.1, v2.0.

Budget: Reserve 20-30% of resources for post-launch refinement.

7.9 Design Sprint Methodology

Interactive: Design Sprint Day Planner

Plan your five-day Design Sprint by selecting a start date to see the full schedule with key activities for each day.
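The planner's logic is a simple date offset. This sketch assumes the sprint starts on a Monday; the day labels summarize the Day 1-5 sections that follow, and the function name is mine:

```python
from datetime import date, timedelta

SPRINT_DAYS = [
    "Map and define the challenge",
    "Sketch competing solutions",
    "Decide and storyboard",
    "Build the prototype",
    "Test with five users",
]

def plan_sprint(monday):
    """Return (date, activity) pairs for a Monday-start Design Sprint."""
    if monday.weekday() != 0:
        raise ValueError("This sketch assumes the sprint starts on a Monday")
    return [(monday + timedelta(days=i), activity)
            for i, activity in enumerate(SPRINT_DAYS)]

for day, activity in plan_sprint(date(2025, 3, 3)):
    print(f"{day:%a %Y-%m-%d}: {activity}")
```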

The Design Sprint is a structured five-day process developed at Google Ventures that compresses months of work into a focused week. For IoT products, this methodology is particularly powerful because it validates both the user experience AND technical feasibility before committing to expensive hardware development.

7.9.1 Day 1: Map and Define the Challenge

Morning - Understand the Landscape:

  1. Expert Interviews (1 hour each): Talk to stakeholders, users, and technical experts
  2. Lightning Demos (15 min each): Review competing products and analogous solutions
  3. How Might We Notes: Capture insights as opportunity statements

Afternoon - Focus:

  1. Create a User Journey Map: Map the end-to-end experience from awareness to daily use
  2. Identify the Sprint Target: Select one critical moment in the journey to solve
  3. Define Sprint Questions: What must be true for this solution to succeed?

IoT-Specific Considerations:

  • Include hardware constraints in your journey map (battery life, connectivity)
  • Map the entire ecosystem: device + app + cloud + onboarding
  • Consider installation/setup as a critical journey moment

7.9.2 Day 2: Sketch Solutions

Morning - Diverge:

  1. Lightning Demos Continued: Deep-dive into inspiration sources
  2. Individual Note-Taking: Each person captures ideas independently
  3. Crazy 8s Exercise: Fold paper into 8 panels, sketch 8 variations in 8 minutes

Afternoon - Commit:

  1. Solution Sketches: Create detailed 3-panel storyboards of your best idea
  2. Include IoT Specifics: Draw the physical device, the app screens, AND the setup flow
  3. Anonymous Voting: Post all sketches, team votes with dots (no pitching yet)

Best Practices for IoT Sketches:

Panel 1: Context (where/when device is used)
Panel 2: Interaction (how user physically interacts)
Panel 3: Outcome (what value user receives)

Include callouts for:
- Form factor and size
- LED indicators or display
- Button/touch interactions
- App notifications

7.9.3 Day 3: Decide

Morning - Critique:

  1. Art Museum: Silent review of all sketches with sticky-note questions
  2. Speed Critique: 3-minute discussions per sketch, capture concerns
  3. Straw Poll: Everyone votes, but Decider has final say

Afternoon - Storyboard:

  1. Create the Test Storyboard: 10-15 panel comic strip of complete user experience
  2. Assign Roles: Who builds what for the prototype?
  3. Plan the Prototype: List every asset needed

IoT Storyboard Must Include:

Scene 1: Unboxing and first impression
Scene 2: Physical setup/installation
Scene 3: App download and pairing
Scene 4: First successful use
Scene 5: Daily routine interaction
Scene 6: Notification/alert scenario
Scene 7: Troubleshooting/edge case

7.9.4 Day 4: Prototype

Build a “Goldilocks” Prototype: Just real enough to get honest reactions, not so polished that it took weeks.

Parallel Workstreams:

| Role | Deliverable | Tools |
|------|-------------|-------|
| App Designer | Clickable app prototype | Figma, Sketch, InVision |
| Hardware Designer | Physical form mockup | Cardboard, 3D print, foam |
| Filmmaker | Demo video of “working” product | Screen recording + narration |
| Interviewer | Interview guide and script | Google Docs |

IoT Prototype Fidelity Levels:

  1. Paper Prototype (2 hours):
    • Hand-drawn device mockup
    • Paper app screens
    • “Wizard of Oz” - you play the device
  2. Digital Prototype (4 hours):
    • Figma/Sketch clickable app
    • Video of device “working” (edited)
    • 3D-printed or foam physical model
  3. Working Demo (8 hours):
    • Real hardware with demo firmware
    • Hardcoded “happy path” only
    • Connected app (limited features)

Critical: The Hardware Illusion:

For IoT, you often can't build working hardware in a day.
Instead, fake it:

Option A: Pre-record "device" behavior in video
Option B: Use Wizard of Oz (teammate controls "device" remotely)
Option C: Use existing dev kit hidden in 3D-printed shell
Option D: Simulate device with LED strips + Arduino (just lights)

Users don't need real functionality - they need realistic EXPERIENCE

7.9.5 Day 5: Test

Morning - Conduct Interviews:

  1. 5 Users, 1 Hour Each: Tight schedule, back-to-back
  2. 2-Room Setup: Interview room + observation room (team watches)
  3. Think-Aloud Protocol: Users narrate their thoughts while interacting

Interview Structure:

0-5 min:   Warm-up, context questions about current behavior
5-10 min:  Show concept video, gauge initial reaction
10-40 min: Hands-on prototype testing (physical + app)
40-50 min: Follow-up questions on specific pain points
50-60 min: Willingness to buy, competitive comparisons

Afternoon - Synthesize:

  1. Watch Together: Review recordings, capture quotes
  2. Pattern Recognition: What did 3+ users struggle with?
  3. Go/No-Go Decision: Iterate, pivot, or proceed to development?

IoT-Specific Interview Questions:

  • “Walk me through how you’d install this in your home.”
  • “What would you do if the device wasn’t responding?”
  • “Where would this live? Would it bother your family?”
  • “How would you feel if your [neighbor/guest] saw this?”
  • “What happens when the battery dies?”

7.9.6 Sprint Outcomes and Next Steps

Sprint Deliverables:

  • Validated (or invalidated) product concept
  • Clear list of what works and what doesn’t
  • Prioritized improvements for next iteration
  • Realistic development estimate based on actual user needs

Go/No-Go Framework:

PROCEED if:
- 4-5 users completed core task successfully
- Users expressed genuine purchase intent
- No fundamental misunderstandings of value proposition

ITERATE if:
- 2-3 users struggled but eventually succeeded
- Core concept resonated but execution confused
- Clear, fixable issues identified

PIVOT if:
- 0-2 users saw value in the product
- Fundamental assumptions invalidated
- Existing solutions preferred by majority
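One way to make the framework above mechanical is to tally the five interviews and map the counts to a decision. The exact thresholds in this sketch are my reading of the framework (e.g. treating "genuine purchase intent" as 4+ of 5 users), not hard rules:

```python
def go_no_go(n_completed, purchase_intent, value_understood):
    """Map Day-5 test tallies (out of 5 users) to the Go/No-Go framework.

    n_completed:      users who finished the core task
    purchase_intent:  users expressing genuine intent to buy
    value_understood: users who grasped the value proposition
    """
    if n_completed >= 4 and purchase_intent >= 4 and value_understood >= 4:
        return "PROCEED"
    if value_understood <= 2:       # 0-2 users saw value -> rethink concept
        return "PIVOT"
    return "ITERATE"                # concept resonated, execution confused

print(go_no_go(5, 4, 5))  # PROCEED
print(go_no_go(3, 2, 4))  # ITERATE
print(go_no_go(1, 0, 1))  # PIVOT
```

Treat the output as an input to the team discussion, not a replacement for it; borderline tallies are exactly the cases where watching the recordings together matters most.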

7.9.7 Common Design Sprint Pitfalls for IoT

| Pitfall | Prevention |
|---------|------------|
| Prototyping real hardware | Use Wizard of Oz or video editing |
| Ignoring the physical experience | Build tactile mockup, even if non-functional |
| Testing app only | Always include device + app together |
| Skipping installation/setup | Most IoT products fail at onboarding |
| Expert users only | Include one “tech-reluctant” user |
| No hardware constraints | Remind team of battery, cost, size limits during ideation |

7.9.8 Tools and Resources

Digital Prototyping:

  • Figma (free): Collaborative app design
  • Marvel App: Quick clickable prototypes
  • Principle: Animation and micro-interactions

Physical Prototyping:

  • Fusion 360 (free): 3D modeling for enclosures
  • Shapeways: Overnight 3D printing
  • McMaster-Carr: Quick hardware components

Sprint Facilitation:

  • Google Ventures Sprint Book (free PDF summary)
  • MURAL or Miro: Virtual whiteboarding
  • Loom: Record prototype demos

7.10 Knowledge Check

7.11 Common Pitfalls

Common Pitfall: Skipping the Empathy Phase

The mistake: Jumping straight to ideation and building without conducting proper user research, assuming you already understand user needs.

Symptoms:

  • Product features based on team assumptions rather than user interviews
  • No user personas or journey maps created before prototyping
  • “We know what users want” statements without supporting evidence
  • Building solutions before validating that the problem exists

Why it happens: Engineers are eager to build, and user research feels slow and non-technical. Teams mistake their own frustrations for universal user needs.

The fix: Mandate at least 5 user interviews before any prototyping begins. Create empathy maps and journey maps from actual user observations, not assumptions. Allocate 20% of project time to the Empathize phase.

Prevention: Establish a gate review requiring documented user research (interview notes, observation logs, competitive analysis) before entering the Define phase. No user research = no prototype approval.

Common Pitfall: Prototype Fidelity Mismatch

The mistake: Building high-fidelity prototypes too early (wasting resources on features that get cut) or testing low-fidelity prototypes for high-stakes decisions (missing critical usability issues).

Symptoms:

  • Spending weeks on polished UI mockups before validating core functionality
  • Users confused by paper prototypes when evaluating complex interactions
  • “We already built the app” when major pivot is needed
  • Testing aesthetic preferences when you should be testing workflow

Why it happens: Teams don’t understand which prototype fidelity matches which learning goal. High-fidelity feels more “real” and impressive to stakeholders.

The fix: Match fidelity to your learning goal:

  • Concept validation (Does anyone want this?) → Paper sketches, storyboards
  • Workflow validation (Can users complete tasks?) → Clickable wireframes
  • Usability testing (Is it intuitive?) → Interactive prototype with realistic data
  • Technical feasibility (Does it work?) → Functional breadboard prototype

Prevention: Before building any prototype, explicitly state: “We want to learn X, so we need fidelity level Y.” Document this in design reviews to prevent scope creep.

7.12 Key Concepts Summary

Design Thinking Phases:

  • Empathize: Understand user needs and context
  • Define: Clearly articulate the problem
  • Ideate: Generate creative solutions
  • Prototype: Build tangible manifestations
  • Test: Validate with users, iterate

Problem Definition:

  • User perspective: Who experiences the problem?
  • Context: When, where, how does it occur?
  • Impact: Magnitude, frequency, consequence
  • Constraints: Technical, budget, time limitations
  • Success criteria: How will you know it’s solved?

Rapid Prototyping:

  • Fidelity levels: Low to Medium to High
  • Iterative cycles: Build, test, learn, refine
  • Fail fast: Quick testing, quick iteration
  • User feedback: Continuous validation
  • Low cost: Avoid expensive early prototypes

Risk Management:

  • Categories: Technical, business, regulatory, supply chain
  • Assessment: Probability × Impact matrix
  • Mitigation: Strategies matched to risk level
  • Monitoring: Continuous tracking of identified risks

7.13 Summary

  • Risk Categories: Technical (unproven tech), business (market readiness), regulatory (certifications), and supply chain (component availability) require different mitigation strategies
  • Risk Assessment Matrix: High probability + high impact = CRITICAL (immediate action); Low probability + low impact = ACCEPT (document only)
  • Agile vs Waterfall: Choose Agile for evolving requirements and consumer IoT; choose Waterfall for regulatory compliance and hardware-dominant projects; default to hybrid
  • Scrum Adaptations: Longer sprints (3-4 weeks) for hardware, separate tracks for hardware/software, integration sprints for combined testing
  • Kanban Principles: WIP limits prevent bottlenecks; when at limit, finish existing work before starting new; visualize workflow stages
  • Design Sprints: Five-day intensive process compresses months of validation; Day 1-2 (understand/sketch), Day 3-4 (decide/prototype), Day 5 (test with users)

7.14 Conclusion

Design thinking and thorough planning transform IoT development from technology-driven experimentation to user-centered problem-solving. By empathizing with users, clearly defining problems, creatively ideating solutions, rapidly prototyping, and iteratively testing, teams create IoT products that truly serve user needs.

Successful IoT projects balance human needs, technical feasibility, and business viability. The design thinking process ensures this balance through structured methods that validate assumptions early, reduce risk, and focus resources on solutions that matter.

Planning provides the roadmap from concept to product, with realistic timelines, appropriate resources, and risk mitigation strategies. Combining design thinking’s user-centered creativity with disciplined project planning creates the foundation for IoT products that succeed in the market and improve users’ lives.

7.16 Concept Relationships

Agile, Risk, and Design Thinking Integration

Risk Management ↔︎ Design Thinking Phases:

  • Empathize/Define → Identify business/market risks (wrong problem = wasted development)
  • Prototype → Mitigate technical risks (validate before expensive implementation)
  • Test → Reduce user acceptance risk (catch usability issues early)
  • Implement → Supply chain, regulatory, timeline risks emerge

Agile ↔︎ Hardware Constraints:

  • Software: 2-week sprints, daily deploys
  • Firmware: 2-week sprints, OTA updates every sprint
  • Hardware: 3-4 week sprints (PCB fab time), 3-5 iterations to production
  • Integration: Align all three with 4-week cadence (lowest common denominator)

Risk Probability × Impact → Mitigation Timing:

  • High prob + High impact (CRITICAL) → Mitigate in Sprint 1-2 (early prototyping)
  • Low prob + High impact → Contingency plan (backup suppliers)
  • High prob + Low impact → Accept or quick fix
  • Low prob + Low impact → Document, monitor

Design Sprint (5-day) → Agile Sprint (2-4 week) Relationship:

  • Design Sprint validates concept → Feeds into Agile Sprint 1 backlog
  • Use Design Sprint for major pivots or new features
  • Don’t confuse: Design Sprint = prototype + test; Agile Sprint = build + ship

7.17 See Also

7.18 What’s Next

You have completed the Design Thinking and Planning chapter series. Return to the Design Thinking and Planning Overview for quick reference, or proceed to Hardware Prototyping to transform your design concepts into physical prototypes.
