789 IoT Network Design: Advanced Scenarios and Labs

Note: Learning Objectives

By the end of this section, you will be able to:

  • Apply comprehensive design checklists to evaluate IoT device and data requirements
  • Analyze industrial scenarios with high availability and redundancy requirements
  • Design complete IoT network solutions from requirements to deployment
  • Evaluate protocol selection through real-world case studies

789.1 Prerequisites

Before studying this chapter, review:


789.2 Comprehensive Design Questions

Important: Design Checklist: Know Your End Devices

What size is it? Where is it placed? Does it have an OS? What is its power supply? What data does it deliver? Does it have a sensing and/or actuator function?

Specific Questions (see the device-profile sketch after this list):

  1. Physical Characteristics:
    • Dimensions and weight?
    • Environmental rating (IP rating)?
    • Operating temperature range?
    • Mounting requirements?
  2. Power Specifications:
    • Mains powered, battery, or energy harvesting?
    • Power consumption in active/sleep modes?
    • Battery life requirements?
  3. Computational Capabilities:
    • Processor type and speed?
    • RAM and storage capacity?
    • Operating system (RTOS, Linux, none)?
    • Can it run TCP/IP stack?
  4. Connectivity:
    • Built-in radio (Wi-Fi, BLE, LoRa)?
    • Wired interfaces (Ethernet, RS-485)?
    • Antenna type and placement?
    • Communication range required?
  5. Data Characteristics:
    • Sensor types and accuracy?
    • Data format and size?
    • Sampling frequency?
    • Actuator control requirements?
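
One way to put this checklist to work is to record each answer as a structured device profile that later design steps can query. The sketch below is a minimal, hypothetical Python dataclass; the field names and example values are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Illustrative record of the 'know your end devices' checklist answers."""
    name: str
    ip_rating: str               # e.g. "IP67"
    temp_range_c: tuple          # (min, max) operating temperature in Celsius
    power_source: str            # "mains" | "battery" | "harvesting"
    active_current_ma: float
    sleep_current_ma: float
    has_tcpip_stack: bool
    radio: str                   # e.g. "Zigbee", "LoRa", "Wi-Fi"
    payload_bytes: int
    report_interval_s: int

# Example: a battery-powered Zigbee temperature sensor
sensor = DeviceProfile(
    name="temp-sensor-01", ip_rating="IP67", temp_range_c=(-10, 50),
    power_source="battery", active_current_ma=30, sleep_current_ma=0.003,
    has_tcpip_stack=False, radio="Zigbee", payload_bytes=15, report_interval_s=3600,
)
print(sensor)
```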

Scenario: You’re designing the network infrastructure for a critical oil refinery control system that monitors 1,200 sensors and controls 300 actuators (valves, pumps, safety shutdowns). The system processes hydrocarbons under high pressure/temperature — equipment failure can cause explosions, environmental damage, and loss of life.

Operational Requirements:

  • Uptime target: 99.99% (52 minutes downtime/year maximum)
  • Failover time: < 3 seconds (longer = process upset, potential safety incident)
  • Recovery: Automatic (unmanned night shifts, remote locations)
  • Downtime cost: $150,000/hour (lost production + safety risks)

Your team debates four infrastructure designs for Level 4-5 (Data Accumulation / Abstraction):

Option A - Star Topology: All edge gateways connect to one powerful central server (dual CPU, RAID storage, $80K). If server fails, backup server manually brought online by IT staff (15-30 min response time).

Option B - Tree Topology: Hierarchical design with floor-level aggregation switches feeding building switches feeding data center. Each level has backup switch with automatic failover (30-50 seconds via Spanning Tree Protocol).

Option C - Redundant Mesh: Every critical node (gateways, switches, edge servers, routers) has connections to at least 2 other nodes. Layer-3 routing (e.g., OSPF) with fast failure detection (e.g., BFD) plus precomputed fast reroute (IP-FRR/LFA). Automatic failover can be engineered to stay under the 3-second requirement.

Option D - Bus Topology: All devices on shared high-speed Ethernet backbone (40 Gbps) with multiple network taps. Single cable infrastructure reduces cost by 60%.

Trade-off Analysis Question: Which design meets the 99.99% uptime / <3s failover requirements while balancing cost and complexity?


Key Insights with Metrics:

Option C (Redundant Mesh with Dynamic Routing) is the only acceptable design for this critical application.

Requirement Check (99.99% uptime + <3s failover):

| Design | Single Points of Failure? | Restoration Approach | Typical Restoration Time | Meets <3 s Failover? |
|--------|---------------------------|----------------------|--------------------------|----------------------|
| Star (manual) | Yes (central server) | Manual failover | 15-30 minutes (often longer off-hours) | No |
| Tree (STP/RSTP) | Reduced | L2 reconvergence | ~2-50 seconds (protocol/tuning dependent) | Often no |
| Mesh + BFD + IP-FRR | No (redundant paths) | Automatic fast reroute | <1-2 seconds (engineered) | Yes |
| Bus (no redundancy) | Yes (backbone) | Repair required | Minutes to hours | No |

Notes:

  • 99.99% uptime allows ~52 minutes of downtime per year.
  • With 15-30 minute manual recovery, just two incidents per year can exceed the entire downtime budget.
  • STP/RSTP may be acceptable for enterprise IT, but safety-critical control systems often require deterministic sub-3-second recovery.

Downtime-cost intuition: At $150,000/hour, preventing only ~1.7 hours of outage over 5 years offsets a $250k redundancy premium.
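
The arithmetic behind these two figures is easy to reproduce. A minimal sketch, using the scenario's numbers and treating the $250K redundancy premium as an illustrative assumption:

```python
# Availability budget and downtime-cost breakeven for the refinery scenario.
uptime_target = 0.9999                     # 99.99% ("four nines")
minutes_per_year = 365 * 24 * 60
allowed_downtime_min = (1 - uptime_target) * minutes_per_year
print(f"Allowed downtime: {allowed_downtime_min:.1f} min/year")     # ~52.6 min

downtime_cost_per_hour = 150_000           # $/hour, from the scenario
redundancy_premium = 250_000               # assumed extra spend on the mesh design
breakeven_hours = redundancy_premium / downtime_cost_per_hour
print(f"Breakeven: {breakeven_hours:.1f} hours of avoided outage")  # ~1.7 hours
```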

Architecture Deep-Dive (Option C - Redundant Mesh):

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D'}}}%%
flowchart LR
    subgraph SENSORS["Sensor Tier"]
        GW1["Sensor<br/>Gateway 1"]
        GW2["Sensor<br/>Gateway 2"]
    end

    subgraph SWITCHES["Switch Tier"]
        SWA["Switch A"]
        SWB["Switch B"]
    end

    subgraph EDGE["Edge Tier"]
        ES1["Edge<br/>Server 1"]
        ES2["Edge<br/>Server 2"]
    end

    subgraph ROUTERS["Router Tier"]
        RA["Router A"]
        RB["Router B"]
    end

    subgraph INTERNET["Internet (Redundant ISP)"]
        ISP1["Internet 1"]
        ISP2["Internet 2"]
    end

    GW1 --> SWA
    GW1 --> SWB
    GW2 --> SWA
    GW2 --> SWB

    SWA --> ES1
    SWA --> ES2
    SWB --> ES1
    SWB --> ES2

    ES1 --> RA
    ES1 --> RB
    ES2 --> RA
    ES2 --> RB

    RA --> ISP1
    RA --> ISP2
    RB --> ISP1
    RB --> ISP2

    style GW1 fill:#16A085,stroke:#2C3E50,color:#fff
    style GW2 fill:#16A085,stroke:#2C3E50,color:#fff
    style SWA fill:#E67E22,stroke:#2C3E50,color:#fff
    style SWB fill:#E67E22,stroke:#2C3E50,color:#fff
    style ES1 fill:#2C3E50,stroke:#16A085,color:#fff
    style ES2 fill:#2C3E50,stroke:#16A085,color:#fff
    style RA fill:#7F8C8D,stroke:#2C3E50,color:#fff
    style RB fill:#7F8C8D,stroke:#2C3E50,color:#fff
    style ISP1 fill:#16A085,stroke:#2C3E50,color:#fff
    style ISP2 fill:#16A085,stroke:#2C3E50,color:#fff

Figure 789.1: Redundant mesh architecture for the refinery control network (Option C), with every gateway, switch, edge server, and router dual-homed to two upstream devices and two ISPs

Failure Scenarios & Recovery:

| Failure Event | Impact Without Mesh | Impact With Mesh (BFD + IP-FRR) |
|---------------|---------------------|---------------------------------|
| Switch A fails | All GW1 sensors offline (50%) | Traffic reroutes via Switch B in <1 s, 0% loss |
| Edge Server 1 fails | Half of analytics offline | Traffic reroutes to Server 2 in <1 s, 0% loss |
| Router A fails | Internet connectivity lost | BGP switches to Router B in ~2 s, 0% loss |
| ISP 1 outage | Cloud monitoring down | Automatic BGP failover to ISP 2 |
| Multiple failures | Cascading total outage | System degraded but operational |

Routing Configuration for Sub-3-Second Failover (conceptual IOS-style example):

```
router ospf 1
  network 10.0.0.0 0.255.255.255 area 0
  bfd all-interfaces              ! use fast failure detection
  fast-reroute per-prefix         ! pre-compute backup paths

! Example BFD tuning (conceptual, applied at the interface level)
bfd interval 300 min_rx 300 multiplier 3   ! ~0.9 s failure detection
```

Failover budget: failure detection in ~1 second (BFD), local repair in sub-second time (IP-FRR/LFA switchover), total failover comfortably under 3 seconds.
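
To make the timer arithmetic explicit, here is a small sketch; the 50 ms local-repair figure is an assumption for illustration, not a measured value:

```python
# BFD failure-detection time from the example timers above.
bfd_interval_ms = 300          # transmit interval
bfd_multiplier = 3             # missed packets before declaring the peer down
detection_ms = bfd_interval_ms * bfd_multiplier       # 900 ms

frr_switchover_ms = 50         # assumed local-repair time for a precomputed LFA path
total_ms = detection_ms + frr_switchover_ms
print(f"Detection ~{detection_ms} ms + local repair ~{frr_switchover_ms} ms "
      f"= ~{total_ms} ms, well under the 3,000 ms budget")
```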

Why Other Options Fail:

Option A (Star): Single point of failure (SPOF): a central server failure means a total outage. Manual failover takes 15-30 minutes because it requires that (1) the monitoring system detects the failure, (2) an alert reaches the on-call engineer, (3) the engineer logs in remotely or drives to site, (4) diagnoses the issue, (5) manually activates the backup server, and (6) verifies operation.

Option B (Tree): Spanning Tree Protocol (STP) limitations: default timers reconverge in 30-50 seconds (far too slow). Rapid STP (RSTP) typically reconverges in 2-6 seconds, which can still exceed the 3-second requirement. Single path to root: if a building switch fails, the entire building is isolated until STP reconverges.

Option D (Bus): Shared medium SPOF: Single cable cut = total network failure. No redundancy: Cannot meet 99.99% availability by definition.

Design Principle: For critical systems where downtime causes safety incidents or costs exceed infrastructure investment, eliminate all single points of failure through redundant mesh architecture with automatic failover.
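
The principle can be quantified with basic availability math. A minimal sketch, assuming independent failures and an illustrative 99.9% per-component availability:

```python
# Availability of one component vs. a redundant pair, assuming independent failures.
def parallel_availability(a: float, n: int = 2) -> float:
    """Availability of n redundant components, each with availability a."""
    return 1 - (1 - a) ** n

single = 0.999                   # illustrative: one switch at "three nines"
pair = parallel_availability(single, 2)

minutes_per_year = 365 * 24 * 60
print(f"Single: {single:.5f} -> {(1 - single) * minutes_per_year:.0f} min/yr down")
print(f"Pair:   {pair:.6f} -> {(1 - pair) * minutes_per_year:.1f} min/yr down")
# The redundant pair is far better for this element; the end-to-end chain is
# then limited by detection and failover time, hence BFD + IP-FRR.
```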


789.3 Design Checklist: Know Your Data

How much data? How often is data sent? How big is the data? How fast is the data? How timely does a response to the data need to be? How accurate is the data? What is the useful part of the data?

Specific Questions:

  1. Volume:
    • Bytes per message?
    • Messages per second/minute/hour?
    • Daily/monthly data volume?
    • Peak vs average rates?
  2. Velocity:
    • Real-time requirements (latency)?
    • Acceptable delay before processing?
    • Continuous vs periodic transmission?
    • Burst vs steady data flow?
  3. Variety:
    • Structured (JSON, CSV) or unstructured?
    • Multiple data types (sensor, video, audio)?
    • Metadata requirements?
    • Data format standards?
  4. Veracity:
    • Sensor accuracy and precision?
    • Error rates and handling?
    • Data validation requirements?
    • Calibration needs?
  5. Value:
    • Critical vs informational data?
    • Data retention period?
    • Which data requires immediate action?
    • Long-term analytics requirements?

Scenario: Your manufacturing plant has 2,000 Zigbee sensors (temperature, vibration, door status) sending data to a cloud-based predictive maintenance system. The sensors are battery-powered and deployed across a 500m x 300m facility.

Current Architecture Constraints:

  • Zigbee sensors: 8-bit MCU, 64 KB RAM, IEEE 802.15.4 radio only (no TCP/IP stack)
  • Cloud database: AWS IoT Core (accepts MQTT/HTTPS only)
  • Budget: $50,000 for gateway infrastructure
  • Latency requirement: < 5 seconds sensor-to-cloud

Your team proposes three architectures:

Option A - Sensor-Level Conversion: Replace all Zigbee sensors with Wi-Fi-enabled sensors that speak MQTT directly ($80/sensor vs $25/sensor for Zigbee)

Option B - Cloud-Level Conversion: Build custom cloud endpoint that accepts raw Zigbee protocol packets over cellular backhaul

Option C - Edge Gateway Conversion: Deploy 10 edge gateways (Raspberry Pi 4 + Zigbee coordinator) running Node-RED to convert Zigbee to MQTT ($400/gateway)

Which option best aligns with the Cisco 7-Level IoT Reference Model’s “right processing at right layer” principle?


Key Insights with Metrics:

Option C (Edge Gateway at Level 3) is the correct architectural choice according to the reference model.

Cost-Performance Analysis:

| Option | Initial Cost | Power Cost (10 yr) | Complexity | Latency |
|--------|--------------|--------------------|------------|---------|
| A (Wi-Fi sensors) | $160K (2,000 x $80) | $50K (200 mA avg) | Low | <1 s |
| B (Cloud conversion) | $30K (custom endpoint) | $5K | Very high | 2-4 s |
| C (Edge gateways) | $4K (10 x $400) | $2K (20 mA avg) | Medium | 1-3 s |

Option C total (initial cost + 10-year power): $6K, vs $210K for A and $35K for B.

Architectural Rationale per Cisco 7-Level Model:

Level 1 (Physical Devices): Sensors collect data and should remain simple, low-power, and low-cost. Zigbee sensors ($25) achieve this; replacing them with Wi-Fi sensors ($80) violates the "right capability at right cost" principle.

Level 2 (Connectivity): Local-area networking between sensors and the coordinator. Zigbee mesh provides this efficiently (self-healing, low power).

Level 3 (Edge Computing): "Data Element Analysis & Transformation" - the protocol conversion layer. Edge gateways convert Zigbee frames to MQTT messages, for example:

```
Zigbee packet:  {cluster: 0x0402, temp: 23.5 C, nodeID: 0xAB12}
MQTT message:   {"topic": "factory/zone1/temp", "payload": {"temp": 23.5, "sensor": "AB12"}}
```

  • Aggregation: 200 sensors/gateway x 10 gateways = 2,000 sensors
  • Local buffering if the internet connection drops (resilience)
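
A minimal Python sketch of that Level 3 conversion step, assuming a hypothetical decoded Zigbee reading; in practice the gateway would hand the JSON to an MQTT client such as paho-mqtt, and the topic scheme and field names here are illustrative:

```python
import json

def zigbee_to_mqtt(reading: dict) -> tuple[str, str]:
    """Map a decoded Zigbee attribute report to an MQTT topic and JSON payload."""
    zone = reading.get("zone", "zone1")
    topic = f"factory/{zone}/temp"                      # illustrative topic scheme
    payload = {"temp": reading["temp_c"], "sensor": f"{reading['node_id']:04X}"}
    return topic, json.dumps(payload)

# Hypothetical frame already decoded by the Zigbee coordinator
frame = {"cluster": 0x0402, "temp_c": 23.5, "node_id": 0xAB12, "zone": "zone1"}
topic, payload = zigbee_to_mqtt(frame)
print(topic, payload)    # factory/zone1/temp {"temp": 23.5, "sensor": "AB12"}
```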

Level 4-7 (Cloud Infrastructure): Standard internet protocols (MQTT, HTTPS). AWS IoT Core receives the MQTT messages; no custom protocol support is needed.

Why Not Alternatives:

Option A (Wi-Fi sensors): Cost: 3.2x sensor cost + 10x power cost = $210K total. Battery life: Wi-Fi 200 mA vs Zigbee 20 mA means batteries last 1 year vs 10 years. Violation: Puts complexity at wrong layer (Level 1 instead of Level 3)

Option B (Cloud conversion): Maintenance: Every Zigbee firmware update requires cloud endpoint changes. Scalability: Custom protocol parsers don’t scale (what about LoRa devices next year? Another custom parser?). Violation: Cloud (Level 6-7) shouldn’t handle device-specific protocols

Option C (Edge gateways): Right layer separation: Zigbee at L1-L2, conversion at L3, standard protocols L4+. Reusability: Add LoRa devices? Deploy LoRa gateway with same MQTT output format. Local intelligence: Edge can filter/aggregate data (send only anomalies to cloud = 10x bandwidth reduction)

Design Principle: The Cisco 7-Level Model prescribes “do the right processing at the right layer.” Protocol conversion is an Edge Computing (Level 3) function — not a sensor (L1) or cloud (L6-7) responsibility.


789.4 Case Study: Protocol Selection in Practice

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#2C3E50','primaryTextColor':'#fff','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#E67E22','tertiaryColor':'#ecf0f1','textColor':'#2C3E50','fontSize':'14px'}}}%%
graph TD
    REQ["Requirements:<br/>10K parking sensors<br/>5 km² area<br/>10-year battery life<br/>5-min updates"]

    ANALYSIS["Decision Analysis"]

    REQ --> ANALYSIS

    ANALYSIS --> D1["Device: Battery-powered<br/>magnetic sensors"]
    ANALYSIS --> D2["Connectivity: LoRaWAN<br/>(long range, low power)"]
    ANALYSIS --> D3["Data: 1 byte/message<br/>MQTT to cloud"]
    ANALYSIS --> D4["Infrastructure: 20 gateways<br/>IPv6 addressing"]
    ANALYSIS --> D5["Topology: Star<br/>(sensors → GW → cloud)"]

    D1 --> DEPLOY["Deployment Success:<br/>10-year battery<br/>99.5% uptime<br/>Real-time availability"]
    D2 --> DEPLOY
    D3 --> DEPLOY
    D4 --> DEPLOY
    D5 --> DEPLOY

    style REQ fill:#E67E22,stroke:#2C3E50,color:#fff
    style ANALYSIS fill:#2C3E50,stroke:#16A085,color:#fff
    style D1 fill:#16A085,stroke:#2C3E50,color:#fff
    style D2 fill:#16A085,stroke:#2C3E50,color:#fff
    style D3 fill:#16A085,stroke:#2C3E50,color:#fff
    style D4 fill:#16A085,stroke:#2C3E50,color:#fff
    style D5 fill:#16A085,stroke:#2C3E50,color:#fff
    style DEPLOY fill:#2C3E50,stroke:#16A085,color:#fff

Figure 789.2: Smart city parking system design case study from requirements to deployment

{fig-alt="Smart city parking system case study flowchart showing requirements (10,000 sensors, 5km² area, 10-year battery, 5-minute updates in orange) leading to decision analysis (navy) which determines five key design choices: battery-powered magnetic sensors, LoRaWAN connectivity for long range and low power, 1-byte messages via MQTT to cloud, 20 gateways with IPv6 addressing, and star topology (all in teal). These decisions result in successful deployment achieving 10-year battery life, 99.5% uptime, and real-time parking availability (navy)."}

789.4.1 Example: Smart City Parking System

Requirements Analysis:

  • 10,000 parking spots across a 5 km² area
  • Battery-powered sensors (10-year life)
  • Occupancy status updates every 5 minutes
  • Real-time availability display
  • Monthly billing data

Design Decisions:

  1. Devices (Level 1-2):
    • Magnetic sensor modules (battery powered)
    • LoRaWAN connectivity (long range, low power)
    • 10-year battery life with 5-minute reporting
  2. Data (Level 2-4):
    • 1 byte per message (occupied/vacant + spot ID)
    • ~12 messages/hour per sensor (one update every 5 minutes)
    • LoRaWAN to Gateway to MQTT to Cloud
    • Total payload: ~120 KB/hour across 10,000 sensors (see the sanity-check sketch after this list)
  3. Infrastructure (Level 4+):
    • 20 LoRaWAN gateways (500m coverage each)
    • Redundant internet connections
    • Cloud-based MQTT broker
    • PostgreSQL database for historical data
  4. Addressing:
    • Each sensor: Unique DevEUI (64-bit)
    • Gateway-to-cloud: IPv6
    • No IP addressing needed for sensors
  5. Topology:
    • Logical: Star topology (sensors to gateways to cloud)
    • Physical: Gateway placement for maximum coverage
    • Documentation: GIS map with sensor locations
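
As referenced in step 2, a quick sanity check of the data-volume figures (payload bytes only; LoRaWAN and MQTT framing overhead is ignored):

```python
# Parking system data volume (payload only).
sensors = 10_000
report_interval_min = 5
payload_bytes = 1

msgs_per_hour_per_sensor = 60 // report_interval_min           # 12
total_msgs_per_hour = msgs_per_hour_per_sensor * sensors       # 120,000
payload_kb_per_hour = total_msgs_per_hour * payload_bytes / 1000
print(f"{msgs_per_hour_per_sensor} msgs/hour per sensor, "
      f"{total_msgs_per_hour:,} msgs/hour total, ~{payload_kb_per_hour:.0f} KB/hour payload")
```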

789.5 Hands-On Lab: Design an IoT Network

Note: Lab Activity: Industrial Monitoring System Design

Scenario: You’re designing an IoT network for a manufacturing facility with these requirements:

Devices:

  • 50 temperature/humidity sensors (critical - 1-second updates)
  • 20 vibration sensors on machinery (critical - real-time)
  • 100 door/window sensors (security - event-based)
  • 10 IP cameras (monitoring - continuous video)
  • 5 industrial PLCs (control - real-time)

Constraints:

  • Factory area: 200 m x 100 m
  • Existing Ethernet infrastructure available
  • Budget: moderate
  • Reliability: high (manufacturing uptime is critical)

Tasks:

  1. Select protocols for each device type (justify your choices)
  2. Design addressing scheme (IPv4, IPv6, or hybrid?)
  3. Create logical topology showing data flows
  4. Specify network infrastructure (switches, routers, gateways)
  5. Identify redundancy requirements for critical systems

Deliverables:

  • Protocol selection matrix with justifications
  • IP addressing plan
  • Logical topology diagram
  • Physical topology with device placement
  • Bill of materials (BOM) for network equipment


789.6 Quiz: IoT Network Design Decisions

You need to connect 5,000 soil moisture sensors across a 10km² agricultural area. Sensors report every 30 minutes and send 10 bytes of data. Which protocol is most appropriate?

  1. Wi-Fi
  2. Bluetooth Low Energy
  3. LoRaWAN
  4. 5G Cellular

Answer: C) LoRaWAN

Explanation:

  • Wi-Fi: Limited range (~100 m), high power consumption, far too many access points needed
  • Bluetooth LE: Very limited range (~10-100 m); would require thousands of gateways
  • LoRaWAN: Long range (up to 15 km in rural areas), very low power (10+ year battery life), well suited to infrequent, small transmissions; relatively few gateways needed (perhaps 5-10 for 10 km²), low cost per device at scale
  • 5G Cellular: Expensive per device, overkill for such low data rates, higher power consumption

LoRaWAN is specifically designed for this use case: massive number of devices, wide area coverage, infrequent small data packets, battery-powered operation.
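
For intuition about gateway count, a rough estimate assuming a conservative 1 km effective coverage radius and a 2x overlap margin for redundancy (both figures are planning assumptions, not measurements):

```python
import math

area_km2 = 10
coverage_radius_km = 1.0       # assumed conservative planning radius per gateway
overlap_factor = 2.0           # assumed overlap margin for terrain and redundancy

coverage_per_gw_km2 = math.pi * coverage_radius_km ** 2        # ~3.1 km² per gateway
gateways = math.ceil(area_km2 / coverage_per_gw_km2 * overlap_factor)
print(f"~{gateways} gateways for {area_km2} km² "
      f"(r = {coverage_radius_km} km, {overlap_factor}x overlap)")   # ~7
```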

You’re deploying 100,000 smart utility meters across a city. Each meter needs bidirectional communication for reading data and receiving firmware updates. What addressing strategy is best?

  1. Private IPv4 (10.0.0.0/8) with NAT
  2. Public IPv4 addresses for each meter
  3. IPv6 with SLAAC
  4. No IP addresses - use proprietary protocol

Answer: C) IPv6 with SLAAC

Explanation:

  • Private IPv4 with NAT: Can technically support 16 million addresses (10.0.0.0/8), but NAT complicates bidirectional communication, firmware updates to specific meters require NAT traversal, and management complexity increases significantly

  • Public IPv4: 100,000 public IPv4 addresses are extremely expensive, IPv4 address exhaustion makes this impractical, wasteful use of scarce resources

  • IPv6 with SLAAC: Virtually unlimited addresses (340 undecillion), end-to-end connectivity without NAT, auto-configuration reduces deployment complexity, each meter gets globally unique address, simplified firmware updates and management, future-proof for IoT growth

  • Proprietary protocol: Locks you into vendor, can’t leverage standard internet infrastructure, difficult to integrate with other systems, higher long-term costs

Best practice: IPv6 is the recommended solution for large-scale IoT deployments requiring bidirectional communication.
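
For intuition on why SLAAC removes per-device address management, here is a sketch of the classic EUI-64 interface-identifier derivation; modern deployments often use stable-privacy identifiers (RFC 7217) instead, and the prefix and MAC below are made up:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface ID used by classic SLAAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert FF:FE in the middle
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

prefix = "2001:db8:100:1"      # made-up /64 prefix advertised by the local router
mac = "00:1a:2b:3c:4d:5e"      # made-up meter MAC address
print(f"{prefix}:{eui64_interface_id(mac)}")   # 2001:db8:100:1:21a:2bff:fe3c:4d5e
```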

You have sensors that can’t run TCP/IP but need to send data to a cloud application. Where should protocol conversion happen according to the IoT reference model?

  1. Level 1 (Physical Devices)
  2. Level 2 (Connectivity)
  3. Level 3 (Edge Computing)
  4. Level 6 (Application)

Answer: C) Level 3 (Edge Computing)

Explanation:

  • Level 1 (Physical Devices): By definition, these devices cannot run TCP/IP and have no processing capability for protocol conversion

  • Level 2 (Connectivity): This level handles the non-TCP/IP protocols (Zigbee, LoRa, etc.), communication and processing units exist here, could do conversion, but Edge Computing is better suited

  • Level 3 (Edge Computing): Specifically designed for “Data Element Analysis & Transformation”, protocol gateways operate at this level, converts proprietary/IoT-specific protocols to TCP/IP, aggregates data from multiple Level 2 sources, optimal placement for transformation before data accumulation

  • Level 6 (Application): Too high in the stack, data needs to be in TCP/IP format by Level 4 (Data Accumulation), application layer should work with already-converted data

The Edge Computing layer (Level 3) acts as the translation point between IoT-specific protocols and standard internet protocols, enabling seamless integration of constrained devices with cloud applications.

For a hospital patient monitoring IoT system, which redundancy measure is LEAST critical?

  1. Redundant network switches
  2. Backup power (UPS) for gateways
  3. Multiple internet connections
  4. Geographic distribution of data centers across continents

Answer: D) Geographic distribution of data centers across continents

Explanation:

  • Redundant network switches: Critical - a single switch failure could disconnect all patient monitors; network-level redundancy is essential for continuous monitoring, and recovery time must be seconds, not minutes

  • Backup power (UPS): Critical - Power outages happen regularly, patient monitoring cannot be interrupted, gateways and network equipment need continuous power, typically paired with generator backup

  • Multiple internet connections: Critical - If cloud-based monitoring is used, ISP outages occur, diverse providers and physical paths recommended, enables continued data transmission and alerting

  • Geographic distribution across continents: LEAST Critical - Hospital monitoring is local/regional concern, data center in same region/country is sufficient, local redundancy (different data centers in same city) is more important, cross-continent distribution is for global disaster recovery, adds complexity and latency without commensurate benefit for this use case

For hospital patient monitoring, local redundancy and immediate failover are far more critical than global geographic distribution.

Scenario: You’re deploying environmental monitoring across a 5 km² (1,235 acres) vineyard with rolling hills and sparse infrastructure. The system must monitor 50,000 sensors across 200 blocks:

Sensor Requirements:

  • Soil moisture, temperature, humidity (15 bytes/reading)
  • Report frequency: every 60 minutes
  • Battery-powered (solar panels impractical due to tree canopy)
  • Battery replacement: 10-year target (manual replacement costs $15/sensor in labor = $750K total)
  • Environmental: -10°C to 50°C, IP67 rating

Infrastructure Constraints:

  • No Wi-Fi coverage (nearest building 2 km away)
  • Cellular coverage: 3G only (spotty, no 4G/5G)
  • Budget: $200K for connectivity infrastructure
  • Existing: internet connection at the main facility

Your team evaluates four protocol options:

Option A - Wi-Fi Mesh: Deploy 500 mesh access points ($150 each) across vineyard, sensors connect via Wi-Fi ($40/sensor x 50K = $2M sensor cost)

Option B - Zigbee Mesh: Deploy 200 Zigbee coordinators ($200 each), create 200 mesh sub-networks ($25/sensor x 50K = $1.25M sensor cost)

Option C - LoRaWAN: Deploy 8 LoRaWAN gateways ($800 each) at high-points, sensors transmit directly to gateways ($18/sensor x 50K = $900K sensor cost)

Option D - Cellular NB-IoT: Each sensor has cellular modem ($35/sensor x 50K = $1.75M) + $3/month data plan ($150K/month = $1.8M/year operational)

Trade-off Analysis: Which option meets the 10-year battery life, 5 km² coverage, and $200K infrastructure budget while minimizing total cost of ownership?


Key Insights with Metrics:

Option C (LoRaWAN) is the correct choice per Cisco 7-Level Model’s “right protocol for requirements” principle.

Cost Analysis (10-year TCO):

| Option | Sensor HW | Infrastructure | Power (10 yr)* | Operations | Total (10 yr) |
|--------|-----------|----------------|----------------|------------|---------------|
| A (Wi-Fi) | $2.0M | $75K (500 APs) | $450K | $100K (maintenance) | $2.625M |
| B (Zigbee) | $1.25M | $40K (200 coordinators) | $90K | $80K (mesh management) | $1.46M |
| C (LoRaWAN) | $900K | $6.4K (8 gateways) | $18K | $50K | $974K |
| D (NB-IoT) | $1.75M | $0 (uses cellular) | $120K | $18M (subscriptions) | $19.87M |

*Power cost = battery replacements needed before 10-year target

LoRaWAN saves $486K (33%) vs next-best option (Zigbee) and $18.9M (95%) vs cellular.
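
The 10-year totals follow directly from the per-option figures in the table; a minimal sketch reproducing them:

```python
# 10-year TCO per option: sensor HW + infrastructure + power + operations (in $K).
options = {
    "A (Wi-Fi)":   dict(hw=2000, infra=75,  power=450, ops=100),
    "B (Zigbee)":  dict(hw=1250, infra=40,  power=90,  ops=80),
    "C (LoRaWAN)": dict(hw=900,  infra=6.4, power=18,  ops=50),
    "D (NB-IoT)":  dict(hw=1750, infra=0,   power=120, ops=18000),
}
totals = {name: sum(costs.values()) for name, costs in options.items()}
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} ${total / 1000:.3f}M over 10 years")

cheapest, runner_up = sorted(totals.values())[:2]
print(f"LoRaWAN saves ${runner_up - cheapest:.0f}K vs the next-best option")
```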

Battery Life Calculation:

| Protocol | TX Current | TX Time | RX Current | Sleep Current | Estimated Battery Life |
|----------|------------|---------|------------|---------------|------------------------|
| Wi-Fi | 200 mA | 100 ms/hr | 100 mA | 15 mA | ~6 months |
| Zigbee | 30 mA | 50 ms/hr | 30 mA | 3 mA | 2-3 years |
| LoRaWAN | 120 mA | 500 ms/hr | ~0 mA (Class A) | 0.3 µA | 10+ years |
| NB-IoT | 200 mA | 2 s/hr | 100 mA | 5 µA | 5-7 years |

LoRaWAN Class A (sensor-initiated only): No listening = no RX power drain. Sleep current 0.3 uA = negligible.

Energy budget per hour:

  • TX: 120 mA x 0.5 s ≈ 0.017 mAh
  • Sleep: 0.0003 mA x 3,599.5 s ≈ 0.0003 mAh
  • Total: ~0.017 mAh/hour x 24 hr x 365 days ≈ 149 mAh/year

Battery capacity: 2x AA batteries = 3,000 mAh. Life = 3,000 / 149 = 20 years (exceeds 10-year target).
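
The same energy budget written out as code (values from the table above; battery self-discharge and the sensor electronics themselves are ignored, which is why the field target stays at 10 years even though the raw arithmetic gives about 20):

```python
# LoRaWAN Class A energy budget per hour, then battery life.
tx_current_ma = 120
tx_time_s = 0.5                  # one uplink per hour
sleep_current_ma = 0.0003        # 0.3 uA
sleep_time_s = 3600 - tx_time_s

mah_per_hour = (tx_current_ma * tx_time_s + sleep_current_ma * sleep_time_s) / 3600
mah_per_year = mah_per_hour * 24 * 365
battery_mah = 3000               # two AA cells

print(f"~{mah_per_year:.0f} mAh/year -> ~{battery_mah / mah_per_year:.0f} years "
      f"on a {battery_mah} mAh battery")
```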

Design Principle per Cisco 7-Level Model:

  • Level 1 (Devices): battery-powered and constrained, so ultra-low power is required
  • Level 2 (Connectivity): wide area, low data rate, massive scale - LoRaWAN is purpose-built for this exact scenario
  • Wrong choices: Wi-Fi/Zigbee (designed for personal/home-area networks), NB-IoT (designed for urban cellular coverage with a subscription model)

LoRaWAN is the only protocol that simultaneously achieves: 5 km² coverage with minimal infrastructure + 10-year battery life + 50K device scale + under $1M total cost.


789.7 Related Topics

Network Design Foundations:

  • Networking Basics - Core networking principles
  • Layered Models Fundamentals - OSI and TCP/IP models
  • Topologies Fundamentals - Network structure patterns

Protocol Selection:

  • IoT Protocols Fundamentals - Protocol stack overview
  • IoT Protocols Labs and Selection - Hands-on selection criteria
  • LPWAN Fundamentals - Wide-area connectivity

Architecture References:

  • IoT Reference Models - Standard IoT frameworks
  • Edge/Fog Computing - Distributed processing

Learning Resources:

  • Simulations Hub - Network topology tools
  • Knowledge Gaps Hub - Design pattern exercises


789.8 Summary

Designing an IoT network requires careful consideration across multiple dimensions:

Tip: Key Takeaways
  1. No Universal Solution: Each IoT deployment requires custom design based on specific requirements
  2. Layered Decision Making: Use the 7-Level IoT Reference Model to organize design decisions
  3. Device-First Thinking: Device capabilities and constraints drive connectivity choices
  4. Data Characteristics Matter: Volume, velocity, and value of data influence protocol selection
  5. Plan for Scale: Addressing schemes and topology must accommodate growth
  6. Document Everything: Logical and physical topologies are essential for deployment and maintenance
  7. Think End-to-End: Design must consider all layers from devices to applications
  8. Redundancy for Critical Systems: High-availability requirements demand redundant infrastructure

Design Process:

  1. Understand devices (capabilities, power, placement)
  2. Characterize data (volume, frequency, latency)
  3. Select appropriate protocols for each layer
  4. Design an addressing scheme for scale
  5. Plan physical and logical topologies
  6. Specify infrastructure requirements
  7. Implement redundancy where needed
  8. Document all design decisions

The complexity of IoT network design reflects the heterogeneous nature of IoT devices and applications. Successful deployments balance technical constraints, business requirements, and future scalability.

789.9 What’s Next?

Continue to Network Topologies to explore how different network structures (star, mesh, tree) impact IoT system scalability, reliability, and management.