33  Protocol Selection Lab

Key Concepts
  • Selection Framework: A structured decision process that evaluates protocols against weighted criteria to produce a defensible recommendation
  • Weighted Scoring Matrix: A table where each protocol is scored on multiple criteria, each with an importance weight; the highest total score wins
  • Must-Have vs Nice-to-Have: Hard requirements that eliminate protocols immediately vs soft preferences that influence ranking
  • Technology Readiness Level (TRL): A scale (1–9) indicating how mature a technology is; TRL 7+ is typically required for production IoT deployments
  • Total Cost of Ownership (TCO): All costs over a deployment lifetime: hardware, connectivity fees, maintenance, and eventual replacement
  • Vendor Lock-In Risk: The risk of being unable to switch suppliers if a vendor discontinues a product or raises prices
  • Pilot Study: A small-scale test of a chosen protocol in real deployment conditions before committing to full-scale rollout

33.1 In 60 Seconds

Protocol selection for IoT follows a systematic framework: start with power budget (battery vs. mains), then evaluate range, data rate, and latency requirements to narrow choices. Wrong protocol selection can reduce battery life from years to weeks – for example, TCP overhead on a sensor sending 4-byte readings every 15 minutes can consume 8x more energy than UDP with application-layer reliability.

33.2 Learning Objectives

By the end of this chapter, you will be able to:

  • Apply protocol selection decision trees: Navigate from requirements to protocol choice systematically
  • Evaluate energy efficiency trade-offs: Compare protocols by energy-per-bit and instantaneous power
  • Calculate protocol overhead: Determine frame sizes and payload efficiency for different stacks
  • Predict battery life impact: Estimate how protocol choice affects device longevity quantitatively
  • Architect hybrid protocol stacks: Combine protocols for edge-to-cloud communication paths

What is this chapter? A systematic framework for selecting IoT protocols based on real-world constraints.

Why it matters:

  • Wrong protocol choice can reduce battery life from years to weeks
  • Different deployment scenarios require different protocol combinations
  • Understanding trade-offs prevents costly redesigns later

Prerequisites:

33.3 Protocol Selection Framework

Protocol selection decision tree starting with power source (battery vs mains), then range requirements (short/medium/long), leading to technology choices (BLE, Zigbee, Wi-Fi, LoRaWAN, Ethernet) and finally application protocol selection (CoAP for request-response, MQTT for pub-sub patterns)
Figure 33.1: IoT Protocol Selection Decision Tree

The decision tree in Figure 33.1 guides protocol selection through three stages: first determine your power source (battery or mains), then evaluate range requirements, and finally choose the application-layer protocol based on your communication pattern. Once you have narrowed the candidates, the next step is quantifying energy trade-offs.

33.3.1 Protocol Energy Efficiency Comparison

Understanding power consumption and energy-per-bit is critical for battery-powered IoT devices. Higher data rate protocols can be more energy-efficient per bit despite higher instantaneous power.

Counter-intuitively, Wi-Fi (210 mW) achieves the best energy efficiency at 5.25 nJ/bit because its 40 Mbps data rate amortizes the power investment. BLE (0.147 mW, 153 nJ/bit) is ~30× less efficient per bit but draws ~1,400× lower instantaneous power – critical for battery life. Zigbee (186,000 nJ/bit) is extremely inefficient for bulk data transfer but optimized for mesh networking and low duty-cycle operation. The key insight: energy per bit is driven primarily by effective data rate, not by transmit power alone.

Protocol       Power (mW)   Data Rate   Energy/Bit (nJ)   Best For
Bluetooth LE   0.147        960 bps     153               Wearables, beacons
ANT+           0.675        272 bps     2,480             Sports sensors
Zigbee         35.7         192 bps     186,000           Mesh networks
Wi-Fi          210          40 Mbps     5.25              High throughput

Try It: Energy-per-Bit Explorer

Enter a custom power and data rate to see how energy-per-bit changes. Compare against the standard protocols in the table above.

Protocol Selection Rule of Thumb
  • Maximize battery life: Choose lowest instantaneous power (BLE)
  • Minimize energy per byte: Choose highest data rate (Wi-Fi)
  • Balance both: Duty-cycle high-rate protocols aggressively

Example: Sending 1 KB of sensor data
  • BLE (153 nJ/bit): 1 KB × 8 bits × 153 nJ = 1.22 mJ, taking ~8.3 seconds at 960 bps
  • Wi-Fi (5.25 nJ/bit): 1 KB × 8 bits × 5.25 nJ = 0.042 mJ, taking 0.2 ms at 40 Mbps

Total energy cost includes both transmission energy and startup overhead.

\(E_{\text{total}} = E_{\text{startup}} + E_{\text{transmission}}\)

Worked example - Sensor sending 1 KB once per hour:

BLE: \(E_{\text{startup}} = 0.5\text{ mJ}\), \(E_{\text{tx}} = 1.22\text{ mJ}\) → Total: 1.72 mJ per transmission
Daily: \(24 \times 1.72 = 41.3\text{ mJ/day}\)

Wi-Fi: \(E_{\text{startup}} = 50\text{ mJ}\) (100× higher), \(E_{\text{tx}} = 0.042\text{ mJ}\) → Total: 50.04 mJ per transmission
Daily: \(24 \times 50.04 = 1201\text{ mJ/day}\)

Despite Wi-Fi’s ~30× better energy-per-bit, its startup overhead makes it 29× worse for infrequent 1 KB transmissions. With these figures, break-even falls near a 40 KB payload: below that, BLE’s negligible startup cost wins; above it, Wi-Fi’s efficient bulk transfer dominates. The optimal choice depends on both payload size and transfer frequency.

Objective: Build an interactive protocol selection tool on ESP32 that evaluates different IoT protocol stacks against real deployment constraints (power budget, range, data rate, latency) and recommends the optimal stack with battery life and cost estimates.

Paste this code into the Wokwi editor:

#include <WiFi.h>

struct Protocol {
  const char* name;
  float powerMw;       // Instantaneous TX power
  float dataRateBps;   // Bits per second
  float energyPerBit;  // nJ per bit
  int rangeMeter;      // Typical range
  float latencyMs;     // Typical latency
  int headerBytes;     // Protocol overhead
  const char* bestFor;
};

Protocol protocols[] = {
  {"BLE 5.0",       0.15,    2000000, 0.075, 100,   5,   7,  "Wearables, beacons"},
  {"Zigbee",       35.7,     250000, 143,   100,  15,  25,  "Mesh networks, home automation"},
  {"Thread/6Lo",   35.7,     250000, 143,   100,  20,  10,  "IP mesh, Matter devices"},
  {"Wi-Fi",       210.0,   40000000, 5.25,   50,   3,  36,  "High throughput, video"},
  {"LoRaWAN SF7",   0.1,      5470,  18.3, 5000, 1000,  13,  "Long range, low data"},
  {"LoRaWAN SF12",  0.1,       250, 400,  15000, 5000,  13,  "Extreme range"},
  {"NB-IoT",       230.0,   66000, 3485,  10000, 1500,  20,  "Cellular coverage, mobility"},
  {"LTE-M",       230.0,  1000000, 230,   10000, 100,   20,  "Voice, mobility, medium rate"}
};
int numProtocols = 8;

struct Scenario {
  const char* name;
  bool battery;
  int rangeMeter;
  int payloadBytes;
  int msgsPerDay;
  float maxLatencyMs;
  int batteryMah;
};

void setup() {
  Serial.begin(115200);
  delay(1000);

  Serial.println("=== IoT Protocol Selection Framework ===\n");

  // Define deployment scenarios
  Scenario scenarios[] = {
    {"Smart Home Temp Sensor", true,  30,  4,    288,  5000, 2000},
    {"Industrial Vibration",   true,  50,  64,   8640, 100,  3000},
    {"Agricultural Moisture",  true,  2000, 20,  96,   60000, 2000},
    {"City Parking Sensor",    true,  5000, 8,   48,   30000, 2000},
    {"Video Doorbell",         false, 20,  50000, 86400, 50, 0},
    {"Asset Tracker (Moving)", true,  10000, 32, 144,  10000, 5000}
  };
  int numScenarios = 6;

  for (int s = 0; s < numScenarios; s++) {
    Scenario& sc = scenarios[s];
    Serial.println("========================================");
    Serial.printf("Scenario: %s\n", sc.name);
    Serial.printf("Range: %dm | Payload: %dB | Msgs/day: %d | Max latency: %.0fms\n",
                  sc.rangeMeter, sc.payloadBytes, sc.msgsPerDay, sc.maxLatencyMs);
    Serial.printf("Power: %s | Battery: %d mAh\n",
                  sc.battery ? "Battery" : "Mains", sc.batteryMah);
    Serial.println("----------------------------------------");

    // Score each protocol
    int bestScore = -1;
    int bestIdx = -1;

    Serial.println("Protocol       Range  Lat  Power  Score  Battery Life");
    Serial.println("------------------------------------------------------");

    for (int p = 0; p < numProtocols; p++) {
      Protocol& pr = protocols[p];
      int score = 0;
      bool eligible = true;

      // Range check
      if (pr.rangeMeter < sc.rangeMeter) { eligible = false; }
      else { score += 20; }

      // Latency check
      if (pr.latencyMs > sc.maxLatencyMs) { eligible = false; }
      else { score += 20; }

      // Power score (lower is better for battery)
      if (sc.battery) {
        if (pr.powerMw < 1) score += 30;
        else if (pr.powerMw < 50) score += 20;
        else score += 5;
      } else {
        score += 15;  // Power doesn't matter for mains
      }

      // Data rate adequacy
      float bitsPerMsg = sc.payloadBytes * 8.0 + pr.headerBytes * 8.0;
      float msgsPerSec = sc.msgsPerDay / 86400.0;
      float requiredBps = bitsPerMsg * msgsPerSec;
      if (requiredBps < pr.dataRateBps * 0.01) score += 20; // <1% utilization
      else if (requiredBps < pr.dataRateBps * 0.1) score += 10;
      else score += 0;

      // Battery life estimate
      float batteryYears = 0;
      if (sc.battery && sc.batteryMah > 0) {
        float txTimePerMsg = bitsPerMsg / pr.dataRateBps;  // seconds
        float dailyTxTime = txTimePerMsg * sc.msgsPerDay;
        float dailySleepTime = 86400.0 - dailyTxTime;
        float avgCurrent = (pr.powerMw / 3.3 * dailyTxTime +
                           0.01 * dailySleepTime) / 86400.0;  // mA
        batteryYears = (sc.batteryMah / avgCurrent) / 8760.0;
        if (batteryYears > 10) batteryYears = 10;  // Self-discharge cap
      }

      char tag = eligible ? ' ' : 'X';
      Serial.printf("%c %-13s %-5s %-4s %-6s %3d    ",
                    tag, pr.name,
                    pr.rangeMeter >= sc.rangeMeter ? "OK" : "FAIL",
                    pr.latencyMs <= sc.maxLatencyMs ? "OK" : "FAIL",
                    pr.powerMw < 50 ? "Low" : "High",
                    eligible ? score : 0);

      if (sc.battery && batteryYears > 0) {
        Serial.printf("%.1f yr", batteryYears);
      } else {
        Serial.print("N/A");
      }
      Serial.println();

      if (eligible && score > bestScore) {
        bestScore = score;
        bestIdx = p;
      }
    }

    if (bestIdx >= 0) {
      Serial.printf("\n>> RECOMMENDED: %s (score %d) - %s\n\n",
                    protocols[bestIdx].name, bestScore,
                    protocols[bestIdx].bestFor);
    } else {
      Serial.println("\n>> No single protocol meets all constraints!\n");
    }
  }

  Serial.println("=== Protocol Selection Complete ===");
}

void loop() {
  delay(10000);
}

What to Observe:

  1. Range eliminates options first: LoRaWAN and NB-IoT are the only choices for >1km agricultural and city parking scenarios; BLE/Zigbee/Wi-Fi fail the range check
  2. Latency eliminates options second: Industrial vibration monitoring (100 ms max) rules out LoRaWAN (1000 ms+); the video doorbell’s 50 ms limit eliminates every long-range option, and its throughput demand makes Wi-Fi the clear winner among the survivors
  3. Battery life varies dramatically: BLE achieves 10+ years for a home sensor, while Wi-Fi would drain a battery in weeks for the same use case
  4. No single protocol fits all: Each scenario recommends a different protocol – this is why real IoT deployments often use protocol gateways to bridge multiple technologies

33.4 Hands-On Lab: Protocol Overhead Analysis

Scenario: An agricultural cooperative plans to deploy 10,000 soil moisture sensors across 50 farms spanning 3 counties. Sensors use 802.15.4 radios (102-byte maximum payload), transmitting 20-byte readings every 15 minutes. Engineering analysis reveals: IPv4 requires NAT gateways ($15/device = $150K total) plus introduces 50 ms latency and 3% packet overhead. IPv6 with 6LoWPAN offers direct addressing but adds compression complexity. Each 2000 mAh battery must last 5+ years.

Think about:

  1. Why does 6LoWPAN’s header compression (40→6 bytes) result in lower per-packet energy than IPv4’s native 20-byte header?
  2. How does eliminating NAT gateways reduce both upfront cost AND ongoing maintenance for 50 distributed farm locations?
  3. What’s the total cost-of-ownership difference between IPv4+NAT and IPv6+6LoWPAN over 5 years?

Key Insight: IPv6 + 6LoWPAN achieves 60% longer battery life (4.2 vs 2.6 years) while eliminating $150K in NAT infrastructure:

Battery Life Calculation:

IPv4 with NAT:
- Header: 20 bytes IPv4 + 8 bytes UDP + 4 bytes CoAP = 32 bytes overhead
- Total: 32 + 20 payload = 52 bytes/packet
- Daily energy: 1.48 mAh/day → 2.6-year battery life (implies ~1400 mAh usable capacity after derating)

IPv6 with 6LoWPAN:
- Compressed: 6 bytes IPv6 + 4 bytes UDP + 4 bytes CoAP = 14 bytes overhead
- Total: 14 + 20 payload = 34 bytes/packet
- Daily energy: 0.92 mAh/day → 4.2-year battery life (assuming the same ~1400 mAh usable capacity)

Cost Analysis:

  • NAT gateways avoided: $150,000 upfront
  • Reduced truck rolls (4.2yr vs 2.6yr battery): $50,000 over 5 years
  • Simplified addressing (no NAT routing): $20,000 in IT support savings
  • Total 5-year TCO advantage: $220,000

Verify Your Understanding:

  • Calculate the energy per bit transmitted: which is more efficient for small 20-byte payloads?
  • Why does 6LoWPAN context-based compression work better for IoT than IPv4’s fixed header structure?

Scenario: A commercial office building installs 500 wireless temperature sensors in ceilings for HVAC optimization. Sensors transmit 4-byte readings every 5 minutes over 802.15.4 mesh. Building policy requires 10-year battery life to avoid costly ceiling access for replacements. Network uses 6LoWPAN with compressed headers (2-byte IPv6, 4-byte UDP, 4-byte CoAP = 10 bytes total overhead). Engineering must justify the compression complexity vs using uncompressed 40-byte IPv6 headers.

Think about:

  1. For 4-byte payloads, what percentage of each packet is overhead with compressed vs uncompressed headers?
  2. Why does energy consumption scale linearly with total packet size for RF transmission?
  3. How many years of operation do you gain by reducing total packet size from 56 bytes to 14 bytes?

Key Insight: Header compression extends battery life by 4× (10.9 years vs 2.7 years), meeting the 10-year requirement:

Packet Size Comparison:

Compressed (6LoWPAN):
2 (IPv6) + 4 (UDP) + 4 (CoAP) + 4 (payload) = 14 bytes
Overhead = 10/14 = 71%

Uncompressed (full IPv6):
40 (IPv6) + 8 (UDP) + 4 (CoAP) + 4 (payload) = 56 bytes
Overhead = 52/56 = 93%

Battery Life Impact:

Radio TX energy ∝ total bytes transmitted
Energy ratio: 56÷14 = 4× more energy without compression

2000 mAh battery at 288 transmissions/day:
- Compressed: ~0.5 mAh/day → 10.9 years ✓ Meets requirement
- Uncompressed: ~2.0 mAh/day → 2.7 years ✗ Fails requirement

Verify Your Understanding:

  • If payload increases to 64 bytes, how does the battery life advantage change?
  • Why is header compression MORE critical for small payloads than large payloads?

33.5 Lab Activity: Compare Protocol Efficiency

Objective: Calculate and compare overhead for different protocol combinations

Scenario: Temperature sensor (2 bytes) and humidity (2 bytes) = 4 bytes payload

33.5.1 Task 1: Calculate Total Frame Size

Calculate frame size for different protocol stacks:

  1. Full stack (uncompressed): 802.15.4 + IPv6 + UDP + CoAP
  2. Compressed stack: 802.15.4 + 6LoWPAN + UDP + CoAP
  3. MQTT stack: Ethernet + IPv6 + TCP + MQTT
  4. HTTP stack (for comparison): Ethernet + IPv4 + TCP + HTTP
Click to see solution

1. Full Stack (Uncompressed):

802.15.4 MAC: 25 bytes
IPv6: 40 bytes
UDP: 8 bytes
CoAP: 4 bytes
Payload: 4 bytes

Total: 81 bytes
Overhead: 77 bytes (95% overhead!)
Efficiency: 4.9%

2. Compressed Stack (6LoWPAN):

802.15.4 MAC: 25 bytes
6LoWPAN (compressed IPv6 + UDP): 6 bytes
CoAP: 4 bytes
Payload: 4 bytes

Total: 39 bytes
Overhead: 35 bytes (90% overhead)
Efficiency: 10.3%

Improvement: 81 → 39 bytes (52% reduction)

3. MQTT Stack (assuming a persistent TCP connection):

Ethernet MAC: 18 bytes (header + FCS)
IPv6: 40 bytes
TCP: 20 bytes
MQTT Fixed Header: 2 bytes
MQTT Variable Header: ~10 bytes (topic "home/temp")
Payload: 4 bytes

Total: 94 bytes
Overhead: 90 bytes (96% overhead)
Efficiency: 4.3%

4. HTTP Stack (minimum request):

Ethernet MAC: 18 bytes
IPv4: 20 bytes
TCP: 20 bytes
HTTP: ~100 bytes (GET /sensor/temp HTTP/1.1...)
Payload: 4 bytes

Total: 162 bytes
Overhead: 158 bytes (98% overhead)
Efficiency: 2.5%

Comparison:

  • 6LoWPAN + CoAP: 39 bytes (best)
  • MQTT: 94 bytes (2.4× CoAP)
  • HTTP: 162 bytes (4.2× CoAP)
Conclusion: For very small payloads, 6LoWPAN + CoAP is most efficient.

33.5.2 Task 2: Calculate Transmissions for Battery Life

Sensor transmits every 5 minutes. Battery: 2000 mAh.

Given:

  • Radio TX: 5 mA
  • Data rate: 250 kbps (802.15.4)
  • Sleep: 5 µA

Calculate daily power consumption for: 1. CoAP (39 bytes, UDP) 2. MQTT (94 bytes, TCP with keep-alive)

Estimate battery life.

Click to see solution

Transmissions per day: 24 × 60 / 5 = 288

CoAP (UDP, 39 bytes):

TX time per message:
= 39 bytes × 8 bits / 250,000 bps
= 1.25 ms

Total TX time per day:
= 288 × 1.25 ms = 360 ms

Energy (TX):
= 5 mA × 0.36 s = 1.8 mA·s = 0.5 µA·h

Energy (Sleep):
= 5 µA × (86,400 - 0.36) s / 3600 = 120 µA·h

Total per day: 120.5 µA·h = 0.12 mA·h
Battery life: 2000 / 0.12 = 16,667 days = 45.7 years

MQTT (TCP keep-alive, 94 bytes + ACKs):

Assuming TCP connection kept open:
- Data: 94 bytes
- ACK: 40 bytes (IPv6 + TCP)
- Total per transmission: 134 bytes

TX+RX time per message:
= 134 bytes × 8 bits / 250,000 bps
= 4.3 ms

Total active time per day:
= 288 × 4.3 ms = 1.24 s

Energy (TX/RX):
= 5 mA × 1.24 s = 6.2 mA·s = 1.7 µA·h

Energy (Sleep):
= 5 µA × (86,400 - 1.24) s / 3600 = 120 µA·h

Total per day: 121.7 µA·h = 0.122 mA·h
Battery life: 2000 / 0.122 = 16,393 days = 44.9 years

Analysis: For infrequent transmission (every 5 min), sleep current dominates. Protocol overhead has minimal impact on battery life (both ~45 years, limited by battery self-discharge).

If transmitting every 10 seconds (8,640 times/day), including realistic radio startup overhead (2 ms warm-up at 15 mA per wake cycle):

CoAP: 10.8 s TX + 17.3 s startup
→ 0.42 mAh TX + 0.072 mAh startup + 0.12 mAh sleep = 0.61 mAh/day → 9.0 years

MQTT: 37.2 s TX/RX + 17.3 s startup + TCP keep-alive (every 60 s = 1,440/day × 40-byte ACKs)
→ 1.44 mAh TX + 0.072 mAh startup + 0.12 mAh sleep + 0.77 mAh keep-alive = 2.4 mAh/day → 2.3 years

Conclusion: For frequent transmission, CoAP saves significant power (~4× longer battery life) primarily because UDP avoids TCP keep-alive overhead.

33.6 Knowledge Check

Common Mistake: Choosing Protocols Based on Header Size Alone

The Error: Many developers select MQTT over CoAP because “MQTT has a 2-byte header and CoAP has a 4-byte header, so MQTT must be more efficient.”

Why It’s Wrong: This ignores the transport layer. MQTT requires TCP (20-byte header minimum), while CoAP uses UDP (8-byte header). When you calculate the full stack:

  • MQTT stack: 2 (MQTT) + 20 (TCP) + 40 (IPv6) = 62 bytes minimum overhead
  • CoAP stack: 4 (CoAP) + 8 (UDP) + 40 (IPv6) = 52 bytes minimum overhead

With 6LoWPAN compression, the gap widens further:
  • MQTT with 6LoWPAN: 2 (MQTT) + 20 (TCP, which 6LoWPAN cannot compress) + 6 (compressed IPv6) = 28 bytes
  • CoAP with 6LoWPAN: 4 (CoAP) + 4 (compressed UDP) + 6 (compressed IPv6) = 14 bytes

Real Impact: For a soil sensor sending 4-byte readings every 15 minutes over 5 years:
  • MQTT total: 32 bytes per packet, battery life ~4.2 years
  • CoAP total: 18 bytes per packet, battery life ~7.1 years

The Lesson: Always evaluate the complete protocol stack from application to physical layer. MQTT’s smaller application header is dwarfed by TCP’s connection overhead. Additionally, TCP keep-alive packets (sent every 60-120 seconds even when idle) drain battery continuously, while UDP has no such requirement.

When MQTT Actually Wins: For mains-powered devices with many subscribers (dashboards, alerts, analytics), MQTT’s broker-based pub/sub architecture justifies the TCP overhead through scalability benefits. But for battery-powered point-to-point communication, CoAP’s connectionless UDP approach is demonstrably superior.

Try It: Protocol Overhead & Battery Life Calculator

Adjust the parameters below to explore how protocol choice, payload size, and transmission frequency affect overhead efficiency and estimated battery life.

33.7 Concept Relationships

The protocol selection framework integrates multiple IoT concepts:

Related Concepts:

  • Energy-per-bit vs instantaneous power trade-off: Wi-Fi uses the least energy per bit but the highest instantaneous power; BLE is the opposite
  • Duty cycle optimization through protocol choice: CoAP’s UDP approach avoids TCP keep-alive overhead
  • Payload aggregation improves efficiency by amortizing headers over larger data: with 6LoWPAN + CoAP (35 bytes overhead), a 4-byte payload yields 10% efficiency while a 127-byte payload yields 78% efficiency
  • Network topology influences protocol: mesh networks favor 6LoWPAN/RPL, star topologies can use simpler protocols
  • Latency requirements interact with reliability mechanisms: CoAP’s exponential backoff vs MQTT’s TCP retransmission

Prerequisite Knowledge:

  • IPv6 and 6LoWPAN - Compression techniques that enable protocol efficiency
  • CoAP vs MQTT - Understanding protocol characteristics before selecting

Builds Foundation For:

33.8 See Also

Selection Tools:

Decision Frameworks:

Energy Analysis:

Common Pitfalls

Assigning equal weight to all criteria treats latency and cost as equally important even when one is clearly more critical. Fix: hold a requirements workshop with stakeholders before setting weights.

Datasheets describe best-case performance. Fix: supplement datasheet comparison with field reports, academic benchmarks, and community forum discussions about real-world behaviour.

A framework output is a recommendation, not a guarantee. Fix: always validate the top-ranked protocol with a small pilot deployment before signing purchase orders for hundreds of devices.

Teams focused on connectivity often weight performance and cost heavily and forget to score security posture and regulatory compliance. Fix: include at least two security criteria (encryption support, key management) and one regulatory criterion (duty cycle, certification) in every matrix.

33.9 Summary

This chapter provided a systematic framework for IoT protocol selection:

  • Start with power source: Battery devices need BLE/Zigbee/LoRaWAN; mains-powered can use Wi-Fi/Ethernet
  • Consider range requirements: Short (<100m) favors BLE, medium (100m-1km) uses Zigbee/Thread, long (>1km) needs LPWAN
  • Match data patterns to protocols: Request-response suits CoAP, pub-sub suits MQTT
  • Energy efficiency is nuanced: Wi-Fi has best energy/bit (5.25 nJ) but highest instantaneous power (210 mW)
  • Header compression is critical for small payloads: 6LoWPAN + CoAP achieves 52% reduction over uncompressed IPv6
  • Transmission frequency determines impact: Infrequent transmission is sleep-dominated; frequent transmission makes protocol choice critical (~4x battery life difference)

33.10 What’s Next

If you want to…                          Read this
Apply the framework to real scenarios    Real-World Examples
Understand protocol overhead numbers     Protocol Overhead Analysis
Run hands-on CoAP and MQTT labs          CoAP and MQTT Lab
Review all IoT protocol content          Protocol Overview
See the full labs collection             Labs and Selection