Collision: The simultaneous transmission of two or more devices on a shared medium, causing signal overlap and data corruption
Collision Domain: The set of devices that can cause a collision with each other; reducing collision domains improves network performance
CSMA/CD: Carrier Sense Multiple Access with Collision Detection; the Ethernet MAC protocol that detects collisions and schedules retransmissions
CSMA/CA: Carrier Sense Multiple Access with Collision Avoidance; used in Wi-Fi and IEEE 802.15.4; avoids collisions rather than detecting them after the fact
Bandwidth Utilisation: The fraction of the available channel capacity consumed by successful data transmission; collisions reduce utilisation
Backoff Algorithm: A randomised delay strategy used after a collision to reduce the probability of repeated collisions between the same devices
Hidden Node Problem: A scenario where two devices cannot hear each other but both can hear a third device; leads to collisions invisible to the senders
50.1 In 60 Seconds
Bandwidth is the maximum data rate a link can carry, but more bandwidth does not reduce latency – they are independent metrics. Most IoT devices transmit small, infrequent payloads, so networks are commonly over-provisioned by 100x or more. Calculating actual bandwidth needs from device count, payload size, and reporting interval prevents costly over-provisioning.
50.2 Learning Objectives
By the end of this chapter, you will be able to:
Calculate bandwidth requirements for IoT deployments accurately
Identify common bandwidth misconceptions that lead to over-provisioning
Right-size network capacity based on actual traffic patterns
Select appropriate protocols based on bandwidth needs
For Beginners: Bandwidth Requirements
Bandwidth is like the width of a highway – a wider highway can carry more cars at once. In networking, bandwidth determines how much data can flow between devices per second. IoT devices usually need very little bandwidth (like a bicycle lane), but when thousands of sensors report at once, those small lanes add up quickly.
Sensor Squad: The Highway Myth!
“We need a bigger highway for our data!” Sammy the Sensor declared. Max the Microcontroller shook his head. “Hold on, Sammy. Let me ask you something – how much data do you actually send?”
“Well, I send a temperature reading of about 50 bytes every 15 minutes,” Sammy admitted. “That is like tossing a marble down the highway once in a while,” said Max. “You definitely do not need a six-lane motorway! Even with 500 sensors like you, we would only use about 1% of a basic cellular connection.”
Lila the LED added, “And here is a big myth – a wider highway does NOT make your marble arrive faster! Bandwidth is about how much data fits at once, not how quickly it gets there. Latency is what controls speed.”
“The real trick,” said Bella the Battery, “is calculating exactly what you need. Over-provisioning wastes money AND my energy. Multiply your sensor count by payload size and divide by reporting interval – that gives you the actual bandwidth needed. Most IoT networks need way less than people think!”
50.3 Introduction
Time: ~10 min | Difficulty: Intermediate | Unit: P07.C15.U03
Many IoT engineers significantly overestimate bandwidth requirements, leading to costly over-provisioning. Understanding actual traffic patterns and calculating real bandwidth needs is essential for cost-effective IoT network design.
50.4 Common Misconception: “More Bandwidth Always Means Better Performance”
Common Misconception
The Misconception: Many IoT engineers assume that provisioning higher bandwidth (e.g., upgrading from 100 Kbps to 1 Mbps) will automatically improve application performance and reduce latency.
The Reality: Bandwidth and latency are independent metrics. Higher bandwidth increases throughput (data volume per second) but does NOT reduce latency (round-trip time).
Real-World Example: A smart agriculture company deployed 500 soil moisture sensors across a 5 km² farm:
Actual traffic: Each sensor sends 50 bytes every 15 minutes = 0.056 bytes/second average
Total bandwidth used: 500 sensors x 0.056 bytes/sec = 27.8 bytes/sec = 222 bps (0.02% of provisioned capacity!)
Latency: LTE-M latency remained 50-200 ms regardless of bandwidth (limited by radio protocol and tower distance, not throughput)
Cost impact: Wasting $7,500/month ($90,000/year) on unused bandwidth
The Fix: Switched to LoRaWAN (unlicensed spectrum, $2/device/month gateway fee):
Bandwidth: 0.3-5 Kbps (200x less than LTE-M)
Latency: 1-2 seconds (10x worse than LTE-M)
Result: Application requirements met perfectly (sensors don’t need sub-1s responses)
Savings: $6,500/month ($78,000/year)
Battery life: Improved from 2 years (LTE-M) to 10 years (LoRaWAN)
Key Lesson: Right-size your bandwidth based on actual data volume requirements. For most IoT sensor applications, bandwidth requirements are surprisingly low (measured in bps or Kbps, not Mbps). Focus on matching protocol characteristics to application needs rather than maximizing raw throughput.
50.5 Related Misconceptions
Understanding bandwidth vs latency helps clarify these common confusions:
“5G is necessary for IoT” (Reality: Most IoT needs less than 1 Mbps; 5G’s benefit is massive device density, not speed)
“Wi-Fi 6 improves range” (Reality: Wi-Fi 6 improves efficiency and capacity, not range compared to Wi-Fi 5)
“TCP is slower than UDP” (Reality: UDP has lower latency, but TCP can achieve higher throughput with proper tuning)
50.6 Protocol Bandwidth Comparison
| Protocol | Typical Bandwidth | Best Use Case          | Cost Considerations          |
|----------|-------------------|------------------------|------------------------------|
| LoRaWAN  | 0.3-50 Kbps       | Infrequent sensor data | Free spectrum, gateway costs |
| Sigfox   | 100 bps           | Ultra-low data volume  | Subscription per device      |
| NB-IoT   | 20-250 Kbps       | Low-medium data        | Cellular subscription        |
| LTE-M    | 375 Kbps-1 Mbps   | Voice + data           | Higher cellular cost         |
| Wi-Fi    | 10-1000 Mbps      | High data volume       | Infrastructure cost          |
| Ethernet | 100-1000 Mbps     | Industrial, video      | Wiring cost                  |
50.7 Bandwidth Sizing Guidelines
Bandwidth Sizing Rule of Thumb
Step 1: Calculate Average Load
Average (bps) = Devices x Payload_bytes x 8 / Interval_seconds
Putting Numbers to It
The bandwidth formula converts device behavior into bits per second. When applying it in practice:
Measure first: Deploy a pilot with monitoring before sizing production
Use actual traffic data: Don’t rely on theoretical maximums
Consider duty cycles: Most IoT sensors are idle 99%+ of the time
Factor in compression: MQTT, CoAP can significantly reduce payload sizes
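The sizing formula above can be wrapped in a small calculator. A minimal Python sketch (the function name `average_bandwidth_bps` is illustrative, not from any library):

```python
def average_bandwidth_bps(devices: int, payload_bytes: int, interval_seconds: float,
                          overhead_bytes: int = 0) -> float:
    """Average uplink load in bits per second for a fleet of periodic reporters.

    Implements: Average (bps) = Devices x (Payload + Overhead) x 8 / Interval.
    """
    return devices * (payload_bytes + overhead_bytes) * 8 / interval_seconds

# The smart agriculture example from this chapter:
# 500 sensors, 50-byte payload, every 15 minutes (900 s)
farm_bps = average_bandwidth_bps(devices=500, payload_bytes=50, interval_seconds=900)
print(f"{farm_bps:.0f} bps")  # 222 bps, a tiny fraction of a 1 Mbps link
```

Running the pilot's measured payload size and interval through a helper like this, rather than guessing from datasheet maximums, is the core of right-sizing.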
50.9.2 Protocol Selection for Cost
| Scenario             | Wrong Choice  | Right Choice  | Savings    |
|----------------------|---------------|---------------|------------|
| 500 farm sensors     | 1 Mbps LTE-M  | LoRaWAN       | $78K/year  |
| 10K smart meters     | Wi-Fi mesh    | NB-IoT        | $120K/year |
| 100 security cameras | 4G cellular   | Wi-Fi + fiber | $36K/year  |
50.10 Worked Example: Smart City Multi-Protocol Bandwidth Planning
Scenario: A city deploys a smart infrastructure pilot across a 2 km² downtown district with four IoT subsystems. The network architect must calculate total bandwidth requirements and select appropriate protocols for each subsystem.
50.10.1 Step 1: Calculate Per-Subsystem Bandwidth
Subsystem 1: Environmental Monitoring (500 sensors)
Payload: 32 bytes (temp, humidity, PM2.5, noise)
Protocol overhead: 13 bytes (LoRaWAN header + MIC)
Reporting interval: 15 minutes (900 seconds)
Average per sensor: (32 + 13) x 8 / 900 = 0.4 bps
Total: 500 x 0.4 = 200 bps = 0.2 Kbps
Subsystem 2: Smart Parking (2,000 spaces)
Payload: 8 bytes (occupied/vacant + battery + timestamp)
Protocol overhead: 13 bytes (LoRaWAN)
Reporting: On state change (~4 events/day average, plus hourly heartbeat)
Peak hour: 80% spaces change in 2 hours = 1,600 events in 7,200s
Total peak: 1,600 x (8 + 13) x 8 / 7,200 = 37.3 bps
Subsystem 3: Traffic Cameras (50 intersections)
Resolution: 1080p at 15 fps
Encoding: H.265 (HEVC)
Bitrate per camera: 2 Mbps (motion-adaptive)
Cameras per intersection: 4
Total: 50 x 4 x 2 = 400 Mbps continuous
Subsystem 4: Street Lighting Control (800 poles)
Payload: 16 bytes (dimming level, power draw, fault status)
Protocol overhead: 20 bytes (NB-IoT)
Reporting: Every 5 minutes + on-demand dimming commands
Average per pole: (16 + 20) x 8 / 300 = 0.96 bps
Peak (sunset, all poles adjust): 800 x (36 x 8) / 10 = 23 Kbps burst
Total average: 800 x 0.96 = 768 bps = 0.77 Kbps
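The four subsystem figures above can be reproduced with the same sizing formula; a short Python check (values copied from the steps above, helper name `avg_bps` is illustrative):

```python
def avg_bps(devices, payload_bytes, overhead_bytes, interval_s):
    # Average uplink: devices x (payload + overhead) bytes x 8 bits / interval
    return devices * (payload_bytes + overhead_bytes) * 8 / interval_s

environmental = avg_bps(500, 32, 13, 900)     # LoRaWAN, 15-minute reports
lighting = avg_bps(800, 16, 20, 300)          # NB-IoT, 5-minute reports
parking_peak = 1_600 * (8 + 13) * 8 / 7_200   # 1,600 events over the 2 peak hours
cameras_mbps = 50 * 4 * 2                     # 4 cameras x 2 Mbps per intersection

print(environmental, lighting, round(parking_peak, 1), cameras_mbps)
# 200.0 768.0 37.3 400
```

Note the units in the output: the three sensor subsystems are measured in bps, while the cameras alone demand 400 Mbps.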
50.10.2 Step 2: Protocol Selection
| Subsystem     | Bandwidth     | Latency Need    | Protocol Choice | Rationale                        |
|---------------|---------------|-----------------|-----------------|----------------------------------|
| Environmental | 0.2 Kbps      | Minutes OK      | LoRaWAN SF10    | Ultra-low power, 10-year battery |
| Parking       | 37.3 bps peak | 30s OK          | LoRaWAN SF7     | Event-driven, low duty cycle     |
| Cameras       | 400 Mbps      | Real-time       | Fiber + Wi-Fi 6 | Only viable option at this rate  |
| Lighting      | 0.77 Kbps     | 10s for dimming | NB-IoT          | Bidirectional needed for commands |
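The selection logic in this table can be sketched as a toy decision helper. The thresholds below are illustrative simplifications drawn from the table, not a standard rule (function name is hypothetical):

```python
def suggest_protocol(avg_bps: float, latency_tolerance_s: float,
                     needs_downlink: bool = False) -> str:
    """Toy protocol picker mirroring the table; thresholds are illustrative only."""
    if avg_bps > 1_000_000:
        return "Fiber/Ethernet or Wi-Fi"   # sustained Mbps needs wired or Wi-Fi
    if needs_downlink and latency_tolerance_s <= 10:
        return "NB-IoT"                    # reliable bidirectional commands
    if avg_bps < 1_000 and latency_tolerance_s >= 30:
        return "LoRaWAN"                   # ultra-low power, unlicensed spectrum
    return "LTE-M"                         # medium data rates, lower latency

print(suggest_protocol(200, 60))                        # environmental -> LoRaWAN
print(suggest_protocol(0.96, 10, needs_downlink=True))  # lighting -> NB-IoT
```

A real selection also weighs coverage, battery budget, and regulatory duty-cycle limits, which a bandwidth-only rule cannot capture.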
50.10.3 Step 3: Cost-Per-Bit Comparison
LoRaWAN (environmental + parking):
Gateway cost: 8 gateways x $1,500 = $12,000
Annual backhaul: 8 x $50/month = $4,800
Per-device cost: $0/month (unlicensed spectrum)
Total year 1: $16,800 for 2,500 devices
Cost per device/month: $0.56
NB-IoT (lighting):
Per-device subscription: $1.50/month
Total: 800 x $1.50 x 12 = $14,400/year
Cost per device/month: $1.50
Fiber + Wi-Fi 6 (cameras):
Fiber to 50 intersections: $250,000
AP per intersection: 50 x $400 = $20,000
Annual bandwidth: 50 x $200/month = $120,000
Total year 1: $390,000 for 200 cameras
Cost per device/month: $162.50
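The per-device figures above follow from dividing first-year cost by device count and 12 months of service; a quick Python check (helper name is hypothetical):

```python
def cost_per_device_month(total_year1_usd: float, devices: int) -> float:
    # First-year spend spread across 12 months of service per device
    return total_year1_usd / devices / 12

lorawan = cost_per_device_month(12_000 + 4_800, 2_500)         # gateways + backhaul
nb_iot = 1.50                                                  # flat subscription
cameras = cost_per_device_month(250_000 + 20_000 + 120_000, 200)

print(f"${lorawan:.2f} ${nb_iot:.2f} ${cameras:.2f}")  # $0.56 $1.50 $162.50
```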
50.10.4 Step 4: Over-Provisioning Analysis
Without proper sizing (common mistake):
"Use NB-IoT for everything" approach:
3,300 devices x $1.50/month = $59,400/year
With right-sized protocols:
LoRaWAN: $16,800/year
NB-IoT: $14,400/year
Fiber/Wi-Fi: $390,000/year (cameras unavoidable)
Total: $421,200/year
Savings from right-sizing sensors: $59,400 - $31,200 = $28,200/year
(cameras dominate cost regardless of sensor protocol choice)
Key Insight: In this smart city deployment, more than 99.99% of the bandwidth demand comes from 200 cameras (400 Mbps), while 3,300 sensors collectively need only about 1 Kbps on average. Right-sizing protocols by subsystem saves $28,200/year on sensor connectivity alone. The bandwidth calculation also reveals that upgrading sensor protocols to higher-bandwidth options provides zero benefit – the sensors genuinely need only bps-to-Kbps rates, not Mbps.
50.11 Data Compression Impact on Bandwidth Requirements
Protocol-level compression can dramatically reduce bandwidth needs, but the benefit depends on payload type:
| Data Type                | Raw Size    | Compressed | Ratio | Protocol Support                    |
|--------------------------|-------------|------------|-------|-------------------------------------|
| JSON sensor reading      | 120 bytes   | 45 bytes   | 2.7x  | MQTT 5.0 (content type), HTTP gzip  |
| CBOR sensor reading      | 48 bytes    | 38 bytes   | 1.3x  | CoAP (native CBOR support)          |
| CSV batch (100 readings) | 5,000 bytes | 800 bytes  | 6.3x  | Any protocol with payload compression |
| Protobuf telemetry       | 35 bytes    | 32 bytes   | 1.1x  | gRPC, custom MQTT                   |
| Base64 image thumbnail   | 4,096 bytes | 3,200 bytes | 1.3x | MQTT, HTTP                          |
Worked Example: Compression Savings for 1,000-Sensor Deployment
A building management system sends JSON-formatted sensor data:
Without compression:
Payload: {"sensorId":"T-0142","temp":22.5,"humidity":45,"ts":1702314567}
Size: 65 bytes + 13 bytes MQTT overhead = 78 bytes per message
1,000 sensors x 78 bytes x 4 messages/min = 312,000 bytes/min = 41.6 Kbps
With CBOR encoding (CoAP):
Same data in CBOR: 28 bytes + 4 bytes CoAP overhead = 32 bytes per message
1,000 sensors x 32 bytes x 4 messages/min = 128,000 bytes/min = 17.1 Kbps
With batched CSV + gzip (100 readings per batch):
100 readings x 50 bytes = 5,000 bytes raw
gzip compressed: ~800 bytes + 50 bytes TCP/HTTP overhead = 850 bytes
10 batches/min x 850 bytes = 8,500 bytes/min = 1.1 Kbps
Result: Switching from individual JSON messages to batched compressed CSV reduces bandwidth from 41.6 Kbps to 1.1 Kbps – a 38x reduction. This shifts the protocol requirement from NB-IoT ($1.50/device/month) to LoRaWAN ($0.56/device/month), saving $11,280/year for 1,000 sensors.
When NOT to compress: Compression adds latency (1-5 ms for gzip) and CPU overhead. For real-time control loops requiring sub-10 ms latency, send raw binary payloads. For periodic telemetry with seconds-to-minutes tolerance, always compress.
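The batching-plus-compression effect can be demonstrated with Python's standard `zlib` module. Exact sizes depend on the payload, so this sketch uses synthetic rows and its ratios are illustrative rather than the chapter's exact figures:

```python
import json
import zlib

# One JSON reading vs. a zlib-compressed batch of 100 CSV rows.
reading = {"sensorId": "T-0142", "temp": 22.5, "humidity": 45, "ts": 1702314567}
json_msg = json.dumps(reading, separators=(",", ":")).encode()

# Synthetic batch: 100 rows that differ only in the sensor counter.
csv_rows = "\n".join(f"T-{i:04d},22.5,45,1702314567" for i in range(100)).encode()
compressed = zlib.compress(csv_rows, level=9)

print(len(json_msg), len(csv_rows), len(compressed))
ratio = len(csv_rows) / len(compressed)  # repetitive telemetry compresses very well
```

Repetitive telemetry compresses far better than a single small message, which is why batching before compressing dominates per-message compression.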
Understanding how bandwidth-related concepts interconnect helps prevent design mistakes:
| Concept           | Relates To                    | Key Insight                                  |
|-------------------|-------------------------------|----------------------------------------------|
| Bandwidth         | Throughput, Goodput           | Theoretical maximum, not achieved rate       |
| Throughput        | Protocol Overhead, Congestion | Actual achieved rate after losses            |
| Goodput           | Application Data              | Useful payload excluding all overhead        |
| Latency           | Bandwidth (independent)       | Round-trip time NOT reduced by more bandwidth |
| Protocol Overhead | Efficiency, Cost              | Headers reduce bandwidth utilization         |
| Compression       | Bandwidth Savings             | 2-38x reduction but adds CPU/latency cost    |
| Over-provisioning | Cost, Waste                   | Common mistake from ignoring duty cycles     |
| Duty Cycle        | Average Usage                 | Most IoT sensors idle 99%+ of time           |
Common Pitfalls
1. Confusing CSMA/CD and CSMA/CA
CSMA/CD detects collisions after they happen (wired Ethernet). CSMA/CA tries to avoid collisions before they happen (Wi-Fi, 802.15.4). Fix: note that collision detection requires being able to hear your own signal while transmitting — impossible in half-duplex wireless.
2. Assuming a Switch Eliminates All Collision Problems
A switch creates one collision domain per port, eliminating collisions between ports. But a half-duplex device still shares a collision domain with its own switch port. Fix: verify that all devices and switch ports operate in full-duplex mode to truly eliminate collisions.
3. Ignoring the Hidden Node Problem in Wireless IoT Deployments
Two sensors at opposite ends of a warehouse may both transmit to a central gateway simultaneously, causing collisions at the gateway that neither sensor detects. Fix: use RTS/CTS mechanisms or time-slotted MAC protocols (TDMA) to handle hidden nodes.
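The randomised backoff mentioned in the glossary and pitfalls above can be sketched as truncated binary exponential backoff, the scheme classic Ethernet CSMA/CD uses. This is a simplified model, not a NIC driver; the slot time shown is the 10 Mbps Ethernet value:

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mbps Ethernet slot time, in microseconds

def backoff_slots(collision_count: int) -> int:
    """Random wait after the Nth successive collision on the same frame.

    Truncated binary exponential backoff: choose uniformly from
    0 .. 2**min(N, 10) - 1 slot times, as in Ethernet CSMA/CD.
    """
    k = min(collision_count, 10)
    return random.randint(0, 2**k - 1)

# Two colliding stations draw delays independently, so the chance they
# collide again drops roughly by half with each doubling of the window.
delay_us = backoff_slots(3) * SLOT_TIME_US  # window is 0..7 slots after 3 collisions
```

Because the window doubles after each collision, repeated collisions between the same pair of devices become exponentially less likely, which is exactly the property the glossary entry describes.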
50.13 Summary
Bandwidth and latency are independent – more bandwidth does not reduce latency
Most IoT sensors need Kbps, not Mbps – calculate actual requirements before provisioning
Over-provisioning wastes money – the smart agriculture example saved $78K/year by right-sizing
Match protocol to requirements – LPWAN for low data, cellular for medium, Wi-Fi for high
Data compression reduces bandwidth 2-38x depending on encoding format and batching strategy
Use the bandwidth calculator to estimate needs before deployment
Practice Exercise: A 500-space smart parking deployment sends a 6-byte payload plus 13 bytes of LoRaWAN overhead on each state change (~4 per space per day) and an hourly heartbeat. Size the uplink bandwidth.
Solution:
Average messages per day = (500 spaces x 4 state changes) + (500 x 24 heartbeats) = 14,000 messages/day
Average rate = 14,000 x (6 + 13) bytes x 8 bits / 86,400 seconds = 24.6 bps (any protocol works)
Peak: 80% of 500 spaces = 400 state changes in 2 hours = 200 events/hour
Peak bandwidth: 200 x 19 bytes x 8 bits / 3,600 sec = 8.4 bps
With 1.5x headroom on peak: 12.7 bps required
Recommendation: LoRa SF10 (980 bps) provides massive headroom at under 3% utilization
NB-IoT would waste $750/month ($1.50/sensor x 500 sensors) for bandwidth you don’t need
Key Lesson: Calculate actual usage, not theoretical maximums. Don’t over-provision!
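The exercise arithmetic above can be double-checked in a few lines of Python:

```python
MSG_BYTES = 6 + 13  # payload + LoRaWAN overhead

events_per_day = 500 * 4 + 500 * 24           # state changes + hourly heartbeats
avg_bps = events_per_day * MSG_BYTES * 8 / 86_400
peak_bps = 200 * MSG_BYTES * 8 / 3_600        # 200 events/hour at peak

print(round(avg_bps, 1), round(peak_bps, 1), round(peak_bps * 1.5, 1))
# 24.6 8.4 12.7
```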