MAC Layer: The Medium Access Control sublayer of the Data Link layer; governs when and how devices access a shared communication channel
CSMA (Carrier Sense Multiple Access): A family of MAC protocols where a device senses the channel before transmitting to avoid interfering with ongoing transmissions
Random Backoff: A delay chosen from a random range before retransmitting after a collision or detected busy channel; reduces the probability of repeated collisions
Binary Exponential Backoff: A backoff algorithm where the retry delay range doubles after each failed attempt; used in Ethernet and 802.11
Duty Cycle: The fraction of time a radio is active (transmitting or receiving); lower duty cycle conserves battery life
Superframe: A periodic structure in beacon-enabled IEEE 802.15.4 networks dividing time into a contention access period and an optional contention-free period
CCA Threshold: The signal power level above which a device considers the channel busy and defers transmission
14.1 In 60 Seconds
Advanced networking topics critical for production IoT: TCP congestion control can misinterpret wireless packet loss as congestion (reducing throughput unnecessarily), IPv4 exhaustion forces NAT usage that breaks end-to-end IoT connectivity, hidden/exposed terminal problems affect CSMA-based protocols, and MTU fragmentation adds significant overhead to small IoT payloads.
14.2 Learning Objectives
By the end of this chapter, you will be able to:
Analyze TCP congestion control behavior and its impact on IoT throughput over lossy wireless links
Explain IPv4 exhaustion and NAT challenges for IoT deployments
Compare MAC protocols (CSMA, TDMA, ALOHA) for different IoT use cases
Identify and solve hidden and exposed terminal problems using RTS/CTS and TDMA
Implement QoS for prioritizing critical IoT traffic using DSCP marking
Calculate fragmentation overhead and select MTU-aware strategies for constrained networks
For Beginners: MAC Protocols
MAC (Media Access Control) protocols are the traffic rules that decide when each device is allowed to transmit data. Imagine a group conversation where everyone needs to take turns speaking – MAC protocols ensure devices do not all talk at once and create chaos on the network.
Sensor Squad: The Advanced Traffic Rules!
“We learned basic MAC protocols, but real IoT networks have trickier problems,” said Max the Microcontroller. “Like TCP congestion control – it was designed for wired networks. When a wireless packet gets lost due to interference, TCP thinks the network is overloaded and slows down. But the network was fine – it was just a momentary radio glitch!”
“That is called the TCP-over-wireless problem,” explained Lila the LED. “TCP reduces its sending rate when it should not, making things slower for no reason. Some IoT systems use UDP instead to avoid this issue, handling reliability themselves at the application layer.”
Sammy the Sensor raised another concern. “What about the hidden terminal problem? I cannot hear another sensor behind a wall, so I think the channel is free and start transmitting. But the gateway can hear BOTH of us, and our signals collide there!”
“Great example,” said Bella the Battery. “The solution is RTS/CTS – Request-to-Send and Clear-to-Send messages. Before transmitting big data, you ask the gateway for permission. The gateway broadcasts ‘clear,’ telling all nearby devices to wait. It adds a little overhead but prevents costly collisions, especially important for battery-powered devices like me!”
How It Works: TCP Congestion Control in IoT Networks
TCP congestion control operates on a simple principle: packet loss signals congestion. Here’s the step-by-step mechanism and why it fails for wireless IoT:
Step 1: Slow Start Phase
TCP starts conservatively with 1 MSS (Maximum Segment Size)
Window doubles each RTT until reaching slow-start threshold
Example: 1 → 2 → 4 → 8 MSS
Step 2: Congestion Avoidance
After threshold, window grows linearly (+1 MSS per RTT)
Continues until packet loss detected
Step 3: Loss Response (The IoT Problem)
TCP detects loss → assumes congestion → halves window
For wired networks: loss = congestion (correct assumption)
For wireless IoT: loss = interference (wrong assumption!)
Why This Breaks for LoRa/Zigbee:
5% wireless packet loss (normal interference)
TCP interprets ALL as congestion
Window repeatedly halved → throughput collapses to a small fraction of link capacity (~30% in the worked example below)
Battery drains retransmitting unnecessarily
Solution: Use UDP-based protocols (CoAP) that don’t assume loss = congestion, letting application layer decide retry strategy.
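The three phases above can be condensed into a toy AIMD (additive-increase, multiplicative-decrease) simulation. This is a sketch under simplifying assumptions – the per-RTT loss model and parameter values are illustrative, not measured:

```python
import random

def simulate_cwnd(rtts, loss_rate, ssthresh=8, seed=1):
    """Toy TCP model: slow start doubles CWND each RTT, congestion
    avoidance adds 1 MSS per RTT, and any loss halves CWND."""
    rng = random.Random(seed)
    cwnd, history = 1, []  # CWND measured in MSS units
    for _ in range(rtts):
        history.append(cwnd)
        # Probability of losing at least one packet this RTT grows
        # with the number of packets in flight (simplified model)
        if rng.random() < cwnd * loss_rate:
            cwnd = max(1, cwnd // 2)     # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: 1 -> 2 -> 4 -> 8
        else:
            cwnd += 1                    # congestion avoidance
    return history

lossy = simulate_cwnd(200, loss_rate=0.05)  # wireless-style 5% loss
clean = simulate_cwnd(200, loss_rate=0.0)   # lossless wired link
print(f"avg CWND with 5% loss: {sum(lossy) / len(lossy):.1f} MSS")
print(f"avg CWND with no loss: {sum(clean) / len(clean):.1f} MSS")
```

On the lossless run the window grows steadily; with 5% loss it oscillates in the low single digits – the collapse the text describes.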
14.3 Deep Dive: Advanced Networking Concepts for IoT
This section covers advanced networking topics for engineers implementing production IoT systems. Beginners can skip and return when deploying large-scale networks.
14.3.1 TCP Congestion Control and IoT Implications
TCP’s congestion control algorithms (originally designed for traditional networks) can cause problems in IoT environments.
Putting Numbers to It
TCP Congestion Window Dynamics on Lossy Links
TCP’s congestion window (CWND) determines how many bytes can be in-flight. On wireless links, packet loss causes catastrophic throughput collapse:
Loss Event (multiplicative decrease): \[\text{CWND}_{\text{new}} = \frac{\text{CWND}_{\text{old}}}{2}\]
Example: LoRaWAN TCP connection with 5% loss rate, RTT = 2 s, MSS = 128 bytes, link = 5 kbps:
- Ideal CWND: \(\frac{5{,}000 \times 2}{8} = 1{,}250\) bytes (≈10 MSS)
- With 5% loss: one loss every ~20 packets → CWND halved before reaching 10 MSS
- Measured CWND: oscillates between 1 and 5 MSS, averaging 3 MSS → 30% of ideal
- Throughput: \(\frac{3 \times 128 \times 8}{2} = 1{,}536\) bps (31% of the 5 kbps capacity)
For comparison, UDP-based CoAP with application-layer retries achieves 4,500 bps (90% capacity) on the same link – explaining why IoT protocols avoid TCP on lossy wireless networks.
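The figures in the example above can be reproduced with a few lines of arithmetic (all inputs come from the text):

```python
link_bps = 5_000   # LoRaWAN-class link capacity
rtt_s    = 2.0
mss_B    = 128

# Ideal window = bandwidth-delay product
ideal_cwnd_B = link_bps * rtt_s / 8
print(f"Ideal CWND: {ideal_cwnd_B:.0f} bytes ({ideal_cwnd_B / mss_B:.0f} MSS)")

# Observed average window of 3 MSS (from the text) -> throughput
avg_cwnd_mss = 3
throughput_bps = avg_cwnd_mss * mss_B * 8 / rtt_s
print(f"Throughput: {throughput_bps:.0f} bps "
      f"({throughput_bps / link_bps:.0%} of capacity)")
```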
Solutions:
Use UDP for lossy links: CoAP (UDP-based) works better than HTTP (TCP)
14.3.2 IPv4 Exhaustion and NAT Challenges
IPv4’s 32-bit address space (~4.3 billion addresses) cannot cover tens of billions of IoT devices, so most deployments sit behind NAT (Network Address Translation) – which creates its own problems:
NAT blocks inbound connections: cloud servers cannot reach devices directly
Some protocols don’t work through NAT (FTP, SIP without an ALG)
Port exhaustion:
One public IP = 65,535 usable ports
If 10,000 IoT devices each maintain 10 connections:
10,000 × 10 = 100,000 connections → EXCEEDS the port limit!
Result: Connection failures, timeouts
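The port arithmetic is easy to sanity-check. This assumes a single public IP; real NATs track ports per destination tuple, so treat it as a worst-case sketch:

```python
public_ips   = 1
ports_per_ip = 65_535   # usable TCP/UDP ports per public address
devices      = 10_000
conns_each   = 10       # long-lived connections per device

needed    = devices * conns_each
available = public_ips * ports_per_ip
print(f"Connections needed: {needed:,}")
print(f"Ports available:    {available:,}")
print(f"Exhausted: {needed > available}")
```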
IPv6: The Real Solution:
IPv6 address space: 2¹²⁸ ≈ 3.4 × 10³⁸ addresses
That's: 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses
≈ 6.7 × 10¹⁷ addresses per square millimeter of Earth's surface!
Every IoT device can have multiple global unicast addresses - no NAT needed!
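Python’s arbitrary-precision integers make the address-space claim easy to check; the only external assumption is Earth’s surface area of roughly 510 million km²:

```python
total = 2 ** 128
print(f"IPv6 addresses: {total:,}")

earth_mm2 = 510_000_000 * 10**12   # ~510 million km², expressed in mm²
per_mm2 = total // earth_mm2
print(f"Addresses per mm² of Earth's surface: {per_mm2:.3e}")
```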
IPv6 Adoption in IoT (2026):
Cellular IoT (NB-IoT, LTE-M): 100% IPv6 support
Wi-Fi 6: Full IPv6 support
Zigbee, Thread: 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks)
LoRaWAN: IPv6 via SCHC (Static Context Header Compression) adaptation, standardized as SCHC over LoRaWAN
14.3.3 Link Layer Protocol Comparison: CSMA vs TDMA vs ALOHA
Different medium access control (MAC) protocols affect IoT network performance.
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance): Used by: Wi-Fi, Zigbee, Bluetooth
Algorithm:
1. Listen to channel
2. If busy → wait random backoff time
3. If idle → transmit
4. If collision → exponential backoff and retry
Pros:
+ Fair access (all nodes get turns)
+ Efficient at low-medium loads
+ Simple, distributed (no coordinator needed)
Cons:
- Poor at high loads (too many collisions)
- Hidden node problem (can't hear all transmitters)
- Exposed node problem (unnecessary backoff)
Real performance (Wi-Fi):
- 1-10 devices: 70-90% channel utilization
- 50+ devices: 30-50% channel utilization (collisions dominate)
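Step 4 of the algorithm above – exponential backoff – can be sketched as follows. The contention-window values (CWmin = 16, CWmax = 1024) are typical Wi-Fi-style defaults assumed for illustration:

```python
import random

def backoff_slots(attempt, cw_min=16, cw_max=1024, seed=None):
    """Binary exponential backoff: the contention window (CW) doubles
    after each failed attempt, capped at cw_max; the node then waits
    a uniformly random number of slots in [0, CW - 1]."""
    rng = random.Random(seed)
    cw = min(cw_min * (2 ** attempt), cw_max)
    return rng.randrange(cw)

# The average wait grows with each retry, spreading nodes apart in time
for attempt in range(5):
    cw = min(16 * 2 ** attempt, 1024)
    print(f"attempt {attempt}: CW = {cw:4d} slots, "
          f"example draw = {backoff_slots(attempt, seed=attempt)}")
```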
14.3.4 MAC Protocol Challenges: Hidden and Exposed Terminals
CSMA/CA works well in simple scenarios, but wireless networks face unique challenges that don’t exist in wired networks. Two fundamental problems—hidden terminals and exposed terminals—can severely degrade network performance in IoT deployments.
14.3.4.1 The Hidden Terminal Problem
Definition: Two nodes cannot hear each other’s transmissions but both can reach a common destination, causing undetected collisions at the receiver.
Figure 14.2: Hidden terminal problem with nodes A and C both transmitting to gateway B unaware of each other
The Problem in Action:
Scenario: Smart factory with Wi-Fi sensors
Time 0ms: Node A senses channel → idle (can't hear C)
Node C senses channel → idle (can't hear A)
Time 1ms: Node A starts transmitting to Gateway B
Node C starts transmitting to Gateway B
Time 2ms: COLLISION at Gateway B!
- Gateway B receives garbled data from both A and C
- A and C think transmission succeeded (no collision detection)
- Both A and C cannot hear each other's carrier signal
Result: Both frames lost, but nodes don't know it
No retransmission until timeout (wasted time and energy)
Real-World IoT Impact:

| Scenario | Hidden Terminal Effect | Performance Loss |
|---|---|---|
| Smart home (2-story house) | Upstairs sensors can’t hear downstairs → collisions at Wi-Fi AP | 20-40% packet loss |
| Zigbee mesh (walls) | Nodes in different rooms → collisions at coordinator | — |
The Solution: RTS/CTS Handshake
Step 1: Node A sends RTS (Request to Send) to Gateway B
- RTS includes: duration of upcoming transmission
- All nodes hearing RTS (even if they can't hear A) set NAV timer
Step 2: Gateway B responds with CTS (Clear to Send)
- CTS also includes transmission duration
- All nodes hearing CTS (including hidden Node C) set NAV timer
- Crucially: Node C hears CTS even if it couldn't hear RTS!
Step 3: Node A transmits DATA frame
- Hidden Node C stays silent (NAV timer active)
- No collision at Gateway B
Step 4: Gateway B sends ACK (acknowledgment)
- Transmission complete
- All NAV timers expire → nodes can compete for channel again
RTS/CTS Trade-offs:
Overhead Calculation (Wi-Fi example):
Without RTS/CTS:
- DATA frame: 1500 bytes
- ACK: 14 bytes
- Total airtime: ~120 μs (at 100 Mbps)
With RTS/CTS:
- RTS: 20 bytes → 1.6 μs
- CTS: 14 bytes → 1.1 μs
- DATA: 1500 bytes → 120 μs
- ACK: 14 bytes → 1.1 μs
- Total airtime: ~124 μs
Overhead: 4 μs / 124 μs = 3.2% (small!)
But if hidden terminals cause 30% collisions:
- Without RTS/CTS: 30% retransmissions → 156 μs average (collision + retry)
- With RTS/CTS: 3.2% overhead → 124 μs (no collisions)
RTS/CTS wins when collision rate > ~10%!
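The airtime arithmetic above can be scripted. Note this sketch ignores inter-frame spacing (SIFS/DIFS) and preambles, so the exact percentage lands slightly below the rounded figures in the text:

```python
def airtime_us(size_bytes, rate_mbps=100):
    """Serialization delay in microseconds (preamble/IFS ignored)."""
    return size_bytes * 8 / rate_mbps

DATA, ACK, RTS, CTS = 1500, 14, 20, 14  # frame sizes in bytes

plain = airtime_us(DATA) + airtime_us(ACK)
with_handshake = airtime_us(RTS) + airtime_us(CTS) + plain
overhead = (with_handshake - plain) / with_handshake
print(f"Without RTS/CTS: {plain:.1f} us")
print(f"With RTS/CTS:    {with_handshake:.1f} us (overhead {overhead:.1%})")

# With 30% collisions, the average plain-CSMA exchange inflates ~1.3x,
# which is when the fixed RTS/CTS cost starts to pay for itself
print(f"Plain with 30% retransmissions: {plain * 1.3:.0f} us")
```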
When to Enable RTS/CTS:

| IoT Deployment | Enable RTS/CTS? | Reasoning |
|---|---|---|
| Small smart home (< 10 devices) | ❌ No | Low collision risk, overhead not worth it |
| Large smart building (50+ Wi-Fi devices) | ✅ Yes | Hidden terminals common, reduces collisions |
| Zigbee mesh (many walls) | ✅ Yes | Physical obstacles create hidden terminals |
| Industrial IoT (metal machinery) | ✅ Yes | Severe RF blocking → many hidden terminals |
| LoRaWAN | ❌ N/A | Uses ALOHA, not CSMA (no RTS/CTS) |
| Bluetooth mesh | ✅ Yes | Dense deployments with obstacles |
14.3.4.2 The Exposed Terminal Problem
Definition: A node unnecessarily defers transmission when its transmission would not cause a collision, reducing channel efficiency.
Why It Happens:
Node overhears a transmission not directed at its intended receiver
CSMA/CA conservatively assumes all transmissions will collide
Spatial reuse is prevented
Figure 14.4: Exposed terminal problem where node C unnecessarily defers though D is out of B’s range
The Problem in Action:
Scenario: Zigbee smart lighting network
Configuration:
- Node B (light switch) transmitting to Node A (controller)
- Node C (motion sensor) wants to transmit to Node D (different controller)
- Node C can hear Node B, but Node D cannot
Time 0ms: Node B starts transmitting to Node A
Time 1ms: Node C senses channel → BUSY (hears B)
Node C defers transmission
Problem: Node C COULD transmit to Node D safely!
- Node D is far from Node B (won't hear B's transmission)
- C → D transmission wouldn't collide with B → A
- But CSMA/CA doesn't allow spatial reuse
Result: Channel is underutilized
Node C waits unnecessarily → increased latency
Real-World IoT Impact:

| Scenario | Exposed Terminal Effect | Efficiency Loss |
|---|---|---|
| Linear sensor array | Sensors at opposite ends defer unnecessarily | 30-50% channel underutilization |
| Multi-floor building Wi-Fi | Devices on different floors could transmit simultaneously | 20-40% throughput loss |
| Factory floor WSN | Sensors in different zones defer to each other | 25-35% latency increase |
| Parking lot sensors | Distant sensors wait for nearby transmissions | 15-30% reduced spatial reuse |
Why Exposed Terminals Matter Less Than Hidden Terminals:
Hidden Terminal: CAUSES COLLISIONS → data loss, retransmissions, wasted energy
Severity: HIGH (actual failures)
Exposed Terminal: REDUCES EFFICIENCY → channel underutilization, higher latency
Severity: MEDIUM (performance degradation, not failures)
Engineering Priority:
1. Solve hidden terminals first (RTS/CTS)
2. Accept some exposed terminal inefficiency as acceptable trade-off
3. Use directional antennas or TDMA for mission-critical deployments
14.3.4.3 MAC Protocol Comparison: Handling Hidden/Exposed Terminals
| MAC Protocol | Hidden Terminal Solution | Exposed Terminal Solution | IoT Use Cases |
|---|---|---|---|
| CSMA/CA | ❌ Poor (collisions common) | ❌ Poor (spatial reuse limited) | Wi-Fi, Zigbee (add RTS/CTS for improvements) |
| CSMA/CA + RTS/CTS | ✅ Good (handshake prevents collisions) | ❌ Poor (still defers unnecessarily) | Dense Wi-Fi, industrial Zigbee |
| TDMA | ✅ Excellent (no collisions, time slots) | ✅ Excellent (spatial reuse via slot assignment) | Cellular IoT, mission-critical WSN |
| ALOHA | ❌ Terrible (no carrier sensing at all!) | ✅ Good (transmits anytime → spatial reuse) | LoRaWAN (works due to sparse traffic) |
| FDMA | ✅ Excellent (different frequencies) | ✅ Excellent (simultaneous transmission) | Cellular (frequency bands per user) |
Real-World Mitigation Strategies:
Strategy 1: Network Planning (Design Phase)
- Site survey to identify hidden terminal zones
- Strategic AP/gateway placement to minimize coverage holes
- Use 5 GHz Wi-Fi (less interference/congestion than 2.4 GHz, though shorter range; 2.4 GHz penetrates walls better)
Strategy 2: Protocol Configuration (Deployment Phase)
- Enable RTS/CTS for networks with > 20 devices or physical obstacles
- Adjust CSMA/CA backoff parameters (increase contention window)
- Use mesh routing to provide multiple paths (avoids single collision point)
Strategy 3: Advanced Techniques (Enterprise IoT)
- Beamforming (directional antennas reduce hidden terminals)
- MU-MIMO (Wi-Fi 6) - simultaneous transmission to multiple devices
- TDMA scheduling for deterministic traffic (industrial IoT)
- Frequency hopping (Bluetooth) - collision on one channel doesn't affect others
Case Study: Wi-Fi Smart Factory Deployment
Initial Deployment (no RTS/CTS):
- 80 Wi-Fi sensors across factory floor
- Metal machinery creates hidden terminals
- Measured performance:
* 45% packet loss during peak traffic
* 3-5 second latency for critical alerts
* 60% retransmission rate
After Enabling RTS/CTS + Network Redesign:
- RTS/CTS enabled on all APs
- Added 2 additional APs to reduce hidden terminal zones
- Mesh routing for redundancy
- Measured performance:
* 5% packet loss (acceptable)
* 200-500ms latency for alerts (excellent)
* 8% retransmission rate (normal)
Cost: $2000 for 2 additional APs
Benefit: $50,000/year in reduced downtime from missed alerts
Having covered hidden and exposed terminal mitigations for CSMA, the remaining MAC protocols – TDMA and ALOHA – take fundamentally different approaches that sidestep these problems altogether.
TDMA (Time Division Multiple Access): Used by: Cellular (GSM, LTE), some industrial WSN
Algorithm:
1. Coordinator assigns time slots to nodes
2. Each node transmits only in its slot
3. No collisions possible
Example: 10ms frame, 10 nodes
Node 1: 0-1ms
Node 2: 1-2ms
...
Node 10: 9-10ms
Repeat every 10ms
Pros:
+ No collisions → predictable, high efficiency
+ Deterministic latency (max = frame duration)
+ Low power (sleep when not your slot)
Cons:
- Requires time synchronization (GPS, PTP)
- Coordinator needed (single point of failure)
- Wasted slots if node has no data
- Scalability issues (more nodes = longer frames)
Real performance (LTE):
- 50+ devices: 85-95% channel utilization (even with many devices)
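The 10-node slot layout in the example above can be generated programmatically. This is a minimal sketch of fixed equal-length slots, not a full superframe implementation:

```python
def tdma_schedule(num_nodes, frame_ms):
    """Divide one frame into equal, non-overlapping slots:
    node n owns [(n-1) * slot, n * slot)."""
    slot = frame_ms / num_nodes
    return [(n + 1, n * slot, (n + 1) * slot) for n in range(num_nodes)]

schedule = tdma_schedule(10, frame_ms=10)
for node, start, end in schedule:
    print(f"Node {node}: {start:.0f}-{end:.0f} ms")
# Worst-case channel-access latency = one full frame (here, 10 ms)
```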
ALOHA / Slotted ALOHA: Used by: LoRaWAN Class A, Sigfox
Pure ALOHA:
1. Transmit whenever you have data
2. If collision → wait random time, retry
3. No carrier sensing
Slotted ALOHA:
1. Time divided into slots
2. Transmit only at slot boundaries
3. Still no carrier sensing
Throughput comparison:
Pure ALOHA: Max 18% channel utilization (!)
Slotted ALOHA: Max 36% channel utilization
CSMA/CA: Max 70-90% channel utilization
TDMA: Max 85-95% channel utilization
Why use ALOHA for LoRaWAN?
+ Ultra-simple (no coordination, no synchronization)
+ Low power (no listening, just transmit)
+ Works for infrequent traffic (0.5% duty cycle)
+ Scales to 1000s of nodes (if traffic is sparse)
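The throughput limits quoted above come from the classic ALOHA analysis, where offered load G is the average number of transmission attempts per frame time: S = G·e^(−2G) for pure ALOHA (peaking at G = 0.5) and S = G·e^(−G) for slotted ALOHA (peaking at G = 1):

```python
import math

def pure_aloha(G):
    """Throughput of pure ALOHA: a frame survives only if no other
    transmission starts within a 2-frame vulnerability window."""
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    """Slotted ALOHA halves the vulnerability window to one slot."""
    return G * math.exp(-G)

print(f"Pure ALOHA peak (G=0.5):    {pure_aloha(0.5):.1%}")    # 1/(2e)
print(f"Slotted ALOHA peak (G=1.0): {slotted_aloha(1.0):.1%}")  # 1/e
```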
Choosing MAC Protocol for IoT:
| Use Case | Traffic Pattern | Best MAC | Why |
|---|---|---|---|
| Smart meter (1 msg/hour) | Infrequent, periodic | ALOHA (LoRaWAN) | Simple, low power, scales well for sparse traffic |
| Industrial sensor (100 Hz) | Frequent, deterministic | TDMA | Predictable latency, no collisions, mission-critical |
| Smart home devices | Bursty, on-demand | CSMA (Wi-Fi, Zigbee) | Fair access, efficient for dynamic traffic |
| Real-time video | Continuous, high-rate | TDMA (LTE) | Guaranteed bandwidth, low latency |
Try It: ALOHA Network Capacity Calculator
Explore how the number of nodes and transmission duty cycle affect channel throughput under ALOHA.
14.3.5 MTU and Fragmentation in Constrained Networks
Constrained link layers impose small MTUs (6LoWPAN: 127 bytes), forcing larger IP packets to be fragmented – and fragmentation multiplies the effective loss rate.
Scenario: HTTP request from 6LoWPAN sensor to cloud server
HTTP GET request size: 400 bytes
6LoWPAN MTU: 127 bytes
Required fragments: ceil(400 / 127) = 4 fragments
Fragment transmission:
- Fragment 1/4: 127 bytes
- Fragment 2/4: 127 bytes
- Fragment 3/4: 127 bytes
- Fragment 4/4: 19 bytes
If ANY fragment is lost → ENTIRE datagram lost!
Packet loss: 5% per fragment
Success probability: (0.95)⁴ = 81.5%
Effective loss rate: 18.5% (!)
Compare to no fragmentation (single 127-byte packet):
Packet loss: 5%
Much better!
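The all-or-nothing loss multiplication above generalizes to any payload/MTU combination. This sketch ignores 6LoWPAN fragment-header bytes, so the fragment count is slightly optimistic:

```python
import math

def fragmentation_loss(payload_bytes, mtu_bytes, per_frag_loss):
    """The datagram survives only if every fragment survives, so the
    effective loss is 1 - (1 - p)^n for n fragments."""
    frags = math.ceil(payload_bytes / mtu_bytes)
    return frags, 1 - (1 - per_frag_loss) ** frags

frags, loss = fragmentation_loss(400, 127, 0.05)
print(f"Fragments: {frags}, effective loss: {loss:.1%}")  # vs 5% unfragmented
```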
Try It: MTU Fragmentation Loss Calculator
Adjust the per-fragment loss rate and payload size to see how fragmentation multiplies effective packet loss.
Mitigation strategies:
Path MTU Discovery: Discover the smallest MTU along the path
Send packets with DF (Don't Fragment) bit set
If too large → get ICMP "Fragmentation Needed" error
Reduce packet size and retry
Application-level chunking: Send multiple small requests
Instead of: 1× 400-byte HTTP request
Use: 4× 100-byte HTTP requests (each fits in one 6LoWPAN frame)
Protocol choice:
Avoid TCP for constrained networks (large headers, fragmentation)
Use CoAP (UDP-based, small headers, built-in chunking)
14.3.6 Quality of Service (QoS) for IoT Traffic Differentiation
Not all IoT data is equal - some requires priority handling.
DiffServ (Differentiated Services) Model:
IPv4/IPv6 packets have DSCP (Differentiated Services Code Point) field:
Figure 14.5: The DSCP (Differentiated Services Code Point) field within the IP header
IoT Traffic Classes:
| Application | Latency Requirement | Loss Tolerance | Suggested QoS |
|---|---|---|---|
| Fire alarm trigger | <100 ms | 0% loss | EF (highest priority) |
| Video surveillance | <500 ms | 1-5% loss | AF41 (high priority) |
| Temperature reading | <5 s | 10% loss OK | AF21 (medium priority) |
| Firmware update | <1 min | 0% loss (use TCP retries) | Best Effort (bulk) |
| Historical data sync | <1 hr | 0% loss (use retries) | Best Effort (lowest) |
Real Implementation:
ESP32 example - marking CoAP packets for fire alarm:
// Set DSCP to EF (Expedited Forwarding) for fire alarm traffic
int tos = 0x2E << 2;  // DSCP 46 (EF) occupies the upper 6 bits of the TOS byte
setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
// Regular temperature readings use the default (Best Effort, DSCP 0)
Router Configuration (Cisco example):
! Define class for IoT alarm traffic
class-map match-any IOT_ALARMS
match dscp ef
! Define policy - guarantee 10% bandwidth, low latency queue
policy-map IOT_QOS
class IOT_ALARMS
priority percent 10
! Apply to WAN interface
interface GigabitEthernet0/0
service-policy output IOT_QOS
Measured Impact:
Without QoS (fire alarm competes with firmware download):
- Alarm latency: 5-50 seconds (unacceptable!)
- Jitter: ±20 seconds
- Occasional drops during congestion
With QoS (EF class for alarms):
- Alarm latency: 50-200ms (excellent)
- Jitter: ±10ms
- Zero drops even during congestion
14.4 Worked Example: Solving Hidden Terminal Collisions in a Warehouse IoT Network
Scenario: A logistics warehouse (80m x 40m) deploys 30 Zigbee inventory sensors on shelving units to track pallet locations. The sensors report to a single gateway mounted at the center ceiling. Metal shelving creates radio shadows, causing hidden terminal problems – sensors on opposite sides of shelving rows cannot hear each other but both reach the gateway. During peak hours (07:00-09:00), 25% of sensor reports are lost to collisions. Design a MAC-layer solution.
Step 1: Quantify the Hidden Terminal Problem
| Parameter | Value |
|---|---|
| Sensors | 30, reporting every 5 seconds |
| Packet size | 40 bytes (20 B payload + 20 B headers) |
| Data rate | 250 kbps (Zigbee 802.15.4) |
| Transmission time per packet | 40 × 8 / 250,000 = 1.28 ms |
| Offered load | 30 packets / 5 seconds = 6 packets/second |
| Channel busy time | 6 × 1.28 ms = 7.68 ms per second (0.77% utilization) |
At 0.77% utilization, CSMA/CA should experience almost zero collisions. Yet 25% of packets are lost. Why?
The hidden terminal explanation: With 30 sensors, approximately 10 pairs cannot hear each other (behind shelving). When sensor A transmits, sensor B (hidden from A) senses the channel as free and also transmits. Both signals arrive at the gateway simultaneously, destroying each other. The 0.77% utilization calculation is misleading because it assumes all nodes see all other transmissions – hidden terminals violate this assumption.
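The Step 1 figures can be verified in a few lines (all parameters come from the table above):

```python
sensors, interval_s = 30, 5
packet_bytes, rate_bps = 40, 250_000  # Zigbee 802.15.4

tx_ms = packet_bytes * 8 / rate_bps * 1000
pkts_per_s = sensors / interval_s
busy_ms_per_s = pkts_per_s * tx_ms
print(f"Tx time per packet: {tx_ms:.2f} ms")
print(f"Offered load:       {pkts_per_s:.0f} packets/s")
print(f"Utilization:        {busy_ms_per_s / 1000:.2%}")
```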
Step 2: Evaluate MAC Protocol Solutions
| Solution | How It Works | Collision Rate | Overhead | Suitable? |
|---|---|---|---|---|
| Pure CSMA/CA (current) | Listen before transmit | 25% (hidden nodes can’t listen) | Minimal | No – hidden terminals bypass carrier sensing |
| RTS/CTS | Request permission from gateway before data | ~2% (gateway arbitrates) | 28 bytes per exchange (14 B RTS + 14 B CTS) | Yes, but doubles airtime |
| TDMA slots | Gateway assigns time slots to each sensor | ~0% (no contention) | Beacon overhead + synchronization | Yes, best for predictable traffic |
| Frequency hopping | Sensors use different channels | ~5% (reduced collision probability) | Channel coordination overhead | Partial – reduces but doesn’t eliminate |
Step 3: TDMA Slot Calculation
Assign each sensor a dedicated time slot within a 5-second reporting cycle:
Superframe duration: 5,000 ms (one complete cycle)
Sensors: 30
Slot duration: 1.28 ms (data) + 0.5 ms (guard time) + 1.0 ms (ACK) = 2.78 ms
Active period: 30 x 2.78 ms = 83.4 ms
Sleep period: 5,000 - 83.4 = 4,916.6 ms (98.3% sleep time)
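The slot budget above, as a script (slot components taken from the text; the slots-per-frame figure is simple division and ignores beacon overhead):

```python
sensors, frame_ms = 30, 5_000
data_ms, guard_ms, ack_ms = 1.28, 0.5, 1.0  # per-slot components

slot_ms = data_ms + guard_ms + ack_ms
active_ms = sensors * slot_ms
sleep_fraction = (frame_ms - active_ms) / frame_ms
print(f"Slot: {slot_ms:.2f} ms, active period: {active_ms:.1f} ms")
print(f"Sleep fraction: {sleep_fraction:.1%}")
print(f"Slots available per frame: {int(frame_ms // slot_ms)}")
```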
| Metric | CSMA/CA (current) | TDMA (proposed) |
|---|---|---|
| Collision rate | 25% | ~0% |
| Successful delivery | 75% (requires retries) | 99.5%+ |
| Average latency | 5-15 ms (with backoff) | Deterministic (assigned slot) |
| Radio on-time per sensor | ~50 ms/cycle (listen + transmit + retries) | 2.78 ms/cycle |
| Battery impact | Baseline | 18× less radio on-time |
Step 4: Decision
TDMA is the optimal choice for this scenario because:
Traffic is predictable – every sensor reports at the same 5-second interval, making slot assignment straightforward
Hidden terminals are eliminated – no contention means no collisions regardless of radio shadows
Battery savings are dramatic – sensors sleep 98.3% of the time instead of continuously monitoring the channel
Capacity headroom exists – 83.4 ms of 5,000 ms used, allowing the network to grow to 150+ sensors before slots become tight
When CSMA/CA would be better: If sensors report only on events (unpredictable timing), TDMA wastes empty slots. For event-driven traffic with hidden terminals, RTS/CTS is the pragmatic choice despite its overhead.
Incremental Examples: Beginner Level
Scenario: You have 3 smart home sensors (temperature, motion, light) connected to a Wi-Fi access point.
Challenge: Why does your motion sensor sometimes fail to report movement immediately?
Answer: Hidden terminal problem! The temperature sensor in the basement can’t hear the light sensor in the attic. Both think the channel is idle and transmit simultaneously to the AP, causing collision. The motion sensor’s urgent alert gets caught in the collision.
Solution: Enable RTS/CTS on your AP. Before transmitting large data, sensors request permission. The AP broadcasts “clear to send” which ALL sensors hear, preventing collisions.
Incremental Examples: Intermediate Level
Scenario: A warehouse IoT deployment with 100 Zigbee sensors experiences 30% packet loss during peak hours (07:00-09:00).
Key Insight: At sparse traffic (1,000 sensors × 4 readings/hour = 0.01% duty cycle), pure ALOHA-style approaches outperform coordinated TDMA due to zero synchronization overhead.
14.5 Concept Relationships: MAC Protocols and IoT Design
Baseline Test:
# Configure all nodes for 100 packets/second
# Measure packet delivery rate at gateway
# Expected: 95-98% delivery (normal wireless loss)
Hidden Terminal Test:
# Place metal barrier between Node A and Node C
# Both nodes transmit simultaneously
# Expected: 30-50% packet loss (collisions at gateway)
RTS/CTS Mitigation:
# Enable RTS/CTS on all nodes (if supported)
# Repeat test with metal barrier
# Expected: 10-15% loss (hidden terminal solved, normal loss remains)
What to Observe:
Without RTS/CTS: Collision rate increases dramatically when nodes can’t hear each other
With RTS/CTS: Gateway’s CTS broadcast prevents simultaneous transmission
Trade-off: RTS/CTS adds ~3% overhead but prevents 20-40% collision loss
Hint: If your Zigbee hardware doesn’t support RTS/CTS toggle, use Wi-Fi with iw dev wlan0 set rts <threshold> on Linux to enable/disable RTS/CTS dynamically.
Solution:
# Disable RTS/CTS (threshold = off)
iw dev wlan0 set rts off
# Run iperf test → measure throughput with collisions
# Enable RTS/CTS (threshold = 100 bytes)
iw dev wlan0 set rts 100
# Run iperf test → measure improved throughput
# Expected improvement: 30-50% throughput gain in hidden terminal scenario
Extension: Try varying the RTS threshold (100, 500, 1500, 2346 bytes) and plot throughput vs overhead. Find the optimal threshold for your packet size distribution.
Common Pitfalls
1. Using a CCA Threshold That Is Too Sensitive
Setting the CCA threshold too low (detecting very weak signals as “busy”) causes unnecessary deferrals and reduces throughput, especially near the edge of coverage. Fix: calibrate the CCA threshold to a level that detects genuine transmissions in the deployment environment, not background noise.
2. Not Accounting for ACK Timeout in Latency Calculations
IEEE 802.15.4 requires an ACK within 192 µs. If no ACK arrives, the device retransmits up to 3 times. At worst, one packet delivery takes 4 × (transmission time + 192 µs). Fix: include the maximum MAC-layer retry delay in end-to-end latency budgets for time-critical applications.
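The worst-case figure in this pitfall can be made concrete. The 40-byte frame size and 250 kbps rate are assumptions for illustration, and channel-access backoff between retries is ignored:

```python
ack_timeout_us = 192       # 802.15.4 ACK wait (from the text)
max_retries = 3            # MAC-layer retransmission limit
frame_bytes, rate_bps = 40, 250_000  # illustrative frame size and data rate

tx_us = frame_bytes * 8 / rate_bps * 1e6
worst_us = (1 + max_retries) * (tx_us + ack_timeout_us)
print(f"Tx time: {tx_us:.0f} us")
print(f"Worst-case MAC delivery: {worst_us:.0f} us ({worst_us / 1000:.2f} ms)")
```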
3. Forgetting That MAC Layer Behaviour Varies Between Devices
Two “Zigbee” devices from different vendors may use different backoff ranges or retry counts, causing one to monopolise the channel. Fix: verify MAC parameters across all device vendors in a mixed deployment and normalise settings through coordinator configuration where possible.
14.7 Summary
TCP congestion control interprets wireless packet loss as network congestion, causing severe throughput degradation on lossy IoT links
NAT allows IPv4 sharing but blocks inbound connections and creates complexity for peer-to-peer IoT
MAC protocols differ significantly: CSMA is fair but collision-prone; TDMA is deterministic but requires coordination; ALOHA is simple but low efficiency
Hidden terminals cause collisions that CSMA cannot detect - solve with RTS/CTS handshaking
QoS with DSCP marking ensures critical IoT traffic (alarms) gets priority over bulk transfers
Fragmentation increases effective loss rate - design for MTU constraints rather than fragmenting
14.8 What’s Next
Build on the MAC-layer concepts covered here by exploring the related chapters below: