14  Advanced MAC Protocols

Key Concepts
  • MAC Layer: The Medium Access Control sublayer of the Data Link layer; governs when and how devices access a shared communication channel
  • CSMA (Carrier Sense Multiple Access): A family of MAC protocols where a device senses the channel before transmitting to avoid interfering with ongoing transmissions
  • Random Backoff: A delay chosen from a random range before retransmitting after a collision or detected busy channel; reduces the probability of repeated collisions
  • Binary Exponential Backoff: A backoff algorithm where the retry delay range doubles after each failed attempt; used in Ethernet and 802.11
  • Duty Cycle: The fraction of time a radio is active (transmitting or receiving); lower duty cycle conserves battery life
  • Superframe: A periodic structure in beacon-enabled IEEE 802.15.4 networks dividing time into a contention access period and an optional contention-free period
  • CCA Threshold: The signal power level above which a device considers the channel busy and defers transmission

14.1 In 60 Seconds

Advanced networking topics critical for production IoT: TCP congestion control can misinterpret wireless packet loss as congestion (reducing throughput unnecessarily), IPv4 exhaustion forces NAT usage that breaks end-to-end IoT connectivity, hidden/exposed terminal problems affect CSMA-based protocols, and MTU fragmentation adds significant overhead to small IoT payloads.

14.2 Learning Objectives

By the end of this chapter, you will be able to:

  • Analyze TCP congestion control behavior and its impact on IoT throughput over lossy wireless links
  • Explain IPv4 exhaustion and NAT challenges for IoT deployments
  • Compare MAC protocols (CSMA, TDMA, ALOHA) for different IoT use cases
  • Identify and solve hidden and exposed terminal problems using RTS/CTS and TDMA
  • Implement QoS for prioritizing critical IoT traffic using DSCP marking
  • Calculate fragmentation overhead and select MTU-aware strategies for constrained networks

MAC (Media Access Control) protocols are the traffic rules that decide when each device is allowed to transmit data. Imagine a group conversation where everyone needs to take turns speaking – MAC protocols ensure devices do not all talk at once and create chaos on the network.

“We learned basic MAC protocols, but real IoT networks have trickier problems,” said Max the Microcontroller. “Like TCP congestion control – it was designed for wired networks. When a wireless packet gets lost due to interference, TCP thinks the network is overloaded and slows down. But the network was fine – it was just a momentary radio glitch!”

“That is called the TCP-over-wireless problem,” explained Lila the LED. “TCP reduces its sending rate when it should not, making things slower for no reason. Some IoT systems use UDP instead to avoid this issue, handling reliability themselves at the application layer.”

Sammy the Sensor raised another concern. “What about the hidden terminal problem? I cannot hear another sensor behind a wall, so I think the channel is free and start transmitting. But the gateway can hear BOTH of us, and our signals collide there!”

“Great example,” said Bella the Battery. “The solution is RTS/CTS – Request-to-Send and Clear-to-Send messages. Before transmitting big data, you ask the gateway for permission. The gateway broadcasts ‘clear,’ telling all nearby devices to wait. It adds a little overhead but prevents costly collisions, especially important for battery-powered devices like me!”

How It Works: TCP Congestion Control in IoT Networks

TCP congestion control operates on a simple principle: packet loss signals congestion. Here’s the step-by-step mechanism and why it fails for wireless IoT:

Step 1: Slow Start Phase

  • TCP starts conservatively with 1 MSS (Maximum Segment Size)
  • Window doubles each RTT until reaching slow-start threshold
  • Example: 1 → 2 → 4 → 8 MSS

Step 2: Congestion Avoidance

  • After threshold, window grows linearly (+1 MSS per RTT)
  • Continues until packet loss detected

Step 3: Loss Response (The IoT Problem)

  • TCP detects loss → assumes congestion → halves window
  • For wired networks: loss = congestion (correct assumption)
  • For wireless IoT: loss = interference (wrong assumption!)

Why This Breaks for LoRa/Zigbee:

  • 5% wireless packet loss (normal interference)
  • TCP interprets ALL as congestion
  • Window repeatedly halved → throughput collapses to 4% of capacity
  • Battery drains retransmitting unnecessarily

Solution: Use UDP-based protocols (CoAP) that don’t assume loss = congestion, letting application layer decide retry strategy.
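The collapse described above can be reproduced with a toy simulation. The following is a minimal sketch, not a faithful TCP implementation: it models slow start, congestion avoidance, and multiplicative decrease with an independent per-segment loss probability, using illustrative parameters (2 s RTT, 128-byte MSS, a 10-MSS window cap) chosen to resemble a slow LPWAN link.

```python
import random

def simulate_tcp_throughput(loss_rate, rtt_s=2.0, mss_bytes=128,
                            ssthresh=8, max_cwnd=10, rounds=2000,
                            seed=1):
    """Toy AIMD model: slow start doubles CWND each RTT up to ssthresh,
    congestion avoidance adds 1 MSS per RTT, and any loss halves CWND.
    Returns average throughput in bits per second."""
    rng = random.Random(seed)
    cwnd, delivered = 1, 0  # window in MSS, segments sent
    for _ in range(rounds):
        lost = any(rng.random() < loss_rate for _ in range(cwnd))
        delivered += cwnd
        if lost:
            cwnd = max(1, cwnd // 2)        # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, max_cwnd)  # slow start
        else:
            cwnd = min(cwnd + 1, max_cwnd)  # congestion avoidance
    return delivered * mss_bytes * 8 / (rounds * rtt_s)

print(f"no loss: {simulate_tcp_throughput(0.00):.0f} bps")
print(f"5% loss: {simulate_tcp_throughput(0.05):.0f} bps")
```

Running this shows the lossy link settling well below the loss-free throughput, even though the channel itself has spare capacity – the loss events keep halving the window before it can grow.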

14.3 Deep Dive: Advanced Networking Concepts for IoT

This section covers advanced networking topics for engineers implementing production IoT systems. Beginners can skip this section and return to it when deploying large-scale networks.

14.3.1 TCP Congestion Control and IoT Implications

TCP’s congestion control algorithms (originally designed for traditional networks) can cause problems in IoT environments.

TCP Congestion Window Dynamics on Lossy Links

TCP’s congestion window (CWND) determines how many bytes can be in-flight. On wireless links, packet loss causes catastrophic throughput collapse:

Slow Start phase (exponential growth): \[\text{CWND}_{\text{new}} = \text{CWND}_{\text{old}} \times 2 \quad \text{each RTT until ssthresh}\]

Congestion Avoidance (linear growth): \[\text{CWND}_{\text{new}} = \text{CWND}_{\text{old}} + \text{MSS} \quad \text{per RTT}\]

Loss Event (multiplicative decrease): \[\text{CWND}_{\text{new}} = \frac{\text{CWND}_{\text{old}}}{2}\]

Example: LoRaWAN TCP connection with 5% loss rate, RTT = 2 sec, MSS = 128 bytes, link = 5 Kbps:

  • Ideal CWND: \(\frac{5{,}000 \times 2}{8} = 1{,}250\) bytes (10 MSS)
  • With 5% loss: every 20 packets, one loss → CWND halved before reaching 10 MSS
  • Measured CWND: oscillates between 1-5 MSS, average 3 MSS → 30% of ideal
  • Throughput: \(\frac{3 \times 128 \times 8}{2} = 1{,}536\) bps (31% of 5 Kbps capacity)

For comparison, UDP-based CoAP with application-layer retries achieves 4,500 bps (90% capacity) on the same link – explaining why IoT protocols avoid TCP on lossy wireless networks.

Solutions:

  1. Use UDP for lossy links: CoAP (UDP-based) works better than HTTP (TCP)
  2. Tune TCP parameters: Increase initial CWND, adjust retransmission timeout
  3. Application-layer retries: Implement custom retry logic at MQTT/CoAP layer

Real Implementation:

# Linux TCP tuning for IoT gateway
sysctl -w net.ipv4.tcp_congestion_control=bbr  # Use BBR instead of CUBIC
sysctl -w net.ipv4.tcp_slow_start_after_idle=0  # Don't reset CWND on idle
sysctl -w net.ipv4.tcp_no_metrics_save=1  # Don't cache congestion metrics

14.3.2 IPv4 Address Exhaustion and NAT for IoT

The IPv4 Problem:

IPv4 address space: 2³² = 4,294,967,296 addresses (~4.3 billion)
Unusable addresses: ~590 million (private ranges, reserved, multicast)
Usable public IPv4: ~3.7 billion addresses

Current IoT devices (2026): ~18 billion
Projected (2030): ~30+ billion
Problem: NOT ENOUGH IPv4 addresses for every device!

Network Address Translation (NAT) - The Temporary Fix:

NAT allows many devices to share one public IP:

Your home network (192.168.1.0/24):

  • Router public IP: 203.0.113.45 (ONE public IP)
  • Devices behind NAT:
    • Phone: 192.168.1.100 (private)
    • Thermostat: 192.168.1.101 (private)
    • Camera: 192.168.1.102 (private)
    • Lights: 192.168.1.103-120 (private)
NAT translation table showing internal private IP addresses and ports (192.168.1.100-102 with high ports) mapped to external public IP address with different port numbers (203.0.113.45:50001-50003), demonstrating how multiple IoT devices share a single public IP address through port-based translation
Figure 14.1: NAT translation table mapping private device addresses and ports to a single shared public IP

NAT Problems for IoT:

  1. Inbound connections blocked:

    • Can’t directly access smart camera from internet
    • Need port forwarding or VPN
    • Security risk if misconfigured
  2. NAT traversal complexity:

    • Requires STUN/TURN servers for peer-to-peer
    • Adds latency (extra hops)
    • Some protocols don’t work through NAT (FTP, SIP without ALG)
  3. Port exhaustion:

    One public IP = 65535 ports
    If 10000 IoT devices each maintain 10 connections:
    10000 × 10 = 100,000 connections → EXCEEDS port limit!
    
    Result: Connection failures, timeouts
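The port-exhaustion arithmetic is easy to check. The following is a simplified sketch: it assumes each connection pins one port on the public IP, ignoring that the full 5-tuple lets NAT reuse a port toward different destinations.

```python
PORT_LIMIT = 65_535  # usable port numbers on one public IPv4 address

def nat_ports_needed(devices: int, conns_per_device: int) -> int:
    """Worst-case ports consumed when each connection pins one port."""
    return devices * conns_per_device

needed = nat_ports_needed(10_000, 10)
print(f"needed={needed}, limit={PORT_LIMIT}, "
      f"exhausted={needed > PORT_LIMIT}")
```

With 10,000 devices holding 10 connections each, the demand (100,000) exceeds the single-IP port space, which is exactly the failure mode described above.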

IPv6: The Real Solution:

IPv6 address space: 2¹²⁸ ≈ 3.4 × 10³⁸ addresses

That's: 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses
      ≈ 670,000,000,000,000,000 (6.7 × 10¹⁷) addresses per square millimeter of Earth's surface!

Every IoT device can have multiple global unicast addresses - no NAT needed!

IPv6 Adoption in IoT (2026):

  • Cellular IoT (NB-IoT, LTE-M): 100% IPv6 support
  • Wi-Fi 6: Full IPv6 support
  • Zigbee, Thread: 6LoWPAN (IPv6 over Low-Power Wireless Personal Area Networks)
  • LoRaWAN: IPv6 via IPv6-over-LoRaWAN draft standard

14.3.4 MAC Protocol Challenges: Hidden and Exposed Terminals

CSMA/CA works well in simple scenarios, but wireless networks face unique challenges that don’t exist in wired networks. Two fundamental problems—hidden terminals and exposed terminals—can severely degrade network performance in IoT deployments.

14.3.4.1 The Hidden Terminal Problem

Definition: Two nodes cannot hear each other’s transmissions but both can reach a common destination, causing undetected collisions at the receiver.

Why It Happens:

  • Wireless signals attenuate with distance
  • Physical obstacles (walls, furniture, machinery) block signals
  • Nodes outside each other’s carrier sense range
Hidden terminal problem diagram showing nodes A and C within range of central gateway B but out of range of each other, so both sense an idle channel and transmit simultaneously causing a collision at the gateway
Figure 14.2: Hidden terminal problem with nodes A and C both transmitting to gateway B unaware of each other

The Problem in Action:

Scenario: Smart factory with Wi-Fi sensors

Time 0ms:  Node A senses channel → idle (can't hear C)
           Node C senses channel → idle (can't hear A)

Time 1ms:  Node A starts transmitting to Gateway B
           Node C starts transmitting to Gateway B

Time 2ms:  COLLISION at Gateway B!
           - Gateway B receives garbled data from both A and C
           - A and C think transmission succeeded (no collision detection)
           - Both A and C cannot hear each other's carrier signal

Result:    Both frames lost, but nodes don't know it
           No retransmission until timeout (wasted time and energy)

Real-World IoT Impact:

| Scenario | Hidden Terminal Effect | Performance Loss |
|---|---|---|
| Smart home (2-story house) | Upstairs sensors can’t hear downstairs → collisions at Wi-Fi AP | 20-40% packet loss |
| Zigbee mesh (walls) | Nodes in different rooms → collisions at coordinator | 15-30% throughput reduction |
| Industrial WSN (metal machinery) | Metal blocks RF → severe hidden terminals | 40-60% retransmissions |
| Wi-Fi in dense deployment | 50+ APs in building → overlapping coverage areas | 50%+ channel utilization loss |

Solution: RTS/CTS Handshake (Request-to-Send / Clear-to-Send)

RTS/CTS handshake sequence showing Node A sending RTS to Gateway B, Gateway B broadcasting CTS to all nodes (including hidden Node C), Node A transmitting data while Node C waits, and Gateway B sending ACK on completion
Figure 14.3: RTS/CTS handshake sequence diagram preventing hidden terminal collisions

How RTS/CTS Works:

Step 1: Node A sends RTS (Request to Send) to Gateway B
        - RTS includes: duration of upcoming transmission
        - All nodes hearing RTS (even if they can't hear A) set NAV timer

Step 2: Gateway B responds with CTS (Clear to Send)
        - CTS also includes transmission duration
        - All nodes hearing CTS (including hidden Node C) set NAV timer
        - Crucially: Node C hears CTS even if it couldn't hear RTS!

Step 3: Node A transmits DATA frame
        - Hidden Node C stays silent (NAV timer active)
        - No collision at Gateway B

Step 4: Gateway B sends ACK (acknowledgment)
        - Transmission complete
        - All NAV timers expire → nodes can compete for channel again

RTS/CTS Trade-offs:

Overhead Calculation (Wi-Fi example):

Without RTS/CTS:
- DATA frame: 1500 bytes
- ACK: 14 bytes
- Total airtime: ~120 μs (at 100 Mbps)

With RTS/CTS:
- RTS: 20 bytes → 1.6 μs
- CTS: 14 bytes → 1.1 μs
- DATA: 1500 bytes → 120 μs
- ACK: 14 bytes → 1.1 μs
- Total airtime: ~124 μs

Overhead: ~2.7 μs / 124 μs ≈ 2-3% (small!)

But if hidden terminals cause 30% collisions:
- Without RTS/CTS: 30% retransmissions → 156 μs average airtime (collision + retry)
- With RTS/CTS: ~3% overhead → 124 μs (no collisions)

RTS/CTS wins when the collision rate exceeds roughly 10%!
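These airtime figures can be recomputed for any rate and frame size. The following is a simplified calculator that counts raw bits-on-air only (it ignores PHY preambles and inter-frame spacing, which is why measured overhead is slightly higher in practice):

```python
def airtime_us(frame_bytes: int, rate_mbps: float) -> float:
    """Time on air in microseconds at the given PHY rate."""
    return frame_bytes * 8 / rate_mbps  # bits / (bits per microsecond)

RATE = 100  # Mbps, matching the example above

data_ack = airtime_us(1500, RATE) + airtime_us(14, RATE)  # DATA + ACK
handshake = airtime_us(20, RATE) + airtime_us(14, RATE)   # RTS + CTS
total = data_ack + handshake

print(f"without RTS/CTS: {data_ack:.1f} us")
print(f"with RTS/CTS:    {total:.1f} us")
print(f"handshake overhead: {handshake / total:.1%}")
```

Varying `RATE` shows why the trade-off shifts: at low PHY rates the RTS/CTS frames occupy proportionally more airtime, so the handshake only pays off when collisions are frequent.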

When to Enable RTS/CTS:

| IoT Deployment | Enable RTS/CTS? | Reasoning |
|---|---|---|
| Small smart home (< 10 devices) | ❌ No | Low collision risk, overhead not worth it |
| Large smart building (50+ Wi-Fi devices) | ✅ Yes | Hidden terminals common, reduces collisions |
| Zigbee mesh (many walls) | ✅ Yes | Physical obstacles create hidden terminals |
| Industrial IoT (metal machinery) | ✅ Yes | Severe RF blocking → many hidden terminals |
| LoRaWAN | ❌ N/A | Uses ALOHA, not CSMA (no RTS/CTS) |
| Bluetooth mesh | ✅ Yes | Dense deployments with obstacles |

14.3.4.2 The Exposed Terminal Problem

Definition: A node unnecessarily defers transmission when its transmission would not cause a collision, reducing channel efficiency.

Why It Happens:

  • Node overhears a transmission not directed at its intended receiver
  • CSMA/CA conservatively assumes all transmissions will collide
  • Spatial reuse is prevented
Exposed terminal problem diagram showing Node B transmitting to Node A while Node C hears Node B and defers its transmission to Node D, even though Node D is out of Node B's range and no collision would occur
Figure 14.4: Exposed terminal problem where node C unnecessarily defers though D is out of B’s range

The Problem in Action:

Scenario: Zigbee smart lighting network

Configuration:
- Node B (light switch) transmitting to Node A (controller)
- Node C (motion sensor) wants to transmit to Node D (different controller)
- Node C can hear Node B, but Node D cannot

Time 0ms:  Node B starts transmitting to Node A

Time 1ms:  Node C senses channel → BUSY (hears B)
           Node C defers transmission

Problem:   Node C COULD transmit to Node D safely!
           - Node D is far from Node B (won't hear B's transmission)
           - C → D transmission wouldn't collide with B → A
           - But CSMA/CA doesn't allow spatial reuse

Result:    Channel is underutilized
           Node C waits unnecessarily → increased latency

Real-World IoT Impact:

| Scenario | Exposed Terminal Effect | Efficiency Loss |
|---|---|---|
| Linear sensor array | Sensors at opposite ends defer unnecessarily | 30-50% channel underutilization |
| Multi-floor building Wi-Fi | Devices on different floors could transmit simultaneously | 20-40% throughput loss |
| Factory floor WSN | Sensors in different zones defer to each other | 25-35% latency increase |
| Parking lot sensors | Distant sensors wait for nearby transmissions | 15-30% reduced spatial reuse |

Why Exposed Terminals Matter Less Than Hidden Terminals:

Hidden Terminal:  CAUSES COLLISIONS → data loss, retransmissions, wasted energy
                  Severity: HIGH (actual failures)

Exposed Terminal: REDUCES EFFICIENCY → channel underutilization, higher latency
                  Severity: MEDIUM (performance degradation, not failures)

Engineering Priority:
1. Solve hidden terminals first (RTS/CTS)
2. Accept some exposed terminal inefficiency as acceptable trade-off
3. Use directional antennas or TDMA for mission-critical deployments

14.3.4.3 MAC Protocol Comparison: Handling Hidden/Exposed Terminals

| MAC Protocol | Hidden Terminal Solution | Exposed Terminal Solution | IoT Use Cases |
|---|---|---|---|
| CSMA/CA | ❌ Poor (collisions common) | ❌ Poor (spatial reuse limited) | Wi-Fi, Zigbee (add RTS/CTS for improvements) |
| CSMA/CA + RTS/CTS | ✅ Good (handshake prevents collisions) | ❌ Poor (still defers unnecessarily) | Dense Wi-Fi, industrial Zigbee |
| TDMA | ✅ Excellent (no collisions, time slots) | ✅ Excellent (spatial reuse via slot assignment) | Cellular IoT, mission-critical WSN |
| ALOHA | ❌ Terrible (no carrier sensing at all!) | ✅ Good (transmits anytime → spatial reuse) | LoRaWAN (works due to sparse traffic) |
| FDMA | ✅ Excellent (different frequencies) | ✅ Excellent (simultaneous transmission) | Cellular (frequency bands per user) |

Real-World Mitigation Strategies:

Strategy 1: Network Planning (Design Phase)
- Site survey to identify hidden terminal zones
- Strategic AP/gateway placement to minimize coverage holes
- Use 5 GHz Wi-Fi (less interference/congestion than 2.4 GHz, though shorter range; 2.4 GHz penetrates walls better)

Strategy 2: Protocol Configuration (Deployment Phase)
- Enable RTS/CTS for networks with > 20 devices or physical obstacles
- Adjust CSMA/CA backoff parameters (increase contention window)
- Use mesh routing to provide multiple paths (avoids single collision point)

Strategy 3: Advanced Techniques (Enterprise IoT)
- Beamforming (directional antennas reduce hidden terminals)
- MU-MIMO (Wi-Fi 6) - simultaneous transmission to multiple devices
- TDMA scheduling for deterministic traffic (industrial IoT)
- Frequency hopping (Bluetooth) - collision on one channel doesn't affect others

Case Study: Wi-Fi Smart Factory Deployment

Initial Deployment (no RTS/CTS):
- 80 Wi-Fi sensors across factory floor
- Metal machinery creates hidden terminals
- Measured performance:
  * 45% packet loss during peak traffic
  * 3-5 second latency for critical alerts
  * 60% retransmission rate

After Enabling RTS/CTS + Network Redesign:
- RTS/CTS enabled on all APs
- Added 2 additional APs to reduce hidden terminal zones
- Mesh routing for redundancy
- Measured performance:
  * 5% packet loss (acceptable)
  * 200-500ms latency for alerts (excellent)
  * 8% retransmission rate (normal)

Cost: $2000 for 2 additional APs
Benefit: $50,000/year in reduced downtime from missed alerts

Having covered hidden and exposed terminal mitigations for CSMA, the remaining MAC protocols – TDMA and ALOHA – take fundamentally different approaches that sidestep these problems altogether.

TDMA (Time Division Multiple Access): Used by: Cellular (GSM, LTE), some industrial WSN

Algorithm:
1. Coordinator assigns time slots to nodes
2. Each node transmits only in its slot
3. No collisions possible

Example: 10ms frame, 10 nodes
Node 1: 0-1ms
Node 2: 1-2ms
...
Node 10: 9-10ms
Repeat every 10ms

Pros:
+ No collisions → predictable, high efficiency
+ Deterministic latency (max = frame duration)
+ Low power (sleep when not your slot)

Cons:
- Requires time synchronization (GPS, PTP)
- Coordinator needed (single point of failure)
- Wasted slots if node has no data
- Scalability issues (more nodes = longer frames)

Real performance (LTE):
- 50+ devices: 85-95% channel utilization (even with many devices)

ALOHA / Slotted ALOHA: Used by: LoRaWAN Class A, Sigfox

Pure ALOHA:
1. Transmit whenever you have data
2. If collision → wait random time, retry
3. No carrier sensing

Slotted ALOHA:
1. Time divided into slots
2. Transmit only at slot boundaries
3. Still no carrier sensing

Throughput comparison:
Pure ALOHA:    Max 18% channel utilization (!)
Slotted ALOHA: Max 36% channel utilization
CSMA/CA:       Max 70-90% channel utilization
TDMA:          Max 85-95% channel utilization

Why use ALOHA for LoRaWAN?
+ Ultra-simple (no coordination, no synchronization)
+ Low power (no listening, just transmit)
+ Works for infrequent traffic (0.5% duty cycle)
+ Scales to 1000s of nodes (if traffic is sparse)

Choosing MAC Protocol for IoT:

| Use Case | Traffic Pattern | Best MAC | Why |
|---|---|---|---|
| Smart meter (1 msg/hour) | Infrequent, periodic | ALOHA (LoRaWAN) | Simple, low power, scales well for sparse traffic |
| Industrial sensor (100 Hz) | Frequent, deterministic | TDMA | Predictable latency, no collisions, mission-critical |
| Smart home devices | Bursty, on-demand | CSMA (Wi-Fi, Zigbee) | Fair access, efficient for dynamic traffic |
| Real-time video | Continuous, high-rate | TDMA (LTE) | Guaranteed bandwidth, low latency |

Try It: ALOHA Network Capacity Calculator

Explore how the number of nodes and transmission duty cycle affect channel throughput under ALOHA.
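The classical throughput curves behind this comparison can be computed directly. A small sketch of the standard ALOHA formulas, where G is the normalized offered load: S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA.

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Normalized throughput S for pure ALOHA at offered load G."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Normalized throughput S for slotted ALOHA at offered load G."""
    return G * math.exp(-G)

# Peaks: pure ALOHA at G = 0.5 → 1/(2e) ≈ 18.4%,
#        slotted ALOHA at G = 1.0 → 1/e ≈ 36.8%
print(f"pure ALOHA peak:    {pure_aloha_throughput(0.5):.3f}")
print(f"slotted ALOHA peak: {slotted_aloha_throughput(1.0):.3f}")
```

Sweeping G over a range reproduces the key insight of the table above: both curves collapse as offered load grows past the peak, which is why ALOHA only works when traffic is sparse.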

14.3.5 MTU, MSS, and Fragmentation in Constrained Networks

Maximum Transmission Unit (MTU): Maximum packet size a network can carry without fragmentation.

Common MTU sizes:
Ethernet:     1500 bytes (most wired LANs)
Wi-Fi:         1500 bytes (matches Ethernet for compatibility)
6LoWPAN:      127 bytes (IEEE 802.15.4 frame size)
LoRaWAN:      51-242 bytes (depends on spreading factor and region)
NB-IoT:       1358 bytes
PPPoE:        1492 bytes (Ethernet 1500 - 8 for PPP header)

Maximum Segment Size (MSS): Maximum TCP payload size (MTU - IP header - TCP header).

MSS calculation:
Ethernet MSS = 1500 (MTU) - 20 (IPv4 header) - 20 (TCP header) = 1460 bytes

6LoWPAN MSS = 127 (802.15.4 frame) - ~20-40 (MAC header/footer, addressing, optional security) - 2-4 (IPv6 header compressed by 6LoWPAN) - 20 (TCP header)
            = ~63-85 bytes (tiny!)
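The MSS arithmetic generalizes to any link. A tiny helper illustrates it; the 23-byte MAC overhead and 4-byte compressed IPv6 header used for the 6LoWPAN call are illustrative assumptions (real values vary with addressing mode and security).

```python
def mss(mtu: int, ip_header: int, tcp_header: int = 20,
        link_overhead: int = 0) -> int:
    """TCP payload bytes per frame after link, IP, and TCP headers."""
    return mtu - link_overhead - ip_header - tcp_header

# Ethernet: 1500-byte MTU, uncompressed 20-byte IPv4 header
print(mss(1500, ip_header=20))                    # 1460
# 6LoWPAN: 127-byte 802.15.4 frame; MAC overhead and compressed
# IPv6 header sizes below are hypothetical illustration values
print(mss(127, ip_header=4, link_overhead=23))    # 80
```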

Fragmentation Problem:

Scenario: HTTP request from 6LoWPAN sensor to cloud server

HTTP GET request size: 400 bytes
6LoWPAN MTU: 127 bytes
Required fragments: ceil(400 / 127) = 4 fragments

Fragment transmission:
- Fragment 1/4: 127 bytes
- Fragment 2/4: 127 bytes
- Fragment 3/4: 127 bytes
- Fragment 4/4: 19 bytes

If ANY fragment is lost → ENTIRE datagram lost!
Packet loss: 5% per fragment
Success probability: (0.95)⁴ = 81.5%
Effective loss rate: 18.5% (!)

Compare to no fragmentation (single 127-byte packet):
Packet loss: 5%
Much better!
Try It: MTU Fragmentation Loss Calculator

Adjust the per-fragment loss rate and payload size to see how fragmentation multiplies effective packet loss.
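The loss-amplification effect follows from simple probability. A minimal sketch, assuming fragments are lost independently:

```python
import math

def effective_loss(per_fragment_loss: float, payload_bytes: int,
                   mtu: int) -> float:
    """Probability the whole datagram is lost, given that losing any
    one fragment discards the entire datagram."""
    fragments = math.ceil(payload_bytes / mtu)
    return 1 - (1 - per_fragment_loss) ** fragments

# 400-byte HTTP request over 127-byte 6LoWPAN frames, 5% loss each
loss = effective_loss(0.05, 400, 127)
print(f"fragments: {math.ceil(400 / 127)}, "
      f"effective loss: {loss:.1%}")  # ≈ 18.5%, matching the figure above
```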

Mitigation Strategies:

  1. Path MTU Discovery: Discover smallest MTU along path

    Send packets with DF (Don't Fragment) bit set
    If too large → get ICMP "Fragmentation Needed" error
    Reduce packet size and retry
  2. Application-level chunking: Send multiple small requests

    Instead of: 1× 400-byte HTTP request
    Use:        4× 100-byte HTTP requests (each fits in one 6LoWPAN frame)
  3. Protocol choice:

    Avoid TCP for constrained networks (large headers, fragmentation)
    Use CoAP (UDP-based, small headers, built-in chunking)

14.3.6 Quality of Service (QoS) for IoT Traffic Differentiation

Not all IoT data is equal - some requires priority handling.

DiffServ (Differentiated Services) Model:

IPv4/IPv6 packets have DSCP (Differentiated Services Code Point) field:

IP header structure showing DSCP field (6 bits for QoS marking) and ECN field (2 bits for congestion notification) within the Type of Service byte, used for traffic differentiation in IoT networks to prioritize critical data like fire alarms over routine telemetry
Figure 14.5: DSCP and ECN fields within the IP header

IoT Traffic Classes:

| Application | Latency Requirement | Loss Tolerance | Suggested QoS |
|---|---|---|---|
| Fire alarm trigger | <100ms | 0% loss | EF (highest priority) |
| Video surveillance | <500ms | 1-5% loss | AF41 (high priority) |
| Temperature reading | <5s | 10% loss OK | AF21 (medium priority) |
| Firmware update | <1min | 0% loss (use TCP retries) | Best Effort (bulk) |
| Historical data sync | <1hr | 0% loss (use retries) | Best Effort (lowest) |

Real Implementation:

ESP32 example - marking CoAP packets for fire alarm:

// Set DSCP to EF (Expedited Forwarding) for fire alarm
int dscp = 0x2E << 2; // Shift left 2 bits for TOS field
setsockopt(sock, IPPROTO_IP, IP_TOS, &dscp, sizeof(dscp));

// Regular temperature reading uses default (Best Effort)

Router Configuration (Cisco example):

! Define class for IoT alarm traffic
class-map match-any IOT_ALARMS
 match dscp ef

! Define policy - guarantee 10% bandwidth, low latency queue
policy-map IOT_QOS
 class IOT_ALARMS
  priority percent 10

! Apply to WAN interface
interface GigabitEthernet0/0
 service-policy output IOT_QOS

Measured Impact:

Without QoS (fire alarm competes with firmware download):
- Alarm latency: 5-50 seconds (unacceptable!)
- Jitter: ±20 seconds
- Occasional drops during congestion

With QoS (EF class for alarms):
- Alarm latency: 50-200ms (excellent)
- Jitter: ±10ms
- Zero drops even during congestion

14.4 Worked Example: Solving Hidden Terminal Collisions in a Warehouse IoT Network

Scenario: A logistics warehouse (80m x 40m) deploys 30 Zigbee inventory sensors on shelving units to track pallet locations. The sensors report to a single gateway mounted at the center ceiling. Metal shelving creates radio shadows, causing hidden terminal problems – sensors on opposite sides of shelving rows cannot hear each other but both reach the gateway. During peak hours (07:00-09:00), 25% of sensor reports are lost to collisions. Design a MAC-layer solution.

Step 1: Quantify the Hidden Terminal Problem

| Parameter | Value |
|---|---|
| Sensors | 30, reporting every 5 seconds |
| Packet size | 40 bytes (20B payload + 20B headers) |
| Data rate | 250 kbps (Zigbee 802.15.4) |
| Transmission time per packet | 40 x 8 / 250,000 = 1.28 ms |
| Offered load | 30 packets / 5 seconds = 6 packets/second |
| Channel busy time | 6 x 1.28 ms = 7.68 ms per second (0.77% utilization) |

At 0.77% utilization, CSMA/CA should experience almost zero collisions. Yet 25% of packets are lost. Why?

The hidden terminal explanation: With 30 sensors, approximately 10 pairs cannot hear each other (behind shelving). When sensor A transmits, sensor B (hidden from A) senses the channel as free and also transmits. Both signals arrive at the gateway simultaneously, destroying each other. The 0.77% utilization calculation is misleading because it assumes all nodes see all other transmissions – hidden terminals violate this assumption.

Step 2: Evaluate MAC Protocol Solutions

| Solution | How It Works | Collision Rate | Overhead | Suitable? |
|---|---|---|---|---|
| Pure CSMA/CA (current) | Listen before transmit | 25% (hidden nodes can’t listen) | Minimal | No – hidden terminals bypass carrier sensing |
| RTS/CTS | Request permission from gateway before data | ~2% (gateway arbitrates) | 28 bytes per exchange (14B RTS + 14B CTS) | Yes, but doubles airtime |
| TDMA slots | Gateway assigns time slots to each sensor | ~0% (no contention) | Beacon overhead + synchronization | Yes, best for predictable traffic |
| Frequency hopping | Sensors use different channels | ~5% (reduced collision probability) | Channel coordination overhead | Partial – reduces but doesn’t eliminate |

Step 3: TDMA Slot Calculation

Assign each sensor a dedicated time slot within a 5-second reporting cycle:

Superframe duration: 5,000 ms (one complete cycle)
Sensors: 30
Slot duration: 1.28 ms (data) + 0.5 ms (guard time) + 1.0 ms (ACK) = 2.78 ms
Active period: 30 x 2.78 ms = 83.4 ms
Sleep period: 5,000 - 83.4 = 4,916.6 ms (98.3% sleep time)
| Metric | CSMA/CA (current) | TDMA (proposed) |
|---|---|---|
| Collision rate | 25% | ~0% |
| Successful delivery | 75% (requires retries) | 99.5%+ |
| Average latency | 5-15 ms (with backoff) | Deterministic (assigned slot) |
| Radio on-time per sensor | ~50 ms/cycle (listen + transmit + retries) | 2.78 ms/cycle |
| Battery impact | Baseline | 18x less radio on-time |
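The superframe arithmetic in Step 3 can be packaged as a quick check. A sketch using the worked example's numbers (1.28 ms data + 0.5 ms guard + 1.0 ms ACK per slot, 5-second reporting cycle):

```python
def tdma_superframe(sensors: int, data_ms: float, guard_ms: float,
                    ack_ms: float, cycle_ms: float):
    """Active/sleep split for one TDMA reporting cycle."""
    slot_ms = data_ms + guard_ms + ack_ms
    active_ms = sensors * slot_ms
    sleep_fraction = 1 - active_ms / cycle_ms
    return slot_ms, active_ms, sleep_fraction

slot, active, sleep = tdma_superframe(30, 1.28, 0.5, 1.0, 5000)
print(f"slot: {slot:.2f} ms, active: {active:.1f} ms, "
      f"sleep: {sleep:.1%}")
```

Rerunning with `sensors=150` confirms the capacity-headroom claim in Step 4: even a 5x larger network occupies well under 10% of the cycle.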

Step 4: Decision

TDMA is the optimal choice for this scenario because:

  1. Traffic is predictable – every sensor reports at the same 5-second interval, making slot assignment straightforward
  2. Hidden terminals are eliminated – no contention means no collisions regardless of radio shadows
  3. Battery savings are dramatic – sensors sleep 98.3% of the time instead of continuously monitoring the channel
  4. Capacity headroom exists – 83.4 ms of 5,000 ms used, allowing the network to grow to 150+ sensors before slots become tight

When CSMA/CA would be better: If sensors report only on events (unpredictable timing), TDMA wastes empty slots. For event-driven traffic with hidden terminals, RTS/CTS is the pragmatic choice despite its overhead.

Scenario: You have 3 smart home sensors (temperature, motion, light) connected to a Wi-Fi access point.

Challenge: Why does your motion sensor sometimes fail to report movement immediately?

Answer: Hidden terminal problem! The temperature sensor in the basement can’t hear the light sensor in the attic. Both think the channel is idle and transmit simultaneously to the AP, causing collision. The motion sensor’s urgent alert gets caught in the collision.

Solution: Enable RTS/CTS on your AP. Before transmitting large data, sensors request permission. The AP broadcasts “clear to send” which ALL sensors hear, preventing collisions.

Scenario: A warehouse IoT deployment with 100 Zigbee sensors experiences 30% packet loss during peak hours (07:00-09:00).

Analysis:

  • Sensors report every 30 seconds (100 sensors / 30s = 3.3 packets/second)
  • At 250 kbps, channel utilization should be < 1%
  • But loss is 30% - why?

Root Cause: Hidden terminals behind metal shelving + pure CSMA/CA

Calculation:

  • 10 sensor pairs can’t hear each other
  • Collision probability when hidden nodes transmit: ~30%
  • CSMA/CA can’t prevent what it can’t hear

Solution: Upgrade to TDMA with time slots. Each sensor gets dedicated 100ms window. Zero collisions, 98% packet delivery.

Scenario: Design MAC protocol for 500-hectare farm with 1,000 soil sensors reporting every 15 minutes over LoRaWAN.

Requirements:

  • 15 km range
  • 10-year battery life
  • 40-byte payload
  • < $10/sensor

Protocol Selection Decision Tree:

Option A: TDMA

  • Pros: No collisions, deterministic
  • Cons: 1,000 slots × 2s each = 33 minutes cycle → Too slow for 15-min requirement
  • Verdict: ❌ Infeasible

Option B: Slotted ALOHA

  • Channel efficiency: 36.8%
  • At 1,000 sensors, collision rate: ~8%
  • Retransmissions: 8% of readings need retry
  • Battery impact: Acceptable (extra 2-3 retries/day)
  • Verdict: ✅ Works but not optimal

Option C: Adaptive CSMA (LoRaWAN Class A)

  • Listen Before Talk (LBT) in EU868
  • Collision rate: < 2% with proper duty cycle
  • No coordination overhead
  • Battery: 10+ years achievable
  • Verdict: ✅ Optimal choice

Key Insight: At sparse traffic (1,000 sensors × 4 readings/hour = 0.01% duty cycle), pure ALOHA-style approaches outperform coordinated TDMA due to zero synchronization overhead.

14.5 Concept Relationships: MAC Protocols and IoT Design

| Concept | Depends On | Enables | Conflicts With |
|---|---|---|---|
| TCP Congestion Control | Reliable transport, packet sequence numbers | Flow control, network stability | Wireless loss interpretation, LoRa duty cycles |
| NAT (Network Address Translation) | IPv4 exhaustion, private addressing | Home network scaling, cost savings | Peer-to-peer IoT, inbound connections, end-to-end principle |
| CSMA/CA | Carrier sensing, random backoff | Fair channel access, distributed coordination | Hidden terminals, exposed terminals, high-density deployments |
| RTS/CTS Handshake | CSMA/CA foundation, NAV timer | Hidden terminal mitigation, collision reduction | Adds 3-4% overhead, not viable for tiny packets |
| TDMA Slots | Time synchronization (GPS/PTP), coordinator | Deterministic latency, zero collisions, high efficiency | Requires infrastructure, wasted slots if no data, limited scalability |
| QoS/DSCP Marking | DiffServ architecture, router support | Traffic prioritization, latency guarantees | Requires configuration at every hop, no benefit without congestion |

Application to IoT Design:

  • Low-power sensors (LoRa) → ALOHA (no coordination cost)
  • Mission-critical control (industrial) → TDMA (guaranteed delivery)
  • Dense deployments (smart buildings) → CSMA/CA + RTS/CTS + QoS (handle contention + prioritize critical)
  • Mesh networks with obstacles → RPL routing + RTS/CTS (cope with hidden terminals)

14.6 See Also

  • Routing Fundamentals - How packets find paths through multi-hop networks, RPL for IoT meshes
  • Transport Protocols - TCP vs UDP trade-offs, when to use each for IoT
  • LoRaWAN Fundamentals - ALOHA-based MAC in action, Class A/B/C devices
  • Wi-Fi Deep Dive - CSMA/CA with RTS/CTS in 802.11, hidden terminal mitigation
  • Zigbee and Thread - CSMA/CA over 802.15.4, TDMA-like GTS (Guaranteed Time Slots)
  • Quality of Service - DSCP marking, traffic shaping, priority queuing for IoT
Try It Yourself: Hidden Terminal Experiment

Objective: Observe hidden terminal collisions in a real Zigbee network and measure the impact of RTS/CTS.

Materials:

  • 3× Zigbee devices (e.g., XBee or nRF52840)
  • 1× Coordinator/gateway
  • Metal barrier (aluminum foil screen or filing cabinet)

Setup:

[Node A] <------ 5m ------> [Gateway] <------ 5m ------> [Node C]
                                ^
                                |
                            [Metal barrier blocks A-C path]

Experiment Steps:

  1. Baseline Test (No Obstruction):

    # Configure all nodes for 100 packets/second
    # Measure packet delivery rate at gateway
    # Expected: 95-98% delivery (normal wireless loss)
  2. Hidden Terminal Test:

    # Place metal barrier between Node A and Node C
    # Both nodes transmit simultaneously
    # Expected: 30-50% packet loss (collisions at gateway)
  3. RTS/CTS Mitigation:

    # Enable RTS/CTS on all nodes (if supported)
    # Repeat test with metal barrier
    # Expected: 10-15% loss (hidden terminal solved, normal loss remains)

What to Observe:

  • Without RTS/CTS: Collision rate increases dramatically when nodes can’t hear each other
  • With RTS/CTS: Gateway’s CTS broadcast prevents simultaneous transmission
  • Trade-off: RTS/CTS adds ~3% overhead but prevents 20-40% collision loss

Hint: If your Zigbee hardware doesn’t support RTS/CTS toggle, use Wi-Fi with iw dev wlan0 set rts <threshold> on Linux to enable/disable RTS/CTS dynamically.

Solution:

# Disable RTS/CTS (threshold = off)
iw dev wlan0 set rts off

# Run iperf test → measure throughput with collisions

# Enable RTS/CTS (threshold = 100 bytes)
iw dev wlan0 set rts 100

# Run iperf test → measure improved throughput

# Expected improvement: 30-50% throughput gain in hidden terminal scenario

Extension: Try varying the RTS threshold (100, 500, 1500, 2346 bytes) and plot throughput vs overhead. Find the optimal threshold for your packet size distribution.
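Before running the hardware experiment, you can estimate the overhead side of the plot analytically. The sketch below uses the 802.11 control frame sizes (RTS = 20 bytes, CTS = 14 bytes) and counts bytes on air only; it ignores preambles, inter-frame spaces, and the lower basic rate used for control frames, so treat the result as a lower bound on real overhead:

```python
# First-order estimate of RTS/CTS handshake overhead vs frame size.
# 802.11 control frames: RTS = 20 B, CTS = 14 B.
RTS, CTS = 20, 14

def handshake_overhead(frame_bytes: int) -> float:
    """Fraction of bytes on air spent on the RTS/CTS exchange."""
    return (RTS + CTS) / (RTS + CTS + frame_bytes)

# Same sizes the Extension suggests sweeping as RTS thresholds
for size in (100, 500, 1500, 2346):
    print(f"{size:5d} B frame -> {handshake_overhead(size):.1%} handshake overhead")
```

The curve shows why RTS/CTS only pays off above a threshold: for a 100-byte frame the handshake consumes roughly a quarter of the bytes on air, while at 1500 bytes it is around 2%, small enough to be worth the collision protection.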

Common Pitfalls

Setting the CCA threshold too low (detecting very weak signals as “busy”) causes unnecessary deferrals and reduces throughput, especially near the edge of coverage. Fix: calibrate the CCA threshold to a level that detects genuine transmissions in the deployment environment, not background noise.
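The failure mode is easy to see in an energy-detect CCA decision. The RSSI figures below are assumed for illustration, not taken from a standard:

```python
def channel_busy(rssi_dbm: float, cca_threshold_dbm: float) -> bool:
    """Energy-detect CCA: defer when measured power exceeds the threshold."""
    return rssi_dbm > cca_threshold_dbm

NOISE_FLOOR = -92.0       # assumed ambient noise in this environment (dBm)
NEIGHBOR_TX = -70.0       # assumed RSSI of a genuine nearby transmission (dBm)

# Too-low threshold: the background noise alone keeps the radio deferring
print(channel_busy(NOISE_FLOOR, -95.0))   # True  -> needless deferral
# Calibrated threshold: ignores the noise floor but still defers on real traffic
print(channel_busy(NOISE_FLOOR, -85.0))   # False
print(channel_busy(NEIGHBOR_TX, -85.0))   # True
```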

IEEE 802.15.4 expects the acknowledgement to begin 192 µs after the frame ends (the aTurnaroundTime at 2.4 GHz). If no ACK arrives in time, the device retransmits up to 3 times (the macMaxFrameRetries default). At worst, one packet delivery takes 4 × (transmission time + ACK wait). Fix: include the maximum MAC-layer retry delay in end-to-end latency budgets for time-critical applications.
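The worst-case budget works out as follows for the 2.4 GHz PHY (250 kbit/s, 127-byte maximum frame); this sketch follows the 4 × (transmission time + 192 µs) formula from the pitfall and ignores CSMA/CA backoff, which adds further delay on a busy channel:

```python
# Worst-case MAC-layer delivery time for one IEEE 802.15.4 frame:
# 4 attempts (1 initial try + 3 retransmissions), each paying the
# on-air time plus the 192 us ACK wait.
FRAME_BYTES = 127
BITRATE = 250_000                      # 2.4 GHz O-QPSK PHY, bits per second
ACK_WAIT_S = 192e-6
ATTEMPTS = 4

tx_time = FRAME_BYTES * 8 / BITRATE    # ~4.064 ms on air per attempt
worst_case = ATTEMPTS * (tx_time + ACK_WAIT_S)
print(f"Worst case: {worst_case * 1000:.1f} ms per delivered frame")
```

Roughly 17 ms for a single maximum-size frame is the figure that belongs in the latency budget, not the 4 ms single-shot airtime.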

Two “Zigbee” devices from different vendors may use different backoff ranges or retry counts, causing one to monopolize the channel. Fix: verify MAC parameters across all device vendors in a mixed deployment and normalize settings through coordinator configuration where possible.
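The effect of mismatched backoff parameters can be made concrete with the 802.15.4 CSMA/CA rule: each attempt backs off a uniform draw from [0, 2^BE - 1] unit backoff periods of 320 µs (20 symbols at 2.4 GHz). The sketch compares the expected first-attempt backoff for the standard default (macMinBE = 3) against an aggressive vendor setting (macMinBE = 0):

```python
# Mean initial backoff for IEEE 802.15.4 CSMA/CA. A vendor shipping
# macMinBE = 0 never backs off on its first attempt, so it consistently
# beats standard macMinBE = 3 devices to the channel.
UNIT_BACKOFF_S = 320e-6   # 20 symbols at 62.5 ksym/s (2.4 GHz PHY)

def mean_initial_backoff(min_be: int) -> float:
    """Expected first-attempt backoff in seconds for a given macMinBE."""
    mean_periods = (2 ** min_be - 1) / 2   # mean of uniform [0, 2**BE - 1]
    return mean_periods * UNIT_BACKOFF_S

print(f"macMinBE=3: {mean_initial_backoff(3) * 1e6:.0f} us")  # standard default
print(f"macMinBE=0: {mean_initial_backoff(0) * 1e6:.0f} us")  # aggressive vendor
```

A device that always waits 0 µs while its neighbors average over a millisecond of backoff will win nearly every contention round, which is exactly the monopolization the pitfall warns about.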

14.7 Summary

  • TCP congestion control interprets wireless packet loss as network congestion, causing severe throughput degradation on lossy IoT links
  • NAT allows IPv4 sharing but blocks inbound connections and creates complexity for peer-to-peer IoT
  • MAC protocols differ significantly: CSMA is fair but collision-prone; TDMA is deterministic but requires coordination; ALOHA is simple but low efficiency
  • Hidden terminals cause collisions that CSMA cannot detect - solve with RTS/CTS handshaking
  • QoS with DSCP marking ensures critical IoT traffic (alarms) gets priority over bulk transfers
  • Fragmentation increases effective loss rate - design for MTU constraints rather than fragmenting

14.8 What’s Next

Build on the MAC-layer concepts covered here by exploring the related chapters below:

Topic, chapter, and description for each follow-up:

  • Hands-on MAC labs (Networking Labs): simulate hidden terminal collisions and RTS/CTS mitigation using Packet Tracer and Wokwi
  • Routing over IoT meshes (Routing Fundamentals): how RPL selects routes above the MAC layer to deliver packets across multi-hop IoT networks
  • TCP vs UDP trade-offs (Transport Protocols): when to use TCP congestion control versus UDP with application-layer reliability in IoT systems
  • LoRaWAN ALOHA in practice (LoRaWAN Fundamentals): Class A/B/C device operation and how duty-cycle limits make ALOHA viable for sparse IoT traffic
  • Wi-Fi CSMA/CA and RTS/CTS (Wi-Fi Fundamentals): 802.11 CSMA/CA mechanics, RTS/CTS thresholds, and hidden terminal mitigation in dense deployments
  • Zigbee TDMA slots (Zigbee Architecture): how Guaranteed Time Slots (GTS) in 802.15.4 provide TDMA-like determinism within a Zigbee network