Packet Switching: A network paradigm where data is broken into packets, each routed independently through the network; dominant model for modern networks
Circuit Switching: A paradigm where a dedicated end-to-end path is reserved for the duration of a call (traditional telephone networks); inefficient for bursty IoT traffic
Store-and-Forward: The switch receives the complete packet, verifies the CRC, and then forwards it to the next hop; adds one packet-transmission-time of delay per hop
Cut-Through Switching: The switch begins forwarding a packet as soon as the destination address is read, before the full packet is received; reduces latency but cannot detect errors mid-packet
Virtual Circuit: A connection-oriented service over a packet-switched network that provides guaranteed ordering and QoS (e.g., ATM, MPLS)
Queuing Delay: The time a packet spends waiting in a switch buffer until the output link becomes free; the dominant source of variable latency in loaded networks
Head-of-Line Blocking: A queuing phenomenon where a large packet at the front of a queue prevents smaller, higher-priority packets behind it from being forwarded
40.1 In 60 Seconds
Packet switching lets millions of IoT devices share network infrastructure by breaking data into packets that are independently routed – each device uses resources only when actively transmitting. Through statistical multiplexing, a single link can serve many more devices than its bandwidth would allow with dedicated connections. Key metrics to distinguish: bandwidth (maximum capacity), throughput (actual sustained rate), and goodput (useful application data after subtracting protocol overhead).
40.2 Learning Objectives
By the end of this section, you will be able to:
Explain Packet Switching: Describe how routers examine headers and make independent forwarding decisions
Analyze Multiplexing: Explain how multiple data streams share a single network link through statistical multiplexing
Evaluate Network Resilience: Describe how packet switching enables automatic rerouting around failures
Distinguish Performance Metrics: Differentiate between bandwidth, throughput, and goodput
Calculate Network Capacity: Apply statistical multiplexing formulas to determine IoT gateway capacity and efficiency for real deployments
For Beginners: Packet Switching
Instead of reserving a dedicated line for an entire conversation (like old telephone switchboards), packet switching breaks data into small chunks called packets that travel independently through the network. This is like sending a letter in several postcards – each one can take a different route but they all arrive at the same destination.
Sensor Squad: Sharing the Road!
“In the old days, when you made a phone call, an entire wire was reserved JUST for you – even during the silent pauses,” said Max the Microcontroller. “Imagine if a highway lane was blocked off for just one car! Packet switching changed everything.”
“Now my data gets broken into tiny packets,” explained Sammy the Sensor. “Each packet finds its own way through the network, hopping from router to router. And when I am not sending anything, other devices can use the same wires. It is like a shared highway where everyone takes turns.”
“This is what makes IoT possible,” said Lila the LED. “Without packet switching, every sensor would need its own dedicated wire to the cloud. With millions of IoT devices, that would be impossible! Instead, statistical multiplexing lets thousands of devices share one connection because they rarely all send data at the exact same time.”
“There are three important measurements to know,” added Bella the Battery. “Bandwidth is the maximum speed of the highway. Throughput is how fast traffic actually moves (always less than bandwidth due to congestion). And goodput is the useful data after you subtract all the protocol overhead. For IoT, goodput is what really matters – it tells you how much actual sensor data gets through!”
40.3 Prerequisites
Packet switching makes IoT economically viable. Without it, every sensor would need a dedicated connection to the cloud – impossibly expensive at scale. Packet switching lets millions of devices share network infrastructure, each using resources only when actively transmitting.
40.4 How Packet Switching Works
When a datagram is transmitted, devices along the communication path examine the header and make forwarding decisions based on the destination address.
Figure 40.1: Packet switching path from IoT sensor through routers to cloud
Transmitted datagrams effectively “find their own way” across an internetwork because each router independently decides where to forward the datagram based on addressing information.
40.4.1 The Forwarding Process
1. Packet arrives at router interface
2. Router examines destination IP address in header
3. Routing table lookup finds best next-hop
4. Packet forwarded out appropriate interface
5. Process repeats at each router until destination reached
Each router makes its decision independently – packets from the same conversation may take different paths through the network.
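This hop-by-hop decision can be sketched as a longest-prefix-match lookup. Below is a minimal Python illustration; the routing-table prefixes and interface names are hypothetical, not from any real router:

```python
import ipaddress

# Hypothetical routing table: (destination prefix, outgoing interface)
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),  # default route
]

def next_hop(dst: str) -> str:
    """Return the interface for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in ROUTING_TABLE if addr in net]
    # Longest prefix (largest prefixlen) wins
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.2.3"))   # matches /8 and /16; the /16 wins -> "eth1"
print(next_hop("192.0.2.5"))  # only the default route matches -> "eth2"
```

Each router along the path runs this kind of lookup independently, which is exactly why packets from one conversation can diverge onto different paths.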
40.4.2 Packet Switching Process Summary
Data is broken into datagrams with headers containing source and destination addresses
Each datagram is routed independently based on destination address
Many paths may be used for a single communication (no fixed path)
Datagrams may arrive out of order and are re-sequenced at the destination using transport-layer (e.g., TCP) or application-level sequence numbers
40.5 Advantages of Packet Switching
40.5.1 Multiplexing
Multiple data streams share a common network link by interleaving datagrams.
Figure 40.2: Time-division multiplexing of video and sensor packets on shared link
IoT Example: A smart home gateway can simultaneously:
Stream security camera footage (video)
Transmit temperature sensor readings (data)
Handle voice assistant commands (audio)
All over the same Wi-Fi connection.
40.5.2 Network Resilience
If one link becomes unavailable, subsequent datagrams are re-routed along other links.
Figure 40.3: Network resilience with automatic rerouting around failed link
IoT Example: An industrial sensor network can maintain communication even if individual network segments fail, because routers dynamically discover alternate paths.
40.6 Bandwidth, Throughput, and Goodput
These three terms are often confused but have distinct meanings:
| Metric | Definition | Analogy |
|---|---|---|
| Bandwidth | Maximum theoretical capacity of the link | Highway lanes |
| Throughput | Actual measured data rate achieved | Real traffic flow |
| Goodput | Usable application data rate (excluding overhead) | Passengers delivered |
Alternative View: Bandwidth vs Throughput vs Goodput
This variant clarifies the critical difference between these three commonly confused network performance metrics – essential for IoT capacity planning.
Figure 40.4: Bandwidth vs Throughput vs Goodput - The journey from theoretical capacity to usable application data
IoT Impact: For small IoT packets (50 bytes payload), protocol overhead can reduce goodput to just 39% of throughput. Larger packets improve efficiency significantly.
40.6.1 Real-World Efficiency
Real-world networks typically achieve 60-70% efficiency (goodput / bandwidth):
| Metric | Value (100 Mbps link) |
|---|---|
| Bandwidth | 100 Mbps (theoretical maximum) |
| Throughput | ~70 Mbps (after congestion, retries) |
| Goodput | ~65 Mbps (after protocol overhead on 1000-byte packets) |
Putting Numbers to It
When designing an IoT gateway, understanding the relationship between bandwidth, throughput, and goodput is critical for capacity planning. Consider a building automation system with a 100 Mbps Ethernet backbone.
For 1000-byte packets with 78 bytes of protocol overhead and a practical throughput of 70 Mbps: \[\eta_{\text{protocol}} = \frac{1000}{1000 + 78} = 0.928 \rightarrow \text{Goodput} = 70 \times 0.928 = 64.9 \text{ Mbps}\]
For 50-byte IoT sensor packets with the same 78 bytes of overhead: \[\eta_{\text{protocol}} = \frac{50}{50 + 78} = 0.391 \rightarrow \text{Goodput} = 70 \times 0.391 = 27.4 \text{ Mbps}\]
Key insight: Small IoT packets suffer from poor protocol efficiency. The 100 Mbps link delivers only 27.4 Mbps of useful IoT data with tiny packets, compared to 64.9 Mbps with larger packets – a 2.4x difference based purely on packet size.
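A quick way to check these figures is to compute the bandwidth -> throughput -> goodput chain directly. This sketch uses the section's assumptions (78 bytes of fixed per-packet overhead, ~70% link efficiency):

```python
def goodput_mbps(bandwidth_mbps, link_efficiency, payload_bytes, overhead_bytes=78):
    """Goodput = throughput x protocol efficiency."""
    throughput = bandwidth_mbps * link_efficiency  # after congestion and retries
    protocol_eff = payload_bytes / (payload_bytes + overhead_bytes)
    return throughput * protocol_eff

# 100 Mbps backbone, ~70% practical throughput
print(round(goodput_mbps(100, 0.70, 1000), 1))  # 64.9 Mbps with large packets
print(round(goodput_mbps(100, 0.70, 50), 1))    # ~27.3 Mbps with small packets
# (the text's 27.4 comes from rounding the efficiency to 0.391 before multiplying)
```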
Throughput falls below raw bandwidth because of:
Protocol headers (8-40+ bytes per packet depending on stack)
Network congestion and collisions
Retransmissions for lost packets
Processing delays at each hop
40.7 Packet Size Tradeoffs
Tradeoff: Small Frequent Packets vs Aggregated Large Packets
Option A: Send small packets immediately as data is generated – lower latency, simpler device logic, real-time responsiveness
Option B: Aggregate multiple readings into larger packets before transmission – higher protocol efficiency, lower power consumption, reduced network overhead
Decision Factors: Choose small immediate packets for time-critical applications (alarms, real-time control) where latency matters more than efficiency. Choose aggregation for periodic telemetry (temperature, humidity) where 10-60 second delays are acceptable. A 50-byte packet has ~39% efficiency due to fixed 78-byte overhead, while aggregating 10 readings into 500 bytes achieves ~87% efficiency. For battery-powered devices, aggregation can extend battery life 2-3x by reducing radio wake-ups.
40.7.1 Efficiency by Packet Size
| Payload Size | Total with Overhead | Protocol Efficiency |
|---|---|---|
| 50 bytes | 128 bytes | 39% |
| 100 bytes | 178 bytes | 56% |
| 500 bytes | 578 bytes | 87% |
| 1000 bytes | 1078 bytes | 93% |
| 1460 bytes (max TCP payload) | 1538 bytes | 95% |
IoT Recommendation: Aggregate small sensor readings when latency permits. A gateway collecting 50-byte readings from 10 sensors and sending one 500-byte packet is over 2x more efficient than 10 individual transmissions.
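The efficiency column above can be regenerated from the fixed 78-byte overhead the table assumes:

```python
OVERHEAD = 78  # assumed fixed per-packet header bytes, as in the table

def protocol_efficiency(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + OVERHEAD)

for payload in (50, 100, 500, 1000, 1460):
    print(f"{payload:>5} B payload -> {protocol_efficiency(payload):.0%}")

# Aggregation win: ten 50-byte readings bundled into one 500-byte packet
single = protocol_efficiency(50)    # ~39%
bundled = protocol_efficiency(500)  # ~87%
print(f"aggregation gain: {bundled / single:.1f}x")  # a bit over 2x
```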
40.8 Worked Example: Statistical Multiplexing for IoT Gateway
Scenario: A smart building gateway aggregates data from 500 BLE sensors, each transmitting a 20-byte reading every 30 seconds. The gateway uplinks via an NB-IoT cellular connection with 250 kbps theoretical uplink capacity. Can the gateway handle all 500 sensors?
40.8.1 Step 1: Calculate Per-Sensor Data Rate
Payload: 20 bytes = 160 bits
Protocol overhead (BLE -> MQTT -> TCP -> IP): ~100 bytes = 800 bits
Total per reading: 960 bits
Interval: 30 seconds
Average bit rate per sensor: 960 / 30 = 32 bps
40.8.2 Step 2: Calculate Aggregate Bandwidth
Without statistical multiplexing (worst case: all 500 transmit simultaneously):
Peak bandwidth: 500 x 960 bits = 480,000 bits in ~1 second burst
Peak rate: 480 kbps
With statistical multiplexing (sensors transmit at random times):
Average aggregate: 500 x 32 bps = 16,000 bps = 16 kbps
Peak (99th percentile, Poisson model): ~31 kbps
Statistical multiplexing gain: 480 / 31 = 15.5x
40.8.3 Step 3: Evaluate NB-IoT Link Capacity
NB-IoT uplink capacity: 250 kbps (theoretical maximum)
Practical throughput: ~160 kbps (64% of theoretical)
Average load: 16 kbps / 160 kbps = 10% utilization
Peak load: 31 kbps / 160 kbps = 19% utilization
Headroom: ~81% available for retransmissions and bursts
Putting Numbers to It
Statistical multiplexing enables massive IoT deployments by leveraging the fact that sensors rarely transmit simultaneously. The statistical multiplexing gain quantifies this benefit.
For the smart building gateway example, assume sensor transmissions arrive as a Poisson process with average rate \(\lambda = 500 \text{ sensors} \times \frac{1}{30 \text{ s}} = 16.67 \text{ transmissions/second}\).
Peak transmission rate (99th percentile using Poisson distribution): \[P(X \leq k) = 0.99 \text{ where } X \sim \text{Poisson}(\lambda t)\]
For a 1-second observation window with \(\lambda = 16.67\): \[P(X \leq 27) \approx 0.99 \text{ (from Poisson CDF tables)}\]
So we size the link for approximately 27-32 simultaneous transmissions per second, not 500.
The network needs only \(1/15.5\) of the capacity compared to dedicated circuits for each sensor. This is why statistical multiplexing makes IoT economically feasible at scale.
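The 99th-percentile figure can be verified without CDF tables by summing Poisson probability terms directly; a small sketch:

```python
import math

def poisson_percentile(lam: float, target: float = 0.99) -> int:
    """Smallest k with P(X <= k) >= target for X ~ Poisson(lam)."""
    k, cdf, term = 0, 0.0, math.exp(-lam)  # term starts at P(X = 0)
    while True:
        cdf += term
        if cdf >= target:
            return k
        k += 1
        term *= lam / k  # P(X = k) = P(X = k-1) * lam / k

lam = 500 / 30            # 16.67 transmissions/second on average
k99 = poisson_percentile(lam)
print(k99)                # 27 simultaneous transmissions, matching the text
print(k99 * 960)          # 25,920 bps; the text's ~31 kbps adds extra margin
```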
40.8.4 Step 4: Determine Maximum Sensor Count
Target maximum utilization: 60% (leaves headroom for retransmissions)
Available bandwidth: 160 kbps x 0.60 = 96 kbps
Per-sensor average rate: 32 bps
Maximum sensors: 96,000 / 32 = 3,000 sensors
Result: The NB-IoT link can support ~3,000 sensors – 6x the current 500-sensor deployment.
Key Insight: Statistical multiplexing is what makes IoT deployments economically viable. Without it, 500 sensors would need 480 kbps of dedicated bandwidth. With it, the same sensors need only 16-31 kbps on average – a 15x reduction that allows a single low-cost NB-IoT connection to serve thousands of sensors.
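The whole sizing exercise reduces to a few lines. This sketch uses the numbers above; the 64% practical-throughput factor and 60% target utilization are the section's own assumptions:

```python
# Step 1: per-sensor average rate
payload_bits = 20 * 8            # 20-byte BLE reading
overhead_bits = 100 * 8          # assumed stack overhead (BLE -> MQTT -> TCP -> IP)
bits_per_tx = payload_bits + overhead_bits  # 960 bits per reading
interval_s = 30
per_sensor_bps = bits_per_tx / interval_s   # 32 bps average

# Steps 3-4: practical link capacity and maximum sensor count
link_bps = 250_000 * 0.64        # NB-IoT: ~160 kbps practical throughput
target_util = 0.60               # headroom for retransmissions and bursts
max_sensors = int(link_bps * target_util / per_sensor_bps)

print(per_sensor_bps)            # 32.0 bps per sensor
print(max_sensors)               # 3000 sensors on one NB-IoT uplink
```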
40.9 Real-World Case Study: Factory Floor Network Capacity
A manufacturer upgraded their factory floor network to support 1,200 IoT sensors alongside existing IP camera and workstation traffic on a shared Gigabit Ethernet backbone.
40.9.1 Before Upgrade: Bandwidth Crisis
| Traffic Source | Devices | Average Rate | Peak Rate (per device) | Peak Aggregate |
|---|---|---|---|---|
| IP Cameras | 48 | 4 Mbps each | 8 Mbps | 384 Mbps |
| Workstations | 120 | 2 Mbps each | 50 Mbps | 240 Mbps |
| IoT Sensors | 200 | 1 kbps each | 50 kbps | 10 Mbps |
| Total | 368 | – | – | 634 Mbps |
At 634 Mbps peak, the Gigabit backbone ran at 63% utilization – acceptable on average, but camera footage degraded during peak production hours.
40.9.2 After Adding 1,000 More Sensors
| Traffic Source | Devices | Average Rate | Peak Rate (per device) | Peak Aggregate |
|---|---|---|---|---|
| IP Cameras | 48 | 4 Mbps | 8 Mbps | 384 Mbps |
| Workstations | 120 | 2 Mbps | 50 Mbps | 240 Mbps |
| IoT Sensors | 1,200 | 1 kbps | 50 kbps | 60 Mbps |
| Total | 1,368 | – | – | 684 Mbps |
Result: Adding 1,000 IoT sensors increased peak aggregate by only 50 Mbps (8% increase) – from 634 to 684 Mbps. The sensors’ tiny payloads and infrequent transmissions made them nearly invisible on the network.
Lesson: IoT sensor traffic is dominated by overhead, not payload. The real bottleneck was not bandwidth but switch port capacity and DHCP address pool exhaustion. Network planning for IoT must consider port density and management plane capacity, not just data plane bandwidth.
40.10 Worked Example: End-to-End Latency Calculation
Scenario: An ESP32 sensor sends a 100-byte temperature alert to a cloud server. The path crosses 4 routers over a 200 km fiber link. What is the total end-to-end latency?
Latency has four components, and engineers often forget one or more:
1. Transmission delay (time to push bits onto the wire)
Packet: 100 bytes payload + 78 bytes protocol overhead = 178 bytes = 1,424 bits
Link rate: 100 Mbps access link (assumed, consistent with the 0.014 ms figure below)
Transmission delay = 1,424 / 100,000,000 = 0.014 ms
2. Propagation delay (time for signal to travel the physical distance)
Distance: 200 km fiber
Speed of light in fiber: ~200,000 km/s (2/3 of vacuum speed)
Propagation delay = 200 / 200,000 = 1.0 ms
3. Processing delay (time each router takes to examine headers and look up routing table)
Per router: 0.05-0.5 ms (depends on router hardware)
4 routers x 0.2 ms average = 0.8 ms
4. Queuing delay (time waiting in router buffers behind other packets)
Light load: 0.1-1 ms per router
Heavy load: 5-50 ms per router (the dominant factor!)
4 routers x 0.5 ms average = 2.0 ms (light load)
4 routers x 20 ms average = 80 ms (congested network)
Total latency:
| Condition | Transmission | Propagation | Processing | Queuing | Total |
|---|---|---|---|---|---|
| Light load | 0.014 ms | 1.0 ms | 0.8 ms | 2.0 ms | 3.8 ms |
| Heavy load | 0.014 ms | 1.0 ms | 0.8 ms | 80 ms | 81.8 ms |
Key insight for IoT design: For short-distance IoT networks (within a building), propagation delay is negligible. The dominant latency factors are queuing delay under congestion and processing delay in resource-constrained gateways. A Zigbee mesh with 5 hops through battery-powered routers can add 50-200 ms of processing delay alone, far exceeding the propagation delay across the entire internet.
Design rule: If your IoT application requires sub-100 ms latency (industrial control, real-time alarms), limit the path to 3 hops maximum and ensure dedicated bandwidth to avoid queuing delays.
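The four-component budget above can be captured in a small helper. This sketch assumes a 100 Mbps access link and 78 bytes of protocol overhead, matching the 0.014 ms transmission figure used in the example:

```python
def end_to_end_latency_ms(payload_bytes, overhead_bytes, link_bps,
                          distance_km, hops,
                          proc_ms_per_hop, queue_ms_per_hop):
    """Sum the four latency components from the worked example."""
    transmission = (payload_bytes + overhead_bytes) * 8 / link_bps * 1000
    propagation = distance_km / 200_000 * 1000  # ~200,000 km/s in fiber
    processing = hops * proc_ms_per_hop
    queuing = hops * queue_ms_per_hop
    return transmission + propagation + processing + queuing

# 100-byte alert, 4 routers, 200 km of fiber
light = end_to_end_latency_ms(100, 78, 100e6, 200, 4, 0.2, 0.5)
heavy = end_to_end_latency_ms(100, 78, 100e6, 200, 4, 0.2, 20)
print(round(light, 1), round(heavy, 1))  # 3.8 ms vs 81.8 ms
```

Notice that only the queuing parameter changed between the two calls, which is the point: under load, queuing swamps every other term.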
40.11 Exercise: The Bottleneck Principle
Scenario: Your manufacturing facility has an aging network infrastructure. Five industrial machines connect via 100BASE-T Ethernet (100 Mbps each) to a central switch. The switch connects to the router through a legacy 10BASE-T link (10 Mbps). The router has a modern 1 Gbps connection to the internet. All five machines simultaneously begin uploading large diagnostic files to the cloud for analysis.
Network path: Machines (100 Mbps) -> Switch -> 10 Mbps link -> Router (1 Gbps) -> Internet
Think about:
What is the maximum aggregate throughput that all five machines can achieve when uploading data simultaneously to the cloud?
If you had a $5,000 budget to upgrade one network segment, which link should you prioritize upgrading to immediately improve performance?
Key Insight: The bottleneck principle states that “throughput cannot exceed the slowest link” in the data path. Despite each machine having 100 Mbps connections and the router having 1 Gbps internet access, the legacy 10BASE-T link between switch and router becomes a 10 Mbps bottleneck – limiting aggregate throughput to 10 Mbps regardless of faster links before or after it. All five machines must share this 10 Mbps (~2 Mbps each). This is analogous to highway traffic: a 10-lane highway (100 Mbps machine links), then a 1-lane construction zone (10 Mbps bottleneck), then a 10-lane highway (1 Gbps internet) – traffic flows at 1-lane speed. Upgrading the switch-to-router link to 1 Gbps would allow 100 Mbps aggregate throughput (then limited by individual machine links).
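The bottleneck principle is literally a min() over the path's segment capacities; a tiny sketch of the scenario's numbers:

```python
def path_throughput_mbps(link_rates_mbps):
    """Aggregate throughput is capped by the slowest segment on the path."""
    return min(link_rates_mbps)

# Machines (5 x 100 Mbps combined) -> 10 Mbps uplink -> 1 Gbps internet
path = [5 * 100, 10, 1000]        # aggregate capacity of each segment
bottleneck = path_throughput_mbps(path)
print(bottleneck)                 # 10 Mbps for all five machines combined
print(bottleneck / 5)             # ~2 Mbps per machine under fair sharing
```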
40.12 Interactive: Packet Fragmentation and Reassembly
Common Pitfalls
1. Confusing Store-and-Forward Latency With End-to-End Latency
Store-and-forward adds one packet-transmission-time per switch hop. For a 1500-byte packet at 100 Mbps, that is 120 µs per hop. Across 10 hops, total store-and-forward delay is 1.2 ms — significant for real-time IoT control loops. Fix: count hops and calculate store-and-forward latency contribution explicitly in system latency budgets.
2. Underestimating Queuing Delay Under Burst Traffic
A well-provisioned network with 10% average utilization can saturate momentarily during a coordinated sensor burst (e.g., all sensors transmitting after a power event). Fix: add randomized jitter and traffic shaping at the sensor level to smooth bursts, or provision queues deep enough to absorb expected burst sizes.
3. Assuming Packet Switching Guarantees Order
IP packet switching routes each packet independently; packets may arrive out of order. Applications that assume in-order arrival (without TCP) will corrupt data. Fix: either use TCP or add application-level sequence numbers and a reorder buffer.
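The third fix can be sketched as a small reorder buffer keyed by application-level sequence numbers (a minimal illustration, not a production protocol):

```python
class ReorderBuffer:
    """Deliver packets in sequence order, buffering any that arrive early."""
    def __init__(self):
        self.expected = 0
        self.pending = {}  # seq -> payload, held until the gap before it fills

    def receive(self, seq, payload):
        """Accept one packet; return whatever is now deliverable in order."""
        self.pending[seq] = payload
        delivered = []
        while self.expected in self.pending:
            delivered.append(self.pending.pop(self.expected))
            self.expected += 1
        return delivered

buf = ReorderBuffer()
print(buf.receive(1, "b"))  # [] -- seq 0 has not arrived yet
print(buf.receive(0, "a"))  # ['a', 'b'] -- gap filled, both delivered in order
```

A real implementation would also need a timeout or window limit so a permanently lost packet does not stall delivery forever.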
40.13 Summary
Packet switching lets datagrams find their own paths, with each router making independent forwarding decisions
Multiplexing enables multiple data streams (video, sensor data, commands) to share a single network link
Network resilience comes from automatic rerouting – if one path fails, packets take alternate routes
Bandwidth is theoretical maximum, throughput is measured rate, goodput is usable application data (typically 60-70% of bandwidth)
Packet size tradeoffs: Small packets = lower latency but higher overhead; large/aggregated packets = better efficiency but higher latency
Latency is dominated by queuing delay under load, not transmission or propagation for typical IoT distances
Statistical multiplexing makes IoT economically viable by allowing thousands of sensors to share a single low-bandwidth link
40.14 What’s Next
Now that you can analyze how packets are switched through networks and calculate statistical multiplexing gains, the next section explores converged networks – how modern infrastructure carries voice, video, data, and IoT traffic on a single unified network. You will also learn about channel access mechanisms like CSMA/CA that enable shared wireless communication.