Framing determines how receivers find packet boundaries. Three techniques: length fields in headers (IP, TCP, MQTT), start/end delimiter bytes (Ethernet, HDLC), and byte stuffing to escape delimiter conflicts in payload data. When packets exceed the MTU (e.g., LoRaWAN’s 51 bytes), they are fragmented – and each fragment carries header overhead, so batching data is more efficient than many small packets.
15.1 Learning Objectives
By the end of this chapter, you will be able to:
Compare framing techniques: Distinguish between length-field, delimiter-based, and byte-stuffing approaches for packet boundary detection
Apply byte stuffing rules: Encode and decode payloads that contain delimiter or escape bytes using HDLC-style escaping
Calculate fragmentation overhead: Determine the number of fragments, total header bytes, and bandwidth efficiency for a given payload and MTU
Evaluate framing trade-offs: Select the appropriate framing method based on payload predictability, overhead tolerance, and protocol constraints
Diagnose worst-case stuffing scenarios: Predict how data-dependent byte stuffing expansion affects MTU compliance and design buffers accordingly
For Beginners: Packet Framing
When data flows over a network, the receiver needs to know where one message ends and the next begins – like how spaces separate words in a sentence. Packet framing solves this by adding markers or length counters that tell the receiver exactly how to slice up the stream of raw data into individual messages. Without framing, all the data would run together into meaningless noise.
Related Chapters
Prerequisites (Read These First):
Packet Anatomy - Headers, payloads, and trailers fundamentals
6LoWPAN - IPv6 header compression for constrained networks
15.2 Frame Delimiters and Boundaries
Time: ~10 min | Difficulty: Intermediate | Unit: P02.C02.U02
Key Concepts
Boundary detection rule: Receivers must know exactly where a frame starts and ends before they can trust any header or payload field inside it.
Key metric: Framing overhead is measured in extra bytes or bits added per frame, including delimiter bytes, escape sequences, and fragmentation headers.
Main trade-off: Simpler delimiter-based framing is easy to debug, but payload-dependent escaping can make worst-case size planning harder than fixed-length schemes.
Common implementation pattern: Constrained serial protocols often pair a delimiter with byte stuffing, while IP-style protocols prefer explicit length fields in the header.
Deployment consideration: MTU limits matter twice: once for the raw payload and again after escaping or fragmentation headers expand the frame.
Design checkpoint: The right framing method is the one whose worst-case overhead, synchronization behavior, and decoder complexity fit the link you actually have.
How does a receiver know where one packet ends and another begins?
What if the payload contains the delimiter byte?
- Solution: escape it with a special character
- Example: if the delimiter is 0x7E, replace payload 0x7E with 0x7D 0x5E
- The receiver reverses the escaping
- Used by: HDLC, PPP, SLIP
15.2.4 Alternative: Bit Stuffing
Operates at the bit level rather than the byte level:
- After five consecutive 1 bits in the data, the sender inserts a 0 bit
- The receiver removes any 0 bit that follows five consecutive 1 bits
- Prevents the data from mimicking the flag pattern 01111110 (0x7E)
- Used by: HDLC (at the bit level), CAN bus
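As an illustration of the rule, here is a minimal sketch that stuffs a bit stream represented as a string of '0'/'1' characters. The representation is for readability only, and the function name is illustrative; real implementations apply the same rule to the raw serial bit stream.

```c
#include <stddef.h>
#include <string.h>

// Illustrative bit stuffer: after five consecutive 1 bits, insert a 0 bit
// so the data can never form the 01111110 (0x7E) flag pattern.
void bit_stuff(const char *in, char *out) {
    int ones = 0;      // consecutive 1 bits seen so far
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0'; i++) {
        out[j++] = in[i];
        if (in[i] == '1' && ++ones == 5) {
            out[j++] = '0';   // inserted 0 bit; receiver will delete it
            ones = 0;
        } else if (in[i] != '1') {
            ones = 0;
        }
    }
    out[j] = '\0';
}
```

Stuffing the run `111111` yields `1111101`; the receiver simply deletes any 0 that follows five consecutive 1s to recover the original data.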
15.2.5 COBS (Consistent Overhead Byte Stuffing)
A modern encoding that guarantees bounded overhead:
- Eliminates all zero bytes from the payload by encoding run lengths
- Overhead is at most 1 byte per 254 payload bytes (worst case ~0.4%)
- Produces predictable, bounded-overhead frames – unlike byte stuffing, which is data-dependent
- Used by: Embedded serial protocols, USB CDC, sensor networks over UART
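The bounded-overhead claim is easiest to see in code. Below is a minimal COBS encoder sketch (the function name is illustrative; a matching decoder reverses the process by expanding each code byte back into a run followed by a zero):

```c
#include <stddef.h>
#include <stdint.h>

// COBS encode: removes all 0x00 bytes from the payload. Each "code" byte
// records the distance to the next zero (or end of a 254-byte run), so
// the output never contains 0x00 and 0x00 can serve as the frame delimiter.
// 'out' must hold at least len + len/254 + 1 bytes. Returns encoded length.
size_t cobs_encode(const uint8_t *in, size_t len, uint8_t *out) {
    size_t out_i = 1;   // out[0] is reserved for the first code byte
    size_t code_i = 0;  // index of the code byte being filled in
    uint8_t code = 1;   // run length so far (the code byte counts itself)

    for (size_t i = 0; i < len; i++) {
        if (in[i] == 0) {
            out[code_i] = code;   // finish the run at this zero
            code_i = out_i++;
            code = 1;
        } else {
            out[out_i++] = in[i];
            if (++code == 0xFF) { // maximum run: 254 data bytes
                out[code_i] = code;
                code_i = out_i++;
                code = 1;
            }
        }
    }
    out[code_i] = code;
    return out_i;
}
```

Encoding `11 22 00 33` produces `03 11 22 02 33`: one added byte, regardless of how many zeros the payload contains in a 254-byte window.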
Enhanced Byte Stuffing Example:
Suppose you want to send payload [0x10, 0x7E, 0x20] where 0x7E is the frame delimiter:
Original frame:  [0x7E] [0x10, 0x7E, 0x20] [0x7E]  (start delimiter, payload, end delimiter)

Problem: the payload contains the delimiter byte 0x7E!

Escaped frame:  [0x7E] [0x10, 0x7D, 0x5E, 0x20] [0x7E]  (payload 0x7E replaced by the escape sequence 0x7D 0x5E)

Receiver: sees 0x7D, reads the next byte, and converts 0x5E back to 0x7E
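The escaping traced above can be sketched in C. This is a minimal sketch with illustrative function names; a real HDLC implementation would also add the 0x7E frame delimiters and a frame check sequence around the stuffed payload.

```c
#include <stddef.h>
#include <stdint.h>

#define FLAG 0x7E   // frame delimiter
#define ESC  0x7D   // escape byte

// Escape a payload HDLC-style: 0x7E -> 0x7D 0x5E, 0x7D -> 0x7D 0x5D.
// 'out' must hold up to 2*len bytes. Returns the stuffed length.
size_t byte_stuff(const uint8_t *in, size_t len, uint8_t *out) {
    size_t j = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == FLAG || in[i] == ESC) {
            out[j++] = ESC;
            out[j++] = in[i] ^ 0x20;  // 0x7E -> 0x5E, 0x7D -> 0x5D
        } else {
            out[j++] = in[i];
        }
    }
    return j;
}

// Reverse the escaping on the receive side. Returns the restored length.
size_t byte_unstuff(const uint8_t *in, size_t len, uint8_t *out) {
    size_t j = 0;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == ESC && i + 1 < len)
            out[j++] = in[++i] ^ 0x20;  // undo the XOR applied by the sender
        else
            out[j++] = in[i];
    }
    return j;
}
```

Stuffing the chapter's example payload [0x10, 0x7E, 0x20] yields [0x10, 0x7D, 0x5E, 0x20], and unstuffing recovers the original three bytes.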
Put the byte stuffing steps in the correct order:
15.3 MTU and Fragmentation
Maximum Transmission Unit (MTU) is the largest packet size a network can handle:
| Network Type | MTU Size | Notes |
| --- | --- | --- |
| Ethernet | 1500 bytes | Standard MTU for most wired networks |
| Wi-Fi | 2304 bytes | Theoretical max; usually 1500 in practice |
| LoRaWAN | 51-242 bytes | Varies by data rate (DR0-DR5) |
| BLE | 27 bytes | Default LE Data PDU payload; up to 251 bytes with Data Length Extension |
| Zigbee | 127 bytes | IEEE 802.15.4 maximum frame size |
If your payload exceeds MTU, the network layer fragments it into multiple packets, each with its own header/trailer overhead.
Putting Numbers to It
Let’s quantify fragmentation overhead’s impact on bandwidth efficiency for a constrained IoT network.
Scenario: Temperature sensors in a cold-chain logistics container transmit 1,000 bytes of data (50 temperature readings at 20 bytes each) via 6LoWPAN over IEEE 802.15.4 (Zigbee physical layer).
Network parameters:
MTU: 127 bytes (IEEE 802.15.4 maximum frame size)
FRAG1 header: 4 bytes (first fragment)
FRAGN header: 5 bytes (subsequent fragments)
Calculate fragmentation:
Fragment 1 (FRAG1):
Payload capacity: \(127 - 4 = 123\) bytes
Carries bytes 0-122 of original 1,000-byte payload
Fragments 2-9 (FRAGN):
Payload capacity per fragment: \(127 - 5 = 122\) bytes
Fragments 2-8 each carry 122 bytes (bytes 123-976); fragment 9 carries the remaining 23 bytes
Total: 9 fragments, \(4 + 8 \times 5 = 44\) header bytes, \(1{,}000 + 44 = 1{,}044\) bytes transmitted
Efficiency: \(\frac{1000}{1044} \times 100\% \approx 95.8\%\)
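The fragment arithmetic can be captured in a small helper. This is a sketch under the stated FRAG1/FRAGN header sizes; type and function names are illustrative, and it ignores 6LoWPAN's 8-byte offset alignment, which in practice rounds each fragment's payload down to a multiple of 8.

```c
// Fragment planning for a 6LoWPAN-style link (FRAG1 = first-fragment
// header, FRAGN = subsequent-fragment header). Counts only fragmentation
// header overhead, not MAC-layer headers.
typedef struct {
    int fragments;     // total fragments needed
    int header_bytes;  // total fragmentation header overhead
    int total_bytes;   // payload + fragmentation headers transmitted
} frag_plan_t;

frag_plan_t plan_fragments(int payload, int mtu, int frag1_hdr, int fragn_hdr) {
    frag_plan_t p = { 1, 0, payload };
    if (payload <= mtu)
        return p;                    // fits unfragmented: no frag headers
    int first_cap = mtu - frag1_hdr; // bytes carried by the FRAG1 fragment
    int fragn_cap = mtu - fragn_hdr; // bytes carried by each FRAGN fragment
    int remaining = payload - first_cap;
    int fragn = (remaining + fragn_cap - 1) / fragn_cap;  // ceiling division
    p.fragments    = 1 + fragn;
    p.header_bytes = frag1_hdr + fragn * fragn_hdr;
    p.total_bytes  = payload + p.header_bytes;
    return p;
}
```

For a 1,000-byte payload with a 127-byte MTU, 4-byte FRAG1, and 5-byte FRAGN, this returns 9 fragments, 44 header bytes, and 1,044 bytes on air.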
Alternative: instead of one fragmented 1,000-byte payload, send ten 100-byte packets (10 readings each):
- Each packet: 13-byte header + 100-byte payload + 4-byte MIC = 117 bytes
- Total transmitted: \(10 \times 117 = 1{,}170\) bytes
- Efficiency: \(\frac{1000}{1170} \times 100\% = 85.5\%\)
Key insight: For this scenario, fragmentation is MORE efficient (95.8%) than batching into smaller unfragmented packets (85.5%). The crossover point depends on MTU size and header overhead. Rule of thumb: if the payload is 2-3× the MTU, fragmentation is acceptable; if it is 10× the MTU, consider compression or batching.
How It Works: Byte Stuffing in Real HDLC Transmission
Let’s trace a complete byte stuffing scenario for an HDLC-like protocol transmitting a sensor reading:
Scenario: Send temperature reading {"temp": 30.5} using HDLC framing where 0x7E is the frame delimiter.
Explore how large packets get fragmented when they exceed the Maximum Transmission Unit (MTU). Adjust the payload and MTU sizes to see fragmentation in action.
Tips for optimal fragmentation:
Choose MTU close to your typical payload size to minimize fragments
6LoWPAN uses 8-byte offset units, so align payloads accordingly
Consider payload batching to amortize fragment header overhead
Monitor reassembly timeout in constrained networks (fragments may arrive out of order)
15.5 Framing Techniques Comparison
Figure 15.1: Comparison diagram showing three framing strategies side by side: length fields with fixed parsing steps, delimiter framing with visible boundary markers, and byte stuffing where a delimiter inside the payload is escaped before transmission. Each lane includes example protocols and the main trade-off between predictability, simplicity, and overhead.
Figure 15.2: Decision guide for selecting a framing method. The chart asks whether the receiver knows the payload length in advance, whether the payload may contain arbitrary binary bytes, and whether bounded overhead is required. Outcomes point to length fields, simple delimiters, byte stuffing, bit stuffing, or COBS with notes about when each option is appropriate.
Match each framing technique to the protocol that uses it:
Common Misconception: “Smaller Packets Are Always Better”
The Myth: “To save bandwidth, I should make my packets as small as possible by minimizing headers and sending frequent tiny updates.”
Why This Is Wrong:
While minimizing packet size seems logical, header overhead doesn’t scale linearly. Consider this example:
Scenario 1: Many Small Packets
Send 100 temperature readings (2 bytes each) as 100 separate packets
Each packet needs a 13-byte LoRaWAN header + 4-byte MIC = 17 bytes of overhead, so 200 bytes of sensor data cost \(100 \times (2 + 17) = 1{,}900\) bytes on air (about 10.5% efficiency)
Radio wake-up energy: Each transmission requires powering up the radio (10-100 mW for 1-5 seconds)
Protocol overhead: Channel access, preamble, and synchronization for each packet
Gateway load: More packets = more processing and database writes
Collision risk: More transmissions = higher chance of interfering with other devices
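To put numbers on the per-packet cost, the arithmetic can be sketched as follows, assuming the 13-byte LoRaWAN header and 4-byte MIC used in this chapter's examples (the function name is illustrative):

```c
// Total on-air bytes for a batch of packets, assuming 13-byte LoRaWAN
// header + 4-byte MIC = 17 bytes of fixed overhead per packet.
int total_on_air(int packets, int payload_per_packet) {
    const int overhead_per_packet = 13 + 4;
    return packets * (payload_per_packet + overhead_per_packet);
}
```

Sending 100 separate 2-byte readings costs 1,900 bytes on air, while batching the same 200 bytes into ten 20-byte packets costs 370 bytes, roughly a 5× difference before even counting per-transmission radio wake-up energy.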
The Right Approach:
Batch data when real-time updates aren’t critical
Balance packet size with latency requirements
Consider MTU limits: Don’t exceed Maximum Transmission Unit (LoRaWAN DR0 = 51 bytes)
Use compression: CBOR or Protocol Buffers can reduce payload size while batching multiple readings
Real-World Impact: A smart building with 500 temperature sensors sending every 10 minutes:
- Small packets (2 bytes each): 1.9 GB/year, battery life 1 year
- Batched packets (20 readings): 0.23 GB/year, battery life 5+ years
- Bandwidth savings: 88%; cost savings: $167/year at $0.10/MB
The Takeaway: Headers are overhead you pay per packet, not per byte. Batch your data intelligently to minimize both bandwidth costs and energy consumption.
15.6 Knowledge Check: Framing
Knowledge Check: Byte Stuffing Quick Check
Concept: Understanding byte stuffing for delimiter handling.
Common Mistake: Not Accounting for Worst-Case Byte Stuffing Overhead
The Problem:
Developers calculate that their 100-byte payload fits comfortably in a 127-byte MTU (IEEE 802.15.4), forgetting that byte stuffing can expand payloads unpredictably.
Byte stuffing adds overhead when delimiter bytes (0x7E) or escape bytes (0x7D) appear in the payload. The overhead is data-dependent – not fixed!
| Payload Content | Overhead | Reason |
| --- | --- | --- |
| No delimiters/escapes | 0% | Best case: no stuffing needed |
| Random binary data | ~0.8% | Statistically ~1 in 128 bytes needs escaping |
| Compressed data | ~1.5% | Compression tends to create uniform byte distribution |
| Worst case (all delimiters) | 100% | Every byte becomes 2 bytes |
Real-World Impact:
A smart city project used SLIP (Serial Line IP) for sensor backhaul over RS-485. They tested with ASCII text (low escape rate) but deployed with binary sensor data (high escape rate). Result:
- Expected: 90-byte payloads fit in 127-byte frames
- Actual: 90-byte payloads expanded to 91-93 bytes after escaping
- Problem: 3% of packets fragmented unexpectedly, causing 200 ms latency spikes
- Cost: $15,000 to debug and switch to length-field framing
The Fix:
Option 1: Budget for worst-case overhead (conservative)
```c
// For HDLC-style byte stuffing with 0x7E delimiter
#define MAX_PAYLOAD 100
#define MTU 127

// Worst case: every payload byte needs an escape, so the stuffed
// payload can be up to twice its original size.
// Check if we fit
if (payload_size * 2 > MTU) {
    // Use fragmentation or compression
}
```
Option 2: Pre-scan payload and calculate actual overhead (optimal)
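As a sketch of that pre-scan (the helper name `stuffed_size` is illustrative), counting the bytes that will need escaping gives the exact on-the-wire payload size instead of the 2× worst-case budget:

```c
#include <stddef.h>
#include <stdint.h>

// Count the bytes HDLC-style stuffing will escape (0x7E delimiter,
// 0x7D escape) and return the exact stuffed payload size: each such
// byte becomes two bytes on the wire.
size_t stuffed_size(const uint8_t *payload, size_t len) {
    size_t extra = 0;
    for (size_t i = 0; i < len; i++)
        if (payload[i] == 0x7E || payload[i] == 0x7D)
            extra++;
    return len + extra;
}
```

The sender can then compare `stuffed_size(payload, len)` against the MTU and only fall back to fragmentation or compression when this payload actually needs it.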
Option 3: Switch to length-field framing (avoids escaping entirely)

```c
// Frame structure:
//   [Start: 0x7E] [Length: 2 bytes] [Payload: N bytes] [CRC: 2 bytes]
//
// The length field tells the receiver the exact size - no escaping needed!
// The delimiter appears only at the start, not the end, so the payload
// can contain any byte values.
```
Comparison:
| Framing Method | Overhead (100-byte payload) | Predictable? | Complexity |
| --- | --- | --- | --- |
| Byte stuffing (HDLC) | 0-100 bytes (data-dependent) | No | Low |
| Length field | 4 bytes (fixed) | Yes | Low |
| Length field + CRC | 6 bytes (fixed) | Yes | Medium |
Decision Rule:
Use byte stuffing when: Simple hardware, low CPU, delimiter-based sync is critical
Use length field when: Predictable overhead matters, binary payloads common, fragmentation must be avoided
Common Pitfalls
1. Assuming Delimiter Bytes Never Appear in Real Payloads
Delimiter-based protocols fail fast when teams test only with clean ASCII examples and forget that real binary payloads can contain the same byte values as frame markers. If you use delimiters, validate the escaping path with representative payloads before shipping.
2. Budgeting for Average Stuffing Instead of Worst Case
Byte stuffing overhead is data-dependent. Designing buffers and MTU limits around average expansion works until a delimiter-heavy payload arrives and suddenly forces fragmentation or truncation. Size the frame for the worst legal payload, not the nicest one.
3. Fragmenting Without a Reassembly Plan
Fragmentation is not just a size calculation. The receiver also needs tags, offsets, ordering rules, and timeouts for missing fragments. If one fragment disappears, the whole datagram is usually useless, so reassembly behavior must be treated as part of the framing design.
🏷️ Label the Diagram
Code Challenge
15.7 Summary
Frame delimiters and boundaries enable reliable packet detection:
Length Fields: Header specifies payload size – used by IP, TCP, UDP, MQTT
Delimiters: Special marker bytes signal start/end – used by Ethernet, HDLC, PPP
Byte Stuffing: Escape sequences handle delimiter conflicts in payload data (variable overhead)
Bit Stuffing: Inserts 0 bits after five consecutive 1 bits to prevent flag pattern mimicry
COBS: Modern encoding with bounded, predictable overhead (~0.4% worst case)
MTU Limits: Networks fragment packets that exceed Maximum Transmission Unit
Fragmentation Overhead: Each fragment adds header bytes, reducing efficiency
Key Takeaways:
Choose framing method based on whether payload length is known in advance
Sammy the Sensor is sending messages over a busy radio channel. But there’s a problem – “How does the receiver know where my message starts and ends?”
Max the Microcontroller has three ideas:
“Idea 1: Write the length first!” Max says. “Start with ‘This message is 10 bytes long.’ Then the receiver counts exactly 10 bytes. Done! That’s what MQTT and TCP do.”
Lila the LED suggests another way: “Idea 2: Use special bookends! Put a special marker at the start and end, like putting quotation marks around a sentence: [START] Hello World [END].”
“But what if my message contains the special marker?” asks Sammy.
“Idea 3: Byte stuffing!” says Lila. “It’s like using an escape character. If your message has the special word, you add a code before it so the receiver knows ‘this is data, not a marker.’ Like using backslash in programming!”
Bella the Battery has a warning: “And watch out for messages that are TOO BIG! If your message is bigger than the maximum size the network allows (called MTU), it gets split into pieces. Each piece needs its own wrapper, which wastes space. So it’s better to send ONE medium message than TEN tiny ones – less wrapping paper!”
The Squad’s Rule: Every message needs a frame – like an envelope for a letter. The receiver needs to know where it starts, where it ends, and how big it is!
Knowledge Check: MTU and Fragmentation
Scenario: You need to send 500 bytes of sensor data over a network with a 127-byte MTU (like IEEE 802.15.4/Zigbee). Each fragment adds a 5-byte fragmentation header.