37  RPL Routing & Traffic

In 60 Seconds

RPL offers two routing modes: Storing (distributed routing tables at each node for optimal P2P paths, typically hundreds of bytes of routing state per router – 162-882 bytes in this chapter's examples) and Non-Storing (centralized routing at the root via source routing headers, minimal node memory but roughly 50-135 ms of added latency for P2P traffic). Traffic patterns – Many-to-One, One-to-Many, Point-to-Point – determine which mode best fits your deployment.

37.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Compare routing modes: Explain the differences between Storing and Non-Storing modes in RPL
  • Evaluate trade-offs: Assess memory, latency, and scalability trade-offs for each routing mode
  • Identify traffic patterns: Recognize Many-to-One, One-to-Many, and Point-to-Point traffic patterns
  • Select appropriate modes: Choose the right routing mode based on network requirements and constraints
  • Design RPL networks: Apply routing mode concepts to design efficient IoT network topologies
  • Calculate memory requirements: Estimate memory overhead for different routing configurations
MVU: RPL Routing Mode Selection

Core Concept: RPL offers two routing modes - Storing (distributed routing tables for optimal paths) and Non-Storing (centralized routing at root for minimal node memory). The choice fundamentally shapes your network’s performance characteristics.

Why It Matters: A 500-node smart building network using the wrong mode can waste 40KB+ RAM per node (Storing) or add 50ms+ latency to every sensor-actuator interaction (Non-Storing). Barcelona’s smart lighting reduced node costs by 30% by selecting Non-Storing mode for their predominantly many-to-one traffic.

Hey, Future Network Engineer! Let’s learn about how tiny sensors send messages to their friends!

Meet the Mail Carriers of IoT-ville:

Imagine you live in a neighborhood where houses can only talk to their neighbors. How do you send a letter to Grandma who lives far away?

🏠 Storing Mode (Smart Houses)

  • Every house has a mini-map of the neighborhood
  • When a letter comes, the house checks its map and says: “I know a shortcut!”
  • Letters take the fastest path directly to Grandma
  • But every house needs a good memory to keep the maps

🏠 Non-Storing Mode (Simple Houses)

  • Only the Post Office (the root) has the big map
  • Letters go UP to the Post Office first
  • The Post Office writes directions on the envelope
  • Letters follow the directions to Grandma
  • Houses don’t need to remember anything - just follow directions!

Which is Better?

  • Smart Houses (Storing): Faster delivery, but houses need more brain power
  • Simple Houses (Non-Storing): Slower delivery, but even tiny birdhouses can join!

Real World Example: Your smart thermostat (sensor) tells your heater (actuator) to turn on:

  • Storing Mode: Thermostat → Closest Router → Heater (2 hops!)
  • Non-Storing Mode: Thermostat → Router → Gateway → Router → Heater (4 hops!)

Fun Challenge: If you had 100 battery-powered sensors, which mode would you choose? Why?

The Basic Idea: When IoT devices need to send data to each other, they have to decide HOW to find the path. RPL offers two different strategies:

  1. Storing Mode = Every device keeps its own “address book”
    • Knows how to reach nearby devices directly
    • Uses more memory but finds shorter paths
  2. Non-Storing Mode = Only the central gateway knows all addresses
    • Devices just pass messages upward
    • Uses less memory but all routes go through the gateway

Simple Analogy:

  • Storing is like having GPS in every car - each finds its own route
  • Non-Storing is like a taxi dispatch center - everyone calls central to get directions

When does this matter?

  • If your sensors have limited memory (< 16KB) → Non-Storing
  • If sensors need to talk directly to each other quickly → Storing
  • If you just collect data from sensors → Either works!

37.2 RPL Routing Modes

RPL supports two modes with different memory/performance trade-offs:

37.2.1 Storing Mode

Each node maintains a routing table for its sub-DODAG.

Figure 37.1: RPL Storing mode with distributed routing tables at each node for optimal downward path forwarding
Figure 37.2: RPL storing mode with distributed routing tables

In Storing mode, every node in the network maintains its own routing table for its sub-DODAG – the portion of the network tree beneath it. When a node joins the DODAG, it sends a DAO (Destination Advertisement Object) message to its parent, announcing “I am reachable through you.” The parent records this in its routing table and propagates the information upward. Over time, each router-capable node builds a local map of all descendants it can reach.

When a packet arrives, the forwarding node consults its routing table to determine the best next hop. If the destination is a descendant, the node forwards directly downward without involving the root at all. For example, if nodes E and F are both children of node B, a packet from E to F follows the path E → B → F – just two hops, with B making the routing decision locally.

This distributed approach yields three key benefits: routes between nearby nodes are optimal (no unnecessary detour through the root), latency stays low because routing decisions happen at every hop, and the root node is not a traffic bottleneck since forwarding responsibility is shared across the network.

The trade-off is memory. Every routing-capable node must store entries for all nodes in its sub-tree. In a 200-node network where a mid-level router has 15 descendants, that router needs approximately 15 entries x 18 bytes = 270 bytes of routing state. For resource-rich devices with 32 KB or more of RAM, this is negligible. But for the most constrained Class 1 devices (10 KB RAM), it may consume a significant fraction of available memory. Additionally, every time a node joins or leaves the network, DAO messages must propagate through the sub-tree to update routing tables, creating control traffic overhead.
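The per-node arithmetic above can be checked with a short helper (a sketch; the 18-byte entry size follows this chapter's assumption of a 16-byte IPv6 address plus a 2-byte next-hop pointer):

```python
# Routing-table memory estimate for RPL Storing mode.
# Assumes the chapter's figures: 16-byte IPv6 address + 2-byte next hop.
ENTRY_BYTES = 16 + 2  # bytes per routing entry

def storing_table_bytes(descendants: int) -> int:
    """RAM needed for one node's routing table in Storing mode."""
    return descendants * ENTRY_BYTES

print(storing_table_bytes(15))  # mid-level router with 15 descendants -> 270
print(storing_table_bytes(49))  # root of a 50-node network -> 882
```

For a Class 1 device with 10 KB of RAM, 270 bytes is already about 2.7% of total memory before any application buffers are allocated.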

Storing mode is best suited for networks with router-capable devices that have sufficient memory (32+ KB RAM), deployments where point-to-point communication is common (sensor-to-actuator interactions), and applications requiring low latency between peers.

37.2.2 Non-Storing Mode

Only the root maintains routing information; downward routing uses source routing headers.

Figure 37.3: RPL Non-Storing mode with centralized routing at root and source routing headers for downward traffic
Figure 37.4: RPL non-storing mode with source routing via root

In Non-Storing mode, the routing intelligence is centralized at the DODAG root. Individual nodes do not maintain routing tables at all – they only know their parent (the next hop toward the root). When a node has data to send, it simply forwards the packet upward to its parent, which forwards to its parent, and so on until the packet reaches the root.

The root is the only node with a complete picture of the network topology. It builds this picture from DAO messages that every node sends upward. When the root needs to forward a packet downward (or route point-to-point traffic), it inserts a source routing header into the packet. This header contains the complete hop-by-hop path – essentially a list of addresses saying “go to B, then to F.” Each intermediate node simply reads the next address from the header and forwards accordingly, with no routing table lookup required.

Consider the same E-to-F routing scenario: E sends the packet upward to B (its parent), B forwards upward to A (the root). The root looks up F in its routing table, determines the path [B, F], inserts this as a source route, and sends the packet back down to B, which delivers it to F. The route E → B → A → B → F takes four hops instead of two – the packet traverses the B–A link twice, once in each direction.

Let’s quantify the latency penalty of Non-Storing mode for point-to-point traffic in a smart building motion-to-light scenario.

Network Parameters:

  • Hop count (Storing): 2 hops (Motion Sensor → Relay → Light)
  • Hop count (Non-Storing): 4 hops (Motion Sensor → Relay → Root → Relay → Light)
  • Per-hop latency: 15 ms (typical 802.15.4 transmission + processing)
  • Radio duty cycle: 1% (devices sleep 99% of time)
  • Wake delay: 50 ms average (sender waits for receiver to wake)

Storing Mode Total Latency: \[ T_{\text{Storing}} = n_{\text{hops}} \times (t_{\text{hop}} + t_{\text{wake}}) \] \[ T_{\text{Storing}} = 2 \times (15 + 50) = 130 \text{ ms} \]

Non-Storing Mode Total Latency: \[ T_{\text{Non-Storing}} = n_{\text{hops}} \times (t_{\text{hop}} + t_{\text{wake}}) + t_{\text{root-processing}} \] Where \(t_{\text{root-processing}} = 5\) ms (source route computation). \[ T_{\text{Non-Storing}} = 4 \times (15 + 50) + 5 = 265 \text{ ms} \]

Latency penalty: \[ \Delta T = T_{\text{Non-Storing}} - T_{\text{Storing}} = 265 - 130 = 135 \text{ ms} \]

User perception threshold: Lighting systems feel responsive below 200 ms (human perception). Non-Storing at 265 ms exceeds this threshold.

Packet success rate impact: \[ P_{\text{success}} = (1 - PER)^{n_{\text{hops}}} \] Where \(PER = 0.05\) (5% packet error rate, typical indoor).

Storing mode (2 hops): \[ P_{\text{Storing}} = (1 - 0.05)^2 = 0.9025 \approx 90\% \]

Non-Storing mode (4 hops): \[ P_{\text{Non-Storing}} = (1 - 0.05)^4 = 0.8145 \approx 81\% \]

Conclusion: Non-Storing mode adds 104% latency (roughly 2× slower) and reduces end-to-end delivery probability by about 9 percentage points (90% → 81%) for P2P traffic – justifying Storing mode for interactive sensor-actuator control despite the memory cost.

This centralized design has compelling advantages for constrained networks. Each node needs only about 2 bytes of routing state (its parent pointer), making Non-Storing mode viable even on the tiniest microcontrollers with less than 16 KB of RAM. The network can scale to thousands of nodes without any increase in per-node memory. Simpler nodes mean simpler firmware, fewer bugs, and lower manufacturing costs.

The costs are equally clear: all point-to-point traffic must traverse the root, adding latency and extra hops. The root becomes a potential bottleneck and single point of failure since it handles all routing decisions. Every downward packet carries a source routing header that grows with path length – a 10-hop path adds approximately 160 bytes of header overhead (16 bytes per IPv6 address x 10 hops), which can be significant for small sensor payloads.

Non-Storing mode is best suited for networks dominated by many-to-one traffic (sensors reporting to a gateway), deployments using highly constrained devices with limited RAM, and large-scale networks where peer-to-peer communication is infrequent.

37.2.3 Interactive: P2P Latency Calculator

Use this calculator to compare Storing vs Non-Storing mode latency for point-to-point traffic in your deployment.
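The comparison can also be scripted directly (a sketch using the latency and reliability model above; the default parameter values mirror the chapter's example, not measured figures for any specific radio):

```python
# P2P latency and delivery-probability comparison, per the chapter's model:
#   T = hops * (t_hop + t_wake) (+ root processing in Non-Storing mode)
#   P_success = (1 - PER) ** hops

def p2p_latency_ms(hops: int, t_hop_ms: float = 15.0, t_wake_ms: float = 50.0,
                   t_root_ms: float = 0.0) -> float:
    """Total one-way latency in milliseconds."""
    return hops * (t_hop_ms + t_wake_ms) + t_root_ms

def p2p_success(hops: int, per: float = 0.05) -> float:
    """Probability that the packet survives every hop."""
    return (1 - per) ** hops

storing = p2p_latency_ms(hops=2)                     # 130.0 ms
non_storing = p2p_latency_ms(hops=4, t_root_ms=5.0)  # 265.0 ms
print(storing, non_storing, non_storing - storing)   # penalty: 135.0 ms
print(round(p2p_success(2), 4), round(p2p_success(4), 4))  # 0.9025 0.8145
```

Vary `hops`, `t_wake_ms`, and `per` to match your own topology and duty cycle.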

37.2.4 Storing vs Non-Storing Comparison

| Aspect | Storing Mode | Non-Storing Mode |
|---|---|---|
| Routing Table | Distributed (all nodes) | Centralized (root only) |
| Node Memory | Higher (routing table) | Lower (parent pointer only) |
| Route Optimality | Optimal (direct paths) | Suboptimal (via root) |
| Latency | Lower | Higher (extra hops) |
| Root Load | Low | High (all routing decisions) |
| Scalability | Limited by node memory | Limited by root capacity |
| DAO Destination | Parent | Root (through parents) |
| Header Overhead | Low | High (source routing) |
| Best For | Powerful nodes, low latency | Constrained nodes, many-to-one |

37.2.5 Worked Example: 50-Node Industrial Sensor Network

Consider a factory floor with 50 temperature and vibration sensors arranged in a three-level tree: 1 border router (root), 5 mains-powered relay nodes (level 1), and 44 battery-powered sensors (level 2, roughly 9 per relay). Each relay node serves as a router for its cluster of sensors.

Storing Mode Routing (E wants to reach F, both under different relays):

Sensor E sits under Relay 2, and sensor F sits under Relay 3. In Storing mode, E sends the packet to Relay 2. Relay 2 checks its routing table – F is not in its sub-tree, so it forwards upward to the root. The root checks its table, finds F is reachable via Relay 3, and forwards directly to Relay 3. Relay 3 delivers to F.

Path: E → Relay 2 → Root → Relay 3 → F (4 hops)

Non-Storing Mode Routing (same E to F path):

E sends upward to Relay 2 (default route). Relay 2 forwards upward to Root (default route). The Root looks up F, builds a source route header [Relay 3, F], and sends the packet back down. Relay 3 reads the header and delivers to F.

Path: E → Relay 2 → Root → Relay 3 → F (4 hops, same count)

In this topology, the hop count is identical because both paths go through the root anyway. The difference appears when two sensors share the same relay. If sensors E and G are both under Relay 2, Storing mode routes E → Relay 2 → G (2 hops), while Non-Storing mode routes E → Relay 2 → Root → Relay 2 → G (4 hops). For networks where sensor-to-sensor communication within a cluster is common, Storing mode provides a measurable latency advantage.
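The two forwarding strategies can be reproduced on a toy parent-pointer topology (a sketch; node names and the `parent` map are illustrative, matching the worked example rather than any real deployment):

```python
# Toy DODAG as a child -> parent map. Storing mode routes via the nearest
# common ancestor; Non-Storing mode always routes up to the root and back down.
parent = {"Relay2": "Root", "Relay3": "Root",
          "E": "Relay2", "G": "Relay2", "F": "Relay3"}

def path_to_root(node):
    """Follow parent pointers up to the root, e.g. E -> Relay2 -> Root."""
    path = [node]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def storing_path(src, dst):
    """Turn around at the nearest common ancestor (Storing mode)."""
    up, down = path_to_root(src), path_to_root(dst)
    common = next(n for n in up if n in down)
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

def non_storing_path(src, dst):
    """Always go all the way up to the root (Non-Storing mode)."""
    up, down = path_to_root(src), path_to_root(dst)
    return up + list(reversed(down[:-1]))

print(storing_path("E", "G"))      # ['E', 'Relay2', 'G'] -> 2 hops
print(non_storing_path("E", "G"))  # ['E', 'Relay2', 'Root', 'Relay2', 'G'] -> 4 hops
print(storing_path("E", "F"))      # ['E', 'Relay2', 'Root', 'Relay3', 'F'] -> 4 hops either way
```

When source and destination share a relay, Storing mode halves the hop count; when their paths only meet at the root, both modes produce the same route.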

37.2.6 Memory Budget Calculation

How much RAM does each routing mode actually require for this 50-node network?

Storing Mode:

Each routing entry consists of an IPv6 address (16 bytes) plus a next-hop pointer (2 bytes), totaling 18 bytes per entry.

| Node | Descendants | Routing Table Size |
|---|---|---|
| Root (border router) | 49 nodes | 49 x 18 = 882 bytes |
| Relay node (avg. 9 children) | 9 sensors | 9 x 18 = 162 bytes |
| Leaf sensor | 0 | 2 bytes (parent pointer only) |

Total network routing state: 882 + (5 x 162) + (44 x 2) = 1,780 bytes distributed across the network. The relay nodes each need 162 bytes for routing – easily affordable for mains-powered devices with 64+ KB RAM.

Non-Storing Mode:

| Node | Routing State |
|---|---|
| Root (border router) | 49 x 18 = 882 bytes |
| Every other node | 2 bytes (parent pointer only) |

Total network routing state: 882 + (49 x 2) = 980 bytes, concentrated at the root.

The network saves 800 bytes total, but more importantly, each relay node drops from 162 bytes of routing state to just 2 bytes. For relay nodes that are also battery-powered sensors, this frees RAM for application buffers and sensor drivers.

Source routing header overhead must also be considered. In Non-Storing mode, every downward packet carries a Routing Header type 3 (RH3) listing the full path. For a 3-hop downward path, that adds 3 x 16 = 48 bytes per packet. If the gateway sends 20 configuration updates per hour, that is 20 x 48 = 960 bytes/hour of header overhead on the radio – modest, but it consumes airtime and energy.
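The header-overhead arithmetic can be expressed the same way (a sketch using the chapter's simplification of 16 uncompressed bytes per hop address; note that RFC 6554 allows prefix compression, so real headers are often smaller):

```python
# Source-routing header overhead in RPL Non-Storing mode, assuming full
# 16-byte IPv6 addresses per listed hop (the chapter's worst-case figure).
ADDR_BYTES = 16

def source_route_overhead(downward_hops: int) -> int:
    """Extra header bytes carried by one downward packet."""
    return downward_hops * ADDR_BYTES

per_packet = source_route_overhead(3)   # 3-hop path -> 48 bytes
hourly = 20 * per_packet                # 20 updates/hour -> 960 bytes/hour
print(per_packet, hourly)
```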

37.2.7 Visual Comparison: Storing vs Non-Storing Data Flow

Point-to-point traffic flows differently in the two modes: Storing mode uses the direct path through the common parent, while Non-Storing mode routes all traffic through the root.

The decision guide below helps choose between Storing and Non-Storing modes based on your network characteristics.

Figure 37.5: RPL Mode Selection Decision Tree - Choose based on traffic patterns, memory, and network size

Key Insight: Non-Storing mode is the default choice for resource-constrained sensor networks. Only move to Storing mode when you have both the memory budget AND specific P2P or latency requirements.

The following sequence diagrams illustrate how RPL handles point-to-point (P2P) traffic in Non-Storing and Storing modes.

Non-Storing Mode P2P Traffic (Steps 1-3):

In Non-Storing mode, only the border router (root) holds the RPL routing table; every other node stores just the parent one rank above it for the default path to the root.

Figure 37.6: Step 1: Initial state - Source needs to reach Destination, but only root has routing table
Figure 37.7: Step 2: Source forwards packet upward to parent (default route), continuing until root
Figure 37.8: Step 3: Root uses routing table to forward packet downward to Destination via source routing

Storing Mode P2P Traffic:

In Storing mode, each node stores routes for its sub-DODAG, so the direct path through the nearest common parent is known without involving the root.

Figure 37.9: Storing Mode: Direct routing through common parent - no root involvement needed

Key Insight: Non-Storing mode requires all P2P traffic to traverse the root (3 diagrams showing upward then downward path), while Storing mode enables direct routing through the nearest common ancestor (1 diagram showing optimized path). This explains why Storing mode has lower latency for P2P traffic despite higher memory requirements.

Source: CP IoT System Design Guide, Chapter 4 - Routing

37.3 RPL Traffic Patterns

RPL optimizes for different traffic directions:

Figure 37.10: RPL traffic patterns: Many-to-One (upward), One-to-Many (downward), and Point-to-Point routing

37.3.1 Many-to-One (Upward Routing)

Many-to-One is the dominant traffic pattern in IoT networks – sensors collecting data and sending it to a central gateway for cloud processing. In RPL, this pattern is handled identically in both Storing and Non-Storing modes because upward routing relies only on the parent pointer, not routing tables.

Every node knows its parent from the DODAG construction process. To send data toward the root, a node simply forwards the packet to its parent, which forwards to its parent, and so on. No routing table lookup is needed at any hop. A temperature sensor three hops from the root follows the path Sensor → Relay → Gateway Router → Root, with each node making the same simple decision: “forward to my parent.”

This is RPL’s most efficient traffic pattern. It requires minimal state (just one parent pointer per node), works identically in both routing modes, and naturally distributes forwarding load along the tree. Over 80% of real-world IoT deployments are primarily many-to-one data collection, which is why Non-Storing mode (with its lower memory requirements) is the default recommendation for most sensor networks.


37.3.2 One-to-Many (Downward Routing)

One-to-Many traffic occurs when a central controller needs to send commands to multiple devices – for example, a building management system sending “turn off” to all lights on a floor, or a gateway pushing firmware updates to a group of sensors.

In Storing mode, the root (or any intermediate node) consults its routing table to determine the next hop for each destination. A command to Light 3 might follow the path Root → Node 1 → Light 3, with each node making a local forwarding decision. In Non-Storing mode, the root prepends a source routing header listing the full path [Node 1, Light 3], and intermediate nodes simply follow the instructions in the header.

The hop count is typically the same in both modes for downward traffic originating at the root. The practical difference is overhead: Storing mode uses per-hop table lookups (fast, but requires memory at each router), while Non-Storing mode adds source routing headers to each packet (no router memory needed, but larger packets on the wire).

37.3.3 Point-to-Point (P2P)

Point-to-Point traffic – where one sensor communicates directly with another sensor or actuator – is where the routing mode choice matters most. A motion sensor triggering a nearby light is a classic P2P scenario.

In Storing mode, the packet travels upward from the source until it reaches a common ancestor that has the destination in its routing table. If both the sensor and light are children of the same relay node, the packet only needs two hops: Sensor → Relay → Light. The relay recognizes the destination in its sub-tree and forwards directly, without involving the root.

In Non-Storing mode, the packet must always travel all the way up to the root first, because only the root has routing information. The root then source-routes it back down to the destination. Even if the two devices are neighbors, the path is Sensor → Relay → Root → Relay → Light – four hops instead of two. For latency-sensitive applications like motion-activated lighting (where users expect sub-200ms response), this extra round trip through the root can be the difference between a responsive system and a noticeably sluggish one.

Real-World Deployment Tip

When deploying RPL networks with mixed traffic patterns, consider these practical guidelines:

  1. Profile your traffic first: Log actual communication patterns for 1-2 weeks before choosing a routing mode
  2. Non-Storing is the safe default: 80%+ of IoT deployments are data collection (many-to-one), where mode choice doesn’t matter
  3. Hybrid deployments: Some implementations allow different modes for different traffic classes (e.g., Non-Storing for telemetry, Storing for control)
  4. Test P2P latency: If sensor-actuator response time is critical (< 100ms), benchmark both modes with realistic network load
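The guidelines above can be condensed into a first-pass selection heuristic (a sketch only; the 32 KB RAM threshold and the 20% P2P share are rules of thumb drawn from this chapter's examples, not from the RPL specification):

```python
def recommend_mode(router_ram_kb: int, p2p_share: float,
                   latency_critical: bool) -> str:
    """First-pass RPL mode choice from this chapter's rules of thumb.

    p2p_share: fraction of traffic that is point-to-point (0.0 - 1.0).
    """
    if router_ram_kb < 32:
        return "non-storing"   # constrained routers cannot afford tables
    if latency_critical or p2p_share > 0.2:
        return "storing"       # frequent/interactive P2P favors local routing
    return "non-storing"       # many-to-one collection: safe default

print(recommend_mode(router_ram_kb=64, p2p_share=0.3, latency_critical=True))   # storing
print(recommend_mode(router_ram_kb=16, p2p_share=0.3, latency_critical=True))   # non-storing
print(recommend_mode(router_ram_kb=64, p2p_share=0.05, latency_critical=False)) # non-storing
```

Treat the output as a starting point; profile real traffic before committing, as the tip list recommends.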

37.4 Hands-On Lab: RPL Network Design

Lab Activity: Designing RPL Network for Smart Building

Objective: Design an RPL network and compare Storing vs Non-Storing modes

Scenario: 3-floor office building

Devices:

  • 1 Border Router (roof, internet connection)
  • 15 mains-powered sensors (temperature, humidity) - can be routers
  • 30 battery-powered sensors (door, motion) - end devices
  • 10 actuators (lights, HVAC) - mains powered

Floor Layout:

Floor 3: BR + 5 mains sensors + 10 battery sensors + 3 actuators
Floor 2: 5 mains sensors + 10 battery sensors + 4 actuators
Floor 1: 5 mains sensors + 10 battery sensors + 3 actuators

37.4.1 Task 1: DODAG Topology Design

Design the DODAG:

  1. Place the border router (root)
  2. Identify routing nodes (mains-powered devices)
  3. Draw the DODAG showing parent-child relationships
  4. Assign RANK values (assume RANK increases by 100 per hop)

Click to see solution

DODAG Design:

Figure 37.11: Smart building DODAG topology with three floors showing BR root node (Navy), mains-powered sensors (Teal), battery-powered sensors (Orange), and actuators (Gray). RANK values increase from 0 at root to 500 at leaf nodes, with a long-range link connecting Floor 3 BR to Floor 2 Sensor 2-1.

RANK Distribution:

  • BR: RANK 0
  • Floor 3 mains sensors: RANK 100-200
  • Floor 2 mains sensors: RANK 200-400
  • Floor 1 mains sensors: RANK 300-500
  • Battery/actuators: RANK based on parent + 100

Routing Nodes: 15 mains-powered sensors (can route for battery devices)

Total Nodes: 1 BR + 15 mains + 30 battery + 10 actuators = 56 devices

37.4.2 Task 2: Memory Requirements Comparison

Calculate memory requirements for Storing vs Non-Storing:

Assumptions:

  • IPv6 address: 16 bytes
  • Next-hop pointer: 2 bytes
  • Routing entry: 18 bytes total

Calculate:

  1. Memory per node (Storing mode)
  2. Memory at root (Non-Storing mode)
  3. Memory at leaf nodes (both modes)
Click to see solution

Storing Mode:

Routing Table Size per Node:

  • Root (BR): All 55 devices reachable
    • 55 entries × 18 bytes = 990 bytes
  • Mains sensor (typical, 3 children):
    • 3 entries × 18 bytes = 54 bytes
  • Battery sensor/actuator (leaf):
    • 0 entries (just parent pointer: 2 bytes) = 2 bytes

Total network memory (Storing):

  • BR: 990 bytes
  • 15 mains sensors × 54 bytes avg = 810 bytes
  • 40 battery/actuators × 2 bytes = 80 bytes
  • Total: ~1,880 bytes distributed across network

Non-Storing Mode:

Routing Table Size:

  • Root (BR): All 55 devices
    • 55 entries × 18 bytes = 990 bytes
  • All other nodes: Just parent pointer
    • 55 nodes × 2 bytes = 110 bytes

Total network memory (Non-Storing):

  • BR: 990 bytes
  • 55 other nodes × 2 bytes = 110 bytes
  • Total: ~1,100 bytes (mostly at root)

Comparison:

| Mode | Root Memory | Non-Root Node Memory | Total Memory |
|---|---|---|---|
| Storing | 990 bytes | 2-54 bytes | ~1,880 bytes |
| Non-Storing | 990 bytes | 2 bytes | ~1,100 bytes |

Analysis:

  • Non-Storing saves ~780 bytes network-wide
  • Mains sensors: Can afford 54 bytes (have 32-128 KB RAM)
  • Battery sensors: Hold only a 2-byte parent pointer in either mode (no difference)
  • Trade-off: Memory savings vs routing efficiency

37.4.3 Task 3: Traffic Pattern Analysis

Analyze routing for different traffic patterns:

Scenarios:

  1. Many-to-One: All battery sensors report temperature every 5 minutes to cloud
  2. One-to-Many: Cloud sends firmware update to all sensors
  3. Point-to-Point: Motion sensor (Floor 1) triggers light (Floor 3)

For each scenario, calculate:

  • Average hop count (Storing vs Non-Storing)
  • Messages per minute
  • Network load

Click to see solution

Scenario 1: Many-to-One (Sensor → Cloud)

Route (any battery sensor to BR):

  • Storing: Sensor → Parent (mains) → Parent → … → BR
    • Average: 2-3 hops (battery→mains, mains→BR, possibly intermediate)
  • Non-Storing: Same (upward routing identical)
    • Average: 2-3 hops

Messages:

  • 30 battery sensors × 12 msgs/hour = 360 msgs/hour = 6 msgs/min

Conclusion: Identical performance (many-to-one is RPL’s strength)

Scenario 2: One-to-Many (Cloud → All Sensors)

Route (BR to any sensor):

  • Storing: BR → optimal path to sensor
    • BR has routing table, knows best path
    • Average: 2-3 hops (BR → mains → battery)
  • Non-Storing: BR → mains → battery (same path, but source routed)
    • BR inserts source route
    • Average: 2-3 hops

Difference: Minimal

  • Storing: routing table lookup at each hop
  • Non-Storing: source route in header (extra ~10 bytes overhead)

Messages: 55 devices × 1 update = 55 messages (infrequent)

Conclusion: Non-Storing adds header overhead but same hop count

Scenario 3: Point-to-Point (Floor 1 Sensor → Floor 3 Light)

Route:

  • Storing Mode:

    Floor 1 Sensor → Parent (Floor 1 mains)
    Floor 1 mains checks table: "Floor 3 Light not in my sub-tree"
    → Forward to parent (Floor 2 mains)
    Floor 2 mains checks table: "Floor 3 Light not in my sub-tree"
    → Forward to parent (BR)
    BR checks table: "Floor 3 Light via Sensor 3-2"
    → Forward to Sensor 3-2
    Sensor 3-2 → Floor 3 Light
    
    Path: F1-Sensor → F1-mains → F2-mains → BR → F3-mains → F3-Light
    Hops: 5
  • Non-Storing Mode:

    Floor 1 Sensor → Parent → ... → BR (upward, default route)
    BR inserts source route: [F3-mains, F3-Light]
    BR → F3-mains → F3-Light
    
    Path: F1-Sensor → F1-mains → F2-mains → BR → F3-mains → F3-Light
    Hops: 5 (same)

Conclusion: For this topology, same hop count, but:

  • Storing: Distributed routing decisions (higher memory)
  • Non-Storing: Centralized at root (source routing overhead)

If topology were flatter (e.g., many sensors directly under BR):

  • Storing: F1-Sensor → BR → F3-Light (2 hops)
  • Non-Storing: F1-Sensor → BR → F3-Light (2 hops)
  • Same performance

Summary:

| Traffic Pattern | Storing Advantage | Non-Storing Advantage |
|---|---|---|
| Many-to-One | None (equal) | None (equal) |
| One-to-Many | None (equal) | Lower node memory |
| Point-to-Point | Can optimize locally | Simpler nodes |

Recommendation: Non-Storing for this scenario:

  • Battery-powered sensors dominate (low memory)
  • Many-to-one traffic is primary (sensors reporting)
  • Point-to-point traffic is rare (motion→light scenarios uncommon)

37.5 Quiz: RPL Routing Protocol

What is the primary purpose of RANK in RPL?

  1. To measure the distance in meters from the root
  2. To prevent routing loops by defining hierarchical position
  3. To prioritize packets based on importance
  4. To encrypt routing information
Click to see answer

Answer: B) To prevent routing loops by defining hierarchical position

Explanation:

RANK Definition: RANK is a scalar value representing a node’s position in the DODAG hierarchy relative to the root.

Primary Purpose: Loop Prevention

How RANK Prevents Loops:

  1. Root has minimum RANK (typically 0)
  2. RANK increases as you move away from root
  3. Upward routing rule: Packets forwarded to nodes with lower RANK
  4. Parent selection rule: Node cannot choose parent with higher RANK

Loop Prevention Example:

Figure 37.12: RANK-based loop prevention mechanism showing valid parent selection (Node C can choose Node A, RANK 100 < 400) versus forbidden loop creation (Node A cannot choose Node C, RANK 400 > 100).

Attempt to create loop:

Node C wants to choose Node A as parent:

  • Current RANK: 400
  • Node A RANK: 100
  • ✅ ALLOWED (100 < 400, moving toward root)

Node A wants to choose Node C as parent:

  • Current RANK: 100
  • Node C RANK: 400
  • ❌ FORBIDDEN (400 > 100, moving away from root = loop!)

Without RANK (e.g., simple hop count):

Figure 37.13: Without RANK mechanism, routing loops can form when Node C chooses Node A as parent, creating a circular path (ROOT → A → B → C → A).

With RANK:

Figure 37.14: With RANK mechanism, Node C cannot choose Node A as parent because RANK 400 > 100 would create a loop by moving away from root.

Option Analysis:

A) Distance in meters ❌

  • RANK is not physical distance; it is a logical position in the DODAG hierarchy
  • Two nodes the same distance from the root can have different RANKs:
    • Via a good link: RANK 100
    • Via a poor link: RANK 300

B) Prevent routing loops ✅

  • Correct: RANK enforces the DODAG hierarchy
  • Acyclic property: prevents cycles in the DAG
  • Rule: packets can only route “upward” (decreasing RANK)

C) Prioritize packets ❌

  • RANK is for routing, not QoS
  • Packet prioritization uses different mechanisms:
    • IPv6 Traffic Class field
    • RPL objective functions can optimize for latency vs. energy, but RANK itself does not prioritize packets

D) Encrypt routing ❌

  • RANK is sent in plaintext in DIO messages
  • Security is handled separately:
    • RPL secured mode (RPL Security)
    • Link-layer encryption (802.15.4 security)
    • IPsec for end-to-end security
  • RANK is not a security mechanism

RANK Calculation Factors:

RANK is calculated by the objective function, which may consider:

  • Hop count: each hop adds a fixed amount
  • ETX: Expected Transmission Count (link quality)
  • Latency: time to reach the root
  • Energy: remaining battery, power consumption
  • Throughput: link bandwidth

Example Objective Function (OF0 - Hop Count):

RANK = Parent_RANK + MinHopRankIncrease

MinHopRankIncrease = DEFAULT_MIN_HOP_RANK_INCREASE = 256

Node A (parent: ROOT): RANK = 0 + 256 = 256
Node B (parent: A):    RANK = 256 + 256 = 512
Node C (parent: B):    RANK = 512 + 256 = 768

Example Objective Function (ETX-based):

RANK = Parent_RANK + (ETX × MULTIPLIER)

Good link (ETX = 1.2): RANK increase = 1.2 × 100 = 120
Poor link (ETX = 3.5): RANK increase = 3.5 × 100 = 350

Node might prefer 2 good hops (RANK increase: 240)
over 1 poor hop (RANK increase: 350)
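The two objective-function examples above can be sketched in a few lines (the `multiplier` of 100 matches the ETX example; function names are illustrative):

```python
MIN_HOP_RANK_INCREASE = 256  # OF0 default (DEFAULT_MIN_HOP_RANK_INCREASE)

def rank_of0(parent_rank: int) -> int:
    """OF0 (hop count): each hop adds a fixed increment."""
    return parent_rank + MIN_HOP_RANK_INCREASE

def rank_etx(parent_rank: float, etx: float, multiplier: int = 100) -> float:
    """ETX-based: the increment scales with expected transmissions."""
    return parent_rank + etx * multiplier

# OF0 chain: ROOT(0) -> A -> B -> C
a = rank_of0(0)   # 256
b = rank_of0(a)   # 512
c = rank_of0(b)   # 768

# ETX: two good hops (120 + 120 = 240) beat one poor hop (350)
two_good = rank_etx(rank_etx(0, 1.2), 1.2)
one_poor = rank_etx(0, 3.5)
print(two_good < one_poor)  # True
```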
Conclusion: RANK’s primary purpose is loop prevention by establishing a strict hierarchy where packets can only flow “upward” (toward lower RANK), preventing cycles in the DODAG.

In a network of 100 battery-powered sensors sending data to a single gateway (many-to-one traffic), which RPL mode is more appropriate?

  A) Storing mode (better routing efficiency)
  B) Non-Storing mode (lower memory requirements)
  C) Both modes have identical performance for this scenario
  D) RPL doesn’t support many-to-one traffic well

Answer: C) Both modes have identical performance for this scenario

Explanation:

Many-to-One Traffic is RPL’s primary use case and is handled identically in both modes.

How Many-to-One Works in RPL:

Upward Routing (toward root):

  • Every node knows its parent (from DODAG construction)
  • Default route: send to parent
  • No routing table needed for upward routes

Both Modes:

Sensor 50 (RANK 500)
  → Parent Node 20 (RANK 300)
    → Parent Node 10 (RANK 150)
      → ROOT (RANK 0)

Process:
1. Sensor 50 has data for gateway
2. Looks up default route: "Send to parent (Node 20)"
3. Node 20 receives, looks up default route: "Send to parent (Node 10)"
4. Node 10 receives, looks up default route: "Send to parent (ROOT)"
5. ROOT receives - destination reached!
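The parent-pointer forwarding in the walkthrough above can be simulated directly; note that no routing table is consulted (the dictionary is an illustrative stand-in for per-node parent state, using the example's node names):

```python
# Each node stores only its parent -- identical in both RPL modes.
parent = {"sensor50": "node20", "node20": "node10", "node10": "root"}

def path_to_root(node: str) -> list:
    """Follow parent pointers hop by hop until the DODAG root is reached."""
    path = [node]
    while path[-1] != "root":
        path.append(parent[path[-1]])
    return path

print(path_to_root("sensor50"))
# ['sensor50', 'node20', 'node10', 'root']
```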

Storing Mode - Many-to-One:

  • Upward routing: Uses parent pointer (NOT routing table)
  • Routing table: Used for downward/P2P, not many-to-one
  • Memory: Routing table stored but not used for this traffic

Non-Storing Mode - Many-to-One:

  • Upward routing: Uses parent pointer (same as Storing)
  • No routing table: Not needed for upward routing
  • Memory: Lower (no table), but doesn’t matter for this traffic

Performance Comparison:

| Aspect | Storing Mode | Non-Storing Mode |
|---|---|---|
| Hop count | Same (upward to root) | Same (upward to root) |
| Latency | Same | Same |
| Forwarding complexity | Same (parent lookup) | Same (parent lookup) |
| Memory used | Parent pointer + routing table | Parent pointer only |
| Bandwidth | Same | Same |

Key Insight: Routing tables are for downward/P2P routes!

Option Analysis:

A) Storing mode (better routing efficiency) ❌

  • Storing mode adds no efficiency for many-to-one traffic
  • Routing tables help with downward (root → sensors) and P2P (sensor ↔ sensor) traffic
  • Upward routing doesn’t use routing tables

B) Non-Storing mode (lower memory) ⚠️

  • True: Non-Storing saves memory
  • But: not more appropriate for this traffic pattern
  • Memory savings, yet identical routing performance

C) Both modes have identical performance ✅

  • Correct: many-to-one uses the same mechanism (parent pointers) in both modes
  • Same hop count: both follow the DODAG upward
  • Same latency: identical routing path

D) RPL doesn’t support many-to-one ❌

  • Wrong: many-to-one is RPL’s primary design goal
  • RPL is specifically optimized for many sensors → single gateway (IoT data collection) and upward routing toward the root

When Modes Differ:

Scenario 1: Many-to-One (100 sensors → gateway)

  • Performance: Identical
  • Memory: Non-Storing wins (lower)
  • Best choice: Non-Storing (if sensors constrained)

Scenario 2: One-to-Many (gateway → 100 actuators)

  • Storing: Gateway → direct path to each actuator
  • Non-Storing: Gateway adds source route to packets
  • Performance: Similar (same hops), but Non-Storing adds header overhead
  • Best choice: Non-Storing still OK (infrequent traffic)

Scenario 3: Point-to-Point (sensor 1 ↔ sensor 2)

  • Storing: Sensor 1 → common parent → Sensor 2 (may optimize path)
  • Non-Storing: Sensor 1 → root → Sensor 2 (always via root)
  • Performance: Storing can be better (depends on topology)
  • Best choice: Storing (if P2P common)

Recommendation for Question Scenario:

Given:

  • 100 battery-powered sensors
  • Single gateway
  • Many-to-one traffic only (sensors → gateway)

Best Choice: Non-Storing

  • Not because of better routing (identical)
  • But because of lower memory (battery devices benefit)

However, the question asks about performance, and performance is identical for many-to-one traffic.

Conclusion: For many-to-one traffic, both RPL modes have identical routing performance (same path, same hop count, same latency). The choice between modes should be based on memory constraints and other traffic patterns (downward, P2P), not many-to-one routing efficiency.

Which RPL control message is used by a node to request DODAG information from neighbors?

  A) DIO (DODAG Information Object)
  B) DIS (DODAG Information Solicitation)
  C) DAO (Destination Advertisement Object)
  D) DAO-ACK (DAO Acknowledgment)

Answer: B) DIS (DODAG Information Solicitation)

Explanation:

RPL Control Messages (all ICMPv6):

Figure 37.15: RPL control message types: DIO for DODAG advertisement, DIS for solicitation, DAO for route building.

DIS (DODAG Information Solicitation):

Purpose: Request DODAG information from neighbors

When Used:

  1. New node joins network: Doesn’t know about any DODAG
  2. Node loses DODAG info: Parent moved, connection lost
  3. Proactive discovery: Node wants to find better DODAG

Message Flow:

New Node:  "Is there a DODAG I can join?"
           ↓ (sends DIS - multicast)
Neighbors: "Yes, here's my DODAG info"
           ↓ (respond with DIO - unicast or multicast)
New Node:  Receives DIO(s), chooses DODAG and parent

DIS Contents:

  • Solicitation flags: What kind of DODAG information desired
  • Predicates: Conditions for response (optional)
  • Usually sent to multicast address (all RPL nodes)

Example Scenario:

Scenario: Sensor powers on for first time

Step 1: Sensor sends DIS (multicast)
  "Hello, any RPL DODAGs nearby?"

Step 2: Multiple neighbors respond with DIO
  Node A: "I'm in DODAG X, RANK 256"
  Node B: "I'm in DODAG X, RANK 300"
  Node C: "I'm in DODAG Y, RANK 128"

Step 3: Sensor evaluates DIOs
  - Chooses DODAG X (two neighbors, better connectivity)
  - Chooses Node A as parent (lower RANK)

Step 4: Sensor joins DODAG X with Node A as parent
  - Calculates own RANK: 256 + 100 = 356
  - Starts sending own DIOs (advertising DODAG to others)
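Steps 2-4 of the join scenario above amount to filtering the received DIOs by DODAG and picking the lowest-RANK neighbor as parent. A sketch under those assumptions (the `rank_increase` of 100 follows the example; `join` is an illustrative name):

```python
# DIO responses received after multicasting a DIS (values from the scenario)
dios = [
    {"node": "A", "dodag": "X", "rank": 256},
    {"node": "B", "dodag": "X", "rank": 300},
    {"node": "C", "dodag": "Y", "rank": 128},
]

def join(dios, dodag, rank_increase=100):
    """Pick the lowest-RANK neighbor in the chosen DODAG as parent
    and derive our own RANK from its advertised RANK."""
    candidates = [d for d in dios if d["dodag"] == dodag]
    best = min(candidates, key=lambda d: d["rank"])
    return best["node"], best["rank"] + rank_increase

parent, my_rank = join(dios, "X")
print(parent, my_rank)  # A 356
```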

Option Analysis:

A) DIO (DODAG Information Object) ❌

  • Purpose: advertise DODAG information (not request it)
  • Direction: root/nodes → neighbors (downward/broadcast)
  • Contents: DODAG ID, RANK, objective function, configuration
  • Sent: periodically by root and nodes (using the Trickle timer)
  • Used for: new nodes discovering the DODAG, existing nodes updating info

B) DIS (DODAG Information Solicitation) ✅

  • Purpose: request DODAG information
  • Direction: node → neighbors (multicast query)
  • Response: neighbors reply with a DIO
  • Used for: active discovery when joining the network or recovering from failure

C) DAO (Destination Advertisement Object) ❌

  • Purpose: advertise reachability (not request DODAG info)
  • Direction: node → parent (upward, toward root)
  • Contents: “I am reachable via you” + node addresses/prefixes
  • Used for: building downward routes (root → nodes)
  • Storing mode: DAO to parent; parent updates its routing table
  • Non-Storing mode: DAO to root (via parents); root stores all routes

D) DAO-ACK (DAO Acknowledgment) ❌

  • Purpose: confirm DAO receipt (not request DODAG info)
  • Direction: parent → child (downward)
  • Optional: used for reliability in critical networks
  • Contents: status (accepted/rejected), DAO sequence number
  • Used for: ensuring the DAO was received and processed

Message Relationships:

Network Formation:

1. ROOT sends DIO (periodic)
   ↓
2. Nodes receive DIO, join DODAG
   ↓
3. Nodes send DIO (periodic, propagate DODAG)
   ↓
4. New nodes send DIS (if haven't heard DIO)
   ↓
5. Neighbors respond with DIO (unicast or multicast)
   ↓
6. Nodes send DAO (advertise reachability to parents)
   ↓
7. Parents send DAO-ACK (confirm receipt)

Trickle Timer and DIS:

Without DIS (passive):

  • Nodes wait for periodic DIOs
  • Trickle timer: DIOs are sent infrequently when the network is stable (minutes apart)
  • Problem: a new node may wait a long time to join

With DIS (active):

  • New node sends a DIS immediately
  • Neighbors respond with DIO promptly
  • Benefit: fast joining (seconds instead of minutes)

DIS Use Cases:

  1. Initial Network Join:

    New sensor powers on
    → Sends DIS
    → Receives DIOs from neighbors
    → Joins DODAG
  2. Parent Loss Recovery:

    Node's parent moves/fails
    → Loses DODAG connection
    → Sends DIS
    → Discovers new parent
    → Rejoins DODAG
  3. Better Path Discovery:

    Node wants to check for better routes
    → Sends DIS
    → Receives DIOs from neighbors
    → May switch parent if better path available
Conclusion: DIS (DODAG Information Solicitation) is the RPL control message used to request DODAG information from neighbors, enabling active network discovery and faster DODAG joining compared to waiting for periodic DIOs.

Why is OSPF (Open Shortest Path First) not suitable for IoT Low-Power and Lossy Networks?

  A) OSPF doesn’t support IPv6
  B) OSPF requires too much processing power and memory
  C) OSPF can’t handle mesh topologies
  D) OSPF is proprietary and requires licensing

Answer: B) OSPF requires too much processing power and memory

Explanation:

OSPF is a link-state routing protocol designed for traditional IP networks (enterprise, ISP). It’s fundamentally incompatible with resource-constrained IoT devices.

Resource Requirements Comparison:

| Resource | OSPF | RPL |
|---|---|---|
| CPU | GHz, multi-core | MHz, single-core |
| RAM | MB-GB | KB (10-128 KB) |
| Flash | MB-GB | KB (32-512 KB) |
| Power | Watts (mains) | Milliwatts (battery) |
| Algorithm | Dijkstra’s SPF | Distance-vector |
| Database | Link-state DB (entire network) | Parent pointer / routing table |

Why OSPF Doesn’t Work for IoT:

1. Memory Requirements:

OSPF:

  • Link-State Database (LSDB): Stores topology of entire network
  • Every router knows everything: All links, all routers
  • Example: 100-node network
    • Average 3 links per node = 300 links
    • Link-state data: ~50 bytes per link
    • LSDB size: 300 × 50 = 15 KB (minimum, just topology)
    • Add routing table, neighbor table, etc.: 50-100 KB total

IoT Device (e.g., nRF52840):

  • Total RAM: 256 KB
  • OSPF would use: 50-100 KB (up to ~40% of RAM!)
  • Application left with: ~150 KB (including OS, app, buffers)

RPL:

  • Storing mode: Routing table for sub-DODAG only (KBs)
  • Non-Storing mode: Just parent pointer (2 bytes)
  • Typical: 1-5 KB for routing state
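The back-of-the-envelope memory figures above can be reproduced with the stated assumptions (3 links per node, ~50 bytes of link-state data per link, ~18 bytes per Storing-mode routing entry):

```python
def ospf_lsdb_bytes(nodes, links_per_node=3, bytes_per_link=50):
    """Rough LSDB size: every OSPF router stores every link's state."""
    return nodes * links_per_node * bytes_per_link

def rpl_storing_bytes(descendants, bytes_per_entry=18):
    """Storing mode: one routing entry per descendant in the sub-DODAG."""
    return descendants * bytes_per_entry

print(ospf_lsdb_bytes(100))   # 15000 -> ~15 KB topology alone, per router
print(rpl_storing_bytes(20))  # 360 bytes for a node with 20 descendants
```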

2. Processing Requirements:

OSPF Algorithm:

1. Receive Link-State Advertisements (LSAs)
2. Update Link-State Database
3. Run Dijkstra's Shortest Path First (SPF) algorithm
   - For 100-node network: O(N^2) = 10,000 operations
   - Recalculate entire routing table
4. Update forwarding table

CPU Time (traditional router):

  • 100-node network: ~10-50 ms (GHz processor)
  • Acceptable for a mains-powered router

CPU Time (IoT device, 32 MHz):

  • 100-node network: ~500 ms - 2 seconds
  • Energy: 20 mA × 1 s = 20 mA·s (significant battery drain)
  • Worse: runs on every topology change!

RPL Algorithm:

1. Receive DIO from parent
2. Update RANK = parent_RANK + increase
3. (Storing mode) Update routing table for children
4. Done

CPU Time: < 1ms (simple calculation)

3. Protocol Overhead:

OSPF:

  • Hello packets: Every 10 seconds (keep neighbors alive)
  • LSA flooding: Every topology change
  • LSA refresh: Every 30 minutes (even if no change)
  • Packets: Large (detailed link-state info)

Example overhead (10 neighbors):

  • Hello: 10 packets / 10 s = 1 packet/s
  • Each hello: ~50-100 bytes
  • Bandwidth: ~800 bps continuous (just hellos!)
  • Power: receiver always on to hear hellos

RPL:

  • DIO: Trickle timer (adaptive, minutes when stable)
  • DAO: Only when topology changes
  • Packets: Smaller (compressed headers with 6LoWPAN)

Example overhead (stable network):

  • DIO: 1 packet / 16 minutes (Trickle timer)
  • Bandwidth: ~10 bps average
  • Power: can sleep between DIOs

4. Convergence Time:

OSPF Convergence:

Link fails
→ Detect failure (Hello timeout: 40s)
→ Flood LSAs (100ms-1s)
→ Run SPF algorithm (10ms-100ms on traditional router, seconds on IoT)
→ Update routing table
→ Convergence: 1-5 seconds (fast on traditional hardware)

Problem for IoT: SPF computation drains battery

RPL Convergence:

Link fails (parent unreachable)
→ Node detects (missed DIOs or data failure)
→ Select backup parent (if available, instantaneous)
→ Or send DIS (request new parent)
→ Join new parent's DODAG
→ Convergence: Seconds (no complex computation)

Option Analysis:

A) OSPF doesn’t support IPv6 ❌

  • Wrong: OSPFv3 fully supports IPv6
  • RFC 5340 (OSPFv3 for IPv6) published in 2008
  • IPv6 support is not the issue

B) OSPF requires too much processing power and memory ✅

  • Correct: OSPF’s link-state database and SPF algorithm exceed IoT capabilities
  • Memory: the LSDB stores the entire network (50-100+ KB)
  • CPU: Dijkstra’s SPF algorithm is computationally expensive
  • Power: frequent Hello packets, always-on receiver

C) OSPF can’t handle mesh topologies ❌

  • Wrong: OSPF excels at mesh topologies
  • Designed for arbitrary topologies (mesh, star, etc.)
  • Calculates shortest paths in any topology

D) OSPF is proprietary and requires licensing ❌

  • Wrong: OSPF is an open standard (IETF RFC 2328, RFC 5340)
  • No licensing fees
  • Multiple open-source implementations (Quagga, BIRD, FRRouting)

Real-World Example:

IoT Device: nRF52840 (common for Thread, Zigbee)

  • CPU: 64 MHz ARM Cortex-M4
  • RAM: 256 KB
  • Flash: 1 MB
  • Power: 5 mA RX, 5 µA sleep

Running OSPF:

  • LSDB: 50 KB (20% of RAM)
  • Hello packets: RX always on (5 mA continuous = 120 mA·h per day)
  • Battery (2000 mAh): Lasts 16 days (vs years with RPL)

Running RPL:

  • Routing state: 2 KB (< 1% of RAM)
  • Trickle DIOs: RX on briefly every 16 minutes
  • Battery: Lasts years (duty cycle < 1%)
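A rough duty-cycle model reproduces the battery comparison above (assumes 5 mA RX, 5 µA sleep, and treats all other costs as negligible):

```python
def battery_days(capacity_mah, rx_ma, duty_cycle, sleep_ua=5):
    """Average current = duty-cycled RX plus the sleep-current floor."""
    avg_ma = rx_ma * duty_cycle + (sleep_ua / 1000) * (1 - duty_cycle)
    return capacity_mah / avg_ma / 24

# OSPF-style: receiver always on at 5 mA
print(round(battery_days(2000, 5, 1.0), 1))  # 16.7 days
# RPL-style: < 1% duty cycle -> roughly 1500 days (over 4 years)
print(round(battery_days(2000, 5, 0.01)))
```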
Conclusion: OSPF is unsuitable for IoT LLNs primarily because its link-state database and SPF algorithm require far more memory and processing power than battery-powered IoT devices can afford, while RPL is specifically designed for these constraints.

37.6 Summary and Key Takeaways

This chapter covered RPL’s two routing modes and three traffic patterns – essential knowledge for designing efficient IoT networks.

37.6.1 Key Concepts

Summary mindmap showing RPL Routing Modes chapter key concepts: Storing Mode with distributed tables, Non-Storing Mode with centralized routing, and three traffic patterns

37.6.2 Decision Framework

| Choose This | When You Have | Trade-off |
|---|---|---|
| Storing Mode | Powerful nodes (32+ KB RAM), P2P traffic, low latency needs | Higher memory, better performance |
| Non-Storing Mode | Constrained devices (< 16 KB), many-to-one traffic, large networks | Lower memory, root bottleneck |

37.6.3 Critical Numbers to Remember

  • Storing routing entry: ~18 bytes (IPv6 address + next-hop)
  • Non-Storing node state: ~2 bytes (parent pointer only)
  • Memory savings: Non-Storing saves ~40% network-wide
  • Latency penalty: Non-Storing adds 2-4 extra hops for P2P traffic
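These numbers give a quick capacity estimate. The sketch below assumes the ~18-byte Storing entry and 2-byte parent pointer from the list above; the node counts and the average-descendants figure are illustrative:

```python
def storing_total_bytes(n_nodes, avg_descendants, entry_bytes=18):
    """Network-wide routing state in Storing mode: every router
    holds one entry per node in its sub-DODAG."""
    return n_nodes * avg_descendants * entry_bytes

def nonstoring_total_bytes(n_nodes, node_state=2, entry_bytes=18):
    """Non-Storing: nodes keep only a parent pointer; the root
    keeps one route entry per node."""
    return n_nodes * node_state + n_nodes * entry_bytes

# 500-node network, average 10 descendants per router
print(storing_total_bytes(500, 10))  # 90000 bytes, spread across nodes
print(nonstoring_total_bytes(500))   # 10000 bytes (9000 of it at the root)
```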

37.6.4 Common Pitfalls

Pitfalls to Avoid
  1. Using Storing Mode for battery sensors - Wastes precious RAM that could extend battery life
  2. Using Non-Storing for sensor-actuator networks - All P2P traffic goes through root, creating bottleneck
  3. Ignoring traffic patterns - Many-to-one performs identically in both modes; don’t over-engineer
  4. Forgetting root capacity - In Non-Storing, root must store all routes; plan for growth

37.7 Concept Relationships

Storing Mode ↔ Memory Requirements: Distributed routing tables mean each router-capable node stores entries for its descendants. Memory scales with sub-tree size, not total network size.

Non-Storing Mode ↔ Source Routing: Centralized routing at root requires source routing headers for downward paths. This shifts memory burden from distributed nodes to the root.

Traffic Pattern → Mode Choice: Many-to-one traffic performs identically (both use parent pointers upward). Point-to-point traffic differentiates the modes (Storing optimizes, Non-Storing via root).

DAO Messages → Routing Tables (Storing): DAO propagation builds routing state hop-by-hop. Each parent accumulates routes from its children’s DAOs.

DAO Messages → Root State (Non-Storing): All DAOs flow to root, which builds complete topology. Root capacity limits network scale, not individual node memory.

37.9 What’s Next

| If you want to… | Read this |
|---|---|
| Understand RPL DODAG construction | RPL DODAG Construction |
| Study the Trickle timer in depth | RPL DODAG Trickle |
| Apply modes in production scenarios | RPL Production Scenarios |
| Practice with RPL labs | RPL Labs and Quiz |

Now that you understand RPL routing modes and traffic patterns, you’re ready to explore DODAG construction, the Trickle timer, and production design scenarios.