98 DSR Worked Examples and Practice
Sensor Squad: Real-World Problem Solving
“Let me show you three real situations where DSR routing matters,” said Max the Microcontroller.
Problem 1 – The Wildfire: “A firefighter team leader needs to send an evacuation order across 3 relay nodes,” Max explained. “The route discovery takes about 60 milliseconds – that is the time for the question to travel 3 hops out and the answer to come 3 hops back.”
“That is fast enough for an emergency!” said Sammy the Sensor.
Problem 2 – The Shipping Port: “Container trackers are losing 35% of their messages,” Max continued. “The containers move every 90 seconds, but the route cache expires after 5 minutes. By the time you use a cached route, the container already moved!”
“So shorten the cache timeout to match the movement!” suggested Bella the Battery. “About 60 seconds should work.”
Problem 3 – The Elephant Collars: “GPS collars on elephants need to last 2 years. If we discover a fresh route for every single message, routing alone uses 11% of the battery!”
Lila the LED calculated: “But elephants move slowly. If we cache routes for 45 minutes, we only use 3.7% for routing. That is three times better!”
The squad learned that there is no single best setting – you have to match your routing strategy to how fast things move and how often you communicate!
For Beginners: Understanding DSR Through Examples
DSR worked examples help you understand how to tune the protocol for real-world deployments. The key trade-offs are:
Discovery latency: How long before you can send your first packet? This depends on how many hops the route is, not how many total nodes exist in the network.
Cache timeout: How long should you remember a route? Set it too long and you use stale routes (packets fail). Set it too short and you waste energy rediscovering routes constantly.
Energy budget: Every route discovery floods the entire network. If you communicate rarely (once per hour), the flood cost is tiny. If you communicate constantly, the floods add up to more overhead than just keeping routes always updated (like DSDV).
| Trade-off | Low Value | High Value |
|---|---|---|
| Cache timeout | More discoveries, fresher routes | Fewer discoveries, possibly stale |
| Communication frequency | DSR efficient (rare floods) | DSDV efficient (amortized overhead) |
| Network mobility | Short cache, frequent discovery | Long cache, rare discovery |
98.1 Learning Objectives
By the end of this chapter, you will be able to:
- Calculate Route Discovery Latency: Compute end-to-end discovery times based on network parameters
- Optimize Cache Timeout: Determine optimal cache lifetimes for different mobility scenarios
- Analyze Energy Trade-offs: Compare discovery strategies for battery-constrained deployments
- Design Recovery Strategies: Plan route error recovery for mission-critical applications
98.2 Prerequisites
Before diving into this chapter, you should be familiar with:
- DSR Fundamentals and Route Discovery: Understanding RREQ/RREP mechanisms and source routing is essential for the worked examples
- DSR Caching and Maintenance: Route caching strategies and RERR handling are prerequisites for the optimization scenarios
Key Concepts
- Route Discovery Walkthrough: Step-by-step trace of RREQ propagation, duplication detection, and RREP return in a small network
- Source Route Tracing: Reading the accumulated node list in a RREQ header to reconstruct the complete path taken
- Duplicate RREQ Filtering: Each node forwards a RREQ only once per (source, broadcast ID) pair; prevents network flooding loops
- RREP Unicast: The route reply is sent unicast along the reverse path of the route discovery (or using a cached route)
- Network Partition Example: Scenario where no route exists; RREQ floods the entire network and times out with no RREP
- Multi-hop Path Construction: How intermediate nodes append their address to the RREQ route record field
- Cache Benefit Example: Intermediate node with cached route to destination replies directly without forwarding RREQ to destination
- Header Overhead Growth: RREQ header grows by one address per hop; in large networks, header overhead becomes significant
98.3 Worked Example 1: Route Discovery in Emergency Response
DSR Route Discovery in Emergency Response Network
Scenario: A wildfire response team deploys portable radio nodes across a forest. Incident Commander (Node IC) at the command post needs to send evacuation orders to Firefighter Team 3 (Node F3) located 2 km away. The network has never communicated between these nodes before, so no cached route exists.
Given:
- Network nodes: IC (source), R1, R2, R3, R4, F3 (destination)
- Topology: IC neighbors R1, R2; R1 neighbors R3; R2 neighbors R3, R4; R3 neighbors F3; R4 neighbors F3
- RREQ packet size: 40 bytes base + 4 bytes per hop
- RREP packet size: 40 bytes + complete route
- Radio range: 300m, data rate: 115 kbps
Steps:
- IC initiates route discovery:
  - IC broadcasts RREQ: <IC, F3, [IC], ID=1001>
  - R1 and R2 receive the RREQ
- First hop forwarding:
  - R1 appends itself: <IC, F3, [IC,R1], ID=1001>, forwards to R3
  - R2 appends itself: <IC, F3, [IC,R2], ID=1001>, forwards to R3 and R4
- Second hop forwarding:
  - R3 receives from R1: path = [IC,R1,R3]
  - R3 receives from R2: path = [IC,R2,R3] (same ID=1001, discards duplicate)
  - R3 forwards the first copy received: <IC, F3, [IC,R1,R3], ID=1001> to F3
  - R4 forwards: <IC, F3, [IC,R2,R4], ID=1001> to F3
- Destination receives RREQ:
  - F3 receives via R3: path = [IC,R1,R3,F3]
  - F3 receives via R4: path = [IC,R2,R4,F3] (same ID, discards)
  - F3 sends RREP with the first path: route = [IC,R1,R3,F3]
- RREP returns to source:
  - F3 → R3 → R1 → IC
  - RREP: <F3, IC, route=[IC,R1,R3,F3]>
- Calculate discovery latency:
- Each hop: transmission (40 bytes / 115 kbps = 2.8 ms) + propagation (<1 ms) + processing (~5 ms) ≈ 10 ms
- RREQ: 3 hops × 10 ms = 30 ms
- RREP: 3 hops × 10 ms = 30 ms
- Total discovery: ~60 ms
Result: IC discovers route [IC → R1 → R3 → F3] in approximately 60 ms, caches it, and can now send evacuation orders. The alternate path [IC,R2,R4,F3] also reached F3 but was discarded as a duplicate, so it never returned to IC. In deployments that value redundancy, the destination can be configured to reply to multiple RREQs so that the source caches both paths.
Key Insight: DSR’s route discovery latency depends on network diameter (hop count), not network size. A 1000-node network with 5-hop diameter has similar discovery latency to a 50-node network with 5-hop diameter. The flood overhead increases with network size, but latency is hop-bound.
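The latency arithmetic above can be captured in a small helper. This is a sketch under the example's assumptions: the ~10 ms per-hop figure bundles transmission (~2.8 ms), propagation (<1 ms), and processing (~5 ms), and discovery is a symmetric round trip (RREQ out, RREP back).

```python
def discovery_latency_ms(hops, per_hop_ms=10):
    """Round-trip DSR route discovery latency.

    RREQ travels `hops` hops to the destination; the RREP
    returns over the same number of hops.
    """
    return 2 * hops * per_hop_ms
```

For the wildfire scenario, `discovery_latency_ms(3)` gives the 60 ms computed in the steps above; note the result depends only on hop count, not node count.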
98.4 Worked Example 2: Cache Timeout Optimization
DSR Cache Timeout Optimization for Mobile Sensors
Scenario: A logistics company uses asset trackers on shipping containers in a port yard. Containers are moved by cranes and trucks, causing frequent topology changes. The network uses DSR with route caching. The operations team observes high packet loss (35%) and suspects stale cached routes.
Given:
- 200 container trackers reporting to 5 gateway nodes
- Current cache timeout: 300 seconds (5 minutes)
- Average container movement interval: 90 seconds
- Packet transmission frequency: every 60 seconds per tracker
- Observed metrics: 35% packet loss, average 2.1 retries per successful delivery
- Route discovery takes ~100 ms
Steps:
- Analyze mobility vs. cache timeout mismatch:
- Containers move every 90 seconds on average
- Cache timeout: 300 seconds
- Cache is 3.3× longer than mobility interval
- High probability cached routes are stale when used
- Calculate stale cache probability:
- Probability container moves in 300s: 1 - e^(-300/90) ≈ 96%
- Most cached routes become invalid before expiration
- This explains 35% packet loss (stale routes fail)
- Calculate optimal cache timeout:
- Target: Cache valid for most of its lifetime
- Conservative: cache_timeout < movement_interval
- Recommended: cache_timeout = 0.5 × movement_interval = 45 seconds
- More aggressive: cache_timeout = movement_interval = 90 seconds
- Evaluate trade-offs:
- 45s timeout: more discoveries (every 45s worst case), but stale probability at expiration of only 1 - e^(-45/90) ≈ 39%
- 90s timeout: fewer discoveries, but 1 - e^(-90/90) ≈ 63% stale probability at expiration
- 300s timeout (current): minimal discoveries, but 96% stale probability (the current problem)
- Estimate improvement with 60s timeout:
- Stale probability when cache expires: 1 - e^(-60/90) ≈ 49%
- However, cache is refreshed on each route discovery, and routes are used within 60s
- Average cache age at use ≈ 30s: P(stale at 30s) = 1 - e^(-30/90) ≈ 28%
- Packet loss reduction: 35% → ~10-15% (55-70% improvement)
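The stale-route figures in these steps all come from one exponential-movement model. The sketch below assumes, as the example does, that container movements arrive as a Poisson process with the given mean interval:

```python
import math

def stale_probability(cache_age_s, mean_move_interval_s):
    """P(at least one topology-changing move occurred within cache_age_s),
    assuming exponentially distributed movement intervals."""
    return 1 - math.exp(-cache_age_s / mean_move_interval_s)
```

With the port-yard numbers, `stale_probability(300, 90)` reproduces the ≈96% figure for the current 300 s timeout, and `stale_probability(30, 90)` the ≈28% figure for a 60 s timeout used at an average age of 30 s.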
Result: Reducing cache timeout from 300 seconds to 60 seconds should reduce packet loss from 35% to approximately 10%. The trade-off is increased route discovery overhead (from ~0.2 discoveries/minute to ~1 discovery/minute per tracker), but this is acceptable given the roughly 70% reduction in packet loss.
Key Insight: Optimal DSR cache timeout should be approximately equal to or less than the average mobility interval. When cache_timeout >> mobility_interval, stale routes dominate. When cache_timeout << mobility_interval, discovery overhead dominates. The sweet spot balances freshness against discovery cost. For highly mobile environments, consider disabling caching entirely and discovering fresh routes for each transmission.
98.5 Worked Example 3: Energy Trade-off Analysis
Route Discovery Latency vs. Energy Trade-off Analysis
Scenario: A wildlife tracking network monitors endangered elephants across a 50km² reserve. GPS collars on 30 elephants form an ad-hoc network to relay location data to ranger stations. The network uses DSR routing, and rangers need to optimize the discovery strategy for both battery life and tracking responsiveness.
Given:
- 30 elephant collar nodes, transmission range 500m
- Average distance between collars: 1.2km (requires multi-hop)
- Typical path length: 4-6 hops to reach gateway
- RREQ packet size: 40 bytes + 4 bytes per hop accumulated
- RREP packet size: 40 bytes + complete route (24 bytes for 6-hop path)
- Radio power: 100mW transmit, 50mW receive
- Data rate: 19.2 kbps
- Collar battery: 5000mAh, must last 2 years
- Location update frequency: every 15 minutes
- Route lifetime before movement invalidates: ~45 minutes
Steps:
Calculate route discovery energy cost:
- RREQ flooding: Broadcasts to all 30 nodes
- RREQ size at destination: 40 + (6 × 4) = 64 bytes
- Transmission time per node: 64 × 8 / 19200 = 26.7ms
- RREQ energy per node: 100mW × 26.7ms = 2.67mJ
- Network-wide RREQ: 30 × 2.67mJ = 80.1mJ
Calculate RREP energy:
- RREP travels 6 hops: 64 bytes each
- Per-hop transmit: 100mW × 26.7ms = 2.67mJ
- Per-hop receive: 50mW × 26.7ms = 1.33mJ
- 6-hop RREP total: 6 × (2.67 + 1.33) = 24mJ
- Total discovery: 80.1 + 24 = 104.1mJ
Compare discovery strategies:
Battery capacity: 5,000 mAh × 3.6 V = 18,000 mWh = 64,800 J, the total energy budget for the 2-year deployment
Strategy A: Fresh discovery every transmission (no caching)
- Discoveries per day: 96 (every 15 min)
- Daily energy: 96 × 104.1 mJ = 9,994 mJ ≈ 10 J/day
- 2-year energy: 730 × 10 J = 7,300 J = 563 mAh at 3.6 V
- 11% of battery for routing (significant given 2-year target)
Strategy B: Cache routes for 45 minutes
- Discoveries per day: 32 (every 45 min)
- Daily energy: 32 × 104.1 mJ = 3,331 mJ ≈ 3.3 J/day
- 2-year energy: 730 × 3.3 J = 2,409 J = 186 mAh at 3.6 V
- Only 3.7% of battery for routing — 3× improvement over Strategy A
Strategy C: Adaptive caching (45min when stationary, 15min when moving)
- Elephants stationary 70% of time, moving 30%
- Stationary discoveries: 0.7 × 32 = 22.4/day
- Moving discoveries: 0.3 × 96 = 28.8/day
- Total: 51.2 discoveries/day × 104.1 mJ = 5,330 mJ/day ≈ 5.3 J/day
- 2-year energy: 730 × 5.3 J = 3,869 J = 299 mAh = 6.0% of battery
Calculate latency impact:
- Discovery latency: 6 hops × 30ms/hop = 180ms RREQ + 180ms RREP = 360ms
- Strategy A: 360ms delay on every transmission (unacceptable for poaching alerts)
- Strategy B: 360ms delay every 3rd transmission on average
- Strategy C: Immediate for cached routes (~47% of transmissions, given 51.2 discoveries per 96 daily transmissions), 360ms for the remainder
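The three strategies can be compared with one helper. This is a sketch using the figures derived above (104.1 mJ per network-wide discovery, a 64,800 J battery, 730 days); the function names are illustrative, not from any library.

```python
def routing_energy_fraction(discoveries_per_day,
                            discovery_mj=104.1,
                            battery_j=64_800,
                            lifetime_days=730):
    """Fraction of total battery energy spent on route discovery
    over the deployment lifetime."""
    total_j = discoveries_per_day * (discovery_mj / 1000) * lifetime_days
    return total_j / battery_j
```

Evaluating it for Strategy A (96 discoveries/day), B (32/day), and C (51.2/day) yields roughly 11%, 3.8%, and 6.0% of the battery respectively; small deviations from the in-text figures reflect intermediate rounding.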
Result: Strategy B (45-minute cache) provides the best balance, consuming only 3.7% of battery for routing (vs. 11% for no caching — a 3× improvement), while maintaining acceptable responsiveness. Stale route probability is ~35% at cache expiration, but DSR’s route error mechanism handles failures gracefully.
Key Insight: DSR’s energy efficiency depends critically on cache hit rate. For slowly-moving networks (elephants walk ~2-4 km/h), long cache timeouts dramatically reduce discovery overhead. The 45-minute cache aligns with typical movement patterns — elephants change position significantly every 30-60 minutes. For faster-moving targets (vehicles at 50 km/h), cache timeout should shrink to 5-10 minutes.
98.6 Worked Example 4: Route Error Recovery
DSR Route Error Recovery in Disaster Response Network
Scenario: A post-earthquake search and rescue operation deploys a temporary ad-hoc network. Rescue workers carry radio nodes while searching collapsed buildings. A team leader (Node TL) needs to send survivor locations to the command post (Node CP), but the network is highly dynamic with workers constantly moving.
Given:
- Network: 25 rescue worker nodes, 3 command post nodes
- Terrain: Urban rubble, transmission range varies 50-150m
- Current route: TL → W1 → W2 → W3 → CP (4 hops)
- Worker W2 moves out of range of W1 at t=0
- Route cache contains this path from discovery 90 seconds ago
- Survivor location report queued at TL (critical priority)
- RERR (Route Error) processing time: 15ms per hop
- New route discovery: ~200ms expected
Steps:
Timeline of failure detection and recovery:
t=0ms: Worker W2 moves out of W1’s range
t=0ms: TL sends survivor report using cached route [W1,W2,W3,CP]
t=35ms: Packet arrives at W1
t=35-135ms: W1 attempts transmission to W2
- First attempt: No ACK (100ms timeout)
- t=135ms: W1 detects link failure
Route Error propagation:
t=135ms: W1 generates RERR: “Link W1-W2 broken”
- RERR packet: 32 bytes (error type, broken link, affected routes)
t=150ms: RERR arrives at TL (15ms transmission)
t=150ms: TL processes RERR:
- Removes route [TL,W1,W2,W3,CP] from cache
- Checks for alternate cached routes to CP: NONE FOUND
- Must initiate new route discovery
New route discovery:
t=150ms: TL broadcasts RREQ for CP
- RREQ floods network, accumulating path
t=250ms: CP receives RREQ via new path [TL,W1,W4,W5,CP]
- Note: W4 recently moved into W1’s range
t=350ms: TL receives RREP with new 4-hop route
t=350ms: TL caches new route and retransmits survivor report
Calculate total recovery time:
- Initial transmission attempt: 35ms
- Link failure detection: 100ms
- RERR propagation: 15ms
- Route discovery: 200ms
- Total: 350ms from send to successful route establishment
Putting Numbers to It
Route error recovery time determines mission-critical message reliability: \(T_{recovery} = T_{detect} + T_{RERR} + T_{discovery}\). Worked example for a disaster network with 4-hop paths: link failure detection (ACK timeout) = 100ms, RERR propagation = \(4 \times 15ms = 60ms\), new RREQ/RREP = \(2 \times 4 \times 30ms = 240ms\), so total = \(100 + 60 + 240 = 400ms\). For sub-200ms requirements, reduce the ACK timeout to 50ms and pre-cache backup routes, eliminating rediscovery: recovery = \(50 + 60 = 110ms\).
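The recovery formula translates directly to code. A minimal sketch, with parameter defaults matching the worked numbers above (a pre-cached backup route skips the rediscovery term):

```python
def recovery_time_ms(hops,
                     ack_timeout_ms=100,
                     rerr_per_hop_ms=15,
                     discovery_per_hop_ms=30,
                     has_backup_route=False):
    """T_recovery = T_detect + T_RERR + T_discovery.

    T_discovery is a round trip (RREQ out + RREP back) and is
    skipped entirely when a backup route is already cached.
    """
    t_detect = ack_timeout_ms
    t_rerr = hops * rerr_per_hop_ms
    t_discovery = 0 if has_backup_route else 2 * hops * discovery_per_hop_ms
    return t_detect + t_rerr + t_discovery
```

`recovery_time_ms(4)` gives the 400 ms baseline, and `recovery_time_ms(4, ack_timeout_ms=50, has_backup_route=True)` the 110 ms fast path.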
- Impact on critical message delivery:
- Original expected latency (if route worked): ~140ms (4 hops × 35ms)
- Actual latency with failure: 350ms + 140ms = 490ms
- Delay caused by stale route: 350ms
- Mitigation strategy for critical messages:
- Pre-compute backup routes during idle periods
- If TL had cached [TL,W4,W5,CP] as backup:
- Recovery: RERR (15ms) + cache lookup (5ms) + send (140ms) = 160ms
- Saves 330ms on critical message delivery
Result: DSR’s route error mechanism recovered from link failure in 350ms, acceptable for non-critical data but potentially problematic for time-sensitive survivor reports. Pre-caching backup routes reduces recovery to 160ms.
Key Insight: DSR’s RERR mechanism is reactive—it only detects failures after a transmission attempt fails. For mission-critical applications in dynamic networks, supplement DSR with proactive route maintenance: periodically probe cached routes with lightweight “route alive” packets, and pre-compute backup paths during idle periods. The 100ms ACK timeout is the dominant factor in failure detection; reducing it improves responsiveness but increases false positives from temporary interference.
98.7 Additional Practice
Understanding Check: Route Caching Strategy
Scenario: A precision agriculture network has 100 battery-powered soil sensors reporting to a gateway. Each sensor transmits once per hour. The network uses DSR with aggressive route caching (10-minute timeout). Tractors occasionally move through the field, temporarily blocking sensor-gateway links.
Think about:
- Is aggressive route caching beneficial here?
- What happens when a tractor blocks a cached route?
- How should cache timeout be adjusted for this environment?
Key Insight: Aggressive caching is problematic in this mobile-obstacle scenario.
- Communication pattern: Sensors transmit once per hour (3600s). Route discovery latency (500ms) is negligible compared to the sensing interval. Caching benefit is minimal since the route won't be reused for another hour, and a 10-minute cache may expire before reuse anyway.
- Mobility impact: Tractors moving through the field make cached routes stale. A sensor tries its cached 3-hop route, but hop 2 is now blocked by a tractor; the transmission fails, the sensor detects a ROUTE ERROR, flushes its cache, and initiates a new discovery. Wasted energy: failed transmission (~50 mAh) plus new discovery (~30 mAh) versus discovering a fresh route immediately (~30 mAh). The stale cache costs roughly 60% extra energy.
- Optimal strategy: Disable caching entirely or use a very short timeout (1-2 minutes). Since communication is once per hour, each transmission should discover a fresh route; the discovery overhead (30 mAh every hour) is trivial compared to the sensor battery budget. Alternative: use environmental sensors to predict cache invalidation; if an accelerometer detects tractor vibration, proactively invalidate cached routes.
- General rule: Cache aggressively when communication frequency far exceeds mobility rate. Cache conservatively when mobility rate far exceeds communication frequency.
Understanding Check: Source Routing Overhead
Scenario: A disaster response network spans 5km with 200 nodes, average 8 hops between nodes and command center. DSR source routing includes complete path in every packet header. Each node ID is 2 bytes, packet payload is 20 bytes.
Think about:
- What percentage of packet is routing overhead?
- How does this impact wireless transmission energy?
- When does source routing overhead become prohibitive?
Key Insight: Source routing overhead is significant: 8 hops × 2 bytes = 16 bytes of routing header. Total packet: 16 bytes (header) + 20 bytes (payload) = 36 bytes. Overhead ratio: 16/36 = 44% of the packet is routing information!
- Energy impact: Wireless transmission energy is proportional to packet size, so 44% overhead means 44% extra energy per packet. For battery-powered nodes transmitting 1000 packets over the deployment lifetime, this wastes 440 packet-equivalents of energy; at 30 mAh per packet, that is 13,200 mAh wasted (equivalent to 2-3 AA batteries).
- When overhead becomes prohibitive: (1) Network diameter > 10 hops: 10 hops × 2 bytes = 20 bytes overhead; for a 10-byte payload, overhead is 66%! (2) Large node IDs: IPv6 addresses (16 bytes each) give 8 hops × 16 bytes = 128 bytes of overhead, far exceeding the payload. (3) Frequent communication: the overhead repeats in every packet.
- Mitigation strategies: (1) Route compression: use route indices instead of full paths. The gateway maintains a route table, and packets carry a 2-byte index instead of a 16-byte path, reducing overhead from 44% to 9%. (2) Hop-by-hop routing: use routing tables (like DSDV) for frequent communication; source routing's benefits vanish when communication is constant. (3) Header compression: 6LoWPAN-style compression for common routes.
- Design guideline: DSR works best for sparse communication across short paths (≤5 hops). For dense communication or long paths, consider table-driven protocols.
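The overhead arithmetic is easy to parameterize. A sketch over the stated packet format (per-hop node IDs accumulated in the header, fixed payload):

```python
def source_route_overhead(hops, id_bytes=2, payload_bytes=20):
    """Fraction of each data packet occupied by the source-route header."""
    header_bytes = hops * id_bytes
    return header_bytes / (header_bytes + payload_bytes)
```

`source_route_overhead(8)` reproduces the 44% figure; `source_route_overhead(10, payload_bytes=10)` gives the ~67% worst case for long paths with small payloads.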
98.8 Visual Reference Gallery
Visual: Ad Hoc Network Structure
This visualization captures the infrastructure-less nature of ad hoc networks where DSR enables on-demand route discovery between any pair of nodes.
Visual: Ad Hoc Routing Protocol Taxonomy
This diagram shows how DSR fits within the broader taxonomy of ad hoc routing protocols as a reactive, source-routing approach.
Visual: Ad Hoc Route Discovery Process
This figure illustrates the route discovery mechanism central to DSR, where route requests flood the network and replies return along discovered paths.
Test Your Understanding
Question 1: A wildlife tracking network has GPS collars on 30 elephants that report locations every 15 minutes. Elephants move slowly, changing position significantly every 30-60 minutes. What is the optimal DSR cache timeout for this scenario?
- 5 seconds – to always have fresh routes
- 15 minutes – matching the transmission interval
- 45 minutes – matching the movement interval
- No caching – discover fresh routes every time
Answer
c) 45 minutes – matching the movement interval. The optimal cache timeout should approximate the mobility interval. With 45-minute caching, routing consumes only about 3.7% of the battery (vs. 11% with no caching). Since elephants move slowly, routes remain valid for roughly 30-60 minutes. Setting the cache to 45 minutes balances freshness against discovery overhead. A 5-second timeout would trigger constant unnecessary rediscoveries, while matching the 15-minute transmission interval ignores that the network topology changes on a different timescale.
Question 2: In a disaster response ad-hoc network, a rescue worker’s radio detects that the next hop in a cached DSR route has moved out of range. What is the correct sequence of events?
- The source immediately discovers a new route by flooding RREQ
- The detecting node sends RERR to the source, which invalidates the cache and initiates new RREQ if no backup route exists
- The detecting node waits for the next periodic routing update
- The packet is dropped and the application must retransmit
Answer
b) The detecting node sends RERR to the source, which invalidates the cache and initiates new RREQ if no backup route exists. DSR’s route maintenance works as follows: (1) the forwarding node detects link failure (no ACK after ~100 ms), (2) it generates a RERR message identifying the broken link, (3) RERR propagates back to the source, (4) the source removes all affected cached routes, (5) if an alternate cached route exists, it is used immediately; otherwise new RREQ discovery begins. Total recovery is typically 350 ms with no backup, or 160 ms with a pre-cached alternate route.
Question 3: A 200-node sensor network has an average path length of 8 hops with 20-byte payloads. Each node ID is 2 bytes. What percentage of each DSR packet is routing overhead?
- About 10%
- About 25%
- About 44%
- About 80%
Answer
c) About 44%. Source routing header = 8 hops × 2 bytes = 16 bytes. Total packet = 16 bytes (header) + 20 bytes (payload) = 36 bytes. Overhead ratio = 16/36 = 44%. This means 44% of every packet is routing information rather than useful data. For networks with long paths (>10 hops) or small payloads, this overhead becomes prohibitive. Mitigation strategies include route compression (using indices instead of full paths) or switching to hop-by-hop routing like DSDV for dense communication patterns.
Worked Example: DSR Route Discovery Time Budget for Real-Time Alerts
Scenario: A factory safety system uses DSR routing to deliver gas leak alerts to control room within 500ms. Calculate whether DSR discovery latency meets requirements.
Given:
- Network: 25 sensor nodes, 100m radio range
- Typical alert path: 6 hops source to gateway
- RREQ propagation: 30ms per hop (includes transmission, propagation, processing)
- RREP propagation: 30ms per hop
- Cached route probability: 60% (routes expire after 5 minutes)
Steps:
- Calculate discovery latency for cache miss:
- RREQ floods outward: 6 hops × 30ms = 180ms
- RREP returns: 6 hops × 30ms = 180ms
- Total discovery: 360ms (uncached)
- Calculate cached route latency:
- Zero discovery delay (route already known)
- Data transmission: 6 hops × 35ms = 210ms
- Total: 210ms (cached)
- Calculate expected average latency:
- Cache hit rate: 60%
- Expected = (0.60 × 210ms) + (0.40 × (360ms + 210ms))
- Expected = 126ms + 228ms = 354ms average
- Evaluate against 500ms requirement:
- Worst case (no cache): 570ms, exceeds 500ms
- Average case: 354ms, meets 500ms
- Best case (cached): 210ms, meets 500ms
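The steps above can be checked with a quick calculation. A sketch using the hop timings from the Given section (30 ms/hop for discovery, 35 ms/hop for data):

```python
def expected_latency_ms(cache_hit_rate, hops,
                        data_per_hop_ms=35, discovery_per_hop_ms=30):
    """Cache-hit-weighted delivery latency for a DSR alert."""
    cached = hops * data_per_hop_ms                       # data forwarding only
    uncached = 2 * hops * discovery_per_hop_ms + cached   # RREQ + RREP, then data
    return cache_hit_rate * cached + (1 - cache_hit_rate) * uncached
```

`expected_latency_ms(0.60, 6)` reproduces the 354 ms average; sweeping `cache_hit_rate` shows how much headroom a higher hit rate buys against the 500 ms budget.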
Result: DSR meets the 500ms requirement on average (354ms) and when cached (210ms), but fails in the worst case (570ms). For guaranteed compliance, implement aggressive route caching with longer timeouts (10-15 min) to raise the hit rate from 60% to 85%+, reducing expected latency to roughly 265ms.
Key Insight: DSR’s discovery delay is acceptable for most IoT scenarios (environmental monitoring, telemetry) where seconds of latency are tolerable. For real-time critical alerts (safety, industrial control), either pre-warm caches with periodic keep-alive packets or use proactive routing (DSDV) where routes are always ready with zero discovery delay.
Decision Framework: DSR vs DSDV vs ZRP Protocol Selection
Use this framework to select the appropriate routing protocol for your IoT deployment based on measurable network characteristics.
| Factor | DSDV (Proactive) | DSR (Reactive) | ZRP (Hybrid) |
|---|---|---|---|
| Network Size | <50 nodes | 50-200 nodes | 100-1000 nodes |
| Topology Change Rate | <1 change/hour | 1-10 changes/hour | 0.5-5 changes/hour |
| Traffic Frequency | >1 pkt/30s per node | <1 pkt/5min per node | Mixed (local frequent, distant rare) |
| Energy Budget | Abundant (mains) | Limited (battery) | Moderate (solar) |
| Latency Requirement | <100ms critical | >1s acceptable | <500ms for 80% traffic |
| Network Diameter | <5 hops | 5-10 hops | 5-15 hops |
| Decision | ✓ Always-ready routes justify overhead | ✓ On-demand saves energy | ✓ Balanced for mixed scenarios |
Example Decision Process for Smart Building (80 nodes):
- Size: 80 nodes → DSR or ZRP (rules out DSDV, which suits networks under 50 nodes)
- Mobility: HVAC sensors fixed, but workers carry tags (2-5 topology changes/hour) → Moderate mobility favors DSR or ZRP
- Traffic: Sensor reports every 60s → Frequent (>1/30s) → Favors DSDV or ZRP
- Energy: Sensors on building power, tags on battery → Mixed → Favors ZRP
- Latency: Fire alarms need <2s, HVAC tolerates 10s → Mixed → Favors ZRP
- Diameter: 3-story building, ~6 hops max → All protocols viable
Decision: ZRP with zone radius = 2-3 hops
- Rationale: Mixed power budget (building sensors + battery tags), mixed latency (fire vs HVAC), frequent local traffic (same floor) with occasional cross-floor communication
- Configuration: ρ=2 covers same-floor sensors proactively (~15 nodes); inter-floor traffic uses reactive discovery
- Expected overhead: ~35% of DSDV's full-proactive burden, with 90% route availability
Common Mistake to Avoid: Don’t default to DSR because “reactive saves energy.” For networks with >1 packet/2min per node, proactive or hybrid actually consumes less energy because discovery flooding overhead exceeds periodic update cost.
Common Mistake: Ignoring Route Cache Staleness in Mobile Networks
The Mistake: Configuring DSR with long cache timeouts (10-30 minutes) in mobile IoT networks, leading to 40-60% packet loss from stale routes.
Real-World Example: A fleet tracking deployment used DSR with 15-minute cache timeout. Vehicles moving at 40 km/h changed network topology every 2-3 minutes. Result: 55% of cached routes were invalid by the time they were used, causing massive retransmissions and discovery floods.
Why It Happens:
- Cache timeout was tuned for static networks (long timeouts reduce overhead)
- Developers didn’t calculate actual mobility interval (time until topology changes invalidate route)
- Testing was done in static testbed (missed mobile failure modes)
Calculate Optimal Cache Timeout:
Mobility Interval Formula:
mobility_interval = radio_range / node_velocity
Example:
- Radio range: 100m
- Vehicle velocity: 40 km/h = 11 m/s
- Mobility interval = 100m / 11 m/s = 9 seconds
Rule: Set cache timeout ≤ 0.5 × mobility_interval for <10% stale probability.
- Optimal cache: 0.5 × 9s ≈ 4-5 seconds (not 15 minutes!)
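The mobility-interval formula and the 0.5× rule of thumb translate to a short helper (a sketch; velocity must be in m/s, so convert km/h first):

```python
def mobility_interval_s(radio_range_m, velocity_mps):
    """Approximate time until movement carries a node out of radio range."""
    return radio_range_m / velocity_mps

def recommended_cache_timeout_s(radio_range_m, velocity_mps):
    """Rule of thumb: cache timeout <= 0.5 x mobility interval
    keeps stale probability under ~10%."""
    return 0.5 * mobility_interval_s(radio_range_m, velocity_mps)
```

For the fleet example (100 m range, 40 km/h ≈ 11 m/s), this gives a ~9 s mobility interval and a ~4.5 s recommended timeout, matching the figures above.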
Quantified Impact (Fleet Tracking Fix):
- Before (15min cache): 55% packet loss, 8.2 discoveries/packet, battery depleted in 18 hours
- After (5s cache): 8% packet loss, 1.4 discoveries/packet, battery lasted 72+ hours
- Result: 85% reduction in packet loss, 4× battery life extension
Adaptive Cache Strategy: For networks with variable mobility, implement adaptive timeout:
```python
def adaptive_cache_timeout(route_error_rate):
    if route_error_rate > 0.3:    # High errors = high mobility
        return 5                  # seconds
    elif route_error_rate > 0.1:
        return 30                 # seconds
    else:                         # Low errors = stable topology
        return 300                # seconds (5 min)
```

Monitor the RERR (Route Error) rate as a mobility proxy: high error rate → short timeout; low error rate → long timeout.
Key Lesson: Cache timeout must match your network’s mobility, not generic default values from academic papers. Measure topology change rate in real deployment environment and tune accordingly.
98.9 Concept Relationships
| Concept | Relationship | Connected Concept |
|---|---|---|
| Route Discovery Latency | Scales with network diameter rather than | Total Network Size |
| Cache Timeout Optimization | Must approximate mobility interval to balance | Stale Routes vs Discovery Overhead |
| Energy Trade-offs | Cache hit rate determines whether | DSR Beats Proactive Routing |
| Source Routing Overhead | Header size grows linearly with | Path Length in Hops |
| Route Error Recovery | Detection time dominated by | ACK Timeout Duration |
98.10 See Also
- DSR Fundamentals and Route Discovery - Core RREQ/RREP mechanisms
- DSR Caching and Maintenance - Cache strategies and error handling
- Ad Hoc Routing: Proactive (DSDV) - Comparison with table-driven approach
- Ad Hoc Routing: Hybrid (ZRP) - Balanced routing strategy
- Multi-Hop Fundamentals - Foundation for path establishment
Common Pitfalls
1. Not Tracing the Reverse Path for RREP
RREP is sent back along the reverse of the discovered route — not along the same forward path. If the RREQ took path A→B→C→D, the RREP goes D→C→B→A (or via a cached route). Confusing forward and reverse paths leads to incorrect route reply path analysis.
2. Allowing Multiple RREQ Forwarding for the Same Request
Each node stores a (source, broadcast ID) cache and forwards each unique RREQ only once. Allowing multiple forwarding of the same RREQ creates exponential message multiplication. The size of RREQ cache required scales with network size and discovery frequency.
3. Forgetting That Cached Route Replies Create Asymmetric Knowledge
When node C replies to a RREQ using a cached route to D, node C knows the source route A→B→C→D→… but node A only knows the route used. Other nodes on the original path may have incomplete route cache entries. Asymmetric cache knowledge affects future route reuse.
4. Confusing Broadcast ID With Sequence Number
DSR’s broadcast ID increments per route discovery initiated by a node. This is different from DSDV’s sequence number (per destination) or TCP sequence numbers (per byte). Using the wrong increment logic in route discovery examples produces incorrect duplicate detection behavior.
98.11 Summary
This chapter provided practical DSR worked examples covering:
- Route Discovery Latency: Discovery time depends on network diameter (hop count), not network size; emergency response scenario showed ~60ms for 3-hop discovery
- Cache Timeout Optimization: Optimal cache timeout should approximate mobility interval; port logistics scenario showed reducing timeout from 300s to 60s improved delivery from 65% to 90%
- Energy Trade-offs: Wildlife tracking example demonstrated 45-minute caching consuming only 3.7% battery vs. 11% for fresh discovery per transmission (3× improvement); cache hit rate is critical for energy efficiency
- Route Error Recovery: Disaster response scenario showed 350ms recovery time from link failure; pre-caching backup routes reduces recovery to 160ms for mission-critical applications
- Source Routing Overhead: Large network diameters cause significant header overhead (44% for 8-hop, 20-byte payload); consider hop-by-hop routing for dense communication
Related Chapters
Foundation:
- DSR Fundamentals and Route Discovery - Core DSR concepts
- DSR Caching and Maintenance - Cache strategies and RERR
Comparisons:
- Ad Hoc Routing: Proactive (DSDV) - Table-driven continuous route maintenance
- Ad Hoc Routing: Hybrid (ZRP) - Balancing proactive and reactive approaches
Learning:
- Simulations Hub - Route discovery visualizations
98.12 Knowledge Check
98.13 What’s Next
| If you want to… | Read this |
|---|---|
| Learn DSR caching and route maintenance | DSR Caching and Route Maintenance |
| Study DSR fundamentals | DSR Fundamentals and Route Discovery |
| Compare with DSDV proactive routing | DSDV Proactive Routing |
| Learn hybrid ZRP approach | Ad Hoc Routing: Hybrid (ZRP) |
| Review ad hoc routing protocols | Ad Hoc Networks Review |