16  Simulation Methods & Scenarios

16.1 Learning Objectives

  • Design systematic IoT network simulations from requirements analysis through deployment validation
  • Configure four-layer network models (physical, MAC, network, application) with realistic parameters
  • Implement validation and verification processes to ensure simulation accuracy matches real-world performance
  • Apply performance optimization strategies for latency reduction, throughput improvement, and battery life extension
  • Run statistical analysis with multiple random seeds and confidence intervals for reliable simulation results

Key Concepts

  • Simulation Methodology: A structured approach to planning, executing, and interpreting simulations to ensure results are reproducible, statistically valid, and relevant to deployment conditions
  • Experimental Design: Defining the independent variables (topology, node count, transmit interval), dependent variables (PDR, latency, energy), and controlled variables (radio model, channel model) before running simulations
  • Statistical Validity: Ensuring simulation results are based on sufficient samples (typically N > 100 packets per node per scenario) to support confident conclusions about network performance
  • Sensitivity Analysis: Systematically varying one parameter at a time (transmit power, node density, packet size) to quantify how performance depends on each factor
  • Monte Carlo Simulation: Running many simulation trials with randomly varied parameters (node positions, channel conditions) to characterize performance variability, not just average performance
  • Confidence Interval: A statistical range that captures the true performance metric value with a specified probability; reports uncertainty in simulation results
  • Validation Experiment: A comparison of simulation predictions against measurements from a real pilot deployment; essential for confirming that the simulator accurately models the target environment

In 60 Seconds

Network simulation methodology structures the simulation workflow from defining performance metrics and configuring realistic environments through running controlled experiments and comparing results against field measurements — turning simulation from an ad-hoc exercise into a rigorous validation process.

Design methodology gives you a structured, proven process for creating IoT systems from initial concept to finished product. Think of it like following a recipe when cooking a complex meal – the methodology tells you what to do first, how to handle each step, and how to bring everything together into a successful final result.

“Simulation methodology means having a scientific approach to testing our network design,” explained Max the Microcontroller. “You do not just press play once and call it done. You run the simulation many times with different random seeds, measure the results, calculate averages, and check if the results are statistically reliable.”

Sammy the Sensor described the four layers: “A good simulation models the physical layer (radio signals and interference), the MAC layer (who gets to talk when), the network layer (routing data through the mesh), and the application layer (the actual sensor data). Skip any layer and your results will not match reality.”

“Verification and validation are two different things,” said Lila the LED. “Verification asks ‘Did we build the simulation correctly?’ – like checking your code for bugs. Validation asks ‘Does the simulation match the real world?’ – you compare simulation results against actual measurements.” Bella the Battery added, “A simulation that does not match reality is just a fancy guess. Always validate with real-world data!”

⏱️ ~35 min | ⭐⭐⭐ Advanced | 📋 P13.C05.U04

The following diagram illustrates the systematic approach to IoT network design and simulation, from initial requirements through deployment validation:

Flowchart showing systematic IoT network design methodology: Requirements Analysis phase branches into three parallel streams (Define use case with device count and area, Performance targets for latency and throughput, Constraints including power/cost/reliability) that converge into Network Design phase. Network Design branches into three parallel decisions (Select topology as star/mesh/hybrid, Choose protocols like Wi-Fi/Zigbee/LoRa, Plan addressing with IPv4/IPv6/custom) feeding into Create Simulation Model. Model phase splits into three parallel configurations (Define nodes as sensors/gateways, Configure protocols with parameters and channels, Set traffic patterns as periodic or event-driven) that merge into Run Simulation. Simulation flows to Analyze Results then Meet requirements decision point. No path loops back to Adjust design parameters then returns to Network Design for iteration. Yes path proceeds to Deploy and Validate, then Pilot deployment small scale, finally reaching Scale to production. Iterative refinement loop ensures design optimization before costly physical deployment.
Figure 16.1: IoT Network Design Methodology: Requirements to Production Deployment

16.1.1 Defining Simulation Objectives

Performance Metrics: Clearly define what you're measuring:

  • Latency (end-to-end, per-hop)
  • Throughput (aggregate, per-device)
  • Packet delivery ratio
  • Energy consumption
  • Network lifetime
  • Collision rate
  • Channel utilization

Scenarios to Test:

  • Baseline performance (ideal conditions)
  • Stress testing (maximum load)
  • Failure scenarios (node/link failures)
  • Mobility (if applicable)
  • Interference conditions
  • Scalability (varying node counts)

16.1.2 Network Model Development

Once simulation objectives are defined, the next step is building a realistic network model. A complete IoT simulation requires modeling four distinct layers of the network stack, each contributing specific aspects of system behavior. The following diagram illustrates this four-layer architecture:

Four-layer network modeling architecture diagram showing Physical Layer (radio propagation, transmission power, frequency/bandwidth, path loss, shadowing/fading), MAC Layer (channel access methods CSMA/CA/TDMA/ALOHA, collision avoidance, retry limits, acknowledgments), Network Layer (routing protocols AODV/RPL/static, address assignment, packet forwarding, route maintenance), and Application Layer (traffic patterns periodic/event-driven/burst, packet sizes, data generation rates, CoAP/MQTT protocols). Each layer feeds into next, building complete IoT network simulation model from radio physics to application behavior.
Figure 16.2: Diagram showing four-layer network modeling architecture for IoT simulation

Physical Layer:

  • Radio propagation model (free space, two-ray ground, log-distance)
  • Transmission power and sensitivity
  • Frequency and bandwidth
  • Path loss exponent
  • Shadowing and fading

Example NS-3 configuration:

YansWifiChannelHelper wifiChannel;
wifiChannel.SetPropagationDelay("ns3::ConstantSpeedPropagationDelayModel");
wifiChannel.AddPropagationLoss("ns3::LogDistancePropagationLossModel",
                                "Exponent", DoubleValue(3.0),
                                "ReferenceDistance", DoubleValue(1.0),
                                "ReferenceLoss", DoubleValue(40.0));
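To build intuition for what this configuration predicts, here is a simulator-independent sketch of the log-distance calculation in Python (the function name and the example numbers are illustrative, matching the NS-3 attributes above):

```python
import math

def log_distance_rx_power_dbm(tx_power_dbm, distance_m,
                              exponent=3.0, ref_distance_m=1.0, ref_loss_db=40.0):
    """Received power under the log-distance model:
    PL(d) = ref_loss + 10 * exponent * log10(d / d0)."""
    path_loss_db = ref_loss_db + 10.0 * exponent * math.log10(distance_m / ref_distance_m)
    return tx_power_dbm - path_loss_db

# A node transmitting at 0 dBm, received 10 m away:
# PL = 40 + 30 * log10(10) = 70 dB -> RX power = -70 dBm
print(log_distance_rx_power_dbm(0.0, 10.0))  # -70.0
```

Comparing such hand calculations against the simulator's reported RSSI at a few distances is a quick verification step (are we building the simulation correctly?).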

MAC Layer:

  • Channel access method (CSMA/CA, TDMA, ALOHA)
  • Collision avoidance parameters
  • Retry limits and backoff
  • Acknowledgment mechanisms

Network Layer:

  • Routing protocol (static, AODV, RPL)
  • Address assignment
  • Packet forwarding rules
  • Route maintenance

Application Layer:

  • Traffic patterns (periodic, event-driven, burst)
  • Packet sizes
  • Data generation rates
  • Application protocols (CoAP, MQTT)

Example application traffic:

// Periodic sensor readings: one 100-byte packet every 10 seconds
OnOffHelper onoff("ns3::UdpSocketFactory",
                  InetSocketAddress(gatewayAddress, port));
onoff.SetAttribute("PacketSize", UintegerValue(100));
// 100 bytes = 800 bits must fit in each 0.1 s on-period -> 8000 bps
// (average rate over the 10 s cycle is 80 bps)
onoff.SetAttribute("DataRate", StringValue("8000bps"));
onoff.SetAttribute("OnTime", StringValue("ns3::ConstantRandomVariable[Constant=0.1]"));
onoff.SetAttribute("OffTime", StringValue("ns3::ConstantRandomVariable[Constant=9.9]"));

16.1.3 Topology Configuration

With the network layers configured, the next critical decision is how to arrange nodes in space. Topology configuration determines radio link quality, multi-hop paths, and ultimately network performance.

Node Placement:

  • Grid placement (regular spacing)
  • Random placement (uniform, Gaussian)
  • Real-world coordinates (GPS-based)
  • Clustered (grouped sensors)
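Placement coordinates are often generated outside the simulator and then fed in; a minimal sketch of the first two strategies (function names are illustrative):

```python
import math
import random

def grid_placement(n, area_m):
    """Regularly spaced nodes on a square grid covering area_m x area_m."""
    side = math.ceil(math.sqrt(n))
    spacing = area_m / max(side - 1, 1)
    coords = [(col * spacing, row * spacing)
              for row in range(side) for col in range(side)]
    return coords[:n]

def random_placement(n, area_m, seed=1):
    """Uniform random placement; fix the seed so the topology is reproducible."""
    rng = random.Random(seed)
    return [(rng.uniform(0, area_m), rng.uniform(0, area_m)) for _ in range(n)]

nodes = grid_placement(50, 20.0)  # e.g. 50 nodes in a 20 m x 20 m area
print(len(nodes), nodes[0])       # 50 (0.0, 0.0)
```

Fixing the placement seed matters for reproducibility: rerunning the study with the same seed must yield the same topology.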

Mobility Models:

  • Static (sensors, infrastructure)
  • Random waypoint (mobile devices)
  • Traces from real deployments
  • Predictable paths (vehicles)

Network Size: Start small (10-50 nodes) to verify correctness, then scale up for performance testing.

16.1.4 Running Simulations

Simulation Time:

  • Warm-up period (allow network to stabilize)
  • Measurement period (collect metrics)
  • Cool-down period (optional)

Typical: 100-1000 seconds simulation time (depends on application)

Interactive: Simulation Parameter Impact Calculator

Explore how simulation parameters affect network performance metrics.

How to use: Adjust the sliders to see how different network parameters affect performance. This simplified model helps build intuition before running detailed simulations. Notice how increasing node count without proportionally increasing area degrades PDR due to collision probability, while increasing transmit power improves PDR but reduces battery life.

Random Seeds: Run multiple simulations with different random seeds to get statistical confidence:

// NS-3 example: 30 independent replications using run-number substreams
RngSeedManager::SetSeed(1);        // fixed seed for the whole study
for (int run = 1; run <= 30; run++) {
    RngSeedManager::SetRun(run);   // select an independent random substream

    // (Re)build topology and applications here, then:
    Simulator::Run();
    Simulator::Destroy();
}
Interactive: Statistical Confidence Calculator

Why 30 simulation runs? Statistical confidence follows the Central Limit Theorem. For a 95% confidence interval:

\[\text{Margin of Error} = 1.96 \times \frac{\sigma}{\sqrt{n}}\]

With 30 runs and typical PDR standard deviation σ = 0.05 (5%), the confidence interval width is ±1.8%. With only 5 runs, the interval would be ±4.4% (too wide for design decisions). With 100 runs, it tightens to ±1.0% (diminishing returns).
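The same calculation applies when post-processing per-run results; a minimal sketch, assuming one PDR value per replication:

```python
import math
import statistics

def confidence_interval_95(samples):
    """Mean and 95% margin of error (normal approximation, as in the formula above)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    margin = 1.96 * sigma / math.sqrt(n)
    return mean, margin

# Hypothetical PDR results from 5 runs:
mean, margin = confidence_interval_95([0.93, 0.95, 0.96, 0.94, 0.97])
print(f"PDR = {mean:.3f} +/- {margin:.3f}")  # PDR = 0.950 +/- 0.014
```

For small run counts (n around 30 or below), the Student-t multiplier (about 2.05 at n = 30) gives a slightly wider, more honest interval than the normal-approximation 1.96.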

Parameter Sweeps: Systematically vary parameters to understand impact:

  • Node density: 10, 20, 50, 100, 200 nodes
  • Transmission power: -10, 0, 10, 20 dBm
  • Data rate: 1, 5, 10, 20 packets/min
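A sweep like this is usually driven by a script that enumerates every parameter combination and seed, then launches the simulator once per entry; a minimal sketch (the dictionary keys are illustrative, not a fixed NS-3 interface):

```python
import itertools

def sweep_configurations(node_counts, tx_powers_dbm, rates_ppm, runs=30):
    """Enumerate every (parameters, seed) combination for a full factorial sweep."""
    for nodes, power, rate in itertools.product(node_counts, tx_powers_dbm, rates_ppm):
        for run in range(1, runs + 1):
            yield {"nodes": nodes, "txPowerDbm": power,
                   "packetsPerMin": rate, "rngRun": run}

configs = list(sweep_configurations([10, 20, 50, 100, 200],
                                    [-10, 0, 10, 20],
                                    [1, 5, 10, 20]))
print(len(configs))  # 5 * 4 * 4 parameter points x 30 seeds = 2400 simulations
```

The count makes the cost of full factorial sweeps obvious; when 2,400 runs is too many, sensitivity analysis (varying one parameter at a time around a baseline) is the cheaper alternative.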

16.1.5 Data Collection and Analysis

After running simulations with proper statistical rigor, the next step is extracting meaningful insights from the raw simulation output. Understanding what data to collect and how to analyze it separates useful simulation from wasted computational effort.

Trace Files: Most simulators output detailed trace files:

  • Packet traces (transmission, reception, drops)
  • Node energy traces
  • Routing table updates
  • Application-layer events

The following diagram shows how key simulation metrics are calculated and how they relate to each other:

Key IoT network simulation metrics diagram showing five essential performance measures with formulas: Packet Delivery Ratio (PDR = Packets Received / Packets Sent × 100%), Average End-to-End Latency (sum of receive time minus send time divided by packets received), Throughput (total bytes received × 8 divided by simulation time in bps), Energy Consumption (sum of transmit, receive, sleep, and processing energy), and Network Lifetime (time until first node battery depletion or network partition). Metrics interconnected showing PDR affects latency, throughput depends on PDR, energy consumption determines network lifetime.
Figure 16.3: Diagram showing key IoT network simulation metrics and their calculation formulas

Metrics Calculation:

Packet Delivery Ratio (PDR):

PDR = (Packets Received / Packets Sent) × 100%

Average End-to-End Latency:

Avg Latency = Σ(Receive Time - Send Time) / Packets Received

Throughput:

Throughput = (Total Bytes Received × 8) / Simulation Time (in bps)

Energy Consumption:

Total Energy = Σ(Tx Energy + Rx Energy + Sleep Energy + Processing Energy)

Network Lifetime: Time until first node depletes battery or network becomes partitioned.
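These formulas translate directly into post-processing code. The sketch below assumes a deliberately simplified trace format; real simulators emit richer traces (pcap, ASCII traces, FlowMonitor XML) that need parsing first:

```python
def compute_metrics(events, sim_time_s):
    """PDR, average end-to-end latency, and throughput from simple trace records.

    `events`: list of (packet_id, send_time_s, recv_time_s_or_None, size_bytes);
    a recv time of None means the packet was dropped.
    """
    sent = len(events)
    delivered = [(s, r, size) for _, s, r, size in events if r is not None]
    pdr = len(delivered) / sent * 100.0                     # percent
    avg_latency = sum(r - s for s, r, _ in delivered) / len(delivered)
    throughput_bps = sum(size * 8 for _, _, size in delivered) / sim_time_s
    return pdr, avg_latency, throughput_bps

trace = [(1, 0.0, 0.05, 100), (2, 10.0, 10.04, 100), (3, 20.0, None, 100)]
pdr, lat, tput = compute_metrics(trace, sim_time_s=30.0)
print(pdr, lat, tput)  # ~66.7% PDR, ~45 ms latency, ~53 bps
```

In practice, restrict the events to the measurement window (excluding warm-up) before computing metrics, otherwise startup transients bias the averages.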

Statistical Analysis:

  • Mean, median, standard deviation
  • Confidence intervals (typically 95%)
  • Box plots, CDFs for distributions
  • Hypothesis testing for comparing protocols

16.1.6 Validation and Verification

The following diagram illustrates the comprehensive validation and verification process for IoT network simulation:

Flowchart showing network simulation validation and verification methodology with two-phase quality assurance: Simulation Results feed into Verification Phase which branches into three parallel checks (Model correctness for protocol implementation, Parameter validation for realistic values, Edge case testing for failures and congestion), all converging to Validation Phase. Validation Phase splits into three parallel validation methods (Benchmark vs real-world data, Compare with analytical models, Pilot deployment field testing) that merge at Results match decision point. Decision evaluates match quality: less than 10% error path leads to Accept simulation model then High confidence for deployment (success outcome); greater than 10% error path leads to Debug model and refine parameters which loops back to Simulation Results for iterative improvement. Process ensures simulation accuracy through systematic verification (building model correctly) and validation (building correct model) before production deployment commitment.
Figure 16.4: Simulation Validation and Verification: Benchmarking Against Real-World Data

Verification (are we building it right?):

  • Check simulation code for bugs
  • Validate against mathematical models
  • Compare with theoretical limits (Shannon capacity, etc.)
  • Sanity checks (conservation of packets, energy)
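The packet-conservation check is cheap to automate after every run; a minimal sketch:

```python
def check_packet_conservation(sent, received, dropped, in_flight_at_end=0):
    """Sanity check: every transmitted packet must be accounted for as
    received, dropped, or still in flight when the simulation ended."""
    accounted = received + dropped + in_flight_at_end
    if sent != accounted:
        raise AssertionError(
            f"Packet conservation violated: sent={sent}, accounted={accounted}")
    return True

# Example: 10,000 sent = 9,420 received + 575 dropped + 5 queued at sim end
print(check_packet_conservation(10_000, 9_420, 575, 5))  # True
```

A violation almost always indicates a tracing bug (an uninstrumented drop path) rather than a protocol problem, which is exactly what verification is meant to catch.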

Validation (are we building the right thing?):

  • Compare simulation results with real deployments (if available)
  • Validate propagation models with measurements
  • Ensure traffic patterns match real applications
  • Verify protocol implementations against standards

16.2 Common IoT Network Scenarios

⏱️ ~20 min | ⭐⭐ Intermediate | 📋 P13.C05.U05

16.2.1 Scenario 1: Smart Home Sensor Network

Requirements:

  • 50 devices (sensors, actuators)
  • Star topology (all → gateway)
  • Latency <500ms
  • Battery life >1 year
  • Indoor environment

Simulation Setup:

  • Protocol: Zigbee (802.15.4)
  • Topology: 50 nodes randomly placed in 20m×20m area
  • Traffic: Periodic (every 30-60 seconds)
  • Gateway: Central position

Key Metrics:

  • PDR (should be >95%)
  • Average latency
  • Battery lifetime
  • Gateway load

Challenges to Model:

  • Indoor propagation (walls, furniture)
  • Interference from Wi-Fi
  • Burst traffic during events (alarm triggered)

16.2.2 Scenario 2: Industrial Wireless Sensor Network

Requirements:

  • 200 sensors in factory
  • Mesh topology (multi-hop)
  • Latency <100ms (control loops)
  • Reliability >99.9%
  • Harsh RF environment

Simulation Setup:

  • Protocol: WirelessHART or ISA100.11a
  • Topology: Grid placement (factory floor)
  • Traffic: Periodic (100ms-1s)
  • Multiple gateways for redundancy

Key Metrics:

  • End-to-end latency distribution
  • Worst-case latency
  • Path diversity
  • Network resilience to node failures

Challenges to Model:

  • Metallic reflections and multipath
  • Interference from machinery
  • Time-sensitive networking requirements
  • Redundancy and failover

16.2.3 Scenario 3: Smart City LoRaWAN Deployment

Requirements:

  • 10,000 sensors across city
  • Star-of-stars topology (sensors → gateways → network server)
  • Long range (2-5 km)
  • Low data rate (few packets/hour)
  • Multi-year battery life

Simulation Setup:

  • Protocol: LoRaWAN
  • Topology: Real city map with sensor locations
  • Traffic: Sparse (1-10 packets/hour per device)
  • Multiple gateways with overlapping coverage

Key Metrics:

  • Coverage (% of nodes reaching gateway)
  • Gateway utilization
  • Collisions and packet loss
  • Energy per packet
  • Scalability limits

Challenges to Model:

  • Large geographic area
  • Urban propagation model
  • Duty cycle restrictions
  • Adaptive data rate (ADR) algorithm
  • Collision probability with many devices
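Coverage over a large area is a natural fit for Monte Carlo estimation: place nodes at random, count how many reach a gateway, repeat. A sketch with a hypothetical square city and gateway layout (positions, range, and counts are illustrative, and a real study would use the urban propagation model rather than a fixed radius):

```python
import math
import random

def coverage_fraction(num_nodes, area_km, gateways, range_km, trials=50, seed=1):
    """Monte Carlo coverage estimate: nodes placed uniformly at random in a
    square area; a node is covered if any gateway is within radio range."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(trials):
        for _ in range(num_nodes):
            x, y = rng.uniform(0, area_km), rng.uniform(0, area_km)
            if any(math.hypot(x - gx, y - gy) <= range_km for gx, gy in gateways):
                covered += 1
    return covered / (trials * num_nodes)

# Hypothetical 10 km x 10 km city, 4 gateways, 3 km usable range:
gws = [(2.5, 2.5), (2.5, 7.5), (7.5, 2.5), (7.5, 7.5)]
print(round(coverage_fraction(1000, 10.0, gws, 3.0, trials=20), 2))
```

Repeating the estimate across trials (and seeds) characterizes the variability of coverage, not just its average, which is the point of the Monte Carlo approach.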

16.2.4 Scenario 4: Agricultural Monitoring

Requirements:

  • 100 sensors across farm (soil, weather, etc.)
  • Hybrid topology (clusters + long-range backbone)
  • Variable data rates
  • Solar-powered with battery backup
  • Outdoor, large area (several km²)

Simulation Setup:

  • Protocol: Zigbee clusters + LoRa backhaul
  • Topology: Clustered sensors, sparse gateways
  • Traffic: Adaptive (frequent during critical periods)
  • Energy harvesting model

Key Metrics:

  • Coverage
  • Multi-hop latency
  • Energy balance (harvest vs. consumption)
  • Data aggregation efficiency

Challenges to Model:

  • Long distances between clusters
  • Variable solar energy availability
  • Seasonal vegetation impact on propagation
  • Rare critical events (frost detection)

16.3 Performance Optimization Strategies

⏱️ ~15 min | ⭐⭐ Intermediate | 📋 P13.C05.U06

The following diagram illustrates key performance optimization strategies across different network dimensions:

Four-quadrant performance optimization strategies diagram showing: Reducing Latency (shorter routes, optimized gateway placement, faster MAC access with TDMA, edge processing, priority queuing for critical traffic), Improving Throughput (multi-channel operation, frequency hopping, protocol efficiency with 6LoWPAN header compression, data aggregation, load balancing across gateways), Enhancing Reliability (redundant gateways, mesh topology alternate paths, packet retransmission, Forward Error Correction, robust routing with link quality metrics), and Extending Battery Life (duty cycling with coordinated sleep schedules, adaptive sampling based on environmental stability, energy-aware routing, transmission power control adjusted per link distance and quality). Strategies address different performance dimensions with specific technical approaches.
Figure 16.5: Diagram showing four categories of IoT network performance optimization strategies

16.3.1 Reducing Latency

Latency reduction attacks three layers of the network stack simultaneously. At the network layer, shorter routes are the most direct lever: optimising gateway placement to minimise hop counts, limiting maximum hop depth (a 4-hop limit reduces worst-case latency by 50% compared to 8 hops), and using direct single-hop links where signal strength permits. At the MAC layer, faster channel access reduces the time each packet spends waiting: shortening contention windows, tuning backoff parameters for your traffic density, or switching to TDMA (time-division multiple access) for deterministic latency guarantees in industrial applications. At the application layer, edge processing reduces latency by filtering and aggregating data at intermediate nodes rather than forwarding every reading to the cloud – a gateway that averages 10 temperature readings locally and sends one summary eliminates 9 cloud round-trips. For mixed-criticality networks, priority queuing separates alarm traffic from routine periodic reports, ensuring that a fire alarm is not stuck behind 50 temperature readings in a queue.

16.3.2 Improving Throughput

Throughput bottlenecks in IoT networks are rarely about raw radio speed – they are about contention and overhead. Multi-channel operation is the most effective throughput multiplier: a Zigbee network using 4 channels instead of 1 can carry nearly 4x the aggregate traffic. Frequency hopping adds interference avoidance as a bonus. Protocol efficiency attacks overhead: 6LoWPAN header compression reduces 40-byte IPv6 headers to as few as 2 bytes, data aggregation combines multiple sensor readings into one packet, and minimising control overhead (beacons, routing updates) frees bandwidth for payload data. Load balancing across multiple gateways prevents the common bottleneck where all traffic funnels through a single collection point – in mesh networks, the nodes closest to a single gateway often become congested long before the rest of the network reaches capacity.

16.3.3 Enhancing Reliability

Reliability in IoT networks means tolerating failures that will inevitably occur over multi-year deployments. Redundancy provides the foundation: multiple gateways ensure that the loss of one does not partition the network, mesh topology creates alternate paths when links fail, and packet retransmissions recover from transient interference. Error correction adds a second defence layer: Forward Error Correction (FEC) allows receivers to reconstruct corrupted packets without retransmission (trading bandwidth for reliability), while application-layer redundancy (sending critical alerts via two independent paths) provides end-to-end guarantees. Robust routing ties these together: using link quality metrics (signal strength, packet success rate) rather than hop count for route selection, maintaining backup routes proactively before failures occur, and triggering fast rerouting when link quality drops below threshold – measured in seconds, not minutes.

16.3.4 Extending Battery Life

Battery life is often the constraining design parameter for IoT networks, and optimisation operates across all layers. Duty cycling is the primary mechanism: coordinating sleep schedules across the network so that nodes spend 99%+ of their time in microamp-level sleep mode, with asynchronous low-power listening protocols that allow a sleeping node to detect incoming packets within milliseconds. Adaptive sampling reduces unnecessary work: when environmental conditions are stable (temperature changing less than 0.1 °C per hour), the sensing interval can safely extend from 1 minute to 10 minutes, reducing energy consumption by 10x with negligible information loss. Energy-aware routing distributes the forwarding burden across nodes based on remaining battery, preventing the scenario where relay nodes near the gateway deplete first while leaf nodes still have full batteries. Transmission power control uses the minimum power sufficient for each link – a node 5 metres from the gateway needs far less transmit power than one 50 metres away, and adapting power based on real-time link quality measurements can reduce energy per packet by 3–5x.
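A back-of-envelope duty-cycle model shows why sleep power dominates battery life. The numbers below are illustrative only (a real budget must also include radio startup, failed retransmissions, and battery self-discharge):

```python
def battery_life_years(capacity_mah, voltage_v, duty_cycle,
                       active_power_mw, sleep_power_mw):
    """Estimate battery life from a simple two-state duty-cycle energy model."""
    avg_power_mw = duty_cycle * active_power_mw + (1 - duty_cycle) * sleep_power_mw
    energy_mwh = capacity_mah * voltage_v        # capacity in mWh
    hours = energy_mwh / avg_power_mw
    return hours / (24 * 365)

# 1000 mAh @ 3 V, 0.1% duty cycle, 30 mW active, 0.003 mW (3 uW) sleep:
print(round(battery_life_years(1000, 3.0, 0.001, 30.0, 0.003), 1))  # ~10.4 years
```

Note how sensitive the result is to duty cycle: at this active power, each factor-of-10 increase in duty cycle cuts lifetime by nearly 10x, which is why coordinated sleep scheduling is the first optimisation to get right.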

16.4 Best Practices

⏱️ ~12 min | ⭐⭐ Intermediate | 📋 P13.C05.U07

16.4.1 Simulation Best Practices

Start Simple: Begin with simplified models, gradually add complexity. Validate each layer before adding the next.

Document Assumptions: Clearly document all model parameters, propagation models, traffic patterns, and simplifications.

Version Control: Use git for simulation code and configuration files. Track parameter changes and results.

Reproducibility: Record random seeds, software versions, and exact configurations to enable reproducing results.

Sensitivity Analysis: Test impact of uncertain parameters (propagation exponent, node placement variation) to understand result robustness.

Compare Protocols: When evaluating protocols, ensure fair comparison with identical scenarios and traffic.

Validate Against Reality: Whenever possible, compare simulation predictions with measurements from real deployments.

16.4.2 Network Design Best Practices

Plan for Growth: Design networks with headroom for growth (2-3× initial capacity).

Redundancy for Critical Applications: Multiple gateways, mesh topologies, and failover mechanisms for high-reliability needs.

Monitor and Adapt: Build in diagnostics and monitoring to identify issues. Use adaptive protocols that adjust to conditions.

Security by Design: Include encryption, authentication, and secure firmware update mechanisms from the start.

Standardize Where Possible: Use standard protocols (MQTT, CoAP, LoRaWAN) to avoid vendor lock-in and enable interoperability.

Test Failure Modes: Simulate node failures, network partitions, gateway outages, and degraded conditions.

16.4.3 Common Pitfalls to Avoid

Over-Simplification: Using ideal propagation models or ignoring interference leads to unrealistic results.

Ignoring Edge Cases: Rare but important events (simultaneous sensor triggers, gateway failures) must be tested.

Neglecting Energy: Forgetting to model sleep modes and energy consumption leads to unrealistic battery life estimates.

Static Scenarios: Real deployments face varying conditions (interference, mobility, weather). Test dynamic scenarios.

Insufficient Statistical Rigor: Single simulation runs with one random seed provide false confidence. Use multiple runs for statistical validity.

Simulation Without Validation: Always question if simulation matches reality. Validate assumptions with measurements when possible.

16.5 Case Study: Optimizing Smart Building Network

⏱️ ~15 min | ⭐⭐⭐ Advanced | 📋 P13.C05.U08

16.5.1 Problem Statement

A smart building deployment needs to support:

  • 500 sensors (temperature, occupancy, air quality)
  • 100 actuators (HVAC, lighting)
  • Sub-second response for occupancy-based control
  • 10-year battery life for battery-powered sensors
  • 99% reliability

Initial design: Single Wi-Fi access point struggled with interference and limited range.

16.5.2 Simulation Approach

Step 1: Baseline Simulation

  • Modeled building layout (5 floors, 50m × 30m each)
  • Placed 100 devices per floor
  • Simulated Wi-Fi with single AP per floor
  • Result: 70% PDR, high latency (2-5 seconds), frequent disconnections

Step 2: Alternative Topology

  • Changed to Zigbee mesh network
  • Multiple coordinators per floor
  • Result: Improved PDR to 95%, latency reduced to 200-500ms

Step 3: Optimization

  • Added router nodes at strategic locations
  • Optimized routing parameters (max hops = 4)
  • Implemented priority for actuator commands
  • Result: PDR >99%, latency <200ms for priority traffic

Step 4: Energy Analysis

  • Modeled duty cycling (1% for sensors)
  • Calculated energy consumption: Tx = 30mW, Rx = 20mW, Sleep = 3µW
  • Result: Battery life >8 years for sensors with 1000mAh battery

Step 5: Validation

  • Deployed 50-node pilot
  • Measured PDR: 98.5% (close to simulated 99%)
  • Measured latency: 150-250ms (matched simulation)
  • Energy consumption validated through battery monitoring

16.5.3 Lessons Learned

  • Simulation guided protocol selection (Zigbee over Wi-Fi)
  • Topology optimization reduced deployment cost (fewer coordinators than initially planned)
  • Energy modeling prevented under-specifying batteries
  • Early validation with pilot deployment confirmed simulation accuracy

16.6 Simulation Accuracy: When Models Diverge from Reality

A common frustration for engineers is that simulation results do not match real-world deployment. Understanding the systematic sources of divergence helps you build appropriate safety margins into designs.

16.6.1 Sources of Simulation Error

Error Source                | Typical Impact                        | How to Detect                                        | Mitigation
Idealized propagation model | 10-30% overestimation of range        | Compare simulated vs measured RSSI at 10+ distances  | Use log-distance model calibrated with site measurements
Missing interference        | 5-40% underestimation of packet loss  | Deploy spectrum analyzer during pilot                | Add background noise floor and co-channel interferers to model
Uniform node placement      | 15-25% mismatch in coverage           | Overlay simulated topology on actual floor plan      | Use exact GPS/coordinates from site survey
Ideal MAC timing            | 5-15% optimistic latency              | Measure real device boot + TX times                  | Add measured device-specific overhead to MAC parameters
Static channel conditions   | Variable (depends on environment)     | Run simulation across multiple propagation scenarios | Use time-varying shadowing model with empirical variance

16.6.2 Case Study: Smart Building Simulation vs Reality at Siemens

Siemens Building Technologies reported results from a 2019 internal study comparing NS-3 simulations with deployed BACnet/ZigBee networks across 3 commercial buildings:

Building A (modern open-plan office, 12 floors):

  • Simulated PDR: 99.2%
  • Measured PDR: 97.8%
  • Primary divergence cause: Glass partitions and metal cable trays not in simulation model caused 2-4 dB additional path loss per floor

Building B (hospital, concrete construction):

  • Simulated PDR: 98.5%
  • Measured PDR: 91.3%
  • Primary divergence cause: Medical equipment (MRI, X-ray) generated electromagnetic interference not modeled in simulation. The simulation assumed clean spectrum; reality included 15-20 dB noise floor elevation near radiology departments.

Building C (warehouse, steel structure):

  • Simulated PDR: 97.0%
  • Measured PDR: 96.2%
  • Primary divergence cause: Minimal – open warehouse geometry closely matched simulation’s free-space model

Siemens’ resulting calibration practice: After this study, Siemens adopted a policy of applying a “reality factor” to simulation results before making deployment decisions:

  • For modern construction (steel + glass): multiply simulated path loss by 1.15
  • For dense concrete (hospitals, old buildings): multiply by 1.35
  • For open structures (warehouses, parking): multiply by 1.05

These multipliers are derived from empirical data, not theory, and are re-validated annually. The lesson: simulations are decision-support tools, not ground truth. Always validate with a pilot deployment covering at least 10% of the planned network before committing to full-scale rollout.

16.7 Knowledge Check

A smart agriculture deployment has 1,000 soil sensors transmitting to 5 gateways. Should they use SF7 (fast, short range) or SF12 (slow, long range)?

SF7 configuration:

  • Airtime per packet (20 bytes): 41 ms
  • Maximum range: 2 km
  • Sensors reachable: 600 (40% beyond 2 km)
  • Collision probability (Aloha model): G = 600 × (1/600s) × 0.041s = 0.041
  • Throughput: S = 0.041 × e^(-0.082) = 0.038
  • PDR: 0.038/0.041 = 93% (close to 95% requirement)

SF12 configuration:

  • Airtime per packet: 2,400 ms (59× longer!)
  • Maximum range: 10 km
  • Sensors reachable: 1,000 (100% coverage)
  • Collision probability: G = 1000 × (1/600s) × 2.4s = 4.0 (extremely high!)
  • Throughput: S = 4.0 × e^(-8.0) = 0.0013
  • PDR: 0.0013/4.0 = 0.03% (catastrophic collision rate)

Optimal solution - Adaptive Data Rate (ADR):

  • Sensors <2 km use SF7 (600 sensors, G=0.041, PDR=93%)
  • Sensors 2-5 km use SF10 (300 sensors, distributed across gateways)
  • Sensors >5 km use SF12 (100 sensors, distributed across gateways)
  • Add 2 more gateways (7 total) to reduce devices per gateway from 200 to ~143
  • Final result: 96% average PDR (meets spec!)

This simulation prevented deploying 1,000 sensors at SF12, which would have achieved <1% delivery.
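The Aloha arithmetic above is easy to script, which makes it cheap to test other spreading-factor mixes before running a full simulation. This is the pure-Aloha model, where PDR = S/G = e^(-2G); note that e^(-0.082) evaluates to roughly 92%, and the 93% quoted above comes from rounding S and G separately:

```python
import math

def aloha_pdr(num_sensors, interval_s, airtime_s):
    """Pure-Aloha delivery estimate: G = offered load, PDR = e^(-2G)."""
    g = num_sensors * (1.0 / interval_s) * airtime_s
    pdr = math.exp(-2.0 * g)
    return g, pdr

g7, pdr7 = aloha_pdr(600, 600, 0.041)   # SF7: 600 sensors, 1 pkt / 10 min
g12, pdr12 = aloha_pdr(1000, 600, 2.4)  # SF12: 1000 sensors
print(f"SF7:  G={g7:.3f}, PDR={pdr7:.0%}")   # G=0.041, PDR ~92%
print(f"SF12: G={g12:.1f}, PDR={pdr12:.2%}") # G=4.0,  PDR ~0.03%
```

LoRaWAN is not a perfect Aloha channel (there are multiple channels and some capture effect), so treat this as a pessimistic screening model, not a replacement for simulation.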

The table below shows how to interpret the gap between simulated and measured metrics during pilot validation:

Metric        | Simulation | Measured | Delta | Acceptable? | Action
PDR           | 95%        | 93%      | 2%    | ✓ Yes       | Within measurement noise - deploy
PDR           | 95%        | 88%      | 7%    | ⚠ Marginal  | Investigate propagation assumptions, refine model
PDR           | 95%        | 78%      | 17%   | ✗ No        | Fundamental model error - do not trust simulation
Latency       | 120 ms     | 145 ms   | 21%   | ✓ Yes       | Acceptable variance (140-160 ms spikes)
Latency       | 120 ms     | 350 ms   | 192%  | ✗ No        | Missing congestion or retry overhead
Energy/packet | 50 mJ      | 68 mJ    | 36%   | ⚠ Marginal  | Check for idle listening, failed TX retries

Validation criteria:

  • Excellent (<5% error): Simulation model is accurate, proceed with full deployment
  • Good (5-10% error): Minor calibration needed, deploy with monitoring
  • Poor (>10% error): Revisit assumptions - propagation model, traffic pattern, MAC parameters likely wrong

Why some divergence is expected:

  • Real hardware has manufacturing variance (±5% RX sensitivity)
  • Environment changes (people moving, doors opening)
  • Unmodeled interference (neighboring Wi-Fi, microwave ovens)
  • Temperature effects on radio performance

When to re-calibrate: If the pilot deployment differs from simulation by more than 10%, measure RSSI at 5-10 distances, calculate the actual path loss exponent, update the simulation, and re-run. Iterate until the error falls below 5%.

Common Mistake: Using Identical Traffic Patterns for All Nodes in Simulation

What they do wrong: Engineers configure an NS-3 simulation in which all 500 sensors start transmitting at exactly the same time, synchronized every 60 seconds. "It's simpler to configure and shouldn't matter."

Why it fails: Synchronized transmissions create collision storms. At t=0, t=60, t=120, all 500 packets attempt transmission simultaneously, overwhelming CSMA/CA backoff. Real deployments randomize transmission timing to spread load.

Demonstration:

  • Synchronized: 500 nodes TX at t=0. Collision probability approaches 100% (all competing in same 50 ms window). Measured PDR: 23%
  • Random jitter (±30s): TX times uniformly distributed over 60±30 sec. Collision probability: ~5%. Measured PDR: 94%

Correct approach:

// Add a random start-time offset for each sensor. Use the NS-3 RNG rather
// than rand() so runs stay reproducible under RngSeedManager seeds.
Ptr<UniformRandomVariable> jitter = CreateObject<UniformRandomVariable>();
jitter->SetAttribute("Min", DoubleValue(1.0));
jitter->SetAttribute("Max", DoubleValue(61.0));
app.SetAttribute("StartTime", TimeValue(Seconds(jitter->GetValue())));  // 1-61 s

// Add random jitter to the periodic interval (60 s nominal, ±5 s)
app.SetAttribute("Interval", StringValue("ns3::UniformRandomVariable[Min=55|Max=65]"));

Real-world example: A data center IoT deployment simulated 2,000 temperature sensors with synchronized 5-minute reporting. Simulation showed 91% PDR (acceptable). Actual deployment: 64% PDR (failed SLA). Root cause: Real HVAC control logic triggered all sensors to report simultaneously when temperature exceeded threshold. Simulation had assumed independent random timing. Fix: Add ±60 second jitter to break synchronization. Post-fix PDR: 93%. Lesson: Model realistic traffic patterns, including correlated events (all sensors in Zone A report when zone temperature alarm triggers).

16.8 How It Works

Network simulation methodology follows a structured four-phase workflow. In the requirements phase, you define performance metrics, scenarios to test, and network parameters based on your application needs. The model development phase creates layered representations of physical propagation, MAC protocols, network routing, and application traffic. The execution phase runs simulations with multiple random seeds, parameter sweeps, and statistical validation to ensure reproducible results. Finally, the validation phase compares simulation predictions against theoretical models, real-world measurements, and pilot deployments to confirm accuracy before scaling to production.

16.9 Concept Relationships

Prerequisites:

Builds Toward:

  • Network Traffic Analysis - Validating simulation predictions with real traffic
  • Performance Optimization - Applying simulation insights

Complements:

16.10 See Also

16.11 Try It Yourself

Setup: Install NS-3 or use the Cooja simulator (part of Contiki-NG for IoT)

Task: Model a 20-node Zigbee smart home network with:

  • 1 coordinator (gateway)
  • 10 sensors (temperature, motion) publishing every 60 seconds
  • 5 actuators (lights, locks) receiving commands
  • 4 routers for mesh coverage

Steps:

  1. Define topology: Random placement in 20m × 20m area
  2. Configure traffic: Periodic sensor data (100 bytes) + event-driven commands (50 bytes)
  3. Run simulation: 1000 seconds, 10 random seeds
  4. Measure: PDR, average latency, energy consumption per node
  5. Optimize: Adjust router placement to achieve PDR > 98%

What to Observe:

  • How does PDR change with node density?
  • Which nodes consume the most energy (hint: routers near coordinator)?
  • What happens if you reduce transmission power by 3 dB?

Expected Outcome: You should see PDR around 92-96% initially. Adding one strategically placed router should push it above 98%. This hands-on exercise demonstrates why simulation saves time versus trial-and-error with physical hardware.

16.12 What’s Next

| If you want to… | Read this |
| --- | --- |
| Apply simulation to design assessment scenarios | Network Design Assessment |
| Review foundational network design principles | Network Design Introduction |
| Learn which simulation tools to use | Network Simulation Tools |
| Practice with hands-on exercises | Network Design Exercises |
| Analyze traffic in real deployments | Network Traffic Analysis |