12  Edge & Fog Simulator

In 60 Seconds

The edge-fog simulator lets you experiment with workload placement across three tiers and observe real-time impacts on latency, bandwidth, and cost. Key insight from simulation: moving just video analytics from cloud to edge reduces latency from 200+ ms to under 20 ms while cutting bandwidth costs by 95% – but increases edge hardware costs by $50-200 per node. The optimal split for most IoT deployments processes 60-80% of data at edge/fog and sends only aggregated results and anomalies to the cloud.

Key Concepts
  • Simulation Fidelity: Degree to which a simulator accurately replicates real-world behavior; high-fidelity sims include network jitter, packet loss, and hardware constraints
  • Discrete-Event Simulation: Model where system state changes only at specific event times (packet arrival, sensor reading), enabling fast simulation of long time periods
  • Emulation vs. Simulation: Emulation runs actual software on virtualized hardware; simulation uses mathematical models — emulation is more accurate, simulation is faster
  • Latency Injection: Artificially introducing delays in a simulator to model WAN propagation, queuing, and processing delays at each tier
  • Network Topology Modeling: Representing bandwidth constraints, link delays, and failure probabilities between edge, fog, and cloud nodes in the simulator
  • Workload Replay: Using captured real-world sensor traces to drive simulation, ensuring results reflect realistic data patterns rather than synthetic inputs
  • Performance Metrics: Key simulator outputs — throughput (events/sec), latency distribution (P50/P95/P99), packet drop rate, and node CPU/memory utilization
  • What-If Analysis: Using simulation to compare architectural alternatives (edge-only vs. fog+cloud) before committing to hardware procurement

12.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Analyze latency trade-offs: Decompose total latency into transmission, propagation, processing, and queuing components for edge, fog, and cloud tiers
  • Calculate bandwidth cost savings: Compute monthly cloud bandwidth costs for high-volume IoT deployments and quantify savings from edge/fog preprocessing (e.g., 99.8% reduction for video analytics)
  • Evaluate simulation outputs: Interpret latency bar charts to identify bottleneck components and determine which computing tiers meet a given application’s real-time requirement
  • Compare tier selection across scenarios: Map at least 5 real-world IoT applications (autonomous vehicles, smart meters, video surveillance, industrial sensors, wearables) to their optimal computing tier using quantitative evidence
  • Design hybrid architectures: Construct multi-tier processing pipelines that assign each subsystem to the appropriate tier based on latency budget, data volume, and processing complexity

Minimum Viable Understanding
  • Latency equation drives tier selection: Total Latency = Transmission Time + Propagation Delay + Processing Time + Queuing Delay; edge minimizes propagation (0 ms network hop), fog balances all four terms, and cloud maximizes processing capacity at the cost of 5-25 ms propagation per 1000 km
  • Bandwidth cost compounds at scale: A single camera sending 500 KB frames at 1 fps generates 1.3 TB/month; edge preprocessing that emits only 1 KB alerts reduces cloud ingestion cost from $117/month to $0.23/month per device – a 99.8% savings
  • Complexity sets a hard boundary: Edge devices handle threshold checks and simple aggregation (under 10 ms), but ML inference pushes edge latency above 50 ms, making fog (20-80 ms for medium complexity) or cloud the only viable option for advanced analytics
  • No single tier wins every scenario: Autonomous vehicles require edge (less than 10 ms), video analytics need fog (50-100 ms with ML capability), and cross-site trend analysis belongs in the cloud (tolerance over 100 ms, unlimited compute)
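The per-camera bandwidth arithmetic above can be checked in a few lines. This is a sketch assuming the chapter's $0.09/GB ingestion price, a 30-day month, and decimal KB-to-GB conversion; the function name is illustrative:

```python
PRICE_PER_GB = 0.09  # assumed cloud ingestion price, $/GB

def monthly_ingest_cost(payload_kb, requests_per_day, days=30):
    """Monthly cloud bandwidth cost for a single device."""
    gb_per_month = payload_kb * requests_per_day * days / 1e6  # KB -> GB (decimal)
    return gb_per_month * PRICE_PER_GB

raw_frames = monthly_ingest_cost(500, 86_400)  # 500 KB frames at 1 fps
edge_alerts = monthly_ingest_cost(1, 86_400)   # 1 KB alert summaries instead
print(f"${raw_frames:.2f} vs ${edge_alerts:.2f}")  # $116.64 vs $0.23 (~99.8% less)
```

The exact figures come out just under the chapter's rounded $117 and $0.23.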

Hey everyone! Sammy the Sensor here with our coolest adventure yet – you get to be the architect of a smart city!

The Three Delivery Services

Imagine your city has three mail delivery services:

  • Edge Express (a robot on your street): Super fast delivery (under 10 seconds!) but can only carry small packages and does simple tasks
  • Fog Courier (a bicycle messenger in your neighborhood): Pretty fast (about 30 seconds) and can handle medium packages with some smarts
  • Cloud Postal Service (a big delivery truck from across the country): Slow (takes minutes!) but can carry HUGE packages and has the biggest brain

Your Challenge: Match the Right Service!

| What Needs Delivering? | Which Service? | Why? |
|---|---|---|
| “STOP! A cat is in the road!” | Edge Express | Must be instant – no time to wait! |
| “What’s the weather pattern this week?” | Fog Courier | Needs neighborhood data, but not urgent |
| “What will weather be like next year?” | Cloud Postal Service | Needs ALL the data, and we can wait |

Try This at Home!

  1. Think of 5 things in your house that are “smart” (like a thermostat, doorbell camera, or game console)
  2. For each one, decide: Does it need Edge Express, Fog Courier, or Cloud Postal?
  3. Ask yourself: “What happens if the internet goes down?” If the answer is “something bad,” it needs Edge Express!

Max the Microcontroller says: “I’m small but mighty! I might not be as smart as the cloud, but I’m RIGHT HERE when you need me. When a smoke alarm needs to sound, you don’t want to wait for a letter from across the country!”

Remember: The best architect picks the right delivery service for each job – not the fanciest one!

Why a Simulator?

Reading about latency and bandwidth in a textbook is like reading about swimming without getting in the water. A simulator lets you experiment with real numbers and see the consequences immediately.

What the Numbers Mean:

| Term | Plain English | Example |
|---|---|---|
| Latency | How long you wait for an answer | Like waiting for a web page to load |
| Data Size | How much information you’re sending | A text message (tiny) vs. a video (huge) |
| Bandwidth | How wide your internet “pipe” is | Garden hose (narrow) vs. fire hose (wide) |
| Distance | How far data must travel | Across the room vs. across the country |
| Processing Complexity | How hard the calculation is | Adding 2+2 (easy) vs. recognizing a face (hard) |

How to Read the Results:

  1. Green bar = This tier meets your latency requirement (safe to use)
  2. Red bar = This tier is too slow for your needs (avoid for this use case)
  3. Shorter bar = Lower latency (faster response)
  4. Cost numbers = Monthly bill for sending data through that tier

First Experiment to Try: Set “Autonomous Car” as the scenario and notice that only edge computing has a green bar. Then switch to “Smart Meter” and watch how cloud becomes the green option. This shows that different applications need different tiers.


12.2 Why Simulation Matters for Architecture Decisions

Before diving into the simulator, it helps to understand what you are actually simulating and why it matters for real-world IoT deployments.

12.2.1 The Latency Equation

Every data transaction in a distributed IoT system incurs four types of delay. The simulator models each component:

Flowchart showing the four components of total latency in IoT systems: Transmission time (data size divided by bandwidth, example 4ms), Propagation delay (distance divided by speed of light, example 1ms), Processing time (complexity times tier factor, example 15ms), and Queuing delay (network congestion, variable 0-50ms), summing to a total latency of 25ms.

Each computing tier optimizes different terms:

  • Edge: Minimizes propagation delay (co-located with sensor) and transmission time (no network hop), but has limited processing capability
  • Fog: Moderate propagation delay (regional), better processing power than edge, aggregates data from multiple edges
  • Cloud: Maximum propagation delay (continental distance), but unlimited processing capability and storage

Total latency is the sum of four components: \(L_{total} = t_{trans} + t_{prop} + t_{proc} + t_{queue}\), where transmission time is \(t_{trans} = \frac{D}{B}\) (data size \(D\) divided by bandwidth \(B\)), propagation delay is \(t_{prop} = \frac{d}{c}\) (distance \(d\) divided by speed of light in fiber \(c \approx 200,000 \text{ km/s}\)), processing time \(t_{proc}\) depends on computational complexity, and queuing delay \(t_{queue}\) varies with network congestion.

Worked example: A 100 KB image sent 2,000 km over a 50 Mbps link to the cloud with medium complexity processing:

  • Transmission: \(t_{trans} = \frac{800 \text{ Kb}}{50 \text{ Mbps}} = 16 \text{ ms}\)
  • Propagation: \(t_{prop} = \frac{2000 \text{ km}}{200{,}000 \text{ km/s}} = 10 \text{ ms}\) (one-way, 20 ms round-trip)
  • Processing (cloud GPU): \(t_{proc} = 50 \text{ ms}\) (ML inference)
  • Queuing: \(t_{queue} \approx 10 \text{ ms}\) (network congestion)
  • Total cloud latency: \(16 + 20 + 50 + 10 = 96 \text{ ms}\)

For edge processing (same device, no network hop):

  • Transmission: \(0 \text{ ms}\) (local), Propagation: \(0 \text{ ms}\), Processing (edge NPU): \(15 \text{ ms}\), Queuing: \(0 \text{ ms}\)
  • Total edge latency: \(15 \text{ ms}\) (6.4× faster)
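The latency equation and both worked examples can be reproduced with a small model. This is a sketch assuming the chapter's constants (fiber propagation at ~200,000 km/s, i.e. 200 km per ms, and round-trip propagation); the function name and defaults are illustrative:

```python
FIBER_KM_PER_MS = 200.0  # ~200,000 km/s in fiber

def total_latency_ms(data_kb, bandwidth_mbps, distance_km,
                     processing_ms, queuing_ms=0.0, round_trip=True):
    """L_total = t_trans + t_prop + t_proc + t_queue (Section 12.2.1)."""
    t_trans = data_kb * 8 / bandwidth_mbps      # kilobits / Mbps gives ms directly
    t_prop = distance_km / FIBER_KM_PER_MS      # one-way propagation in ms
    if round_trip:
        t_prop *= 2
    return t_trans + t_prop + processing_ms + queuing_ms

# Worked example: 100 KB over 50 Mbps, 2,000 km to the cloud, ML inference
cloud = total_latency_ms(100, 50, 2000, processing_ms=50, queuing_ms=10)
edge = total_latency_ms(0, 1, 0, processing_ms=15)  # local: network terms are zero
print(cloud, edge)  # 96.0 ms vs 15.0 ms -> the 6.4x edge advantage
```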

12.2.2 Why Each Parameter Matters

Understanding the simulator’s input parameters helps you design real architectures:

Mind map showing the four simulator input parameters: Data Size (small 1-10 KB for sensor readings, medium 10-100 KB for audio clips, large 100-1000 KB for images and video), Network Bandwidth (low 1-10 Mbps for cellular IoT and LoRaWAN, medium 10-100 Mbps for Wi-Fi and LTE, high 100-1000 Mbps for 5G and fiber), Distance (near 10-50 km for same building, regional 50-500 km, far 500-5000 km cross-continent), and Complexity (low for threshold checks, medium for pattern matching, high for ML inference, very high for model training).


12.3 Interactive: Edge-Fog-Cloud Latency Simulator

Explore how data size, processing complexity, network bandwidth, and distance affect total latency and bandwidth costs across the three-tier computing hierarchy. This simulator helps you understand when to use edge, fog, or cloud processing based on your application’s real-time requirements.

Interactive Animation: This animation is under development.

12.4 How to Use This Simulator

12.4.1 Step-by-Step Guide

  1. Select a reference scenario from the dropdown to see recommended values for typical IoT use cases
  2. Adjust data size (1-1000 KB) to model your data payload
  3. Set network bandwidth (1-1000 Mbps) to model your network capacity
  4. Change distance to cloud (10-5000 km) to understand propagation delay impact
  5. Modify processing complexity to understand compute trade-offs (edge struggles with “Very High”, cloud excels)
  6. Set latency requirement to see which tiers meet your application’s real-time needs
  7. Adjust requests per day (1-86,400) to calculate monthly bandwidth costs accurately

12.4.2 Guided Experiments

To build intuition, try these experiments in order:

Experiment 1: The Distance Effect
  1. Set data size to 50 KB, bandwidth to 100 Mbps, complexity to Medium
  2. Set distance to 50 km and note latency for each tier
  3. Change distance to 2000 km – what happened to cloud latency?
  4. Change distance to 5000 km – is cloud still viable for a 100 ms requirement?

Expected Insight: Propagation delay dominates for cloud computing at long distances. Edge latency barely changes because data stays local.

Experiment 2: The Complexity Cliff
  1. Set data size to 100 KB, bandwidth to 50 Mbps, distance to 200 km
  2. Start with complexity = Low and observe all three tiers
  3. Increase to Medium, then High, then Very High
  4. At which complexity level does edge fail to meet a 50 ms requirement?

Expected Insight: Edge devices have limited compute. Beyond medium complexity, fog or cloud becomes necessary even when latency is critical, creating a design tension that requires hybrid approaches.

Experiment 3: The Bandwidth Tax
  1. Set a video analytics scenario: 500 KB data, High complexity, 100 km distance
  2. Set requests per day to 86,400 (one per second)
  3. Compare monthly bandwidth costs across all three tiers
  4. Now reduce to 1,000 requests per day – how do costs change?

Expected Insight: For high-volume, large-payload applications, edge processing saves enormous bandwidth costs by processing locally and sending only results upstream.


12.5 Understanding the Results

12.5.1 How to Read the Latency Bars

The simulator produces a bar chart comparing latency across three tiers. Here is how to interpret the results:

Diagram showing the four components of total IoT latency: transmission time, propagation delay, processing time, and queuing delay with their contribution to total latency across edge, fog, and cloud tiers

12.5.2 The Cost vs. Latency Trade-off

A critical insight the simulator reveals is that the cheapest tier is rarely the fastest, and the fastest is rarely the cheapest. Real architecture decisions balance both:

| Tier | Latency Advantage | Cost Advantage | Best For |
|---|---|---|---|
| Edge | Lowest latency (1-20 ms) | Lowest bandwidth cost (data stays local) | Safety-critical, high-volume, privacy-sensitive |
| Fog | Moderate latency (20-100 ms) | Moderate cost (regional aggregation) | Multi-sensor fusion, regional analytics, moderate real-time |
| Cloud | Highest latency (100-500+ ms) | Lowest compute cost (shared infrastructure) | Batch analytics, model training, long-term storage, cross-site |


12.6 Real-World Reference Scenarios

Use these as starting points in the simulator to understand typical IoT architectures:

| Scenario | Data Size | Complexity | Distance | Latency Req | Best Tier | Rationale |
|---|---|---|---|---|---|---|
| Autonomous Car | 50 KB | Very High | 50 km | 10 ms | Edge | Life-or-death: 200 ms cloud round-trip = 5+ meters of uncontrolled travel at highway speed |
| Smart Meter | 1 KB | Low | 2000 km | 500 ms | Cloud | Low volume, latency-tolerant, benefits from centralized analytics across millions of meters |
| Video Analytics | 500 KB | High | 100 km | 50 ms | Fog | Edge too weak for ML inference; cloud too slow and too expensive for video bandwidth |
| Industrial Sensor | 10 KB | Medium | 200 km | 20 ms | Fog | Real-time anomaly detection requires regional aggregation; edge lacks multi-sensor context |
| Wearable Health | 5 KB | Low | 1000 km | 200 ms | Fog/Cloud | Simple threshold alerts at edge; detailed health analysis aggregated at fog or cloud |
| Agricultural Drone | 200 KB | High | 300 km | 100 ms | Fog | Aerial imagery too large for edge; time-sensitive crop analysis for same-day spraying decisions |
| Retail POS | 2 KB | Low | 1500 km | 1000 ms | Cloud | Latency tolerance is high; cloud provides centralized inventory and analytics |
| Robotic Arm | 15 KB | Medium | 10 km | 5 ms | Edge | Sub-10 ms requirement for collision avoidance; even fog introduces unacceptable delay |

12.6.1 How Different Industries Choose Tiers

Diagram showing how different industries map to edge, fog, and cloud tiers. Edge tier (under 20ms) contains autonomous vehicles, robotic arms, and emergency shutoffs. Fog tier (20-100ms) contains video analytics, industrial monitoring, and agricultural drones. Cloud tier (100ms+) contains smart meters, retail analytics, and historical reporting. Arrows show data flow from edge to fog (aggregated events, production stats) and from fog to cloud (daily summaries, maintenance reports, crop maps).


12.7 Common Pitfalls and Misconceptions

Pitfalls When Interpreting Simulation Results
  • Assuming edge is always the best tier: Students see edge’s low latency and conclude it should be used for everything. However, edge devices have limited compute (try “Very High” complexity and watch edge spike above fog), they lack cross-device context for multi-sensor fusion, and each edge device adds per-unit hardware cost that cloud amortizes across millions of devices. Rule of thumb: if your latency requirement is above 200 ms and complexity is Low, cloud is almost always more cost-effective.

  • Ignoring bandwidth costs entirely: Many architects focus only on latency and forget that bandwidth has a monthly recurring cost. A single camera sending 500 KB frames at 1 fps generates 1.3 TB/month. At $0.09/GB cloud ingestion, that is $117/month per camera. With 100 cameras, you pay $11,700/month just for bandwidth. Edge processing that sends only 1 KB alert summaries reduces this to $23/month – a 99.8% savings. Always check both the latency bars and the cost panel before choosing a tier.

  • Confusing propagation delay with transmission time: Propagation delay depends on physical distance (speed of light: ~200,000 km/s in fiber), while transmission time depends on data size divided by bandwidth. Doubling the distance doubles propagation delay but has zero effect on transmission time. Doubling the data size doubles transmission time but has zero effect on propagation delay. The simulator lets you isolate each factor by changing one slider at a time.

  • Treating simulation results as exact production numbers: The simulator models idealized conditions – no packet loss, no variable congestion, no hardware failures. Real-world edge devices may have thermal throttling that increases processing time by 30-50%. Real-world fog networks experience congestion spikes during peak hours. Always add a 20-40% safety margin to simulator latency numbers when making production architecture decisions.

  • Overlooking hybrid architectures: Students often select a single tier for their entire system. In practice, most IoT deployments use all three tiers simultaneously – edge for safety-critical real-time decisions (under 10 ms), fog for regional analytics and ML inference (20-100 ms), and cloud for batch processing, model training, and cross-site analytics (100+ ms). The simulator’s scenario comparison feature is designed to reveal exactly this pattern.


12.8 Knowledge Check

12.9 Question 1: Latency Components

An IoT sensor sends 100 KB of data over a 50 Mbps link to a cloud server 1000 km away. The cloud server processes with medium complexity. Which latency component contributes the MOST to total delay?

    A) Transmission time (data size / bandwidth)
    B) Propagation delay (distance / speed of light)
    C) Processing time (complexity at cloud tier)
    D) Queuing delay (network congestion)

C) Processing time (complexity at cloud tier)

Let’s calculate each component:

  • Transmission: 100 KB / 50 Mbps = 800 Kb / 50 Mbps = 16 ms
  • Propagation: 1000 km / 200,000 km/s = 5 ms one-way, 10 ms round-trip
  • Processing: Cloud with medium complexity typically takes 50-100 ms
  • Queuing: Variable but typically 5-20 ms

Processing time dominates at 50-100 ms, followed by transmission at 16 ms. This illustrates why processing complexity is often the largest contributor to total latency – not just distance, as many students assume.
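The component breakdown can be re-derived in a couple of lines, assuming the chapter's constants (fiber at ~200 km per ms, round-trip propagation):

```python
# Re-deriving Question 1's latency components
trans_ms = 100 * 8 / 50        # 100 KB = 800 kilobits over 50 Mbps -> 16 ms
prop_ms = 2 * 1000 / 200       # 1000 km round-trip at ~200 km/ms -> 10 ms
proc_range_ms = (50, 100)      # cloud tier, medium complexity (chapter's range)
queue_range_ms = (5, 20)       # typical congestion band

# Even the low end of processing (50 ms) exceeds transmission (16 ms)
# and propagation (10 ms) combined, so answer C is correct.
print(trans_ms, prop_ms)
```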

12.10 Question 2: Tier Selection

A factory has vibration sensors on 50 machines, each sending 10 KB readings every second. The plant manager needs anomaly detection within 30 ms and also wants weekly trend reports. Which architecture is correct?

    A) All processing at the edge – fastest response time
    B) All processing in the cloud – simplest architecture
    C) Edge for anomaly detection, cloud for weekly reports
    D) Fog for anomaly detection, cloud for weekly reports

C) Edge for anomaly detection, cloud for weekly reports

The 30 ms latency requirement for anomaly detection is too tight for fog (typically 20-80 ms for this complexity) and far too tight for cloud (100+ ms). Edge processing can detect vibration anomalies in under 10 ms using simple threshold or FFT analysis.

However, weekly trend reports require correlating data across all 50 machines over 7 days – a complex analytical task that benefits from cloud’s unlimited compute and storage. Edge devices lack the memory and processing power for this scale of analysis.

Option D is tempting but the 30 ms requirement is borderline for fog. Edge is the safer choice for the real-time component. Option A fails because edge cannot efficiently produce cross-machine trend reports.

12.11 Question 3: Cost Analysis

You are designing a video surveillance system with 200 cameras, each producing 500 KB frames at 1 fps. Using the simulator, you calculate cloud bandwidth would cost $23,400/month. An edge processing approach sends only 1 KB alert summaries. What is the approximate monthly cloud bandwidth cost with edge processing?

    A) $468/month (2% of original)
    B) $46.80/month (0.2% of original)
    C) $4.68/month (0.02% of original)
    D) $2,340/month (10% of original)

B) $46.80/month (0.2% of original)

The calculation:

  • Original: 200 cameras x 500 KB x 86,400 frames/day = 8.64 TB/day = ~259 TB/month. At $0.09/GB: ~$23,400/month
  • With edge processing: Each camera sends only 1 KB alerts instead of 500 KB frames, a 500:1 reduction
  • New cost: $23,400 / 500 = $46.80/month

This 99.8% cost reduction is one of the most compelling arguments for edge computing in high-bandwidth applications. The simulator’s cost panel lets you verify this by adjusting data size from 500 KB to 1 KB while keeping the same number of requests.

Note: This ignores the one-time cost of edge compute hardware per camera, which must be factored into the total cost of ownership.
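The same calculation can be scripted, assuming $0.09/GB and a 30-day month; the exact values land slightly below the rounded figures used in the question:

```python
PRICE_PER_GB = 0.09           # assumed cloud ingestion price, $/GB
CAMERAS, FRAME_KB, ALERT_KB = 200, 500, 1
REQS_PER_MONTH = 86_400 * 30  # one frame per second, 30-day month

raw_cost = CAMERAS * FRAME_KB * REQS_PER_MONTH / 1e6 * PRICE_PER_GB
edge_cost = raw_cost / (FRAME_KB / ALERT_KB)   # 500:1 payload reduction
print(f"${raw_cost:,.0f} -> ${edge_cost:.2f}")
# $23,328 -> $46.66; the question rounds these to $23,400 and $46.80
```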

12.12 Question 4: Parameter Sensitivity

In the simulator, which single parameter change causes the LARGEST increase in the gap between edge latency and cloud latency?

    A) Doubling data size from 100 KB to 200 KB
    B) Doubling distance from 500 km to 1000 km
    C) Changing complexity from Low to Very High
    D) Halving bandwidth from 100 Mbps to 50 Mbps

B) Doubling distance from 500 km to 1000 km

The key insight is about the gap between edge and cloud, not absolute latency:

  • Data size doubling: Affects all tiers equally (all must transmit), so the gap barely changes
  • Distance doubling: Edge latency is unaffected (data stays local!), but cloud latency increases significantly due to propagation delay. This maximally widens the gap
  • Complexity increase: Actually narrows the gap for very high complexity because edge devices struggle more than cloud servers
  • Bandwidth halving: Affects cloud more than edge (edge keeps data local and transmits nothing over the network, so only the networked tiers slow down), but the effect is smaller than distance

Distance uniquely affects cloud but not edge, making it the most powerful differentiator between tiers. This is why edge computing becomes increasingly important as IoT deployments spread geographically.


12.13 Applying Simulator Insights to Real Designs

12.13.1 The Three-Question Framework

After experimenting with the simulator, apply these three questions to any IoT architecture decision:

Three-question decision framework for IoT architecture. Question 1: What is the latency budget? Under 20ms means edge mandatory, 20-100ms means fog viable, over 100ms means cloud acceptable. Question 2: What is the data volume? Over 100KB per request at more than 1 request per second means edge saves bandwidth costs; under 10KB infrequent means cloud is fine. Question 3: What processing is needed? Simple thresholds go to edge, ML inference to fog, model training to cloud. Final note: most real systems are hybrid with multiple tiers handling different tasks.

12.13.2 Worked Example: Designing a Smart Building System

Scenario: A 20-story office building with 500 environmental sensors (temperature, humidity, CO2), 100 security cameras, and 200 access control points.

Using the Simulator:

Subsystem Data Size Complexity Distance Latency Req Simulator Result Tier
Access control 2 KB Low 10 km 5 ms Only edge passes Edge
HVAC optimization 20 KB Medium 10 km 30 s All tiers pass; fog cheapest Fog
Security cameras 500 KB High 10 km 100 ms Edge too weak; fog passes Fog
Energy reporting 50 KB High 500 km 1 hour All pass; cloud cheapest Cloud
Fire detection 5 KB Low 10 km 1 ms Only edge passes Edge

Result: The building uses all three tiers, with each handling the subsystem it is best suited for. The simulator confirms what intuition suggests – but with exact numbers that justify the hardware budget to stakeholders.

Pro Tip: Optimize Data Placement Strategy

Do not blindly process everything at the edge or everything in the cloud. Use a tiered decision framework:

  1. Edge (< 10 ms budget): Process time-critical decisions at edge devices – safety shutoffs, collision avoidance, access control
  2. Fog (10-100 ms budget): Aggregate and filter data at fog layer – reduces bandwidth by 90-99% while supporting regional analytics and ML inference
  3. Cloud (> 100 ms budget): Send only insights, anomalies, or compressed summaries to cloud for long-term storage and cross-site analytics

Example: A video surveillance system should detect motion at the camera (edge), perform object recognition at the fog gateway, and send only identified security events to cloud – not raw 24/7 video streams. This reduces a 1 TB/day camera load to just 1 GB/day in cloud storage costs.


12.14 Knowledge Check

Test Your Understanding

Question 1: A factory safety system requires sub-10ms response time to halt a robotic arm when a worker enters a danger zone. Which computing tier is the ONLY viable option?

  a) Cloud computing with a fast internet connection
  b) Fog computing at a local gateway
  c) Edge computing at the sensor/actuator
  d) Any tier with sufficient processing power

c) Edge computing at the sensor/actuator. With a sub-10ms latency budget, propagation delay alone eliminates cloud (50-200ms round trip) and fog may be borderline (1-50ms depending on network hops). Only edge processing, with 0ms network propagation to the actuator, can reliably meet this constraint. Safety-critical systems must never depend on network availability for emergency responses.

Question 2: A single IP camera generating 500 KB frames at 1 fps produces ~1.3 TB/month. If edge processing reduces this to 1 KB alert messages, what is the approximate monthly bandwidth cost savings at $0.09/GB?

  a) $11.70 savings (from $117 to $105.30)
  b) $58.50 savings (50% reduction)
  c) ~$116.77 savings (from $117 to $0.23)
  d) $0 savings (same data just compressed)

c) ~$116.77 savings (from $117 to $0.23). The raw data costs: 1.3 TB = 1,300 GB × $0.09 = $117/month. With edge processing, each 500 KB frame is replaced by a 1 KB alert – a 500:1 payload reduction – so the camera sends 86,400 KB/day ≈ 2.6 GB/month, costing about 2.6 GB × $0.09 ≈ $0.23/month. This represents a 99.8% cost reduction. Scale this to 1,000 cameras and the savings reach ~$116,770/month.

Question 3: An IoT architecture processes weather sensor data for both real-time greenhouse control AND monthly climate trend analysis. Which design is most appropriate?

  a) Process everything at the edge for lowest latency
  b) Send everything to the cloud for maximum compute power
  c) Hybrid: edge for real-time control, cloud for trend analytics
  d) Fog tier for all processing as a compromise

c) Hybrid: edge for real-time control, cloud for trend analytics. Real-time greenhouse control (adjusting vents, irrigation) needs sub-second response and works with simple threshold logic – perfect for edge. Monthly climate trends require aggregating data across multiple greenhouses over long periods with statistical analysis – cloud excels here. No single tier optimizes both workloads. The fog layer can serve as an intermediary, aggregating raw sensor data before sending summaries to the cloud.


Scenario: A factory has 100 vibration sensors sampling at 5 kHz with 16-bit resolution. Calculate bandwidth and processing requirements for edge, fog, and cloud tiers.

Given:

  • 100 sensors × 5,000 samples/sec × 2 bytes/sample = 1,000,000 bytes/sec = 1 MB/sec raw data rate
  • Edge processing: Each sensor has ARM Cortex-M4 @ 80 MHz
  • Fog gateway: Intel i5 with 4 cores @ 2.8 GHz
  • Cloud uplink: 10 Mbps (1.25 MB/sec)

Step 1: Edge processing capability

Each sensor performs a local FFT before transmitting:

  • 512-point FFT on Cortex-M4: ~20 ms processing time
  • Can process: 1,000 ms / 20 ms = 50 FFTs per second
  • Sensor generates 5,000 samples/sec, needs: 5,000 / 512 ≈ 9.8 FFTs/sec
  • Result: Edge CPU sufficient (50 > 9.8) ✓

Data reduction at edge:

  • Raw: 5,000 samples × 2 bytes = 10,000 bytes/sec per sensor
  • After FFT: 256 frequency bins × 4 bytes = 1,024 bytes/sec per sensor
  • Reduction: 10× per sensor

Step 2: Fog aggregation

100 sensors after edge FFT: 100 × 1,024 bytes/sec = 102.4 KB/sec to fog

Fog performs anomaly detection and aggregation:

  • Input: 102.4 KB/sec (all sensors)
  • Output: Only anomalies (99% normal) = 1.024 KB/sec to cloud
  • Reduction: 100× at fog layer

Step 3: Cloud bandwidth check

  • Fog to cloud: 1.024 KB/sec = 0.008 Mbps
  • Available: 10 Mbps
  • Utilization: 0.08% (plenty of headroom) ✓

Total system data reduction:

  • Raw: 1 MB/sec
  • After edge FFT: 102.4 KB/sec (10× reduction)
  • After fog filtering: 1.024 KB/sec (100× additional reduction)
  • Total: 1,000× reduction (1 MB/sec → 1 KB/sec)

Key insight: Hierarchical processing achieves multiplicative data reduction. Edge preprocessing reduces 10×, fog filtering reduces another 100×, for total 1,000× reduction before cloud transmission.
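The multiplicative reduction chain can be verified directly; a sketch using the section's values (100 sensors, 5 kHz × 16-bit sampling, 256 retained FFT bins, ~1% anomaly rate):

```python
# Hierarchical data reduction for the 100-sensor vibration example
SENSORS = 100
RAW_BPS = 5_000 * 2           # per sensor: 5 kHz x 2 bytes = 10,000 B/s
FFT_BPS = 256 * 4             # per sensor after FFT: 256 bins x 4 B = 1,024 B/s
ANOMALY_FRACTION = 0.01       # fog forwards only ~1% anomalous windows

raw = SENSORS * RAW_BPS                 # 1,000,000 B/s entering the edge tier
to_fog = SENSORS * FFT_BPS              # 102,400 B/s after edge FFT (~10x less)
to_cloud = to_fog * ANOMALY_FRACTION    # 1,024 B/s after fog filtering (100x less)

print(f"{raw / to_cloud:.0f}x end-to-end")  # ~977x; the chapter rounds to 1,000x
```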

Use this decision tree to determine which processing tier should handle each component of your IoT workload:

START: New data processing requirement

Q1: What is the latency requirement?
├─ <10ms → Edge mandatory (local processing only)
├─ 10-100ms → Continue to Q2
└─ >100ms → Continue to Q3

Q2: Can edge devices handle the computation?
├─ Yes (simple thresholds, basic filtering) → Edge
└─ No (ML inference, FFT, aggregation) → Fog

Q3: Is continuous cloud connectivity reliable?
├─ Yes → Continue to Q4
└─ No (rural, mobile, intermittent) → Fog for offline operation

Q4: What is the data volume?
├─ <10 KB/sec per device → Cloud direct (bandwidth cheap)
├─ 10 KB - 1 MB/sec → Fog aggregation recommended
└─ >1 MB/sec → Edge or fog aggregation mandatory

Q5: Does data contain PII or regulated information?
├─ Yes (GDPR, HIPAA, etc.) → Edge or fog (local processing)
└─ No → Cloud acceptable
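The decision tree above can be made executable as a small function. This is a sketch of the Q1-Q5 logic; the function name, argument defaults, and thresholds are taken from the tree, except that the privacy check (Q5) is applied before the volume check here so a PII hit always stays local:

```python
def choose_tier(latency_ms, edge_capable=True, reliable_link=True,
                volume_kb_per_s=1.0, has_pii=False):
    """Sketch of the Q1-Q5 tier decision tree; first matching rule wins."""
    if latency_ms < 10:
        return "edge"                              # Q1: edge mandatory
    if latency_ms <= 100:
        return "edge" if edge_capable else "fog"   # Q2: compute capability
    if not reliable_link:
        return "fog"                               # Q3: offline operation
    if has_pii:
        return "fog"                               # Q5: keep data local/regional
    if volume_kb_per_s < 10:
        return "cloud"                             # Q4: bandwidth is cheap
    return "fog" if volume_kb_per_s <= 1000 else "edge"

print(choose_tier(1000, volume_kb_per_s=0.03))  # smart thermostat -> cloud
print(choose_tier(5))                           # robot arm -> edge
print(choose_tier(75, edge_capable=False))      # traffic camera -> fog
```

The three sample calls reproduce the worked framework applications that follow.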

Example application of framework:

Smart thermostat decision:

  • Q1: Latency? ~1 second acceptable → Continue to Q3
  • Q3: Connectivity? Reliable home Wi-Fi → Continue to Q4
  • Q4: Data volume? 1 reading every 30 seconds = ~30 bytes/sec → Cloud direct ✓
  • Result: Cloud-only architecture appropriate

Factory robot arm decision:

  • Q1: Latency? <5ms required → Edge mandatory
  • Result: Edge processing for collision detection; fog/cloud for analytics only

Smart city traffic camera decision:

  • Q1: Latency? 50-100ms acceptable → Continue to Q2
  • Q2: Edge capability? Video ML inference needs GPU → Fog
  • Q4: Data volume? 5 Mbps video stream × 1,000 cameras = 5 Gbps → Fog aggregation mandatory
  • Result: Fog gateway with GPU for local processing, send alerts to cloud

Common Mistake: Underestimating Peak Burst Bandwidth

The mistake: Designing fog-to-cloud links based on average bandwidth instead of peak burst requirements.

Real scenario: Smart building with 500 IoT devices (lights, HVAC, sensors)

Normal operation:

  • Each device reports status every 60 seconds: 100 bytes/reading
  • Average bandwidth: 500 devices × 100 bytes / 60 sec = 833 bytes/sec (negligible)
  • Architect provisions: 1 Mbps uplink (“plenty of headroom”)

What actually happens:

| Event | Devices Reporting | Data Volume | Duration | Bandwidth Needed |
|---|---|---|---|---|
| Fire alarm test | All 500 simultaneously | 500 × 100 bytes = 50 KB | 1 second | 400 Kbps |
| HVAC schedule change | All 150 HVAC units | 150 × 500 bytes = 75 KB | 1 second | 600 Kbps |
| Security camera motion | 20 cameras send video | 20 × 1 Mbps each | 30 seconds | 20 Mbps (20× over capacity!) |

Result: Camera footage gets corrupted, security event missed, critical alarm delayed.

How to calculate correctly:

  1. Identify burst scenarios: List all scenarios where many devices report simultaneously

    • Fire alarm (all smoke detectors)
    • Power restoration (all devices reconnect)
    • Security event (all cameras activate)
    • System reboot (all devices re-register)
  2. Calculate peak bandwidth per scenario:

    peak = max(device_count × data_size / burst_duration)
    
    Fire alarm: 500 × 100 bytes / 1s = 50 KB/s = 400 Kbps
    Camera burst: 20 × 1 Mbps = 20 Mbps
  3. Apply 2-3× safety margin:

    provisioned_bandwidth = max(all_peaks) × 3
    = 20 Mbps × 3 = 60 Mbps minimum
  4. Result for this building: Need 60 Mbps uplink, not 1 Mbps

Rule of thumb: Provision fog-to-cloud links for 3× your worst-case burst, not average load.
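The four-step calculation above can be condensed into a few lines; a sketch using the smart-building burst scenarios (the function name and dictionary are illustrative):

```python
def provisioned_uplink_mbps(burst_scenarios_mbps, safety_factor=3):
    """Size the fog-to-cloud link for the worst-case burst, not the average."""
    return max(burst_scenarios_mbps.values()) * safety_factor

bursts = {
    "fire_alarm_test": 500 * 100 * 8 / 1e6,  # 500 devices x 100 B in 1 s = 0.4 Mbps
    "hvac_reschedule": 150 * 500 * 8 / 1e6,  # 150 units x 500 B in 1 s = 0.6 Mbps
    "camera_motion":   20 * 1.0,             # 20 cameras x 1 Mbps streams = 20 Mbps
}
print(provisioned_uplink_mbps(bursts))  # 60.0 Mbps, vs the 1 Mbps average-based plan
```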

12.15 Summary

12.15.1 Key Takeaways

  1. Latency is physics, not preference: The speed of light creates unavoidable minimum delays. The simulator makes this concrete by showing that distance uniquely affects cloud latency while leaving edge latency unchanged
  2. Bandwidth costs compound: High-volume, large-payload applications (especially video) incur massive cloud bandwidth costs. Edge processing can reduce monthly costs by 99%+ by sending only results upstream
  3. Complexity determines viability: Edge devices excel at simple tasks but struggle with ML inference or complex analytics. The simulator reveals the exact complexity threshold where edge becomes slower than fog
  4. Most real systems are hybrid: The simulator demonstrates that no single tier is optimal for all subsystems. Real IoT architectures use edge for real-time safety, fog for regional intelligence, and cloud for batch analytics
  5. Numbers beat intuition: The simulator transforms qualitative arguments (“edge is faster”) into quantitative evidence (“edge delivers 8 ms vs. fog’s 45 ms for this scenario”) that drives engineering decisions and budget approvals

12.15.2 Quick Reference

| Decision Factor | Edge Winner | Fog Winner | Cloud Winner |
|---|---|---|---|
| Latency < 20 ms | Always | Never | Never |
| Complexity = Very High | Never | Sometimes | Usually |
| Data > 100 KB, high volume | Bandwidth savings | Aggregation | Storage |
| Privacy-sensitive data | Local processing | Regional compliance | After anonymization |
| Cross-site analytics | Cannot do | Limited | Best option |

12.16 Knowledge Check

12.17 What’s Next?

| Topic | Chapter | Description |
|---|---|---|
| Use Cases | Edge & Fog Use Cases | Apply simulator insights to real-world deployments across manufacturing, autonomous vehicles, and smart cities |
| Decision Framework | Decision Framework | Formalize tier selection with structured decision trees and quantitative scoring models |
| Common Pitfalls | Edge & Fog Pitfalls | Avoid the eight most common mistakes in edge/fog deployments, from retry logic to clock synchronization |