11 Edge & Fog: Hands-On Labs
Lab execution time can be estimated before starting runs:
\[ T_{\text{total}} = N_{\text{runs}} \times (t_{\text{setup}} + t_{\text{run}} + t_{\text{review}}) \]
Worked example: With 5 runs and per-run times of 4 min setup, 6 min execution, and 3 min review, total lab time is \(5\times(4+6+3)=65\) minutes. This prevents under-scoping and helps schedule complete experimental cycles.
In 60 seconds, understand Edge Computing Labs:
These hands-on labs teach you edge and fog computing through real hardware simulation. Instead of just reading about latency differences, you will measure them on an ESP32 microcontroller:
- Lab 1: Build an edge vs. cloud processing demo – observe 1000x latency differences firsthand
- Lab 2: Implement multi-sensor aggregation – reduce bandwidth by 99.98% with edge processing
What you will prove experimentally:
| Metric | You Will Measure | Significance |
|---|---|---|
| Edge latency | < 500 microseconds | Real-time safety decisions |
| Cloud latency | 150-400 milliseconds | Visible processing delay |
| Bandwidth savings | 95-99% | Cost reduction at scale |
| Offline operation | 100% uptime | Resilience during outages |
The key insight: Edge computing is not just faster – it is a fundamentally different architecture that enables capabilities cloud-only systems cannot provide.
Complete the labs below to gain hands-on experience, or jump to Knowledge Check: Edge vs. Cloud Processing to test your understanding.
11.1 Learning Objectives
By the end of these labs, you will be able to:
- Compare edge vs cloud latency: Measure response time differences in real hardware
- Implement threshold-based edge processing: Create autonomous edge decisions
- Distinguish local vs cloud workloads: Evaluate when to process locally vs offload to cloud
- Calculate bandwidth savings: Quantify how edge aggregation reduces data transmission
- Design resilient systems: Create IoT solutions that continue operating during outages
11.2 Lab Overview: What You Will Build
Before diving into the labs, here is the overall architecture you will implement across both exercises:
Think of edge computing like a local fire alarm vs. calling the fire department:
- Edge processing = Your smoke detector beeps immediately when it detects smoke (milliseconds)
- Cloud processing = You call the fire department, they drive to your house, assess the situation, then sound the alarm (minutes)
In these labs, you will build a tiny computer (ESP32) that can make decisions right where the data is collected – just like a smoke detector makes its own decision without waiting for help.
Lab 1 shows you the speed difference. Lab 2 shows you how to handle many sensors at once without overwhelming the network.
No prior hardware experience needed – everything runs in a web browser simulator!
11.3 Lab 1: Build an Edge Computing Demo
This hands-on lab demonstrates the fundamental difference between edge and cloud processing using an ESP32 microcontroller. You will implement both processing models and observe the dramatic latency differences in real-time.
11.3.1 Components
| Component | Purpose | Wokwi Element |
|---|---|---|
| ESP32 DevKit | Main controller with edge processing logic | esp32:devkit-v1 |
| Temperature Sensor (NTC) | Simulates sensor input for threshold detection | ntc-temperature-sensor |
| Green LED | Indicates EDGE processing (fast local decision) | led:green |
| Blue LED | Indicates CLOUD processing (slow remote decision) | led:blue |
| Red LED | Indicates ALERT state (threshold exceeded) | led:red |
| Resistors (3x 220 ohm) | Current limiting for LEDs | resistor |
11.3.2 Key Concepts
| Aspect | Edge Processing | Cloud Processing |
|---|---|---|
| Latency | 1-10 ms | 100-500+ ms |
| Network Required | No | Yes |
| Processing Power | Limited | Effectively unlimited |
| Reliability | Works offline | Depends on connectivity |
| Use Case | Time-critical alerts | Complex analytics |
Real-world impact: In industrial safety, a 500ms delay detecting a dangerous temperature could mean the difference between a controlled shutdown and equipment damage. Edge processing enables sub-10ms response times.
11.3.3 Interactive Wokwi Simulator
Use the embedded simulator below to build and test your edge computing demo. Click “Start Simulation” after entering the code.
11.3.4 Circuit Connections
11.3.5 Code Overview
The lab code demonstrates three processing modes:
- EDGE mode: All processing happens locally on the ESP32 in microseconds
- CLOUD mode: Simulates cloud round-trip with 150-400ms artificial latency
- HYBRID mode: Edge handles time-critical alerts, cloud handles logging
Key code structure:
```cpp
// Processing mode options
enum ProcessingMode {
  MODE_EDGE,   // Local processing only (~1-50 microseconds)
  MODE_CLOUD,  // Simulated cloud processing (~150-400 ms)
  MODE_HYBRID  // Edge for alerts, cloud for logging
};

// Edge processing - instant response
void processAtEdge(float temperature) {
  unsigned long startTime = micros();
  // All computation happens RIGHT HERE on the ESP32
  if (temperature >= TEMP_CRITICAL) {
    // IMMEDIATE action - no network delay
    triggerEmergencyShutdown();
  }
  unsigned long edgeLatency = micros() - startTime;
  // Typical: 5-50 microseconds
}

// Cloud processing - significant delay
void processAtCloud(float temperature) {
  unsigned long startTime = micros();
  // Simulate network round-trip
  delay(random(150, 400)); // 150-400 ms latency
  // Same logic, but AFTER the delay
  if (temperature >= TEMP_CRITICAL) {
    // Response delayed by network latency
    triggerEmergencyShutdown();
  }
  unsigned long cloudLatency = micros() - startTime;
  // Typical: 150,000-400,000 microseconds
}
```
11.3.6 Processing Mode Decision Flow
The following diagram shows how the hybrid mode selects the appropriate processing tier based on the data criticality:
11.3.7 Step-by-Step Instructions
- Open the Wokwi simulator above
- Add components from the parts panel
- Wire according to the circuit diagram
- Double-check all connections
- Click “Start Simulation”
- Open the Serial Monitor
- Observe the default HYBRID mode:
- Green LED blinks: Edge processing (microseconds)
- Blue LED blinks: Cloud processing (hundreds of milliseconds)
- Note how edge is 1000-10000x faster than cloud
- Type `E` for edge-only mode
- Type `C` for cloud-only mode
- Type `H` for hybrid mode
- Type `S` to view statistics
- Click on the NTC temperature sensor
- Adjust temperature slider above 30C (warning)
- Watch the Red LED illuminate
- Increase above 35C (critical)
- Notice the Red LED blinks rapidly
11.3.8 Expected Outcomes
| Metric | Expected Value | Significance |
|---|---|---|
| Edge latency | < 500 microseconds | 1000x faster than cloud |
| Cloud latency | 150-450 milliseconds | Network dominates total time |
| Bandwidth reduction | 95-99% | Only aggregates sent to cloud |
| Alert response | < 1 millisecond | Safety-critical capability |
| Offline operation | Fully functional | Resilience during outages |
Real-World Applications:
- Smart Factory: Aggregate vibration data from 1000 sensors, only send anomalies to cloud
- Smart Building: Process HVAC data locally, optimize energy without cloud latency
- Healthcare Monitoring: Detect arrhythmias at the edge in <5ms
- Autonomous Vehicles: Process LIDAR/camera locally, make decisions in <20ms
11.3.9 Challenge Exercises
Objective: Demonstrate edge computing’s reliability advantage
Modify the code to simulate network failures:

1. Add `bool networkAvailable = true;`
2. Add command `N` to toggle network availability
3. When unavailable, cloud mode shows “NETWORK ERROR”
4. Edge mode continues working normally
Learning: Edge computing provides resilience during outages.
Objective: Reduce cloud traffic through edge aggregation
- Store last 10 temperature readings
- Only send aggregated data (min, max, avg) to cloud every 10 readings
- Continue real-time edge threshold monitoring
Learning: Edge aggregation reduces bandwidth by 90%+.
Objective: Implement latency budget enforcement for safety-critical system
- Add `const unsigned long LATENCY_BUDGET_US = 50000;`
- Check whether the measured latency exceeded the budget after each processing pass
- Track how often each mode meets the budget
Learning: Latency budgets determine when edge processing is mandatory.
Objective: Add intermediate fog processing tier
- Add `MODE_FOG` with 20-50 ms simulated latency
- Fog aggregates readings from multiple “sensors”
- Add Yellow LED indicator for fog processing
Learning: Fog provides intermediate processing between edge and cloud.
11.3.10 Knowledge Check: Edge vs Cloud Processing
11.4 Question 1: Latency Comparison
An industrial safety system needs to detect a dangerous temperature and trigger an emergency shutdown. The edge processor responds in 200 microseconds while the cloud path takes 300 milliseconds. How many times faster is the edge response?
C) 1,500 times faster. 300 milliseconds = 300,000 microseconds. Dividing 300,000 by 200 gives 1,500. This massive speed difference is why safety-critical industrial systems require edge processing – a 300ms delay in an emergency shutdown could allow equipment damage or worker injury.
11.5 Question 2: Hybrid Architecture Decision
In a smart building HVAC system, which processing mode should handle the following scenario: “A carbon monoxide sensor reads a dangerous level at 3:00 AM when internet connectivity is down”?
A) Edge processing. This is a life-safety scenario where two critical factors align: the reading is safety-critical AND the network is unavailable. Edge processing provides both immediate response (sub-millisecond) and offline resilience. Waiting for cloud connectivity (options B and D) would be dangerous. Fog processing (C) could be a secondary action, but the initial alert must happen at the edge device itself.
11.6 Question 3: Processing Mode Selection
You are designing a system where an ESP32 monitors vibration on a factory motor. Normal readings are logged every minute, but readings above 5g require immediate motor shutdown (within 10ms). Which architecture pattern is most appropriate?
C) Hybrid architecture. The 10ms shutdown requirement makes cloud-only (A) impossible since cloud round-trip is 150-400ms. Edge-only (B) wastes the opportunity for cloud-based trend analysis and predictive maintenance. Fog-only (D) adds unnecessary latency for the critical alert path. The hybrid approach processes safety-critical threshold events at the edge (sub-millisecond response) while sending aggregated trend data to the cloud for ML-based predictive maintenance – the best of both worlds.
11.7 Lab 2: Edge Data Aggregation and Smart Decision Making
This advanced lab demonstrates multi-sensor edge aggregation and autonomous decision making. You will build a system that reduces cloud bandwidth by 99.98% while maintaining full situational awareness.
11.7.1 The Bandwidth Problem
A single sensor sampling at 100 Hz generates:

- Raw data: 100 samples/sec × 4 bytes = 34.5 MB/day
- With 100 sensors: 3.45 GB/day = 103.5 GB/month

With edge aggregation (1-minute averages):

- Aggregated data: 5.76 KB/day per sensor (1,440 averages × 4 bytes)
- With 100 sensors: 576 KB/day = 17.3 MB/month
Bandwidth reduction: 99.98%
11.7.2 Data Aggregation Pipeline
The following diagram shows how raw sensor data flows through the edge aggregation pipeline, with each stage reducing the data volume while preserving the critical information:
11.7.3 Components
| Component | Purpose | Wokwi Element |
|---|---|---|
| ESP32 DevKit | Edge computing node | esp32:devkit-v1 |
| Temperature Sensor | Environmental monitoring | ntc-temperature-sensor |
| Light Sensor (LDR) | Ambient light detection | photoresistor-sensor |
| Potentiometer | Simulates humidity sensor | slide-potentiometer |
| Push Button | Manual cloud sync trigger | pushbutton |
| 4x LEDs | Status indicators | Various colors |
11.7.4 Key Concepts Demonstrated
- Multi-sensor aggregation: Combine readings from temperature, light, and humidity sensors
- Statistical summarization: Calculate min, max, average, standard deviation locally
- Anomaly detection: Flag unusual readings at the edge without cloud
- Bandwidth optimization: Send only aggregated summaries instead of raw data
- Offline resilience: Continue operating during network outages
11.7.5 Aggregation Algorithm
```cpp
#include <float.h>  // FLT_MAX

struct SensorAggregation {
  float min;
  float max;
  float sum;
  int count;
  float sumSquares;  // For standard deviation

  void reset() {
    min = FLT_MAX;
    max = -FLT_MAX;
    sum = 0;
    count = 0;
    sumSquares = 0;
  }

  void addReading(float value) {
    if (value < min) min = value;
    if (value > max) max = value;
    sum += value;
    sumSquares += value * value;
    count++;
  }

  float getAverage() {
    return count > 0 ? sum / count : 0;
  }

  float getStdDev() {
    if (count < 2) return 0;
    float avg = getAverage();
    return sqrt((sumSquares / count) - (avg * avg));
  }
};
```
The standard deviation formula above uses `sumSquares / count - avg * avg`, which can produce a negative value under the square root due to floating-point rounding when the variance is tiny relative to the squared mean. This is the “catastrophic cancellation” problem. In production code, use Welford’s online algorithm instead:
```cpp
// Welford's algorithm - numerically stable
// (assumes the struct also carries `float mean = 0;` and `float m2 = 0;`)
void addReadingStable(float value) {
  count++;
  float delta = value - mean;
  mean += delta / count;
  float delta2 = value - mean;
  m2 += delta * delta2;
}

float getStdDevStable() {
  return count < 2 ? 0 : sqrt(m2 / (count - 1));
}
```
This avoids catastrophic cancellation entirely and is the industry-standard approach for streaming statistics.
11.7.6 Reflection Questions
A factory with 10,000 vibration sensors, each streaming roughly 100 KB/s of raw waveform data, generates about 1 GB/second in total. Edge aggregation reduces this to approximately 1 MB/second of actionable insights. Without aggregation, cloud bandwidth costs alone would be prohibitive – at $0.09/GB egress pricing, raw data would cost over $7,000/day. Edge aggregation reduces this to under $8/day.
Implement a decision hierarchy with clear precedence:
- Safety-critical (edge wins) – Emergency shutdowns, gas leak alerts
- Cloud policy override – Business rules that update edge behavior remotely
- Edge defaults – Local rules when cloud is unreachable
The edge device should always have a safe fallback state that does not depend on cloud availability.
Well-designed aggregation preserves extremes through multiple mechanisms:
- Min/max capture the worst-case readings even when the average looks normal
- Standard deviation flags unusual variability that the average alone would miss
- Anomaly detection triggers full-resolution upload when something unusual occurs
- Adaptive windowing shortens the aggregation window during volatile periods
The ML lifecycle for edge computing follows a train-in-cloud, deploy-to-edge pattern:
- Train models in the cloud using historical aggregated data
- Quantize the model (e.g., TensorFlow Lite, ONNX) to fit edge constraints
- Deploy inference models to edge devices via OTA updates
- Infer at the edge in microseconds – classify sensor readings, detect anomalies
- Retrain periodically in the cloud as new data accumulates
11.7.7 Knowledge Check: Data Aggregation
11.8 Question 4: Bandwidth Calculation
A smart factory has 500 temperature sensors, each sampling at 50 Hz. Each sample is a 4-byte float. With edge aggregation using 1-minute windows that output 5 summary values per window, what is the approximate bandwidth reduction ratio?
C) 99.83% reduction.
Raw data per sensor per minute: 50 Hz x 60 sec x 4 bytes = 12,000 bytes
Aggregated data per sensor per minute: 5 values x 4 bytes = 20 bytes
Reduction: 1 - (20 / 12,000) = 1 - 0.00167 = 0.9983 = 99.83%
For 500 sensors: Raw = 6,000,000 bytes/min = ~8.6 GB/day. Aggregated = 10,000 bytes/min = ~14.4 MB/day. The exact percentage depends on whether metadata overhead is included with the summary values.
11.9 Question 5: Anomaly Detection Trade-off
Your edge anomaly detector uses a Z-score threshold of 2.0 (flag readings more than 2 standard deviations from the mean). A colleague suggests lowering it to 1.5 for “better safety.” What is the primary risk of this change?
B) Significantly more false positives. In a normal distribution, a Z-score threshold of 2.0 flags about 4.6% of readings as anomalies, while 1.5 flags about 13.4% – nearly 3x more alerts. For IoT systems with thousands of sensors, this can overwhelm both the network bandwidth (defeating the purpose of edge aggregation) and the cloud analytics pipeline. The correct approach is to tune the threshold based on the cost of missing a true anomaly versus the cost of investigating a false positive. Safety-critical systems should use lower thresholds only for the most dangerous parameters.
11.10 Question 6: Aggregation Window Design
You are designing an edge aggregation system for a building HVAC monitoring application. Temperature changes slowly (time constant of ~15 minutes) while occupancy sensors change rapidly (events every few seconds). What aggregation strategy is most appropriate?
C) Different window sizes per sensor type. This is the “adaptive windowing” approach. Temperature changes slowly, so 5-10 minute aggregation windows preserve all meaningful information while dramatically reducing data. Occupancy changes rapidly and generates discrete events, so it needs shorter windows (30-60 seconds) or event-driven reporting to capture transitions. Using the same window for both either over-samples the slow signal (wasting bandwidth) or under-samples the fast signal (missing events). This principle applies broadly: match the aggregation window to the Nyquist rate of the physical process being monitored.
11.11 Lab Comparison: Edge vs. Cloud Architecture Patterns
To consolidate your learning, the following diagram compares the three architecture patterns you have explored across both labs:
Hey Sensor Squad! Let’s explore edge computing with a fun story!
Meet the Characters:
- Sammy the Smoke Detector lives on the ceiling of a kitchen
- Cloudy the Cloud Server lives far away in a data center
The Big Race:
One day, toast starts burning in the kitchen. Both Sammy and Cloudy need to sound the alarm!
Sammy’s approach (Edge Computing): “I can smell smoke RIGHT HERE! BEEP BEEP BEEP!” – Sammy sounded the alarm in less than 1 second!
Cloudy’s approach (Cloud Computing): “Hmm, let me check… The smoke data needs to travel through the internet to my data center… let me analyze it with my big computers… OK, I’ve decided there IS smoke… now let me send the alarm back…” – Cloudy took 5 whole seconds!
The Lesson: For things that need FAST responses (like fire!), it’s much better to decide right where the action is happening. That’s edge computing!
But wait – Cloudy is still useful!
Cloudy said: “I may be slower, but I can look at smoke detector data from EVERY building in the whole city! I can find patterns and predict which buildings might have fire risks before fires even start!”
The Real Answer: We need BOTH! Sammy handles emergencies instantly, and Cloudy handles the big-picture thinking. Working together, they are an unstoppable team!
Try This at Home: Next time you hear a smoke detector beep, remember – that is edge computing in action! The detector makes its own decision right there on your ceiling, without asking the internet for permission.
Scenario: A manufacturing plant has 200 vibration sensors monitoring CNC machines. Each sensor samples at 1 kHz producing 4-byte readings (4,000 bytes/sec per sensor = 800 KB/sec total). The factory must detect anomalies within 50ms to prevent tool breakage.
Step-by-step calculation:
- Edge processing approach (each sensor has local microcontroller):
- Each sensor runs FFT locally: 1024-point FFT takes ~150ms on 48 MHz ARM Cortex-M0
- Result: Cannot meet 50ms requirement (150ms > 50ms)
- Cost: $8 per sensor × 200 = $1,600 total
- Fog processing approach (central gateway with Intel i5):
- All 200 sensor streams aggregate to fog node: 800 KB/sec total
- FFT processing on i5 @ 2.4 GHz: ~0.8ms per sensor × 200 = 160ms total
- Result: Cannot meet 50ms if sequential (160ms > 50ms)
- Parallel processing with 8 cores: 160ms / 8 = 20ms
- Result: Meets requirement (20ms < 50ms) ✓
- Cost: $600 fog gateway hardware
- Bandwidth comparison:
- Edge: Each sensor processes locally, sends only alert flags (1 byte every 5 seconds when normal)
- Bandwidth: 200 sensors × 1 byte / 5 sec = 40 bytes/sec
- Fog: Raw sensor streams to fog node
- Bandwidth: 800 KB/sec on local network (Ethernet easily handles this)
- Cloud upload: Same as edge (40 bytes/sec of alerts)
Decision: Fog processing wins because edge MCUs lack compute power for real-time FFT, while fog gateway meets latency requirement with parallel processing and costs less than upgrading all 200 edge sensors.
Use this table to determine optimal processing tier for your IoT workload:
| Criterion | Edge | Fog | Cloud | Decision Rule |
|---|---|---|---|---|
| Latency requirement | <10ms | 10-100ms | >100ms | Choose tier that meets your P99 latency budget |
| Data volume | <1 MB/day per device | 1-100 MB/day per device | >100 MB/day | High volume needs local aggregation |
| Computation complexity | Simple thresholds | ML inference, FFT, aggregation | Model training, historical analytics | Match workload to available compute |
| Connectivity reliability | Must work offline | Tolerates brief outages | Requires consistent connection | Edge/fog for unreliable networks |
| Device count | <100 devices | 100-10,000 devices | >10,000 devices | Fog aggregates many edge devices |
| Privacy/compliance | PII stays on-device | Local network boundary | Data leaves premises | GDPR/HIPAA may mandate local processing |
Example decisions:
- Smart doorbell: Edge (ML inference on-device, <50ms for video analysis, works without Wi-Fi)
- Factory anomaly detection: Fog (aggregate 200 sensors, need 20-50ms response, requires ML inference beyond edge capability)
- Smart city traffic optimization: Cloud (analyze patterns across thousands of intersections, 5-minute decision cycle acceptable)
- Medical device monitoring: Fog (HIPAA compliance requires local processing, 100ms alert latency acceptable, needs cross-patient analytics edge cannot provide)
The mistake: Deploying a fog gateway that handles average load but crashes during peak concurrent events.
Real scenario: A smart building has 500 devices (lights, HVAC, sensors) connecting to a single Raspberry Pi fog gateway. During normal operation, 10-20 devices report per second. During fire alarm test, all 500 devices report simultaneously within 2 seconds (250/sec burst).
Why it fails:
- Raspberry Pi 4 can handle ~50 MQTT messages/sec with processing
- Burst of 250/sec causes:
- Message queue overflow (dropped packets)
- CPU saturation (gateway becomes unresponsive)
- Cascading failure (devices retry, making problem worse)
How to avoid:
Calculate P99 load, not average:
- Average: 15 messages/sec
- P99 (fire alarm scenario): 250 messages/sec
- Provision for P99: Need gateway handling ≥300 messages/sec
Add headroom: 2× peak load buffer
- Target capacity: 500 messages/sec
- Hardware: Upgrade to Intel NUC or industrial gateway
- Alternative: Load balancing across 2-3 Raspberry Pis
Implement backpressure:
```python
if queue_depth > threshold:
    send_slow_down_signal_to_devices()
    prioritize_critical_messages()  # fire alarms first
```
Rule of thumb: Provision fog nodes for 2× your P99 load, not average load.
11.12 Summary
11.12.1 Key Takeaways
These hands-on labs demonstrate the fundamental principles of edge and fog computing through real hardware simulation:
| Principle | What You Learned | Lab |
|---|---|---|
| Latency reduction | Edge processing provides 1,000-10,000x lower latency than cloud round-trips | Lab 1 |
| Bandwidth optimization | Edge aggregation reduces network traffic by 95-99.98% | Lab 2 |
| Hybrid architecture | Combining edge alerts with cloud analytics provides the best of both worlds | Lab 1 |
| Offline resilience | Edge devices continue full operation during network outages | Lab 1 |
| Statistical summarization | Min/max/avg/stddev preserves critical information while compressing data | Lab 2 |
| Anomaly detection | Edge intelligence triggers full-resolution uploads only when needed | Lab 2 |
11.12.2 Design Decision Framework
When designing your own edge/fog/cloud system, use this decision matrix:
| Requirement | Recommended Tier | Rationale |
|---|---|---|
| Response < 10ms | Edge | Network latency exceeds budget |
| Response < 100ms | Edge or Fog | Fog gateway if coordination needed |
| Response < 1s | Fog or Cloud | Cloud acceptable with good connectivity |
| Must work offline | Edge | No network dependency |
| Complex ML inference | Fog or Cloud | Edge MCUs lack compute power |
| Global data correlation | Cloud | Requires data from multiple sites |
| Regulatory data residency | Edge or Fog | Data stays within jurisdiction |
11.12.3 Common Mistakes to Avoid
Top 3 Lab Mistakes:
- Assuming cloud is always available – Design for offline-first, cloud-enhanced
- Sending raw data to cloud – Always aggregate at the edge; raw data creates cost and latency problems at scale
- Using a single aggregation window – Match the window to the physical process dynamics (Nyquist criterion applies!)
11.13 Knowledge Check
11.14 Concept Relationships
| Concept | Relationship to Labs | Why It Matters | Demonstrated In |
|---|---|---|---|
| Edge Latency | Lab 1 measures <500μs edge vs 150-400ms cloud | Quantifies the 1000x speed difference that makes edge mandatory for safety-critical systems | Lab 1 latency comparison |
| Data Aggregation | Lab 2 reduces 100 samples/sec to 5 values/min | Shows how edge processing achieves 99.98% bandwidth reduction at scale | Lab 2 aggregation pipeline |
| Hybrid Architecture | Both labs combine edge alerts with cloud analytics | Demonstrates that edge and cloud are complementary, not competitive | Lab 1 MODE_HYBRID |
| Statistical Summarization | Lab 2 computes min/max/avg/stddev locally | Preserves critical information while compressing data 600x | Lab 2 SensorAggregation struct |
| Anomaly Detection | Lab 2 Z-score threshold flags unusual readings | Edge intelligence determines what needs full-resolution cloud upload | Lab 2 anomaly check stage |
| Offline Resilience | Both labs continue operating without network | Proves that edge computing provides operational continuity during outages | Network failure simulation |
| Welford’s Algorithm | Lab 2 numerically stable variance calculation | Production-ready statistics avoid floating-point precision errors | Lab 2 challenge exercise |
11.15 See Also
- Edge-Fog Decision Framework – Apply the “Four Mandates” framework to determine when these lab patterns are required versus optional in your own deployments
- Edge-Fog Use Cases – See these lab concepts scaled to real-world production: factories with 1,000 sensors, autonomous vehicle fleets, smart agriculture
- Edge-Fog Latency Analysis – Deep dive into the physics and network stack analysis behind the latency measurements you observed in Lab 1
- Edge-Fog Bandwidth Optimization – Advanced aggregation techniques beyond the min/max/avg demonstrated in Lab 2, including adaptive windowing and predictive filtering
- Edge-Fog Architecture – Formal architecture patterns showing how to orchestrate thousands of edge devices like the ESP32 labs at production scale
11.16 What’s Next?
Build on your hands-on experience with these related topics:
| Topic | Chapter | Description |
|---|---|---|
| Decision Framework | Edge-Fog Decision Framework | Formalize when to use edge, fog, or cloud processing |
| Use Cases | Edge-Fog Use Cases | See how these patterns apply in real-world deployments |
| Latency Analysis | Edge-Fog Latency Analysis | Deep dive into latency measurement and optimization |
| Bandwidth Optimization | Edge-Fog Bandwidth Optimization | Advanced aggregation and compression techniques |
| Architecture | Edge-Fog Architecture | Formal architecture patterns for edge-fog-cloud systems |