9  Edge and Fog Computing: Use Cases

In 60 Seconds

Edge is mandatory when latency is life-critical: autonomous vehicles require under 10ms for collision avoidance, while cloud round-trip adds 200ms+. A factory with 100 sensors at 10kHz generates 32 Mbps of raw data requiring 99.9% local reduction, and GDPR mandates fog-based anonymization so only aggregate statistics (not raw PII) reach the cloud.

Key Concepts
  • Industrial IoT (IIoT): Manufacturing and process control applications requiring deterministic sub-10ms response times for machine safety and quality control
  • Smart City Infrastructure: Large-scale deployments of traffic sensors, parking monitors, and environmental sensors requiring edge aggregation to manage bandwidth
  • Autonomous Systems: Vehicles, drones, and robots requiring local perception and decision-making because cloud latency (100ms+) is physically unsafe at operational speeds
  • Healthcare at the Edge: Patient monitoring systems (ECG, SpO2) processing data locally to meet HIPAA data locality requirements and provide offline resilience
  • Predictive Maintenance: Vibration and temperature analysis on industrial equipment running locally to detect bearing failures 2-4 weeks before breakdown
  • Precision Agriculture: Soil sensors and drone imagery analyzed at farm-edge gateways in areas with intermittent connectivity to optimize irrigation and fertilization
  • Retail Analytics: In-store video analytics running at edge to count customers and detect shelf gaps without transmitting sensitive video footage to cloud
  • Energy Grid Management: Smart meters and grid sensors processing locally for real-time frequency regulation that requires <100ms response to prevent cascading failures

9.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Analyze edge/fog computing requirements: Determine whether a use case demands edge, fog, or cloud processing based on latency, bandwidth, and regulatory constraints
  • Design predictive maintenance architectures: Apply three-tier fog computing to industrial scenarios with 1kHz sensor data and sub-100ms anomaly detection
  • Evaluate autonomous vehicle compute placement: Compare on-vehicle edge (10ms), roadside fog (20-50ms), and cloud (200ms+) processing tradeoffs for safety-critical decisions
  • Implement privacy-preserving data pipelines: Architect fog-based anonymization, differential privacy, and data minimization flows that comply with GDPR requirements
  • Calculate bandwidth and cost tradeoffs: Quantify data reduction ratios, compute offloading costs, and break-even points for edge vs. cloud deployments
  • Compare use case architectures: Contrast the deployment patterns across smart factory, autonomous vehicle, agricultural drone, and healthcare monitoring scenarios

Minimum Viable Understanding
  • Edge is mandatory when latency is life-critical: Autonomous vehicles require under 10ms for collision avoidance; cloud round-trip adds 200ms+, meaning the crash happens before the response arrives
  • Bandwidth physics often force edge processing: A factory with 100 sensors at 10kHz sampling generates 32 Mbps of raw data, exceeding typical uplink capacity and requiring 99.9% local data reduction before transmission
  • Privacy regulations make fog a legal requirement: GDPR mandates data minimization; fog nodes anonymize video (face blur, people count) so only aggregate statistics like “47 customers” reach the cloud, never raw PII
  • Hybrid architectures combine all three tiers: Real-time decisions happen at edge (under 50ms), regional coordination at fog (under 500ms), and model training and trend analysis in the cloud (hours to days)
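The 32 Mbps bandwidth figure above follows directly from the sensor count and sampling rate; a quick sanity-check sketch (assuming 4-byte samples, the sample size used in the worked examples later in this chapter):

```python
# Sanity-check the "bandwidth physics" figure: 100 sensors at 10 kHz.
# Assumes 4-byte samples, consistent with this chapter's worked examples.
SENSORS = 100
SAMPLE_RATE_HZ = 10_000
BYTES_PER_SAMPLE = 4

raw_bytes_per_sec = SENSORS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE  # 4,000,000 B/s
raw_mbps = raw_bytes_per_sec * 8 / 1_000_000                     # 32.0 Mbps

# A 99.9% local reduction leaves only 0.1% of the raw stream for the uplink.
reduced_mbps = raw_mbps * 0.001

print(f"raw: {raw_mbps:.1f} Mbps, after 99.9% reduction: {reduced_mbps:.3f} Mbps")
```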

Hey Future IoT Explorer! Let’s learn about how smart cities work using a story about three layers!

Welcome to EdgeFog City!

Imagine a city with three levels, like a three-layer cake:

Layer 1: The Street Level (Edge)

  • These are the tiny helpers right where things happen
  • Traffic lights, streetlights, parking sensors
  • They’re like crossing guards - they make instant decisions!
  • When a car is coming, the crossing guard doesn’t call the mayor first!

Layer 2: The Neighborhood Level (Fog)

  • These are the neighborhood managers
  • They coordinate several blocks together
  • Like a school principal who manages many classrooms
  • They know what’s happening in their area and help things work together

Layer 3: The City Hall Level (Cloud)

  • This is the mayor’s office
  • They make big plans for the whole city
  • Like deciding where to build new roads or when to have festivals
  • They don’t need to decide RIGHT NOW - they plan ahead

A Real Example - Traffic Jam!

  1. Edge (Street): Traffic light sees cars piling up → turns green immediately (2 seconds)
  2. Fog (Neighborhood): “Streets 1, 2, 3 are busy!” → Coordinates all lights together (30 seconds)
  3. Cloud (City Hall): “Monday mornings are always busy here” → Plans new traffic patterns (days later)

Fun Activity: Next time you’re in a car, watch the traffic lights. Are they making instant decisions (edge) or do they seem coordinated with other lights (fog)?

Remember: The closer the decision-maker is to the problem, the faster the solution!

Why Study Use Cases?

Instead of just learning theory, use cases show you HOW edge and fog computing solve real problems. Each use case demonstrates:

  1. The Problem: What couldn’t be done with cloud-only?
  2. The Solution: How does distributing compute help?
  3. The Numbers: What improvement do we actually get?

The Three Use Cases in This Chapter:

| Use Case | Main Constraint | Key Insight |
|---|---|---|
| Smart Factory | Bandwidth (too much data) | 99.9% less data sent to cloud |
| Autonomous Vehicles | Latency (life-or-death speed) | 10ms vs 200ms response time |
| Privacy Systems | Regulations (data can't leave) | Process locally, send only results |

Simple Questions to Ask:

When evaluating any IoT system, ask:

  • "What happens if the internet goes down?" → If bad, need edge
  • "How fast must decisions happen?" → If < 100ms, need edge
  • "Does sensitive data travel?" → If yes, consider fog for privacy


9.2 Use Case 1: Smart Factory Predictive Maintenance

9.2.1 Scenario

Manufacturing facility with hundreds of machines, each instrumented with vibration, temperature, and acoustic sensors generating data at 1kHz sampling rate.

9.2.2 Requirements

  • Real-time anomaly detection (<100ms)
  • Predictive failure alerts (hours to days advance warning)
  • Minimal network load
  • Continued operation during internet outages

9.2.3 Fog Architecture

Edge Tier: Machine Controllers

  • Collect sensor data at 1kHz
  • Basic filtering and feature extraction
  • Detect critical threshold violations (immediate shutdown)

Fog Tier: Factory Edge Servers

  • Deployed per production line
  • Run ML models for anomaly detection
  • Analyze vibration patterns, thermal signatures
  • Predict component failures
  • Store recent data (rolling 24-hour window)
  • Generate maintenance work orders

Cloud Tier: Enterprise Data Center

  • Aggregate data from all factories
  • Train improved ML models
  • Long-term trend analysis
  • Supply chain and inventory optimization
  • Dashboards for management

Hierarchical diagram showing smart factory architecture: Edge tier with machine controllers and sensors at bottom, Fog tier with factory edge servers in middle, Cloud tier with enterprise data center at top. Data flow shows 1kHz raw data at edge, filtered events at fog, and daily summaries to cloud.

Smart Factory Three-Tier Edge-Fog-Cloud Architecture

9.2.4 Benefits

  • Latency: Immediate shutdown on critical failures; real-time anomaly alerts
  • Bandwidth: 99.9% reduction (1kHz data -> event summaries)
  • Reliability: Continues operating during internet outages
  • Value: Reduced downtime, optimized maintenance, extended equipment life

Data reduction at the fog tier follows the formula \(R = \frac{D_{raw} - D_{filtered}}{D_{raw}} \times 100\%\), where \(R\) is reduction percentage, \(D_{raw}\) is raw data volume, and \(D_{filtered}\) is filtered output volume.

Worked example: Factory with 100 machines at 10 kHz vibration sampling, 4 bytes per sample, 8 hours/day operation:

  • Raw data rate: \(D_{raw} = 100 \times 10,000 \text{ Hz} \times 4 \text{ bytes} = 4,000,000 \text{ bytes/s} = 32 \text{ Mbps}\)
  • Daily raw: \(32 \text{ Mbps} \times 28,800 \text{ seconds} = 115.2 \text{ GB/day}\)
  • After FFT feature extraction (64-point spectrum every 100 ms): \(D_{filtered} = 100 \times 10 \text{ Hz} \times 256 \text{ bytes} = 256 \text{ KB/s}\)
  • Daily filtered: \(256 \text{ KB/s} \times 28,800 \text{ s} = 7.37 \text{ GB/day}\)
  • Reduction: \(R = \frac{115.2 - 7.37}{115.2} \times 100\% = 93.6\%\)
  • Monthly bandwidth savings at \(\$0.09/\text{GB}\): \((115.2 - 7.37) \times 30 \times 0.09 = \$291 \text{/month}\)

If only anomaly alerts are sent (1% of filtered data): Final reduction = \(\frac{115.2 - 0.074}{115.2} \times 100\% = 99.94\%\), saving \(\$311/\text{month}\).
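The formula and worked example above can be reproduced in a few lines; a minimal sketch using the same inputs (100 machines, 10 kHz sampling, 4-byte samples, 8-hour shifts):

```python
def reduction_pct(d_raw, d_filtered):
    """Data reduction R = (D_raw - D_filtered) / D_raw * 100."""
    return (d_raw - d_filtered) / d_raw * 100

# Worked example inputs: 100 machines, 10 kHz, 4-byte samples, 8 h/day.
seconds_per_day = 8 * 3600                                  # 28,800 s
raw_gb_day = 100 * 10_000 * 4 * seconds_per_day / 1e9       # 115.2 GB/day
filtered_gb_day = 100 * 10 * 256 * seconds_per_day / 1e9    # 7.37 GB/day

r = reduction_pct(raw_gb_day, filtered_gb_day)              # ~93.6%
monthly_savings = (raw_gb_day - filtered_gb_day) * 30 * 0.09  # ~$291/month

print(f"R = {r:.1f}%, savings = ${monthly_savings:.0f}/month")
```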

9.3 Use Case 2: Autonomous Vehicle Edge Computing

9.3.1 Scenario

Connected autonomous vehicles requiring instant decision-making with sensing, communication, and coordination.

9.3.2 Requirements

  • Ultra-low latency (<10ms for critical decisions)
  • High reliability (safety-critical)
  • Massive sensor data (cameras, LIDAR, radar)
  • Vehicle-to-vehicle (V2V) communication
  • Infrastructure coordination

9.3.3 Fog Architecture

Edge Tier: Vehicle On-Board Computing

  • Powerful edge servers in vehicle
  • Real-time sensor fusion
  • Immediate driving decisions (steering, braking, acceleration)
  • Trajectory planning
  • Collision avoidance

Fog Tier: Roadside Units (RSUs)

  • Deployed along roads at intersections
  • Coordinate multiple vehicles
  • Provide local traffic information
  • Extend sensor range (communicate what’s around corner)
  • Handle V2V message relay

Fog Tier: Mobile Edge Computing (MEC) at Base Stations

  • Cellular network edge
  • Regional traffic management
  • HD map updates
  • Software updates
  • Non-critical cloud services

Cloud Tier: Central Data Centers

  • Fleet management
  • Route optimization
  • Long-term learning
  • Software development
  • Regulatory compliance

Diagram showing autonomous vehicle computing layers: On-board edge computing for safety-critical decisions under 10ms, roadside fog units for multi-vehicle coordination, cellular MEC for regional services, and cloud for fleet management and AI training.

Autonomous Vehicle Multi-Tier Edge-Fog Architecture

9.3.4 Processing Example

Collision Avoidance Scenario:

  1. Vehicle sensors detect potential collision (5ms)
  2. On-board edge processing decides evasive action (3ms)
  3. Action executed (braking/steering) (2ms)
  4. Total: 10ms (a cloud round-trip of 200ms+ means the collision occurs before the response arrives)
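The latency budget above can be turned into a back-of-envelope check that also converts latency into distance traveled; a sketch (the 30 m/s vehicle speed is an illustrative assumption, not from the scenario):

```python
# Collision-avoidance latency budget from the scenario above (values in ms).
edge_pipeline = {"sensor detection": 5, "edge decision": 3, "actuation": 2}
edge_total = sum(edge_pipeline.values())          # 10 ms

CLOUD_ROUND_TRIP = 200                            # ms, the figure used in this chapter

# At an assumed 30 m/s (~108 km/h), a vehicle covers 0.03 m per millisecond:
distance_edge = 30 * edge_total / 1000            # 0.3 m traveled before action
distance_cloud = 30 * CLOUD_ROUND_TRIP / 1000     # 6.0 m traveled -- far too late

print(f"edge: {edge_total} ms ({distance_edge:.1f} m); "
      f"cloud: {CLOUD_ROUND_TRIP} ms ({distance_cloud:.1f} m)")
```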

Cooperative Perception:

  1. RSU combines sensor data from multiple vehicles
  2. Shares augmented awareness (blind spot information)
  3. Vehicles receive enhanced situational awareness
  4. Better decisions through cooperation

9.3.5 Benefits

Safety: Life-critical response times achieved Bandwidth: Terabytes/day of sensor data processed locally Reliability: Critical functions independent of cloud connectivity Scalability: Millions of vehicles supported through distributed architecture

9.4 Use Case 3: Privacy-Preserving Architecture

Fog computing enables privacy-preserving architectures that process sensitive data locally while still providing useful insights and services.

9.4.1 Privacy Challenges in IoT

Personal Data Exposure:

  • Video surveillance
  • Health monitoring
  • Location tracking
  • Behavioral patterns

Cloud Privacy Risks:

  • Data breaches
  • Unauthorized access
  • Third-party sharing
  • Government surveillance

9.4.2 Fog-Based Privacy Preservation

Local Processing Principle: “Process data where it’s collected; send only necessary insights”

Techniques:

Data Minimization:

  • Extract only required features
  • Discard raw sensitive data
  • Aggregate individual data

Example: Smart home: Count people in room (1 number) instead of sending video stream

Anonymization:

  • Remove personally identifiable information
  • Blur faces in video
  • Generalize location (area vs. precise GPS)

Differential Privacy:

  • Add noise to data before transmission
  • Provide statistical guarantees on privacy
  • Enable aggregate analytics while protecting individuals
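As an illustration of the differential-privacy bullet above, here is a minimal sketch of the standard Laplace mechanism applied to a counting query; the epsilon value and the 47-customer count are illustrative assumptions:

```python
import math
import random

def dp_count(true_count, epsilon, rng=random):
    """Release a count with Laplace noise. A counting query has sensitivity 1,
    so the noise scale is 1/epsilon (standard Laplace mechanism)."""
    u = rng.random() - 0.5                      # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, 1/epsilon)
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Fog node reports "customers in store this hour" with epsilon = 0.5.
random.seed(42)
noisy = dp_count(47, epsilon=0.5)
print(f"true: 47, released: {noisy:.1f}")
```

Individual releases are perturbed, but averages over many releases remain close to the truth, which is what enables aggregate analytics while protecting individuals.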

Encryption:

  • End-to-end encryption for necessary transmissions
  • Homomorphic encryption for cloud processing of encrypted data
  • Secure multi-party computation

9.4.3 Architecture Pattern

  1. Edge Devices: Collect raw sensitive data
  2. Fog Nodes:
    • Extract privacy-safe features
    • Anonymize or aggregate
    • Encrypt if transmission needed
  3. Cloud:
    • Receives only privacy-preserved data
    • Performs authorized analytics
    • Returns results to fog/devices

Flowchart showing privacy transformation at each layer: raw sensitive data at edge devices, privacy techniques applied at fog nodes (anonymization, aggregation, differential privacy), only privacy-safe insights reach cloud.

Privacy-Preserving Data Flow in Fog Architecture

Example: Healthcare Monitoring

  • Wearable: Collects heart rate, location, activity
  • Fog (smartphone): Detects anomalies, triggers alerts
  • Cloud: Receives only: “Anomaly detected at approximate location X”
  • Privacy preserved: Raw health data never leaves personal fog node

9.5 Worked Example: Compute Offloading Decision for Agricultural Drone

Scenario: A fleet of agricultural drones surveys 1,000-acre farms for crop disease detection. Each drone carries cameras, processes images to detect diseased plants, and triggers precision pesticide spraying. The question: should image processing happen on the drone (edge), at a ground station (fog), or in the cloud?

Given:

  • 20 drones per farm
  • Each drone: 5 cameras, 12 MP each, 2 fps capture rate
  • Processing requirement: ResNet-50 inference for disease classification
  • Latency requirement: <500ms to identify diseased plant and trigger spray nozzle
  • Flight time: 2 hours per charge
  • Connectivity: 4G LTE (25 Mbps upload, 150ms latency to cloud)
  • Cloud GPU instance cost: $3/hour (NVIDIA T4 equivalent)

Step-by-step Analysis:

  1. Calculate raw data rate per drone:

    • 5 cameras x 12 MP x 3 bytes/pixel x 2 fps = 360 MB/s per drone
    • 20 drones x 360 MB/s = 7.2 GB/s total farm
  2. Evaluate cloud processing:

    • Upload bandwidth needed: 7.2 GB/s = 57.6 Gbps
    • Available: 20 drones x 25 Mbps = 500 Mbps (0.5 Gbps)
    • Bandwidth deficit: 115x more data than upload capacity!
    • Even with 10:1 compression: 5.76 Gbps needed, 11.5x shortfall
    • Cloud processing is physically impossible for real-time
  3. Evaluate fog processing (ground station):

    • Wireless to ground: 5.8 GHz link, 100 Mbps per drone = 2 Gbps total
    • Still need: 57.6 Gbps
    • Bandwidth deficit: 29x shortage
    • Per-drone bandwidth: 100 Mbps; each 36 MB image takes ~2.9 seconds to transmit
    • With 10 images/sec per drone, transmit time alone is 29 seconds of airtime per second of capture (29x overload)
    • Fog processing too slow for real-time 500ms requirement
  4. Evaluate edge processing (on-drone):

    • NVIDIA Jetson Xavier NX: $400, 15W power
    • 21 TOPS INT8 inference
    • ResNet-50 inference: ~15ms per image
    • Power: 15W
    • Processing pipeline: Image capture (10ms) + Preprocessing (5ms) + Inference (15ms) + Decision logic (2ms) = 32ms
    • Throughput: 5 cameras x 2 fps = 10 images/sec
    • Required: 10 x 15ms = 150ms compute per second
    • Utilization: 15% (sustainable)
  5. Design hybrid edge-cloud architecture:

    • Real-time (on-drone): Disease detection inference, spray trigger decisions, flight path adjustments
    • Deferred (ground station buffer -> cloud overnight): Full-resolution image archival, detailed analysis for treatment planning, model retraining data collection
  6. Calculate costs:

    • Option A (cloud, if it were possible): compute $3/hour x 2 hours/day x 20 drones x 365 = $43,800/year, plus bandwidth 50.6 TB/day x $0.09/GB x 365 = $1,662,714/year, totaling $1,706,514/year (and still physically impossible)
    • Option B (fog): ruled out in step 3 by the 29x bandwidth shortfall
    • Option C (edge + deferred cloud): drone GPUs $400 x 20 = $8,000 (one-time), ground station $15,000 (one-time), cloud analysis $3/hour x 4 hours/day x 365 = $4,380/year, overnight bandwidth ~100 GB/day x $0.09 x 365 = $3,285/year
    • Year 1: $8,000 + $15,000 + $4,380 + $3,285 = $30,665
    • Year 2+: $7,665/year
  7. Summary decision matrix:

    | Factor | Cloud | Fog | Edge |
    |---|---|---|---|
    | Latency | 13+ hours | 2.9 sec/image | 32ms |
    | Meets 500ms req? | No | No | Yes |
    | Annual cost | $1.7M | N/A | $7,665 |
    | Works offline? | No | Partial | Yes |

Result: Edge processing is the only viable option for real-time crop disease detection. The hybrid architecture uses on-drone inference for immediate decisions (32ms latency) while deferring full-resolution uploads for overnight cloud analysis.

Key Insight: When bandwidth is the bottleneck (which it almost always is for high-resolution imagery), edge processing becomes mandatory regardless of cost. The compute offloading decision is often determined by physics (data size vs link capacity) rather than economics. In this case, even unlimited budget couldn’t make cloud processing work in real-time. Design for edge-first when dealing with high-bandwidth sensors (cameras, lidar, radar) and use fog/cloud for deferred analytics.
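The bandwidth-physics argument in steps 1-4 can be reproduced programmatically; a sketch using the scenario's stated parameters (12 MP frames at 3 bytes/pixel, 20 drones, 25 Mbps uplinks):

```python
# Reproduce the drone worked example: bandwidth deficit and edge pipeline.
DRONES = 20
CAMERAS = 5
MEGAPIXELS = 12
BYTES_PER_PIXEL = 3
FPS = 2

per_drone_mb_s = CAMERAS * MEGAPIXELS * BYTES_PER_PIXEL * FPS   # 360 MB/s
farm_gbps = per_drone_mb_s * DRONES * 8 / 1000                  # 57.6 Gbps needed

uplink_gbps = DRONES * 25 / 1000                                # 0.5 Gbps available (4G LTE)
deficit = farm_gbps / uplink_gbps                               # ~115x shortfall

# On-drone pipeline: capture + preprocess + inference + decision (ms)
pipeline_ms = 10 + 5 + 15 + 2                                   # 32 ms, well under 500 ms
utilization = (CAMERAS * FPS * 15) / 1000                       # 15% of each second in inference

print(f"need {farm_gbps:.1f} Gbps vs {uplink_gbps} Gbps uplink ({deficit:.0f}x deficit); "
      f"edge pipeline {pipeline_ms} ms, compute utilization {utilization:.0%}")
```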

Common Pitfalls and Misconceptions
  • “Edge means no cloud”: A common mistake is treating edge and cloud as mutually exclusive. In practice, every production edge/fog deployment uses a hybrid architecture. Edge handles real-time decisions, but cloud remains essential for model training, long-term analytics, and fleet management. Removing the cloud tier from a smart factory design eliminates the ability to retrain ML models and improve anomaly detection over time.

  • “More edge compute is always better”: Adding powerful GPUs to every edge device increases cost, power consumption, and thermal challenges. An agricultural drone with a $400 Jetson module at 15W is viable; adding a $10,000 GPU server would exceed the drone’s weight and power budget. Right-size edge compute to the actual latency and throughput requirements rather than maximizing capability.

  • “Fog and edge are the same thing”: Fog computing specifically refers to an intermediate layer that aggregates and coordinates data from multiple edge devices. A single traffic camera doing local face detection is edge computing. A roadside unit coordinating 20 vehicles at an intersection is fog computing. Confusing these leads to architectures that miss the coordination benefits of the fog tier.

  • “Privacy is solved by keeping data in the EU”: Simply hosting cloud servers in the EU does not satisfy GDPR’s data minimization principle. Raw video with identifiable faces stored in an EU data center is still a privacy risk and a potential violation. True privacy-preserving architecture requires fog-level processing: blur faces, extract counts, and transmit only anonymized aggregates. Data minimization means collecting less, not just storing it closer.

  • “Latency calculations only need network round-trip time”: Real end-to-end latency includes sensor capture time, preprocessing, inference, decision logic, and actuation. A cloud service with 50ms network latency may have 200ms+ total when including serialization, queuing, and processing. For the autonomous vehicle use case, the 10ms budget includes 5ms sensor fusion + 3ms decision + 2ms actuation, leaving zero margin for network hops. Always calculate the full pipeline, not just the network segment.

Decision flowchart for choosing between edge, fog, and cloud computing. Starting with latency requirement check: under 50ms leads to edge mandatory, 50-500ms leads to fog recommended, over 500ms allows cloud. Then checks bandwidth constraints and privacy regulations to determine final architecture tier.

Edge-Fog-Cloud Decision Framework for IoT Use Cases

Scenario: Calculate monthly bandwidth costs for sending factory sensor data to cloud, comparing cloud-only vs fog-preprocessing approaches.

Factory specifications:

  • 200 temperature sensors: 1 reading/sec, 4 bytes each
  • 50 pressure sensors: 10 readings/sec, 4 bytes each
  • 10 vibration sensors: 1,000 readings/sec (1 kHz), 4 bytes each
  • 5 cameras: 1080p @ 30 fps (H.264 compressed to 4 Mbps each)

Step 1: Calculate raw data rates

| Sensor Type | Count | Rate | Bytes/Sample | Per-Sensor Data Rate | Total Data Rate |
|---|---|---|---|---|---|
| Temperature | 200 | 1 Hz | 4 | 4 bytes/sec | 800 bytes/sec |
| Pressure | 50 | 10 Hz | 4 | 40 bytes/sec | 2,000 bytes/sec |
| Vibration | 10 | 1 kHz | 4 | 4,000 bytes/sec | 40,000 bytes/sec |
| Cameras | 5 | 30 fps | - | 4 Mbps = 500 KB/sec | 2,500 KB/sec |

Total raw: 2,542.8 KB/sec ≈ 2.5 MB/sec

Step 2: Calculate cloud-only monthly costs

Data transfer per month:

2.5 MB/sec × 86,400 sec/day × 30 days = 6,480,000 MB = 6,480 GB ≈ 6.5 TB/month

Cloud bandwidth pricing (AWS example):

  • First 10 TB: $0.09/GB
  • Cost: 6,480 GB × $0.09 = $583.20/month

Step 3: Apply fog preprocessing

| Data Type | Fog Processing | Reduction | Output Rate |
|---|---|---|---|
| Temperature | Send only changes >0.5°C | 95% | 40 bytes/sec |
| Pressure | Send 1-second averages | 90% | 200 bytes/sec |
| Vibration | FFT → send frequency spectrum | 99% | 400 bytes/sec |
| Cameras | Motion detection → send only events | 99.5% | 12.5 KB/sec |

Total with fog: 13.14 KB/sec ≈ 0.013 MB/sec

Step 4: Calculate fog-enabled monthly costs

0.013 MB/sec × 86,400 × 30 = 33,696 MB ≈ 33.7 GB/month
33.7 GB × $0.09 = $3.03/month

Savings: $583.20 - $3.03 = $580.17/month ≈ $6,962/year

Fog gateway hardware cost: $2,000 one-time
Break-even: 2,000 / 580.17 = 3.4 months

Key insight: Fog preprocessing reduces bandwidth costs by 99.5% (2.5 MB/sec → 0.013 MB/sec). The fog gateway hardware pays for itself in under 4 months through bandwidth savings alone.
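Steps 1-4 above can be checked with a short script. Note that it uses the exact per-sensor byte rates rather than the rounded 2.5 MB/s and 0.013 MB/s figures, so the dollar amounts differ from the text by a few percent:

```python
# Reproduce the four-step bandwidth cost comparison with exact rates.
PRICE_PER_GB = 0.09
SECONDS_PER_MONTH = 86_400 * 30

# Raw rates in bytes/sec (Step 1)
raw = {"temperature": 200 * 1 * 4,
       "pressure":    50 * 10 * 4,
       "vibration":   10 * 1000 * 4,
       "cameras":     5 * 500_000}       # 4 Mbps H.264 ≈ 500 KB/s per camera
raw_total = sum(raw.values())            # 2,542,800 B/s ≈ 2.5 MB/s

# Fog output rates in bytes/sec (Step 3)
fog = {"temperature": 40, "pressure": 200, "vibration": 400, "cameras": 12_500}
fog_total = sum(fog.values())            # 13,140 B/s ≈ 13.14 KB/s

def monthly_cost(bytes_per_sec):
    gb = bytes_per_sec * SECONDS_PER_MONTH / 1e9
    return gb * PRICE_PER_GB

cloud_only = monthly_cost(raw_total)     # ≈ $593/month with unrounded rates
with_fog = monthly_cost(fog_total)       # ≈ $3/month
breakeven = 2000 / (cloud_only - with_fog)   # ≈ 3.4 months on a $2,000 gateway
print(f"cloud-only ${cloud_only:.2f}/mo, with fog ${with_fog:.2f}/mo, "
      f"break-even {breakeven:.1f} months")
```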

Use this framework to determine if your IoT deployment requires edge/fog or can use cloud-only:

| Factor | Cloud-Only Viable | Edge/Fog Mandatory | Decision Rule |
|---|---|---|---|
| Response time | >200ms acceptable | <100ms required | Safety-critical systems need local processing |
| Bandwidth cost | <10 GB/month per site | >100 GB/month per site | High-volume data (video, high-frequency sensors) needs local filtering |
| Connectivity | 99.9%+ uptime, <50ms latency | Unreliable, rural, or mobile | Systems must work offline (factories, vehicles, remote monitoring) |
| Data sensitivity | Public or anonymized | PII, health, financial | GDPR/HIPAA require local processing of regulated data |
| Device count | <100 devices per site | >500 devices per site | Many devices benefit from local aggregation |
| Regulatory | No data residency rules | Data must stay in country/building | Compliance drives architecture |

Example scenarios:

Scenario A: Home weather station

  • Response time: Minutes acceptable
  • Bandwidth: 1 reading/min × 100 bytes ≈ 4.3 MB/month
  • Connectivity: Residential Wi-Fi (reliable)
  • Sensitivity: Public weather data
  • Decision: Cloud-only ✓ (no edge/fog needed)

Scenario B: Hospital patient monitoring

  • Response time: <5 seconds for critical alerts
  • Bandwidth: 100 patients × 1 reading/sec × 50 bytes = 432 MB/day
  • Connectivity: Hospital network (reliable but privacy matters)
  • Sensitivity: HIPAA-protected health data
  • Decision: Fog mandatory (local processing for privacy + aggregation)

Scenario C: Autonomous delivery robot

  • Response time: <10ms for obstacle avoidance
  • Bandwidth: LIDAR + cameras = 50 Mbps continuous
  • Connectivity: Cellular (unreliable, high latency)
  • Sensitivity: Location tracking (privacy concern)
  • Decision: Edge mandatory (on-robot processing for all real-time decisions)

Rule of thumb: If 2+ factors indicate “Edge/Fog Mandatory,” cloud-only is insufficient. If all factors indicate “Cloud-Only Viable,” edge/fog adds unnecessary cost.
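One way to operationalize this framework is to count how many factors fall in the "Edge/Fog Mandatory" column; a sketch in which the `Deployment` dataclass and the robot's ~16 TB/month figure (derived from Scenario C's 50 Mbps continuous stream) are illustrative assumptions:

```python
# Count how many factors from the decision table mandate edge/fog.
from dataclasses import dataclass

@dataclass
class Deployment:
    response_ms: float        # required response time
    gb_per_month: float       # data volume per site
    reliable_link: bool       # 99.9%+ uptime, low latency
    regulated_data: bool      # PII / health / financial
    devices: int              # devices per site
    data_residency: bool      # data must stay in country/building

def edge_fog_indicators(d: Deployment) -> int:
    checks = [d.response_ms < 100,        # sub-100ms response required
              d.gb_per_month > 100,       # high-volume data
              not d.reliable_link,        # unreliable connectivity
              d.regulated_data,           # regulated data sensitivity
              d.devices > 500,            # many devices per site
              d.data_residency]           # residency rules
    return sum(checks)

# Scenario C from the text: autonomous delivery robot.
robot = Deployment(response_ms=10, gb_per_month=16_000, reliable_link=False,
                   regulated_data=True, devices=1, data_residency=False)
score = edge_fog_indicators(robot)
print(f"{score} factors mandate edge/fog (rule of thumb: 2+ means cloud-only is insufficient)")
```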

Common Mistake: Assuming “Fog” Means Single Gateway Per Site

The mistake: Deploying one fog gateway per factory/building and assuming it’s sufficient.

Why single-gateway fog fails:

Real scenario: Manufacturing plant with 1,000 sensors across 10 production lines, one fog gateway

Failure modes:

  1. Single point of failure:
    • Gateway hardware fails → all 1,000 sensors go dark
    • Gateway software crashes → entire factory loses visibility
    • Gateway power loss → no local processing during outage
  2. Capacity bottleneck:
    • Gateway handles 500 messages/sec maximum
    • During shift change, 800 devices report simultaneously → 300 messages dropped
    • Video analytics saturates gateway CPU → temperature alerts delayed
  3. Network topology limits:
    • Gateway 200 meters from far production line
    • Wi-Fi range issues cause packet loss
    • Wired Ethernet routing adds 50ms latency

How to design fog correctly:

Distributed fog architecture:

Factory floor layout:
├─ Zone A (Lines 1-3): Fog Gateway A
├─ Zone B (Lines 4-6): Fog Gateway B
├─ Zone C (Lines 7-8): Fog Gateway C
└─ Zone D (Lines 9-10): Fog Gateway D

Each zone gateway:
- Handles 250 sensors locally
- Provides N+1 redundancy (can take over adjacent zone)
- Connects to central coordinator via 10 GbE

Capacity planning:

  • Peak load per zone: 250 sensors × 2 messages/sec = 500 msg/sec
  • Gateway capacity: 1,000 msg/sec → 50% utilization (2× headroom)
  • Failover capacity: 500 sensors × 2 msg/sec = 1,000 msg/sec → still within capacity
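The capacity plan above reduces to a few multiplications; a minimal sketch verifying the headroom and N+1 failover numbers:

```python
# Verify the zone-gateway capacity plan: normal load, headroom, and failover.
SENSORS_PER_ZONE = 250
MSGS_PER_SENSOR = 2          # messages/sec per sensor
GATEWAY_CAPACITY = 1000      # messages/sec per gateway

normal_load = SENSORS_PER_ZONE * MSGS_PER_SENSOR     # 500 msg/s
utilization = normal_load / GATEWAY_CAPACITY         # 50% -> 2x headroom

# Failover: one gateway absorbs an adjacent zone's sensors (N+1 redundancy)
failover_load = 2 * normal_load                      # 1,000 msg/s
assert failover_load <= GATEWAY_CAPACITY, "failover exceeds gateway capacity"

print(f"normal {utilization:.0%} utilization; failover {failover_load}/{GATEWAY_CAPACITY} msg/s")
```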

Cost comparison:

  • Single gateway: $2,000 (single point of failure, insufficient capacity)
  • Four-gateway distributed: 4 × $2,000 = $8,000 (resilient, scales properly)
  • Extra cost: $6,000
  • Value: Eliminates downtime risk ($10,000/hour production loss × 4 hours/year expected gateway downtime = $40,000 annual risk avoided)

Rule of thumb: Deploy 1 fog gateway per 100-500 devices, with N+1 redundancy and cross-zone failover. Never rely on single gateway for production systems.

9.6 Summary and Key Takeaways

Edge and fog computing enable use cases that cloud-only architectures cannot support. From factory floors requiring sub-100ms anomaly detection to autonomous vehicles needing 10ms collision avoidance, local processing is often a fundamental requirement rather than an optimization.

Key takeaways:

  1. Smart factory predictive maintenance: Three-tier architecture achieves 99.9% bandwidth reduction by filtering 1kHz sensor data at edge, running ML anomaly detection at fog, and deferring model training to cloud. Continues operating during internet outages.
  2. Autonomous vehicles require edge processing: Safety-critical collision avoidance demands under 10ms total latency (5ms sensor fusion + 3ms decision + 2ms actuation). Cloud round-trip of 200ms+ means the collision happens before the response arrives. Edge is not optional here.
  3. Privacy compliance demands fog-level processing: GDPR’s data minimization principle requires processing sensitive data locally. Fog nodes anonymize video (face blur, people count) so only aggregate statistics reach the cloud, never raw personally identifiable information.
  4. Bandwidth physics often make the decision: Agricultural drones generating 360 MB/s per unit cannot transmit raw data over 25 Mbps links. When data volume exceeds available bandwidth by 100x+, edge processing is the only viable option regardless of cost considerations.
  5. Hybrid architectures are the production reality: No real-world deployment is purely edge, fog, or cloud. The winning pattern combines real-time edge decisions (under 50ms), regional fog coordination (under 500ms), and deferred cloud analytics (hours to days).

9.7 Knowledge Check

9.8 Concept Relationships

| Concept | Relates To | Relationship Type | Why It Matters |
|---|---|---|---|
| Predictive Maintenance (Use Case 1) | Edge Anomaly Detection | Application domain | Demonstrates 99.9% data reduction (1kHz sensors → event summaries) and sub-100ms anomaly alerts at fog tier |
| Autonomous Vehicles (Use Case 2) | Safety-Critical Latency | Life-or-death constraint | Shows why <10ms edge processing is mandatory; cloud's 200ms+ round-trip means the collision happens before the response |
| Privacy-Preserving Architecture (Use Case 3) | GDPR/HIPAA Compliance | Regulatory driver | Fog-based face blur and anonymization satisfy data minimization requirements that cloud-only violates |
| Agricultural Drone Offloading (Worked Example) | Bandwidth Physics | Cost-benefit analysis | Quantifies how 7.2 GB/s raw imagery makes cloud impossible and edge mandatory despite $8K hardware cost |
| Fog Gateway Redundancy | System Availability | Design pattern | A single fog node is a single point of failure; N+1 redundancy prevents $40K/year downtime risk |
| Hybrid Edge-Cloud Pattern | Production Reality | Architectural norm | All three use cases combine edge (real-time), fog (coordination), and cloud (analytics); a pure single-tier design is an anti-pattern |

9.9 See Also

Explore related chapters to deepen your understanding of edge/fog use case patterns:

9.10 How It Works: Smart Factory Predictive Maintenance Pipeline

The Challenge: A CNC machine generates vibration sensor data at 10 kHz (10,000 readings/second). Raw data rate: 10,000 Hz × 4 bytes = 40 KB/s = 3.5 GB/day per machine. With 100 machines, that is 350 GB/day or about 10.5 TB/month, costing approximately $10,080/year in cloud bandwidth alone. How does fog computing make this viable?

Step-by-Step Architecture:

Step 1: Edge Tier (Machine PLC with Embedded Vibration Sensor)

Each CNC machine has a Programmable Logic Controller (PLC) with direct sensor connection:

# Edge processing: Fast Fourier Transform (FFT) on vibration data
# (read_sensor_burst, get_amplitude_at, calculate_rms, trigger_alarm,
# emergency_stop_if_critical, send_to_fog, and BEARING_THRESHOLD are
# placeholders for the PLC's sensor/actuator API and site configuration)
import numpy as np
from time import sleep
from datetime import datetime

def edge_vibration_processing():
    # Collect 1-second window (10,000 samples)
    vibration_samples = read_sensor_burst(samples=10000, rate=10000)

    # Apply FFT to detect frequency anomalies
    fft_result = np.fft.fft(vibration_samples)
    frequencies = np.fft.fftfreq(10000, 1/10000)

    # Extract key frequency bands
    bearing_freq = get_amplitude_at(fft_result, frequencies, 1200)  # Bearing frequency
    gear_freq = get_amplitude_at(fft_result, frequencies, 3600)     # Gear mesh frequency

    # Local threshold detection (immediate action)
    if bearing_freq > BEARING_THRESHOLD:
        trigger_alarm()  # <10ms local response
        emergency_stop_if_critical()

    # Send only feature vectors to fog (not raw 10,000 samples)
    features = {
        'timestamp': datetime.now().isoformat(),
        'bearing_amplitude': bearing_freq,
        'gear_amplitude': gear_freq,
        'overall_rms': calculate_rms(vibration_samples)
    }
    send_to_fog(features)  # ~100 bytes vs 40 KB raw

# Execute every 1 second
while True:
    edge_vibration_processing()
    sleep(1)

Data reduction at edge: 40 KB raw → 100 bytes features = 99.75% reduction

Step 2: Fog Tier (Factory Floor Gateway - Industrial PC)

Fog gateway aggregates features from 100 machines and runs ML model:

# Fog processing: Machine learning inference for predictive maintenance
# (buffer_last_60_seconds, create_maintenance_work_order, send_to_cloud,
# aggregate_all_machines, and last_hour_data are placeholder hooks for the
# gateway's data buffering and messaging runtime)
from datetime import datetime
import joblib

# Load pre-trained anomaly detection model (trained in cloud, deployed to fog)
anomaly_model = joblib.load('bearing_failure_predictor.pkl')

def fog_aggregation_and_prediction(machine_features):
    # Aggregate last 60 seconds of features from this machine
    feature_window = buffer_last_60_seconds(machine_features)

    # Run ML inference (local, no cloud latency)
    failure_probability = anomaly_model.predict_proba(feature_window)[0][1]

    # Decision logic
    if failure_probability > 0.75:  # High risk of failure within 24 hours
        create_maintenance_work_order(machine_features['machine_id'])
        send_to_cloud({
            'alert_type': 'predictive_maintenance',
            'machine_id': machine_features['machine_id'],
            'failure_probability': failure_probability,
            'recommended_action': 'replace_bearing_before_shift_end'
        })

    # Aggregate hourly summaries for all 100 machines
    if datetime.now().minute == 0:  # Every hour
        hourly_summary = aggregate_all_machines(last_hour_data)
        send_to_cloud(hourly_summary)  # 10 KB for 100 machines

Data reduction at fog: 864 MB/day of incoming features (100 machines × 100 bytes/sec) → 240 KB/day of hourly summaries ≈ 99.97% further reduction

Step 3: Cloud Tier (AWS IoT + ML Training Pipeline)

The cloud receives only alerts and hourly summaries and performs long-term analytics:

# Cloud processing: Model retraining and fleet analytics
def cloud_analytics_pipeline():
    # Aggregate data from 10 factories (1,000 machines total)
    fleet_data = query_last_month_summaries()

    # Identify patterns
    failure_correlation = correlate_failures_with_operating_conditions(fleet_data)
    # Result: "Machines running at >80% capacity fail 3x more often"

    # Retrain the anomaly-detection model weekly on recent labeled failures
    new_training_data = collect_labeled_failure_events(months=3)
    improved_model = train_random_forest_classifier(new_training_data)

    # Deploy updated model to all fog gateways
    for factory in factories:
        deploy_model_to_fog(factory.gateway, improved_model)

Total Data Flow:

| Tier | Data Volume | Reduction vs. raw | Processing |
|------|-------------|-------------------|------------|
| Edge (100 machines) | 3.5 GB/day/machine × 100 = 350 GB/day | - | FFT feature extraction, threshold alerts |
| Fog aggregation | 100 machines × 100 bytes/s = 10 KB/s = 864 MB/day | 99.75% | ML inference, work order generation |
| Cloud receives | Hourly summaries: 24 hours × 10 KB = 240 KB/day | 99.9999% | Model retraining, fleet analytics |
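The reduction percentages above follow directly from the chapter's rates (10 kHz sampling at 4 bytes/sample, 100-byte feature vectors at 1 Hz, 10 KB hourly fleet summaries); a quick sanity check:

```python
SECONDS_PER_DAY = 86_400

raw_per_machine = 10_000 * 4 * SECONDS_PER_DAY   # ~3.46 GB/day of raw vibration samples
edge_out_per_machine = 100 * SECONDS_PER_DAY     # one 100-byte feature vector per second
fleet_raw = 100 * raw_per_machine                # 100 machines -> ~350 GB/day
fleet_edge_out = 100 * edge_out_per_machine      # ~864 MB/day arriving at the fog gateway
cloud_in = 24 * 10_000                           # 24 hourly summaries x 10 KB = 240 KB/day

edge_reduction = 1 - edge_out_per_machine / raw_per_machine   # 0.9975
fog_reduction = 1 - cloud_in / fleet_edge_out                 # ~0.9997
overall = 1 - cloud_in / fleet_raw                            # ~0.999999
```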

Cost Comparison:

| Architecture | Data Transmitted | Monthly Bandwidth Cost | Annual Cost |
|--------------|------------------|------------------------|-------------|
| Cloud-only | 350 GB/day × 30 = 10,500 GB/month | $840/month @ $0.08/GB | $10,080 |
| Edge + Fog | 240 KB/day × 30 = 7.2 MB/month | ~$0.001/month | ~$0.01 |
| Savings | 99.9999% reduction | ~$840/month | ~$10,080/year |

Plus hardware costs:

  • Edge PLCs: Already installed (sunk cost)
  • Fog gateway: $15,000 one-time
  • Payback period: $15,000 / $840/month = 18 months

Key Insight: Edge FFT reduces the raw 10 kHz stream to a 1 Hz feature stream (99.75% reduction) with sub-10ms local alarms, fog ML inference flags bearing failures weeks in advance while still responding in under 100ms, and cloud retraining improves model accuracy over time. No single tier could achieve all three.

9.11 Try It Yourself

Exercise 1: Calculate Bandwidth Requirements for Your Use Case

Scenario: Design an edge/fog architecture for a logistics fleet.

Given:

  • 500 delivery trucks
  • Each truck has: 4 cameras (2 Mbps each), GPS (1 reading/second, 50 bytes), engine sensors (10 readings/second, 100 bytes/reading)
  • Cellular data cost: $5/GB

Your tasks:

  1. Calculate raw data per truck per day (no filtering):
    • Video: 4 cameras × 2 Mbps × 8 hours/day driving = _____ GB/day
    • GPS: 1 Hz × 50 bytes × 8 hours = _____ MB/day
    • Engine: 10 Hz × 100 bytes × 8 hours = _____ MB/day
    • Total per truck: _____ GB/day
  2. Calculate fleet monthly cost (cloud-only):
    • 500 trucks × [your answer] GB/day × 30 days × $5/GB = $_____ /month
  3. Design edge filtering:
    • Video: What events trigger saving clips? (Hint: hard braking, lane departure, near-miss)
    • GPS: Send every reading or only when deviating from route?
    • Engine: Send raw data or only anomalies?
  4. Estimate reduction ratio and new monthly cost

Bonus: What happens if a truck loses cellular connection for 2 hours? How does fog architecture help?
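After filling in the blanks by hand, you can check tasks 1 and 2 with a short script. The function names are illustrative; the rates and prices come straight from the scenario:

```python
MBPS = 1_000_000 / 8  # bytes per second per Mbps

def raw_bytes_per_truck_day(hours=8):
    """Raw daily volume for one truck, with no edge filtering."""
    seconds = hours * 3600
    video = 4 * 2 * MBPS * seconds   # 4 cameras at 2 Mbps each
    gps = 1 * 50 * seconds           # 1 Hz, 50-byte readings
    engine = 10 * 100 * seconds      # 10 Hz, 100-byte readings
    return video + gps + engine

def fleet_monthly_cost(trucks=500, days=30, dollars_per_gb=5.0):
    """Cloud-only cost: every raw byte goes over cellular."""
    gb = raw_bytes_per_truck_day() * trucks * days / 1e9
    return gb * dollars_per_gb
```

Note that video dominates the total by roughly three orders of magnitude, which is why task 3 focuses the filtering design on event-triggered clips.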

Exercise 2: Design a Privacy-Preserving Camera System

Scenario: Retail store wants to count customers and track foot traffic patterns, but GDPR prohibits storing identifiable faces.

Your tasks:

  1. Edge processing: What should the camera do locally before any data leaves the store?

    • Options: Face detection + blur? People counting? Object classification?
  2. Fog aggregation: What does the in-store gateway send to cloud?

    • Raw video with faces blurred?
    • Only metadata: “47 customers entered, average dwell time 12 minutes”?
    • Heatmap of movement patterns (anonymized)?
  3. Cloud analytics: What insights can cloud provide without receiving PII?

  4. Compliance check: Does your design satisfy GDPR’s data minimization principle?
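As a starting point for the edge and fog tasks, here is a minimal sketch of a metadata-only fog payload. The function and event format are illustrative, and the edge people counter that produces the events is assumed; the point is that only aggregates, never frames or identifiers, leave the store:

```python
from collections import Counter

def aggregate_entries(entry_events):
    """entry_events: list of (hour, zone) tuples emitted by an edge people counter.

    Returns an anonymized summary suitable for sending to the cloud:
    no video, no faces, no per-person identifiers.
    """
    by_hour = Counter(hour for hour, _ in entry_events)
    by_zone = Counter(zone for _, zone in entry_events)
    return {
        'total_customers': len(entry_events),
        'entries_by_hour': dict(by_hour),
        'zone_heatmap': dict(by_zone),  # anonymized movement pattern
    }
```

Because the summary contains counts only, it is a reasonable fit for GDPR's data minimization principle, though a real compliance check (task 4) must also cover what the edge device retains locally.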

Exercise 3: Compute Offloading ROI Calculator

Explore how different parameters affect the edge vs. cloud cost tradeoff by computing the payback period for edge hardware:

Decision rule: If payback < 12 months, edge processing is justified.

Try it: Vary the data reduction ratio to find the value needed for a 6-month payback period.
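The calculation reduces to a few lines; the function name and the $0.08/GB default are illustrative:

```python
def payback_months(hardware_cost, raw_gb_per_month, reduction_ratio,
                   dollars_per_gb=0.08):
    """Months to recoup edge/fog hardware from bandwidth savings alone.

    reduction_ratio is the fraction of raw traffic eliminated before
    it reaches the cellular/cloud link (0.0 to 1.0).
    """
    monthly_savings = raw_gb_per_month * reduction_ratio * dollars_per_gb
    return hardware_cost / monthly_savings

# The factory example above: $15,000 gateway, 10,500 GB/month raw traffic
print(round(payback_months(15_000, 10_500, 0.999999), 1))  # prints 17.9
```

If no set of realistic parameters gets the payback under your threshold, that is itself the answer: for that workload, cloud-only is the cheaper architecture.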

9.12 What’s Next

Topic Chapter Description
Common Pitfalls Edge-Fog Pitfalls Common mistakes and anti-patterns in edge/fog deployments, including resource management failures and security oversights
Hands-On Labs Edge-Fog Labs Practical exercises implementing edge and fog computing patterns with real hardware and simulators
Edge AI Applications Edge AI and ML Machine learning inference patterns at edge and fog tiers for real-time decision-making
Bandwidth Optimization Bandwidth Analysis Quantified cost analysis and data reduction strategies for edge-fog architectures