1320  Edge Review: Architecture and Reference Model

1320.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Explain the Edge-Fog-Cloud Continuum: Describe how data flows through multiple processing tiers with increasing latency but decreasing bandwidth requirements
  • Apply the Seven-Level IoT Reference Model: Map processing capabilities, latency characteristics, and use cases to each level
  • Identify Processing Trade-offs: Evaluate latency, bandwidth, processing power, and cost at each architectural tier
  • Design Tiered Architectures: Apply the golden rule of edge computing to determine optimal processing placement

1320.2 Prerequisites

Before studying this chapter, complete:

To build intuition first, think of edge computing like a postal system:

  • Edge (Level 1-3): Your local post office - handles urgent letters quickly, sorts mail before sending
  • Fog (Level 4-5): Regional distribution center - accumulates mail from multiple local offices, makes routing decisions
  • Cloud (Level 6-7): National headquarters - handles complex logistics, long-term records, cross-country coordination

Data flows the same way: urgent processing happens locally, while complex analysis travels to centralized systems.

1320.3 Edge-Fog-Cloud Architecture Overview

The following diagram illustrates the complete edge computing continuum, showing how data flows from sensors through multiple processing tiers to the cloud, with increasing latency but decreasing bandwidth requirements at each level.

Figure 1320.1: Edge-Fog-Cloud Computing Continuum showing data flow from sensors through multiple processing tiers. The diagram illustrates the four-layer architecture: the Edge Layer (Levels 1-2) with sub-millisecond latency handling 28.8 GB/hour of raw sensor data from 500 devices; the Edge Gateway Layer (Level 3) performing downsampling and aggregation with 1-10 ms latency, reducing the stream first to 288 MB/hour and then to 36 MB/hour; the Fog Layer (Levels 4-5) with 10-100 ms latency conducting analytics and ML inference; and the Cloud Layer (Levels 6-7) with 100-500 ms latency for deep analytics and long-term storage. End to end, the diagram shows an 800x data reduction from edge to cloud, demonstrating the fundamental trade-offs: latency increases from low to high, bandwidth requirements decrease from high to low, processing complexity increases from simple to complex, and the cost structure shifts from distributed low-cost edge devices to centralized high-capability cloud infrastructure.

1320.4 Seven-Level IoT Reference Model

The following table summarizes the seven-level IoT reference model, mapping each level to its processing capabilities, latency characteristics, and typical use cases.

| Level | Name | Processing Capabilities | Latency | Data Volume | Typical Use Cases |
|---|---|---|---|---|---|
| 1 | Physical Devices & Controllers | Raw sensing, basic actuation, signal conditioning | <1 ms | Very High (GB/hour) | Sensor sampling, emergency shutoffs, real-time control |
| 2 | Connectivity | Protocol translation, network routing, device addressing | <1 ms | Very High | Data transmission, network protocols, device communication |
| 3 | Edge Computing (Fog) | Evaluation (filtering), formatting (standardization), distillation (aggregation), assessment (thresholds) | 1-10 ms | High (MB/hour) | Data reduction (100-1000x), downsampling, statistical aggregation, anomaly detection |
| 4 | Data Accumulation | Time-series storage, buffer management, data persistence | 10-100 ms | Medium | Local databases, recent data cache, query processing |
| 5 | Data Abstraction | Data modeling, semantic integration, format conversion | 10-100 ms | Medium | Data normalization, schema mapping, API abstraction |
| 6 | Application | Business logic, analytics, ML inference | 100-500 ms | Low (MB/day) | Dashboards, reporting, predictive models |
| 7 | Collaboration & Processes | Cross-system integration, workflow automation, enterprise services | 100-500 ms | Very Low | ERP integration, business processes, multi-tenant services |
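
For readers who prefer code to tables, the sketch below captures the reference model as a small Python data structure. The level names and latency bands are transcribed from the table above; the `Level` class, the helper function, and the 50 ms example deadline are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    number: int
    name: str
    max_latency_ms: float  # upper bound of the typical latency band

# Latency bands transcribed from the reference-model table above.
IOT_REFERENCE_MODEL = [
    Level(1, "Physical Devices & Controllers", 1),
    Level(2, "Connectivity", 1),
    Level(3, "Edge Computing (Fog)", 10),
    Level(4, "Data Accumulation", 100),
    Level(5, "Data Abstraction", 100),
    Level(6, "Application", 500),
    Level(7, "Collaboration & Processes", 500),
]

def levels_meeting(deadline_ms: float) -> list[Level]:
    """Return the levels whose typical latency band fits within a deadline."""
    return [lvl for lvl in IOT_REFERENCE_MODEL if lvl.max_latency_ms <= deadline_ms]

# Example: a 50 ms control-loop deadline rules out Levels 6-7.
for lvl in levels_meeting(50):
    print(f"Level {lvl.number}: {lvl.name}")
```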

1320.4.1 Key Data Reduction Example

Scenario: 500 vibration sensors, 1 kHz sampling, 16-byte readings

| Processing Stage | Data Rate | Cumulative Reduction | Operations Applied |
|---|---|---|---|
| Raw Sensors (Level 1) | 28.8 GB/hour | Baseline | None |
| After Downsampling (Level 3) | 288 MB/hour | 100x | Sampling rate: 1 kHz to 10 Hz |
| After Aggregation (Level 3) | 36 MB/hour | 800x | Spatial summarization: 100 sensors (1,600 bytes) to one 200-byte summary (8:1) |

Cost Impact: roughly $25,000/year saved in cloud ingress costs at $0.10/GB (28.8 GB/hour x 8,760 hours ≈ 252,000 GB/year of raw data avoided)
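
The figures above can be verified with a few lines of arithmetic. The sketch below recomputes them directly from the scenario's parameters; the constants come from the scenario, while the variable names are invented for readability.

```python
# Scenario parameters from Section 1320.4.1.
SENSORS = 500
RAW_HZ = 1_000          # 1 kHz sampling
READING_BYTES = 16
DOWNSAMPLED_HZ = 10     # after Level 3 downsampling
GROUP_SIZE = 100        # sensors per aggregated summary
SUMMARY_BYTES = 200
SECONDS_PER_HOUR = 3_600

raw = SENSORS * RAW_HZ * READING_BYTES * SECONDS_PER_HOUR             # bytes/hour
downsampled = SENSORS * DOWNSAMPLED_HZ * READING_BYTES * SECONDS_PER_HOUR
aggregated = (SENSORS // GROUP_SIZE) * DOWNSAMPLED_HZ * SUMMARY_BYTES * SECONDS_PER_HOUR

print(f"raw:         {raw / 1e9:>7.1f} GB/hour")          # 28.8 GB/hour
print(f"downsampled: {downsampled / 1e6:>7.0f} MB/hour")  # 288 MB/hour
print(f"aggregated:  {aggregated / 1e6:>7.0f} MB/hour")   # 36 MB/hour
print(f"total reduction: {raw / aggregated:,.0f}x")       # 800x

# Annual ingress cost avoided at $0.10/GB, 8,760 hours/year:
savings = (raw - aggregated) / 1e9 * 8_760 * 0.10
print(f"ingress savings: ${savings:,.0f}/year")           # about $25,200/year
```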

1320.4.2 Processing Trade-off Summary

| Factor | Edge Layer | Fog Layer | Cloud Layer |
|---|---|---|---|
| Latency | <1 ms (best) | 10-100 ms (moderate) | 100-500 ms (highest) |
| Bandwidth Demand | Very High (worst) | Medium | Low (best) |
| Processing Power | Limited | Moderate | Effectively unlimited |
| Data Retention | Seconds to minutes | Hours to days | Effectively unlimited |
| Cost per Node | Low | Medium | High (centralized) |
| Scalability | Distributed | Regional | Global |
| Use Cases | Real-time control, safety | Analytics, ML inference | Training, long-term storage |

1320.5 Architecture Design Principle

The Golden Rule of Edge Computing: Process data as close to the source as possible, but only as close as necessary. A minimal decision sketch follows the tier list below.

  • Edge (Level 1-3): Latency-critical operations, data reduction, real-time decisions
  • Fog (Level 4-5): Regional analytics, ML inference, medium-term storage
  • Cloud (Level 6-7): Deep analytics, model training, historical analysis, global coordination
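
One way to make the golden rule concrete is a simple placement heuristic. The function below is an illustrative sketch: the latency thresholds are taken from this chapter's tier bands, but the function signature, flag names, and tier labels are assumptions made for the example, not a prescribed API.

```python
def place_processing(deadline_ms: float, needs_history: bool = False,
                     needs_training: bool = False) -> str:
    """Pick the lowest tier that satisfies the requirement (the golden rule)."""
    if needs_training or needs_history:
        return "cloud (Level 6-7)"   # model training / historical analysis
    if deadline_ms < 10:
        return "edge (Level 1-3)"    # latency-critical, must run at the source
    if deadline_ms < 100:
        return "fog (Level 4-5)"     # regional analytics, ML inference
    return "cloud (Level 6-7)"       # anything slower can centralize

print(place_processing(1))                        # edge (Level 1-3)
print(place_processing(50))                       # fog (Level 4-5)
print(place_processing(300, needs_history=True))  # cloud (Level 6-7)
```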

1320.6 Knowledge Check: Architecture Concepts

Question: A factory deploys 500 vibration sensors sampling at 1 kHz (1000 Hz) with 16-byte readings. An edge gateway downsamples to 10 Hz and aggregates 100 sensors into summary statistics (200 bytes per aggregation). What is the data reduction from sensor to cloud per hour?

Explanation: Let's calculate the data reduction through Level 3 edge processing:

Raw Sensor Data (Level 1):

  • 500 sensors x 1000 Hz x 16 bytes = 8,000,000 bytes/second = 8 MB/s
  • Per hour: 8 MB/s x 3600 seconds = 28,800 MB/hour = 28.8 GB/hour

After Downsampling (Level 3 - Gateway):

  • Downsample 1000 Hz to 10 Hz (100x reduction in frequency)
  • 500 sensors x 10 Hz x 16 bytes = 80,000 bytes/second = 80 KB/s
  • Per hour: 80 KB/s x 3600 = 288,000 KB = 288 MB/hour

After Aggregation (Level 3 - Gateway):

  • Aggregate each group of 100 sensors into 1 summary record (200 bytes): 100 x 16 = 1,600 raw bytes become 200 bytes, an 8:1 reduction
  • Number of groups: 500 sensors / 100 = 5 groups
  • 5 groups x 10 Hz x 200 bytes = 10,000 bytes/second = 10 KB/s
  • Per hour: 10 KB/s x 3600 = 36,000 KB = 36 MB/hour

Data Reduction: 28,800 MB/hour / 36 MB/hour = 800x reduction (100x from downsampling x 8x from aggregation)

This demonstrates Level 3 Edge Computing capabilities:

  1. Distillation/Reduction: Downsample high-frequency data (1 kHz to 10 Hz)
  2. Aggregation: Combine multiple sensor streams into statistical summaries
  3. Formatting: Standardize output format for cloud consumption
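
To make the fixed-size summary record concrete, here is a minimal sketch of a Level 3 aggregation step. The field layout (group id, timestamp, count, min/max/mean/stdev packed with struct) is an assumption for illustration; the chapter does not specify the summary format.

```python
import statistics
import struct

def summarize_group(readings: list[float], group_id: int, timestamp: float) -> bytes:
    """Collapse one group's readings into a fixed-size binary summary.

    Layout (illustrative): group id, timestamp, count, min, max, mean, stdev.
    """
    return struct.pack(
        "<Id I 4d",  # little-endian: uint32, float64, uint32, 4x float64
        group_id,
        timestamp,
        len(readings),
        min(readings),
        max(readings),
        statistics.fmean(readings),
        statistics.pstdev(readings),
    )

# 100 simulated vibration readings from one sensor group, one sample interval.
readings = [0.5 + 0.01 * i for i in range(100)]
summary = summarize_group(readings, group_id=3, timestamp=1_700_000_000.0)
print(len(summary), "bytes")  # 48 bytes, versus 100 x 16 = 1,600 raw bytes
```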

Question: An edge gateway receives data from 200 sensors. Level 3 processing applies evaluation (filtering 20% of bad data), formatting (standardizing), and distillation (aggregating 10 readings into 1 summary). The gateway buffer holds 100 readings. What happens when the 101st reading arrives before aggregation runs?

Explanation: This demonstrates Level 3 Edge Computing buffer management using FIFO (First In, First Out) eviction; a minimal code sketch follows the steps below.

When the buffer reaches capacity (100 readings):

  1. New reading (101st) arrives
  2. Oldest reading is removed from the buffer
  3. New reading is appended to buffer
  4. Buffer size remains at 100
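
This drop-oldest behavior is exactly what a fixed-capacity ring buffer provides. In Python, collections.deque with maxlen gives it for free; a minimal sketch, with the capacity of 100 taken from the question:

```python
from collections import deque

buffer = deque(maxlen=100)  # fixed-capacity FIFO: append to a full deque evicts the oldest

for i in range(101):
    buffer.append(i)  # the 101st append silently drops reading 0

print(len(buffer))   # 100  (size stays bounded)
print(buffer[0])     # 1    (oldest surviving reading; reading 0 was evicted)
print(buffer[-1])    # 100  (newest reading)
```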

Why FIFO is appropriate for Level 3 Edge:

  1. Recency priority: Recent data is more relevant for real-time analytics
  2. Graceful degradation: System continues operating under high load
  3. No retries needed: Avoids network congestion from retransmissions
  4. Memory bounded: Prevents memory exhaustion on resource-constrained edge devices

Mitigation strategies for high-velocity data:

  • Increase buffer size (1000 instead of 100)
  • Faster aggregation cycles
  • Multi-level buffering per sensor type
  • Backpressure signaling to sensors
  • Priority queueing for critical/anomaly data
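
The last two mitigations can be combined in a few lines: keep a separate bounded queue for critical readings so that bursts of routine data can never evict an anomaly. The two-queue split, capacities, and class name below are illustrative assumptions, not a prescribed design.

```python
from collections import deque

class TieredBuffer:
    """Two bounded FIFO queues: critical readings never compete with routine ones."""

    def __init__(self, normal_cap: int = 100, critical_cap: int = 1000):
        self.normal = deque(maxlen=normal_cap)
        self.critical = deque(maxlen=critical_cap)

    def push(self, reading: float, is_anomaly: bool) -> None:
        (self.critical if is_anomaly else self.normal).append(reading)

    def drain(self) -> list[float]:
        """Emit critical readings first, then whatever routine data survived."""
        out = list(self.critical) + list(self.normal)
        self.critical.clear()
        self.normal.clear()
        return out

buf = TieredBuffer()
for i in range(500):
    buf.push(float(i), is_anomaly=(i % 100 == 0))  # every 100th reading flagged
print(len(buf.drain()))  # 105: all 5 anomalies plus the 100 newest routine readings
```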

1320.7 Common Misconception: Edge Equals Offline Processing

The Misconception: Many students believe edge computing means devices process data completely independently without cloud connectivity.

Reality - Hybrid Edge-Cloud Model:

Edge computing is about intelligent data reduction and latency-critical processing, not about replacing the cloud. A representative division of labor looks like this:

  • Edge processing: 95% of data volume (filtered locally)
  • Cloud transmission: 5% of data (aggregated, important events)
  • Cloud computation: 80% of ML training (requires historical data)
  • Edge inference: 20% of ML (simple threshold-based decisions)

When to Use Each:

  • Edge: Real-time safety shutdowns (<10 ms), data reduction (100-1000x), privacy filtering
  • Cloud: ML model training, historical analytics, cross-site correlation, firmware updates
  • Wrong approach: Trying to do all ML training on edge devices, or sending all raw sensor data to cloud

Cost Impact of Misconception: Companies over-investing in edge infrastructure waste $50,000-$200,000 per deployment site, while companies under-utilizing edge spend $25,000-$80,000/year in unnecessary cloud ingress costs.

1320.8 Chapter Summary

  • The Edge-Fog-Cloud Continuum provides progressive data processing where latency increases (under 1 ms at the edge to 100-500 ms in the cloud) while bandwidth requirements decrease dramatically through data reduction at each tier.

  • The Seven-Level Reference Model guides processing decisions: Levels 1-2 handle physical sensing and connectivity, Level 3 performs edge computing (filtering, aggregation, format standardization), Levels 4-5 provide fog-layer storage and abstraction, and Levels 6-7 enable cloud analytics and enterprise integration.

  • Processing trade-offs must balance latency requirements, bandwidth constraints, processing power availability, data retention needs, and cost considerations when determining where to place computation.

  • The Golden Rule states: process data as close to the source as possible, but only as close as necessary, based on latency requirements and processing complexity.

1320.9 What's Next

Continue to Edge Review: Data Reduction Calculations to learn how to calculate bandwidth savings and apply aggregation strategies for industrial IoT deployments.

Related chapters in this review series: