19 Edge, Fog, and Cloud: Summary
19.1 Learning Objectives
By the end of this chapter, you will be able to:
- Synthesize Tier Responsibilities: Explain when and why processing should happen at edge, fog, or cloud level using concrete latency, bandwidth, and cost criteria
- Diagnose Architecture Anti-Patterns: Identify at least five common mistakes in edge-fog-cloud designs and prescribe corrective actions
- Apply the Decision Framework: Use a structured process to assign workloads to the correct tier based on measurable requirements
- Evaluate Trade-Offs Quantitatively: Compare deployment architectures using specific metrics such as round-trip latency, monthly bandwidth cost, and uptime requirements
- Design a Migration Path: Outline the incremental steps to evolve a cloud-only architecture toward a hybrid edge-fog-cloud design, justifying the order of each transition
This summary brings together the key concepts about where to process IoT data: at the device (edge), at a nearby hub (fog), or in a remote data center (cloud). Think of it as the CliffsNotes version of a long book – it highlights the most important decisions you need to make and shows when each approach is the best fit.
19.2 Prerequisites
Before diving into this chapter, you should have completed:
- Edge-Fog-Cloud Introduction: Three-tier concept
- Edge-Fog-Cloud Architecture: Layer details
- Edge-Fog-Cloud Devices and Integration: Device patterns
- Edge-Fog-Cloud Advanced Topics: Worked examples
If you are short on time, focus on these three essential takeaways from the entire Edge-Fog-Cloud series:
The 50-500-5000 Rule: Safety-critical decisions requiring <50ms response must execute at the edge. Operational intelligence needing <500ms latency belongs at the fog. Analytical workloads tolerating seconds-to-minutes of latency run in the cloud. Mapping every workload to the correct tier is the single most important architectural decision.
Data Reduction is Non-Negotiable: Fog nodes must reduce upstream data volume by 90-99% before it reaches the cloud. A factory with 1,000 sensors generating 100 MB/s of raw data should send only 1-10 MB/s to the cloud. Failing to filter at the fog turns bandwidth costs into the dominant expense of the entire system.
Design for Offline-First: Every fog and edge node must continue functioning when the internet connection drops. If your architecture treats cloud connectivity as always-available, your system will fail at the worst possible moment – during the network outage when a safety event occurs. Store-and-forward, local decision rules, and graceful degradation are mandatory, not optional.
Buffer sizing for offline operation: \(\text{buffer}_\text{bytes} = \text{rate}_\text{bytes/s} \times \text{duration}_\text{s}\). Worked example: 1,000 sensors at 100 bytes/s each generate 100 KB/s. For 24-hour autonomy: \(100{,}000 \times 86{,}400 = 8.64\text{ GB}\) required. An ESP32 with 4 MB flash can buffer only 40 seconds; a Raspberry Pi with a 16 GB SD card handles 160,000 seconds (44 hours); an industrial gateway with a 256 GB SSD, 2.56 million seconds (29 days). Choose hardware capacity to match your offline duration requirement.
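Under these assumptions, the sizing arithmetic can be scripted as a minimal sketch (function names are illustrative):

```python
# Buffer sizing for offline operation: buffer_bytes = rate * duration.
# Device capacities below are the illustrative figures from the text.

def required_buffer_bytes(rate_bytes_per_s: float, duration_s: float) -> float:
    """Bytes of local storage needed to ride out an outage of duration_s."""
    return rate_bytes_per_s * duration_s

def autonomy_seconds(capacity_bytes: float, rate_bytes_per_s: float) -> float:
    """How long a device can keep buffering at the given ingest rate."""
    return capacity_bytes / rate_bytes_per_s

# 1,000 sensors at 100 bytes/s each = 100 KB/s aggregate
rate = 1_000 * 100

print(required_buffer_bytes(rate, 86_400))   # 24 h autonomy -> 8.64 GB
print(autonomy_seconds(4 * 10**6, rate))     # ESP32, 4 MB flash -> 40 s
print(autonomy_seconds(16 * 10**9, rate))    # Raspberry Pi, 16 GB SD -> 160,000 s
```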
Sammy the Sensor says: “We have learned so much about our three-level team! Let me tell you a story that puts it all together.”
The Big Day at the Smart Farm:
Sammy (the edge sensor) is out in the field measuring soil moisture. Suddenly, the reading drops to zero – the soil is bone dry!
Step 1 - Edge (Sammy): “ALERT! Moisture is critically low! I will turn on the water sprinkler RIGHT NOW – no time to wait!” Sammy activates the sprinkler in 10 milliseconds. This is the edge doing its job: instant reactions for urgent situations.
Step 2 - Fog (Bella the Gateway): Bella receives Sammy’s alert along with data from 20 other sensors across the farm. She notices that only Sammy’s field is dry – the others are fine. She decides: “I will run the sprinkler for exactly 15 minutes, then check again.” She also notices the weather forecast says rain is coming tomorrow. She adjusts: “Actually, only 10 minutes of watering needed.” This is the fog: making smart local decisions with nearby data.
Step 3 - Cloud (Max the Cloud): Max receives Bella’s summary report (not all the raw data – just the important stuff). Max looks at the last 3 years of farm data and discovers: “This field always dries out in January! We should install a drip irrigation system there.” This is the cloud: finding big patterns over long time periods.
The Lesson: Each level does what it does best! Sammy reacts instantly, Bella makes smart local choices, and Max plans for the future. Together, they are unstoppable!
19.3 Comprehensive Tier Review
This section consolidates the key knowledge from all five chapters in the Edge-Fog-Cloud series into a single reference.
19.3.1 Tier Comparison Matrix
| Dimension | Edge | Fog | Cloud |
|---|---|---|---|
| Latency | <10ms (on-device) | 10-100ms (local network) | 100-500ms (internet round-trip) |
| Compute Power | MCU/MPU (MHz-GHz, KB-MB RAM) | SBC/gateway (GHz, 1-8 GB RAM) | Datacenter (unlimited, elastic) |
| Storage | KB to MB (flash, EEPROM) | GB to TB (SSD, NAS) | PB+ (object storage, databases) |
| Bandwidth Cost | None (local) | Low (LAN) | High ($0.05-0.12/GB egress) |
| Reliability | Operates offline | Operates during WAN outages | Requires internet connectivity |
| Security Surface | Physical tampering, side-channel | Network attacks, gateway compromise | API abuse, data breaches |
| Typical Hardware | ESP32, STM32, Arduino | Raspberry Pi, Nvidia Jetson, industrial gateways | AWS, Azure, GCP |
| Example Workloads | Threshold alerts, sensor fusion, actuation | Aggregation, protocol translation, local ML inference | Training ML models, dashboards, long-term analytics |
19.3.2 Architecture Decision Flowchart
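The flowchart figure is not reproduced here. As a minimal sketch of the same decision logic, using the thresholds from the 50-500-5000 rule (the function and enum names are illustrative):

```python
from enum import Enum

class Tier(Enum):
    EDGE = "edge"
    FOG = "fog"
    CLOUD = "cloud"

def assign_tier(latency_budget_ms: float,
                must_survive_wan_outage: bool = False,
                needs_cross_device_context: bool = False) -> Tier:
    """Tier assignment following the 50-500-5000 rule."""
    if latency_budget_ms < 50 and not needs_cross_device_context:
        return Tier.EDGE    # safety-critical, on-device reaction
    if latency_budget_ms < 500 or must_survive_wan_outage:
        return Tier.FOG     # operational intelligence on the local network
    return Tier.CLOUD       # analytical workloads tolerating seconds+

print(assign_tier(10))                                     # Tier.EDGE
print(assign_tier(100, needs_cross_device_context=True))   # Tier.FOG
print(assign_tier(5000))                                   # Tier.CLOUD
```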
19.3.3 Data Flow Across Tiers
19.4 Common Anti-Patterns and Corrections
Understanding what NOT to do is just as important as knowing best practices. The following anti-patterns were collected from real-world IoT deployments.
Anti-Pattern 1: Cloud-First by Default
The Mistake: Architects send all sensor data to the cloud by default, treating fog/edge as optional optimizations to add later.
Why It Happens: Cloud computing is familiar and well-documented. Teams assume cloud is the “safe choice” since it scales elastically. They plan to add edge/fog “if latency becomes a problem.”
The Fix: Start with a requirements analysis that explicitly evaluates latency (is <100ms needed?), bandwidth cost (will transmission costs exceed $1,000/month?), and reliability (must the system function during internet outages?). Design for the most demanding requirement first. If any of these constraints exist, architect fog/edge from day one – retrofitting distributed processing into a cloud-centric design is 3-5x more expensive than building it correctly initially.
Anti-Pattern 2: Treating All Data Equally
The Mistake: Teams process all sensor readings through the same pipeline regardless of urgency – either sending everything to cloud, or processing everything locally.
Why It Happens: Building separate data paths for different urgency levels requires more architecture work. Teams simplify by using one pipeline, assuming “we can optimize later.”
The Fix: Classify data into three urgency tiers from project start: (1) Safety-critical requiring <50ms response stays at edge, (2) Operational data needing <500ms goes to fog for aggregation, (3) Analytical data tolerating seconds-to-minutes latency routes to cloud. Implement tiered routing in your message broker (e.g., MQTT topic hierarchy) so critical alerts bypass fog queues while bulk telemetry gets filtered. This 10% additional design effort prevents 90% of latency complaints in production.
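The tiered routing described above can be sketched as a pure classification step. Topic names, thresholds, and QoS assignments below are illustrative, not a prescribed scheme:

```python
# Tiered routing sketch: classify each reading by latency budget, then
# map it to an MQTT topic prefix and QoS level.

def classify(latency_budget_ms: float) -> str:
    if latency_budget_ms < 50:
        return "critical"      # handled at the edge; alert published for audit
    if latency_budget_ms < 500:
        return "operational"   # aggregated at the fog
    return "analytical"        # bulk telemetry, filtered before the cloud

ROUTING = {
    # urgency tier:  (topic prefix,          QoS)
    "critical":      ("factory/alerts/",     2),  # bypasses fog queues
    "operational":   ("factory/telemetry/",  1),
    "analytical":    ("factory/bulk/",       0),
}

def route(sensor_id: str, latency_budget_ms: float) -> tuple[str, int]:
    prefix, qos = ROUTING[classify(latency_budget_ms)]
    return prefix + sensor_id, qos

print(route("press-07", 20))    # ('factory/alerts/press-07', 2)
print(route("temp-12", 5000))   # ('factory/bulk/temp-12', 0)
```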
Anti-Pattern 3: Ignoring Offline Operation
The Mistake: The system is designed assuming constant cloud connectivity. When the internet drops, edge and fog devices stop functioning or lose data.
Why It Happens: Developers test in lab environments with reliable Wi-Fi. Production environments – remote farms, factory floors, moving vehicles – have intermittent connectivity that was never simulated.
The Fix: Implement store-and-forward at every tier. Edge devices buffer at least 24 hours of critical data locally. Fog nodes maintain a local decision ruleset that operates independently of cloud. Use message queues (MQTT with QoS 1 or 2) that guarantee delivery after reconnection. Test your system by physically disconnecting the WAN for 4 hours and verifying that safety-critical operations continue.
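The store-and-forward behavior can be sketched as follows. This is an in-memory stand-in for illustration only; a real fog node would persist the buffer to flash or SSD and use MQTT QoS 1/2 for the uplink:

```python
# Minimal store-and-forward sketch: readings accumulate in a bounded local
# buffer while the WAN is down and are flushed in arrival order on reconnect.
from collections import deque

class StoreAndForward:
    def __init__(self, capacity: int, send):
        self.buffer = deque(maxlen=capacity)  # oldest dropped when full
        self.send = send                      # uplink function, e.g. MQTT publish
        self.online = False

    def ingest(self, reading):
        if self.online:
            self.send(reading)
        else:
            self.buffer.append(reading)       # buffer during the outage

    def reconnect(self):
        self.online = True
        while self.buffer:                    # drain in order
            self.send(self.buffer.popleft())

sent = []
node = StoreAndForward(capacity=1000, send=sent.append)
node.ingest({"t": 1, "moisture": 12})         # offline: buffered
node.ingest({"t": 2, "moisture": 11})
node.reconnect()                              # flushed on reconnect
node.ingest({"t": 3, "moisture": 10})         # online: sent directly
print(sent)                                   # three readings, in order
```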
Anti-Pattern 4: Symmetric Security Across Tiers
The Mistake: Teams apply the same security model to all tiers – for example, relying solely on TLS everywhere and assuming that protects the system.
Why It Happens: Security is treated as a single checkbox (“we use encryption”) rather than as a layered strategy adapted to each tier’s unique threat model.
The Fix: Apply defense-in-depth with tier-specific controls:
- Edge: Secure boot, firmware signing, hardware crypto (TPM/secure element), physical tamper detection
- Fog: Network segmentation, mutual TLS between fog and cloud, intrusion detection, encrypted local storage
- Cloud: API rate limiting, OAuth2/JWT authentication, data encryption at rest, audit logging, DDoS protection
Each tier faces different threats and requires different countermeasures.
Anti-Pattern 5: Monolithic Fog Design
The Mistake: A single fog node handles all gateway functions – protocol translation, data aggregation, local ML inference, device management, and security enforcement – in one tightly coupled application.
Why It Happens: Initial prototypes are simple. Teams deploy one application on the gateway and keep adding features without refactoring into modules.
The Fix: Containerize fog functions using Docker or similar technology. Run protocol translation, data processing, and ML inference as separate containers. This allows independent scaling (add more ML containers without affecting protocol translation), independent updates (patch one function without restarting others), and graceful degradation (if ML inference crashes, data forwarding continues).
19.5 Self-Assessment Checklist
Use this checklist to verify your understanding of the entire Edge-Fog-Cloud series. Each item corresponds to a key concept from one of the five chapters.
| # | Concept | Can You Explain It? | Source Chapter |
|---|---|---|---|
| 1 | Why a three-tier architecture is needed instead of cloud-only | ☐ | Introduction |
| 2 | The 50-500-5000 latency rule for tier selection | ☐ | Introduction |
| 3 | Specific hardware examples for each tier (MCU, SBC, datacenter) | ☐ | Architecture |
| 4 | How protocol translation works at the fog layer | ☐ | Architecture |
| 5 | Selection criteria for edge devices (power, cost, compute) | ☐ | Devices & Integration |
| 6 | When to use Raspberry Pi vs Nvidia Jetson as fog node | ☐ | Devices & Integration |
| 7 | How CAP theorem applies to edge-fog-cloud data synchronization | ☐ | Advanced Topics |
| 8 | Service discovery using mDNS for local and Consul for global | ☐ | Advanced Topics |
| 9 | How to calculate bandwidth savings from fog-level filtering | ☐ | Summary (this chapter) |
| 10 | Five common anti-patterns and their fixes | ☐ | Summary (this chapter) |
19.6 Practice Questions
19.7 Question 1: Tier Selection
A factory has 500 vibration sensors sampling at 10 kHz. The system must detect bearing failures within 100ms to prevent equipment damage. Raw data volume is 50 MB/s. Where should bearing failure detection run?
- A) Cloud – it has the most compute power for ML-based anomaly detection
- B) Edge – each sensor processes its own data for sub-millisecond response
- C) Fog – aggregates multiple sensor streams and runs local ML inference within the latency budget
- D) Split equally: 50% edge, 50% cloud for redundancy
C) Fog is the correct answer.
- The 100ms latency requirement eliminates the cloud (100-500ms round-trip just for network transit)
- Individual edge sensors lack the compute power to run ML-based anomaly detection and cannot correlate across multiple sensors
- A fog gateway (e.g., Nvidia Jetson) can aggregate streams from multiple vibration sensors, run local ML inference, and detect bearing failures within the 100ms budget
- The fog also reduces the 50 MB/s raw data to summary reports sent to the cloud for long-term trend analysis
Why not B? While edge processing offers the lowest latency, bearing failure detection typically requires correlating data from multiple sensors on the same machine. A single edge sensor cannot see patterns across its neighbors. If the requirement were per-sensor threshold alerting (e.g., “vibration exceeds 5g”), then edge would be correct.
19.8 Question 2: Bandwidth Economics
An IoT deployment has 1,000 cameras generating 2 Mbps each. Cloud egress costs $0.09/GB. Without fog filtering, what is the approximate monthly cloud bandwidth cost?
- A) $5,900/month
- B) $59,000/month
- C) $590/month
- D) $590,000/month
B) $59,000/month is the correct answer.
Calculation:
- Total bandwidth: 1,000 cameras x 2 Mbps = 2,000 Mbps = 2 Gbps
- Per day: 2 Gbps x 86,400 seconds = 172,800 Gb = ~21,600 GB/day (converting bits to bytes: 2 Gbps = 250 MB/s; 250 MB/s x 86,400 s = 21,600,000 MB = ~21,600 GB)
- Per month: 21,600 GB/day x 30 days = 648,000 GB/month
- Cost: 648,000 GB x $0.09/GB = $58,320/month (approximately $59,000)
This calculation demonstrates why fog-level filtering is non-negotiable for video IoT. If fog nodes reduce data by 95% (sending only motion-detected clips), the cost drops to ~$2,900/month – a savings of over $56,000 monthly.
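The arithmetic above can be checked with a short script. The function name and the decimal-GB convention are assumptions for illustration:

```python
# Monthly cloud egress cost for a fleet of streaming devices, with an
# optional fog-filtering reduction factor. Prices are the question's figures.

def monthly_egress_cost(n_devices: int, mbps_each: float,
                        price_per_gb: float, days: int = 30,
                        reduction: float = 0.0) -> float:
    """Cost in dollars; `reduction` is the fraction filtered out at the fog."""
    bytes_per_s = n_devices * mbps_each * 1e6 / 8      # Mbps -> bytes/s
    gb_per_month = bytes_per_s * 86_400 * days / 1e9   # decimal GB
    return gb_per_month * (1 - reduction) * price_per_gb

print(round(monthly_egress_cost(1000, 2, 0.09)))                  # 58320
print(round(monthly_egress_cost(1000, 2, 0.09, reduction=0.95)))  # 2916
```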
19.9 Question 3: Offline Operation
A smart agriculture system monitors soil moisture across 200 hectares. The rural location has cellular connectivity that drops for 2-4 hours daily. Which architecture ensures irrigation continues during outages?
- A) Cloud-only with retry logic – the system queues irrigation commands and executes them when connectivity returns
- B) Edge-only – each sensor independently controls its nearest sprinkler valve based on local thresholds
- C) Fog-first with store-and-forward – a local gateway runs irrigation schedules independently, syncs with cloud when connected
- D) Dual-cloud with failover – primary cloud in Region A, backup in Region B for high availability
C) Fog-first with store-and-forward is the correct answer.
- Why not A? Cloud-only with retry means no irrigation for 2-4 hours daily. In hot weather, this could damage crops. The retry queue only helps after reconnection – it does not solve the immediate need.
- Why not B? Pure edge (per-sensor thresholds) works for emergency watering but cannot implement coordinated irrigation schedules, zone rotation, or weather-adjusted timing. Each sensor acts independently without awareness of the broader farm state.
- Why C? A fog gateway stores the irrigation schedule locally, reads sensor data over the local network (no internet needed), and makes intelligent decisions (adjusting for weather forecasts cached before the outage, coordinating zones to avoid water pressure drops). When connectivity returns, it syncs logs and updated schedules with the cloud.
- Why not D? Dual-cloud does not solve the problem – both cloud regions are equally unreachable when the local cellular connection drops. The bottleneck is the last-mile connectivity, not cloud availability.
19.10 Question 4: Security-in-Depth
Which of the following correctly applies tier-specific security controls?
- A) Edge: OAuth2 tokens, Fog: API rate limiting, Cloud: secure boot
- B) Edge: secure boot + hardware crypto, Fog: network segmentation + mutual TLS, Cloud: OAuth2 + audit logging
- C) All tiers: TLS 1.3 encryption – this single measure protects the entire system
- D) Edge: firewall, Fog: antivirus, Cloud: physical access controls
B) is the correct answer.
Each tier faces different threats and requires different countermeasures:
- Edge devices are physically accessible to attackers, so they need secure boot (preventing firmware tampering) and hardware crypto (TPM or secure element to protect keys). OAuth2 tokens (option A) are too resource-intensive for MCUs and assume network connectivity.
- Fog gateways sit at the network boundary, so they need network segmentation (isolating IoT traffic from corporate networks) and mutual TLS (verifying both fog and cloud identities). Antivirus (option D) is a desktop/server concept that does not map well to embedded Linux gateways.
- Cloud services are API-driven, so they need OAuth2/JWT authentication (controlling who can access APIs) and audit logging (tracking all access for compliance). Secure boot (option A reversed) is a device-level control, not a cloud service control.
Option C is the most dangerous misconception: TLS protects data in transit but does nothing against firmware tampering at the edge, lateral movement at the fog, or API abuse at the cloud. Security requires defense-in-depth.
19.11 Question 5: Data Lifecycle
In a three-tier IoT system monitoring 10,000 temperature sensors sampling once per second, which data lifecycle is most cost-effective?
- A) Edge: raw samples to cloud every second, Cloud: store everything in time-series database
- B) Edge: send raw data to fog, Fog: send raw data to cloud, Cloud: filter and store
- C) Edge: average 60 samples into 1-minute summaries, Fog: aggregate across sensor groups and detect anomalies, Cloud: store hourly summaries and run trend analysis
- D) Edge: store all data locally forever, Fog: backup edge data, Cloud: unused
C) is the correct answer.
Data volume analysis:
- Raw data: 10,000 sensors x 1 sample/sec x 8 bytes = 80 KB/s = ~200 GB/month
- Option A sends 200 GB/month to cloud: ~$18/month in bandwidth, but time-series DB storage costs escalate rapidly (billions of rows/month)
- Option B doubles the network traffic (edge-to-fog, then fog-to-cloud) with no data reduction – the worst of all worlds
- Option C achieves progressive reduction: edge sends 1/60th the data (3.3 GB/month to fog), fog sends hourly aggregates to cloud (~55 MB/month). Cloud storage and query costs drop by over 99%
- Option D fills edge flash storage in hours (most MCUs have <16 MB flash) and wastes cloud investment
The key principle: each tier should add value by reducing volume while preserving information. One-minute averages lose nothing meaningful for temperature monitoring but reduce volume 60x. Fog anomaly detection means the cloud only sees events worth analyzing, not routine readings.
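Option C's cascade can be sketched as two aggregation steps (function names and summary fields are illustrative):

```python
# Progressive data reduction: the edge averages 60 one-second samples into
# a minute summary; the fog rolls minute summaries into an hourly record.
from statistics import mean

def edge_minute_summary(samples_1hz: list[float]) -> dict:
    """60 raw samples -> one summary record (60x volume reduction)."""
    return {"mean": mean(samples_1hz),
            "min": min(samples_1hz), "max": max(samples_1hz)}

def fog_hourly_aggregate(minute_summaries: list[dict]) -> dict:
    """60 minute-summaries -> one hourly record for the cloud."""
    return {"mean": mean(s["mean"] for s in minute_summaries),
            "min": min(s["min"] for s in minute_summaries),
            "max": max(s["max"] for s in minute_summaries)}

minute = edge_minute_summary([float(i) for i in range(60)])
hour = fog_hourly_aggregate([minute] * 60)
print(minute)   # extremes and the mean survive the 60x reduction
```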
19.12 Visual Reference Gallery
The three-tier IoT architecture showing the relationship between edge, fog, and cloud computing layers.
Key characteristics that distinguish fog computing from traditional cloud-centric architectures.
Comparison of deployment models and their suitability for different IoT application requirements.
19.12.1 Worked Example: Predictive Maintenance Cost Comparison
Scenario: Manufacturing plant with 200 motors, each with 4 vibration sensors sampling at 10 kHz. Goal: Predict bearing failures 48 hours in advance.
Pure Cloud Approach:
- Raw data: 200 motors × 4 sensors × 10,000 samples/sec × 4 bytes = 32 MB/s = 2.76 TB/day
- Cloud ingestion: 2.76 TB × $0.09/GB × 30 = $7,452/month
- Cloud compute: Real-time FFT on 2.76 TB/day = 4 vCPUs × 24 hr/day × 30 days × $0.10/vCPU-hour = $288/month
- Cloud storage: 2.76 TB × 30 days × $0.023/GB = $1,906/month (first month cumulative)
- Total first-year cost: $110,000 in cloud fees alone
Hybrid Edge-Fog-Cloud Approach:
- Edge (STM32 at each motor, $15 device): Runs FFT on 10 kHz data, extracts 8 spectral features, detects threshold crossings → Reduces each motor's 160 KB/s raw stream to 8 values/second = 32 bytes/s (a 99.98% reduction)
- Fog (Raspberry Pi per production line, $75 device): Aggregates 50 motors, runs local anomaly detection model, sends only anomalies + hourly summaries to cloud → Reduces to 500 KB/day per line = 2 MB/day total
- Cloud: Receives 2 MB/day (vs. 2.76 TB/day), runs global ML model across all factories, updates edge models weekly
Cost Comparison:
- Edge hardware: 200 × $15 = $3,000 one-time
- Fog hardware: 4 × $75 = $300 one-time
- Cloud ingestion: 2 MB × $0.09/1024 × 30 = $0.005/month
- Cloud compute: Anomaly processing = 1 vCPU × 1hr/day × $0.10 = $3/month
- Cloud storage: 60 MB/month × $0.023/1024 = $0.001/month
Hybrid first-year cost: $3,300 upfront + $36 annual cloud = $3,336 vs. $110,000 pure cloud (97% savings)
Payback period: under two weeks – $3,336 upfront against roughly $9,160/month in avoided cloud fees. After that, edge-fog-cloud saves $109,964/year.
Added benefit: Edge processing provides a <1s local alarm (the motor shuts down before catastrophic failure) even during WAN outages, while a pure-cloud path adds 100-500ms of round-trip latency and stops working entirely when connectivity drops.
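The pure-cloud figures can be recomputed from first principles; results differ by a fraction of a percent from the text, which rounds 2.7648 TB/day to 2.76 TB (variable names are illustrative):

```python
# Recomputing the pure-cloud monthly costs for the 200-motor scenario.
# Prices are the scenario's stated assumptions.

MOTORS, SENSORS, HZ, SAMPLE_BYTES = 200, 4, 10_000, 4
EGRESS_PER_GB, VCPU_HOUR, STORE_PER_GB = 0.09, 0.10, 0.023

raw_bytes_per_s = MOTORS * SENSORS * HZ * SAMPLE_BYTES   # 32 MB/s
gb_per_day = raw_bytes_per_s * 86_400 / 1e9              # ~2764.8 GB/day

ingestion = gb_per_day * EGRESS_PER_GB * 30              # ~$7,465/month
compute = 4 * 24 * 30 * VCPU_HOUR                        # $288/month
storage_month1 = gb_per_day * 30 * STORE_PER_GB          # ~$1,908 first month

print(round(raw_bytes_per_s / 1e6))   # 32 (MB/s)
print(round(ingestion))               # ~7465
print(round(compute))                 # 288
```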
19.12.2 Data Lifecycle Tier Mapping
Determine the optimal tier for each stage of your data lifecycle:
| Data Lifecycle Stage | Edge (Device) | Fog (Gateway) | Cloud (Platform) |
|---|---|---|---|
| Collection | All raw sensor readings | Aggregated from multiple devices | Summaries and events only |
| Filtering | Threshold checks, outlier removal | Statistical filtering, duplicate suppression | Compliance filtering, anonymization |
| Storage Duration | Hours (buffer for connectivity loss) | Days to weeks (local cache) | Months to years (historical archive) |
| Analytics | Simple threshold logic | Real-time aggregation, local ML inference | Complex ML training, cross-site analytics |
| Retention Policy | Circular buffer (oldest data dropped) | LRU cache (least recently used evicted) | Tiered storage (hot → warm → cold → archive) |
| Access Pattern | High frequency (milliseconds) | Medium frequency (seconds to minutes) | Low frequency (hours to days) |
Example - Smart City Traffic:
- Edge: Traffic camera processes 30 fps video, detects vehicles, drops raw frames → Keeps last 5 minutes for incidents
- Fog: Aggregates 20 cameras, counts vehicles/minute, identifies congestion patterns → Keeps 7 days for local traffic optimization
- Cloud: Receives hourly traffic summaries, trains citywide optimization models → Keeps 5 years for trend analysis and planning
Rule of thumb: Data should age up through the tiers (raw at edge, aggregated at fog, summarized in cloud) while retention duration increases at each level.
19.12.3 Anti-Pattern Deep Dive: The Undersecured Fog Gateway
The Mistake: Architects secure edge devices (device certificates, secure boot) and cloud (TLS, API auth), but treat fog gateways as trusted internal infrastructure without defense-in-depth.
Real Attack Vector: Attackers compromise a fog gateway (a Raspberry Pi with default SSH credentials in a factory closet) and gain:
- Access to 500 edge devices' messages (sensor data, control commands)
- The ability to inject false data upstream to the cloud (spoofed alerts)
- Lateral movement to edge devices via the local network (unencrypted LAN assumed secure)
- Persistent access (the gateway has no intrusion detection)
Impact: One compromised fog gateway = entire factory floor under attacker control.
The Numbers:
- Average time to detect IoT gateway compromise: 197 days (Ponemon Institute)
- Cost of industrial IoT security incident: $5.4M average (IBM)
- Percentage of fog gateways with default credentials in 2025 survey: 34%
Defense-in-Depth Requirements:
| Security Layer | Edge | Fog | Cloud |
|---|---|---|---|
| Authentication | Device certificates | Mutual TLS + strong passwords | OAuth2 + MFA |
| Authorization | Per-device policies | Role-based access control | API scopes + rate limiting |
| Network | Encrypted even on LAN | Network segmentation (VLAN isolation) | DDoS protection, WAF |
| Monitoring | Anomaly detection | IDS/IPS, log aggregation | SIEM, threat intelligence |
| Updates | OTA with signature verification | Automated patching + rollback | Continuous deployment + canary |
Correct Approach: Treat fog gateways as exposed attack surface. Use mutual TLS between edge-fog and fog-cloud, segment fog network from corporate network, enable logging and monitoring, apply security patches within 48 hours of release.
Implementation cost: $200/gateway for hardening (secure OS image, monitoring agent, managed firewall) vs. $5.4M average breach cost = 0.004% investment for 100× risk reduction.
19.13 Summary
This chapter consolidated the key concepts from the entire Edge-Fog-Cloud series into a comprehensive review:
Tier Responsibilities: Edge handles instant reactions (<10ms, on-device), fog provides local intelligence (<100ms, gateway-level aggregation and protocol translation), and cloud delivers global wisdom (analytics, ML training, long-term storage). Assigning workloads to the wrong tier is the root cause of most IoT architecture failures.
Data Reduction Pipeline: Data volume must decrease at each tier. Raw sensor data (100%) is filtered at the edge (keeping 10%), aggregated at the fog (keeping 1%), and only summaries reach the cloud. Failing to implement this cascade turns bandwidth costs into the dominant expense.
Offline-First Design: Every tier must operate independently during network partitions. Edge devices continue actuation with local rules, fog nodes maintain cached schedules and local ML models, and store-and-forward guarantees no data loss during outages.
Security-in-Depth: Each tier requires tier-specific security controls because each faces different threats. Edge needs hardware-rooted trust (secure boot, TPM). Fog needs network segmentation and mutual TLS. Cloud needs API authentication and audit logging.
Five Anti-Patterns to Avoid: (1) Cloud-first by default, (2) Treating all data equally, (3) Ignoring offline operation, (4) Symmetric security across tiers, (5) Monolithic fog design. Recognizing these patterns early saves 3-5x in remediation costs.
Decision Framework: Use latency requirements as the primary classifier (<50ms → edge, <500ms → fog, seconds+ → cloud), then refine with bandwidth cost, offline requirements, and compute complexity.
19.14 Knowledge Check
Common Pitfalls
After completing this chapter series, the most common failure is memorizing facts without internalizing the decision framework. The purpose of edge-fog-cloud learning is to build the judgment to classify any workload into the right tier. Test this: can you explain why a specific factory sensor should process locally versus why a predictive model should run in the cloud?
Chapter summaries present clean three-tier architectures. Real deployments involve mixed protocol stacks (MQTT, OPC-UA, Modbus, HTTP), legacy hardware with proprietary APIs, intermittent connectivity, and security constraints that weren’t considered in the idealized model. Plan 30-50% of project time for integration work beyond the core architecture.
Architects often choose edge or fog because it is technically superior without calculating whether the cost difference versus cloud is justified. A deployment with 50 devices generating 1 KB/minute costs $0.25/month in cloud egress — edge hardware costs $500+. Cloud-only may be the right answer below a data volume threshold that you must calculate for your specific deployment.
19.15 What’s Next
The next chapter explores Fog Computing Fundamentals in depth, covering fog node architecture, placement strategies, and implementation patterns for distributed edge processing.
| Topic | Chapter | Description |
|---|---|---|
| Edge Compute Patterns | Edge Compute Patterns | Detailed patterns for distributing computation across edge and fog tiers |
| Edge Comprehensive Review | Edge Comprehensive Review | Assessment questions and worked examples for edge computing concepts |
| IoT Reference Models | IoT Reference Models | How the three-tier architecture maps to formal IoT reference frameworks |