18 Edge-Fog-Cloud Tools
Interactive tool efficiency can be tracked as successful outcomes per minute:
\[ \eta = \frac{N_{\text{successful-attempts}}}{T_{\text{minutes}}} \]
Worked example: If users complete 48 successful interactions in 30 minutes, efficiency is \(48/30=1.6\) successful interactions/minute. This metric helps compare alternative game or wizard designs objectively.
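The metric is trivial to compute; a small helper (a sketch, with a guard against zero-duration sessions added for safety) makes it easy to compare tool designs programmatically:

```python
def interaction_efficiency(successful_attempts: int, minutes: float) -> float:
    """eta = successful interactions per elapsed minute."""
    if minutes <= 0:
        raise ValueError("duration must be positive")
    return successful_attempts / minutes

print(interaction_efficiency(48, 30))  # 1.6 interactions/minute
```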
18.1 Learning Objectives
By the end of this chapter, you will be able to:
- Configure the Compute Placement Calculator: Analyze IoT requirements to determine optimal Edge/Fog/Cloud distribution
- Apply Decision Criteria: Evaluate latency, bandwidth, privacy, connectivity, and cost factors for tier selection
- Justify Scenario-Based Decisions: Defend workload placement choices across 15 real-world scenarios with evidence
- Diagnose Common Placement Mistakes: Explain why frequent tier selection errors lead to system failures and propose corrections
- Compare Trade-Off Alternatives: Balance competing requirements (cost vs. latency, privacy vs. capability) to reach and defend optimal decisions
If you only remember three things from this chapter, make it these:
- Five factors drive every placement decision: Latency, bandwidth, privacy, connectivity, and cost must all be evaluated together – a single-factor decision (e.g., “cloud has more compute power”) is the most common source of architecture failures in IoT projects.
- Safety-critical workloads must always run at the edge: Any function that requires sub-50ms response or must operate during network outages (collision avoidance, valve shutoff, arrhythmia alerts) cannot depend on cloud or even fog connectivity. Place these at the edge first, then decide what can tolerate higher latency tiers.
- Fog preprocessing reduces cloud costs by 80-95% at scale: Sending raw sensor data to the cloud works at prototype scale (10 devices) but becomes unsustainable at production scale (10,000+ devices). Fog gateways that filter, aggregate, and compress data before cloud upload are essential for cost-effective scaling.
- Previous: Edge-Fog-Cloud Introduction - Core concepts and 50-500-5000 rule
- Next: Edge-Fog-Cloud Architecture - Detailed layer analysis
- Series Index: Edge-Fog-Cloud Overview - Complete chapter series
18.2 Prerequisites
Before using these tools, you should understand:
- Edge-Fog-Cloud Introduction: The 50-500-5000 latency rule
- Basic IoT concepts: Sensor data, cloud connectivity, latency requirements
Hey there, future IoT architect! The Sensor Squad needs your help figuring out where to process their data!
Sammy the Sensor says: “I collect SO much data every second. But where should I send it all?”
Think of it like sorting your homework:
- Easy math problems (Edge) – You can do these in your head RIGHT NOW. Like adding 2+2. No need to ask anyone!
- Medium problems (Fog) – You need your calculator nearby. Like long division. Your desk buddy (the fog gateway) can help without going to the teacher.
- Really hard problems (Cloud) – You need to ask the teacher (the cloud). Like solving a giant equation that takes the whole class working together.
Lila the LED asks: “But what happens when you can’t reach the teacher?”
Great question, Lila! That is why you NEVER put easy problems in the teacher’s pile:
- If the teacher is sick (internet is down), you can still do easy math!
- If everyone sends ALL their problems to the teacher, the teacher gets overwhelmed and EVERYONE waits forever!
Max the Motor has a rule of thumb:
- Must happen NOW (like stopping a robot arm) –> Edge (do it yourself!)
- Need some help (like noticing a pattern in temperatures) –> Fog (ask your desk buddy)
- Really, really complex (like predicting weather for next month) –> Cloud (ask the teacher and the whole school library)
Fun Activity: Look at your smart home devices. For each one, ask: “Does this NEED the internet to do its basic job?” A smoke alarm should work without Wi-Fi (edge!). A weather forecast needs data from everywhere (cloud!). A smart thermostat that reads multiple room sensors is somewhere in the middle (fog!).
When an IoT device collects data (like a temperature reading or a camera image), something needs to process that data – check if it is normal, combine it with other readings, or run a machine learning model on it. “Compute placement” simply means deciding where that processing happens.
There are three main locations:
- Edge – processing happens right on the device itself or a tiny computer sitting next to it. This is the fastest option because the data does not travel anywhere. Example: a smoke detector that decides locally whether to sound the alarm.
- Fog – processing happens on a nearby gateway or small server, usually in the same building or campus. This is a middle ground: faster than the cloud, more powerful than a single sensor. Example: a factory floor computer that watches video feeds from 10 cameras and flags defective parts.
- Cloud – processing happens in a large data center far away, accessed over the internet. This offers massive computing power but adds delay. Example: analyzing six months of sensor data from 1,000 buildings to predict equipment failures.
The key insight is that no single location is best for everything. A well-designed IoT system uses all three tiers, placing each task where it makes the most sense based on how fast it needs a response, how much data is involved, and whether the data is sensitive.
This chapter gives you interactive tools to practice making those decisions so you can build good instincts before working on real projects.
18.3 Why Compute Placement Matters
Selecting the right compute tier is one of the most consequential decisions in IoT system design. The wrong choice does not just reduce efficiency – it can cause system failure, regulatory violations, or unsustainable costs.
18.3.1 Real-World Consequences of Wrong Placement
| Mistake | Impact | Real Example |
|---|---|---|
| Safety in Cloud | System failure during outage | Factory robot continues moving when network drops – worker injury |
| Raw data upload | Unsustainable bandwidth costs | 100 cameras x 5Mbps = 500Mbps continuous upload = $40K+/month |
| All processing at Edge | No cross-device intelligence | Each thermostat optimizes independently, wasting building-wide energy |
| Privacy data in Cloud | Regulatory violation | Patient health data sent to cloud without consent violates GDPR/HIPAA |
| Ignoring connectivity | Frequent data loss | Remote agriculture sensors on satellite lose 30% of cloud transmissions |
Understanding these consequences is why systematic analysis – not gut feeling – should drive every placement decision. The interactive tools below help you develop this systematic approach.
18.4 The Decision Framework
Before using the interactive tools, understand the five-factor decision framework that drives compute placement:
18.4.1 Factor Details
| Factor | Edge Signal | Fog Signal | Cloud Signal |
|---|---|---|---|
| Latency | < 50ms required (safety, real-time control) | 50-500ms acceptable (local analytics) | > 500ms tolerable (batch analytics, reporting) |
| Bandwidth | < 1 KB/s per device (simple sensors) | 1 KB/s - 1 MB/s (aggregated streams) | > 1 MB/s OK if budget allows |
| Privacy | Highly regulated, cannot leave premises | Regional compliance, anonymize before cloud | Public or non-sensitive data |
| Connectivity | Intermittent or unavailable (remote sites) | Mostly reliable with occasional gaps | Always-on broadband required |
| Cost | Minimize per-device cost (high device count) | Balance device + cloud costs | Minimize device cost, pay for cloud compute |
Many architects make the mistake of basing placement on a single factor (usually latency). Always evaluate all five factors. A low-latency application that also generates massive data volumes needs a different architecture than one with low latency but minimal data. The calculator below helps you consider all factors simultaneously.
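As a rough illustration of evaluating the factors together, here is a toy majority-vote scorer built from the table's thresholds. The voting scheme itself is an illustrative assumption (the chapter's calculator weighs factors more carefully), and cost is deliberately left out because it depends on device count and budget:

```python
def vote_tier(latency_ms, kb_per_s, regulated, connectivity):
    """Each factor casts one vote per the table above; the majority tier wins.
    connectivity is one of "intermittent", "mostly", "always"."""
    votes = {"edge": 0, "fog": 0, "cloud": 0}
    # Latency row: <50ms -> edge, 50-500ms -> fog, >500ms -> cloud
    if latency_ms < 50:
        votes["edge"] += 1
    elif latency_ms <= 500:
        votes["fog"] += 1
    else:
        votes["cloud"] += 1
    # Bandwidth row: <1 KB/s -> edge, up to 1 MB/s -> fog, beyond -> cloud
    if kb_per_s < 1:
        votes["edge"] += 1
    elif kb_per_s <= 1000:
        votes["fog"] += 1
    else:
        votes["cloud"] += 1
    # Privacy row: regulated data stays on premises
    votes["edge" if regulated else "cloud"] += 1
    # Connectivity row
    votes[{"intermittent": "edge", "mostly": "fog", "always": "cloud"}[connectivity]] += 1
    return max(votes, key=votes.get)

print(vote_tier(10, 0.5, True, "intermittent"))  # a safety sensor -> "edge"
```

Notice how a single-factor decision would miss cases where, say, low latency but massive data volume calls for a hybrid rather than pure edge.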
18.5 Interactive Tool: Edge vs Cloud Placement Calculator
Use this tool to determine where to process your IoT data based on your specific requirements. The calculator recommends an optimal split between Edge, Fog, and Cloud processing.
How to Use This Tool:
- Latency Requirement: How fast must the system respond? Safety systems need <50ms
- Data Volume: How much data per device per day? High volumes favor fog preprocessing
- Privacy Sensitivity: Can raw data leave the device/premises?
- Connectivity Reliability: How often is internet available?
- Cost Priority: What matters most - device cost, cloud cost, or performance?
- Device Count: Scale affects cost calculations
Key Decision Factors:
- <50ms latency = Edge processing mandatory (cloud round-trip is 100-500ms)
- >1 GB/day/device = Fog preprocessing critical (reduces cloud costs 80-95%)
- Intermittent connectivity = Local autonomy required (fog/edge must function offline)
- Critical privacy = Process locally, send only anonymized summaries to cloud
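The hard rules above can be expressed as a small checklist function. This is a sketch for building intuition, not the calculator's actual logic:

```python
def placement_constraints(latency_ms, gb_per_day_per_device,
                          intermittent_connectivity, critical_privacy):
    """Return the non-negotiable placement rules triggered by the inputs."""
    rules = []
    if latency_ms < 50:
        rules.append("edge mandatory: cloud round-trip (100-500ms) cannot meet latency")
    if gb_per_day_per_device > 1:
        rules.append("fog preprocessing: filter/aggregate before cloud upload")
    if intermittent_connectivity:
        rules.append("local autonomy: edge/fog must function offline")
    if critical_privacy:
        rules.append("process locally: send only anonymized summaries to cloud")
    return rules

# A remote, regulated, high-volume, hard-real-time workload trips all four rules:
print(placement_constraints(10, 2.0, True, True))
```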
18.5.1 Try These Scenarios in the Calculator
To build your intuition, try entering these real-world scenarios:
| Scenario | Latency | Data Volume | Privacy | Connectivity | Expected Result |
|---|---|---|---|---|---|
| Factory robot arm | < 10ms | 500 MB/day | Medium | Reliable LAN | Heavy Edge |
| Smart city cameras | < 1s | 50 GB/day | High | Broadband | Fog-heavy hybrid |
| Soil moisture farm | Minutes OK | 10 KB/day | Low | Satellite (spotty) | Edge with store-and-forward |
| Fleet vehicle tracking | < 5s | 100 MB/day | Medium | Cellular (variable) | Fog + Cloud |
| Hospital patient monitoring | < 100ms | 1 GB/day | Critical | Hospital Wi-Fi | Edge + Fog (no raw cloud) |
18.6 Knowledge Check: Placement Fundamentals
An IoT application has the following requirements: latency < 20ms, data volume 100 KB/day, low privacy sensitivity, reliable connectivity, and flexible budget. Which single factor makes Edge processing mandatory?
- A) Low privacy sensitivity
- B) Reliable connectivity
- C) Latency requirement of < 20ms
- D) Low data volume
C) Latency requirement of < 20ms
Cloud round-trip latency is typically 100-500ms, and fog latency is 50-200ms. A 20ms requirement can only be met by Edge processing, which operates with sub-millisecond to low-millisecond latency since processing happens directly on or very near the device. The other factors (low privacy, reliable connectivity, low data volume) would actually favor cloud or fog, but the latency constraint overrides them all.
A smart building has 200 environmental sensors, each producing 50 MB of data per day. The total daily data is 10 GB. Cloud costs include storage ($0.023/GB) and ingestion ($0.01/GB), totaling $0.033/GB. If fog preprocessing reduces data by 90%, how much does fog save per month on cloud costs?
- A) $0.99/month
- B) $8.91/month
- C) $29.70/month
- D) $297.00/month
B) $8.91/month
Calculation:
- Total data: 200 sensors x 50 MB/day = 10 GB/day = 300 GB/month
- Without fog: 300 GB x $0.033/GB = $9.90/month
- With fog (90% reduction): 30 GB x $0.033/GB = $0.99/month
- Monthly savings: $9.90 - $0.99 = $8.91/month
At this small scale (200 sensors, low data rates), savings are modest. But the principle scales linearly: at 10,000 sensors producing 500 GB/day (15 TB/month), savings reach $445.50/month. For video streams (50 GB/day per camera x 200 cameras = 10 TB/day), fog preprocessing saves over $8,900/month. The key insight is that fog savings scale linearly with data volume and device count – fog preprocessing becomes essential at production scale, not prototype scale.
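The savings arithmetic above generalizes to a one-line linear model. This sketch uses the same $0.033/GB combined rate; real cloud bills add compute and egress charges:

```python
def fog_monthly_savings(devices, mb_per_day_each, usd_per_gb,
                        reduction=0.90, days=30):
    """Monthly cloud-cost savings from fog preprocessing (linear model)."""
    gb_per_month = devices * mb_per_day_each * days / 1000  # MB -> GB
    return gb_per_month * usd_per_gb * reduction

print(fog_monthly_savings(200, 50, 0.033))     # smart building: ~$8.91/month
print(fog_monthly_savings(10_000, 50, 0.033))  # 10,000 sensors: ~$445.50/month
```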
A remote oil pipeline monitoring system operates in a desert location with satellite internet that is available only 60% of the time. The system must detect pressure anomalies and shut valves within 2 seconds to prevent leaks. Which architecture is correct?
- A) Cloud-only: Send all data to cloud for ML-based anomaly detection
- B) Fog-only: Process at regional gateway with cellular backup
- C) Edge-primary: Local anomaly detection with store-and-forward to cloud during connectivity windows
- D) Hybrid: Edge for critical safety, cloud for historical analysis, no fog needed
C) Edge-primary: Local anomaly detection with store-and-forward to cloud during connectivity windows
This scenario has two critical constraints: (1) 2-second response time for safety valve control, and (2) only 60% connectivity reliability. Option A fails because the cloud is unavailable 40% of the time. Option B fails because cellular backup is unlikely in a remote desert. Option D is partially correct but misses the store-and-forward requirement for data that cannot be uploaded immediately.
Option C is optimal because:
- Edge handles real-time anomaly detection and valve control (meets 2-second requirement regardless of connectivity)
- Store-and-forward buffers data locally during the 40% offline periods
- Cloud upload happens during connectivity windows for historical analysis, trend detection, and model updates
- The system never depends on network availability for its safety-critical function
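A minimal sketch of the store-and-forward pattern follows. The pressure threshold and readings are hypothetical, and a real pipeline controller would also persist the buffer across reboots:

```python
from collections import deque

class StoreAndForward:
    """Buffer readings locally; flush to the cloud when a connectivity
    window opens. The edge anomaly check runs on every reading,
    independent of the uplink -- safety never waits for the network."""

    def __init__(self, threshold_psi, maxlen=100_000):
        self.threshold = threshold_psi
        self.buffer = deque(maxlen=maxlen)  # oldest samples drop if full

    def record(self, psi):
        self.buffer.append(psi)
        return psi > self.threshold         # True -> close the valve NOW, locally

    def flush(self, upload):
        """Call only during a connectivity window; uploads everything buffered."""
        sent = 0
        while self.buffer:
            upload(self.buffer.popleft())
            sent += 1
        return sent

pipeline = StoreAndForward(threshold_psi=1200)
alarms = [pipeline.record(p) for p in (830, 850, 1500)]  # last reading trips the valve
```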
A healthcare wearable monitors a patient’s ECG continuously. Requirements: latency < 200ms for arrhythmia alerts, 500 MB/day data volume, HIPAA-regulated data, reliable hospital Wi-Fi when indoors but cellular when outdoors. Where should arrhythmia detection run?
- A) Cloud – best ML models for accurate detection
- B) Edge – on the wearable device itself
- C) Fog – on the patient’s smartphone
- D) Either B or C depending on the wearable’s compute capability
D) Either B or C depending on the wearable’s compute capability
This is a multi-factor decision:
- Latency (< 200ms): Both Edge (wearable) and Fog (smartphone) can meet this. Cloud might not meet it reliably, especially on cellular.
- Privacy (HIPAA): Raw ECG data should not leave the patient’s devices without proper authorization. Edge and Fog both keep data local.
- Connectivity (variable): When outdoors on cellular, cloud latency increases and reliability drops. Edge/Fog remain available.
- Compute: If the wearable has sufficient processing power (modern wearables often do for simple arrhythmia detection), Edge is ideal. If the detection algorithm requires more compute, the smartphone (Fog) is the next best option.
The cloud should receive only anonymized summaries, alerts, and periodic model updates – not raw ECG streams. This satisfies HIPAA requirements while still enabling cloud-based analytics for long-term health trend analysis.
18.7 Interactive Game: Compute Placement Challenge
Test your understanding of edge, fog, and cloud computing by placing IoT workloads in the optimal processing tier. Make decisions based on latency, bandwidth, compute requirements, and cost constraints.
Interactive Animation: This animation is under development.
18.7.1 How to Play
- Read the Scenario: Each workload describes an IoT application with specific requirements
- Analyze Requirements: Consider latency, bandwidth, compute, storage, and connectivity needs
- Make Your Decision: Choose Edge, Fog, Cloud, or Hybrid processing
- Learn from Feedback: See why your choice was optimal, acceptable, or suboptimal
- Track Progress: Your efficiency score and cost optimization are tracked across all levels
18.7.2 Level Progression
| Level | Focus | Scenarios | Complexity |
|---|---|---|---|
| Level 1 | Fundamentals | 5 | Clear-cut decisions (safety vs analytics) |
| Level 2 | Trade-offs | 5 | Cost vs performance, privacy vs capability |
| Level 3 | Advanced | 5 | Hybrid architectures, multi-tier solutions |
18.7.3 Decision Framework Reference
Use this quick reference while playing:
| Requirement | Edge | Fog | Cloud |
|---|---|---|---|
| Latency <50ms | Required | Too slow | Too slow |
| Latency <500ms | Optional | Ideal | Possible |
| Offline operation | Required | Helpful | Impossible |
| Heavy ML training | Impossible | Limited | Ideal |
| Multi-site aggregation | Impossible | Regional | Global |
| Protocol translation | N/A | Required | N/A |
18.8 Common Placement Mistakes and How to Avoid Them
After using the tools above, be aware of these frequently observed mistakes in real-world projects:
18.8.1 Detailed Mistake Analysis
Mistake 1: Cloud-first bias
What happens: Teams default to cloud for all processing because cloud platforms are familiar and powerful.
Why it fails: A cloud-first approach for an autonomous vehicle means collision avoidance depends on a 200ms network round-trip. At 60 mph, a car travels 5.4 meters in 200ms – the difference between stopping safely and a collision.
Fix: Start with the most time-critical function and work outward. Place the fastest-response workloads at the edge first, then decide what can tolerate fog or cloud latency.
Mistake 2: Ignoring scale economics
What happens: Architects design at prototype scale (10 devices) and assume costs stay proportional.
Why it fails: At 10 devices sending 1 GB/day each, cloud ingestion costs $3/month. At 10,000 devices, it becomes $3,000/month. But a fog gateway that reduces data by 90% costs $500 one-time and reduces ongoing costs to $300/month.
Fix: Always model costs at your target scale (not prototype scale). Include a cost projection for 10x, 100x, and 1000x device counts. Use the calculator above to see how fog filtering changes the economics.
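A quick projection like this fix recommends takes only a few lines. The sketch below uses the $0.01/GB ingestion rate implied by the example above ($3/month for 10 devices at 1 GB/day each); real bills add storage and compute:

```python
def monthly_ingestion_cost(devices, gb_per_day_each, usd_per_gb=0.01, days=30):
    """Cloud ingestion cost per month, ingestion-only model."""
    return devices * gb_per_day_each * days * usd_per_gb

# Project 1x, 10x, 100x, 1000x device counts before committing to an architecture
for scale in (10, 100, 1_000, 10_000):
    print(f"{scale:>6} devices -> ${monthly_ingestion_cost(scale, 1.0):,.2f}/month")
```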
Mistake 3: Forgetting failure modes
What happens: The architecture assumes always-on connectivity. When the network fails, safety-critical functions stop working.
Why it fails: Even “99.99% uptime” internet means 52 minutes of downtime per year – enough for a factory floor incident if safety logic runs in the cloud. Cellular networks in rural areas may have far lower reliability.
Fix: For every workload, ask: “What happens if the network is down for 1 hour?” Any workload that cannot tolerate that outage must run at Edge or Fog.
Mistake 4: Privacy as an afterthought
What happens: Teams plan cloud processing first, then try to “add security” through encryption.
Why it fails: GDPR’s data minimization principle requires that you collect and transmit only the minimum necessary data. Encrypting a raw video feed still means the raw video leaves the premises. Processing at fog or edge and sending only metadata or aggregated results is both more secure and more compliant.
Fix: Map each data flow to its privacy classification before choosing tiers. Data classified as “sensitive” or “regulated” should be processed at Edge or Fog whenever possible, with only anonymized summaries sent to Cloud.
18.9 Knowledge Check: Advanced Placement
A smart factory has three types of workloads: (1) robotic arm collision avoidance (< 5ms), (2) production line quality inspection using computer vision (< 1s), and (3) monthly production optimization using 6 months of historical data. Which placement is correct?
- A) All three at Edge for lowest latency
- B) All three in Cloud for best compute resources
- C) (1) Edge, (2) Fog, (3) Cloud
- D) (1) Edge, (2) Cloud, (3) Cloud
C) (1) Edge, (2) Fog, (3) Cloud
This is a classic hybrid architecture:
- (1) Collision avoidance at Edge: 5ms latency requires on-device processing. Even LAN-connected fog adds too much latency. The edge controller must have pre-loaded safety rules.
- (2) Quality inspection at Fog: Computer vision for defect detection needs more compute than an edge sensor but must respond within 1 second. A fog server with GPU on the factory floor can run inference models locally, avoiding cloud latency and bandwidth costs of streaming video.
- (3) Production optimization in Cloud: Monthly optimization using 6 months of data requires massive compute and storage – perfect for cloud. Latency is irrelevant since this runs as a batch job.
Option D is incorrect because streaming raw video to the cloud for quality inspection wastes bandwidth and adds unnecessary latency. Fog-based inference is the standard pattern for real-time video analytics in manufacturing.
A city deploys 5,000 air quality sensors. Each produces 10 MB/day. Cloud processing costs $0.05/GB. A fog gateway costs $2,000 and reduces data by 95%. How many fog gateways (each handling 100 sensors) are needed, and what is the 3-year cost comparison?
- A) 50 gateways; fog saves $10,000 over 3 years
- B) 50 gateways; fog saves $70,000 over 3 years
- C) 50 gateways; fog costs more due to gateway purchase price
- D) 500 gateways; fog saves $5,000 over 3 years
C) 50 gateways; fog costs more due to the gateway purchase price
Calculation:
- Gateways needed: 5,000 sensors / 100 per gateway = 50 gateways
- Gateway cost: 50 x $2,000 = $100,000 one-time
Without fog (3 years):
- Daily data: 5,000 x 10 MB = 50 GB/day
- Monthly: 50 x 30 = 1,500 GB/month
- Monthly cloud cost: 1,500 x $0.05 = $75/month
- 3-year total: $75 x 36 = $2,700
With fog (3 years):
- Daily data after 95% reduction: 50 GB x 0.05 = 2.5 GB/day
- Monthly: 2.5 x 30 = 75 GB/month
- Monthly cloud cost: 75 x $0.05 = $3.75/month
- 3-year cloud cost: $3.75 x 36 = $135
- 3-year total (gateways + cloud): $100,000 + $135 = $100,135
At $0.05/GB, the cloud costs are so low that the fog gateways cost far more than they save. This reveals an important insight: fog preprocessing is NOT always cost-effective. It depends on the per-GB cloud cost and the data volume. At $0.05/GB for simple storage, fog gateways are a net loss. But at enterprise cloud pricing that includes compute ($0.50-$5.00/GB for processing + storage + analytics), fog becomes highly economical.
The real lesson: Always calculate total cost of ownership, not just one cost component. The correct answer depends on your actual cloud pricing tier.
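The TCO comparison generalizes into a break-even sketch. This models capex versus cloud opex only; a fuller model would add gateway power, maintenance, and replacement costs:

```python
def fog_tco_delta(devices, mb_per_day_each, usd_per_gb,
                  gateway_cost, sensors_per_gateway,
                  years=3, reduction=0.95):
    """TCO delta over the period: positive means fog saves money,
    negative means fog costs more than it saves."""
    gateways = -(-devices // sensors_per_gateway)  # ceiling division
    capex = gateways * gateway_cost
    gb_per_month = devices * mb_per_day_each * 30 / 1000
    cloud_without_fog = gb_per_month * usd_per_gb * 12 * years
    cloud_with_fog = cloud_without_fog * (1 - reduction)
    return cloud_without_fog - (cloud_with_fog + capex)

# The scenario above: at $0.05/GB, fog is a large net loss...
loss = fog_tco_delta(5_000, 10, 0.05, 2_000, 100)  # roughly -$97,435
# ...but at a $2.00/GB effective rate (processing + storage + analytics) it flips
gain = fog_tco_delta(5_000, 10, 2.00, 2_000, 100)
```

Running both cases shows how sensitive the decision is to the effective per-GB rate, which is exactly the lesson of this question.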
18.10 Worked Example: Designing a Smart Hospital Architecture
Let us walk through a complete placement decision for a smart hospital with multiple workload types.
18.10.1 Requirements
| Workload | Latency | Data Volume | Privacy | Connectivity | Devices |
|---|---|---|---|---|---|
| Patient vital signs alert | < 100ms | 200 KB/day per patient | HIPAA critical | Hospital Wi-Fi (reliable) | 500 beds |
| Nurse call system | < 500ms | 1 KB/event | Medium | Hospital Wi-Fi | 500 beds |
| Medical imaging analysis | < 30 min | 500 MB/scan | HIPAA critical | LAN (reliable) | 10 scanners |
| Environmental monitoring | < 5 min | 50 KB/day per room | Low | Wi-Fi | 300 rooms |
| Inventory tracking (RFID) | < 2s | 10 MB/day total | Low | Wi-Fi + BLE | 1,000 items |
18.10.2 Analysis Using the Five-Factor Framework
Workload 1 – Patient vital signs alert:
- Latency (< 100ms) –> Edge or Fog
- Privacy (HIPAA critical) –> Must not send raw data to external cloud
- Decision: Edge (bedside monitor) for real-time alerting, Fog (floor server) for cross-patient pattern detection, Private cloud (on-premises) for historical analysis
Workload 2 – Nurse call system:
- Latency (< 500ms) –> Fog is sufficient
- Low data volume, medium privacy
- Decision: Fog (floor gateway) routes alerts to nearest available nurse
Workload 3 – Medical imaging:
- Latency (30 minutes OK) –> Cloud is fine
- But HIPAA critical –> Cannot use public cloud without BAA (Business Associate Agreement)
- Decision: Private cloud (on-premises data center) or HIPAA-compliant public cloud (AWS GovCloud, Azure Healthcare)
Workload 4 – Environmental monitoring:
- No urgent latency, low privacy, minimal data
- Decision: Cloud – simple and cost-effective. Direct sensor-to-cloud via MQTT.
Workload 5 – Inventory tracking:
- Moderate latency (2s), low privacy, moderate data
- Decision: Fog – BLE gateway aggregates RFID reads, filters duplicates, sends location updates to cloud inventory system
18.10.3 Architecture Diagram
Key design decisions in this architecture:
- Patient vital signs never leave the hospital network as raw data – only anonymized summaries reach the cloud
- Environmental sensors go directly to cloud because they have no latency or privacy constraints worth the cost of fog infrastructure
- RFID uses fog to deduplicate thousands of reads per second down to meaningful location updates
- Medical imaging uses private cloud to maintain HIPAA compliance while leveraging cloud-scale compute
18.11 Summary
This chapter provided interactive tools and systematic frameworks for making Edge-Fog-Cloud placement decisions:
18.11.1 Key Concepts
- Five-Factor Decision Framework: Every placement decision should evaluate latency, bandwidth, privacy, connectivity, and cost simultaneously – never just one factor
- Compute Placement Calculator: Analyzes 6 input parameters (latency, data volume, privacy, connectivity, cost priority, device count) to recommend optimal tier distribution
- Placement Game: 15 scenarios across 3 difficulty levels testing real-world architecture decisions
18.11.2 Critical Decision Rules
| Rule | Threshold | Action |
|---|---|---|
| Latency < 50ms | Safety-critical response time | Edge processing mandatory |
| Data > 1 GB/day/device | High bandwidth cost | Fog preprocessing critical (reduces cloud costs 80-95%) |
| Connectivity < 95% | Unreliable network | Local autonomy required at Edge or Fog |
| HIPAA / GDPR regulated | Privacy-critical data | Process locally, send only anonymized summaries |
| ML model training | High compute demand | Cloud processing (Edge/Fog for inference only) |
18.11.3 Common Mistakes to Avoid
- Cloud-first bias – Always start with the most time-critical workload and work outward
- Ignoring scale economics – Model costs at target scale (not prototype)
- Forgetting failure modes – Ask “What happens when the network is down?”
- Privacy as afterthought – Map data flows to privacy classifications before choosing tiers
18.11.4 Practical Takeaway
The best IoT architectures are hybrid by design: Edge for instant reactions, Fog for local intelligence, Cloud for global wisdom. Use the calculator and game in this chapter to develop your intuition for these placement decisions.
18.12 Common Pitfalls
Interactive simulators use simplified models of latency, bandwidth, and cost. Real deployments encounter multipath interference, TCP retransmission, JVM JIT compilation, and cloud provider throttling that simulators omit. Always validate simulator predictions against at least one real-hardware pilot before committing to architecture based solely on simulation.
Interactive tools default to round numbers (100ms latency, 10 Mbps bandwidth) for usability. Real IoT deployments span Zigbee at 250 kbps with 50ms jitter to 5G at 1 Gbps with 5ms latency. Configure tool parameters with measured values from your target deployment environment before using results for architectural decisions.
Most interactive cost/latency calculators display average latency. For safety-critical IoT applications, P99 or maximum latency matters — a 20ms average with 500ms P99 is unacceptable for a motor control system. When evaluating tools, look for percentile latency outputs or compute variance separately to understand worst-case behavior.
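A nearest-rank percentile check makes the average-versus-P99 gap concrete. The sample distribution below is illustrative:

```python
import math

def latency_percentile(samples_ms, p):
    """Nearest-rank percentile: the smallest sample such that
    at least p% of all samples are <= it."""
    s = sorted(samples_ms)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(0, k)]

# 98 fast responses plus a long tail: the average hides the danger
samples = [20] * 98 + [400, 500]
mean = sum(samples) / len(samples)     # ~28.6 ms -- looks acceptable
p99 = latency_percentile(samples, 99)  # 400 ms -- unacceptable for motor control
```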
18.13 What’s Next
| Topic | Chapter | Description |
|---|---|---|
| Architecture | Edge-Fog-Cloud Architecture | Detailed analysis of the three-layer architecture with protocol translation patterns |
| Devices and Integration | Edge-Fog-Cloud Devices and Integration | Device selection criteria and integration patterns |
| Advanced Topics | Edge-Fog-Cloud Advanced Topics | Worked examples, Kubernetes at the edge, and serverless fog patterns |