53  Edge Computing: Practical Guide

In 60 Seconds

Practical edge computing decisions hinge on quantifiable trade-offs: latency budgets, bandwidth costs, and hardware investment. Interactive tools and worked examples show how to calculate when edge processing saves money (a factory monitoring example saves 41% over 3 years) and when cloud-only is cheaper, and how common pitfalls such as underestimating device failures and ignoring clock drift can derail real deployments.

53.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Use Interactive Latency Tools: Calculate end-to-end latency for different processing locations (device, edge, cloud)
  • Apply Worked Examples: Follow step-by-step calculations for real-world edge vs cloud decisions
  • Avoid Common Pitfalls: Recognize and prevent typical edge deployment mistakes
  • Make Informed Architecture Decisions: Select appropriate processing locations based on latency, bandwidth, and cost requirements

Key Concepts

  • Pattern instantiation: The process of taking an abstract edge pattern (e.g., aggregate-at-fog) and translating it into a concrete implementation with specific hardware, protocols, and code, appropriate for a given IoT use case.
  • Resource-constrained optimization: Adapting algorithms and data structures to operate within the tight memory, CPU, and energy budgets of edge hardware, often requiring significant trade-offs compared to cloud implementations.
  • Incremental feature rollout: Deploying a simple edge pattern first (threshold detection), validating it in production, and then layering more sophisticated patterns (ML inference) on top once the simpler baseline is stable.
  • Edge pattern composition: Combining multiple edge patterns in sequence (filter → aggregate → compress → transmit) to achieve the required data reduction while maintaining the information content needed for analytics.
  • Real-world calibration offset: The gap between how an edge pattern performs on clean simulated data versus how it performs with real sensor noise, device-to-device variability, and environmental interference.

Practical edge processing patterns are proven recipes for handling IoT data at the source. Like cooking recipes, each solves a specific problem: one filters noisy readings, another aggregates data to reduce uploads, and a third makes quick local decisions. Learning these patterns saves you from reinventing solutions that experienced engineers have already refined.

53.2 Prerequisites

Before diving into this chapter, you should be familiar with:

53.3 Interactive: Edge vs Cloud Latency Explorer

Use this explorer to compare end-to-end latency for different placement options (device-only, edge gateway, cloud) under configurable network conditions. This helps you make informed decisions about where to process data in your IoT architecture.

Latency Calculator

Interactive Animation: This visualization is under development.

How to Use This Explorer

Network Condition Presets:

  • LAN/Wi-Fi: Edge latency 1-5ms, Cloud latency 10-30ms
  • Cellular (4G/5G): Edge latency 10-20ms, Cloud latency 50-150ms
  • Satellite/Rural: Edge latency 20-50ms, Cloud latency 200-600ms

Real-World Scenarios:

| Use Case | Max Latency | Recommended Architecture |
|---|---|---|
| Factory emergency shutdown | 10-20 ms | Device-local processing |
| Smart home automation | 50-100 ms | Edge gateway |
| Environmental monitoring | 1,000 ms+ | Cloud processing acceptable |
| Autonomous vehicle control | 1-5 ms | Device-local with edge backup |
| Smart building HVAC | 500-1,000 ms | Edge or cloud (either works) |

Key Insights:

  • WAN latency dominates: The round-trip to cloud (2x uplink) often exceeds total edge processing time
  • Safety-critical systems: Must process locally – network failures cannot prevent emergency responses
  • Hybrid is best: Edge for time-critical decisions, cloud for analytics and ML training
  • Cost vs. Latency trade-off: Cloud processing may be cheaper but slower; edge requires hardware investment

Try These Experiments:

  1. Set cloud latency to 200ms (satellite link) – see how edge becomes essential
  2. Set latency requirement to 20ms – observe that only local processing works
  3. Compare bandwidth costs: Edge reduces data transmission by 90% through local aggregation
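While the interactive widget is under development, the latency model it is based on is simple enough to sketch in a few lines of Python. The preset values below are midpoints of the ranges listed above, and the round-trip and processing-time assumptions are illustrative, not measured:

PRESETS = {
    "lan_wifi":  {"edge_ms": 3,  "cloud_ms": 20},
    "cellular":  {"edge_ms": 15, "cloud_ms": 100},
    "satellite": {"edge_ms": 35, "cloud_ms": 400},
}

def end_to_end_latency(placement, network, processing_ms=5):
    """Estimate capture-to-response latency in ms for one placement."""
    net = PRESETS[network]
    if placement == "device":
        return processing_ms                        # no network hop at all
    if placement == "edge":
        return 2 * net["edge_ms"] + processing_ms   # LAN round trip
    if placement == "cloud":
        return 2 * net["cloud_ms"] + processing_ms  # WAN round trip dominates
    raise ValueError(f"unknown placement: {placement}")

for network in PRESETS:
    row = {p: end_to_end_latency(p, network)
           for p in ("device", "edge", "cloud")}
    print(network, row)

On the satellite preset, the cloud round trip alone exceeds 800 ms, which is why the experiments above show edge becoming essential on high-latency links.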

53.5 Worked Examples

Worked Example: Edge vs Cloud for Factory Vibration Monitoring

Scenario: A manufacturing plant deploys 500 vibration sensors on critical machinery. Each sensor generates 1,000 samples/second at 16-bit resolution. Management needs to decide between edge processing (local analysis with alerts) versus cloud processing (centralized analysis). The plant has a 100 Mbps internet connection.

Given:

  • 500 sensors, each producing 1,000 samples/second
  • Sample size: 16 bits (2 bytes) per sample
  • Internet bandwidth: 100 Mbps
  • Cloud storage cost: $0.023/GB/month
  • Cloud compute cost: $0.05/hour for ML inference
  • Edge gateway cost: $500 per unit (supports 50 sensors each)
  • Latency requirement for safety alerts: < 50ms
  • Cloud round-trip latency: ~150ms

Solution:

Step 1: Calculate raw data bandwidth requirements

Per sensor: 1,000 samples/sec x 2 bytes = 2,000 bytes/sec = 16 kbps
Total for 500 sensors: 500 x 16 kbps = 8,000 kbps = 8 Mbps

With protocol overhead (~20%): 8 x 1.2 = 9.6 Mbps (fits within 100 Mbps)

Step 2: Calculate monthly data volume and cloud storage cost

Daily data: 8 Mbps x 86,400 sec/day / 8 bits/byte = 86.4 GB/day
Monthly data: 86.4 x 30 = 2,592 GB/month
Cloud storage cost: 2,592 x $0.023 = $59.62/month
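The two steps above condense into a short script; a minimal sketch using the decimal units (1 GB = 10^9 bytes) that the text's figures assume:

# Steps 1-2 as code (decimal units, matching the figures above)
SENSORS = 500
SAMPLES_PER_SEC = 1_000
BYTES_PER_SAMPLE = 2
OVERHEAD = 1.2              # ~20% protocol overhead
STORAGE_USD_PER_GB = 0.023

bytes_per_sec = SENSORS * SAMPLES_PER_SEC * BYTES_PER_SAMPLE  # 1,000,000 B/s
mbps = bytes_per_sec * 8 / 1e6                                # 8 Mbps raw
print(f"Bandwidth: {mbps:.1f} Mbps raw, {mbps * OVERHEAD:.1f} Mbps with overhead")

daily_gb = bytes_per_sec * 86_400 / 1e9    # 86.4 GB/day
monthly_gb = daily_gb * 30                 # 2,592 GB/month
storage_cost = monthly_gb * STORAGE_USD_PER_GB
print(f"Monthly volume: {monthly_gb:,.0f} GB -> ${storage_cost:,.2f}/month storage")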

Step 3: Evaluate latency requirement

Cloud path: Sensor to Network to Cloud to Analysis to Response = 150 ms minimum
Edge path: Sensor to Local Gateway to Analysis to Response = ~10-20 ms

Verdict: Cloud path fails the 50ms safety requirement.

Step 4: Design hybrid architecture

Edge processing (10 gateways x $500 = $5,000 upfront):

  • Real-time vibration analysis at the edge
  • Anomaly detection with immediate alerts (< 20 ms)
  • 95% data reduction through FFT features (send the frequency spectrum, not raw samples)

Reduced cloud data: 2,592 GB x 5% = 129.6 GB/month
Reduced storage cost: 129.6 x $0.023 = $2.98/month

Step 5: Calculate total cost comparison over 3 years

Cloud-Only Approach:

  • Storage: $59.62/month x 36 = $2,146
  • Bandwidth: 2,592 GB x $0.09/GB x 36 = $8,398
  • Compute (24/7 ML): $0.05/hour x 8,760 x 3 = $1,314
  • Total: $11,858 (and fails latency requirement!)

Edge-Hybrid Approach:

  • Edge hardware: $5,000 (one-time)
  • Reduced storage: $2.98 x 36 = $107
  • Reduced bandwidth: 129.6 GB x $0.09 x 36 = $420
  • Cloud compute (periodic training only): $500/year x 3 = $1,500
  • Total: $7,027 (and meets latency requirement!)

Result: Edge-hybrid architecture saves $4,831 (41%) over 3 years while meeting the critical 50ms latency requirement that pure cloud cannot achieve.
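A compact script reproduces the three-year comparison. Prices are the ones given above; the $0.09/GB egress rate and the gateway and retraining figures come straight from the cost lines, so treat this as bookkeeping for this example rather than a general pricing model:

# Three-year TCO for both architectures, reproducing the figures above
MONTHS = 36
EGRESS_USD_PER_GB = 0.09     # assumed cloud egress price from the text

def cloud_only(monthly_gb):
    storage = monthly_gb * 0.023 * MONTHS
    bandwidth = monthly_gb * EGRESS_USD_PER_GB * MONTHS
    compute = 0.05 * 8_760 * 3           # 24/7 ML inference for 3 years
    return storage + bandwidth + compute

def edge_hybrid(monthly_gb, reduction=0.95):
    reduced_gb = monthly_gb * (1 - reduction)
    hardware = 10 * 500                  # 10 gateways at $500 each
    storage = reduced_gb * 0.023 * MONTHS
    bandwidth = reduced_gb * EGRESS_USD_PER_GB * MONTHS
    compute = 500 * 3                    # periodic model retraining
    return hardware + storage + bandwidth + compute

cloud, hybrid = cloud_only(2_592), edge_hybrid(2_592)
print(f"Cloud-only: ${cloud:,.0f}  Edge-hybrid: ${hybrid:,.0f}")
print(f"Savings: ${cloud - hybrid:,.0f} ({(cloud - hybrid) / cloud:.0%})")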

Let’s verify the bandwidth savings from edge FFT processing:

Raw vibration data (500 sensors x 1,000 Hz x 2 bytes): \[\text{Data rate} = 500 \times 1{,}000\text{ Hz} \times 2\text{ bytes} = 1{,}000{,}000\text{ bytes/s} = 1\text{ MB/s}\] \[\text{Monthly volume} = 1\text{ MB/s} \times 2{,}592{,}000\text{ s} = 2{,}592\text{ GB}\]

Edge-processed FFT features (512-point FFT every 51.2 ms, transmit top 20 peaks as 40 bytes): \[\text{Transmission rate} = 500\text{ sensors} \times \frac{40\text{ bytes}}{0.0512\text{ s}} = 390{,}625\text{ bytes/s} \approx 0.39\text{ MB/s}\] \[\text{Monthly volume} = 390{,}625\text{ bytes/s} \times 2{,}592{,}000\text{ s} = 1{,}012.5\text{ GB}\] \[\text{Reduction ratio} = \frac{2{,}592 - 1{,}012.5}{2{,}592} = 60.9\% \text{ reduction}\]

FFT features alone therefore achieve roughly a 61% reduction. The additional on-edge anomaly filtering assumed in Step 4 (transmit the spectrum only when vibration exceeds a threshold) is what pushes real-world deployments to the 95% reduction, about 130 GB/month. The key: the edge transforms high-frequency raw samples into low-frequency feature vectors plus anomaly alerts.
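A three-line check of the reduction arithmetic, in the same decimal units:

# Verify the FFT feature-rate reduction (1 GB = 1e9 bytes throughout)
raw_bps = 500 * 1_000 * 2        # 1,000,000 bytes/s of raw samples
fft_bps = 500 * 40 / 0.0512      # 390,625 bytes/s of FFT peak features
month_s = 2_592_000
raw_gb, fft_gb = raw_bps * month_s / 1e9, fft_bps * month_s / 1e9
print(f"Raw: {raw_gb:,.0f} GB  FFT features: {fft_gb:,.1f} GB "
      f"({1 - fft_gb / raw_gb:.1%} reduction)")
# -> Raw: 2,592 GB  FFT features: 1,012.5 GB (60.9% reduction)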

Key Insight: Edge processing isn’t just about cost – it’s often the only viable option for latency-critical industrial applications. The 95% data reduction from edge FFT analysis also dramatically reduces both bandwidth and storage costs. Always evaluate latency requirements first; they often make the architecture decision for you.

Worked Example: Edge ML Inference Latency Budget for Autonomous Drone Landing

Scenario: A drone delivery company needs to deploy a landing zone detection model on their drones. The model must run on-device to ensure safe landing even when cellular connectivity is lost. They need to calculate if edge inference meets the strict latency requirements.

Given:

  • Landing zone detection model: MobileNetV2-SSD, 6.2 MB INT8 quantized
  • Hardware: NVIDIA Jetson Nano (128 CUDA cores, 4 GB RAM, 10W power budget)
  • Total latency budget: 200 ms (from camera frame capture to landing decision)
  • Camera resolution: 1280x720 at 30 fps
  • Safety requirement: Must process at least 5 fps for smooth landing approach
  • Minimum detection confidence: 85% for valid landing zones

Steps:

  1. Break down the latency budget:
    • Camera capture + buffer: 33 ms (1 frame at 30 fps)
    • Image preprocessing (resize to 300x300, normalize): 8 ms
    • Model inference: ? ms (to be calculated)
    • Post-processing (NMS, confidence filtering): 5 ms
    • Decision logic + motor command: 10 ms
    • Safety margin (10%): 20 ms
    • Available for inference: 200 - 33 - 8 - 5 - 10 - 20 = 124 ms
  2. Benchmark inference on Jetson Nano:
    • MobileNetV2-SSD (float32): 180 ms per frame (exceeds the 124 ms inference budget)
    • MobileNetV2-SSD (INT8 with TensorRT): 65 ms per frame (15.4 fps)
    • MobileNetV2-SSD (INT8 + FP16 mixed): 72 ms per frame (13.9 fps)
    • Selected: INT8 with TensorRT at 65 ms (fits 124 ms budget)
  3. Calculate actual frame rate:
    • Total pipeline latency: 33 + 8 + 65 + 5 + 10 = 121 ms
    • Theoretical max fps: 1000 / 121 = 8.3 fps
    • With 20 ms safety margin: 1000 / 141 = 7.1 fps
    • Meets 5 fps minimum requirement
  4. Power consumption analysis:
    • Jetson Nano at full load: 10W
    • Drone battery: 5000 mAh at 14.8V = 74 Wh
    • Jetson runtime on dedicated battery: 74 Wh / 10W = 7.4 hours
    • But inference is not continuous – only during landing (~30 seconds)
    • Energy per landing: 10 W x 30 s ≈ 0.083 Wh (negligible)
  5. Validate accuracy at edge:
    • Cloud model (ResNet-50): 94.2% mAP for landing zone detection
    • Edge model (MobileNetV2-SSD INT8): 89.7% mAP
    • At 85% confidence threshold: 91.3% precision, 87.8% recall
    • Meets 85% confidence requirement for safe operation

Result:

  • Edge inference time: 65 ms (52% of 124 ms budget)
  • Total pipeline latency: 121 ms (60% of 200 ms budget)
  • Achieved frame rate: 7.1 fps (142% of 5 fps requirement)
  • Detection accuracy: 89.7% mAP (acceptable for safety-critical landing)
  • Power consumption: < 0.1 Wh per landing (negligible impact on flight time)

Key Insight: When calculating edge ML latency budgets, always account for the full pipeline (capture, preprocess, inference, postprocess, action) not just inference time. In this example, inference was only 54% of the total pipeline latency. TensorRT INT8 optimization provided a 2.8x speedup over float32, making the difference between a viable product and a failed prototype. For safety-critical applications, build in explicit safety margins (we used 10%) to handle worst-case variations.
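The budget bookkeeping above is worth encoding so it can be re-run whenever a stage time changes. A minimal sketch using the stage times from this example (illustrative benchmarks, not guarantees for your hardware):

# Latency-budget bookkeeping for the landing pipeline
BUDGET_MS = 200

stages = {
    "capture": 33, "preprocess": 8, "inference": 65,
    "postprocess": 5, "decision": 10,
}
margin = 0.10 * BUDGET_MS                  # explicit 10% safety margin
total = sum(stages.values())               # 121 ms

print(f"Pipeline: {total} ms + {margin:.0f} ms margin (budget {BUDGET_MS} ms)")
print(f"Inference share of pipeline: {stages['inference'] / total:.0%}")
print(f"Frame rate with margin: {1000 / (total + margin):.1f} fps")
assert total + margin <= BUDGET_MS, "pipeline over latency budget"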

Worked Example: Memory Optimization for Multi-Model Edge Gateway

Scenario: A smart building company deploys edge gateways that must run three ML models simultaneously: occupancy detection (from PIR sensors), HVAC anomaly detection (from temperature/humidity), and air quality prediction (from CO2/VOC sensors). The gateway has limited RAM and must optimize memory allocation.

Given:

  • Hardware: Raspberry Pi 4 (4 GB RAM, 1.5 GHz quad-core Cortex-A72)
  • OS and services overhead: 800 MB RAM
  • Available for ML workloads: 3.2 GB RAM
  • Model requirements:
    • Occupancy model: 45 MB weights, 120 MB peak activation memory, 50 ms inference
    • HVAC anomaly model: 28 MB weights, 85 MB peak activation memory, 35 ms inference
    • Air quality model: 18 MB weights, 45 MB peak activation memory, 25 ms inference
  • Inference schedule: Occupancy every 1s, HVAC every 5s, Air quality every 10s
  • Constraint: Must handle worst-case where all three models run simultaneously

Steps:

  1. Calculate baseline memory requirements:
    • Model weights (always loaded): 45 + 28 + 18 = 91 MB
    • Peak simultaneous activations: 120 + 85 + 45 = 250 MB
    • TensorFlow Lite runtime overhead: ~50 MB
    • Input/output buffers: ~20 MB
    • Baseline total: 91 + 250 + 50 + 20 = 411 MB (fits in 3.2 GB easily)
  2. Evaluate memory sharing opportunity:
    • Models rarely run simultaneously (different schedules)
    • Occupancy: scheduled every 1 s, so it runs in 100% of 1-second windows
    • HVAC: scheduled every 5 s, so it runs in 20% of windows
    • Air quality: scheduled every 10 s, so it runs in 10% of windows
    • Probability all three fall in the same window: 100% x 20% x 10% = 2%
    • Optimization opportunity: Share activation memory between models
  3. Implement activation memory pooling:
    • Allocate single shared tensor arena: max(120, 85, 45) = 120 MB
    • Each model borrows arena during inference, releases after
    • New memory requirement: 91 (weights) + 120 (shared arena) + 50 + 20 = 281 MB
    • Savings: 411 - 281 = 130 MB (31% reduction)
  4. Apply INT8 quantization to reduce weight memory:
    • Occupancy (INT8): 45 MB / 4 = 11.25 MB weights, 30 MB activations
    • HVAC (INT8): 28 MB / 4 = 7 MB weights, 21 MB activations
    • Air quality (INT8): 18 MB / 4 = 4.5 MB weights, 11 MB activations
    • Total weights: 11.25 + 7 + 4.5 = 22.75 MB
    • Shared arena (INT8): max(30, 21, 11) = 30 MB
    • New total: 22.75 + 30 + 50 + 20 = 122.75 MB
  5. Handle worst-case simultaneous inference:
    • If models must run together, use sequential execution with shared arena
    • Time budget: 1 second (occupancy schedule)
    • Sequential time: 50 + 35 + 25 = 110 ms (11% of 1 second)
    • Conclusion: Sequential execution with shared memory is viable
  6. Final memory allocation:
    • Model weights (INT8, always resident): 23 MB
    • Shared tensor arena: 30 MB
    • TensorFlow Lite runtime: 50 MB
    • I/O buffers: 20 MB
    • Safety margin (20%): 25 MB
    • Total ML allocation: 148 MB (4.6% of available 3.2 GB)

Result:

  • Memory usage: 148 MB (reduced from 411 MB baseline, 64% savings)
  • Weight memory: 23 MB (reduced from 91 MB with INT8, 75% savings)
  • Activation memory: 30 MB shared (reduced from 250 MB peak, 88% savings)
  • Remaining RAM for future models: 3.2 GB - 148 MB = 3.05 GB (95% available)
  • Inference latency: 110 ms sequential (meets 1 second requirement)

Key Insight: Multi-model edge deployments benefit enormously from activation memory sharing because most models spend <10% of their time in active inference. The 88% activation memory savings came from recognizing that models rarely overlap and can share a single arena. Combined with INT8 quantization for weight reduction, the total memory footprint dropped from 411 MB to 148 MB. This pattern scales to 10+ models on edge gateways with proper scheduling. Always analyze model duty cycles before allocating dedicated memory per model.
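The memory arithmetic is worth automating so it can be re-checked as models are added. A minimal sketch using the sizes from this example; the 4x INT8 factor assumes float32 baselines, and the result matches the text's ~148 MB after its rounding:

# Memory bookkeeping for the shared-arena design (sizes in MB)
models = {               # (weights_mb, activations_mb, inference_ms)
    "occupancy":   (45, 120, 50),
    "hvac":        (28,  85, 35),
    "air_quality": (18,  45, 25),
}
RUNTIME_MB, IO_MB, INT8 = 50, 20, 4

weights = sum(w for w, _, _ in models.values()) / INT8   # 22.75 MB
arena = max(a for _, a, _ in models.values()) / INT8     # 30 MB shared arena
subtotal = weights + arena + RUNTIME_MB + IO_MB          # 122.75 MB
total = subtotal * 1.20                                  # 20% safety margin

seq_ms = sum(t for _, _, t in models.values())           # worst-case sequential
print(f"Resident ML memory: {total:.0f} MB "             # ~148 MB after the
      f"(text rounds to 148 MB)")                        # text's rounding
print(f"Sequential worst case: {seq_ms} ms")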

53.6 Interactive: Edge vs Cloud Cost Calculator

Use this calculator to estimate whether edge processing or cloud-only architecture is more cost-effective for your deployment.

53.7 Interactive: Edge ML Latency Budget Calculator

Use this tool to break down your edge ML inference pipeline and check if it fits within your latency budget.

53.8 Common Pitfalls

Pitfall: Underestimating Edge Device Failures

The mistake: Designing edge deployments assuming gateway hardware will run reliably for years without intervention, leading to catastrophic data loss and blind spots when devices inevitably fail.

Why it happens: Lab testing doesn’t replicate harsh field conditions (temperature extremes, humidity, power fluctuations). IT teams accustomed to data center reliability metrics don’t account for industrial/outdoor environments. Budget pressure leads to selecting consumer-grade hardware for industrial applications.

The fix: Design for failure from day one. Implement heartbeat monitoring with automatic alerts when edge devices go silent. Deploy redundant gateways for critical processes (N+1 minimum). Use industrial-grade hardware rated for your environment (IP67 for outdoor, wide temperature range for factories). Budget for 5-10% annual hardware replacement. Implement store-and-forward on sensors so data survives gateway failures.
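A heartbeat monitor can be very small. A minimal sketch, assuming each gateway reports periodically and that a gateway silent for more than three expected beats should trigger an alert; the identifiers and timeout are illustrative:

import time

HEARTBEAT_TIMEOUT_S = 90        # alert after > 3 missed 30 s heartbeats
last_seen = {}                  # gateway_id -> unix timestamp of last beat

def record_heartbeat(gateway_id):
    last_seen[gateway_id] = time.time()

def silent_gateways(now=None):
    """Return gateways that have missed their heartbeat window."""
    now = now or time.time()
    return [gw for gw, ts in last_seen.items()
            if now - ts > HEARTBEAT_TIMEOUT_S]

record_heartbeat("gw-factory-01")
for gw in silent_gateways(now=time.time() + 120):   # simulate 2 min of silence
    print(f"ALERT: {gw} has gone silent")           # hook into paging here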

Pitfall: Over-Processing at the Edge

The mistake: Running complex ML models or heavy analytics at the edge “because we can,” consuming battery and compute resources faster than necessary while providing marginal improvement over simple threshold checks.

Why it happens: Engineers excited by edge AI capabilities deploy sophisticated models without measuring actual benefit. Marketing claims about “AI at the edge” drive technical decisions. No baseline comparison between simple rules and ML approaches. Premature optimization before understanding actual requirements.

The fix: Start with simple threshold-based filtering (if temp > 80C, alert). Measure the false positive/negative rate. Only upgrade to ML when simple rules prove inadequate. Always A/B test: deploy ML on 10% of devices, compare accuracy and resource usage. Set explicit power/compute budgets before selecting algorithms. A 95% accurate simple filter running 10x longer on battery often beats 99% ML that drains devices in days.
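The simple baseline the fix recommends is a threshold check, ideally with hysteresis so readings hovering near the limit don't generate alert storms. A minimal sketch with example thresholds:

# Threshold filter with hysteresis: the simple baseline to beat
class ThresholdAlert:
    def __init__(self, high=80.0, clear=75.0):
        self.high, self.clear = high, clear   # hysteresis band
        self.active = False

    def update(self, temp_c):
        """Return True exactly when a new alert should fire."""
        if not self.active and temp_c > self.high:
            self.active = True
            return True
        if self.active and temp_c < self.clear:
            self.active = False                # reset below the clear level
        return False

alert = ThresholdAlert()
for reading in [78, 81, 83, 79, 74, 82]:
    if alert.update(reading):
        print(f"ALERT at {reading} C")   # fires at 81 and again at 82

Only if a baseline like this proves inadequate (measured false positive/negative rates) does the extra power and complexity of on-device ML pay for itself.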

Pitfall: Ignoring Clock Drift and Time Sync

The mistake: Assuming edge devices maintain accurate timestamps without explicit synchronization, leading to out-of-order events, incorrect correlations, and analytics anomalies that are nearly impossible to debug.

Why it happens: Cheap RTCs (real-time clocks) drift minutes per week. Network outages prevent NTP synchronization. Developers test in labs with always-connected devices. Time zones and daylight saving transitions not handled consistently across device fleets.

The fix: Use NTP or PTP (Precision Time Protocol) for all edge devices, with local fallback when network unavailable. Record both local and synchronized timestamps. Implement monotonic sequence numbers alongside timestamps for event ordering. Design analytics to tolerate 5-30 second timestamp uncertainty. Alert on devices with >1 minute drift. Store timestamps in UTC only – convert for display, never for storage.
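A minimal sketch of an event record that follows this advice: a monotonic sequence number for ordering, the local monotonic clock for intervals, and a UTC wall-clock timestamp for storage. Field names are illustrative:

import itertools
import time
from datetime import datetime, timezone

_seq = itertools.count()

def make_event(payload):
    return {
        "seq": next(_seq),                      # ordering survives clock jumps
        "mono_ns": time.monotonic_ns(),         # local monotonic clock
        "utc": datetime.now(timezone.utc).isoformat(),  # store UTC only
        "payload": payload,
    }

e1, e2 = make_event({"temp": 21.3}), make_event({"temp": 21.4})
assert e2["seq"] > e1["seq"]                    # order is unambiguous
print(e1["utc"], e1["seq"])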

Pitfall: No OTA Update Strategy for Edge Software

The mistake: Deploying edge devices with firmware/software that cannot be remotely updated, leaving the fleet stuck on buggy or insecure versions, requiring expensive truck rolls to fix issues that could be patched remotely.

Why it happens: Initial deployment focuses on getting devices working, not long-term maintenance. OTA (Over-The-Air) update infrastructure seems complex and unnecessary at launch. Security implications of remote code execution are underestimated. “We’ll add that later” becomes “we shipped 10,000 devices without it.”

The fix: Build OTA update capability from day one as a non-negotiable requirement. Implement A/B partitions for rollback safety (if update fails, device boots previous working version). Use cryptographic signing for all firmware to prevent tampering. Deploy updates in staged rollouts (1% to 10% to 50% to 100%) with automatic rollback on failure metrics. Budget for update infrastructure (bandwidth, servers) as part of device deployment cost. Test update process monthly even when no updates are needed to ensure it still works.
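One piece of staged rollout is deciding deterministically which devices belong to the current stage. A minimal sketch that hashes device IDs into rollout buckets; the stage percentages follow the text, everything else is illustrative:

import hashlib

STAGES = [1, 10, 50, 100]          # percent of fleet per rollout stage

def in_rollout(device_id, stage_pct):
    """Deterministically map a device into the first stage_pct% of the fleet."""
    h = int(hashlib.sha256(device_id.encode()).hexdigest(), 16)
    return (h % 100) < stage_pct

for pct in STAGES:
    eligible = sum(in_rollout(f"dev-{i}", pct) for i in range(1_000))
    print(f"stage {pct:>3}%: {eligible} of 1000 devices eligible")

Because the mapping is a hash rather than a random draw, a device that updated at the 1% stage stays updated at every later stage.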

Pitfall: Assuming Edge Means Always Disconnected

The mistake: Designing edge systems that operate in complete isolation from cloud services, missing opportunities for remote monitoring, centralized management, and hybrid processing that combines edge speed with cloud intelligence.

Why it happens: Edge computing marketing emphasizes offline operation and low latency. Teams interpret “process locally” as “never connect to cloud.” Network security concerns lead to air-gapped designs. Initial requirements focus on worst-case (no connectivity) without considering normal operation (intermittent or continuous connectivity).

The fix: Design for the spectrum of connectivity states, not just offline. Implement store-and-forward for essential cloud sync during connectivity windows. Use cloud for what it does best: centralized dashboards, fleet-wide analytics, ML model training, configuration management. Keep time-critical decisions at edge while sending summaries/alerts to cloud. Implement graceful degradation: full functionality with connectivity, core safety functions without. Test explicitly for all connectivity states (always-on, intermittent, fully offline) during development.
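Store-and-forward is the mechanism that bridges these connectivity states. A minimal sketch with a bounded local buffer that drains whenever the link is up; upload and connected are stand-ins for your real cloud client and link check:

from collections import deque

class StoreAndForward:
    def __init__(self, maxlen=10_000):
        # bounded buffer: the oldest readings drop first when full
        self.buffer = deque(maxlen=maxlen)

    def enqueue(self, reading):
        self.buffer.append(reading)

    def flush(self, upload, connected):
        """Drain the buffer through upload() while the link is up."""
        while connected() and self.buffer:
            reading = self.buffer[0]
            upload(reading)           # should raise on failure to stop draining
            self.buffer.popleft()     # remove only after a clean upload

saf = StoreAndForward()
saf.enqueue({"co2_ppm": 612})
saf.flush(upload=print, connected=lambda: True)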

Edge Computing Patterns Summary

Based on this analysis, edge computing excels when:

  • Latency matters: Sub-100ms response times required
  • Bandwidth is limited: Cellular, satellite, or metered connections
  • Reliability is critical: Must operate during network outages
  • Privacy is required: Data cannot leave local premises
  • Cost optimization: Reduce cloud egress charges (often $0.09/GB)

Cloud computing excels when:

  • Complex processing: ML model training, big data analytics
  • Global coordination: Multi-site data aggregation
  • Scalability: Elastic compute for variable workloads
  • Centralized management: Single pane of glass for all devices
  • Historical analysis: Long-term data storage and queries

53.9 Summary

  • Interactive latency calculators help quantify the impact of processing location on end-to-end response time
  • Factory monitoring example shows edge-hybrid architecture saves 41% over 3 years while meeting 50ms latency requirements
  • Drone landing example demonstrates full pipeline analysis: inference accounts for about 54% of total pipeline latency, not the entirety
  • Memory optimization example shows 64% savings through activation sharing and INT8 quantization
  • Five common pitfalls include underestimating failures, over-processing, clock drift, missing OTA updates, and assuming always-disconnected operation
  • Edge excels for latency, bandwidth, reliability, and privacy; cloud excels for complex analytics and centralized management

53.10 Concept Check

Quick Knowledge Check

Q: When does edge computing save money versus cloud-only architecture?

Edge computing saves money when: (1) data volumes exceed 10 GB/day per site, (2) data reduction factors exceed 100x, (3) bandwidth costs more than $0.10/GB, or (4) latency requirements under 1 second justify hardware investment. For low-volume deployments (<1 GB/day), cloud-only is typically cheaper.

Q: Why can’t you rely on average latency alone when designing edge systems?

Cloud average latency might be 200ms, but peak spikes can reach 2+ seconds during network congestion. Safety-critical systems must guarantee worst-case latency, not average latency. That’s why sub-100ms requirements mandate edge processing.
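A quick way to see this in code: a synthetic latency distribution whose mean looks healthy while the tail violates the requirement. The 98%/2% split and spike magnitudes below are assumed for illustration:

import random
import statistics

random.seed(1)
# 98% of requests near 200 ms, 2% congestion spikes near 2 s (assumed)
samples = [random.gauss(200, 20) if random.random() < 0.98
           else random.gauss(2000, 300) for _ in range(10_000)]

p99 = statistics.quantiles(samples, n=100)[98]   # 99th percentile
print(f"mean={statistics.mean(samples):.0f} ms  p99={p99:.0f} ms")
# Design against the tail (p99 / worst case), not the mean.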

53.11 Concept Relationships

Practical Applications:

  • Factory Monitoring - Vibration analysis saves 41% over 3 years
  • Drone Landing - 7.1 fps inference on Jetson Nano meets 5 fps requirement
  • Multi-Model Gateways - 64% memory savings through activation sharing

53.12 Try It Yourself

Scenario: You’re proposing edge gateways to reduce cloud costs for a sensor deployment.

Your Data:

  • Number of sensors: _______
  • Sample rate (readings/hour): _______
  • Bytes per reading: _______
  • Current cloud costs ($/month): _______

Step 1: Calculate raw bandwidth

def calculate_bandwidth(sensors, rate_per_hour, bytes_per_reading):
    """Raw monthly data volume (GiB) for a fleet of sensors."""
    daily_bytes = sensors * rate_per_hour * 24 * bytes_per_reading
    monthly_gb = (daily_bytes * 30) / (1024**3)  # 30-day month, binary GiB
    return monthly_gb

monthly_data_gb = calculate_bandwidth(
    sensors=1000, rate_per_hour=60, bytes_per_reading=20
)
print(f"Monthly bandwidth: {monthly_data_gb:.2f} GB")

Step 2: Apply edge reduction strategies

  • Downsample: 60/hour to 1/hour = 60x reduction
  • Aggregate: 10 sensors to 1 summary = 10x reduction
  • Filter: Remove 80% unchanged readings = 5x reduction
  • Total reduction: 60 x 10 x 5 = 3,000x

Step 3: Calculate savings

def calculate_edge_roi(cloud_monthly_cost, edge_hardware_cost,
                       reduction_factor, monthly_ops_cost=50):
    """Estimate monthly savings, payback period, and ROI for edge gateways."""
    new_cloud_cost = cloud_monthly_cost / reduction_factor
    monthly_savings = (cloud_monthly_cost - new_cloud_cost
                       - monthly_ops_cost)
    # Payback is undefined if ops costs eat the savings
    payback_months = (edge_hardware_cost / monthly_savings
                      if monthly_savings > 0 else float('inf'))
    return {
        'new_monthly_cost': new_cloud_cost,
        'monthly_savings': monthly_savings,
        'payback_months': payback_months,
        'annual_roi_pct': (monthly_savings * 12
                           / edge_hardware_cost * 100)
                          if edge_hardware_cost > 0 else 0
    }

roi = calculate_edge_roi(
    cloud_monthly_cost=500,
    edge_hardware_cost=2000,
    reduction_factor=3000,
    monthly_ops_cost=50
)

print(f"New monthly cost: ${roi['new_monthly_cost']:.2f}")
print(f"Monthly savings: ${roi['monthly_savings']:.2f}")
print(f"Payback period: {roi['payback_months']:.1f} months")
print(f"Annual ROI: {roi['annual_roi_pct']:.0f}%")

What to Observe:

  • Payback period under 12 months = strong business case
  • Data reduction factor is the key variable
  • Hidden costs: maintenance, firmware updates, hardware refresh
  • Non-financial benefits: latency, privacy, offline operation

Extension Challenge: Add latency calculations. If cloud = 200ms round-trip and edge = 10ms, what percentage of control loop budget do you save?

53.13 Knowledge Check

53.14 Videos


Key Takeaway

Edge computing decisions should be driven by numbers, not assumptions. Always calculate your specific latency budget, bandwidth savings, and total cost of ownership before choosing edge over cloud. Hybrid architectures – edge for time-critical decisions, cloud for complex analytics – typically deliver the best results, and designing for device failures, clock drift, and OTA updates from day one prevents costly rework later.

“The Great Factory Race!”

Sammy the Sensor was working overtime at the chocolate factory, checking every single chocolate bar rolling off the production line – a thousand checks per second! “I need to tell the Cloud about EVERY single check!” Sammy said, sending mountains of data up into the sky.

“Whoa, slow down!” said Max the Microcontroller, the clever edge gateway at the factory. “You are clogging up the whole internet highway! Let me help. I will watch all your readings right here, and I will only tell the Cloud when something is actually wrong – like a chocolate bar that is too hot or too cold.”

Bella the Battery cheered from the corner. “That saves SO much energy! Instead of shouting every single second, Max only sends a message when it really matters. My power lasts way longer this way!”

Lila the LED blinked green happily. “And the best part? When Max spots a bad chocolate bar, he can flash me red in just 10 milliseconds – way faster than waiting for the Cloud to respond! The Cloud is great at looking at patterns over weeks and months, but Max is the speed champion for quick decisions right here on the factory floor.”

What did the Squad learn? Process data close to where it is created (at the edge) for fast decisions and power savings. Use the cloud for big-picture analysis. And always have a plan for when things go wrong – like Max keeping a backup notebook in case the internet goes down!

53.15 What’s Next

| Next Topic | Description |
|---|---|
| Data in the Cloud | Levels 5-7 of the IoT Reference Model for cloud analytics |
| Edge Data Acquisition | Data collection strategies at the edge |
| ML Edge Deployment | Deploying inference models on resource-constrained devices |
| Multi-Sensor Data Fusion | Combining edge data streams for richer analytics |