Edge computing moves processing closer to where data is created—like having a mini-computer at each sensor location.
Why not just send everything to the cloud?
| Cloud-Only | Edge Computing |
|---|---|
| 50–200 ms latency (round trip) | <10 ms latency (local) |
| Requires constant internet | Works offline |
| High bandwidth costs | Filters data locally |
| Privacy concerns (data leaves site) | Sensitive data stays local |
Real-world example: a factory camera inspects products for defects.

- Cloud approach: send video (10 Mbps) to the cloud, wait for the response, then reject the defective product → too slow; the product is already packaged.
- Edge approach: run an ML model on a local edge device, detect the defect in 5 ms, reject immediately → the product is rejected in real time.
Three popular edge platforms:
| Platform | Best For | Runs On |
|---|---|---|
| AWS Greengrass | AWS users, Lambda functions | Industrial PCs, Raspberry Pi |
| Azure IoT Edge | Azure users, Docker containers | Any Linux device |
| EdgeX Foundry | Vendor-neutral, industrial | Any hardware |
When to use edge computing:
Real-time decisions needed (<100ms)
Unreliable or expensive connectivity
Privacy requirements (data can’t leave premises)
High data volume (video, sensor streams)
Sensor Squad: Computing at the Edge
“Why send data all the way to the cloud when you can process it right here?” asked Max the Microcontroller, pointing to a small computer near the sensors. “Edge computing puts a mini-brain next to the sensors. It makes fast decisions locally and only sends summaries to the cloud.”
Sammy the Sensor gave an example. “In a factory, I detect 100 vibration readings per second from a motor. Sending all that data to the cloud would cost a fortune in bandwidth. Instead, the edge computer analyzes the vibrations locally and only sends an alert if the motor sounds like it is about to fail. 99% of the data stays local.”
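Sammy's local filtering can be sketched in a few lines. The snippet below batches one second of vibration readings and forwards an alert only when the RMS level crosses a threshold; the threshold value and sample shapes are illustrative assumptions, not real motor data.

```python
import math

def should_alert(samples, rms_threshold=2.5):
    """Return True if the vibration batch suggests imminent failure.

    samples: one second of accelerometer readings (e.g. 100 values in g).
    rms_threshold: hypothetical cutoff; tune it against the motor's
    healthy baseline in practice.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > rms_threshold

# Healthy motor: low-amplitude noise, nothing is sent upstream.
healthy = [0.1, -0.2, 0.15, -0.1] * 25   # 100 readings at 100 Hz
# Failing bearing: large oscillations trigger a single cloud alert.
failing = [3.0, -3.2, 2.8, -3.1] * 25
```

Run per batch, this is how "99% of the data stays local": most batches produce no upstream traffic at all.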
Lila the LED highlighted another advantage. “Edge computing works even when the internet is down! If the cloud connection drops, the factory keeps running because decisions are made locally. When the connection returns, the edge device syncs any pending data.” Bella the Battery added, “And for privacy-sensitive applications like security cameras, edge processing means video never leaves the building. The AI runs on a local edge device, detects people or vehicles, and only sends metadata to the cloud – no actual video streams. This satisfies privacy regulations while still providing smart monitoring.”
Key Concepts
Microcontroller Unit (MCU): Integrated circuit combining CPU, RAM, flash, and peripherals optimised for embedded control applications.
Microprocessor Unit (MPU): High-performance processor requiring external RAM, storage, and peripherals, used in Linux-based IoT devices like Raspberry Pi.
Schematic: Electrical diagram showing component connections using standardised symbols, used to guide PCB layout.
PCB (Printed Circuit Board): Fiberglass substrate with etched copper traces connecting electronic components into a permanent assembly.
ESD Protection: Diodes and resistors protecting sensitive IC pins from electrostatic discharge during handling and in-field use.
Decoupling Capacitor: Small capacitor placed close to IC power pins to suppress high-frequency noise on the supply rail.
Design Rule Check (DRC): Automated PCB verification ensuring trace widths, clearances, and drill sizes meet the fabrication process constraints.
33.3 Introduction
Edge computing platforms extend cloud capabilities to devices at the network edge, enabling local processing, reduced latency, and offline operation. While cloud platforms excel at centralized analytics, storage, and global coordination, edge platforms handle time-sensitive decisions, bandwidth optimization, and privacy-sensitive processing. Modern IoT architectures typically combine both: edge for immediate actions, cloud for long-term analytics and coordination.
33.4 AWS IoT Greengrass
Description: Edge runtime that extends AWS capabilities to edge devices.
33.4.1 Key Features
Local Lambda functions
ML inference (SageMaker models)
Local messaging (MQTT)
Automatic sync with AWS IoT Core
Stream manager for data buffering
33.4.2 Architecture Overview
33.4.3 Example Greengrass Lambda
```python
import greengrasssdk
import json

client = greengrasssdk.client('iot-data')

def lambda_handler(event, context):
    # Process locally on the edge device
    temperature = event['temperature']
    if temperature > 30:
        # Publish alert locally
        payload = json.dumps({'alert': 'High temperature', 'value': temperature})
        client.publish(topic='alerts/temperature', payload=payload)
    return {'statusCode': 200}
```
Putting Numbers to It
Edge Computing Latency vs Cloud: Smart factory anomaly detection with 100 sensors at 10Hz:
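A back-of-envelope calculation makes the trade-off concrete. The payload size, filtering ratio, and per-hop latencies below are illustrative assumptions for this scenario, not measurements.

```python
# 100 sensors sampling at 10 Hz; payload size is an assumed JSON message.
SENSORS = 100
RATE_HZ = 10
PAYLOAD_BYTES = 200

readings_per_day = SENSORS * RATE_HZ * 86_400           # 86.4 M readings/day
cloud_bytes_per_day = readings_per_day * PAYLOAD_BYTES  # ~17.3 GB/day upstream

# Edge filtering: assume only ~1% of readings are anomalous enough to forward.
edge_bytes_per_day = cloud_bytes_per_day * 0.01         # ~173 MB/day

# Latency budget per decision (illustrative figures):
cloud_latency_ms = 5 + 100 + 5   # serialize + round trip + deserialize
edge_latency_ms = 5 + 2          # local inference + actuation

print(f"readings/day: {readings_per_day:,}")
print(f"cloud upstream: {cloud_bytes_per_day / 1e9:.2f} GB/day")
print(f"edge upstream: {edge_bytes_per_day / 1e6:.1f} MB/day")
print(f"decision latency: cloud {cloud_latency_ms} ms vs edge {edge_latency_ms} ms")
```

Even with generous assumptions, edge filtering cuts upstream traffic by two orders of magnitude and brings the decision loop well under the 100 ms real-time budget.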
```shell
# Deploy EdgeX with Docker Compose
git clone https://github.com/edgexfoundry/edgex-compose.git
cd edgex-compose

# Start core services
docker-compose -f docker-compose-no-secty.yml up -d

# Add Modbus device service
docker-compose -f docker-compose-no-secty.yml \
  -f docker-compose-device-modbus.yml up -d
```
33.6.5 Strengths and Limitations
Strengths:
Vendor-neutral
Extensive protocol support
Microservices flexibility
Commercial support available
Limitations:
Complex architecture
Steep learning curve
Resource intensive
Typical Use Cases:
Industrial edge computing
Building automation
Protocol aggregation
Multi-vendor environments
33.7 Edge Platform Comparison
| Feature | AWS Greengrass | Azure IoT Edge | EdgeX Foundry |
|---|---|---|---|
| Vendor | Amazon | Microsoft | Linux Foundation |
| Architecture | Lambda functions | Docker containers | Microservices |
| Cloud Integration | AWS only | Azure only | Any (agnostic) |
| ML Support | SageMaker models | ONNX, Custom Vision | External |
| Protocol Support | Limited | Moderate | Extensive |
| Offline Mode | Yes | Yes | Yes |
| License | Commercial | Commercial | Apache 2.0 |
| Best For | AWS shops | Azure shops | Multi-vendor |
33.8 Worked Example: Choosing an Edge Platform for a Cold Chain Monitoring System
Scenario: A seafood distributor ships 200 refrigerated containers daily across three states. Each container has 4 temperature sensors (door, floor, ceiling, product core) and a GPS tracker. Regulatory compliance requires continuous temperature logging at 15-minute resolution and immediate alerts if the temperature rises above -18 °C for more than 30 minutes. Cellular connectivity is intermittent in rural transit corridors (40% of the route has no signal).
Requirements Analysis:
| Requirement | Implication |
|---|---|
| 200 containers × 4 sensors = 800 data points per reading | Moderate write volume |
| 15-minute logging for compliance | Must store locally during connectivity gaps |
| Alert within 30 minutes of threshold breach | Cannot depend on cloud for alerting |
| 40% of route with no connectivity | Edge must operate autonomously for hours |
| Regulatory audit trail | Tamper-evident logs with timestamps |
| Existing fleet management in Azure | Integration preference |
Platform Evaluation:
| Criterion | AWS Greengrass | Azure IoT Edge | EdgeX Foundry |
|---|---|---|---|
| Offline alerting | Lambda functions run locally | Modules run locally | Rules engine runs locally |
| Cloud integration | AWS IoT Core | Azure IoT Hub (existing) | Any (agnostic) |
| Container support | Limited | Docker-native | Docker-native |
| ML inference | SageMaker models | ONNX models | External |
| Cellular optimization | Stream Manager buffers | Store-and-forward | Custom |
| Fleet-wide OTA | Yes | Yes (layered deployments) | Manual |
| Team expertise | None | Some (existing Azure) | None |
Decision: Azure IoT Edge wins because the distributor already uses Azure for fleet management, and the Docker-based module architecture lets them deploy a custom alert module alongside the built-in Store and Forward module for connectivity gaps.
Local rule: IF any_sensor > -18 °C FOR 30 min THEN trigger_local_alarm AND queue_cloud_alert
Offline behavior: Store up to 72 hours of readings locally (128MB SD card partition), sync to Azure IoT Hub when cellular reconnects
Cloud: Azure IoT Hub receives telemetry, Time Series Insights for compliance dashboards
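The core of the local rule is a threshold-with-duration check. The sketch below tracks how long each sensor has been above the threshold and fires exactly once the breach is sustained; class and method names are hypothetical, and in a real deployment this logic would live inside the custom Azure IoT Edge alert module's message handler.

```python
import time

THRESHOLD_C = -18.0
BREACH_SECONDS = 30 * 60   # 30-minute sustained breach per the regulation

class BreachMonitor:
    """Track per-sensor breach duration for the local alert rule (sketch)."""

    def __init__(self):
        self.breach_start = {}   # sensor_id -> timestamp the breach began

    def update(self, sensor_id, temp_c, now=None):
        """Return True when the sustained-breach alert should fire."""
        now = time.time() if now is None else now
        if temp_c <= THRESHOLD_C:
            # Back in range: reset this sensor's breach timer.
            self.breach_start.pop(sensor_id, None)
            return False
        start = self.breach_start.setdefault(sensor_id, now)
        return (now - start) >= BREACH_SECONDS
```

Because the timer runs on the edge device, the alarm fires even during the 40% of the route with no cellular signal; the cloud alert is simply queued for the next reconnect.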
Cost per container (monthly): Edge hardware $75 (one-time, amortized $2.08/mo over 3 years) + cellular data $8 + Azure IoT Hub $0.50 + storage $0.30 = $10.88/month
Result after 6 months: Zero regulatory violations (previously 12/year at $5,000 each = $60,000 saved). 3 spoilage events caught by local alerts during rural transit that cloud-only architecture would have missed entirely, saving $45,000 in product loss.
33.9 Knowledge Check
Quiz: Edge Computing Platform Selection
Matching Quiz: Edge Platform Features
Ordering Quiz: Edge Computing Data Flow
Common Pitfalls
1. Publishing All Sensor Data at Maximum Rate Without Filtering
Sending raw sensor readings at maximum frequency consumes unnecessary bandwidth and storage, and can cause broker backpressure that drops messages from other devices. Apply edge-side dead-band filtering and send only meaningful updates; reserve full-rate data for local debug sessions.
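A dead-band filter is only a few lines. The sketch below forwards a reading only when it has moved meaningfully since the last transmitted value; the `delta` tuning value is a hypothetical per-sensor choice.

```python
class DeadbandFilter:
    """Forward a reading only when it moves beyond a dead band (sketch)."""

    def __init__(self, delta):
        self.delta = delta        # minimum change worth reporting, sensor units
        self.last_sent = None

    def should_send(self, value):
        if self.last_sent is None or abs(value - self.last_sent) >= self.delta:
            self.last_sent = value
            return True
        return False

f = DeadbandFilter(delta=0.5)
readings = [20.0, 20.1, 20.2, 20.7, 20.8, 22.0]
sent = [v for v in readings if f.should_send(v)]
# Only meaningful changes survive: [20.0, 20.7, 22.0]
```

Six readings shrink to three publishes here; on a slowly changing signal the reduction is far larger.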
2. Using a Single MQTT Topic for All Device Data
Publishing all sensor types to one flat topic prevents selective subscriptions, makes stream processing complex, and inhibits per-sensor access control. Use a structured topic hierarchy (e.g. iot/{deviceId}/{sensorType}) that enables selective consumption and per-topic ACL policies.
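Selective subscription is what the hierarchy buys you. The minimal matcher below implements the standard MQTT wildcard rules (`+` matches one level, `#` matches the rest) to show how consumers pick out exactly the streams they need; it is a teaching sketch, not a broker implementation.

```python
def topic_matches(pattern, topic):
    """Minimal MQTT wildcard matcher: '+' = one level, '#' = all remaining."""
    p_parts, t_parts = pattern.split('/'), topic.split('/')
    for i, p in enumerate(p_parts):
        if p == '#':
            return True
        if i >= len(t_parts):
            return False
        if p != '+' and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)

# With iot/{deviceId}/{sensorType}, consumers subscribe selectively:
assert topic_matches('iot/+/temperature', 'iot/pump-7/temperature')
assert topic_matches('iot/pump-7/#', 'iot/pump-7/vibration/x')
assert not topic_matches('iot/+/temperature', 'iot/pump-7/pressure')
```

A dashboard can subscribe to `iot/+/temperature` across the whole fleet while an ACL restricts a vendor's account to `iot/pump-7/#` alone.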
3. Not Designing for Device Shadow Conflicts
When multiple clients update the device shadow simultaneously without conflict resolution, the device receives contradictory desired-state updates and enters an oscillating state. Implement optimistic locking with version numbers on shadow updates and define a clear precedence order for conflicting state sources.
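The optimistic-locking pattern reduces to a version check on every write. The class and method names below are hypothetical; real platforms expose the same idea through a version field on the shadow document.

```python
class DeviceShadow:
    """Sketch of optimistic locking for desired-state updates.

    Each write carries the version it was based on; stale writers are
    rejected and must re-read before retrying, which prevents two
    clients from ping-ponging the desired state.
    """

    def __init__(self):
        self.version = 0
        self.desired = {}

    def update(self, changes, expected_version):
        if expected_version != self.version:
            return False             # conflict: caller must re-read first
        self.desired.update(changes)
        self.version += 1
        return True

shadow = DeviceShadow()
assert shadow.update({'setpoint': 4}, expected_version=0)       # accepted
assert not shadow.update({'setpoint': 7}, expected_version=0)   # stale, rejected
assert shadow.update({'setpoint': 7}, expected_version=1)       # retry succeeds
```

The rejected writer re-reads the shadow, reapplies its change on top of the current state, and retries with the new version number.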
33.10 Summary
AWS IoT Greengrass extends AWS Lambda to edge devices with local MQTT messaging, ML inference using SageMaker models, and automatic synchronization with AWS IoT Core when connectivity returns
Azure IoT Edge uses Docker containers as modules, enabling any language or framework at the edge with strong integration into Azure Functions, Stream Analytics, and machine learning services
EdgeX Foundry provides vendor-neutral edge computing with extensive protocol support (Modbus, BLE, OPC UA) and microservices architecture under Apache 2.0 license
Edge computing benefits include sub-100ms latency for real-time control, operation during connectivity loss, bandwidth optimization by processing locally, and privacy protection by keeping sensitive data on-premises
Hybrid architectures combine edge platforms for immediate actions with cloud platforms for long-term analytics, coordination, and model training
Platform selection depends on existing cloud investments (AWS vs. Azure), vendor neutrality requirements, protocol needs, and available technical expertise
Edge computing platforms extend cloud capabilities to local devices through three core mechanisms:
Local Lambda/Function Execution (AWS Greengrass, Azure IoT Edge):
1. Developer writes a Lambda function or containerized module in the cloud IDE
2. Platform packages the function with its runtime dependencies
3. Edge device downloads the function bundle and unpacks it to local storage
4. Local runtime executes functions on device triggers (MQTT message, timer, sensor event)
5. Functions access local resources (GPIO, serial ports, databases) without cloud latency
Intelligent Sync and Offline Operation:
Edge device maintains local MQTT broker and data buffer
When connected, syncs telemetry to cloud and receives function updates
When disconnected, continues processing locally using last-known functions
On reconnection, replays buffered data to cloud (with deduplication)
Stream manager prioritizes critical data over bulk data during limited bandwidth
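The buffering and prioritization behavior can be sketched with a priority queue. This models the idea edge stream managers provide; the class and priority scheme are illustrative, not any platform's actual API.

```python
import heapq
import itertools

class StoreAndForward:
    """Buffer telemetry while offline; drain critical messages first (sketch).

    Priority 0 = critical (alerts), 1 = bulk telemetry.
    """

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # preserves FIFO order within a priority

    def enqueue(self, message, priority=1):
        heapq.heappush(self._heap, (priority, next(self._seq), message))

    def drain(self):
        """Replay buffered messages on reconnect, critical first."""
        while self._heap:
            _, _, message = heapq.heappop(self._heap)
            yield message

buf = StoreAndForward()
buf.enqueue('temp=4.1')                        # bulk reading while offline
buf.enqueue('ALERT: door open', priority=0)    # critical event while offline
buf.enqueue('temp=4.2')                        # bulk reading while offline
# On reconnect, drain() yields the alert before the buffered bulk readings.
```

During a brief reconnect over a weak cellular link, this ordering gets the alert through even if the bulk backlog never finishes uploading.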
Edge Machine Learning Inference:
Cloud trains ML model (TensorFlow, PyTorch) on historical data
Model is optimized for edge hardware (quantization, pruning)
Edge device downloads model artifact (ONNX, TensorFlow Lite)
Local inference engine runs predictions on sensor data in <100ms
Results used for immediate control decisions (reject defective product, trigger alert)
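The quantization step mentioned above is the main lever for shrinking a model to fit edge hardware. The sketch below shows symmetric int8 quantization of a weight vector, the core idea behind the roughly 4x size reduction; real toolchains (TensorFlow Lite, ONNX Runtime) apply it per-tensor or per-channel with calibration data.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map float weights to [-127, 127] (sketch)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # stored as int8, 1 byte each
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return [v * scale for v in q]

weights = [0.82, -0.31, 0.05, -1.27]          # float32: 4 bytes per weight
q, scale = quantize_int8(weights)             # int8: 1 byte per weight
approx = dequantize(q, scale)                 # close to the originals
```

The reconstructed values differ from the originals by at most half a quantization step, a loss most models tolerate with little accuracy impact.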
The power of edge platforms comes from balancing cloud intelligence (model training, fleet coordination) with edge autonomy (local decisions, offline operation).
33.12 Concept Relationships
Understanding edge computing platforms connects to several IoT architectural concepts:
Cloud IoT Platforms provide the cloud half - edge platforms like AWS Greengrass and Azure IoT Edge are extensions of cloud platforms (AWS IoT Core, Azure IoT Hub), not replacements; hybrid architectures use both
Edge Fog Computing defines architectural layers - edge platforms implement the “edge tier” (device-adjacent processing) between devices and cloud, enabling fog computing patterns
Application Frameworks can run on edge - Node-RED and Home Assistant deploy to edge devices running on Greengrass or IoT Edge, combining visual programming with edge autonomy
Device Management handles edge deployments - updating edge functions and ML models requires OTA capabilities and staged rollouts just like firmware updates
Edge AI and TinyML enables intelligent edge - ML models deployed via edge platforms perform inference locally, critical for real-time computer vision and predictive maintenance
Edge platforms shift the cloud-edge boundary based on application needs - latency-critical decisions move to edge, long-term analytics stay in cloud.
This chapter covers edge computing platforms, explaining the core concepts, practical design decisions, and common pitfalls that IoT practitioners need to build effective, reliable connected systems.
33.14 What’s Next
| If you want to… | Read this |
|---|---|
| Learn about data visualisation for connected devices | |