33  Edge Computing Platforms

33.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Deploy AWS IoT Greengrass: Configure edge Lambda functions for local processing and ML inference
  • Configure Azure IoT Edge: Set up containerized modules with Docker for edge computing workloads
  • Implement EdgeX Foundry: Build vendor-neutral edge solutions with microservices architecture
  • Design Hybrid Architectures: Combine edge processing for real-time decisions with cloud analytics for long-term insights
  • Select Edge Platforms: Choose between Greengrass, IoT Edge, and EdgeX based on latency, vendor, and protocol needs
  • Build Offline-Capable Systems: Implement store-and-forward patterns that sync when connectivity returns
  • Deploy Edge ML Models: Run TensorFlow Lite and ONNX models locally for sub-100ms inference

33.2 Prerequisites

Before diving into this chapter, you should be familiar with cloud IoT platforms (such as AWS IoT Core and Azure IoT Hub), MQTT messaging, and basic container concepts.

Edge computing moves processing closer to where data is created—like having a mini-computer at each sensor location.

Why not just send everything to the cloud?

| Cloud-Only | Edge Computing |
|---|---|
| 50-200ms latency (round trip) | <10ms latency (local) |
| Requires constant internet | Works offline |
| High bandwidth costs | Filters data locally |
| Privacy concerns (data leaves site) | Sensitive data stays local |

Real-world example: A factory camera inspects products for defects:

  • Cloud approach: Send video (10 Mbps) to cloud, wait for response, reject defective product → Too slow, product already packaged
  • Edge approach: Run ML model on local edge device, detect defect in 5ms, reject immediately → Product rejected in real-time

Three popular edge platforms:

| Platform | Best For | Runs On |
|---|---|---|
| AWS Greengrass | AWS users, Lambda functions | Industrial PCs, Raspberry Pi |
| Azure IoT Edge | Azure users, Docker containers | Any Linux device |
| EdgeX Foundry | Vendor-neutral, industrial | Any hardware |

When to use edge computing:

  • Real-time decisions needed (<100ms)
  • Unreliable or expensive connectivity
  • Privacy requirements (data can’t leave premises)
  • High data volume (video, sensor streams)

“Why send data all the way to the cloud when you can process it right here?” asked Max the Microcontroller, pointing to a small computer near the sensors. “Edge computing puts a mini-brain next to the sensors. It makes fast decisions locally and only sends summaries to the cloud.”

Sammy the Sensor gave an example. “In a factory, I detect 100 vibration readings per second from a motor. Sending all that data to the cloud would cost a fortune in bandwidth. Instead, the edge computer analyzes the vibrations locally and only sends an alert if the motor sounds like it is about to fail. 99% of the data stays local.”

Lila the LED highlighted another advantage. “Edge computing works even when the internet is down! If the cloud connection drops, the factory keeps running because decisions are made locally. When the connection returns, the edge device syncs any pending data.” Bella the Battery added, “And for privacy-sensitive applications like security cameras, edge processing means video never leaves the building. The AI runs on a local edge device, detects people or vehicles, and only sends metadata to the cloud – no actual video streams. This satisfies privacy regulations while still providing smart monitoring.”

Key Concepts

  • Edge Computing: Processing data on devices close to where it is generated rather than in a remote data center, reducing latency and bandwidth use.
  • AWS IoT Greengrass: Amazon's edge runtime that runs Lambda functions and ML inference locally and synchronizes with AWS IoT Core.
  • Azure IoT Edge: Microsoft's container-based edge platform that deploys Docker modules managed through Azure IoT Hub.
  • EdgeX Foundry: Open-source, vendor-neutral edge framework hosted by the Linux Foundation, built as a set of microservices.
  • Store-and-Forward: Pattern in which an edge device buffers data during connectivity loss and syncs it to the cloud when the link returns.
  • Edge ML Inference: Running trained models (TensorFlow Lite, ONNX) locally on edge hardware for sub-100ms predictions.

33.3 Introduction

Edge computing platforms extend cloud capabilities to devices at the network edge, enabling local processing, reduced latency, and offline operation. While cloud platforms excel at centralized analytics, storage, and global coordination, edge platforms handle time-sensitive decisions, bandwidth optimization, and privacy-sensitive processing. Modern IoT architectures typically combine both: edge for immediate actions, cloud for long-term analytics and coordination.

33.4 AWS IoT Greengrass

Description: Edge runtime that extends AWS capabilities to edge devices.

33.4.1 Key Features

  • Local Lambda functions
  • ML inference (SageMaker models)
  • Local messaging (MQTT)
  • Automatic sync with AWS IoT Core
  • Stream manager for data buffering

33.4.2 Architecture Overview

Figure: AWS IoT Greengrass architecture showing an edge device with local Lambda functions, ML inference, and MQTT messaging, connected to AWS IoT Core cloud services with automatic synchronization.

33.4.3 Example Greengrass Lambda

import greengrasssdk
import json

client = greengrasssdk.client('iot-data')

def lambda_handler(event, context):
    # Process locally on edge
    temperature = event['temperature']

    if temperature > 30:
        # Publish alert locally
        payload = json.dumps({'alert': 'High temperature', 'value': temperature})
        client.publish(topic='alerts/temperature', payload=payload)

    return {'statusCode': 200}

Edge Computing Latency vs Cloud: Smart factory anomaly detection with 100 sensors at 10Hz:

Cloud processing (200ms round-trip latency): \[\text{Detection delay} = 100ms \text{ (sensing)} + 200ms \text{ (cloud)} + 50ms \text{ (action)} = 350ms\]

Edge processing (local inference <10ms): \[\text{Detection delay} = 100ms \text{ (sensing)} + 10ms \text{ (local)} + 50ms \text{ (action)} = 160ms\]

Latency improvement: \(350 - 160 = 190ms\) (54% faster)

Data cost savings (at 1KB/reading):

  • Cloud: \(100 \times 10Hz \times 86{,}400s \times 1KB = 86.4GB/\text{day}\)
  • Edge (only alerts, 1% of data): \(0.864GB/\text{day}\)

Edge reduces daily cellular data by 99% (roughly \$25/day down to \$0.25/day at typical IoT data rates), critical for remote deployments.
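These savings are easy to sanity-check. The short script below recomputes the detection delays and daily data volumes from the assumptions above (100 sensors, 10 Hz, 1 KB per reading); it is a back-of-envelope sketch, not part of any platform SDK:

```python
def detection_delay_ms(sensing_ms, processing_ms, action_ms):
    """Total delay from physical event to corrective action."""
    return sensing_ms + processing_ms + action_ms

cloud = detection_delay_ms(100, 200, 50)  # 200ms cloud round trip
edge = detection_delay_ms(100, 10, 50)    # <10ms local inference

print(f"Cloud: {cloud} ms, Edge: {edge} ms, "
      f"saved: {cloud - edge} ms ({(cloud - edge) / cloud:.0%} faster)")

# Daily upload volume: 100 sensors at 10 Hz, 1 KB per reading
sensors, rate_hz, reading_kb = 100, 10, 1
daily_gb = sensors * rate_hz * 86_400 * reading_kb / 1_000_000  # KB -> GB
print(f"Cloud upload: {daily_gb:.1f} GB/day, "
      f"edge (1% alerts): {daily_gb * 0.01:.3f} GB/day")
```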


33.4.4 Greengrass Components

Greengrass Core:

  • Manages local Lambda deployments
  • Handles device certificates and authentication
  • Coordinates cloud synchronization

Local Lambda Functions:

  • Run Python, Node.js, Java, or C/C++
  • Execute on edge triggers (MQTT messages, timers)
  • Access local resources (GPIO, serial ports)

ML Inference:

  • Deploy SageMaker trained models
  • Run TensorFlow, MXNet, or custom models
  • Optimize for edge hardware (ARM, GPU)

Stream Manager:

  • Buffer data during connectivity loss
  • Automatic retry with exponential backoff
  • Prioritize critical vs. bulk data
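The buffering and prioritization behavior of Stream Manager can be illustrated with a small, self-contained sketch. The `StreamBuffer` class below is hypothetical (not the Greengrass API): it holds data while offline, drains critical messages before bulk ones, and drops the oldest bulk readings when full.

```python
import collections

class StreamBuffer:
    """Minimal store-and-forward sketch: critical data is drained before
    bulk data, and everything is held while the device is offline."""

    def __init__(self, max_bulk=1000):
        self.critical = collections.deque()
        # Bounded bulk queue: oldest readings are discarded when full
        self.bulk = collections.deque(maxlen=max_bulk)

    def enqueue(self, message, critical=False):
        (self.critical if critical else self.bulk).append(message)

    def drain(self, connected):
        """Yield messages for upload, critical first; no-op while offline."""
        if not connected:
            return
        while self.critical:
            yield self.critical.popleft()
        while self.bulk:
            yield self.bulk.popleft()

buf = StreamBuffer()
buf.enqueue({"temp": 22.1})
buf.enqueue({"alert": "overheat"}, critical=True)
print(list(buf.drain(connected=True)))  # alert first, then the bulk reading
```

Real stream managers add persistence and retry with exponential backoff on top of this ordering logic.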

33.4.5 Strengths and Limitations

Strengths:

  • Seamless AWS integration
  • Offline operation
  • ML at edge
  • Device management

Limitations:

  • AWS lock-in
  • Complex deployment
  • Licensing costs

Typical Use Cases:

  • Industrial automation
  • Remote facilities
  • Predictive maintenance
  • Local decision-making

33.5 Azure IoT Edge

Description: Container-based edge computing platform.

33.5.1 Key Features

  • Docker containers as modules
  • Azure Functions at edge
  • Custom ML models
  • IoT Hub integration
  • Offline scenarios

33.5.2 Architecture Overview

Figure: Azure IoT Edge architecture showing the IoT Edge runtime with Docker container modules (custom modules, Azure Functions, Stream Analytics, ML models), IoT Hub integration, and offline capability via store-and-forward.

33.5.3 Example Deployment Manifest

{
  "modulesContent": {
    "$edgeAgent": {
      "properties.desired": {
        "modules": {
          "tempSensor": {
            "version": "1.0",
            "type": "docker",
            "status": "running",
            "restartPolicy": "always",
            "settings": {
              "image": "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0",
              "createOptions": "{}"
            }
          }
        }
      }
    }
  }
}

33.5.4 Module Types

Custom Modules:

  • Any Docker container (Python, Node.js, C#, Go)
  • Full flexibility for custom logic
  • Access to host resources

Azure Functions:

  • Serverless functions at edge
  • Event-driven processing
  • Easy deployment from Azure portal

Azure Stream Analytics:

  • Real-time data processing
  • SQL-like query language
  • Windowing and aggregation
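In Stream Analytics you would express windowing in its SQL-like language; the Python sketch below shows the equivalent tumbling-window average to make the concept concrete (the function name and data layout are illustrative, not an Azure API):

```python
from collections import defaultdict

def tumbling_avg(readings, window_s=60):
    """Group (timestamp_s, value) readings into fixed, non-overlapping
    windows and average each window, like a TumblingWindow aggregate."""
    windows = defaultdict(list)
    for ts, value in readings:
        # Align each timestamp to the start of its window
        windows[ts // window_s * window_s].append(value)
    return {start: sum(v) / len(v) for start, v in sorted(windows.items())}

readings = [(0, 20.0), (30, 22.0), (65, 30.0), (90, 32.0)]
print(tumbling_avg(readings))  # {0: 21.0, 60: 31.0}
```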

Machine Learning:

  • ONNX model deployment
  • Azure Custom Vision models
  • Real-time inference

33.5.5 Strengths and Limitations

Strengths:

  • Container-based (flexible)
  • Any language support
  • Strong offline capabilities
  • Azure ecosystem

Limitations:

  • Requires understanding of containers
  • Azure dependency
  • Resource requirements

Typical Use Cases:

  • Retail edge analytics
  • Manufacturing quality control
  • Video analytics
  • Hybrid cloud scenarios

33.6 EdgeX Foundry

Description: Open-source, vendor-neutral edge framework (Linux Foundation).

33.6.1 Key Features

  • Microservices architecture
  • Protocol abstraction
  • North/south bound connectors
  • Rule engine
  • Container deployment (Docker/Kubernetes)

33.6.2 Architecture Layers

Figure: EdgeX Foundry architecture layers showing device services (south bound) with protocol connectors, core services (data, metadata, command, registry), and application services (north bound) for cloud export and rules processing.

33.6.3 Service Components

Device Services (South Bound):

  • Protocol-specific connectors
  • Modbus, BLE, MQTT, OPC UA, GPIO
  • Abstract device communication

Core Services:

  • Core Data: Event/reading storage
  • Core Metadata: Device registry
  • Core Command: Device control
  • Registry: Service discovery

Application Services (North Bound):

  • Export to cloud platforms
  • Rules engine processing
  • Data transformation

33.6.4 Deployment Example

# Deploy EdgeX with Docker Compose
git clone https://github.com/edgexfoundry/edgex-compose.git
cd edgex-compose

# Start core services
docker-compose -f docker-compose-no-secty.yml up -d

# Add Modbus device service
docker-compose -f docker-compose-no-secty.yml \
  -f docker-compose-device-modbus.yml up -d

33.6.5 Strengths and Limitations

Strengths:

  • Vendor-neutral
  • Extensive protocol support
  • Microservices flexibility
  • Commercial support available

Limitations:

  • Complex architecture
  • Steep learning curve
  • Resource intensive

Typical Use Cases:

  • Industrial edge computing
  • Building automation
  • Protocol aggregation
  • Multi-vendor environments

33.7 Edge Platform Comparison

| Feature | AWS Greengrass | Azure IoT Edge | EdgeX Foundry |
|---|---|---|---|
| Vendor | Amazon | Microsoft | Linux Foundation |
| Architecture | Lambda functions | Docker containers | Microservices |
| Cloud Integration | AWS only | Azure only | Any (agnostic) |
| ML Support | SageMaker models | ONNX, Custom Vision | External |
| Protocol Support | Limited | Moderate | Extensive |
| Offline Mode | Yes | Yes | Yes |
| License | Commercial | Commercial | Apache 2.0 |
| Best For | AWS shops | Azure shops | Multi-vendor |

33.8 Worked Example: Choosing an Edge Platform for a Cold Chain Monitoring System

Scenario: A seafood distributor ships 200 refrigerated containers daily across three states. Each container has 4 temperature sensors (door, floor, ceiling, product core) and a GPS tracker. Regulatory compliance requires continuous temperature logging with 15-minute resolution and immediate alerts if temperature exceeds -18 °C for more than 30 minutes. Cellular connectivity is intermittent in rural transit corridors (40% of the route has no signal).

Requirements Analysis:

| Requirement | Implication |
|---|---|
| 200 containers x 4 sensors = 800 data points per reading | Moderate write volume |
| 15-minute logging for compliance | Must store locally during connectivity gaps |
| Alert within 30 minutes of threshold breach | Cannot depend on cloud for alerting |
| 40% route with no connectivity | Edge must operate autonomously for hours |
| Regulatory audit trail | Tamper-evident logs with timestamps |
| Existing fleet management in Azure | Integration preference |

Platform Evaluation:

| Criterion | AWS Greengrass | Azure IoT Edge | EdgeX Foundry |
|---|---|---|---|
| Offline alerting | Lambda functions run locally | Modules run locally | Rules engine runs locally |
| Cloud integration | AWS IoT Core | Azure IoT Hub (existing) | Any (agnostic) |
| Container support | Limited | Docker-native | Docker-native |
| ML inference | SageMaker models | ONNX models | External |
| Cellular optimization | Stream Manager buffers | Store-and-forward | Custom |
| Fleet-wide OTA | Yes | Yes (layered deployments) | Manual |
| Team expertise | None | Some (existing Azure) | None |

Decision: Azure IoT Edge wins because the distributor already uses Azure for fleet management, and the Docker-based module architecture lets them deploy a custom alert module alongside the built-in Store and Forward module for connectivity gaps.

Implementation Architecture:

  • Edge device: Raspberry Pi 4 per container ($75)
  • Edge modules: Temperature monitor (custom Python container), GPS tracker, Store-and-Forward (built-in), Stream Analytics (SQL-like filtering)
  • Local rule: IF any_sensor > -18C FOR 30min THEN trigger_local_alarm AND queue_cloud_alert
  • Offline behavior: Store up to 72 hours of readings locally (128MB SD card partition), sync to Azure IoT Hub when cellular reconnects
  • Cloud: Azure IoT Hub receives telemetry, Time Series Insights for compliance dashboards
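The local alert rule above (any sensor above -18 °C for 30 consecutive minutes) can be sketched as a small stateful check. The `ThresholdAlarm` class is hypothetical, written as the core of a custom Python module, not Azure API code:

```python
class ThresholdAlarm:
    """Alarm when any sensor stays above the threshold for a full
    hold period (names and structure are illustrative)."""

    def __init__(self, threshold_c=-18.0, hold_s=30 * 60):
        self.threshold_c = threshold_c
        self.hold_s = hold_s
        self.breach_start = None  # time the current excursion began

    def update(self, now_s, sensor_values_c):
        """Feed one reading cycle; return True when the alarm should fire."""
        if max(sensor_values_c) > self.threshold_c:
            if self.breach_start is None:
                self.breach_start = now_s  # excursion just started
            return (now_s - self.breach_start) >= self.hold_s
        self.breach_start = None  # back in range: reset the timer
        return False

alarm = ThresholdAlarm()
print(alarm.update(0, [-20.0, -19.5]))     # False: all sensors cold enough
print(alarm.update(0, [-20.0, -17.0]))     # False: breach just started
print(alarm.update(1800, [-20.0, -17.0]))  # True: 30 min above threshold
```

Note the timer resets when temperature recovers, so brief door-open spikes do not trigger a compliance alert.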

Cost per container (monthly): Edge hardware $75 (one-time, amortized $2.08/mo over 3 years) + cellular data $8 + Azure IoT Hub $0.50 + storage $0.30 = $10.88/month

Result after 6 months: Zero regulatory violations (previously 12/year at $5,000 each = $60,000 saved). 3 spoilage events caught by local alerts during rural transit that cloud-only architecture would have missed entirely, saving $45,000 in product loss.

33.9 Knowledge Check

Common Pitfalls

Sending raw sensor readings at maximum frequency consumes unnecessary bandwidth and storage, and can cause broker backpressure that drops messages from other devices. Apply edge-side dead-band filtering and send only meaningful updates; reserve full-rate data for local debug sessions.
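Edge-side dead-band filtering is a few lines of code. The sketch below (function name and band width are illustrative) forwards a reading only when it differs from the last transmitted value by more than the dead band:

```python
def deadband_filter(readings, band=0.5):
    """Forward a reading only when it moves more than `band` away
    from the last value actually sent."""
    sent, last = [], None
    for value in readings:
        if last is None or abs(value - last) > band:
            sent.append(value)
            last = value  # compare future readings against what was sent
    return sent

raw = [20.0, 20.1, 20.2, 20.9, 21.0, 25.0]
print(deadband_filter(raw))  # [20.0, 20.9, 25.0] -- noise suppressed
```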

Publishing all sensor types to one flat topic prevents selective subscriptions, makes stream processing complex, and inhibits per-sensor access control. Use a structured topic hierarchy (e.g. iot/{deviceId}/{sensorType}) that enables selective consumption and per-topic ACL policies.

When multiple clients update the device shadow simultaneously without conflict resolution, the device receives contradictory desired-state updates and enters an oscillating state. Implement optimistic locking with version numbers on shadow updates and define a clear precedence order for conflicting state sources.

33.10 Summary

  • AWS IoT Greengrass extends AWS Lambda to edge devices with local MQTT messaging, ML inference using SageMaker models, and automatic synchronization with AWS IoT Core when connectivity returns
  • Azure IoT Edge uses Docker containers as modules, enabling any language or framework at the edge with strong integration into Azure Functions, Stream Analytics, and machine learning services
  • EdgeX Foundry provides vendor-neutral edge computing with extensive protocol support (Modbus, BLE, OPC UA) and microservices architecture under Apache 2.0 license
  • Edge computing benefits include sub-100ms latency for real-time control, operation during connectivity loss, bandwidth optimization by processing locally, and privacy protection by keeping sensitive data on-premises
  • Hybrid architectures combine edge platforms for immediate actions with cloud platforms for long-term analytics, coordination, and model training
  • Platform selection depends on existing cloud investments (AWS vs. Azure), vendor neutrality requirements, protocol needs, and available technical expertise


33.11 How It Works

Edge computing platforms extend cloud capabilities to local devices through three core mechanisms:

Local Lambda/Function Execution (AWS Greengrass, Azure IoT Edge):

  1. Developer writes Lambda function or containerized module in cloud IDE
  2. Platform packages function with runtime dependencies
  3. Edge device downloads function bundle, unpacks to local storage
  4. Local runtime executes functions on device triggers (MQTT message, timer, sensor event)
  5. Functions access local resources (GPIO, serial ports, databases) without cloud latency

Intelligent Sync and Offline Operation:

  1. Edge device maintains local MQTT broker and data buffer
  2. When connected, syncs telemetry to cloud and receives function updates
  3. When disconnected, continues processing locally using last-known functions
  4. On reconnection, replays buffered data to cloud (with deduplication)
  5. Stream manager prioritizes critical data over bulk data during limited bandwidth
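The deduplicated replay in step 4 can be reduced to filtering buffered messages against the set of IDs the cloud already acknowledged. The sketch below is illustrative (field names are assumptions, not a platform API):

```python
def replay(buffered, already_acked):
    """After reconnect, re-send only messages whose id the cloud has
    not yet acknowledged (deduplication sketch)."""
    return [m for m in buffered if m["id"] not in already_acked]

buffered = [{"id": 1, "t": 20.1}, {"id": 2, "t": 20.4}, {"id": 3, "t": 21.0}]
acked = {1, 2}  # ids confirmed by the cloud before the link dropped
print(replay(buffered, acked))  # only id 3 is re-sent
```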

Edge Machine Learning Inference:

  1. Cloud trains ML model (TensorFlow, PyTorch) on historical data
  2. Model is optimized for edge hardware (quantization, pruning)
  3. Edge device downloads model artifact (ONNX, TensorFlow Lite)
  4. Local inference engine runs predictions on sensor data in <100ms
  5. Results used for immediate control decisions (reject defective product, trigger alert)
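Step 2's quantization is worth seeing concretely. The sketch below performs a simplified affine int8 quantization of one weight tensor, the kind of size/latency optimization applied before shipping a model to edge hardware (this is a toy illustration, not the TensorFlow Lite or ONNX converter):

```python
def quantize_int8(weights):
    """Map float weights onto 8-bit codes via an affine scale/zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant tensor
    zero_point = round(-lo / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the 8-bit codes."""
    return [(v - zero_point) * scale for v in q]

w = [-1.0, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(w)
print(q)                    # 8-bit codes: 4x smaller than float32
print(dequantize(q, s, z))  # close to the original weights
```

Each weight now occupies one byte instead of four, at the cost of a small, bounded reconstruction error.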

The power of edge platforms comes from balancing cloud intelligence (model training, fleet coordination) with edge autonomy (local decisions, offline operation).

33.12 Concept Relationships

Understanding edge computing platforms connects to several IoT architectural concepts:

  • Cloud IoT Platforms provide the cloud half - edge platforms like AWS Greengrass and Azure IoT Edge are extensions of cloud platforms (AWS IoT Core, Azure IoT Hub), not replacements; hybrid architectures use both
  • Edge Fog Computing defines architectural layers - edge platforms implement the “edge tier” (device-adjacent processing) between devices and cloud, enabling fog computing patterns
  • Application Frameworks can run on edge - Node-RED and Home Assistant deploy to edge devices running on Greengrass or IoT Edge, combining visual programming with edge autonomy
  • Device Management handles edge deployments - updating edge functions and ML models requires OTA capabilities and staged rollouts just like firmware updates
  • Edge AI and TinyML enables intelligent edge - ML models deployed via edge platforms perform inference locally, critical for real-time computer vision and predictive maintenance

Edge platforms shift the cloud-edge boundary based on application needs - latency-critical decisions move to edge, long-term analytics stay in cloud.

33.13 See Also

In 60 Seconds

Edge platforms run processing next to the sensors: AWS IoT Greengrass brings Lambda functions and SageMaker models to edge devices, Azure IoT Edge deploys Docker modules managed by IoT Hub, and EdgeX Foundry offers a vendor-neutral, microservices-based alternative. Use the edge for sub-100ms decisions, offline operation, and bandwidth savings; use the cloud for long-term analytics and fleet coordination.

33.14 What’s Next

| If you want to… | Read this |
|---|---|
| Learn about data visualisation for connected devices | Data Visualisation Dashboards |
| Understand security for cloud-connected prototypes | Privacy and Compliance |
| Explore full-stack IoT architecture patterns | Application Domains Overview |