46  CoAP Observe Extension: Server Push

Tags: coap, protocols, observe, server-push
In 60 Seconds

The CoAP Observe extension (RFC 7641) enables server-push notifications by letting a client register interest in a resource once, then automatically receiving updates whenever the value changes. This eliminates polling overhead – instead of 8,640 GET requests per day for 10-second updates, the server pushes only on change, reducing traffic by 90%+ for slowly-changing sensor data.

46.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Implement Server Push: Configure and build CoAP Observe for real-time notifications using RFC 7641
  • Design Observer Registries: Construct server-side data structures to manage registration, notifications, and deregistration lifecycle
  • Calculate Bandwidth Savings: Apply formulas to quantify Observe benefits over traditional polling for specific IoT scenarios
  • Diagnose Edge Cases: Identify and resolve NAT timeout, token reuse, and ghost observer problems in deployed systems
  • Evaluate Protocol Trade-offs: Justify selecting Observe versus polling versus MQTT based on resource change rate and energy constraints
  • Apply Rate Limiting: Configure change-threshold and interval parameters to prevent notification floods on rapidly-changing resources

46.2 Prerequisites

Before diving into this chapter, you should be familiar with:

46.3 The Observe Extension (RFC 7641)

Minimum Viable Understanding: CoAP Observe

Core Concept: Observe transforms CoAP from pure request-response into a publish-subscribe pattern. A client registers interest in a resource once, then receives automatic notifications whenever the resource changes - no repeated polling needed.

Why It Matters: Polling wastes energy and bandwidth. If you need temperature updates every 10 seconds, polling requires 8,640 GET requests/day. With Observe, the server pushes only when values change, potentially reducing traffic by 90%+ for slowly-changing resources.

Key Takeaway: Register with Observe: 0 in your GET request, receive notifications with incrementing sequence numbers, and deregister with Observe: 1 or RST when done.

Meet our friends: Sammy the Sensor, Lila the Light, and Max the Microcontroller!

Sammy says: “Imagine you want to know the temperature in your room. You could keep asking me every minute - ‘What’s the temperature? What’s the temperature?’ - but that’s SO tiring for both of us!”

Lila explains: “With CoAP Observe, it’s like subscribing to a YouTube channel! You subscribe ONCE, and then you automatically get notified whenever there’s a new video - or in our case, a new temperature reading!”

Real-world example: Think about getting text messages from your favorite pizza place. You don’t call them every 5 minutes asking “Is my pizza ready?” - that would be annoying! Instead, they TEXT YOU when it’s ready. That’s exactly what Observe does for IoT devices!

Max’s tip: “Observe is like a magic subscription service:
  1. Subscribe once (I want to know about temperature)
  2. Relax while the sensor does its job
  3. Get notified only when something changes
  4. Unsubscribe when you’re done - no more messages!”

Why it’s awesome: Less talking = more battery life! Your smart devices can last much longer because they’re not constantly asking “Any updates? Any updates?” - they just wait patiently for news!

What is polling? Polling is like repeatedly asking “Are we there yet?” on a road trip. You keep asking the same question over and over, even if the answer hasn’t changed.

What is Observe? Observe is like asking your parent to tell you when you arrive. You ask once, then relax - they’ll let you know when something changes.

Why does this matter for IoT?

  • Battery life: Every time a device sends a message, it uses power. Fewer messages = longer battery life.
  • Network traffic: If 1,000 sensors all poll every second, that’s 1,000 messages per second! With Observe, you might only send 10 messages when values actually change.
  • Speed: With polling, you might not know about changes for up to your polling interval. With Observe, you know immediately.

Simple rule: Use Observe when you want real-time updates without wasting energy.

46.3.1 Traditional Polling vs. Observe

Figure 46.1: Sequence diagram comparing CoAP polling (repeated GET requests) versus Observe pattern (single registration with automatic push notifications), showing reduced message exchange with Observe

Traffic comparison for 24 hours of temperature monitoring:

| Approach | Messages | Bandwidth | Battery Impact |
|---|---|---|---|
| Polling (every 60 s) | 1,440 requests + 1,440 responses | ~62 KB | High |
| Observe (10 changes/hour) | 1 registration + 240 notifications | ~6.6 KB | 89% less |

We can quantify the exact bandwidth savings. For polling every 60 seconds over 24 hours:

\[\text{Poll Messages} = \frac{24 \times 3{,}600 \text{ s}}{60 \text{ s}} = 1{,}440 \text{ requests}\]

With CoAP request (16 bytes) + response (28 bytes) = 44 bytes per exchange:

\[\text{Poll Bandwidth} = 1{,}440 \times 44 = 63{,}360 \text{ bytes} \approx 61.9 \text{ KB}\]

For Observe with 10 changes/hour over 24 hours:

\[\text{Observe Messages} = 1 \text{ (registration)} + (10 \times 24) = 241 \text{ total}\]

\[\text{Observe Bandwidth} = 44 + (240 \times 28) = 6{,}764 \text{ bytes} \approx 6.6 \text{ KB}\]

The bandwidth reduction is:

\[\text{Savings} = \frac{61.9 - 6.6}{61.9} \approx 0.893 = 89.3\% \text{ reduction}\]
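
The savings arithmetic above generalizes to a small helper for comparing arbitrary scenarios. A minimal sketch (function names are illustrative; the 16/28-byte message sizes are the figures used above):

```python
def polling_bandwidth(hours, poll_interval_s, request_bytes=16, response_bytes=28):
    """Total polling traffic: one request/response exchange per interval."""
    exchanges = hours * 3600 // poll_interval_s
    return exchanges * (request_bytes + response_bytes)

def observe_bandwidth(hours, changes_per_hour, request_bytes=16, response_bytes=28):
    """One registration exchange, then one notification per resource change."""
    notifications = changes_per_hour * hours
    return (request_bytes + response_bytes) + notifications * response_bytes

poll = polling_bandwidth(24, 60)     # 1,440 exchanges -> 63,360 bytes
obs = observe_bandwidth(24, 10)      # 1 registration + 240 notifications -> 6,764 bytes
print(f"{(poll - obs) / poll:.1%} reduction")  # 89.3% reduction
```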

46.3.2 Observer Architecture Overview

Figure 46.2: Architecture diagram showing multiple CoAP clients observing a single server resource, with the server maintaining an observer registry and automatically pushing notifications to all registered observers when the resource value changes

This architecture shows how multiple clients can observe a single resource. When the sensor value changes, the server automatically notifies all registered observers.


46.4 Observe Protocol Flow

46.4.1 Registration

Client sends GET with Observe: 0 to register:

Client -> Server: GET coap://sensor.local/temperature
                  Token: 0xAB12
                  Observe: 0  (register)
                  Accept: text/plain

Server -> Client: 2.05 Content
                  Token: 0xAB12
                  Observe: 1  (sequence number)
                  Max-Age: 60
                  Payload: "23.5"

46.4.2 Notifications

Server pushes updates when the resource changes, reusing the client’s token and incrementing the sequence number:

Server -> Client: 2.05 Content
                  Token: 0xAB12
                  Observe: 2  (incremented sequence number)
                  Max-Age: 60
                  Payload: "24.1"

Figure 46.3: State diagram showing CoAP Observe lifecycle with three states: unregistered, registered, and terminated. Shows transitions including registration (GET with Observe:0), notifications with incrementing sequence numbers, and deregistration paths (explicit GET with Observe:1, RST response, or timeout)

46.4.3 Deregistration

Three ways to stop receiving notifications:

1. Explicit deregistration (GET with Observe: 1):

Client -> Server: GET /temperature
                  Token: 0xAB12
                  Observe: 1  (deregister)

2. RST response to unwanted notification:

Server -> Client: NON 2.05 Content (notification)
Client -> Server: RST (I don't want this anymore)

3. Timeout (Max-Age expiration):

If client doesn't refresh observation within Max-Age,
server removes observer from list.

46.5 Observer Management Implementation

46.5.1 Server-Side Observer Registry

from collections import defaultdict
from dataclasses import dataclass
import time

@dataclass
class Observer:
    """Per-client observation state."""
    client_addr: tuple
    token: bytes
    registered_at: float
    last_notification: float
    timeout: float

class ObserverRegistry:
    def __init__(self):
        # Map: resource_uri -> list of Observer objects
        self.observers = defaultdict(list)
        self.observer_timeout = 86400  # 24 hours default

    def register_observer(self, resource_uri, client_addr, token, max_age=None):
        observer = Observer(
            client_addr=client_addr,
            token=token,
            registered_at=time.time(),
            last_notification=time.time(),
            timeout=max_age or self.observer_timeout
        )
        self.observers[resource_uri].append(observer)
        return observer

    def notify_all(self, resource_uri, value, content_format):
        """Send notification to all observers of a resource"""
        expired = []

        for observer in self.observers[resource_uri]:
            # Check if observation expired
            if time.time() - observer.registered_at > observer.timeout:
                expired.append(observer)
                continue

            # Send notification (transport-specific; implementation not shown)
            self.send_notification(observer, value, content_format)

        # Clean up expired observers
        for observer in expired:
            self.observers[resource_uri].remove(observer)

    def remove_observer(self, resource_uri, client_addr, token):
        """Remove observer on explicit deregistration or RST received"""
        self.observers[resource_uri] = [
            o for o in self.observers[resource_uri]
            if not (o.client_addr == client_addr and o.token == token)
        ]


46.5.2 Notification Rate Limiting

Prevent notification floods from insignificant fluctuations while bounding staleness on rapidly-changing resources:

class RateLimitedResource:
    def __init__(self, min_interval=1.0, change_threshold=0.5):
        self.min_interval = min_interval    # Max silent period: always notify after this long
        self.change_threshold = change_threshold  # Ignore changes smaller than this
        self.observers = []                 # Registered Observer objects
        self.last_notify_time = {}          # Per-observer last notification time
        self.last_notified_value = {}       # Per-observer last sent value

    def on_value_change(self, new_value):
        now = time.time()

        for observer in self.observers:
            last_time = self.last_notify_time.get(observer.token, 0)
            last_value = self.last_notified_value.get(observer.token, None)

            # Check if we should notify
            should_notify = (
                last_value is None or
                abs(new_value - last_value) >= self.change_threshold or
                (now - last_time) >= self.min_interval
            )

            if should_notify:
                self.send_notification(observer, new_value)
                self.last_notify_time[observer.token] = now
                self.last_notified_value[observer.token] = new_value

The rate limiting logic above combines two thresholds to prevent notification storms. For a sensor with value \(v(t)\) at time \(t\), a notification is sent when:

\[\text{notify} = |\Delta v| \geq \theta_v \text{ OR } \Delta t \geq \theta_t\]

where \(\Delta v = v_{\text{new}} - v_{\text{last}}\) is the value change and \(\Delta t = t_{\text{now}} - t_{\text{last}}\) is time since last notification.

For example, a temperature sensor monitoring a boiler room:

  • Change threshold: \(\theta_v = 0.5°\text{C}\)
  • Time threshold: \(\theta_t = 60\text{s}\)

If temperature jumps from 80°C to 82°C in 10 seconds: \[|\Delta v| = |82 - 80| = 2°\text{C} \geq 0.5°\text{C} \Rightarrow \text{notify immediately}\]

If temperature drifts slowly from 80.0°C to 80.3°C over 65 seconds: \[|\Delta v| = 0.3°\text{C} < 0.5°\text{C} \text{ BUT } \Delta t = 65\text{s} \geq 60\text{s} \Rightarrow \text{notify}\]

This dual-threshold approach prevents both change-based flooding (rapid fluctuations) and staleness (no updates for too long).
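
The boiler-room cases can be checked directly. A minimal sketch of the dual-threshold predicate, with the example’s thresholds as defaults (the function name is illustrative):

```python
def should_notify(delta_v, delta_t, theta_v=0.5, theta_t=60.0):
    """Notify on a significant value change OR when the staleness bound is hit."""
    return abs(delta_v) >= theta_v or delta_t >= theta_t

print(should_notify(2.0, 10.0))   # True: a 2.0 degC jump exceeds theta_v
print(should_notify(0.3, 65.0))   # True: small drift, but 65 s >= theta_t
print(should_notify(0.3, 30.0))   # False: insignificant change, recently notified
```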


46.6 Deep Dive: Observe Internals

The Observe option value is a sequence number that helps clients detect:

  1. Out-of-order notifications (UDP doesn’t guarantee ordering)
  2. Notification freshness (which update is newer)

Sequence Number Rules (RFC 7641 Section 4.4):

def is_notification_fresh(current_seq, new_seq):
    """
    Determine if new notification is fresher than current.
    Handles 24-bit wraparound.
    """
    # Sequence numbers are 24-bit (0 to 16,777,215)
    MAX_SEQ = (1 << 24) - 1

    # Calculate difference handling wraparound
    diff = (new_seq - current_seq) % (MAX_SEQ + 1)

    # If diff < 2^23, new is fresher (forward direction)
    # If diff >= 2^23, new is older (backward direction - out of order)
    return diff < (1 << 23)

Example scenario:

Notification 1: Observe=100, temp=22.5
Notification 2: Observe=102, temp=23.0  (arrived out of order)
Notification 3: Observe=101, temp=22.8

Client receives: 100 -> 102 -> 101
Client should display: 22.5 -> 23.0 (ignore 101, it's older than 102)
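
The scenario above, plus the wraparound case, can be verified by exercising the freshness check (the function from the previous listing is repeated so this snippet runs standalone):

```python
def is_notification_fresh(current_seq, new_seq):
    """24-bit sequence comparison: new is fresher if the forward distance is < 2^23."""
    MAX_SEQ = (1 << 24) - 1
    diff = (new_seq - current_seq) % (MAX_SEQ + 1)
    return diff < (1 << 23)

print(is_notification_fresh(100, 102))       # True: 102 is newer, accept
print(is_notification_fresh(102, 101))       # False: 101 arrived late, discard
print(is_notification_fresh(16_777_210, 3))  # True: sequence wrapped past 2^24 - 1
```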


Automatic observer removal triggers:

  1. RST received: Client sends RST in response to notification
  2. Timeout: No activity within observation lifetime
  3. CON notification fails: After 4 retransmissions without ACK
  4. Resource deleted: Server removes all observers when resource gone

Retransmission behavior for CON notifications:

Server sends CON notification
Wait 2 seconds for ACK
Retransmit with same Message ID
Wait 4 seconds (exponential backoff)
Retransmit
Wait 8 seconds
Retransmit
Wait 16 seconds
Final attempt
After 4 failures -> Remove observer
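
The timeline above can be computed directly. A sketch ignoring CoAP’s ACK_RANDOM_FACTOR jitter (the helper name is illustrative):

```python
def retransmission_waits(initial_timeout=2.0, attempts=4):
    """Exponential-backoff wait times for a CON notification."""
    return [initial_timeout * (2 ** i) for i in range(attempts)]

waits = retransmission_waits()
print(waits)        # [2.0, 4.0, 8.0, 16.0]
print(sum(waits))   # 30.0 seconds until the observer is declared unreachable
```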
With the default timing parameters, the four waits sum to 2 + 4 + 8 + 16 = 30 seconds, so detecting an unreachable client takes about half a minute.

46.7 Edge Cases and Gotchas

46.7.1 Token Reuse After Client Restart

Problem:

- Client registers observation with Token=0x42
- Client crashes and restarts
- Server sends notification with Token=0x42
- Client doesn't recognize token (state lost) -> sends RST
- Server removes observer

Solutions:

  1. Server MUST remove observer when RST received
  2. Client should re-register observations after restart
  3. Consider persisting observation state to flash

46.7.2 NAT Timeout Issue

Problem:

UDP NAT mappings expire (typically 30-60 seconds)
- Client behind NAT registers observation
- Server tries to push notification 5 minutes later
- NAT mapping expired -> notification never reaches client

Solutions:

  1. Server sends periodic keep-alive NON notifications (every 30 sec)
  2. Client sends periodic re-registration (GET with Observe=0)
  3. Use Max-Age option to set notification frequency
  4. Consider CoAP over TCP for NAT-hostile networks
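
Solutions 1 and 2 both require sizing a keep-alive interval against the NAT timeout. A sketch of the arithmetic; the two-thirds safety margin and 18-byte message size are assumptions, not values from any RFC:

```python
def keepalive_plan(nat_timeout_s, observers, msg_bytes=18, margin=2 / 3):
    """Choose a keep-alive interval safely inside the NAT timeout; return daily cost."""
    interval = nat_timeout_s * margin
    messages_per_day = observers * 86_400 / interval
    return interval, messages_per_day * msg_bytes  # (seconds, bytes/day)

interval, cost = keepalive_plan(45, 1_000)
print(f"every {interval:.0f} s -> {cost / 1e6:.1f} MB/day for the whole fleet")
```

At roughly 52 MB/day for 1,000 observers, keep-alive overhead is substantial; numbers like this are what make CoAP over TCP attractive on NAT-hostile networks.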


Pitfall: Mismanaging Observe Tokens Across Client Restarts

The Mistake: Clients generate new random tokens after reboot without deregistering previous observations, causing “ghost subscriptions” where the server continues sending notifications to tokens the client no longer recognizes.

Why It Happens: The Observe pattern uses tokens to match notifications to subscriptions. When a client reboots, it loses its token-to-subscription mapping but the server still has the observer registered.

The Fix: Implement proper token lifecycle management:

  1. Persist tokens across reboots: Store active observation tokens in EEPROM/Flash
  2. Use deterministic token generation: Generate from device ID + resource URI hash
  3. Handle orphaned notifications gracefully: When receiving notification with unknown token, send RST
  4. Server-side timeout: Configure observer timeout (Max-Age option)
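
Fix 2 (deterministic token generation) can be sketched with a hash, so a restarted client re-derives the same token instead of orphaning its old observation. The derivation below is illustrative, not mandated by RFC 7641; CoAP tokens may be 0-8 bytes:

```python
import hashlib

def observation_token(device_id: str, resource_uri: str) -> bytes:
    """Derive a stable 8-byte CoAP token from device identity + resource URI."""
    digest = hashlib.sha256(f"{device_id}|{resource_uri}".encode()).digest()
    return digest[:8]

before = observation_token("esp32-0042", "coap://sensor.local/temperature")
after_reboot = observation_token("esp32-0042", "coap://sensor.local/temperature")
print(before.hex(), before == after_reboot)  # identical token across reboots
```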

46.8 Bandwidth Savings Calculation

Example: Temperature sensor with 100 observers

Polling approach (GET every 10 seconds):

Per request:
- Request: 14 bytes (header + token + Uri-Path)
- Response: 16 bytes (header + token + payload)
TOTAL: 30 bytes x 100 clients x 6/min x 60 min = 1.08 MB/hour

Observe approach (notify on change, avg 6 changes/hour):

Per notification:
- CoAP header: 4 bytes
- Token: 2 bytes
- Observe option: 3 bytes
- Content-Format: 2 bytes
- Payload marker: 1 byte
- Payload: 6 bytes ("22.5")
TOTAL: 18 bytes x 100 clients x 6 changes = 10.8 KB/hour

Savings: 99% bandwidth reduction (1.08 MB vs 10.8 KB)


Scenario: A manufacturing plant monitors vibration levels on 100 motors using ESP32 sensors with accelerometers. Maintenance dashboard needs real-time updates when vibration exceeds thresholds (normal: <2.5 mm/s, warning: 2.5-4.5 mm/s, critical: >4.5 mm/s).

Comparing polling vs CoAP Observe:

Option A: Polling (GET every 5 seconds):

Client polls: GET coap://motor42.local/vibration every 5 seconds
  Request: 16 bytes (CoAP header + token + URI)
  Response: 20 bytes (CoAP header + token + payload "2.3")

Per sensor traffic (24 hours):
  17,280 requests × 16 bytes = 276,480 bytes
  17,280 responses × 20 bytes = 345,600 bytes
  Total: 622,080 bytes/day per sensor

Fleet traffic: 622,080 × 100 = 62.2 MB/day
Energy per sensor: 17,280 × 3.0 mJ (CON request-response) = 51.8 J/day
Battery life (18650, 3.7V, 3,000 mAh): 40 kJ ÷ 51.8 J/day = ~772 days
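
The Option A figures can be reproduced mechanically. A sketch using the scenario’s values (function and parameter names are illustrative):

```python
def polling_cost(interval_s=5, req_b=16, resp_b=20, sensors=100,
                 mj_per_exchange=3.0, battery_kj=40.0):
    """Daily polling traffic and energy, plus per-sensor battery life."""
    exchanges = 86_400 // interval_s                # 17,280 per day
    sensor_bytes = exchanges * (req_b + resp_b)     # 622,080 bytes/day
    fleet_mb = sensor_bytes * sensors / 1e6         # 62.2 MB/day
    energy_j = exchanges * mj_per_exchange / 1000   # 51.8 J/day
    battery_days = battery_kj * 1000 / energy_j     # ~772 days
    return exchanges, sensor_bytes, fleet_mb, energy_j, battery_days

print(polling_cost())
```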

Option B: CoAP Observe (notify on change, max 1/minute):

Client registers: GET /vibration, Observe: 0 (once at startup)
  Registration: 18 bytes (request) + 22 bytes (response with Observe: 1)

Server notifies only when vibration crosses thresholds:
  Typical motor: 3 threshold crossings/day (normal ↔ warning ↔ critical)
  Notifications: 3 × 20 bytes = 60 bytes/day
  Plus the 60 s interval bound: interval-triggered updates arrive at most once per minute (up to 1,440/day if vibrating continuously)

Per sensor traffic (stable operation, 3 changes/day):
  Registration: 40 bytes (one-time)
  Notifications: 60 bytes/day
  Total: ~100 bytes/day

Fleet traffic: 100 × 100 sensors = 10 KB/day (vs 62.2 MB with polling)
Energy (radio TX only): 3 notifications × 1.5 mJ = 4.5 mJ/day
Realistic daily budget including MCU sleep current and keep-alives: ~1.7 J/day (representative assumption)
Battery life: 40 kJ ÷ 1.7 J/day ≈ 24,000 days (in practice limited by battery shelf life)

Bandwidth savings: (62.2 MB - 0.01 MB) / 62.2 MB = 99.98%
Battery life improvement: 24,000 / 772 = 31× longer

Implementation with rate limiting:

class RateLimitedVibrationResource:
    def __init__(self):
        self.min_notify_interval = 60   # seconds: interval-triggered notify at most once/minute
        self.threshold_change = 0.5     # mm/s (notify immediately if change >= 0.5)
        self.observers = []             # Registered Observer objects
        self.last_notify_time = {}
        self.last_notified_value = {}

    async def notify_observers(self, new_value):
        now = time.time()
        for observer in self.observers:
            last_time = self.last_notify_time.get(observer.token, 0)
            last_value = self.last_notified_value.get(observer.token, None)

            # Notify if: threshold crossed OR min interval passed
            threshold_crossed = (
                last_value is not None and
                abs(new_value - last_value) >= self.threshold_change
            )
            interval_passed = (now - last_time) >= self.min_notify_interval

            if threshold_crossed or interval_passed:
                await self.send_notification(observer, new_value)
                self.last_notify_time[observer.token] = now
                self.last_notified_value[observer.token] = new_value

Decision: CoAP Observe

Reasoning:

  • 99.98% bandwidth reduction (critical for cellular backhaul)
  • 31× battery life extension (2 years → 65 years, limited by battery shelf life)
  • Real-time alerts (no polling delay)
  • Server-side rate limiting prevents notification floods

Use the following decision table and decision tree to decide whether Observe is appropriate for your application:

| Requirement | Use Observe | Use Polling | Use Neither (MQTT) |
|---|---|---|---|
| Update frequency | Event-driven, infrequent changes | Periodic, consistent intervals | Continuous streaming |
| Number of observers | 1-10 per resource | 1-3 per resource | 10+ observers (broker scales better) |
| Change rate | <10% of polling rate | >50% of polling rate | Constant changes |
| Battery constraints | Critical (coin cell, multi-year) | Moderate (rechargeable) | Mains-powered |
| Network reliability | Stable (LAN, Wi-Fi) | Lossy (cellular) | Stable with fallback |
| Observer lifecycle | Long-lived (hours to days) | Short-lived (seconds to minutes) | Permanent subscriptions |

Decision tree:

  1. Does the resource change less often than you would poll it?
     • No → Use polling (Observe overhead not justified)
     • Yes → Continue

  2. Do you need notifications from more than 10 resources per client?
     • Yes → Consider MQTT (broker-based pub-sub scales better)
     • No → Continue

  3. Are clients behind NAT/firewall without port forwarding?
     • Yes → Problem: the server can’t push to the client. Options:
       – CoAP over TCP (RFC 8323) for NAT traversal
       – Long polling instead of Observe
       – MQTT instead
     • No → Continue

  4. Is battery life critical (>1 year target on coin cell)?
     • Yes → Use CoAP Observe (eliminates polling overhead)
     • No → Polling is acceptable, but Observe is still beneficial

Example decisions:

| Application | Observe? | Reasoning |
|---|---|---|
| Temperature sensor (changes every 30 min) | Yes | Polling every 5 min wastes ~6× bandwidth |
| Stock price ticker (changes every second) | No | Polling every second = always changing, no savings |
| Door sensor (changes 10×/day) | Yes | Massive savings (99% fewer messages) |
| Accelerometer (100 Hz continuous) | No | Use MQTT or a streaming protocol |
| Smart meter (reads every 15 min) | Depends | If the value changes every read, polling is OK; if often unchanged, Observe is better |

Common Mistake: Forgetting to Handle Observer Cleanup After Client Crashes

The Error: Server registers Observe subscriptions but never removes them when clients crash or restart, leading to “ghost observers” that consume memory and network bandwidth sending notifications to unreachable clients.

Why It Happens: CoAP Observe uses tokens to match notifications to requests. When a client crashes and restarts, it generates a new random token. The server continues sending notifications to the old token, which are now unrecognized and trigger RST responses.

Real-World Impact: An industrial monitoring system with 500 sensors and 20 dashboard clients (web browsers):

Without observer cleanup:

Each browser refresh creates new Observe subscription (new token)
Each old subscription remains active (server doesn't know browser closed)

After 1 week (browsers refresh ~50 times each):
  Ghost observers: 20 clients × 50 refreshes = 1,000 stale subscriptions
  Active observers: 20 clients = 20 valid subscriptions
  Ratio: 1,000 / 20 = 50:1 ghost to valid

Notification overhead:
  500 sensors × 10 changes/hour × 1,020 observers = 5.1M notifications/hour
  Wasted: 5M going to ghost observers (98% waste)
  Server CPU: 45% spent serializing notifications for dead clients
  Network: 850 MB/day of wasted traffic (RST responses)

The Fix:

1. Max-Age timeout (automatic cleanup):

# Server sets observation lifetime
response.opt.max_age = 3600  # Expire after 1 hour

# Observer must re-register before expiration or be removed
def cleanup_expired_observers(self):
    now = time.time()
    for resource_uri, observers in self.observers.items():
        self.observers[resource_uri] = [
            o for o in observers
            if now - o.registered_at < o.timeout
        ]

2. RST detection (immediate cleanup):

def on_rst_received(self, client_addr, message_id):
    """Remove observer when client sends RST to notification"""
    for resource_uri, observers in self.observers.items():
        self.observers[resource_uri] = [
            o for o in observers
            if not (o.client_addr == client_addr and o.last_mid == message_id)
        ]
    logging.info(f"Removed observer {client_addr} after RST")

3. CON notification failures (retry limit):

async def send_notification(self, resource_uri, observer, value):
    """Send CON notification; remove observer after 4 failed attempts"""
    msg = Message(code=CONTENT, token=observer.token, payload=value)
    msg.mtype = CON  # Confirmable - requires ACK

    for attempt in range(4):
        try:
            await self.send_message(observer.client_addr, msg)
            ack = await self.wait_for_ack(msg.mid, timeout=2 * (2 ** attempt))
            return  # Success
        except TimeoutError:
            logging.warning(f"Notification attempt {attempt+1} failed for {observer}")

    # 4 failures - remove observer (signature matches ObserverRegistry.remove_observer)
    self.remove_observer(resource_uri, observer.client_addr, observer.token)
    logging.info(f"Removed unresponsive observer {observer.client_addr}")

Results after implementing cleanup:

Ghost observers after 1 week: 0 (all cleaned up within 1 hour)
Notification waste: 0% (only sending to active clients)
Server CPU: 5% (down from 45%)
Network traffic: 8.5 MB/day (down from ~850 MB/day)


Common Pitfalls

CON messages require an ACK roundtrip — on lossy networks with 20% packet loss, a 4-attempt retry with exponential backoff can delay responses by 45 seconds. Use NON for periodic telemetry where data freshness matters more than guaranteed delivery; reserve CON for actuation commands.

CoAP proxies cache GET responses based on Max-Age option — a sensor returning temperature with Max-Age=60 will serve cached values for 60 seconds even if the physical reading changes. Set Max-Age to match your data freshness requirement, not the default 60 seconds.

DTLS handshake overhead (roughly 4-6 flights, i.e. 2-3 round trips) dominates latency for short-lived CoAP connections: repeatedly creating a new DTLS session for each request adds 500-2000 ms. Use DTLS session resumption (RFC 5077 session tickets) to cut reconnection to a single round trip after the initial handshake.

46.9 Practice Exercises

Hands-On Practice

Exercise 1: Observer Registry Design. Design an observer registry that supports:
  • Maximum 50 observers per resource
  • Automatic cleanup of stale observers (> 24 hours)
  • Rate limiting: max 1 notification per second per observer

Exercise 2: Bandwidth Calculation. A smart building has 200 temperature sensors, each observed by 3 clients (dashboard, HVAC controller, alarm system). Sensors report every time temperature changes by 0.5°C. Calculate:
  1. Estimated notifications per hour (assuming 2 significant changes per sensor per hour)
  2. Bandwidth usage with 20-byte notification payloads
  3. Savings compared to polling every 30 seconds

Exercise 3: NAT Keep-Alive Strategy. Design a keep-alive strategy for a CoAP server where:
  • NAT timeout is 45 seconds
  • Server has 1,000 active observers
  • Network bandwidth is limited to 10 Kbps for keep-alives

What keep-alive interval would you choose and why?

46.10 Concept Relationships

How CoAP Observe connects to broader IoT patterns and protocols:

Observe builds on:

  • CoAP Message Types - Uses CON/NON for notifications with sequence number tracking
  • UDP Transport - Stateless protocol requiring application-layer state management

Similar patterns in other protocols:

  • MQTT Subscriptions - Topic-based pub/sub vs URI-based observe
  • WebSocket Server Push - Full-duplex vs observe’s asymmetric push
  • Server-Sent Events - HTTP push mechanism for comparison

Observe enables:

  • Event-Driven IoT - React to changes instead of polling
  • Real-Time Monitoring - Dashboard updates without refresh
  • Smart Home Automation - Light sensors triggering actuators instantly

Implementation challenges:

  • NAT Traversal - UDP mapping timeouts requiring keepalives
  • Sequence Number Management - Detecting out-of-order delivery
  • State Synchronization - Client-server observation lifecycle

Performance considerations:

  • Energy Optimization - NON vs CON for battery life (99% savings)
  • Bandwidth Management - Push vs poll bandwidth comparison
  • Scalability Patterns - Server memory per observer (50-100 bytes)

46.11 See Also

CoAP Core Topics:

Implementation Guides:

  • CoAP Observe Implementation - Python and ESP32 code examples
  • Observer Registry Design - Server-side state management
  • Rate Limiting Strategies - Preventing notification floods

Protocol Comparisons:

  • MQTT vs CoAP Observe - Topic subscriptions vs resource observation
  • Long Polling vs Observe - HTTP alternative to push notifications
  • gRPC Streaming - Modern RPC with bidirectional streaming

Real-World Applications:

  • Industrial IoT Monitoring - Vibration sensors with conditional observe
  • Smart Agriculture - Soil moisture notifications on threshold changes
  • Building Automation - Temperature updates to thermostats

Specifications & RFCs:

Debugging & Tools:

46.12 What’s Next

Now that you understand CoAP server push and the Observe extension, these chapters build directly on your knowledge:

| Chapter | Focus | Why Read It |
|---|---|---|
| CoAP Advanced Features | Block-wise transfer and large payload handling | Extend your Observe knowledge to handle firmware-update notifications that exceed a single UDP packet |
| CoAP API Design | RESTful URI patterns and resource modeling | Design well-structured observable resources that follow CoAP best practices for naming and content formats |
| CoAP Security Applications | Securing CoAP with DTLS and OSCORE | Protect observer registration and notification streams from eavesdropping and injection attacks |
| CoAP Implementation Labs | Python and ESP32 hands-on examples | Apply the registry and rate-limiting patterns from this chapter in working code on real hardware |
| CoAP Message Types | CON vs NON reliability and ACK/RST mechanics | Understand the reliability layer underpinning Observe notifications and how CON retransmission affects observer cleanup |
| MQTT Broker and Topics | Topic-based publish-subscribe with a broker | Compare Observe’s direct server-push model against MQTT’s broker-mediated fan-out for high-subscriber scenarios |