The CoAP Observe extension (RFC 7641) enables server-push notifications by letting a client register interest in a resource once, then automatically receiving updates whenever the value changes. This eliminates polling overhead – instead of 8,640 GET requests per day for 10-second updates, the server pushes only on change, reducing traffic by 90%+ for slowly-changing sensor data.
46.1 Learning Objectives
By the end of this chapter, you will be able to:
Implement Server Push: Configure and build CoAP Observe for real-time notifications using RFC 7641
Design Observer Registries: Construct server-side data structures to manage registration, notifications, and deregistration lifecycle
Calculate Bandwidth Savings: Apply formulas to quantify Observe benefits over traditional polling for specific IoT scenarios
Diagnose Edge Cases: Identify and resolve NAT timeout, token reuse, and ghost observer problems in deployed systems
Evaluate Protocol Trade-offs: Justify selecting Observe versus polling versus MQTT based on resource change rate and energy constraints
Apply Rate Limiting: Configure change-threshold and interval parameters to prevent notification floods on rapidly-changing resources
46.2 Prerequisites
Before diving into this chapter, you should be familiar with:
Core Concept: Observe transforms CoAP from pure request-response into a publish-subscribe pattern. A client registers interest in a resource once, then receives automatic notifications whenever the resource changes - no repeated polling needed.
Why It Matters: Polling wastes energy and bandwidth. If you need temperature updates every 10 seconds, polling requires 8,640 GET requests/day. With Observe, the server pushes only when values change, potentially reducing traffic by 90%+ for slowly-changing resources.
Key Takeaway: Register with Observe: 0 in your GET request, receive notifications with incrementing sequence numbers, and deregister with Observe: 1 or RST when done.
Sensor Squad: Smart Updates Without Asking!
Meet our friends: Sammy the Sensor, Lila the Light, and Max the Microcontroller!
Sammy says: “Imagine you want to know the temperature in your room. You could keep asking me every minute - ‘What’s the temperature? What’s the temperature?’ - but that’s SO tiring for both of us!”
Lila explains: “With CoAP Observe, it’s like subscribing to a YouTube channel! You subscribe ONCE, and then you automatically get notified whenever there’s a new video - or in our case, a new temperature reading!”
Real-world example: Think about getting text messages from your favorite pizza place. You don’t call them every 5 minutes asking “Is my pizza ready?” - that would be annoying! Instead, they TEXT YOU when it’s ready. That’s exactly what Observe does for IoT devices!
Max’s tip: “Observe is like a magic subscription service: 1. Subscribe once (I want to know about temperature) 2. Relax while the sensor does its job 3. Get notified only when something changes 4. Unsubscribe when you’re done - no more messages!”
Why it’s awesome: Less talking = more battery life! Your smart devices can last much longer because they’re not constantly asking “Any updates? Any updates?” - they just wait patiently for news!
For Beginners: Understanding Observe vs Polling
What is polling? Polling is like repeatedly asking “Are we there yet?” on a road trip. You keep asking the same question over and over, even if the answer hasn’t changed.
What is Observe? Observe is like asking your parent to tell you when you arrive. You ask once, then relax - they’ll let you know when something changes.
Why does this matter for IoT?
Battery life: Every time a device sends a message, it uses power. Fewer messages = longer battery life.
Network traffic: If 1,000 sensors all poll every second, that’s 1,000 messages per second! With Observe, you might only send 10 messages when values actually change.
Speed: With polling, you might not know about changes for up to your polling interval. With Observe, you know immediately.
Simple rule: Use Observe when you want real-time updates without wasting energy.
46.3.1 Traditional Polling vs. Observe
Figure 46.1: Sequence diagram comparing CoAP polling (repeated GET requests) versus Observe pattern (single registration with automatic push notifications), showing reduced message exchange with Observe
Traffic comparison for 24 hours of temperature monitoring:
| Approach | Messages | Bandwidth | Battery Impact |
|---|---|---|---|
| Polling (every 60 s) | 1,440 requests + 1,440 responses | ~62 KB | High |
| Observe (10 changes/hour) | 1 registration + 240 notifications | ~6.6 KB | 89% less |
Putting Numbers to It
We can quantify the exact bandwidth savings. Polling every 60 seconds over 24 hours means 24 × 60 = 1,440 GET requests plus 1,440 responses, roughly 62 KB at ~22 bytes per message. With Observe and 10 changes per hour, one registration plus 240 notifications (~6.6 KB) covers the same day, about an 89% reduction.
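The same arithmetic can be sketched in a few lines of Python. The per-message byte sizes below are assumed typical minimal CoAP message sizes, not values fixed by the RFC:

```python
def compare(hours=24, poll_interval_s=60, changes_per_hour=10,
            request_bytes=22, notification_bytes=27):
    # Byte sizes are assumptions for a minimal CoAP header + small payload
    poll_msgs = 2 * (hours * 3600 // poll_interval_s)   # request + response pairs
    poll_bytes = poll_msgs * request_bytes
    obs_msgs = 1 + hours * changes_per_hour             # registration + notifications
    obs_bytes = obs_msgs * notification_bytes
    savings_pct = 100.0 * (1.0 - obs_bytes / poll_bytes)
    return poll_msgs, poll_bytes, obs_msgs, obs_bytes, savings_pct

pm, pb, om, ob, s = compare()
print(pm, om)              # 2880 messages for polling vs 241 with Observe
print(f"{s:.0f}% less traffic")
```

Adjust the change rate to your own sensor: the closer `changes_per_hour` gets to the polling rate, the smaller the savings.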
Figure 46.2: Architecture diagram showing multiple CoAP clients observing a single server resource, with the server maintaining an observer registry and automatically pushing notifications to all registered observers when the resource value changes
This architecture shows how multiple clients can observe a single resource. When the sensor value changes, the server automatically notifies all registered observers.
Interactive Calculator: Polling vs Observe Bandwidth
How to use: Adjust the sliders to match your scenario. The calculator shows total messages, bandwidth, and savings percentage. Use this to decide whether Observe is worth implementing for your specific use case.
Figure 46.3: State diagram showing CoAP Observe lifecycle with three states: unregistered, registered, and terminated. Shows transitions including registration (GET with Observe:0), notifications with incrementing sequence numbers, and deregistration paths (explicit GET with Observe:1, RST response, or timeout)
46.4.3 Deregistration
Three ways to stop receiving notifications:
1. Explicit deregistration (GET with Observe: 1):
Client -> Server: GET /temperature
Token: 0xAB12
Observe: 1 (deregister)
2. RST response to unwanted notification:
Server -> Client: NON 2.05 Content (notification)
Client -> Server: RST (I don't want this anymore)
3. Timeout (Max-Age expiration):
If client doesn't refresh observation within Max-Age,
server removes observer from list.
Try It: Deregistration Method Advisor
How to use: Select a scenario to see the recommended deregistration method, the protocol exchange, and the energy cost. Toggle between CON and NON to see how notification reliability affects cleanup behavior.
46.5 Observer Management Implementation
46.5.1 Server-Side Observer Registry
```python
from collections import defaultdict
from dataclasses import dataclass
import time

@dataclass
class Observer:
    client_addr: tuple
    token: bytes
    registered_at: float
    last_notification: float
    timeout: int
    last_mid: int = 0  # Message ID of the last notification sent

class ObserverRegistry:
    def __init__(self):
        # Map: resource_uri -> list of Observer objects
        self.observers = defaultdict(list)
        self.observer_timeout = 86400  # 24 hours default

    def register_observer(self, resource_uri, client_addr, token, max_age=None):
        observer = Observer(
            client_addr=client_addr,
            token=token,
            registered_at=time.time(),
            last_notification=time.time(),
            timeout=max_age or self.observer_timeout,
        )
        self.observers[resource_uri].append(observer)
        return observer

    def notify_all(self, resource_uri, value, content_format):
        """Send a notification to all observers of a resource."""
        expired = []
        for observer in self.observers[resource_uri]:
            # Check if the observation expired
            if time.time() - observer.registered_at > observer.timeout:
                expired.append(observer)
                continue
            # Transport-specific send, defined elsewhere
            self.send_notification(observer, value, content_format)
        # Clean up expired observers
        for observer in expired:
            self.observers[resource_uri].remove(observer)

    def remove_observer(self, resource_uri, client_addr, token):
        """Remove an observer on explicit deregistration or received RST."""
        self.observers[resource_uri] = [
            o for o in self.observers[resource_uri]
            if not (o.client_addr == client_addr and o.token == token)
        ]
```
Prevent notification floods on rapidly-changing resources:
```python
import time

class RateLimitedResource:
    def __init__(self, min_interval=1.0, change_threshold=0.5):
        self.min_interval = min_interval          # Notify at least every min_interval seconds (staleness limit)
        self.change_threshold = change_threshold  # Minimum change that triggers an immediate notification
        self.observers = []                       # Populated by the registration handler
        self.last_notify_time = {}                # Per-observer last notification time
        self.last_notified_value = {}             # Per-observer last sent value

    def on_value_change(self, new_value):
        now = time.time()
        for observer in self.observers:
            last_time = self.last_notify_time.get(observer.token, 0)
            last_value = self.last_notified_value.get(observer.token, None)
            # Notify on the first value, on a significant change, or when the
            # last notification is older than min_interval (prevents staleness)
            should_notify = (
                last_value is None
                or abs(new_value - last_value) >= self.change_threshold
                or (now - last_time) >= self.min_interval
            )
            if should_notify:
                self.send_notification(observer, new_value)
                self.last_notify_time[observer.token] = now
                self.last_notified_value[observer.token] = new_value
```
Putting Numbers to It
The rate limiting logic above combines two thresholds to prevent notification storms. For a sensor with value \(v(t)\) at time \(t\), a notification is sent when:
\[\text{notify} = |\Delta v| \geq \theta_v \text{ OR } \Delta t \geq \theta_t\]
where \(\Delta v = v_{\text{new}} - v_{\text{last}}\) is the value change and \(\Delta t = t_{\text{now}} - t_{\text{last}}\) is time since last notification.
For example, a temperature sensor monitoring a boiler room:

- Change threshold: \(\theta_v = 0.5°\text{C}\)
- Time threshold: \(\theta_t = 60\text{s}\)
If temperature jumps from 80°C to 82°C in 10 seconds: \[|\Delta v| = |82 - 80| = 2°\text{C} \geq 0.5°\text{C} \Rightarrow \text{notify immediately}\]
If temperature drifts slowly from 80.0°C to 80.3°C over 65 seconds: \[|\Delta v| = 0.3°\text{C} < 0.5°\text{C} \text{ BUT } \Delta t = 65\text{s} \geq 60\text{s} \Rightarrow \text{notify}\]
This dual-threshold approach prevents both change-based flooding (rapid fluctuations) and staleness (no updates for too long).
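The dual-threshold rule can be checked directly against the two worked examples above (a minimal sketch; `should_notify` is a hypothetical helper, not part of any CoAP library):

```python
def should_notify(delta_v, delta_t, theta_v=0.5, theta_t=60.0):
    """Dual-threshold rule: notify on a big change OR when the last
    notification is older than the staleness limit."""
    return abs(delta_v) >= theta_v or delta_t >= theta_t

# 80 C -> 82 C in 10 s: change exceeds 0.5 C, notify immediately
print(should_notify(82 - 80, 10))       # True
# 80.0 C -> 80.3 C over 65 s: change too small, but 65 s >= 60 s
print(should_notify(80.3 - 80.0, 65))   # True
# Same small drift after only 20 s: suppressed
print(should_notify(0.3, 20))           # False
```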
How to use: Configure your sensor’s update rate and desired notification thresholds. The calculator shows how many notifications will actually be sent and the reduction factor achieved by rate limiting.
46.6 Deep Dive: Observe Internals
Deep Dive: Observe Sequence Numbers and Ordering
The Observe option value is a sequence number that helps clients detect: 1. Out-of-order notifications (UDP doesn’t guarantee ordering) 2. Notification freshness (which update is newer)
Sequence Number Rules (RFC 7641 Section 4.4):
```python
def is_notification_fresh(current_seq, new_seq):
    """
    Determine if the new notification is fresher than the current one.
    Handles 24-bit wraparound (RFC 7641 Section 4.4). The RFC additionally
    treats any notification arriving more than 128 s after the current one
    as fresher; that timing rule needs timestamps and is omitted here.
    """
    # Sequence numbers are 24-bit (0 to 16,777,215)
    MAX_SEQ = (1 << 24) - 1
    # Difference in modular arithmetic handles wraparound
    diff = (new_seq - current_seq) % (MAX_SEQ + 1)
    # diff == 0 is a duplicate; 0 < diff < 2^23 means forward (fresher);
    # diff >= 2^23 means backward (older, out of order)
    return 0 < diff < (1 << 23)
```
Example scenario:
Notification 1: Observe=100, temp=22.5
Notification 2: Observe=102, temp=23.0 (arrived out of order)
Notification 3: Observe=101, temp=22.8
Client receives: 100 -> 102 -> 101
Client should display: 22.5 -> 23.0 (ignore 101, it's older than 102)
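The comparison above can be exercised on both the out-of-order scenario and a wraparound case. This self-contained sketch restates the same modular rule:

```python
def is_fresh(current_seq, new_seq, bits=24):
    # Forward distance in modular arithmetic; fresher if within half the space
    diff = (new_seq - current_seq) % (1 << bits)
    return 0 < diff < (1 << (bits - 1))

# Scenario from the text: client has seen 102, then 101 arrives late
print(is_fresh(102, 101))        # False -> discard the late notification
# Wraparound: 16,777,210 -> 5 is a forward step of 11, not a jump back
print(is_fresh(16_777_210, 5))   # True -> accept despite the smaller number
```

Note that a duplicate (equal sequence number) is not fresher, so `is_fresh(n, n)` is `False`.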
Interactive Tool: Sequence Number Freshness Checker
How to use: Enter current and new sequence numbers to see if the notification should be accepted or discarded. This tool implements RFC 7641 Section 4.4 sequence number comparison logic with 24-bit wraparound handling.
Deep Dive: Observer Removal Conditions
Automatic observer removal triggers:
RST received: Client sends RST in response to notification
Timeout: No activity within observation lifetime
CON notification fails: After 4 retransmissions without ACK
Resource deleted: Server removes all observers when resource gone
Retransmission behavior for CON notifications:
Server sends CON notification
Wait 2 seconds for ACK
Retransmit with same Message ID
Wait 4 seconds (exponential backoff)
Retransmit
Wait 8 seconds
Retransmit
Wait 16 seconds
Final attempt
After 4 failures -> Remove observer
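The timeline above sums to the ~30-second detection figure quoted for the interactive tool. A quick sketch, assuming RFC 7252 defaults with ACK_RANDOM_FACTOR = 1 (the RFC allows up to 1.5, which stretches every wait):

```python
def con_detection_time(ack_timeout=2.0, max_retransmit=4):
    # Wait after the initial transmission and after each retransmission,
    # doubling each time: 2 + 4 + 8 + 16 = 30 s with the defaults
    waits = [ack_timeout * (2 ** i) for i in range(max_retransmit)]
    return sum(waits)

print(con_detection_time())  # 30.0 seconds to declare the observer unreachable
```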
How to use: Adjust the timeout parameters and backoff multiplier to see how the retransmission timeline changes. Set “ACK arrives after attempt #” to simulate scenarios where the client eventually responds. With the default settings, it takes about 30 seconds to detect an unreachable client.
46.7 Edge Cases and Gotchas
46.7.1 Token Reuse After Client Restart
Problem:
- Client registers observation with Token=0x42
- Client crashes and restarts
- Server sends notification with Token=0x42
- Client doesn't recognize token (state lost) -> sends RST
- Server removes observer
Solutions:
Server MUST remove observer when RST received
Client should re-register observations after restart
Consider persisting observation state to flash
46.7.2 NAT Timeout Issue
Problem:
UDP NAT mappings expire (typically 30-60 seconds)
- Client behind NAT registers observation
- Server tries to push notification 5 minutes later
- NAT mapping expired -> notification never reaches client
Solutions:
Server sends periodic keep-alive NON notifications (every 30 sec)
Client sends periodic re-registration (GET with Observe=0)
How to use: Enter your NAT timeout and number of observers. The calculator recommends a keep-alive interval with safety margin and shows the bandwidth overhead. Use this to decide if keep-alives are practical or if you should switch to CoAP over TCP.
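The keep-alive sizing can be sketched as follows. The helper `keepalive_plan` is hypothetical, and the 16-byte message size is an assumption for a minimal NON notification (4-byte header + token + tiny payload):

```python
def keepalive_plan(nat_timeout_s, observers, msg_bytes=16, safety=0.5):
    """Pick a keep-alive interval at a safety fraction of the NAT timeout
    and estimate the aggregate bandwidth overhead in bits per second."""
    interval = nat_timeout_s * safety
    msgs_per_s = observers / interval
    bandwidth_bps = msgs_per_s * msg_bytes * 8
    return interval, bandwidth_bps

interval, bps = keepalive_plan(nat_timeout_s=45, observers=1000)
print(interval)      # 22.5 s between keep-alives per observer
print(round(bps))    # aggregate overhead in bps across all observers
```

With 1,000 observers behind a 45-second NAT, the overhead lands under 6 Kbps, so a 10 Kbps budget is feasible; with tighter budgets, CoAP over TCP avoids the problem entirely.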
Pitfall: Mismanaging Observe Tokens Across Client Restarts
The Mistake: Clients generate new random tokens after reboot without deregistering previous observations, causing “ghost subscriptions” where the server continues sending notifications to tokens the client no longer recognizes.
Why It Happens: The Observe pattern uses tokens to match notifications to subscriptions. When a client reboots, it loses its token-to-subscription mapping but the server still has the observer registered.
The Fix: Implement proper token lifecycle management:
Persist tokens across reboots: Store active observation tokens in EEPROM/Flash
Use deterministic token generation: Generate from device ID + resource URI hash
Handle orphaned notifications gracefully: When receiving notification with unknown token, send RST
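Deterministic token generation (fix #2 above) can be sketched with a hash of the device identity and resource URI. `observe_token` is a hypothetical helper; CoAP tokens may be 0-8 bytes:

```python
import hashlib

def observe_token(device_id: str, resource_uri: str, length: int = 8) -> bytes:
    """Derive a stable CoAP token from device identity and the observed
    resource, so the same token survives a reboot without flash storage."""
    digest = hashlib.sha256(f"{device_id}|{resource_uri}".encode()).digest()
    return digest[:length]

t1 = observe_token("esp32-A1B2", "/sensors/temperature")
t2 = observe_token("esp32-A1B2", "/sensors/temperature")
print(t1 == t2)   # True: identical across reboots
print(len(t1))    # 8
```

After rebooting, the client regenerates the same token and can match incoming notifications to its re-registered observation, avoiding the RST/ghost cycle.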
How to use: Enter your battery specifications and usage patterns. The calculator shows battery life for both polling and Observe approaches, plus the improvement factor. This helps justify the engineering effort of implementing Observe.
Interactive: CoAP Observe Pattern Animation
Worked Example: Industrial Vibration Monitoring with Observe
Scenario: A manufacturing plant monitors vibration levels on 100 motors using ESP32 sensors with accelerometers. Maintenance dashboard needs real-time updates when vibration exceeds thresholds (normal: <2.5 mm/s, warning: 2.5-4.5 mm/s, critical: >4.5 mm/s).
Use this decision matrix and the decision tree below to decide if Observe is appropriate for your application:
| Requirement | Use Observe | Use Polling | Use Neither (MQTT) |
|---|---|---|---|
| Update frequency | Event-driven, infrequent changes | Periodic, consistent intervals | Continuous streaming |
| Number of observers | 1-10 per resource | 1-3 per resource | 10+ observers (broker scales better) |
| Change rate | <10% of polling rate | >50% of polling rate | Constant changes |
| Battery constraints | Critical (coin cell, multi-year) | Moderate (rechargeable) | Mains-powered |
| Network reliability | Stable (LAN, Wi-Fi) | Lossy (cellular) | Stable with fallback |
| Observer lifecycle | Long-lived (hours to days) | Short-lived (seconds to minutes) | Permanent subscriptions |
Decision tree:

1. Does the resource change less often than you would poll it?
   - No: Use polling (Observe overhead not justified)
   - Yes: Continue
2. Do you need notifications from more than 10 resources per client?
   - Yes: Consider MQTT (broker-based pub-sub scales better)
   - No: Continue
3. Are clients behind NAT/firewall without port forwarding?
   - Yes: Problem: the server can't push to the client. Options: CoAP over TCP (RFC 8323) for NAT traversal, long polling instead of Observe, or MQTT
   - No: Continue
4. Is battery life critical (>1 year target on coin cell)?
   - Yes: Use CoAP Observe (eliminates polling overhead)
   - No: Polling is acceptable, but Observe is still beneficial
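The tree above can be encoded as a small helper. `recommend` is a hypothetical sketch of the decision logic, not a library function:

```python
def recommend(change_rate_vs_poll, resources_per_client, behind_nat, battery_critical):
    """Return a protocol hint following the decision tree above.
    change_rate_vs_poll: resource changes per polling interval (1.0 = every poll)."""
    if change_rate_vs_poll >= 1.0:
        return "polling"             # Resource changes as often as you'd poll it
    if resources_per_client > 10:
        return "mqtt"                # Broker-based pub/sub scales better
    if behind_nat:
        return "coap-tcp or mqtt"    # Server push can't traverse a plain UDP NAT
    return "observe"                 # Battery-critical or not, Observe wins here

print(recommend(0.1, 3, behind_nat=False, battery_critical=True))    # observe
print(recommend(2.0, 3, behind_nat=False, battery_critical=False))   # polling
```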
Example decisions:
| Application | Observe? | Reasoning |
|---|---|---|
| Temperature sensor (changes every 30 min) | Yes | Polling every 5 min sends 12 requests/hour for ~2 changes, roughly 6× the traffic |
| Stock price ticker (changes every second) | No | Polling every second = always changing, no savings |
| Door sensor (changes 10×/day) | Yes | Massive savings (99% fewer messages) |
| Accelerometer (100 Hz continuous) | No | Use MQTT or a streaming protocol |
| Smart meter (read every 15 min) | Depends | If the value changes every read, polling is fine; if often unchanged, Observe is better |
Common Mistake: Forgetting to Handle Observer Cleanup After Client Crashes
The Error: Server registers Observe subscriptions but never removes them when clients crash or restart, leading to “ghost observers” that consume memory and network bandwidth sending notifications to unreachable clients.
Why It Happens: CoAP Observe uses tokens to match notifications to requests. When a client crashes and restarts, it generates a new random token. The server continues sending notifications to the old token, which are now unrecognized and trigger RST responses.
Real-World Impact: An industrial monitoring system with 500 sensors and 20 dashboard clients (web browsers):
Without observer cleanup:
Each browser refresh creates new Observe subscription (new token)
Each old subscription remains active (server doesn't know browser closed)
After 1 week (browsers refresh ~50 times each):
Ghost observers: 20 clients × 50 refreshes = 1,000 stale subscriptions
Active observers: 20 clients = 20 valid subscriptions
Ratio: 1,000 / 20 = 50:1 ghost to valid
Notification overhead:
500 sensors × 10 changes/hour × 1,020 observers = 5.1M notifications/hour
Wasted: 5M going to ghost observers (98% waste)
Server CPU: 45% spent serializing notifications for dead clients
Network: 850 MB/day of wasted traffic (RST responses)
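The ghost-observer arithmetic above checks out directly:

```python
clients, refreshes = 20, 50
ghosts = clients * refreshes              # stale subscriptions after a week
active = clients                          # one live subscription per client
total_observers = ghosts + active

sensors, changes_per_hour = 500, 10
notifications = sensors * changes_per_hour * total_observers
wasted = sensors * changes_per_hour * ghosts

print(ghosts)                             # 1000 ghost subscriptions
print(notifications)                      # 5,100,000 notifications/hour
print(round(100 * wasted / notifications))  # ~98 percent wasted
```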
The Fix:
1. Max-Age timeout (automatic cleanup):
```python
# Server sets the observation lifetime
response.opt.max_age = 3600  # Expire after 1 hour
# Observers must re-register before expiration or be removed

def cleanup_expired_observers(self):
    now = time.time()
    for resource_uri, observers in self.observers.items():
        self.observers[resource_uri] = [
            o for o in observers
            if now - o.registered_at < o.timeout
        ]
```
2. RST detection (immediate cleanup):
```python
def on_rst_received(self, client_addr, message_id):
    """Remove the observer when a client sends RST to a notification."""
    for resource_uri, observers in self.observers.items():
        self.observers[resource_uri] = [
            o for o in observers
            if not (o.client_addr == client_addr and o.last_mid == message_id)
        ]
    logging.info(f"Removed observer {client_addr} after RST")
```
Ghost observers after 1 week: 0 (all cleaned up within 1 hour)
Notification waste: 0% (only sending to active clients)
Server CPU: 5% (down from 45%)
Network traffic: 8.5 MB/day (down from 858 MB/day)
Prevention checklist:
Common Pitfalls
1. Using Confirmable Messages for Every CoAP Request
CON messages require an ACK roundtrip — on lossy networks with 20% packet loss, a 4-attempt retry with exponential backoff can delay responses by 45 seconds. Use NON for periodic telemetry where data freshness matters more than guaranteed delivery; reserve CON for actuation commands.
2. Ignoring CoAP Proxy Caching Semantics
CoAP proxies cache GET responses based on Max-Age option — a sensor returning temperature with Max-Age=60 will serve cached values for 60 seconds even if the physical reading changes. Set Max-Age to match your data freshness requirement, not the default 60 seconds.
3. Forgetting DTLS Session Management
DTLS handshake (6-8 roundtrips) dominates latency for short-lived CoAP connections — repeatedly creating new DTLS sessions for each request adds 500-2000ms overhead. Use DTLS session resumption (RFC 5077) to reduce reconnection to 1 roundtrip after the initial handshake.
Label the Diagram
Order the Steps
46.9 Practice Exercises
Hands-On Practice
Exercise 1: Observer Registry Design
Design an observer registry that supports:
- Maximum 50 observers per resource
- Automatic cleanup of stale observers (> 24 hours old)
- Rate limiting: at most 1 notification per second per observer

Exercise 2: Bandwidth Calculation
A smart building has 200 temperature sensors, each observed by 3 clients (dashboard, HVAC controller, alarm system). Sensors report every time the temperature changes by 0.5°C. Calculate:
1. Estimated notifications per hour (assuming 2 significant changes per sensor per hour)
2. Bandwidth usage with 20-byte notification payloads
3. Savings compared to polling every 30 seconds

Exercise 3: NAT Keep-Alive Strategy
Design a keep-alive strategy for a CoAP server where:
- NAT timeout is 45 seconds
- The server has 1,000 active observers
- Network bandwidth is limited to 10 Kbps for keep-alives
What keep-alive interval would you choose and why?
46.10 Concept Relationships
How CoAP Observe connects to broader IoT patterns and protocols:
Observe builds on:
CoAP Message Types - Uses CON/NON for notifications with sequence number tracking
UDP Transport - Stateless protocol requiring application-layer state management
Similar patterns in other protocols:
MQTT Subscriptions - Topic-based pub/sub vs URI-based observe
WebSocket Server Push - Full-duplex vs observe’s asymmetric push
Server-Sent Events - HTTP push mechanism for comparison
Observe enables:
Event-Driven IoT - React to changes instead of polling
Real-Time Monitoring - Dashboard updates without refresh
Smart Home Automation - Light sensors triggering actuators instantly