48  Message Queue Lab Challenges

In 60 Seconds

Dead letter queues capture messages that fail delivery after multiple retries, enabling debugging without data loss. Message deduplication prevents duplicate processing in QoS 1 by tracking recently processed message IDs. Flow control (backpressure) pauses delivery to slow consumers when in-flight messages exceed threshold, preventing queue overflow.

Key Concepts
  • Queue Overflow: Condition where incoming message rate exceeds processing rate, causing the queue to fill and subsequent messages to be dropped or rejected
  • Head-of-Line Blocking: Scenario where a slow or failed consumer blocks processing of subsequent messages in the queue, increasing latency for all downstream consumers
  • Consumer Group: Set of consumers sharing a queue’s workload, each receiving a subset of messages for horizontal scaling (Kafka consumer groups, RabbitMQ competing consumers)
  • Poison Message: Message that causes consumer failure on every processing attempt, repeatedly re-queued and blocking healthy messages from being processed
  • Dead Letter Exchange (DLX): Exchange (a routing element, not itself a queue, in RabbitMQ terminology) that routes rejected or expired messages to a separate dead letter queue for inspection and recovery
  • Backpressure: Mechanism where overwhelmed consumers signal upstream producers to slow message generation, preventing queue overflow
  • Message TTL: Time-to-live setting after which unprocessed queue messages are automatically expired and dropped, or routed to a dead letter queue when one is configured
  • Circuit Breaker Pattern: Queue management strategy that stops delivering messages to a failing consumer for a cool-down period, allowing recovery before resuming delivery

Minimum Viable Understanding
  • Dead letter queues capture messages that fail delivery after multiple retries, enabling debugging and replay without losing data.
  • Message deduplication prevents duplicate processing in QoS 1 by tracking recently processed message IDs in a circular cache.
  • Flow control (backpressure) pauses message delivery to slow consumers when their in-flight message count exceeds a threshold, preventing queue overflow.

48.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Assemble a complete message broker simulation and execute it on ESP32
  • Construct dead letter queues that capture and categorize failed deliveries
  • Implement message deduplication using circular ID caches to prevent duplicate processing
  • Configure subscriber groups with round-robin load balancing across consumer instances
  • Design flow control mechanisms that apply backpressure to slow consumers

Message queues are like a super-organized mail room that handles tricky situations!

Sammy the Sensor was sending letters (messages) to his friends, but sometimes things went wrong. One letter bounced back THREE times because Max the Microcontroller was too busy to read it. “What do we do with letters nobody can receive?” asked Lila the LED.

“We put them in the Lost Letters Box!” said Bella the Battery. That is what grown-ups call a dead letter queue – a special place for messages that just could not be delivered, so someone can figure out what went wrong later.

But there was another problem! Sometimes Sammy accidentally sent the SAME letter twice. Max got confused: “Did the temperature change twice, or is this the same reading?” So they created a stamp collection (deduplication cache) – every letter gets a unique stamp number, and if the mail room sees the same stamp twice, it throws away the copy!

And when Max got really busy, Bella put up a “SLOW DOWN” sign at the mail room door. That is backpressure – telling senders to wait because the receiver needs to catch up!

In real IoT systems, things go wrong all the time: networks drop, devices get busy, and messages get duplicated. These lab challenges teach you the safety nets that production systems use:

  • Dead letter queues = “lost and found” for failed messages
  • Deduplication = making sure you don’t count the same sensor reading twice
  • Flow control = preventing fast sensors from overwhelming slow processors

Try the challenges in order – each builds on concepts from the previous one!

48.2 Introduction

This chapter provides hands-on lab challenges to reinforce message queue concepts. You’ll work with a complete Wokwi ESP32 simulation that demonstrates queue operations, pub/sub patterns, topic routing, and QoS handling.

48.3 Wokwi Simulation Lab

Time: ~45 min | Difficulty: Intermediate-Advanced | Lab Type: Wokwi ESP32 Simulation

48.3.1 How to Use This Simulation

  1. Click inside the Wokwi editor below
  2. Copy and paste the provided code into the editor
  3. Click the green Play button to start the simulation
  4. Observe the Serial Monitor output showing message queue operations
  5. Modify parameters to experiment with different scenarios

48.3.2 Complete Lab Code

The full implementation (1,200+ lines) is available in the Message Queue Lab Overview. The code demonstrates:

  • Queue operations (enqueue, dequeue, priority)
  • Pub/Sub pattern with topic routing
  • MQTT-style wildcards (+ and #)
  • QoS levels 0, 1, and 2
  • Message persistence and retained messages

48.4 Lab Challenges

48.4.1 Challenge 1: Implement Message Expiry (Beginner)

Add logic to check message expiry timestamps and remove expired messages from queues.

Objective: Messages with expiry > 0 should be removed when millis() > expiry.

void removeExpiredMessages(MessageQueue* q) {
    // Exercise: Implement this function
    // 1. Iterate through queue from head to tail
    // 2. Check if msg.expiry > 0 && millis() > msg.expiry
    // 3. Mark expired messages for removal
    // 4. Log expiry events with message ID and topic
    // 5. Update queue statistics
}

Hints:

  • Iterate using: for (int i = 0; i < q->count; i++)
  • Calculate index: int idx = (q->head + i) % q->maxSize
  • Check expiry: if (msg.expiry > 0 && millis() > msg.expiry)
  • Call this function in processBrokerQueue() before routing

Expected Output:

[EXPIRE] Message 15 expired: topic=sensors/humidity/room1, age=65000ms
[EXPIRE] Removed 1 expired messages from broker queue

Solution:

void removeExpiredMessages(MessageQueue* q) {
  int expiredCount = 0;
  unsigned long now = millis();

  for (int i = 0; i < q->count; i++) {
    int idx = (q->head + i) % q->maxSize;
    Message* msg = &q->messages[idx];

    if (msg->expiry > 0 && now > msg->expiry) {
      Serial.printf("[EXPIRE] Message %lu expired: topic=%s, age=%lums\n",
                    msg->messageId, msg->topic, now - msg->timestamp);
      // Mark for removal (in real implementation, compact the queue)
      msg->messageId = 0;
      expiredCount++;
    }
  }

  if (expiredCount > 0) {
    Serial.printf("[EXPIRE] Removed %d expired messages from broker queue\n", expiredCount);
    q->totalDropped += expiredCount;
  }
}

Try It: Message Expiry Simulator

Explore how message TTL (time-to-live) affects queue behavior. Adjust the message arrival rate, TTL duration, and processing speed to see how many messages expire before being consumed.

48.4.2 Challenge 2: Add Dead Letter Queue (Intermediate)

When messages fail delivery after 3 retries, move them to a “dead letter queue” for later analysis.

Objective: Implement a DLQ that captures failed messages with failure reasons.

MessageQueue deadLetterQueue;

struct DeadLetterInfo {
  Message originalMessage;
  char failureReason[64];
  uint32_t failureTime;
  int attemptCount;
};

void moveToDeadLetter(Message* msg, const char* reason) {
    // Exercise: Implement this function
    // 1. Create DeadLetterInfo with failure details
    // 2. Log the failure with message ID and reason
    // 3. Enqueue to deadLetterQueue
    // 4. Update DLQ statistics
}

Implementation Steps:

  1. Initialize deadLetterQueue in setup()
  2. Modify processSubscriberInboxes() to check deliveryCount >= 3
  3. Call moveToDeadLetter() instead of requeueing
  4. Add a printDeadLetterQueue() function for debugging

Expected Output:

[DLQ] Message 42 moved to dead letter queue
[DLQ] Reason: Max retries exceeded (3 attempts)
[DLQ] Original topic: alerts/motion/entrance
[DLQ] Dead letter queue size: 1

Solution:

void moveToDeadLetter(Message* msg, const char* reason) {
  Message dlqMsg = *msg;

  // Modify payload to include failure info
  char newPayload[MAX_PAYLOAD_SIZE];
  snprintf(newPayload, MAX_PAYLOAD_SIZE,
           "{\"original\":%s,\"failure\":\"%s\",\"attempts\":%d}",
           msg->payload, reason, msg->deliveryCount);
  strncpy(dlqMsg.payload, newPayload, MAX_PAYLOAD_SIZE - 1);
  dlqMsg.payload[MAX_PAYLOAD_SIZE - 1] = '\0';  // strncpy does not guarantee termination
  dlqMsg.payloadLen = strlen(dlqMsg.payload);

  // Change topic to DLQ topic
  snprintf(dlqMsg.topic, MAX_TOPIC_LENGTH, "$dlq/%s", msg->topic);

  if (enqueue(&deadLetterQueue, &dlqMsg)) {
    Serial.printf("[DLQ] Message %lu moved to dead letter queue\n", msg->messageId);
    Serial.printf("[DLQ] Reason: %s\n", reason);
    Serial.printf("[DLQ] Dead letter queue size: %d\n", getQueueSize(&deadLetterQueue));
  }
}

48.4.3 Challenge 3: Implement Message Deduplication (Intermediate)

For QoS 1 messages, add deduplication to prevent duplicate processing.

Objective: Track recently processed message IDs and reject duplicates.

struct MessageIdCache {
    uint32_t ids[100];
    int writeIndex;
    int count;
};

MessageIdCache processedIds;

bool isDuplicate(uint32_t messageId) {
    // Exercise: Check if messageId exists in cache
}

void addToCache(uint32_t messageId) {
    // Exercise: Add messageId to circular cache
}

Implementation Steps:

  1. Initialize cache in setup()
  2. Check isDuplicate() before processing in processSubscriberInboxes()
  3. Call addToCache() after successful processing
  4. Use circular buffer to limit memory usage

Expected Output:

[DEDUP] Processing message 50 for first time
[DEDUP] DUPLICATE detected: message 50 already processed
[DEDUP] Cache utilization: 45/100 (45%)

Solution:

bool isDuplicate(uint32_t messageId) {
  for (int i = 0; i < processedIds.count; i++) {
    if (processedIds.ids[i] == messageId) {
      Serial.printf("[DEDUP] DUPLICATE detected: message %lu already processed\n",
                    messageId);
      return true;
    }
  }
  return false;
}

void addToCache(uint32_t messageId) {
  processedIds.ids[processedIds.writeIndex] = messageId;
  processedIds.writeIndex = (processedIds.writeIndex + 1) % 100;
  if (processedIds.count < 100) {
    processedIds.count++;
  }
  // Note: the percentage equals the count only because the cache holds 100 entries.
  Serial.printf("[DEDUP] Cache utilization: %d/100 (%d%%)\n",
                processedIds.count, processedIds.count);
}

Try It: Deduplication Cache Simulator

See how cache size affects duplicate detection during network recovery bursts. Adjust the cache size and burst parameters to understand why undersized caches fail in production.

48.4.4 Challenge 4: Add Subscriber Groups (Advanced)

Implement “shared subscriptions” where messages are load-balanced across a subscriber group.

Objective: Only ONE member of a group receives each message (round-robin).

struct SubscriberGroup {
    char groupId[32];
    Subscriber* members[5];
    int memberCount;
    int nextMember;  // Round-robin index
};

SubscriberGroup groups[5];
int groupCount = 0;

int createGroup(const char* groupId);
int addToGroup(const char* groupId, Subscriber* subscriber);
void publishToGroup(Message* msg, SubscriberGroup* group);

Implementation Steps:

  1. Create group management functions
  2. Modify registerSubscriber() to detect group subscriptions (e.g., $share/group1/sensors/#)
  3. Modify publishMessage() to route to groups differently
  4. Implement round-robin or least-loaded member selection

Expected Output:

[GROUP] Created subscriber group: analytics-workers
[GROUP] Added subscriber worker-1 to group analytics-workers (1/5 members)
[GROUP] Added subscriber worker-2 to group analytics-workers (2/5 members)
[GROUP] Message 100 routed to worker-2 (round-robin)
[GROUP] Message 101 routed to worker-1 (round-robin)

Solution:

void publishToGroup(Message* msg, SubscriberGroup* group) {
  if (group->memberCount == 0) {
    Serial.printf("[GROUP] No members in group %s\n", group->groupId);
    return;
  }

  // Round-robin selection
  Subscriber* selected = group->members[group->nextMember];
  group->nextMember = (group->nextMember + 1) % group->memberCount;

  // Deliver to selected member only
  if (enqueue(&selected->inbox, msg)) {
    Serial.printf("[GROUP] Message %lu routed to %s (round-robin)\n",
                  msg->messageId, selected->clientId);
  }
}

48.4.5 Challenge 5: Implement Flow Control (Advanced)

Add backpressure handling when a subscriber’s inbox is full.

Objective: Pause publishing to slow consumers and resume when they catch up.

Requirements:

  1. Track “in-flight” messages per subscriber
  2. Pause publishing when in-flight exceeds threshold (e.g., 10)
  3. Resume when subscriber acknowledges messages
  4. Implement “receive window” similar to TCP flow control

const int MAX_IN_FLIGHT = 10;

bool canPublishTo(Subscriber* sub) {
    // Exercise: Check if subscriber can receive more messages
    return sub->inFlightCount < MAX_IN_FLIGHT;
}

void applyBackpressure(Subscriber* sub) {
    // Exercise: Pause message delivery to this subscriber
}

void releaseBackpressure(Subscriber* sub) {
    // Exercise: Resume message delivery
}

Implementation Steps:

  1. Add inFlightCount field to Subscriber struct
  2. Increment on enqueue, decrement on acknowledge
  3. Check canPublishTo() before routing
  4. Queue messages at broker level when backpressure active
  5. Drain queued messages when backpressure releases

Expected Output:

[FLOW] Subscriber cloud-service: in-flight=10/10
[FLOW] BACKPRESSURE ACTIVATED for cloud-service
[FLOW] Queuing message 150 at broker (subscriber paused)
[FLOW] Acknowledgment received, in-flight=9/10
[FLOW] BACKPRESSURE RELEASED for cloud-service
[FLOW] Delivering 5 queued messages to cloud-service

Try It: Backpressure Flow Control Simulator

Visualize how backpressure protects slow consumers. Adjust the producer speed, consumer speed, and in-flight threshold to see when backpressure activates and how it affects message delivery.

48.5 Expected Simulation Outcomes

When running the full simulation, observe these behaviors in the Serial Monitor:

48.5.1 1. Topic Matching Demonstration

Topic: sensors/temperature/room1
  vs sensors/#                   -> MATCH
  vs sensors/temperature/+       -> MATCH
  vs alerts/#                    -> no match

48.5.2 2. Queue Operations

Enqueue msg 1: SUCCESS (queue size: 1)
Enqueue msg 2: SUCCESS (queue size: 2)
...
Enqueue msg 6: FAILED (queue size: 5)  <- Queue full!
[QUEUE] DROPPED: Queue full

48.5.3 3. QoS Level Differences

[QoS 0] Fire-and-forget delivery to cloud-service
[QoS 1] At-least-once delivery to temp-monitor
[QoS 1] PUBACK received for message 15
[QoS 2] Step 1: PUBLISH sent (ID=20)
[QoS 2] Step 2: PUBREC received
[QoS 2] Step 3: PUBREL sent
[QoS 2] Step 4: PUBCOMP received - transaction complete

48.5.4 4. Network Outage Handling

[NETWORK] Status changed: OFFLINE
[PERSIST] Storing message 25 for offline delivery
[NETWORK] Status changed: ONLINE
[PERSIST] Network restored - replaying buffered messages

48.5.5 5. Retained Message Delivery

[BROKER] Retained message stored: sensors/temperature/lobby
[BROKER] Subscriber registered: late-joiner -> sensors/temperature/#
[BROKER] Delivering retained message to new subscriber

48.5.6 6. Priority Queue Behavior

Queue full with 3 low-priority messages
Adding critical priority message...
[QUEUE] Dropping low-priority msg 10 for high-priority 15
Queue contents after priority insert:
  [0] ID=11 Priority=0 Topic=sensors/humidity/room1
  [1] ID=12 Priority=0 Topic=sensors/humidity/room1
  [2] ID=15 Priority=3 Topic=alerts/motion/entrance

48.7 Summary

This chapter provided hands-on lab challenges for message queue concepts:

  • Complete Lab Simulation: Wokwi ESP32 code demonstrating all queue concepts
  • Message Expiry: Automatically removing stale messages
  • Dead Letter Queues: Capturing failed deliveries for analysis
  • Deduplication: Preventing duplicate message processing
  • Subscriber Groups: Load-balancing across consumer instances
  • Flow Control: Backpressure handling for slow consumers

Key Takeaways:

  • Production message brokers implement all these patterns
  • Dead letter queues are essential for debugging delivery failures
  • Deduplication is critical for exactly-once semantics with QoS 1
  • Flow control prevents fast producers from overwhelming slow consumers

48.8 Knowledge Check

Common Pitfalls

Message queues that hold 10,000 messages are adequate for 100 msg/sec normal load but overflow in roughly 11 seconds during a 1,000 msg/sec event storm (the queue absorbs only the 900 msg/sec excess over a 100 msg/sec processing rate). Always size queue capacity for peak burst scenarios (alarm storms, batch uploads) with a 10x safety margin beyond projected peak throughput.

Without dead letter queues, poison messages either cause infinite retry loops (consuming resources) or are silently dropped (data loss). Every production queue must have an associated dead letter queue with monitoring and alerting on dead letter message arrival rates.

Acknowledging a message immediately on receipt (before processing) causes message loss if the consumer crashes during processing. Always acknowledge messages only after successfully processing and persisting their content. Use transactions where available for critical IoT data.

A queue with 100,000 messages at 10,000 messages/second processing rate has 10-second average queuing delay. For IoT applications with sub-second latency requirements, large queues are architecturally incompatible. Monitor queue depth as a latency proxy and trigger alerts when depth exceeds time-budget-derived thresholds.

48.10 What’s Next

If you want to…

  • Study message queue fundamentals → read Message Queue Fundamentals
  • Practice with the message queue lab → read Message Queue Lab
  • Learn about pub/sub routing patterns → read Pub/Sub and Topic Routing
  • Study the full protocol bridging overview → read Communication and Protocol Bridging

You’ve completed the protocol bridging message queue series! Apply these concepts in your IoT gateway implementations. Next, explore: