```mermaid
%% fig-alt: Decision tree showing QoS level selection for five fleet message types based on loss tolerance and duplicate impact analysis
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D'}}}%%
flowchart TB
subgraph GPS["GPS Location (every 30s)"]
G1["Can lose it? YES<br/>Next reading arrives in 30s anyway"]
G2["Impact of loss: Dashboard shows<br/>stale position for 30s"]
G3[/"Decision: QoS 0"/]
G1 --> G2 --> G3
end
subgraph DEL["Delivery Confirmed"]
D1["Can lose it? NO<br/>Customer needs notification, billing triggered"]
D2["Would duplicate cause problems? YES<br/>Double billing, customer confusion"]
D3[/"Decision: QoS 2"/]
D1 --> D2 --> D3
end
subgraph PAN["Driver Panic Alert"]
P1["Can lose it? NO<br/>Safety critical, lives at stake"]
P2["Would duplicate cause problems? NO<br/>Multiple alerts better than none"]
P3[/"Decision: QoS 1<br/>(speed > duplicate prevention)"/]
P1 --> P2 --> P3
end
subgraph FUEL["Fuel Level"]
F1["Can lose it? YES<br/>Used for trend analysis, not real-time"]
F2["Impact of loss: Gap in historical data<br/>(interpolatable)"]
F3[/"Decision: QoS 0"/]
F1 --> F2 --> F3
end
subgraph SPD["Speed Violation"]
S1["Can lose it? NO<br/>Compliance/safety record required"]
S2["Would duplicate cause problems? NO<br/>Minor alarm duplication acceptable"]
S3[/"Decision: QoS 1"/]
S1 --> S2 --> S3
end
style G3 fill:#16A085,color:#fff
style D3 fill:#E67E22,color:#fff
style P3 fill:#2C3E50,color:#fff
style F3 fill:#16A085,color:#fff
style S3 fill:#2C3E50,color:#fff
```
1196 MQTT QoS Worked Examples
1196.1 Learning Objectives
By the end of this chapter, you will be able to:
- Apply QoS Selection Frameworks: Use systematic decision processes for real-world message types
- Calculate Resource Impact: Quantify battery, bandwidth, and broker costs for QoS choices
- Design Session Strategies: Configure persistent sessions for fleet-scale IoT deployments
- Handle Edge Cases: Implement adaptive QoS and store-and-forward patterns
- Validate Designs: Verify QoS and session configurations through worked calculations
Foundations:
- MQTT QoS Fundamentals - Basic QoS and session concepts
- MQTT QoS Levels - Technical QoS handshake details
- MQTT Session Management - Session persistence and security

Hands-On:
- MQTT Labs and Implementation - Build MQTT projects
- Simulations Hub - MQTT broker simulations
1196.2 Worked Example: Fleet Tracking QoS Selection
Scenario: You are designing the messaging system for a logistics company tracking 500 delivery trucks. Each truck sends various message types: GPS location updates, delivery confirmations, driver alerts, and fuel level readings. The system uses cellular connectivity which has variable reliability (3G/4G coverage varies by location).
Goal: Select the optimal QoS level for each message type, balancing reliability, battery consumption, and data costs.
What we do: List all message types and their characteristics.
Why: Different messages have different reliability requirements and tolerance for duplicates.
Message inventory:
| Message Type | Frequency | Critical? | Duplicate Impact | Loss Impact |
|---|---|---|---|---|
| GPS location | Every 30s | No | OK (shows same position) | Minor (next update in 30s) |
| Delivery confirmed | Per delivery | Yes | Bad (double billing possible) | Bad (customer not notified) |
| Driver panic alert | Rare (emergency) | Critical | OK (better safe than sorry) | Catastrophic (safety at risk) |
| Fuel level | Every 10 min | No | OK (harmless) | Minor (trend data only) |
| Speed violation | When detected | Yes | Minor (alarm duplication) | Bad (safety compliance) |
What we do: Match each message type to the appropriate QoS level using the decision criteria.
Why: The framework ensures consistent, justified decisions rather than guessing.
Decision questions:
1. Can we afford to lose this message? (Yes -> QoS 0; No -> continue)
2. Would duplicates cause problems? (No -> QoS 1; Yes -> QoS 2)
Analysis per message type: the decision tree at the start of this chapter applies these two questions to each of the five message types.
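The two decision questions can be sketched as a tiny helper function (a minimal illustration of the framework, not part of any MQTT library):

```python
# Minimal sketch of the chapter's two-question QoS selection framework
def select_qos(loss_acceptable: bool, duplicates_harmful: bool) -> int:
    """Map the two decision questions to an MQTT QoS level."""
    if loss_acceptable:
        return 0   # QoS 0: fire-and-forget, next message supersedes this one
    if duplicates_harmful:
        return 2   # QoS 2: exactly-once, duplicates would cause real harm
    return 1       # QoS 1: at-least-once, duplicates tolerated

# Fleet message types from the inventory table
assert select_qos(loss_acceptable=True,  duplicates_harmful=False) == 0  # GPS location
assert select_qos(loss_acceptable=False, duplicates_harmful=True)  == 2  # Delivery confirmed
assert select_qos(loss_acceptable=False, duplicates_harmful=False) == 1  # Panic alert
```

Encoding the questions this way keeps QoS decisions consistent across message types instead of being made ad hoc per publisher.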
What we do: Quantify the battery, bandwidth, and cost impact of our QoS choices.
Why: Understanding the trade-offs helps justify decisions and optimize where possible.
Message overhead calculation:
| Message Type | Count/Day | QoS | Packets/Msg | Total Packets/Day |
|---|---|---|---|---|
| GPS location | 2,880 | 0 | 1 | 2,880 |
| Delivery confirmed | 50 | 2 | 4 | 200 |
| Panic alert | ~0.1 | 1 | 2 | ~0.2 |
| Fuel level | 144 | 0 | 1 | 144 |
| Speed violation | ~10 | 1 | 2 | 20 |
| Total | 3,084 | - | - | 3,244 |
Overhead analysis:
- If everything were QoS 0: 3,084 packets/day
- With our QoS choices: 3,244 packets/day (+5.2% overhead)
- If everything were QoS 2: 12,336 packets/day (+300% overhead)
Cellular data impact (assuming 100 bytes/message):
- Our design: 3,244 x 100 = 324 KB/day per truck
- 500 trucks x 324 KB x 30 days = 4.7 GB/month
- All QoS 2: 18.5 GB/month (4x higher data cost)
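As a sanity check, the overhead arithmetic can be reproduced in a few lines of Python (message counts from the inventory table; packet counts per QoS level are 1, 2, and 4):

```python
# Re-deriving the overhead figures from the inventory table
PACKETS_PER_QOS = {0: 1, 1: 2, 2: 4}   # QoS 0 = 1, QoS 1 = 2, QoS 2 = 4 packets/msg

inventory = [          # (message type, messages/day, chosen QoS)
    ("gps", 2880, 0), ("delivery", 50, 2), ("panic", 0.1, 1),
    ("fuel", 144, 0), ("speed", 10, 1),
]

baseline = sum(n for _, n, _ in inventory)                      # all-QoS-0 packet count
actual = sum(n * PACKETS_PER_QOS[q] for _, n, q in inventory)   # chosen-QoS packet count
overhead_pct = (actual / baseline - 1) * 100

print(round(baseline), round(actual), round(overhead_pct, 1))   # 3084 3244 5.2
```

The ~5.2% packet overhead is the concrete cost of upgrading only the critical messages, versus +300% if everything ran at QoS 2.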
What we do: Consider scenarios that might require QoS adjustments.
Why: Real-world conditions may differ from typical operation.
Edge case: Poor cellular coverage
When truck enters area with weak signal:
```python
# Adaptive QoS for GPS updates: promote to QoS 1 when delivery is in doubt
QOS_0, QOS_1 = 0, 1

def get_gps_qos(signal_strength, time_since_last_update):
    if time_since_last_update > 300:   # 5 min since last successful update
        return QOS_1                   # Promote GPS to QoS 1 (need confirmation)
    elif signal_strength < -110:       # Very weak signal (dBm)
        return QOS_1                   # Upgrade to ensure delivery
    else:
        return QOS_0                   # Normal operation
```

Edge case: Delivery confirmation retry
If the QoS 2 handshake fails after 3 attempts:

```python
# Store-and-forward pattern (sketch using the paho-mqtt client API)
import time

def confirm_delivery(delivery_id, confirmation_data, mqtt_client, local_buffer):
    for attempt in range(3):
        info = mqtt_client.publish(
            f"fleet/truck123/delivery/{delivery_id}/confirmed",
            payload=confirmation_data,
            qos=2,
        )
        info.wait_for_publish(timeout=10)   # Block until the QoS 2 handshake completes
        if info.is_published():
            return True
        time.sleep(5 * (2 ** attempt))      # Exponential backoff: 5 s, 10 s, 20 s
    # All attempts failed: store locally for later sync
    local_buffer.save(delivery_id, confirmation_data)
    return False
```

Outcome: Optimized QoS configuration balancing reliability and efficiency.
Final QoS assignments:
| Message Type | QoS | Rationale |
|---|---|---|
| GPS location | 0 | High frequency, loss acceptable, next update imminent |
| Delivery confirmed | 2 | Business-critical, duplicates cause billing issues |
| Driver panic alert | 1 | Safety-critical, duplicates acceptable, speed matters |
| Fuel level | 0 | Trend data only, interpolatable, battery savings |
| Speed violation | 1 | Compliance record needed, duplicates minor annoyance |
Key design decisions:
1. 95% of messages use QoS 0 - GPS and fuel readings dominate volume
2. QoS 2 only where duplicates cause harm - Delivery confirmation (billing)
3. QoS 1 for safety events - Reliability without QoS 2 latency overhead
4. Adaptive QoS in poor coverage - Promote GPS to QoS 1 when stale
5. Local buffering - Store failed deliveries for eventual consistency
Cost-benefit summary:
- Battery: 5.2% more messages than all-QoS-0 (acceptable for reliability)
- Data cost: 4.7 GB/month vs 18.5 GB if all-QoS-2 (75% savings)
- Reliability: Critical messages guaranteed, non-critical efficiently sent
1196.4 Worked Example: QoS Selection for Smart Door Lock System
Scenario: A commercial building deploys 50 smart door locks that must respond to unlock commands from a mobile app. The security team needs to ensure commands are executed reliably while minimizing battery drain on battery-backup locks.
Given:
- 50 door locks, each with 4x AA battery backup (2,400 mAh total at 6V)
- Unlock commands: ~200 per lock per day (employees entering/exiting)
- Status updates: locks publish state every 30 seconds
- Network: Enterprise Wi-Fi with 2% packet loss
- Critical requirement: No duplicate unlock commands (security audit trail)
- MQTT packet overhead: QoS 0 = 2 bytes, QoS 1 = 4 bytes, QoS 2 = 6 bytes
Steps:
- Analyze message types and requirements:
- Unlock commands (app to lock): Cannot duplicate (audit issues), must confirm execution
- Lock state (lock to app): Current status, duplicates harmless, high frequency
- Battery alerts (lock to app): Important but duplicates acceptable
- Calculate QoS overhead for unlock commands (200/day):
- QoS 1: PUBLISH (4 bytes) + PUBACK (4 bytes) = 8 bytes, 2 messages
- QoS 2: PUBLISH + PUBREC + PUBREL + PUBCOMP = 24 bytes, 4 messages
- With 2% packet loss, QoS 1 has 2% chance of duplicate per command
- Daily duplicate risk with QoS 1: 200 x 0.02 = 4 potential duplicate unlocks
- Calculate energy impact per command:
- Radio power: 80 mA active, 6V supply
- Message transmission: 5 ms per message at 250 kbps
- QoS 1 energy: 80 mA x 6V x 0.01s (2 msgs x 5ms) = 4.8 mJ per command
- QoS 2 energy: 80 mA x 6V x 0.02s (4 msgs x 5ms) = 9.6 mJ per command
- Calculate daily energy for commands:
- QoS 1: 200 commands x 4.8 mJ = 960 mJ = 0.96 J/day for commands
- QoS 2: 200 commands x 9.6 mJ = 1,920 mJ = 1.92 J/day for commands
- Additional QoS 2 cost: 0.96 J/day
- Calculate status message energy (2,880/day at 30-sec intervals):
- Using QoS 0 (fire-and-forget): 2 bytes overhead, 1 message
- Energy per status: 80 mA x 6V x 0.005s = 2.4 mJ
- Daily status energy: 2,880 x 2.4 mJ = 6.9 J/day
- Calculate total battery life:
- Battery capacity: 2,400 mAh x 6V = 14.4 Wh = 51,840 J
- Daily consumption (QoS 2 commands + QoS 0 status): 1.92 + 6.9 = 8.82 J/day
- Battery backup duration: 51,840 J / 8.82 J/day = 5,877 days = 16 years
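The energy arithmetic above can be checked with a short script (radio current, supply voltage, and per-packet airtime taken from the Given list):

```python
# Verifying the door-lock energy and battery-life figures
RADIO_A, SUPPLY_V, PACKET_S = 0.080, 6.0, 0.005   # 80 mA, 6 V, 5 ms per packet

def energy_j(packets: int) -> float:
    """Transmission energy in joules for a message using `packets` packets."""
    return RADIO_A * SUPPLY_V * PACKET_S * packets

commands_daily = 200 * energy_j(4)      # QoS 2: 4-packet handshake -> 1.92 J/day
status_daily = 2880 * energy_j(1)       # QoS 0: 1 packet each -> ~6.91 J/day
total_daily = commands_daily + status_daily

battery_j = 2.4 * 6.0 * 3600            # 2,400 mAh at 6 V = 51,840 J
print(round(total_daily, 2))            # ~8.83 J/day
print(round(battery_j / total_daily / 365, 1))   # ~16.1 years of backup
```

Running the numbers confirms that the QoS 2 upgrade for commands barely moves the total: status messages dominate the daily energy budget.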
Result: Use QoS 2 for unlock commands (zero duplicates, audit compliance) and QoS 0 for status updates (high frequency, duplicates harmless). Daily energy cost is 8.82 J, providing 16 years of battery backup. The additional 0.96 J/day for QoS 2 commands (vs QoS 1) is negligible compared to status message energy.
Key Insight: QoS 2's 2x energy overhead (9.6 mJ vs 4.8 mJ per command) seems expensive, but commands represent only about 22% of daily energy (1.92 J of 8.82 J); status updates dominate at roughly 78%. For security-critical commands where duplicates cause audit violations, QoS 2's guarantee is worth the modest energy cost. Never use QoS 1 for commands where duplicates have real-world consequences.
1196.5 Worked Example: Persistent Session Sizing for Fleet Management
Scenario: A logistics company manages 1,000 delivery trucks, each with an MQTT client that sleeps during overnight hours. The broker must queue messages for offline trucks and deliver them when trucks reconnect in the morning.
Given:
- 1,000 trucks, each offline 10 hours/night (10 PM to 8 AM)
- During offline period, central system sends:
- Route updates: 1 per truck, 2 KB payload
- Delivery manifests: 5 per truck, 500 bytes each
- System alerts: 20 total (broadcast), 100 bytes each
- Persistent session enabled (Clean Session = false)
- QoS 1 for all queued messages (delivery confirmation required)
- Broker: EMQX with 16 GB RAM
Steps:
- Calculate per-truck queued message volume:
- Route updates: 1 x 2,048 bytes = 2,048 bytes
- Delivery manifests: 5 x 500 bytes = 2,500 bytes
- System alerts: 20 x 100 bytes = 2,000 bytes
- Total per truck: 6,548 bytes = 6.4 KB
- Calculate total broker queue memory:
- Message storage: 1,000 trucks x 6.4 KB = 6.4 MB payload data
- MQTT metadata per message: 200 bytes (topic, QoS, timestamp, client ID reference)
- Messages per truck: 1 + 5 + 20 = 26 messages
- Metadata total: 1,000 trucks x 26 messages x 200 bytes = 5.2 MB
- Queue memory: 6.4 MB + 5.2 MB = 11.6 MB
- Calculate persistent session state memory:
- Per-session overhead: 5 KB (client ID, subscriptions, connection metadata)
- Session state: 1,000 trucks x 5 KB = 5 MB
- Total session memory: 5 MB + 11.6 MB = 16.6 MB
- Calculate reconnection storm impact:
- All 1,000 trucks reconnect between 7:50-8:10 AM (20-minute window)
- Connection rate: 1,000 / 20 minutes = 50 connections/minute = 0.83/second
- Queued message delivery: 26,000 messages in 20 minutes = 1,300 msg/min = 22 msg/sec
- Peak bandwidth: 22 msg/sec x ~252 bytes average payload (6,548 bytes / 26 messages) = ~5.5 KB/sec = ~44 kbps
- Verify broker capacity:
- EMQX memory usage: 16.6 MB / 16 GB = 0.1% (well within limits)
- Connection handling: 50/minute is trivial (EMQX handles 10,000/sec)
- Message throughput: 22 msg/sec is trivial (EMQX handles 100,000/sec)
- Calculate queue expiry settings:
- Maximum offline period: 10 hours
- Safety margin: 2x = 20 hours
- Recommended message_expiry_interval: 72,000 seconds (20 hours)
- Disk spillover threshold: 100 MB (current 16.6 MB is about 17% of threshold)
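The memory sizing above can be re-derived in a few lines (byte counts from the Given list; using decimal megabytes, so the total lands a hair above the 16.6 MB quoted in the steps):

```python
# Re-deriving the broker memory totals for 1,000 offline trucks
TRUCKS = 1000
payload_per_truck = 1 * 2048 + 5 * 500 + 20 * 100   # 6,548 bytes/truck
messages_per_truck = 1 + 5 + 20                      # 26 queued messages/truck
metadata_per_truck = messages_per_truck * 200        # 200 B metadata per message

queue_bytes = TRUCKS * (payload_per_truck + metadata_per_truck)
session_bytes = TRUCKS * 5 * 1000                    # ~5 KB session state per truck
total_mb = (queue_bytes + session_bytes) / 1e6
print(round(total_mb, 1))   # ~16.7 MB total broker memory
```

Either way the result is well under 0.2% of a 16 GB broker, which is the point: queued-message memory is rarely the bottleneck at this scale.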
Result: Persistent sessions for 1,000 trucks require only 16.6 MB broker memory, handling 26,000 queued messages during 10-hour offline periods. Morning reconnection storm (50/minute) and message delivery (22 msg/sec) are within 1% of broker capacity. Set message expiry to 20 hours and disk spillover at 100 MB.
Key Insight: Persistent sessions seem expensive but scale efficiently - 1,000 trucks need only 16.6 MB total. The real cost is reconnection storms: when trucks reconnect simultaneously, message delivery spikes. Stagger reconnection times (e.g., random 0-10 minute delay) to spread load. Without persistent sessions, trucks would miss route updates and require manual re-sync, costing far more in operational overhead than the 16.6 MB memory investment.
1196.6 Worked Example: QoS Level Selection for Medical Device Telemetry
Scenario: A hospital deploys 100 patient monitors that transmit vital signs (heart rate, blood oxygen, blood pressure) to a central monitoring station. Nurses need real-time alerts for critical values, and all readings must be logged for medical records. You must select appropriate QoS levels balancing reliability with device battery life.
Given:
- 100 patient monitors, battery-powered (1,500 mAh, 3.7V Li-ion)
- Vital sign readings: 3 parameters x 1 reading/second = 3 msg/sec per device
- Critical alert threshold: heart rate < 50 or > 120 BPM triggers immediate alert
- Regulatory requirement: all vital signs must be logged (no data loss for records)
- Network: Hospital Wi-Fi with 0.5% packet loss
- Device radio power: 120 mA transmit, 15 mA idle
- Message transmission time: 8 ms for QoS 0, 25 ms for QoS 1, 50 ms for QoS 2
Steps:
- Categorize message types by criticality:
- Routine vitals (99% of messages): Regular readings within normal range
- Critical alerts (<1% of messages): Out-of-range values requiring immediate attention
- Acknowledgment requests (rare): Nurse confirms alert received
- Analyze delivery requirements per message type:
```mermaid
%% fig-alt: Healthcare IoT QoS decision analysis showing three message types with loss impact, duplicate impact, latency tolerance, and resulting QoS level
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D'}}}%%
flowchart TB
subgraph ROUTINE["Routine Vitals (99% of messages)"]
R1["Loss impact: Minor<br/>(next reading in 1 second)"]
R2["Duplicate impact: Harmless<br/>(logging handles duplicates)"]
R3["Latency tolerance:<br/>1-2 seconds acceptable"]
R4[/"Decision: QoS 0<br/>(fire-and-forget)"/]
R1 --> R2 --> R3 --> R4
end
subgraph CRITICAL["Critical Alerts (<1% of messages)"]
C1["Loss impact: SEVERE<br/>(patient safety at risk)"]
C2["Duplicate impact: Acceptable<br/>(multiple alerts better than none)"]
C3["Latency tolerance:<br/>Must arrive within 500 ms"]
C4[/"Decision: QoS 1<br/>(guaranteed delivery, duplicates OK)"/]
C1 --> C2 --> C3 --> C4
end
subgraph ACK["Alert Acknowledgments (rare)"]
A1["Loss impact: Bad<br/>(nurse thinks alert not seen)"]
A2["Duplicate impact: Confusing<br/>(multiple ACK confirmations)"]
A3["Latency tolerance:<br/>5 seconds acceptable"]
A4[/"Decision: QoS 2<br/>(exactly once, no duplicate confusion)"/]
A1 --> A2 --> A3 --> A4
end
style R4 fill:#16A085,color:#fff
style C4 fill:#2C3E50,color:#fff
style A4 fill:#E67E22,color:#fff
```
- Calculate energy consumption per QoS level:
- QoS 0 energy: 120 mA x 8 ms = 0.96 mAs per message
- QoS 1 energy: 120 mA x 25 ms = 3.0 mAs per message
- QoS 2 energy: 120 mA x 50 ms = 6.0 mAs per message
- Calculate daily energy with mixed QoS strategy:
- Routine vitals (QoS 0): 3 msg/sec x 86,400 sec x 0.96 mAs = 248,832 mAs = 69.1 mAh
- Critical alerts (QoS 1): 10 alerts/day x 3.0 mAs = 30 mAs = 0.008 mAh
- Acknowledgments (QoS 2): 10/day x 6.0 mAs = 60 mAs = 0.017 mAh
- Total daily: 69.1 mAh for MQTT transmission
- Battery life (1,500 mAh, 50% for MQTT): 750 / 69.1 = 10.9 days
- Compare with all-QoS-1 approach (regulatory conservative):
- All messages QoS 1: 259,200 msg/day x 3.0 mAs = 777,600 mAs = 216 mAh
- Battery life: 750 / 216 = 3.5 days (3x more frequent charging)
- Address logging requirement (no data loss):
- Use persistent session (Clean Session = false) for historian subscriber
- Broker queues missed messages during historian restarts
- Historian publishes acknowledgment after writing to database
- If historian offline > 1 hour, alert IT staff (not QoS problem)
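The battery-life comparison above can be verified directly (transmit current and per-QoS airtime from the Given list; including the tiny alert and acknowledgment energy nudges the 10.9-day figure down to about 10.8):

```python
# Mixed-QoS vs all-QoS-1 battery life for the patient monitors
TX_MA = 120
AIRTIME_S = {0: 0.008, 1: 0.025, 2: 0.050}   # seconds on air per message by QoS

def mah(count: float, qos: int) -> float:
    """Daily transmit charge in mAh for `count` messages at a given QoS."""
    return count * TX_MA * AIRTIME_S[qos] / 3600

mixed = mah(3 * 86400, 0) + mah(10, 1) + mah(10, 2)   # routine + alerts + ACKs
all_q1 = mah(3 * 86400, 1)                            # conservative uniform QoS 1
budget = 750                                          # 50% of 1,500 mAh for MQTT
print(round(budget / mixed, 1), round(budget / all_q1, 1))   # ~10.8 vs ~3.5 days
```

The roughly 3x battery-life gap comes almost entirely from routing the 259,200 daily routine readings through QoS 0 instead of QoS 1.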
Result: Use QoS 0 for routine vital signs (99% of traffic), QoS 1 for critical alerts, and QoS 2 for nurse acknowledgments. This mixed strategy provides 10.9 days battery life versus 3.5 days with all-QoS-1. Regulatory logging is ensured by persistent sessions on the historian subscriber, not by upgrading publisher QoS.
Key Insight: QoS selection should be per-message-type, not per-device. Critical alerts representing <1% of traffic can use QoS 1 without significantly impacting battery life. The logging requirement (no data loss) is solved at the subscriber side with persistent sessions, not by forcing publishers to use higher QoS. Never use QoS 2 for high-frequency telemetry - the 6x energy cost destroys battery life. Reserve QoS 2 for rare, non-idempotent operations like acknowledgments where duplicate confusion matters.
1196.7 Worked Example: Session Persistence for Sleep-Wake IoT Sensors
Scenario: An agricultural monitoring system uses 200 soil moisture sensors that sleep for 55 minutes, wake for 5 minutes to transmit data and receive commands, then return to sleep. Sensors must receive any pending irrigation commands issued while they were asleep.
Given:
- 200 sensors with solar-powered batteries
- Sleep/wake cycle: 55 min sleep, 5 min active (5/60 = ~8% duty cycle)
- During active period: publish 3 readings, check for commands
- Commands issued centrally: ~50 per day total across all sensors
- Command types: calibrate (safe to duplicate), irrigate (must not duplicate)
- Broker: Mosquitto with persistent message storage enabled
- Maximum acceptable command delay: 60 minutes (one full cycle)
Steps:
Configure session persistence for each sensor:
```cpp
// ESP32 sensor MQTT configuration
const char* clientId = "sensor-042";  // Stable ID (from chip MAC)
bool cleanSession = false;            // Persistent session - broker saves subscriptions

// Subscribe to command topic on every wake
client.subscribe("farm/zone-b/sensor-042/command", 1);  // QoS 1
```

Calculate broker session storage per sensor:
- Session state: 5 KB (client ID, subscriptions, connection metadata)
- Queued commands during sleep: avg 0.25 commands x 200 bytes = 50 bytes
- Total per sensor: 5.05 KB
- Total for 200 sensors: 1,010 KB = 1 MB
Calculate message queue depth requirements:
- Commands per sensor per sleep cycle: 50 commands/day / 200 sensors / 24 cycles/day = ~0.01 per cycle
- Maximum queue (worst case all to one sensor): 5 commands
- Mosquitto setting: max_queued_messages 10 (per client)
Design command delivery flow:
Central system issues a command at 10:15 AM on topic farm/zone-b/sensor-042/command:

```
{"action":"irrigate","duration":300}
```

Sensor-042 timeline:
- 10:00 AM - Goes to sleep
- 10:15 AM - Command arrives, broker queues it (sensor offline)
- 10:55 AM - Sensor wakes, connects with same clientId
- 10:55 AM - Broker detects persistent session, sends queued command
- 10:55 AM - Sensor receives command, starts irrigation
- 10:56 AM - Sensor publishes confirmation, goes back to sleep

Latency: 40 minutes (acceptable, within the 60-minute requirement)

Handle QoS for different command types:
```python
# Central command publisher

# Calibrate command - QoS 1 (duplicates safe, sensor recalibrates)
client.publish("farm/zone-b/sensor-042/command",
               '{"action":"calibrate","value":2.5}',
               qos=1, retain=False)

# Irrigate command - QoS 2 (duplicates waste water, damage crops)
client.publish("farm/zone-b/sensor-042/command",
               '{"action":"irrigate","duration":300}',
               qos=2, retain=False)
```

Calculate session expiry settings:
- Maximum offline time: 55 minutes (normal sleep)
- Safety margin for extended sleep (low battery): 24 hours
- Mosquitto setting: persistent_client_expiration 1d
- MQTT 5.0: Session Expiry Interval = 86,400 seconds (24 hours)
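Collecting the broker-side settings from the steps above, a minimal Mosquitto configuration fragment might look like this (a sketch; the persistence_location path is an assumption for a typical Linux install):

```
# mosquitto.conf - persistence for sleeping sensors (sketch)
persistence true
# assumed storage path for queued messages and session state
persistence_location /var/lib/mosquitto/
# per-client queue depth from the queue-sizing step
max_queued_messages 10
# drop sessions idle longer than 24 hours
persistent_client_expiration 1d
```

Note that Mosquitto expects comments on their own lines, and persistent_client_expiration only applies to clients that connected with Clean Session = false.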
Handle sensor replacement scenario:
```python
import paho.mqtt.client as mqtt

# When replacing sensor hardware, clear the old session
def decommission_sensor(sensor_id, broker, port=1883):
    # Connect with the same clientId but clean_session=True to wipe broker state
    temp_client = mqtt.Client(client_id=sensor_id, clean_session=True)
    temp_client.connect(broker, port)
    temp_client.disconnect()
    # Old session cleared, queued messages discarded
    print(f"Session cleared for {sensor_id}")
```
Result: Configure sensors with Clean Session = false and stable client IDs derived from hardware MAC. Broker queues commands during 55-minute sleep periods, delivering them when sensors wake. Use QoS 1 for idempotent commands (calibrate) and QoS 2 for non-idempotent commands (irrigate). Set session expiry to 24 hours to handle extended low-battery sleep. Total broker memory for 200 sensors: ~1 MB.
Key Insight: Persistent sessions transform MQTT into a store-and-forward system for sleeping devices. The key requirements are: (1) stable client IDs - random IDs break session restoration, (2) subscribe on every wake - subscriptions persist but re-subscribing is harmless and handles broker restarts, (3) match QoS to command idempotency - irrigate twice wastes water while calibrate twice is harmless. Without persistent sessions, sensors would need to poll for commands or miss them entirely, requiring complex application-level queuing that MQTT already provides for free.
1196.8 Summary
This chapter provided detailed worked examples for MQTT QoS and session configuration:
- Fleet Tracking: 95% of messages use QoS 0 for efficiency, QoS 2 only for delivery confirmations where duplicates cause billing issues, QoS 1 for safety alerts where speed matters more than duplicate prevention
- Smart Door Locks: QoS 2 for unlock commands (audit compliance, no duplicates), QoS 0 for status updates (high frequency, duplicates harmless), battery impact is dominated by status messages not commands
- Fleet Session Sizing: 1,000 trucks need only 16.6 MB broker memory, reconnection storms are the real challenge, stagger reconnections with random delays
- Medical Telemetry: Mixed QoS by message type provides 3x better battery life than uniform QoS 1, logging requirements are solved at subscriber side with persistent sessions
- Sleep-Wake Sensors: Persistent sessions enable store-and-forward for sleeping devices, stable client IDs are critical, QoS matches command idempotency
1196.9 What's Next
The next chapter, MQTT Labs and Implementation, covers practical hands-on projects including ESP32 publishers with DHT22 sensors, Python MQTT dashboards, home automation systems with motion detection and lighting control, and secure MQTT deployments with TLS encryption and authentication.