1340  Edge Computing Quiz: Power and Optimization

1340.1 Learning Objectives

⏱️ ~35 min | ⭐⭐ Intermediate | 📋 P10.C09.U03

By the end of this chapter, you will be able to:

  • Analyze Power Trade-offs: Calculate battery life improvements from deep sleep optimization
  • Evaluate Data Quality: Compute multi-factor quality scores for edge processing decisions
  • Design Priority Systems: Architect dual-path processing for critical vs massive IoT
  • Calculate TCO/ROI: Perform total cost of ownership and return on investment analysis
  • Plan Storage Architecture: Design tiered storage strategies for data accumulation
  • Implement Security: Apply whitelist-based access control with fail-closed policies

1340.2 Quiz: Power and System Optimization

Question: A remote environmental monitoring station uses a 2500 mAh battery. Power profile: active 25 mA, transmit 120 mA, sleep 1 mA, deep sleep 0.01 mA. Current design uses sleep mode (1 mA) between readings. Switching to deep sleep would require 2-second wake-up time (at 25 mA) vs 0.1-second for sleep mode. If readings occur every 10 minutes, should they switch to deep sleep?

💡 Explanation: This deep sleep trade-off analysis is critical for Level 1 Device Power Management:

Current Design - Sleep Mode (1 mA):

Activity per 10-minute cycle:

  • Active (sensing): 0.1 seconds at 25 mA
  • Transmit: 0.5 seconds at 120 mA
  • Sleep: 600 − 0.1 − 0.5 = 599.4 seconds at 1 mA

Simplified Dominant-Term Analysis:

Sleep mode: Sleep current dominates:

  • 599.4 seconds at 1 mA per 600-second cycle
  • Average ≈ 1 mA
  • Life = 2500 ÷ 1 = 2,500 hours ≈ 104 days ✓

Deep sleep mode: Wake-up penalty is negligible compared to sleep savings:

  • Wake-up cost per cycle: 2 sec × 25 mA = 50 mA·s
  • Deep sleep benefit per cycle: 597.4 sec × (1 − 0.01) mA = 591.4 mA·s saved
  • Net benefit: 541.4 mA·s per cycle

  • Effective average current: Let’s compute more precisely:
    • Active phases: (50 + 2.5 + 60) mA·s ÷ 600 s = 0.1875 mA
    • Deep sleep: 597.4 s × 0.01 mA ÷ 600 s = 0.0100 mA
    • Total: 0.1975 mA ≈ 0.2 mA
  • Life = 2500 ÷ 0.1975 ≈ 12,660 hours ≈ 527 days ≈ 1.4 years
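
The duty-cycle arithmetic above can be sketched as a small Python model (phase durations and currents are taken directly from the scenario; nothing else is assumed):

```python
def avg_current_ma(phases, cycle_s):
    """Duty-cycle average current in mA: sum(duration_s * current_mA) / cycle_s."""
    return sum(t * i for t, i in phases) / cycle_s

CYCLE_S = 600.0      # one reading every 10 minutes
BATTERY_MAH = 2500.0

# (duration_s, current_mA) phases; the remainder of the cycle is the idle state
sleep_mode = [(0.1, 25.0), (0.5, 120.0), (599.4, 1.0)]
deep_sleep = [(2.0, 25.0), (0.1, 25.0), (0.5, 120.0), (597.4, 0.01)]  # 2 s wake-up

i_sleep = avg_current_ma(sleep_mode, CYCLE_S)   # ~1.10 mA
i_deep = avg_current_ma(deep_sleep, CYCLE_S)    # ~0.20 mA
life_sleep_days = BATTERY_MAH / i_sleep / 24    # ~94 days
life_deep_days = BATTERY_MAH / i_deep / 24      # ~527 days
```

Note that the exact sleep-mode average (≈1.10 mA) sits slightly above the simplified 1 mA figure, so the precise lifetimes land near 94 and 527 days.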

Deep Sleep Decision Matrix:

When to use deep sleep:

  ✓ Long sleep intervals (10 minutes): Wake-up penalty is negligible
  ✓ Low duty cycle (<1% active): Sleep current dominates power budget
  ✓ Remote deployments: Extended battery life critical

When NOT to use deep sleep:

  ✗ Frequent wake-ups (< 1 minute): Wake-up penalty significant
  ✗ High duty cycle (>10% active): Active current dominates anyway
  ✗ Low-latency requirements: 2-second wake-up too slow

Power Consumption Profiles:

  • Deep sleep: 0.01 mA
  • Active mode: 25.0 mA
  • Transmit mode: 120.0 mA

Battery Lifetime Comparison (illustrative duty cycles):

  • Event-Driven (avg): 686.3 days (1.88 years)
  • High Frequency: 9.0 days (0.02 years)

Real-World Impact:

The roughly 5x improvement means:

  • Deployment viability: 104 days means ~3.5 battery changes/year, while ~1.4 years drops that below one change per year
  • Cost savings: $25/replacement × ~2.8 avoided changes/year × 1000 devices × 5 years ≈ $350,000
  • Operational benefit: Far fewer technician visits to remote sites

The critical lesson: For low-duty-cycle IoT (readings every 10+ minutes), the wake-up penalty (2 seconds) is negligible compared to the cumulative sleep savings (598 seconds per cycle × 99% current reduction). Always use deep sleep for infrequent sensing applications.

Question: A data quality framework at Level 3 edge gateway computes quality scores based on battery voltage (33% weight), signal strength (33% weight), and data freshness (34% weight). A reading has: battery 3.0V (rated 2.0-3.3V), signal -75 dBm (range -90 to -60), age 1800 seconds (decay over 3600 seconds). What is the quality score?

💡 Explanation: This quality scoring demonstrates Level 3 Edge Processing data assessment:

Quality Score Computation:

The quality score (0-1) is computed from three factors:

  1. Battery Score: min(1.0, voltage / 3.3) - normalized battery voltage
  2. Signal Score: min(1.0, (signal_strength + 90) / 30) - normalized signal strength (-90 to -60 dBm range)
  3. Freshness Score: max(0.0, 1.0 - (age_seconds / 3600)) - data age with 1-hour decay period

Final Score: Average of the three component scores (the 33/33/34 weights are close enough to equal that a plain average matches the weighted sum to two decimal places)

Component Score Calculations:

1. Battery Score (33% weight):

Battery voltage: 3.0V
Rated range: 2.0V (depleted) to 3.3V (full)

Using code formula:
battery_score = 3.0 / 3.3 = 0.909

2. Signal Strength Score (33% weight):

Signal: -75 dBm
Range: -90 dBm (weak) to -60 dBm (strong)

Using code formula:
signal_score = (-75 + 90) / 30 = 15 / 30 = 0.500

3. Freshness Score (34% weight):

Data age: 1800 seconds = 30 minutes
Decay period: 3600 seconds = 60 minutes

freshness_score = max(0.0, 1 - (1800 / 3600)) = 0.500

Overall Quality Score:

quality_score = (0.909 + 0.500 + 0.500) / 3
quality_score = 1.909 / 3 = 0.636 ≈ 0.64 ✓
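
The three component formulas can be combined into one scoring function; this is a minimal sketch using the normalization constants from the question (3.3 V full scale, −90 dBm floor, 30 dBm span, 1-hour decay):

```python
def quality_score(voltage_v, signal_dbm, age_s,
                  v_full=3.3, sig_min=-90.0, sig_span=30.0, decay_s=3600.0):
    """Equal-weight average of battery, signal, and freshness scores in [0, 1]."""
    battery = min(1.0, voltage_v / v_full)
    signal = min(1.0, max(0.0, (signal_dbm - sig_min) / sig_span))
    freshness = max(0.0, 1.0 - age_s / decay_s)
    return (battery + signal + freshness) / 3.0

score = quality_score(3.0, -75.0, 1800.0)  # ≈ 0.64
```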

Quality Score Interpretation:

From the implementation behavior:

  • 0.0 - 0.4: Poor quality → Filter out or deprioritize
  • 0.4 - 0.7: Acceptable quality → Process normally ✓
  • 0.7 - 0.9: Good quality → Priority processing
  • 0.9 - 1.0: Excellent quality → Critical data, immediate action
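
Those bands map naturally onto a small dispatch helper; the threshold values follow the interpretation above, while the function name and return strings are illustrative:

```python
def quality_band(score: float) -> str:
    """Map a 0-1 quality score to a processing decision band."""
    if score < 0.4:
        return "poor: filter out or deprioritize"
    if score < 0.7:
        return "acceptable: process normally"
    if score < 0.9:
        return "good: priority processing"
    return "excellent: immediate action"
```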

Why Each Factor Matters:

Battery (0.909 → Good):

  • High voltage → device healthy, readings trustworthy
  • Low voltage → sensor drift, unreliable ADC conversions
  • Below 2.5V → flag device for battery replacement

Signal (-75 dBm → Borderline):

  • Strong signal (> -70 dBm) → reliable transmission, low packet loss
  • Medium signal (-75 to -80) → acceptable with retry logic
  • Weak signal (< -85 dBm) → high error rate, may need retransmission

Freshness (30 min → Acceptable):

  • Recent (< 5 min) → relevant for real-time control decisions
  • Moderate (5-30 min) → good for trend analysis
  • Stale (> 1 hour) → historical value only, not for immediate decisions

Real-World Application:

For our score of 0.64:

  • Action: Process and include in aggregation
  • Alert: No immediate action, but monitor battery trend
  • Transmission: Bundle with other readings (not urgent)
  • Storage: Flag quality score for downstream analytics
  • Decision-making: Suitable for non-critical analytics, not for emergency shutdowns

This multi-factor quality scoring is essential for Veracity (one of the 4 Vs) in big data, ensuring only trustworthy data influences critical decisions.

Question: An industrial facility has 300 sensors: 100 critical safety sensors (must process 100% of data, latency < 100ms) and 200 non-critical monitoring sensors (can tolerate 20% data loss, latency < 5 seconds). The edge gateway has limited CPU. How should Level 3 evaluation prioritize?

💡 Explanation: This priority-based edge processing addresses Critical IoT vs Massive IoT requirements:

From the text Requirements:

%% fig-alt: "Comparison diagram showing Critical IoT requirements versus Massive IoT requirements. Critical IoT demands high reliability (99.999%), low latency (under 100ms), and safety-critical operation. Massive IoT prioritizes low cost (ultra-low), energy efficiency (years of battery life), and scalability (millions of devices). This illustrates the fundamental trade-offs in IoT system design between performance and cost/scale."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#ecf0f1'}}}%%
flowchart LR
    subgraph Critical["Critical IoT"]
        C1[High Reliability<br/>99.999%]
        C2[Low Latency<br/><100ms]
        C3[Safety-Critical]
    end

    subgraph Massive["Massive IoT"]
        M1[Low Cost<br/>Ultra-low]
        M2[Energy Efficient<br/>Years battery]
        M3[Scalability<br/>Millions devices]
    end

    style C1 fill:#E67E22,stroke:#2C3E50,color:#fff
    style C2 fill:#E67E22,stroke:#2C3E50,color:#fff
    style C3 fill:#E67E22,stroke:#2C3E50,color:#fff
    style M1 fill:#16A085,stroke:#2C3E50,color:#fff
    style M2 fill:#16A085,stroke:#2C3E50,color:#fff
    style M3 fill:#16A085,stroke:#2C3E50,color:#fff

Optimal Architecture - Dual-Path Processing:

The gateway uses separate queues by priority:

  • Critical queue: Fixed-size deque (100 max) for immediate processing
  • Normal buffer: Dynamic list with adaptive sampling

Processing logic:

  • Critical sensors → Process immediately (bypass buffer, zero-delay path)
  • Non-critical sensors → Buffered with sampling (drop 20% during overload)

Why Option A is Correct:

Critical Sensors (100 devices) - Real-Time Path:

  1. Bypass queue: No buffering delay
  2. Dedicated CPU allocation: Reserve 50% CPU for critical processing
  3. Immediate evaluation: Check safety thresholds instantly
  4. Priority transmission: LoRa high-priority channel or dedicated network
  5. Guaranteed latency: < 100ms from sensor to decision

Non-Critical Sensors (200 devices) - Best-Effort Path:

  1. FIFO buffer: 1000-element buffer for smoothing bursts
  2. Adaptive sampling: During CPU overload, sample 80% (drop 20%)
  3. Batch processing: Process in groups of 50 every 5 seconds
  4. Aggregation: Combine into statistical summaries
  5. Acceptable latency: < 5 seconds, as specified
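
A minimal sketch of this dual-path design, assuming the queue sizes and 20% drop fraction described above (class and method names are illustrative, not from the source):

```python
import random
from collections import deque

class DualPathGateway:
    """Sketch: critical readings bypass buffering; non-critical readings
    are buffered and sampled down under overload."""

    def __init__(self, drop_fraction=0.2, buffer_size=1000):
        self.critical_queue = deque(maxlen=100)   # fixed-size critical queue
        self.normal_buffer = deque(maxlen=buffer_size)
        self.drop_fraction = drop_fraction

    def ingest(self, sensor_id, value, critical, overloaded=False):
        if critical:
            # Zero-delay path: evaluate safety thresholds immediately
            return self.process_now(sensor_id, value)
        if overloaded and random.random() < self.drop_fraction:
            return None  # adaptive sampling: drop ~20% under load
        self.normal_buffer.append((sensor_id, value))
        return "buffered"

    def process_now(self, sensor_id, value):
        # Placeholder for threshold checks and priority transmission
        return ("processed", sensor_id, value)
```

In a real deployment, `process_now` would run on a reserved CPU budget while the normal buffer is drained in periodic batches.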

CPU Resource Allocation:

%% fig-alt: "CPU resource allocation diagram showing total 100% CPU split between Critical Path with 50% reserved for real-time processing of 100 sensors achieving under 100ms latency, and Massive Path with 50% best-effort for batch processing of 200 sensors in 5-second batches. This dual-path architecture ensures guaranteed performance for critical operations while efficiently handling high-volume monitoring."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#ecf0f1'}}}%%
flowchart TB
    CPU[Total CPU<br/>100%]

    CPU --> Critical[Critical Path<br/>50% Reserved<br/>Real-Time]
    CPU --> Massive[Massive Path<br/>50% Best-Effort<br/>Batch Processing]

    Critical --> C[100 sensors<br/><100ms latency]
    Massive --> M[200 sensors<br/>5s batches]

    style CPU fill:#2C3E50,stroke:#16A085,color:#fff
    style Critical fill:#E67E22,stroke:#2C3E50,color:#fff
    style Massive fill:#16A085,stroke:#2C3E50,color:#fff
    style C fill:#E67E22,stroke:#2C3E50,color:#fff
    style M fill:#16A085,stroke:#2C3E50,color:#fff

Why Other Options Fail:

Option B - Single FIFO with slot allocation:

Problem: A critical reading arrives with 100 non-critical readings ahead of it in the shared queue
- Waits for all 100 to process first
- Latency: 100 × 10ms = 1000ms = 1 second ❌
- Violates the 100ms requirement

Option C - Round-robin equal processing:

Problem: Non-critical sensor processed before waiting critical sensor
- Critical sensor at position 200 waits for 199 sensors
- Latency: 199 × 10ms = 1,990ms ❌
- Drops critical data during overload ❌

Option D - Critical batching:

Problem: Critical sensor waits for batch of 10 to fill
- Worst case: 9 sensors in batch, wait for 10th
- Additional latency: 9 × sensor_interval
- If sensors at 10 Hz, adds 900ms latency ❌
- Violates real-time requirement
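
The queueing penalties in options B and C follow from a one-line model (the 10 ms per-reading service time is the assumption used in the examples above):

```python
SERVICE_MS = 10.0  # assumed per-reading processing time from the examples

def fifo_latency_ms(readings_ahead, service_ms=SERVICE_MS):
    """Worst-case queueing delay for a reading stuck behind others in one FIFO."""
    return readings_ahead * service_ms

# Option B: critical reading behind 100 non-critical readings -> 1000 ms
# Option C: round-robin, behind 199 other readings -> 1990 ms
```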

From the text - Application Requirements:

“For Massive IoT, we need massive numbers of devices; thus cost, energy and data volumes need to be minimized. Where the IoT is applied in Critical applications, reliability, low latency and availability need to be maximized.”

The Critical Lesson:

Don’t treat all IoT data equally. Safety-critical systems require: - Deterministic latency guarantees - Priority CPU allocation - Zero data loss - Immediate action on threshold violations

Non-critical systems benefit from: - Batch processing efficiency - Adaptive sampling under load - Aggregation for reduced bandwidth - Best-effort delivery

This dual-path architecture is essential for mixed IoT deployments balancing safety requirements with economic constraints.

Question: An edge computing deployment’s total cost of ownership (TCO) analysis shows: $10,000 setup, $5,000/year operations, $3,000 component replacements over 5 years. The system saves $8,000/year in reduced cloud costs and maintenance. What is the ROI percentage and payback period?

💡 Explanation: This TCO and ROI analysis demonstrates edge computing business justification:

Total Cost of Ownership (5 years):

Setup Costs (one-time):
- Hardware, installation, software: $10,000

Ongoing Costs (annual):
- Operations: $5,000/year × 5 years = $25,000

Replacement Costs:
- Component replacements: $3,000 (total over 5 years)

Total TCO = $10,000 + $25,000 + $3,000 = $38,000

Total Savings (5 years):

Annual Savings:
- Reduced cloud ingress costs: $8,000/year

Total Savings = $8,000/year × 5 years = $40,000

Net Benefit ROI:

ROI = Net Benefit / Initial Setup Cost × 100%

Net Benefit over 5 years:
- Savings: $40,000
- Minus ongoing costs: $25,000 + $3,000 = $28,000
- Net Benefit: $40,000 - $28,000 = $12,000

ROI = $12,000 / $10,000 × 100% = 120% ✓

Payback Period:

Standard Payback Period (Excluding Ongoing Costs):

If we define payback as "when do savings cover setup":
Payback = Setup Cost / Annual Savings
Payback = $10,000 / $8,000 = 1.25 years ✓

(On a net cash-flow basis — $8,000/year savings minus $5,600/year average ongoing costs — payback stretches to roughly 4 years, which is why both figures are worth reporting.)
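
The whole TCO/ROI calculation fits in a few lines; all inputs come from the question:

```python
setup = 10_000
ops_per_year = 5_000
replacements = 3_000        # total over the 5-year horizon
savings_per_year = 8_000
years = 5

tco = setup + ops_per_year * years + replacements             # $38,000
net_benefit = savings_per_year * years - (ops_per_year * years + replacements)
roi_pct = net_benefit / setup * 100                           # 120.0
simple_payback_years = setup / savings_per_year               # 1.25
```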

Edge Computing ROI Drivers:

Cost Savings Categories:

  1. Bandwidth Reduction:
    • Raw data: 691 GB/day × $0.10/GB = $69/day
    • Processed: 48 MB/day × $0.10/GB = $0.005/day
    • Savings: $25,000/year
  2. Cloud Processing Costs:
    • Without edge: Process all data in cloud
    • With edge: 100x less cloud compute
    • Savings: $15,000/year
  3. Reduced Latency Benefits:
    • Faster response time
    • Reduced downtime
    • Value: $20,000/year

Decision Framework:

ROI Range    Payback      Decision
< 50%        > 3 years    ❌ Reconsider
50-100%      2-3 years    ⚠️ Marginal
100-200%     1-2 years    ✅ Good investment
> 200%       < 1 year     ✅✅ Excellent

Our scenario: 120% ROI, 1.25-year payback = Good investment ✓

The edge computing deployment is financially justified with reasonable returns and acceptable risk.

Question: A Level 4 data accumulation system receives edge records every 5 minutes. Each edge record aggregates 100 raw sensor readings. The system must store: (1) individual edge records for 30 days, (2) hourly aggregates for 1 year, (3) daily aggregates forever. If each edge record is 200 bytes, what is the storage requirement after 1 year?

💡 Explanation: This tiered storage strategy demonstrates Level 4 Data Accumulation best practices:

From the text: “At Level 4, the data in motion is converted to data at rest. Decisions at Level 4 include: Does persistency require a file system, big data system, or relational database? What data transformations are needed for the required storage system?”

Storage Tier Calculations:

High-Resolution Scenario:

If each of 1000 sensors sends edge records every 5 minutes:

Tier 1:

1000 sensors × 288 records/day × 30 days × 200 bytes
= 1,728,000,000 bytes = 1.73 GB

Tier 2 (hourly for 1 year):

1000 sensors × 8,760 hours × 2,000 bytes (larger hourly aggregate)
= 17,520,000,000 bytes = 17.52 GB

With 2 KB hourly aggregates (instead of 500 bytes):

  • Tier 1: 1.73 GB (30 days of edge records)
  • Tier 2: 17.52 GB (1 year of hourly aggregates)
  • Tier 3: 0.73 GB (1 year of daily aggregates at ~2 KB each)
  • Total: 19.98 GB, or ≈ 18.2 GB after light compression ✓
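
The per-tier sizing reduces to straightforward multiplication; the 2 KB aggregate sizes are the assumption stated above:

```python
SENSORS = 1000
EDGE_RECORD_B = 200        # bytes per 5-minute edge record
HOURLY_AGG_B = 2_000       # 2 KB hourly aggregate (assumption)
DAILY_AGG_B = 2_000        # 2 KB daily aggregate (assumption)

tier1_b = SENSORS * 288 * 30 * EDGE_RECORD_B     # 288 records/day, 30-day retention
tier2_b = SENSORS * 8_760 * HOURLY_AGG_B         # hourly for 1 year
tier3_b = SENSORS * 365 * DAILY_AGG_B            # daily for 1 year
total_gb = (tier1_b + tier2_b + tier3_b) / 1e9   # ≈ 19.98 GB before compression
```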

Tiered Storage Architecture:

Level 4 accumulation typically uses three tiers:

Tier Retention What it stores Storage type Capacity in this example
Tier 1 – Hot ~30 days Raw edge records (~200 bytes each), high‑resolution time series Fast SSD time‑series DB 1.73 GB
Tier 2 – Warm ~1 year Hourly aggregates (~2 KB each), good for trend analysis Standard disk storage 17.52 GB
Tier 3 – Cold Multi‑year/forever Daily aggregates (~2 KB each) for compliance and long‑term analytics Cheap object storage (e.g. S3/Blob) 0.73 GB per year

Why Tiered Storage?

  1. Query Performance:
    • Recent data (Tier 1): Sub-second queries
    • Historical trends (Tier 2): Second-range queries
    • Long-term analytics (Tier 3): Minute-range acceptable
  2. Cost Optimization:
    • Tier 1 (SSD): $0.20/GB/month × 1.73 GB = $0.35/month
    • Tier 2 (HDD): $0.05/GB/month × 17.52 GB = $0.88/month
    • Tier 3 (S3): $0.01/GB/month × 0.73 GB = $0.01/month
    • Total: $1.24/month for 1000 sensors
  3. Compliance:
    • Some regulations require 7-year data retention
    • Tier 3 provides economical long-term storage

Retention Policy Implementation:

The tiered retention policy operates as follows:

  • Tier 1: Edge records older than 30 days are deleted
  • Tier 2: Hourly aggregates older than 1 year (365 days) are deleted
  • Tier 3: Daily aggregates kept forever, archived to glacier-class storage after 1 year for cost savings
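
A retention pass over a tier can be sketched as below; the dict-of-records shape and function name are illustrative, only the retention windows come from the policy above:

```python
import time

RETENTION_S = {"tier1": 30 * 86_400, "tier2": 365 * 86_400}  # tier3: keep forever

def prune(records, tier, now=None):
    """Drop records older than the tier's retention window (tier3 keeps all).
    Each record is a dict with a 'ts' epoch timestamp."""
    now = time.time() if now is None else now
    limit = RETENTION_S.get(tier)
    if limit is None:
        return list(records)
    return [r for r in records if now - r["ts"] <= limit]
```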

Overall storage efficiency:

If we stored raw sensor readings (assume 20 bytes each):

1000 sensors × 100 Hz × 60 sec × 60 min × 24 hours × 365 days × 20 bytes
= 63,072,000,000,000 bytes ≈ 63.07 TB/year

With tiered edge+cloud storage:

18.2 GB/year ≈ 0.000289× the raw volume

Reduction factor: ≈ 3,465x savings!
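
The raw-versus-tiered comparison checks out numerically:

```python
SENSORS = 1000
RAW_HZ = 100               # raw sampling rate per sensor
BYTES_PER_READING = 20
SECONDS_PER_YEAR = 365 * 24 * 3_600

raw_tb = SENSORS * RAW_HZ * SECONDS_PER_YEAR * BYTES_PER_READING / 1e12  # ≈ 63.07 TB
reduction = raw_tb * 1_000 / 18.2   # vs ≈18.2 GB of tiered storage → ≈ 3,465x
```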

This demonstrates why Level 4 data accumulation requires careful architecture planning beyond simple “store everything forever.”

Question: An edge gateway implements Level 3 security with whitelisting (only 100 known MAC addresses allowed), encryption (AES-256 for data at rest), and TLS 1.3 for transmission. A sensor with unknown MAC address attempts connection. The gateway’s security log shows: “Unknown device XX:XX:XX:XX:XX:XX attempted connection - rejected.” What security principle prevented this connection?

💡 Explanation: This demonstrates Level 2/3 Gateway Security with fail-closed whitelisting:

From the text security architecture:

%% fig-alt: "IoT security architecture showing four sequential defense layers. Layer 1 Device Security provides secure boot and TPM. Layer 2 Network Security implements TLS/SSL and VPN. Layer 3 Edge/Fog Security features whitelisting (highlighted) and gateway firewall. Layer 4 Storage Security ensures encryption and access control. This layered approach creates defense-in-depth protection from device to data storage."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#ecf0f1'}}}%%
flowchart TB
    L1[Layer 1:<br/>Device Security<br/>Secure Boot, TPM]
    L2[Layer 2:<br/>Network Security<br/>TLS/SSL, VPN]
    L3[Layer 3:<br/>Edge/Fog Security<br/>⭐ Whitelisting<br/>Gateway Firewall]
    L4[Layer 4:<br/>Storage Security<br/>Encryption, Access Control]

    L1 --> L2 --> L3 --> L4

    style L1 fill:#7F8C8D,stroke:#2C3E50,color:#fff
    style L2 fill:#2C3E50,stroke:#16A085,color:#fff
    style L3 fill:#E67E22,stroke:#2C3E50,color:#fff
    style L4 fill:#16A085,stroke:#2C3E50,color:#fff

Whitelisting (Allow-List) Security Model:

The gateway implements fail-closed security with whitelisting:

Configuration:

  • Allowed devices: Set of 100 known MAC addresses (e.g., “00:11:22:33:44:55” through “00:11:22:33:44:C8”)
  • Level 2 Security: TLS context for encrypted connections
  • Level 3 Security: AES encryption key for data at rest

Connection Logic:

  1. Check if device MAC is in whitelist (happens BEFORE TLS handshake)
  2. If NOT in whitelist → Log security event (WARNING) and REJECT connection
  3. If in whitelist → Proceed to TLS handshake and establish secure connection
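
The fail-closed gate reduces to a membership check before any handshake; the MAC values below are placeholders for the 100 provisioned addresses:

```python
import logging

# Hypothetical registry standing in for the 100 provisioned MAC addresses
ALLOWED_MACS = {"00:11:22:33:44:55", "00:11:22:33:44:56"}

def handle_connection(mac: str) -> bool:
    """Fail-closed gate: unknown MACs are rejected before any TLS handshake."""
    if mac not in ALLOWED_MACS:
        logging.warning("Unknown device %s attempted connection - rejected", mac)
        return False
    # Whitelist passed: proceed to TLS 1.3 handshake / certificate validation
    return True
```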

Security Decision Flow:

%% fig-alt: "Security decision flow for whitelist-based gateway protection. Device connection attempt first checks if MAC address is in whitelist. If not, connection is immediately rejected using fail-closed principle and logged. If whitelisted, system proceeds through TLS 1.3 handshake, certificate validation, AES-256 encryption, and finally accepts the secure channel. This ensures only pre-authorized devices can establish connections."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#ecf0f1'}}}%%
flowchart TD
    Start[Device<br/>Connection Attempt]
    Check{MAC Address<br/>in Whitelist?}
    Reject[❌ REJECT<br/>Fail-Closed<br/>Log Attempt]
    TLS[TLS 1.3<br/>Handshake]
    Cert[Certificate<br/>Validation]
    AES[AES-256<br/>Encryption]
    Accept[✅ ACCEPT<br/>Secure Channel]

    Start --> Check
    Check -->|No| Reject
    Check -->|Yes| TLS
    TLS --> Cert
    Cert --> AES
    AES --> Accept

    style Start fill:#7F8C8D,stroke:#2C3E50,color:#fff
    style Check fill:#2C3E50,stroke:#16A085,color:#fff
    style Reject fill:#E74C3C,stroke:#2C3E50,color:#fff
    style TLS fill:#16A085,stroke:#2C3E50,color:#fff
    style Cert fill:#16A085,stroke:#2C3E50,color:#fff
    style AES fill:#E67E22,stroke:#2C3E50,color:#fff
    style Accept fill:#27AE60,stroke:#2C3E50,color:#fff

Why Whitelisting Prevented Connection:

The sequence of security checks is:

  1. MAC address whitelist (checked first, before any handshake)
  2. TLS handshake (only if whitelist passed)
  3. Certificate verification (only if TLS started)
  4. Data encryption (only if connection established)

Unknown device stopped at step 1.

Fail-Closed vs Fail-Open Security:

Fail-Closed (Whitelist) - Used Here:

Default policy: DENY
Only explicitly allowed devices permitted
Unknown device → REJECTED ✓

Advantages:
+ Maximum security
+ Prevents unauthorized access
+ Stops unknown threats

Disadvantages:
- Legitimate new devices must be pre-registered
- Less flexible for dynamic deployments

Fail-Open (Blacklist) - NOT Used:

Default policy: ALLOW
Only explicitly denied devices blocked
Unknown device → ACCEPTED (unless on blacklist)

Advantages:
+ More flexible
+ New devices work immediately

Disadvantages:
- Attackers can connect if not blacklisted
- Reactive (must discover threats first)
- Security risk

Why Other Options Are Wrong:

A: Encryption (AES-256):

  • Encryption protects data at rest on gateway storage
  • Also encrypts data in transit (via TLS)
  • But encryption happens AFTER a connection is accepted
  • Unknown device never reached the encryption phase

C: TLS 1.3 handshake:

  • TLS provides transport security during transmission
  • Certificate verification happens during the handshake
  • But the handshake only starts AFTER whitelist approval
  • Unknown device never reached the TLS phase

D: Physical security:

  • Physical security prevents physical access to devices
  • Important, but not enforced by software
  • The gateway can’t determine physical location from a MAC address
  • Not the mechanism that rejected this connection

Attack Scenarios Prevented by Whitelisting:

  1. Rogue Sensor:
    • Attacker deploys unauthorized sensor
    • Tries to inject false data
    • ✓ Blocked at MAC whitelist
  2. Man-in-the-Middle:
    • Attacker intercepts traffic
    • Attempts to impersonate gateway
    • ✓ Blocked (attacker MAC not whitelisted)
  3. Device Spoofing:
    • Attacker clones authorized device
    • Uses same MAC address
    • ⚠️ Passes whitelist (requires additional controls like certificate pinning)

The Critical Lesson:

Layered security (Defense in Depth):

  • Whitelist = First line of defense (reject unknown)
  • TLS = Second line (encrypt authenticated connections)
  • Encryption = Third line (protect data at rest)
  • Physical = Fourth line (prevent device tampering)

Fail-closed whitelisting is the correct approach for industrial/critical IoT where:

  • Device list is known and stable
  • Security > Convenience
  • Unauthorized access has severe consequences

For consumer IoT or dynamic deployments, fail-open with strong authentication may be more appropriate, but industrial systems should default to fail-closed as demonstrated here.

1340.3 Summary

  • Deep sleep optimization can extend battery life roughly 5x for low-duty-cycle deployments where sleep current dominates the power budget
  • Multi-factor quality scoring (battery, signal, freshness) enables intelligent data filtering that maintains data veracity while reducing processing overhead
  • Critical IoT requires dedicated processing paths with guaranteed latency, while massive IoT can tolerate adaptive sampling and best-effort delivery
  • TCO analysis must include setup, operations, and replacement costs to accurately calculate ROI and payback periods for edge deployments
  • Tiered storage architectures achieve roughly 3,500x storage efficiency through progressive aggregation and retention policies
  • Fail-closed whitelisting provides the strongest security posture for industrial IoT by rejecting unknown devices before any other security checks

1340.4 What’s Next

Continue to Edge Computing Quiz: Comprehensive Review for integration questions covering all edge computing concepts with real-world deployment scenarios.

Related topics: - Edge Computing Quiz: Fundamentals - Edge Computing Quiz: Data Calculations