9  Context-Aware Energy Management

9.1 Learning Objectives

  • Explain how duty cycling reduces average power consumption and calculate energy savings for different wake/sleep ratios
  • Describe the ACE system architecture (Inference Cache, Rule Miner, Sensing Planner) and how it achieves 60-80% energy savings
  • Evaluate when to process locally versus offload to cloud using the MAUI framework and energy cost analysis
  • Apply association rule mining concepts (support, confidence) to predict context and avoid unnecessary sensor activations

In 60 Seconds

Context-aware energy management uses learned patterns about device context — location, time, activity — to predict when sensing is needed and sleep longer when it is not, achieving 60–80% energy savings over fixed-schedule approaches.

Key Concepts

  • Context-Aware Energy Management: Energy optimization approach using learned patterns about device context (location, time, activity) to predict when sensing is needed and extend sleep periods.
  • Association Rule Mining: Machine learning technique discovering co-occurrence patterns (e.g., “temperature sensing needed when motion detected in morning”) used to build context-energy prediction rules.
  • Context Feature: Input variable used in energy prediction models such as time of day, device location, recent motion events, or calendar data.
  • Energy Prediction Model: Machine learning model estimating expected energy consumption or sensing necessity based on current context features.
  • Adaptive Duty Cycling: Energy management strategy dynamically adjusting device sleep/wake cycles based on predicted context relevance rather than fixed schedules.
  • Rule Confidence: Statistical measure of association rule reliability (fraction of times the rule correctly predicts the outcome) used to filter energy prediction rules.
  • Context Cache: Stored representation of recent context patterns enabling fast energy decision-making without recomputing predictions from scratch.

Energy and power management determines how long your IoT device can operate between battery changes or charges. Think of packing for a camping trip with limited battery packs – every bit of power must be used wisely. Since many IoT sensors need to run for months or years unattended, power management is often the single most important engineering decision.

“Regular power management says ‘wake up every 10 minutes no matter what,’” explained Bella the Battery. “But context-aware energy management is smarter – it says ‘wake up often when something interesting is happening, but sleep longer when nothing is going on.’ It is like how you check your phone more often when expecting a message!”

Sammy the Sensor gave an example: “If I am a motion sensor in an office, I know nobody comes in on weekends. So on Saturday and Sunday, I can check once an hour instead of every minute – 60 times fewer wake-ups! The system learns patterns and predicts when sensing is actually needed.”

Max the Microcontroller explained the big idea: “Context-aware systems can save 60 to 80 percent of energy compared to fixed schedules. They use tricks like caching – remembering recent readings so they do not have to measure again – and prediction – guessing what will happen based on what happened before.” Lila the LED added, “Smart energy management means Bella lasts years instead of months. That is the difference between a practical product and one that needs constant battery changes!”


9.2 Context-Aware Energy Management


Context-aware energy management enables IoT devices to dynamically adapt their operation based on real-time understanding of user behavior, environmental conditions, and system state. Rather than using static power budgets, context-aware systems optimize for each specific situation, achieving energy savings of 60-80% while maintaining user experience.

9.3 How It Works

Context-aware energy management operates through a three-phase continuous cycle:

Phase 1: Context Sensing and Caching. Applications request context attributes (location, activity, environment). Instead of directly activating expensive sensors, the system first checks an inference cache. If a recent value exists within its validity window (typically 30 seconds to 5 minutes depending on attribute volatility), that cached value is returned immediately at zero energy cost.

Phase 2: Rule-Based Inference. When a cache miss occurs, the system attempts inference using learned association rules. For example, if the rule “Driving=True → AtHome=False” has 90% confidence and “Driving” is cached, the system infers “AtHome=False” without GPS sensing. The Rule Miner continuously learns these correlations from historical context data using metrics like support (pattern frequency) and confidence (conditional probability).

Phase 3: Adaptive Sensing. Only when inference is impossible or unreliable does the system activate sensors. The Sensing Planner selects the cheapest sensing strategy: checking lower-power proxy attributes first (accelerometer before GPS), using the most energy-efficient radio protocol, and adapting sampling rates based on battery level and detected activity patterns.

This three-tier approach (cache → infer → sense) reduces actual sensor activations by 60-80% while maintaining acceptable accuracy for most IoT applications.
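The cache → infer → sense cascade can be sketched as a single lookup function. This is a minimal sketch, not the ACE implementation: the names (`get_context`, `sense`) and the millijoule figures are illustrative assumptions.

```python
import time

# Illustrative energy costs in millijoules (assumed figures, not measurements)
COST_CACHE, COST_INFER, COST_SENSE = 0.0, 0.1, 100.0

def get_context(attr, cache, rules, sense, validity_s=60, min_conf=0.90, now=None):
    """Three-tier lookup: cached value, then rule inference, then direct sensing."""
    now = time.time() if now is None else now

    # Tier 1: a cached value still inside its validity window is free
    if attr in cache:
        value, ts = cache[attr]
        if now - ts < validity_s:
            return value, COST_CACHE

    # Tier 2: infer from a fresh cached antecedent via a high-confidence rule
    for antecedent, consequent_attr, consequent_val, conf in rules:
        if consequent_attr == attr and conf >= min_conf and antecedent in cache:
            ant_val, ts = cache[antecedent]
            if ant_val and now - ts < validity_s:
                return consequent_val, COST_INFER

    # Tier 3: pay for the sensor, then refresh the cache for later requests
    value = sense(attr)
    cache[attr] = (value, now)
    return value, COST_SENSE
```

For example, with `cache = {"Driving": (True, time.time())}` and the rule `("Driving", "AtHome", False, 0.92)`, a request for “AtHome” resolves by inference at 0.1 mJ instead of a 100 mJ GPS fix.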

9.4 Chapter Overview

This topic has been organized into four focused chapters for easier learning:

9.4.1 Duty Cycling Fundamentals

Learn the foundation of low-power IoT design through duty cycling: the practice of periodically waking devices for sensing and returning to sleep mode.

Key Topics:

  • Duty cycle calculation and average power
  • Interactive duty cycle calculator
  • Deep sleep vs light sleep modes
  • Fixed vs event-driven wake-up strategies
  • Common misconceptions about duty cycling
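The first two bullets reduce to one formula: average power is the duty-cycle-weighted mix of active and sleep power, \(P_{avg} = D \cdot P_{active} + (1-D) \cdot P_{sleep}\), where \(D\) is the fraction of time awake. A minimal sketch with illustrative numbers (the 150 mW / 0.05 mW figures are assumptions, not from any datasheet):

```python
def average_power_mw(p_active_mw, p_sleep_mw, t_wake_s, t_period_s):
    """Duty-cycle-weighted average power: awake t_wake_s out of every t_period_s."""
    duty = t_wake_s / t_period_s
    return duty * p_active_mw + (1 - duty) * p_sleep_mw

# Wake for 2 s every 10 minutes at 150 mW; deep sleep at 0.05 mW (illustrative)
p_avg = average_power_mw(150, 0.05, t_wake_s=2, t_period_s=600)
battery_mwh = 2000 * 3.0  # 2x AA: 2000 mAh at 3 V
print(f"Average power: {p_avg:.2f} mW -> {battery_mwh / p_avg / 24:.0f} days on battery")
```

Changing only the wake/sleep ratio in this formula reproduces the savings trade-offs discussed throughout the chapter.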

9.4.2 ACE System and Shared Context Sensing

Explore the ACE (Acquisitional Context Engine) system that achieves 60-80% energy savings through intelligent caching, cross-app context sharing, and association rule mining.

Key Topics:

  • Shared context sensing across applications
  • Cross-app context correlations
  • Association rule mining (support and confidence)
  • ACE system architecture (Inference Cache, Rule Miner, Sensing Planner)
  • Worked examples: Smart building sensors and solar harvesting

9.4.3 Code Offloading and Heterogeneous Computing

Understand when to process locally versus offload to cloud, and how to leverage heterogeneous processors (CPU, GPU, DSP, NPU) for energy-efficient execution.

Key Topics:

  • Energy-preserving sensing plans
  • MAUI offloading framework
  • Wi-Fi vs cellular offloading trade-offs
  • Heterogeneous core scheduling
  • Code offloading worksheets

9.4.4 Energy Optimization Worksheets and Assessment

Apply your knowledge through comprehensive worksheets, quizzes, and practical exercises covering all aspects of context-aware energy management.

Key Topics:

  • Context-aware battery life calculations
  • Mixed usage analysis
  • ACE system energy savings calculations
  • Comprehensive assessment questions
  • Key concepts reference

9.5 Learning Path

For the best learning experience, work through these chapters in order:

Learning path diagram showing four sequential chapters: Duty Cycling Fundamentals, ACE System and Shared Context Sensing, Code Offloading and Heterogeneous Computing, and Energy Optimization Worksheets and Assessment

9.6 Quick Reference: Key Figures

The following figures are distributed across the chapters:

Architecture diagram showing ACE system components including inference cache, rule miner, sensing planner, and contexters working together to minimize energy consumption through context inference
Figure 9.1: Context: ACESystem
Bar chart comparing cache hit rates and energy savings percentages across different context-aware caching strategies showing 60-80% energy reduction
Figure 9.2: Context: CachingPerformance
Graph plotting energy consumption in millijoules over time comparing direct sensing versus ACE system showing significant reduction through caching and inference
Figure 9.3: Context: EnergyConsumed
Table listing context attributes with their sensing energy costs ranging from low-cost accelerometer at 10mW to high-cost GPS at 100mW
Figure 9.4: Context: EnergyOfContextAttributes
Flow diagram illustrating energy-preserving sensing plan decision tree showing when to use cached values, proxy attributes, or direct sensing
Figure 9.5: Context: EnergyPreservingSensingPlan
System overview diagram of LEO context-aware framework showing local and remote components for energy-efficient mobile app context management
Figure 9.6: Context: LEOOverview
Table displaying learned association rules between context attributes with support and confidence percentages such as Driving equals true implies AtHome equals false
Figure 9.7: Context: LearnedRulesbyRuleMiner
Performance comparison chart showing local GPU computation speedup versus cloud offloading for mobile keyword spotting achieving 21x faster processing
Figure 9.8: Context: LocalComputation1
Energy efficiency graph comparing local heterogeneous cores versus cloud processing showing GPU batching reduces energy consumption below cloud transmission overhead
Figure 9.9: Context: LocalComputation2
Timeline visualization demonstrating low-overhead context inference using cached attributes versus expensive direct sensing operations
Figure 9.10: Context: LowOverhead
Decision tree diagram for MAUI offloading framework showing energy cost calculation comparing local execution versus remote cloud execution with network transmission costs
Figure 9.11: Context: MAUIOffloading
Bar chart comparing energy costs of different wireless technologies Wi-Fi versus 3G versus LTE showing Wi-Fi lowest at 100mJ and LTE highest at 1000mJ per transmission
Figure 9.12: Context: NetworkingCosts
Performance metrics table showing GPU-optimized keyword spotting achieves 6x speedup over cloud and 21x over sequential CPU processing
Figure 9.13: Context: OptimizedGPUEfficient1
Energy consumption comparison demonstrating GPU batching with optimized implementation uses less energy than cloud offloading for audio processing tasks
Figure 9.14: Context: OptimizedGPUEfficient2
Latency analysis showing GPU parallel processing reduces keyword spotting latency from 500ms sequential to 25ms with batched GPU execution
Figure 9.15: Context: OptimizedGPUEfficient3
Diagram illustrating rule miner simplification process showing how complex context history is distilled into simple if-then association rules
Figure 9.16: Context: RuleMinerSimplification
Bar graph showing user battery life extension from days to weeks achieved through context-aware energy management with ACE system implementation
Figure 9.17: Context: UserSavings
Complete workflow flowchart showing ACE system operation from app context request through cache lookup, rule inference, sensing plan execution, and cache update
Figure 9.18: Context: Workflow

Scenario: You’re designing a network of 500 temperature sensors for a commercial building’s HVAC optimization system. The building management wants 5-year battery life with standard AA batteries while maintaining room comfort within ±1°C of setpoint.

Given:

  • Sensors: DHT22 temperature/humidity sensor (1 mA for 2s reading)
  • MCU: ATmega328P (5 mA active, 0.1 µA power-down)
  • Radio: Zigbee (25 mA TX for 50ms)
  • Battery: 2× AA alkaline (2000 mAh @ 3V)
  • Building occupancy: 7 AM - 7 PM weekdays, empty nights/weekends
  • Current approach: Fixed 5-minute sampling

Step 1: Calculate baseline fixed-sampling energy:

Per cycle (300 seconds):
- Sensor reading: 1 mA × 2s = 2 mAs
- MCU active: 5 mA × 2.1s = 10.5 mAs
- Zigbee TX: 25 mA × 0.05s = 1.25 mAs
- Sleep: 0.0001 mA × 297.9s = 0.03 mAs
Total: 13.78 mAs per cycle

Cycles per day: 86,400s / 300s = 288
Daily energy: 288 × 13.78 mAs = 3,969 mAs = 1.10 mAh (1 mAh = 3,600 mAs)
Battery life: 2000 mAh / 1.10 mAh per day = 1,814 days ≈ 4.97 years ✗ (just misses the 5-year target, with no margin for self-discharge or capacity fade)

Step 2: Apply context-aware adaptive sampling:

Context rules:
1. Occupied (7 AM-7 PM weekdays): Sample every 5 minutes (rapid response)
2. Unoccupied (nights/weekends): Sample every 30 minutes (slow drift)
3. Stable temperature (<0.2°C change): Extend to 15 minutes
4. Rapid change (>0.5°C): Reduce to 2 minutes (event detected)

Occupancy breakdown:
- Occupied: 12 hours/day × 5 days = 60 hours/week
- Unoccupied: 108 hours/week

Sampling rates (the three occupied modes partition the 60 occupied hours):
- Occupied normal (40% of occupied time): 12 cycles/hour × 24h = 288 cycles/week
- Occupied stable (50% of occupied time): 4 cycles/hour × 30h = 120 cycles/week
- Occupied event (10% of occupied time): 30 cycles/hour × 6h = 180 cycles/week
- Unoccupied: 2 cycles/hour × 108h = 216 cycles/week
Total: 804 cycles/week vs baseline 2,016 cycles/week (60% reduction)

Weekly energy adaptive: 804 × 13.78 mAs = 11,079 mAs = 3.08 mAh
Daily energy: 3.08 / 7 = 0.44 mAh
Battery life: 2000 / 0.44 = 4,549 days ≈ 12.5 years ✓ (2.5× baseline)

How ACE Achieves These Savings:

The 60-80% energy reduction comes from three synergistic mechanisms:

  1. Caching (30-40% reduction): With 30s cache lifetime, 1 GPS fix serves 5 app requests = \(100\,\text{mJ} / 5 = 20\,\text{mJ}\) per app (80% saving per repeated request)
  2. Inference (20-30% reduction): Accelerometer (10 mJ) + inference (0.1 mJ) replaces GPS (100 mJ) when confidence ≥90% (89.9% saving when rule applies)
  3. Adaptive sampling: Context-aware schedules cut weekly sensing cycles by 60% (2,016 → 804)

If the mechanisms were fully independent, their residual fractions would multiply: \(1 - (0.6 \times 0.7 \times 0.4) \approx 83\%\) savings. In practice the mechanisms overlap, which is why deployed systems typically land in the 60-80% band.

Step 3: Add ACE system energy savings:

ACE features:
1. Cache hits (30%): Use cached value from nearby sensors instead of sampling
2. Inference (20%): Predict temperature based on time-of-day patterns
3. Direct sensing (50%): Actually sample the sensor

Effective sampling energy with ACE:
- Cache hit: 0 mAs (instant return from memory)
- Inference: 0.1 mAs (pattern matching computation)
- Direct sensing: 13.78 mAs (full cycle)

Average per cycle: (0.30 × 0) + (0.20 × 0.1) + (0.50 × 13.78) = 6.91 mAs

Weekly energy with ACE: 804 × 6.91 mAs = 5,556 mAs = 1.54 mAh
Daily energy: 1.54 / 7 = 0.22 mAh
Battery life: 2000 / 0.22 = 9,074 days ≈ 25 years of drain budget (5× lower drain than baseline) — at this point alkaline shelf life, roughly a decade, becomes the limiting factor rather than current draw

Step 4: Performance validation:

Key metrics measured after 90-day pilot:

Temperature control accuracy:
- Fixed 5-minute: ±0.8°C average deviation from setpoint
- Adaptive: ±0.7°C average (better due to faster response during events)

Energy consumption (measured with current shunt):
- Fixed: 1.13 mAh/day actual (vs 1.10 predicted) ✓
- Adaptive + ACE: 0.24 mAh/day actual (vs 0.22 predicted)
- Prediction accuracy: within 10% of measured values

User complaints (tickets per 100 sensors):
- Fixed: 2.1 tickets/month ("too cold in morning", "takes forever to adjust")
- Adaptive: 0.8 tickets/month (fewer complaints due to faster event response)

Result: Context-aware adaptive sampling combined with the ACE system cut average drain by 5× — from a battery life just short of the 5-year target to one limited only by battery shelf life — while simultaneously IMPROVING temperature control responsiveness during occupancy periods. The 500-sensor deployment reduced annual maintenance costs by $8,500 (battery replacement labor) and improved building comfort scores from 72% to 89%.

Key Insight: Context-awareness isn’t just about saving energy—it’s about being SMART about when to use energy. By sensing more frequently when it matters (occupied periods, rapid changes) and less frequently when it doesn’t (nights, stable conditions), adaptive systems achieve both better performance AND longer battery life. Fixed-schedule systems waste energy sampling when nothing is changing.
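The “combined multiplicative effect” mentioned in the walkthrough composes residual energy fractions rather than adding savings. A two-line check, using the upper ends of the per-mechanism ranges quoted there (40%, 30%, 20%) as illustrative inputs:

```python
def combined_savings(*savings):
    """Independent mechanisms multiply residuals: total = 1 - prod(1 - s_i)."""
    residual = 1.0
    for s in savings:
        residual *= 1.0 - s  # each mechanism keeps (1 - s) of the remaining energy
    return 1.0 - residual

total = combined_savings(0.40, 0.30, 0.20)  # caching, inference, adaptive sampling
print(f"Combined savings: {total:.1%}")
```

Note that 40% + 30% + 20% would naively add to 90%, but the multiplicative combination yields about 66% — inside the 60-80% band cited throughout this chapter.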

| Factor | Context-Aware Management | Fixed Duty Cycling | Evaluation Criteria |
|---|---|---|---|
| Activity Pattern Predictability | High predictability (office building, traffic patterns) | Unpredictable (wildlife monitoring, earthquake sensors) | Can you predict when activity occurs? >70% predictable → context-aware |
| Energy Budget | Tight budget (<100 µA average) | Moderate budget (>200 µA average) | Average current target <100 µA favors context-aware optimization |
| Responsiveness Requirements | Variable (fast when active, slow when idle) | Constant (always same latency) | Need fast response during events? Context-aware provides both |
| Deployment Scale | Large (>1000 devices) justifies complexity | Small (<100 devices); simple is better | ROI: >1000 devices = $10K+ savings justify development cost |
| Development Resources | 2-4 weeks extra dev time acceptable | Need rapid deployment | Timeline: extra 3 weeks development → save 6 months maintenance |
| Sensor Data Volatility | High correlation with context (temp varies by occupancy) | Random noise (seismic data, radiation) | Correlation coefficient >0.6 → context-aware effective |

Scoring System (0-5 points per factor):

| Total Score | Recommendation | Rationale |
|---|---|---|
| 25-30 | Strongly recommend context-aware | All factors aligned; expect 60-80% energy savings |
| 18-24 | Recommend with pilot | Most factors favorable; validate with 50-device pilot first |
| 12-17 | Consider hybrid approach | Mixed results; use simple time-based rules (day/night) |
| 6-11 | Fixed duty cycling preferred | Context complexity outweighs benefits; keep it simple |
| 0-5 | Definitely fixed duty cycling | Context-awareness adds cost without benefit |
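The rubric maps directly to a threshold lookup; a sketch (the function name is ours, the bands are the table's):

```python
def recommend(total_score):
    """Map a 0-30 decision-factor score to the rubric's recommendation band."""
    if not 0 <= total_score <= 30:
        raise ValueError("score must be between 0 and 30")
    bands = [
        (25, "Strongly recommend context-aware"),
        (18, "Recommend with pilot"),
        (12, "Consider hybrid approach"),
        (6, "Fixed duty cycling preferred"),
        (0, "Definitely fixed duty cycling"),
    ]
    for floor, rec in bands:  # bands are sorted high to low; first match wins
        if total_score >= floor:
            return rec

print(recommend(26))  # the agriculture deployment below scores 26/30
```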

Real-World Example:

A smart agriculture deployment of 2,000 soil moisture sensors across 500 acres:

| Factor | Score | Justification |
|---|---|---|
| Predictability | 4/5 | Soil moisture varies with irrigation schedule (6 AM daily) and rainfall (weather forecast) |
| Energy Budget | 5/5 | Target: 5-year battery life = 45 µA avg (very tight) |
| Responsiveness | 4/5 | Need hourly readings during growing season, daily in winter |
| Scale | 5/5 | 2,000 devices × $20 battery replacement = $40K per cycle |
| Dev Resources | 3/5 | 4-week timeline acceptable (already 6-month dev cycle) |
| Volatility | 5/5 | Soil moisture directly correlates with irrigation schedule and rainfall |
| Total | 26/30 | STRONG RECOMMENDATION FOR CONTEXT-AWARE |

Implemented Strategy:

def calculate_sampling_interval(sensor_id):
    # Context factors
    season = get_current_season()  # growing/dormant
    last_irrigation = time_since_irrigation(sensor_id)
    weather_forecast = get_rainfall_probability()
    battery_level = get_battery_voltage(sensor_id)

    # Base intervals
    if season == "growing":
        base_interval = 3600  # 1 hour
    else:
        base_interval = 86400  # 24 hours

    # Adjust for irrigation schedule
    if last_irrigation < 6 * 3600:  # Within 6h of irrigation
        interval = base_interval / 3  # Rapid changes expected (20 min in growing season)
    elif last_irrigation < 24 * 3600:
        interval = base_interval  # 1 hour (monitoring drainage)
    else:
        interval = base_interval * 2  # 2 hours (stable)

    # Adjust for weather
    if weather_forecast > 70:  # >70% chance rain
        interval = min(interval, 1800)  # Max 30 min (monitor infiltration)

    # Battery conservation
    if battery_level < 3.0:  # Low-battery voltage threshold
        interval *= 4  # Extend interval to preserve remaining life

    return interval

Result: Energy savings of 67% compared to fixed 1-hour sampling, extending battery life from 3.1 years to 9.4 years. Farmer feedback improved due to faster alerts during irrigation events (20-minute sampling) while maintaining low power during dormant winter months (48-hour sampling).

Common Mistake: Context Inference Rules with Low Confidence

The Mistake: Developers implement ACE-style context inference using association rules with confidence thresholds that are too low, causing frequent incorrect inferences that degrade user experience while still consuming energy on eventual re-sensing.

Quantified Example:

An ACE system learns this rule from 30 days of training data:

  • Rule: “Driving=True → AtHome=False” (user is NOT at home when driving)
  • Support: 12% (120 of 1,000 observations had both attributes)
  • Confidence: 55% (55 of 100 “Driving=True” cases had “AtHome=False”)

Developer sets minimum confidence threshold at 50% (barely passing) and deploys to production.

What Goes Wrong:

Over 90 days of real-world use with 1,000 users:

Total "AtHome" queries: 10,000 per user = 10M queries fleet-wide
ACE attempts inference using this rule: 20% of queries (2M queries)

Correct inferences: 55% × 2M = 1,100,000 (saved energy)
Incorrect inferences: 45% × 2M = 900,000 (wrong answer delivered)

User-visible failures:
- Smart lock didn't unlock (user arrived home but system inferred "not home")
- HVAC stayed off (system thought user was away)
- Lights didn't turn on automatically

Energy cost of incorrect inferences:
Wrong inference delivered: 0 mAs (but user experience damaged)
User retries manually: triggers forced re-sensing at 100 mAs
Total wasted energy: 900,000 × 100 mAs = 90,000,000 mAs = 90,000 As = 25 Ah fleet-wide

Support tickets generated: 4,500 (≈0.5% of incorrect inferences escalated to tickets)
Customer churn: 45 users (4.5% of the fleet left for competitors over poor reliability)

The Correct Approach:

Use confidence thresholds based on consequence severity:

| Application Type | Min Confidence | Max Error Rate | Rationale |
|---|---|---|---|
| Telemetry (low consequence) | 60% | 40% | Temperature reading slightly off → no big deal, next reading in minutes |
| Convenience (medium consequence) | 75% | 25% | Lights don’t auto-turn-on → minor annoyance, user manually overrides |
| Security/Safety (high consequence) | 95% | 5% | Smart lock doesn’t open → major frustration and safety risk |

Redesigned Implementation:

from dataclasses import dataclass

@dataclass
class InferenceResult:
    value: object
    method: str
    energy_cost: float  # mJ
    confidence: float

class ACEInferenceEngine:
    # Confidence thresholds by attribute criticality
    THRESHOLDS = {
        "temperature": 0.60,      # Low consequence
        "activity": 0.70,         # Medium consequence
        "location": 0.85,         # High consequence (affects security)
    }

    def infer_attribute(self, target_attr, cached_attrs, rules):
        # Find rules whose antecedent is already in the cache
        applicable = [r for r in rules if r.antecedent in cached_attrs]

        # Sort by confidence, highest first
        applicable.sort(key=lambda r: r.confidence, reverse=True)

        # Get threshold for this attribute (default: medium consequence)
        threshold = self.THRESHOLDS.get(target_attr, 0.75)

        for rule in applicable:
            if rule.confidence >= threshold:
                # High-confidence inference: answer from cached context
                return InferenceResult(
                    value=rule.consequent,
                    method="inference",
                    energy_cost=0.1,  # mJ (pattern matching)
                    confidence=rule.confidence
                )

        # No high-confidence rule found → must sense directly
        return InferenceResult(
            value=None,
            method="direct_sensing_required",
            energy_cost=100,  # mJ (GPS/sensor)
            confidence=1.0
        )

Result After Fix:

Re-trained rules against the 85% minimum-confidence threshold for location attributes:

New rule: "Driving=True AND WiFiNetwork!=Home → AtHome=False"
Support: 8% (fewer observations, more specific)
Confidence: 92% (much more reliable)

Over 90 days:
Inference attempts: 1.5M (25% fewer, rule is more specific)
Correct: 1,380,000 (92%)
Incorrect: 120,000 (8%)

Support tickets: 600 (87% reduction vs. 4,500)
Customer churn: 0 users (no reliability complaints)
Energy savings: Still 60%+ but without user experience degradation

Key Insight: Context inference with low confidence (<70%) is worse than not inferring at all. You waste energy delivering wrong answers, then waste MORE energy re-sensing when the user notices the error, AND you damage trust in your system. Set confidence thresholds based on consequence severity: relaxed for telemetry (60%), moderate for convenience (75%), strict for security (95%). A 92% confidence rule that applies less often beats a 55% confidence rule that applies frequently.
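The energy side of this insight can be made concrete with an expected-cost calculation: a wrong inference pays the inference cost and then the full re-sensing cost once the user notices. The 0.1 mJ / 100 mJ figures follow the example above; the simplification that every wrong answer triggers one re-sense is an assumption.

```python
def expected_cost_mj(confidence, infer_mj=0.1, resense_mj=100.0):
    """Expected energy per inference when wrong answers force a re-sense."""
    wrong = 1.0 - confidence
    return confidence * infer_mj + wrong * (infer_mj + resense_mj)

for conf in (0.55, 0.75, 0.92):
    print(f"confidence {conf:.0%}: {expected_cost_mj(conf):5.1f} mJ expected per query")
```

At 55% confidence the expected cost is roughly 45 mJ per query — still below a 100 mJ direct sense, which is exactly why the energy numbers alone hide the damage: the 45% wrong-answer rate is a user-experience and trust cost, not an energy one.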

9.7 Concept Relationships

Context-aware energy management connects to multiple IoT domains:

Foundational Concepts:

  • Duty Cycling (prerequisite): Context-awareness extends basic duty cycling by making wake intervals adaptive rather than fixed
  • Power Budgeting: Context prediction must consume less energy than the optimizations it enables
  • Sleep Modes: Deep sleep effectiveness depends on accurate context prediction to avoid unnecessary wake-ups

System Architecture:

  • Edge Computing: Local context inference reduces cloud offloading costs
  • Caching Strategies: Shared context cache extends distributed cache coherency concepts to sensor data
  • Machine Learning: Association rule mining is a lightweight ML approach suitable for resource-constrained devices

Communication Protocols:

  • Network Selection: Context (Wi-Fi available? Battery level?) determines whether to use Wi-Fi, cellular, or LoRa
  • Transmission Batching: Activity prediction enables intelligent message queuing

Advanced Applications:

  • Predictive Maintenance: Context history enables anomaly detection (unusual activity patterns indicate faults)
  • Privacy-Preserving Sensing: Inference from proxy attributes reduces raw sensor data exposure

Understanding context-aware energy management requires integrating knowledge from power management, distributed systems, machine learning, and wireless communication protocols.

9.9 See Also

Prerequisite Reading:

Related Advanced Topics:

Architecture Patterns:

  • Edge Fog Computing - Distributed context inference across edge-fog-cloud tiers
  • WSN Overview - Network-wide energy optimization through coordinated sleep scheduling

Communication Protocols:

  • LoRaWAN Overview - Class A/B/C modes provide protocol-level context-aware power management
  • Bluetooth Fundamentals - BLE advertising intervals can adapt based on detected activity
  • MQTT - QoS levels and retained messages support context-driven message prioritization

Research Foundations:

  • Original ACE paper: “ACE: Exploiting Correlation for Energy-Efficient and Continuous Context Sensing” (MobiSys 2012)
  • MAUI framework: “MAUI: Making Smartphones Last Longer with Code Offload” (MobiSys 2010)

9.10 Try It Yourself

Exercise 1: Implement Basic Context Caching (15 minutes)

Create a simple context cache in Python that demonstrates energy savings:

import time
from datetime import datetime, timedelta

class ContextCache:
    def __init__(self, validity_seconds=60):
        self.cache = {}
        self.validity = timedelta(seconds=validity_seconds)
        self.sensor_reads = 0  # Track actual sensor activations

    def get_context(self, attribute):
        # Check cache first
        if attribute in self.cache:
            value, timestamp = self.cache[attribute]
            if datetime.now() - timestamp < self.validity:
                print(f"[CACHE HIT] {attribute} = {value} (saved sensor read!)")
                return value

        # Cache miss - must sense
        self.sensor_reads += 1
        value = self._sense_attribute(attribute)
        self.cache[attribute] = (value, datetime.now())
        print(f"[SENSOR READ #{self.sensor_reads}] {attribute} = {value}")
        return value

    def _sense_attribute(self, attribute):
        # Simulate sensor reading (replace with actual sensor code)
        time.sleep(0.1)  # Simulate sensor delay
        return f"value_{attribute}_{self.sensor_reads}"

# Test the cache
cache = ContextCache(validity_seconds=5)

# Simulate 10 requests over 12 seconds
for i in range(10):
    temperature = cache.get_context("temperature")
    time.sleep(1.2)

print(f"\nTotal sensor reads: {cache.sensor_reads} / 10 requests")
print(f"Energy savings: {100 - (cache.sensor_reads / 10 * 100):.0f}%")

What to observe: With 5-second cache validity and 1.2-second request intervals, each sensor read serves roughly four subsequent requests, so you should see 2-3 sensor reads instead of 10 (70-80% fewer sensor activations).

Extension: Modify validity_seconds to 10 and observe higher cache hit rate but potentially stale data.


Exercise 2: Association Rule Mining for Context Prediction (20 minutes)

Implement a simple rule miner that learns “Driving → NotAtHome” pattern:

class SimpleRuleMiner:
    def __init__(self, min_support=0.05, min_confidence=0.70):
        self.observations = []
        self.min_support = min_support
        self.min_confidence = min_confidence

    def observe(self, driving, at_home):
        self.observations.append({"driving": driving, "at_home": at_home})

    def mine_rules(self):
        total = len(self.observations)

        # Count: Driving=True AND AtHome=False
        both_true = sum(1 for o in self.observations
                       if o["driving"] and not o["at_home"])

        # Count: Driving=True (regardless of AtHome)
        driving_true = sum(1 for o in self.observations if o["driving"])

        # Calculate metrics
        support = both_true / total if total > 0 else 0
        confidence = both_true / driving_true if driving_true > 0 else 0

        print(f"Rule: Driving=True → AtHome=False")
        print(f"Support: {support:.1%} ({both_true}/{total} observations)")
        print(f"Confidence: {confidence:.1%} ({both_true}/{driving_true} when driving)")

        if support >= self.min_support and confidence >= self.min_confidence:
            print("✓ Rule accepted for inference")
            return True
        else:
            print("✗ Rule rejected (insufficient support or confidence)")
            return False

# Simulate 100 context observations
miner = SimpleRuleMiner()

# Training data (simulate realistic patterns)
import random
for _ in range(100):
    driving = random.random() < 0.20  # User drives 20% of time
    if driving:
        at_home = random.random() < 0.08  # Rarely at home when driving (8%)
    else:
        at_home = random.random() < 0.40  # Often at home when not driving (40%)

    miner.observe(driving, at_home)

# Mine and evaluate rule
miner.mine_rules()

What to observe: With realistic training data, the rule should have ~18% support (driving happens 20% of time, 90% of those are not-at-home) and ~92% confidence (very high reliability).

Extension: Add more complex rules like “AtHome=True AND Night=True → Sleeping=True” and calculate support/confidence.


Exercise 3: Adaptive Duty Cycling Based on Battery Level (25 minutes)

Implement battery-aware context-adaptive sampling:

class AdaptiveSensor:
    def __init__(self, battery_capacity_mah=2000):
        self.capacity_mah = battery_capacity_mah
        self.battery_mah = battery_capacity_mah
        self.base_interval_sec = 60  # Normal: sample every 60 seconds

    def get_battery_level(self):
        # Simulate battery depletion (replace with actual battery monitor)
        return self.battery_mah / self.capacity_mah  # Returns 0.0 to 1.0

    def calculate_interval(self):
        battery_pct = self.get_battery_level() * 100

        if battery_pct > 50:
            scale = 1.0  # Normal operation
            mode = "NORMAL"
        elif battery_pct > 30:
            scale = 2.0  # 2× longer intervals
            mode = "CONSERVE"
        elif battery_pct > 15:
            scale = 5.0  # 5× longer intervals
            mode = "LOW_BATTERY"
        else:
            scale = 10.0  # 10× longer intervals
            mode = "CRITICAL"

        interval = self.base_interval_sec * scale
        print(f"Battery: {battery_pct:.0f}% | Mode: {mode:12s} | Interval: {interval:.0f}s")
        return interval

    def run_simulation(self, duration_hours=24):
        current_time = 0
        samples = 0

        # Step through the requested duration, one adaptive interval at a time
        while current_time < duration_hours * 3600:
            interval = self.calculate_interval()
            current_time += interval
            samples += 1

            # Simulate battery drain (2 mA active for 2 s, 10 µA sleep)
            active_mah = 2 * (2 / 3600)           # 2 mA for 2 seconds, in mAh
            sleep_mah = 0.01 * (interval / 3600)  # 10 µA = 0.01 mA while asleep
            self.battery_mah -= (active_mah + sleep_mah)

            if self.battery_mah <= 0:
                print(f"\nBattery depleted after {current_time/3600:.1f} hours, {samples} samples")
                break

        print(f"\nFinal battery: {self.battery_mah:.0f} mAh after {current_time/3600:.0f}h")
        print(f"Total samples: {samples}")

sensor = AdaptiveSensor()
sensor.run_simulation(duration_hours=168)  # 1 week

What to observe: As the battery depletes, intervals extend automatically. A full 2000 mAh pack barely dents over one week, so try battery_capacity_mah=20 to watch the mode transitions within the simulation. Compare energy usage to fixed 60-second sampling.

Extension: Add activity detection (sample faster when motion detected) and implement hysteresis to prevent rapid mode switching.

9.11 Knowledge Check

Common Pitfalls

Deploying context inference rules with support below 10% or confidence below 70% leads to incorrect context predictions, causing unnecessary sensor activations instead of saving energy. Always validate rules against held-out data before deployment.

Reusing cached context values beyond their valid time window causes the system to make decisions based on outdated information. Set explicit TTL (time-to-live) values for each context type based on how quickly it changes in your deployment.
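One way to enforce this is a per-attribute TTL map. The attribute names and windows below are illustrative choices, not prescriptions — tune them to how fast each context type changes in your deployment:

```python
from datetime import timedelta

# Validity window per context type, chosen by how quickly each one changes
CONTEXT_TTL = {
    "location": timedelta(seconds=30),    # goes stale quickly while the user moves
    "activity": timedelta(minutes=1),     # walking / driving / still
    "temperature": timedelta(minutes=5),  # indoor temperature drifts slowly
}

def is_fresh(attribute, age):
    """True if a cached value of this attribute is still within its TTL."""
    return age < CONTEXT_TTL.get(attribute, timedelta(seconds=30))

print(is_fresh("temperature", timedelta(minutes=2)))  # within the 5-minute window
print(is_fresh("location", timedelta(minutes=2)))     # far past the 30-second window
```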

Assuming cloud offloading always saves device energy is wrong — the radio transmission cost to send data can exceed the local computation cost. Always compare local processing energy versus transmission energy using the MAUI framework before deciding.
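This pitfall's comparison reduces to two energy estimates. A minimal MAUI-style sketch — the parameter names and the 0.5 mJ/kB radio figure are illustrative assumptions, not values from the MAUI paper:

```python
def should_offload(local_compute_mj, payload_bytes, radio_mj_per_kb, tail_mj=0.0):
    """Offload only when local computation costs more energy than radio transfer.

    tail_mj approximates the radio's post-transmission tail energy (significant
    on cellular, small on Wi-Fi).
    """
    transfer_mj = (payload_bytes / 1024) * radio_mj_per_kb + tail_mj
    return local_compute_mj > transfer_mj

# 50 mJ of local DSP work vs shipping a 200 kB audio clip at 0.5 mJ/kB over Wi-Fi
print(should_offload(50.0, payload_bytes=200 * 1024, radio_mj_per_kb=0.5))
```

Here the transfer would cost 100 mJ, so the call returns False: keep the computation local. On cellular, where both per-kB cost and tail energy are higher, the balance tips even further toward local execution.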

Association rules learned from lab data often fail in real deployments where user patterns differ. Retrain or fine-tune the rule miner with data collected from the actual deployment environment.

9.12 What’s Next

| If you want to… | Read this |
|---|---|
| Learn the foundation of duty cycling | Duty Cycling Fundamentals |
| Understand the ACE system in depth | ACE & Shared Context Sensing |
| Explore computation offloading tradeoffs | Computation Offloading |
| See a complete context optimization system | Context Energy Optimization |
| Apply energy strategies to hardware | Hardware & Software Optimisation |