1327  Edge Computing: Cyber-Foraging and Caching

1327.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Explain Cyber-Foraging Concepts: Understand how mobile devices opportunistically discover and leverage nearby computational resources
  • Apply the What/Where/When Framework: Make informed decisions about offloading computation to surrogates
  • Compare Virtualization vs Mobile Agents: Select appropriate cyber-foraging implementation approaches
  • Design Edge Caching Strategies: Implement multi-tier caching hierarchies for optimal latency and bandwidth

1327.2 Prerequisites

Before diving into this chapter, you should be familiar with:

1327.3 Cyber-Foraging Overview

Cyber-foraging extends edge computing concepts by enabling resource-constrained mobile devices to opportunistically discover and offload computation to nearby devices (surrogates).

1327.3.1 What is Cyber-Foraging?

Definition: Cyber-foraging is the practice of augmenting the computing capabilities of wireless mobile computers by dynamically discovering and leveraging nearby computational resources.

Core Concept: Instead of relying on fixed infrastructure (cloudlets, fog nodes), mobile devices scavenge for computational resources in their environment - nearby smartphones, laptops, vehicles, or IoT gateways.

Key Characteristics:

  • Opportunistic: Uses whatever resources are available nearby
  • Dynamic: Adapts to changing network topology and device availability
  • Heterogeneous: Works across diverse device types with varying capabilities
  • Autonomous: Devices make independent decisions about offloading

1327.4 Cyber-Foraging Framework: What/Where/When

Effective cyber-foraging requires answering three fundamental questions:

%% fig-alt: "Edge caching framework diagram showing three decision dimensions: WHAT (what data/computation to cache or offload - frequently accessed data, compute-intensive tasks, real-time processing), WHERE (which edge nodes to use - nearby surrogates, dedicated fog nodes, hybrid cache hierarchy), and WHEN (timing and triggers - on-demand requests, predictive prefetching, periodic sync, event-driven updates). Shows decision flowchart connecting these dimensions to optimal edge caching strategies."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'fontSize': '14px'}}}%%
graph TB
    subgraph What["WHAT to Cache/Process?"]
        W1[Frequently Accessed Data<br/>Hot content cache]
        W2[Compute-Intensive Tasks<br/>Video transcoding, ML]
        W3[Real-Time Processing<br/>AR, sensor fusion]
        W4[Privacy-Sensitive Data<br/>Local processing only]
    end

    subgraph Where["WHERE to Process?"]
        WH1[Nearby Surrogates<br/>Opportunistic devices]
        WH2[Dedicated Fog Nodes<br/>Stable infrastructure]
        WH3[Hybrid Cache Hierarchy<br/>Multi-tier edge]
        WH4[Device-Local Cache<br/>On-device storage]
    end

    subgraph When["WHEN to Trigger?"]
        WN1[On-Demand<br/>User requests]
        WN2[Predictive Prefetch<br/>Usage patterns]
        WN3[Periodic Sync<br/>Scheduled updates]
        WN4[Event-Driven<br/>Context changes]
    end

    Decision{Edge Caching<br/>Decision}

    What --> Decision
    Where --> Decision
    When --> Decision

    Decision --> Strategy[Optimal Edge<br/>Caching Strategy]

    style Decision fill:#E67E22,stroke:#2C3E50,color:#fff
    style Strategy fill:#16A085,stroke:#2C3E50,color:#fff

Figure 1327.1: Edge caching framework diagram showing three decision dimensions: WHAT (what data/computation to cache or offload), WHERE (which edge nodes to use), and WHEN (timing and triggers).

Decision Framework:

| Question | Considerations | Example |
|----------|----------------|---------|
| WHAT? | Task characteristics (CPU/memory/network), data sensitivity, QoS requirements | Video transcoding is CPU-heavy but delay-tolerant - good candidate for offloading |
| WHERE? | Surrogate capabilities, network latency, reliability, trust | Nearby laptop has GPU - offload AR rendering; stranger’s device - avoid for private data |
| WHEN? | Task urgency, network conditions, battery state, surrogate availability | Battery <10% + Wi-Fi available + non-urgent task - offload immediately |
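
To make the framework concrete, the sketch below strings the three questions together into a single offload decision. It is purely illustrative: the task and surrogate attributes (cpu_cycles, privacy_sensitive, trusted, spare_cpu) and the thresholds are assumptions, not part of any specific system.

# Sketch: combining WHAT / WHERE / WHEN into one offload decision
def decide_offload(task, surrogates, battery_level, on_wifi):
    # WHAT: only offload tasks that are compute-heavy and not privacy-sensitive
    if task.cpu_cycles < 1e9 or task.privacy_sensitive:
        return None  # run locally
    # WHERE: consider only trusted surrogates, prefer the most spare capacity
    candidates = [s for s in surrogates if s.trusted]
    if not candidates:
        return None
    best = max(candidates, key=lambda s: s.spare_cpu)
    # WHEN: offload now if the network is cheap (Wi-Fi) or the battery is low
    if on_wifi or battery_level < 0.10:
        return best
    return None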

1327.5 Scavenger System Architecture

The Scavenger system exemplifies cyber-foraging with a distributed architecture enabling mobile devices to discover and utilize nearby computational resources:

%% fig-alt: "Cyber-foraging scavenger system architecture showing heterogeneous network topology with mobile client (netbook running Scavenger client library and application) discovering nearby surrogates (laptop, iPhone, Nokia N900, desktop) via service discovery protocol. Client side has App layer calling Scavenger Library which uses RPC to communicate with surrogates. Surrogate side has Frontend receiving RPC requests, using IPC to communicate with Execution environment where tasks run. Shows bidirectional communication and result return paths."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'fontSize': '14px'}}}%%
graph TB
    subgraph Client["Mobile Client (Netbook)"]
        App[Application Layer<br/>Photo Processing App]
        ScavLib[Scavenger Client Library<br/>Task Profiling & Discovery]
        RPC_Client[RPC Interface<br/>Remote Procedure Call]

        App -->|Task request| ScavLib
        ScavLib -->|Serialize task| RPC_Client
    end

    subgraph Network["Heterogeneous Network"]
        Discovery[Service Discovery<br/>mDNS/Zeroconf]
    end

    subgraph Surrogates["Nearby Surrogates"]
        Laptop[Laptop<br/>Intel i7, 16GB RAM]
        iPhone[iPhone<br/>A15 Bionic]
        Nokia[Nokia N900<br/>ARM Cortex]
        Desktop[Desktop PC<br/>NVIDIA GPU]
    end

    subgraph Surrogate_Arch["Surrogate Architecture (Example: Laptop)"]
        Frontend[Frontend<br/>Accept RPC requests]
        IPC[IPC Layer<br/>Inter-Process Comm]
        Execution[Execution Environment<br/>Sandbox, Resource Mgmt]

        Frontend -->|Deserialize task| IPC
        IPC -->|Execute| Execution
        Execution -->|Return results| IPC
        IPC -->|Serialize results| Frontend
    end

    RPC_Client -->|1. Discover surrogates| Discovery
    Discovery -->|2. Advertise capabilities| Surrogates
    RPC_Client -->|3. Select best surrogate| Laptop
    Laptop -->|4. Execute task| Surrogate_Arch
    Surrogate_Arch -->|5. Return results| RPC_Client

    style App fill:#2C3E50,stroke:#16A085,color:#fff
    style ScavLib fill:#16A085,stroke:#2C3E50,color:#fff
    style RPC_Client fill:#E67E22,stroke:#2C3E50,color:#fff
    style Discovery fill:#7F8C8D,stroke:#16A085,color:#fff
    style Laptop fill:#16A085,stroke:#2C3E50,color:#fff
    style Frontend fill:#2C3E50,stroke:#16A085,color:#fff
    style Execution fill:#E67E22,stroke:#2C3E50,color:#fff

Figure 1327.2: Cyber-foraging scavenger system architecture showing heterogeneous network topology with mobile client discovering nearby surrogates via service discovery protocol.

Client-Side Components:

  1. Application Layer: User application (e.g., photo processing, video transcoding)
  2. Scavenger Library: Profiles tasks (CPU, memory, network requirements), discovers surrogates, selects optimal target
  3. RPC Interface: Serializes tasks, transmits to surrogate, receives results
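
As a rough illustration of the client-side RPC path, the sketch below uses Python's standard xmlrpc library. This is not the actual Scavenger protocol; the process_photo method name and the surrogate address/port are assumptions.

# Sketch: client-side offload of a serialized task over RPC
import xmlrpc.client

def offload_task(surrogate_address, surrogate_port, image_bytes):
    # Connect to the surrogate's frontend and invoke the remote task
    proxy = xmlrpc.client.ServerProxy(f"http://{surrogate_address}:{surrogate_port}/")
    # Binary payloads must be wrapped for XML-RPC transport
    result = proxy.process_photo(xmlrpc.client.Binary(image_bytes))
    return result.data  # raw result bytes handed back to the application layer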

Surrogate-Side Components:

  1. Frontend: Accepts incoming RPC requests from clients
  2. IPC Layer: Manages communication between frontend and execution environment
  3. Execution Environment: Sandboxed environment for running client tasks with resource limits
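
A matching surrogate-side sketch, again using the standard xmlrpc server rather than Scavenger's own frontend/IPC stack; the in-process placeholder task stands in for the sandboxed execution environment with resource limits:

# Sketch: surrogate frontend accepting RPC requests and executing tasks
from xmlrpc.server import SimpleXMLRPCServer
import xmlrpc.client

def process_photo(payload):
    # A real surrogate would hand the task over IPC to a sandboxed execution
    # environment; here we run a placeholder transformation in-process
    image_bytes = payload.data
    result = image_bytes  # placeholder: identity transform
    return xmlrpc.client.Binary(result)

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(process_photo, "process_photo")
    server.serve_forever()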

1327.6 Network Topology and Device Heterogeneity

Real-world cyber-foraging environments include diverse devices with varying capabilities:

%% fig-alt: "Heterogeneous device network topology showing mobile client at center (netbook with limited CPU/battery) surrounded by diverse surrogates: laptop with Intel i7 and 16GB RAM ideal for CPU-intensive tasks, iPhone with A15 Bionic good for ML inference, Nokia N900 with ARM Cortex suitable for lightweight tasks, and desktop PC with NVIDIA GPU optimized for video processing and graphics. Shows capability-aware task assignment where photo stitching goes to laptop, object detection to iPhone, text processing to Nokia, and video transcoding to desktop GPU."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'fontSize': '14px'}}}%%
graph TB
    Client[Mobile Client<br/>Netbook<br/>Limited CPU/Battery]

    subgraph Surrogates["Available Surrogates"]
        L1[Laptop<br/>Intel i7, 16GB RAM<br/>Battery-powered]
        L2[iPhone<br/>A15 Bionic<br/>Mobile GPU]
        L3[Nokia N900<br/>ARM Cortex<br/>Limited resources]
        L4[Desktop PC<br/>NVIDIA RTX GPU<br/>AC-powered]
    end

    subgraph TaskMapping["Capability-Aware Task Assignment"]
        T1[Photo Stitching to Laptop<br/>CPU-intensive]
        T2[Object Detection to iPhone<br/>ML inference]
        T3[Text Processing to Nokia<br/>Lightweight]
        T4[Video Transcoding to Desktop<br/>GPU-accelerated]
    end

    Client -->|Discover & profile| Surrogates
    Surrogates -->|Match capabilities| TaskMapping

    style Client fill:#E67E22,stroke:#2C3E50,color:#fff
    style L1 fill:#16A085,stroke:#2C3E50,color:#fff
    style L2 fill:#16A085,stroke:#2C3E50,color:#fff
    style L3 fill:#7F8C8D,stroke:#16A085,color:#fff
    style L4 fill:#2C3E50,stroke:#16A085,color:#fff

Figure 1327.3: Heterogeneous device network topology showing mobile client surrounded by diverse surrogates with capability-aware task assignment.

Device Capability Spectrum:

| Device Type | CPU | RAM | GPU | Battery | Network | Best For |
|-------------|-----|-----|-----|---------|---------|----------|
| Netbook (Client) | 1.6 GHz dual-core | 2 GB | Integrated | Limited | Wi-Fi | Lightweight tasks only |
| Laptop | 2.8 GHz quad-core | 16 GB | Integrated | Hours | Wi-Fi/LTE | CPU-bound tasks (compilation, compression) |
| iPhone | 3.2 GHz A15 | 6 GB | 5-core GPU | Moderate | 5G/Wi-Fi | ML inference, image processing |
| Nokia N900 | 600 MHz ARM | 256 MB | None | Limited | 3G/Wi-Fi | Simple data processing |
| Desktop PC | 3.5 GHz 8-core | 32 GB | RTX 3080 | Unlimited (AC) | Gigabit | Video encoding, 3D rendering, ML training |
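
One way to express capability-aware task assignment in code is to filter surrogates on hard constraints and rank the remainder; the device records and the scoring rule below are illustrative assumptions, not measurements:

# Sketch: pick a surrogate by filtering on constraints, then ranking by raw compute
SURROGATES = [
    {"name": "laptop",  "cpu_ghz": 2.8, "cores": 4, "gpu": False, "ram_gb": 16},
    {"name": "iphone",  "cpu_ghz": 3.2, "cores": 6, "gpu": True,  "ram_gb": 6},
    {"name": "n900",    "cpu_ghz": 0.6, "cores": 1, "gpu": False, "ram_gb": 0.25},
    {"name": "desktop", "cpu_ghz": 3.5, "cores": 8, "gpu": True,  "ram_gb": 32},
]

def assign(needs_gpu, ram_gb):
    # Keep surrogates satisfying the hard constraints, then rank by a crude
    # proxy for raw compute (clock speed x core count)
    eligible = [s for s in SURROGATES
                if s["ram_gb"] >= ram_gb and (s["gpu"] or not needs_gpu)]
    return max(eligible, key=lambda s: s["cpu_ghz"] * s["cores"], default=None)

print(assign(needs_gpu=True, ram_gb=8)["name"])  # -> "desktop" (only GPU device with >= 8 GB)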

1327.7 Virtualization vs. Mobile Agents Comparison

Two primary approaches exist for implementing cyber-foraging, each with distinct trade-offs:

| Approach | Pros | Cons | Representative Systems |
|----------|------|------|------------------------|
| Virtualization | Strong isolation (sandboxing); no special software on the client; standardized VM interfaces; security through the hypervisor | VM startup delays (30-60 s); resource overhead from the hypervisor; fixed VM sizes not adaptable to the task | Cloudlets, MAUI, CloneCloud, ThinkAir |
| Mobile Agents | Dynamic deployment (code mobility); autonomous execution (detached operation); fine-grained adaptation to available resources | Agent management complexity; security risks (code injection); heterogeneous runtime environments; debugging difficulty | Scavenger, Aglets, IBM Mobile Agents |

When to Use Each:

Virtualization (Cloudlets):

  • Best for: Computation-intensive applications with graphical interfaces (AR, gaming, video editing)
  • Scenario: Predictable workloads where VM startup time is amortized over long sessions (>5 minutes)
  • Example: Mobile AR navigation app running on a cloudlet VM for an entire campus tour (30 minutes)

Mobile Agents (Scavenger):

  • Best for: Short-lived tasks (<1 minute) on opportunistic, heterogeneous devices
  • Scenario: Rapidly changing network topology (moving vehicles, crowded spaces)
  • Example: Tourist photo processing - offload photo stitching to a nearby laptop in a coffee shop for 20 seconds, then move on

Hybrid Approach:

  • Many modern systems use both: cloudlets for persistent sessions + mobile agents for quick offloads
  • Example: A smartphone uses a cloudlet for continuous AR, but offloads burst photo processing to a nearby laptop via Scavenger
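
A rough heuristic for this choice might look like the sketch below; the 5-minute threshold mirrors the session lengths above, but both inputs and the cutoff are simplifying assumptions:

# Sketch: choosing between VM-based offload and mobile-agent offload
def choose_mechanism(expected_task_seconds, topology_stable):
    # Long, predictable sessions amortize VM startup; short opportunistic
    # tasks favor lightweight agent-style offload
    if expected_task_seconds > 300 and topology_stable:
        return "virtualization"
    return "mobile_agent"

choose_mechanism(30 * 60, topology_stable=True)   # campus AR tour -> "virtualization"
choose_mechanism(20, topology_stable=False)       # coffee-shop photo stitch -> "mobile_agent"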

1327.8 Edge Caching Strategies

Cyber-foraging and edge computing benefit from intelligent caching to minimize latency and bandwidth:

Cache Placement Hierarchy:

%% fig-alt: "Three-tier edge caching hierarchy: Tier 1 On-Device Cache (1-10ms access, limited capacity 1-10GB, volatile storage) handles most frequent accesses; Tier 2 Nearby Surrogates (10-50ms access, moderate capacity 100GB-1TB, semi-persistent storage) provides local sharing; Tier 3 Fog/Cloud Cache (50-200ms access, unlimited capacity, persistent storage) serves as fallback and origin. Shows 90% hits at Tier 1, 8% at Tier 2, 2% at Tier 3 demonstrating effectiveness of hierarchical caching."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'fontSize': '14px'}}}%%
graph TB
    Client[Mobile Client<br/>Request]

    subgraph Tier1["Tier 1: On-Device Cache"]
        T1[Device Memory/Disk<br/>1-10ms access<br/>Limited capacity 1-10GB]
    end

    subgraph Tier2["Tier 2: Nearby Surrogates"]
        T2[Surrogate Cache<br/>10-50ms access<br/>Moderate capacity 100GB-1TB]
    end

    subgraph Tier3["Tier 3: Fog/Cloud Cache"]
        T3[Cloud Storage<br/>50-200ms access<br/>Unlimited capacity]
    end

    Client -->|1. Check local| Tier1
    Tier1 -->|Miss 10%| Tier2
    Tier2 -->|Miss 2%| Tier3
    Tier3 -->|Always hit| Client

    Tier1 -.->|Hit 90%| Client
    Tier2 -.->|Hit 8%| Client

    style Client fill:#E67E22,stroke:#2C3E50,color:#fff
    style T1 fill:#16A085,stroke:#2C3E50,color:#fff
    style T2 fill:#2C3E50,stroke:#16A085,color:#fff
    style T3 fill:#7F8C8D,stroke:#16A085,color:#fff

Figure 1327.4: Three-tier edge caching hierarchy showing hit rates at each level demonstrating effectiveness of hierarchical caching.
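
A minimal sketch of the tier-1/2/3 lookup path described above might look like the following, with dict-like tiers ordered fastest to slowest and a promote-on-hit policy; the promotion rule is an illustrative assumption rather than a fixed standard:

# Sketch: hierarchical cache lookup across device, surrogate, and cloud tiers
class TieredCache:
    def __init__(self, device_cache, surrogate_cache, cloud_store):
        # Ordered fastest to slowest: on-device (Tier 1), surrogate (Tier 2), cloud (Tier 3)
        self.tiers = [device_cache, surrogate_cache, cloud_store]

    def get(self, key):
        for i, tier in enumerate(self.tiers):
            value = tier.get(key)
            if value is not None:
                # Promote the hit into all faster tiers so the next access
                # is served closer to the client
                for faster in self.tiers[:i]:
                    faster[key] = value
                return value
        return None  # not found anywhere (the origin would be contacted in practice)

# Example: empty device and surrogate caches, object available only in the cloud tier
cache = TieredCache({}, {}, {"video42": b"chunk-data"})
cache.get("video42")  # misses Tier 1 and 2, hits Tier 3, then populates Tiers 1 and 2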

Caching Policy Trade-offs:

| Policy | What to Cache | When to Update | Use Case |
|--------|---------------|----------------|----------|
| LRU (Least Recently Used) | Most recently accessed items | On access | General-purpose caching |
| LFU (Least Frequently Used) | Most frequently accessed items | Periodic count reset | Static content (maps, models) |
| TTL (Time-To-Live) | All items, with expiration | After TTL expires | Dynamic data (sensor readings, weather) |
| Context-Aware | Items based on location/time/activity | On context change | Predictive prefetch (navigation routes) |
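
As one concrete instance of these policies, here is a minimal TTL cache suited to dynamic edge data such as sensor readings; the 60-second default lifetime is an arbitrary assumption:

# Sketch: TTL (time-to-live) cache for dynamic data
import time

class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, absolute expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            del self.store[key]  # expired: evict and report a miss
            return None
        return value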

Real-World Example: Mobile Video Streaming

Scenario: User watches YouTube videos on phone while commuting

Tier 1 (Device):
- Recently watched videos cached locally (2 GB)
- Hit rate: 30% (rewatching favorite clips)

Tier 2 (Nearby Wi-Fi AP Cache):
- Popular videos in neighborhood cached on Wi-Fi router (500 GB)
- Hit rate: 40% (trending content)

Tier 3 (CDN/Cloud):
- All videos available from global CDN
- Hit rate: 30% (long-tail content)

Result: 70% of video traffic served from edge (Tier 1+2), reducing cellular bandwidth by 5 GB/month

1327.9 Implementation Considerations

1327.9.1 Service Discovery

Cyber-foraging requires efficient discovery of nearby surrogates:

# Example: mDNS-based surrogate discovery using the python-zeroconf library
from zeroconf import ServiceBrowser, ServiceStateChange, Zeroconf

SERVICE_TYPE = "_cyberforage._tcp.local."

class SurrogateDiscovery:
    def __init__(self):
        self.zeroconf = Zeroconf()
        self.surrogates = {}
        # Browse the local network for surrogates advertising the service
        self.browser = ServiceBrowser(self.zeroconf, SERVICE_TYPE,
                                      handlers=[self.on_service_state_change])

    def on_service_state_change(self, zeroconf, service_type, name, state_change):
        if state_change is ServiceStateChange.Added:
            info = zeroconf.get_service_info(service_type, name)
            if info is not None:
                self.surrogates[name] = {
                    "address": info.parsed_addresses()[0],
                    "port": info.port,
                    "capabilities": info.properties,  # advertised TXT records
                }
        elif state_change is ServiceStateChange.Removed:
            self.surrogates.pop(name, None)

    def matches_requirements(self, surrogate, task_requirements):
        """Check that the surrogate advertises every required capability."""
        return all(req in surrogate["capabilities"] for req in task_requirements)

    def get_best_surrogate(self, task_requirements):
        """Select the first discovered surrogate matching the task's needs."""
        for name, surrogate in self.surrogates.items():
            if self.matches_requirements(surrogate, task_requirements):
                return surrogate
        return None

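Hypothetical usage of the discovery class above; the b"gpu" capability key is an assumption about what surrogates advertise in their mDNS TXT records:

# Hypothetical usage of SurrogateDiscovery
import time

discovery = SurrogateDiscovery()
time.sleep(3)  # allow nearby surrogates time to respond to the mDNS query
surrogate = discovery.get_best_surrogate(task_requirements=[b"gpu"])
if surrogate is not None:
    print("Offloading to", surrogate["address"], "port", surrogate["port"])
discovery.zeroconf.close()  # release the mDNS socket when done
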
1327.9.2 Task Profiling

Effective offloading requires understanding task characteristics:

# Example: Task profiler
class TaskProfile:
    def __init__(self, task):
        # The estimate_* helpers are application-specific placeholders
        # (e.g., based on measurements of previous runs of similar tasks)
        self.cpu_intensity = self.estimate_cpu(task)             # CPU cycles required
        self.memory_requirement = self.estimate_memory(task)     # bytes of RAM
        self.input_size = len(task.input_data)                   # bytes to upload
        self.expected_output_size = self.estimate_output(task)   # bytes to download
        self.deadline_ms = task.deadline                         # deadline in milliseconds

    def should_offload(self, local_cpu, surrogate_cpu, network_bandwidth, network_latency):
        """Decide whether offloading benefits this task.

        local_cpu and surrogate_cpu are in cycles per second, network_bandwidth
        in bytes per second, and network_latency in seconds (one way).
        """
        local_time = self.cpu_intensity / local_cpu
        remote_time = (self.cpu_intensity / surrogate_cpu
                       + (self.input_size + self.expected_output_size) / network_bandwidth
                       + network_latency * 2)
        # Offload only if remote execution is faster and still meets the deadline
        return remote_time < local_time and remote_time * 1000 < self.deadline_ms
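
As a quick sanity check of the formula above, here is a worked example with assumed numbers:

# Worked example: 2-gigacycle task, 1 GHz local CPU vs. 4 GHz surrogate,
# 5 MB of total data over a 10 MB/s link with 20 ms one-way latency
cpu_cycles     = 2e9
local_time     = cpu_cycles / 1e9                             # 2.00 s on-device
remote_compute = cpu_cycles / 4e9                             # 0.50 s on the surrogate
transfer_time  = 5e6 / 10e6                                   # 0.50 s of data transfer
round_trip     = 0.020 * 2                                    # 0.04 s of latency
remote_time    = remote_compute + transfer_time + round_trip  # 1.04 s total
print(remote_time < local_time)                               # True: offloading wins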

See Also:
  • Cloudlets Architecture: Fog Architecture - Detailed cloudlet VM synthesis
  • Edge and Fog Computing: ../architectures/distributed-specialized/edge-fog-cloud-overview.qmd
  • Data in the Cloud: data-in-the-cloud.qmd
  • Cloud Computing for IoT: ../architectures/foundations/cloud-computing.qmd

1327.10 Summary

  • Cyber-foraging enables mobile devices to opportunistically discover and offload computation to nearby surrogates
  • The What/Where/When framework guides offloading decisions based on task characteristics, surrogate capabilities, and timing
  • Scavenger architecture demonstrates client-server design with discovery, profiling, and execution components
  • Virtualization vs mobile agents trade strong isolation (VMs) against deployment flexibility (agents)
  • Three-tier caching places data at device, surrogate, and cloud levels for optimal access latency
  • Caching policies (LRU, LFU, TTL, context-aware) match data characteristics to update strategies
  • Service discovery (mDNS/Zeroconf) and task profiling enable intelligent offloading decisions

1327.11 What’s Next

Continue exploring edge computing patterns: