Fog resource allocation borrows from TCP congestion control: AIMD (Additive Increase Multiplicative Decrease) distributes fog workloads with provable fairness, achieving Nash equilibrium in 5-15 iterations. Game-theoretic approaches show that selfish fog nodes cause a Price of Anarchy of 1.3-2.0x, meaning uncoordinated allocation wastes 30-100% more resources than optimal cooperative strategies.
Key Concepts
Resource Allocation: Assignment of CPU, memory, network bandwidth, and storage to competing workloads on a fog node based on priority and QoS requirements
Quality of Service (QoS): Guaranteed performance levels (maximum latency, minimum throughput) negotiated for specific IoT workloads running on shared fog infrastructure
Work Stealing: Dynamic load-balancing technique where underloaded fog nodes pull tasks from overloaded neighbors, maximizing cluster utilization
Priority Queuing: Task scheduling where safety-critical events (fire alarm, motor fault) preempt lower-priority tasks (telemetry aggregation) immediately
Resource Reservation: Pre-allocating a fraction of fog node capacity for latency-sensitive workloads, preventing best-effort tasks from causing deadline misses
Bin Packing: Optimization algorithm maximizing the number of containerized workloads that fit on available fog hardware while meeting resource constraints
Elasticity at Edge: Ability to scale workloads up or down on fog nodes in response to demand changes, limited by physical hardware constraints unlike cloud
SLA Violation: Event where a fog workload fails to meet its contracted response time or throughput, potentially triggering failover or escalation to cloud resources
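As a minimal illustration of the priority-queuing concept above, a fog scheduler can hold pending tasks in a priority heap so safety-critical events are always dequeued before best-effort work (a sketch; task names and priority values are illustrative):

```python
import heapq

# Lower number = higher priority, so safety-critical events come out first.
queue = []
heapq.heappush(queue, (2, "telemetry aggregation"))  # best-effort
heapq.heappush(queue, (0, "fire alarm"))             # safety-critical
heapq.heappush(queue, (1, "motor fault"))            # high priority

order = [heapq.heappop(queue)[1] for _ in range(len(queue))]
print(order)
```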
41.1 Learning Objectives
By the end of this chapter, you will be able to:
Apply TCP Principles: Adapt proven TCP congestion control mechanisms to fog resource management
Analyze Game Theory: Distinguish Nash equilibrium, Pareto efficiency, and Price of Anarchy in fog contexts
Design Resource Allocation: Create multi-objective optimization strategies for fog deployments
Implement AIMD: Apply Additive Increase Multiplicative Decrease for distributed fog load balancing
41.2 Prerequisites
Fog/Edge Fundamentals: Knowledge of fog computing concepts and edge processing fundamentals
Transport Protocols: Familiarity with TCP congestion control, which provides the foundation for applying these principles to fog
Minimum Viable Understanding (MVU)
If you are short on time, focus on these three core concepts:
AIMD for Fog (Section 2): The Additive Increase / Multiplicative Decrease algorithm from TCP enables distributed, fair resource sharing in fog without central coordination. Devices slowly increase demands (+1) and cut sharply (x0.5) when overload is detected.
Nash Equilibrium vs. Pareto Efficiency (Section 3): Selfish device behavior converges to a Nash equilibrium that is often worse than the cooperative optimum (Pareto-efficient). The gap between them is the “Price of Anarchy.”
Network Class Choice Matters (Section 3.3): Contention-based networks (Class-1, Wi-Fi) have unbounded inefficiency from selfish behavior; scheduled networks (Class-2, TDMA) bound inefficiency to at most 2x optimal. This directly impacts fog network design.
These three ideas underpin nearly all fog resource allocation decisions in practice.
For Beginners: Why Resource Allocation Matters
Imagine a popular restaurant with limited seating. How do you decide who gets tables? First-come-first-served? Reservations? Priority for VIPs? Each approach has trade-offs between efficiency and fairness.
Fog computing faces the same challenge: thousands of IoT devices want to use limited fog resources (CPU, memory, bandwidth). Without smart allocation, some devices hog resources while others starve. The solutions come from two surprising sources:
TCP congestion control - Solved this problem for internet traffic in the 1980s
Game theory - Mathematical framework for analyzing strategic decisions
| Term | Simple Explanation |
|---|---|
| AIMD | Additive Increase, Multiplicative Decrease - slowly request more, quickly back off if overloaded |
| Nash Equilibrium | Stable state where no one can improve by changing strategy alone |
| Pareto Efficiency | Optimal allocation where you can't help one device without hurting another |
| Price of Anarchy | How much worse selfish behavior is compared to coordinated sharing |
Sensor Squad: The Playground Sharing Problem
Hey Sensor Squad! Imagine your school has ONE amazing playground with slides, swings, and monkey bars. At recess, ALL the kids want to play at the same time!
Sammy the Sensor says: “I figured it out! Each kid slowly walks toward the playground. If it gets too crowded, you step back fast and wait. That way, everyone gets a turn!”
That is exactly what AIMD does! Devices slowly ask for more (walk toward playground) and quickly back off (step back) when things get crowded.
Lila the LED adds: “But what if one kid is a bully and hogs the swings?” That is the Nash Equilibrium problem – everyone playing selfishly. The solution? Make fair rules (like taking turns), just like fog computing uses scheduling and quotas!
Max the Microcontroller reminds us: “The best playground is one where everyone gets equal time and nobody has to wait too long. That is Pareto Efficiency!”
41.3 Introduction
Resource allocation in fog computing addresses one of the most fundamental challenges in distributed systems: how do thousands of autonomous IoT devices share limited computational resources fairly and efficiently, without a central coordinator dictating every decision?
This chapter draws on two powerful theoretical foundations. First, TCP congestion control – a family of algorithms refined over 35+ years that enable billions of internet devices to share bandwidth without coordination. Second, game theory – the mathematical study of strategic interactions that reveals when selfish behavior leads to poor outcomes and how mechanism design can align individual incentives with collective good.
Understanding these foundations is essential for designing fog systems that remain stable under load, fair across heterogeneous devices, and efficient as deployments scale from tens to tens of thousands of nodes.
41.4 Fog Computing Optimization Strategies
Fog computing optimization strategies involve balancing multiple competing objectives – latency, energy, bandwidth, cost – through intelligent resource allocation, task partitioning, and placement decisions across the edge-fog-cloud continuum. The optimization process flows from inputs through an engine to deployment configurations.
The following table summarizes the four stages, their components, and specific details for each optimization strategy:
| Stage | Components | Details |
|---|---|---|
| Optimization Inputs | Application Requirements | Latency constraints, Energy budget, Reliability needs |
| Optimization Inputs | Resource Availability | Fog CPU/Memory, Network bandwidth, Device capabilities |
| Optimization Engine | Resource Allocator | CPU/Memory per task, Network bandwidth, Storage quotas |
| Deployment Configuration | Processing Pipeline | Edge filters, Fog analytics, Cloud storage |
41.5 Alternative View: Fog Optimization Strategies Visualization
Fog optimization framework diagram showing inputs flowing through optimization engine to strategy outputs and deployment configuration
Figure 41.1: Fog computing optimization framework showing how application requirements and resource availability feed into optimization engines (multi-objective optimizer, constraint solver, resource allocator) that determine task partitioning, placement decisions, data management, and network optimization strategies, ultimately creating deployment configurations that distribute processing across edge, fog, and cloud tiers.
41.6 TCP Congestion Principles Applied to Fog
Fog computing resource allocation can learn from decades of TCP congestion control research. Five fundamental TCP principles apply directly to fog resource management. The mapping between TCP mechanisms and fog resource management is remarkably direct, because both face the same core problem: distributed agents sharing limited resources without central coordination.
41.6.1 Five TCP Principles for Fog Computing
41.7 Alternative View: TCP Principles for Fog
Diagram of five TCP principles mapped to fog computing resource management strategies
Figure 41.2: Five TCP congestion control principles applied to fog computing resource management.
Alternative View: AIMD Behavior Over Time
This variant visualizes how AIMD (Additive Increase Multiplicative Decrease) works over time, showing the characteristic “sawtooth” pattern that enables fair, distributed resource sharing.
AIMD sawtooth pattern over time showing gradual additive increase followed by sharp multiplicative decrease on congestion detection, with multiple devices converging to fair shares
Key Insight: AIMD’s brilliance is that it achieves fair resource allocation without coordination. Each device independently increases slowly and decreases sharply based only on local observations (latency, rejections), yet the system converges to equitable sharing.
1. End-point Control via Negative Feedback
TCP: Sender adjusts transmission rate based on receiver ACKs/NACKs
Fog: Edge devices adjust offloading rate based on fog node acceptance/rejection signals
2. Sliding Window Mechanism
TCP: Dynamic buffer size controls how much data in-flight
Fog: Dynamic resource quotas (CPU/memory/bandwidth) per client control how many tasks in-flight
3. Additive Increase Multiplicative Decrease (AIMD)
TCP: Slowly increase sending rate (+1 per RTT), cut sharply on loss (x0.5)
Fog: Gradually increase task offloading, cut sharply when fog node signals overload
Why it works: AIMD converges to fair, efficient resource sharing without central coordination
4. Inferring Congestion by Loss or Delay
TCP: Packet loss or increasing RTT signals network congestion
Fog: Task queue length or response time increases signal fog node overload
5. RTT-based Timers and Estimation
TCP: Adaptive retransmission timers based on measured round-trip time
Fog: Adaptive offloading decisions based on measured edge-to-fog latency
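Principle 5 can be sketched as an exponentially weighted moving average of measured edge-to-fog latency, analogous to TCP's smoothed RTT (SRTT) estimator; the gain constant and latency samples below are illustrative:

```python
ALPHA = 0.125  # EWMA gain; classic TCP uses 1/8 for its SRTT estimate

def update_srtt(srtt, sample_ms):
    """Blend one new latency sample into the smoothed estimate."""
    if srtt is None:
        return sample_ms  # first measurement seeds the estimate
    return (1 - ALPHA) * srtt + ALPHA * sample_ms

# Feed in edge-to-fog latency samples; a late spike pulls the estimate up
# gradually, so a single outlier does not trigger an immediate reaction.
srtt = None
for sample in [5.0, 6.0, 5.5, 50.0]:  # ms, illustrative values
    srtt = update_srtt(srtt, sample)
print(f"smoothed latency: {srtt:.2f} ms")
```

An offloading decision can then compare this smoothed estimate (rather than a single noisy sample) against an overload threshold.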
For Beginners: Why TCP Matters for Fog Computing
TCP (Transmission Control Protocol) solved a hard problem in the 1980s: How do thousands of computers share the internet fairly without a central coordinator telling everyone when to send data?
The Breakthrough: Each computer independently measures how well things are going (are my packets arriving? how long do they take?) and adjusts its behavior accordingly. If packets get lost, slow down sharply. If everything’s working, speed up gradually.
Fog Connection: Fog computing faces the same challenge. Thousands of IoT devices want to offload tasks to fog nodes. No central coordinator. Each device must independently decide: “Should I send my next task to the fog node, or process it locally?” TCP’s 35+ years of proven algorithms (AIMD, congestion detection, adaptive timers) provide battle-tested solutions.
Real Example: Smart city with 10,000 sensors, 50 fog nodes. Each sensor independently monitors: “When I send tasks to Fog Node #7, how long until I get results?” If latency increases from 5ms to 50ms, the sensor infers overload and reduces offloading (AIMD multiplicative decrease). If latency stays low, gradually increase offloading (AIMD additive increase). Result: Automatic load balancing without complex coordination protocols.
41.7.1 AIMD Pseudocode for Fog Resource Allocation
The following pseudocode shows how a fog-aware IoT device implements AIMD-based task offloading:
```python
# AIMD-based Fog Task Offloading
# Each device runs this independently - no central coordinator needed

INITIAL_RATE = 1             # Start with 1 task per interval
ADDITIVE_INCREMENT = 1       # Increase by 1 task per successful interval
MULTIPLICATIVE_FACTOR = 0.5  # Cut rate in half on overload
LATENCY_THRESHOLD_MS = 50    # Overload signal threshold
MIN_RATE = 1
MAX_RATE = 100

offload_rate = INITIAL_RATE

while device_is_active:
    # Send 'offload_rate' tasks to fog node this interval
    results = fog_node.submit_tasks(offload_rate)

    # Measure round-trip latency (like TCP measures RTT)
    measured_latency = results.average_latency_ms
    rejection_count = results.rejected_tasks

    # Congestion detection (like TCP loss/delay detection)
    if measured_latency > LATENCY_THRESHOLD_MS or rejection_count > 0:
        # MULTIPLICATIVE DECREASE: cut sharply
        offload_rate = max(MIN_RATE, int(offload_rate * MULTIPLICATIVE_FACTOR))
        log(f"Overload detected! Rate: {offload_rate}")
    else:
        # ADDITIVE INCREASE: grow slowly
        offload_rate = min(MAX_RATE, offload_rate + ADDITIVE_INCREMENT)
        log(f"Capacity available. Rate: {offload_rate}")

    wait(interval_duration)
```
Why this works: When multiple devices run this algorithm independently, they converge to equal shares of fog resources. The asymmetry between slow increase and fast decrease is mathematically proven to achieve fairness – devices that temporarily use more than their fair share lose resources faster than they gain them.
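This fairness property is easy to check with a tiny two-device simulation: each multiplicative cut halves the gap between the devices, while additive increases leave the gap unchanged, so the gap shrinks geometrically (a simplified sketch with illustrative parameters):

```python
# Two devices with very unequal starting rates share 60 units of capacity.
# Each runs AIMD independently: +1 when there is headroom, x0.5 on overload.
a, b = 10.0, 40.0
capacity = 60.0
for _ in range(200):
    if a + b > capacity:
        a, b = a * 0.5, b * 0.5  # multiplicative decrease halves the gap
    else:
        a, b = a + 1.0, b + 1.0  # additive increase preserves the gap
print(f"final rates: a={a:.2f}, b={b:.2f}")  # nearly equal shares
```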
41.7.2 TCP Variants Evolution and Fog Implications
| TCP Variant | Year | Key Innovation | Fog Computing Lesson |
|---|---|---|---|
| TCP Tahoe | 1988 | Slow start + congestion avoidance | Start conservatively, probe for capacity |
| TCP Reno | 1990 | Fast retransmit/recovery | Quick recovery from temporary overload |
| TCP NewReno | 1999 | Multiple loss handling | Handle multiple simultaneous fog failures |
| TCP CUBIC | 2005 | Cubic growth for high-speed networks | Aggressive scaling for high-capacity fog tiers |
Key Takeaway: Fog resource allocation doesn’t need to reinvent congestion control—adapt proven TCP principles to task offloading, resource quotas, and distributed coordination.
Try It: AIMD Fog Resource Simulator
Adjust the AIMD parameters below and watch how multiple fog devices converge to fair resource shares over time. Notice how the additive increment and multiplicative factor affect convergence speed and stability.
For deeper understanding of TCP congestion control mechanisms, see:
- Transport Fundamentals - TCP flow control and congestion avoidance
- Transport Optimizations - Modern TCP variants and performance tuning
41.8 Game Theory for Resource Allocation
Fog computing resource allocation is a multi-player game where devices (players) compete for shared fog resources (bandwidth, CPU, memory). Game theory provides mathematical frameworks for understanding equilibria and fairness. Understanding these frameworks helps system designers predict how devices will behave and design mechanisms that produce desirable outcomes even when each device acts selfishly.
41.8.1 Network Access Classes and Throughput Dependence
Consider two classes of network access technologies commonly used in fog deployments:
Class-1: CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
Throughput of each user depends on all users in the network
Contention-based: All devices compete for channel access
Example: Wi-Fi in crowded environments
Class-2: TDMA (Time Division Multiple Access)
Throughput depends only on number of users (N)
Each user gets fixed time slot: Throughput = Total_Bandwidth / N
Example: Cellular networks with scheduled access
41.9 Alternative View: Game Theory Network Classes
Game theory framework comparing Class-1 contention and Class-2 scheduled network access for fog resource allocation
Figure 41.3: Game theory framework comparing contention-based and scheduled access for fog resource allocation.
For Beginners: Game Theory and the Tragedy of the Commons
What is Game Theory? The mathematical study of strategic decision-making when multiple players interact. Each player wants the best outcome for themselves, but their actions affect everyone.
The Classic Example - Tragedy of the Commons: Imagine a shared pasture (the “commons”) where 10 farmers graze sheep. Each farmer thinks: “If I add one more sheep, I get 100% of the profit from that sheep, but the cost of overgrazing is shared among all 10 farmers (I only pay 10% of the cost).” Result: Everyone adds too many sheep, the pasture is destroyed, everyone loses.
Fog Computing Connection: Shared fog resources (bandwidth, CPU, memory) are the “commons.” Each IoT device wants to offload as many tasks as possible to the fog (benefit: save battery, get fast results). But if everyone offloads aggressively, the fog node becomes overloaded and everyone’s performance degrades.
Nash Equilibrium: The stable state where no device can improve its performance by changing its offloading strategy alone (given what others are doing). Problem: The Nash equilibrium is often inefficient – everyone would be better off if they collectively reduced offloading, but no individual has incentive to do so alone.
Pareto Efficiency: An allocation is Pareto-efficient if you can’t improve one device’s performance without hurting another. This is the “fair and efficient” goal.
Price of Anarchy (PoA): How much worse is the Nash equilibrium (selfish behavior) compared to the Pareto-efficient optimum (coordinated behavior)?
Class-1 (Wi-Fi): PoA can be unbounded (very bad – selfish behavior can collapse the system)
Class-2 (TDMA): PoA is less than or equal to 2 (at worst, selfish behavior is twice as bad as optimal – much better!)
41.9.1 Nash Equilibrium and Pareto Efficiency
Nash Equilibrium: A strategy profile where no player can improve their outcome by unilaterally changing their strategy (given others’ strategies remain fixed).
Problem: Nash equilibria can be inefficient – all players might achieve better outcomes through cooperation, but individual incentives prevent it.
Pareto Efficiency: An allocation is Pareto-efficient if no player can be made better off without making another player worse off.
Goal: Design fog resource allocation mechanisms that drive selfish devices toward Pareto-efficient Nash equilibria.
Quick Check: Game Theory Fundamentals
41.9.2 Price of Anarchy (PoA) Bounds
The Price of Anarchy measures how much system performance degrades due to selfish behavior compared to optimal coordinated allocation.
Class-1 Networks (802.11 DCF):
PoA: Unbounded (in worst case)
Meaning: Selfish behavior can arbitrarily degrade system performance
Real example: One aggressive Wi-Fi device using 90% airtime can reduce everyone else to 10% total
Class-2 Networks (TDMA):
PoA ≤ 2, or more generally bounded by (N+M)/N (where N = number of players, M = number of resources)
Meaning: Even with selfish behavior, system achieves at least 50% of optimal efficiency
Real example: 10 devices sharing TDMA slots selfishly still achieve reasonable fairness
| Metric | Class-1 (Contention) | Class-2 (Scheduled) |
|---|---|---|
| Price of Anarchy | Unbounded | PoA ≤ 2 |
| Worst-case efficiency | Can reach 0% | At least 50% of optimal |
| Predictability | Low under load | High under any load |
| Peak throughput | Higher (when uncontested) | Lower (scheduling overhead) |
| Fairness guarantee | None | Guaranteed |
| Best for | Low-load, bursty traffic | Critical, time-sensitive IoT |
Putting Numbers to It
The Price of Anarchy bounds quantify inefficiency from selfish behavior:
Class-2 TDMA: With \(N = 10\) devices sharing bandwidth \(B = 100 \text{ Mbps}\), optimal allocation gives each \(\frac{100}{10} = 10 \text{ Mbps}\). Selfish Nash equilibrium: each device requests maximum, causing system to allocate \(\frac{100}{10} = 10 \text{ Mbps}\) anyway. PoA = \(\frac{\text{Optimal}}{\text{Nash}} = \frac{100}{100} = 1\) (no degradation — TDMA enforces fairness).
Class-1 Wi-Fi: One selfish device using CSMA/CA aggressively can capture 60% of airtime, leaving 40% for 9 others. Aggregate Nash throughput = \(60 + 9 \cdot (40/9) = 100\), but the distribution is \((60, 4.4, 4.4, \ldots)\). PoA for fairness = \(\frac{10}{40/9} = 2.25\) for victims — unbounded PoA means one device can drive others to near-zero throughput.
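The arithmetic of the Wi-Fi example can be reproduced directly; the airtime shares are the illustrative numbers from the text above, not measurements:

```python
# Class-1 (Wi-Fi) fairness Price of Anarchy: one aggressive device captures
# 60% of airtime on a 100 Mbps channel; nine victims split the remaining 40%.
total_mbps = 100.0
n_devices = 10
fair_share = total_mbps / n_devices          # 10 Mbps each under fair sharing
victim_share = total_mbps * 0.4 / 9          # ~4.44 Mbps per victim
poa_for_victims = fair_share / victim_share  # how much worse off each victim is
print(f"victim throughput: {victim_share:.2f} Mbps, PoA: {poa_for_victims:.2f}")
```

As the aggressive device's airtime share approaches 100%, the victims' throughput approaches zero and this ratio grows without bound, which is what "unbounded PoA" means in practice.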
Try It: Price of Anarchy Explorer
Compare how selfish behavior affects Class-1 (contention-based) and Class-2 (scheduled) fog networks. Adjust the number of devices and the aggression level to see how the Price of Anarchy changes.
Worked Example: Fog Offloading Game
Scenario:
5 IoT devices (players) competing for 1 fog node with 100 units of CPU capacity
Each device chooses offloading intensity: Low (10 units), Medium (20 units), High (30 units)
If total demand exceeds 100 units, fog node rejects tasks probabilistically
Payoff Matrix (for two representative devices, simplified):
| | Device B: Low (10) | Device B: Medium (20) | Device B: High (30) |
|---|---|---|---|
| Device A: Low (10) | (10, 10) - both safe | (10, 20) - B gains | (10, 30) - B gains more |
| Device A: Medium (20) | (20, 10) - A gains | (20, 20) - Pareto optimal | (20, 25) - slight overload |
| Device A: High (30) | (30, 10) - A gains more | (25, 20) - slight overload | (20, 20) - heavy overload, both lose |
Values represent effective throughput after accounting for rejection probability.
Strategies:
Cooperative (Pareto-efficient): Each device uses Medium (20 units) -> Total = 100 units, no rejection, everyone happy
Nash Equilibrium (selfish): Each device independently chooses High (30 units) because “more for me!” -> Total = 150 units -> 33% rejection rate -> Everyone worse off than cooperative case
Price of Anarchy: PoA = Performance_Cooperative / Performance_Nash = 100% success / 67% success ≈ 1.5
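The cooperative-versus-selfish comparison above can be computed with a small model in which the fog node rejects excess demand proportionally, following the scenario's probabilistic-rejection assumption (a sketch; function name is illustrative):

```python
def success_rate(total_demand, capacity):
    """Fraction of submitted tasks the fog node accepts."""
    if total_demand <= capacity:
        return 1.0
    return capacity / total_demand  # proportional probabilistic rejection

capacity = 100
cooperative = success_rate(5 * 20, capacity)  # all five devices choose Medium
selfish = success_rate(5 * 30, capacity)      # all five devices choose High
poa = cooperative / selfish
print(f"cooperative: {cooperative:.0%}, selfish: {selfish:.0%}, PoA: {poa:.2f}")
```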
Choose offloading strategies for 5 IoT devices competing for a shared fog node. See how cooperative vs. selfish choices affect rejection rates and the Price of Anarchy in real time.
Fairness Metrics for Fog Resource Allocation
Beyond Nash equilibrium and Pareto efficiency, fog system designers must choose a specific fairness criterion that determines how resources are divided. Different fairness definitions lead to different optimal allocations.
| Fairness Metric | Definition | Fog Example | When to Use |
|---|---|---|---|
| Max-Min Fairness | Maximize the minimum allocation across all devices | Ensure every sensor gets at least X ms of fog CPU | Safety-critical systems where no device can be starved |
| Proportional Fairness | Maximize the product of all allocations (log-sum) | Higher-priority devices get proportionally more fog time | Mixed-priority deployments (medical vs. environmental sensors) |
| Weighted Fairness | Each device gets shares proportional to assigned weights | A weight-3 medical sensor gets 3× the share of a weight-1 logger | Deployments with explicit priority levels |
| Jain's Fairness Index | J = 1.0 means perfectly equal; J = 1/n means maximally unfair | Score how evenly fog CPU is split across clients | Evaluating and comparing allocation schemes |
Jain’s Fairness Index in Practice:
For a system with N devices receiving allocations x_1, x_2, …, x_N:
\(J(x) = \frac{\left(\sum_{i=1}^{N} x_i\right)^2}{N \sum_{i=1}^{N} x_i^2}\)
Range: 1/N (worst case: one device gets everything) to 1.0 (best case: all devices equal)
Threshold: J > 0.9 is generally considered “fair” in practice
Example: 4 fog clients with CPU allocations [25, 25, 25, 25] yields J = 1.0 (perfectly fair). Allocations [90, 5, 3, 2] yields J = 0.30 (highly unfair).
Try It: Jain’s Fairness Index Calculator
Enter resource allocations for up to 8 fog devices and see how Jain’s Fairness Index responds. Experiment with equal vs. skewed distributions to build intuition about what “fair” looks like quantitatively.
Common Pitfalls in Fog Resource Allocation
Avoid these common design mistakes when implementing fog resource allocation:
1. Static Allocation (the “Set and Forget” trap)
Mistake: Assigning fixed CPU/memory quotas to each device at deployment time
Why it fails: IoT workloads are bursty and time-varying. A sensor that needs 50% CPU during peak hours and 2% at night wastes resources with a fixed 25% allocation
Fix: Use dynamic allocation with AIMD or feedback-based adjustment
2. Ignoring Heterogeneity
Mistake: Treating all IoT devices identically in the allocation scheme
Why it fails: A medical sensor with 10ms latency requirements and an environmental logger with 60s tolerance have fundamentally different needs
Fix: Use weighted fairness or priority classes with differentiated service levels
3. Centralized Bottleneck
Mistake: Routing all allocation decisions through a single coordinator
Why it fails: Single point of failure; does not scale beyond hundreds of devices; coordinator itself becomes a resource bottleneck
Fix: Use distributed algorithms (AIMD, local game-theoretic strategies) where each device makes independent decisions
4. Optimizing the Wrong Metric
Mistake: Maximizing total throughput when the application needs minimum latency
Why it fails: A system can have high aggregate throughput while individual safety-critical devices experience unacceptable delays
Fix: Define clear optimization objectives tied to application requirements before selecting an allocation strategy
5. Ignoring Price of Anarchy
Mistake: Assuming devices will cooperate without incentive mechanisms
Why it fails: In Class-1 networks, unbounded PoA means one misbehaving device can collapse performance for all
Fix: Design mechanisms (pricing, quotas, backpressure) that align selfish incentives with system-wide efficiency
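One concrete form of the quota-based fix is per-client admission control at the fog node, so no single client can exceed a configured share of capacity. This is a minimal sketch under assumed interfaces (class name, parameters, and the 30% cap are illustrative, not from a real system):

```python
class QuotaAdmission:
    """Grant resource requests subject to a per-client cap and total capacity."""

    def __init__(self, capacity, max_share=0.3):
        self.capacity = capacity
        self.max_per_client = capacity * max_share  # hard cap per client
        self.used = {}

    def admit(self, client_id, demand):
        # Grant the smallest of: what was asked, the client's remaining quota,
        # and the node's remaining total capacity.
        granted = min(demand,
                      self.max_per_client - self.used.get(client_id, 0.0),
                      self.capacity - sum(self.used.values()))
        granted = max(0.0, granted)
        self.used[client_id] = self.used.get(client_id, 0.0) + granted
        return granted

adm = QuotaAdmission(capacity=100.0, max_share=0.3)
print(adm.admit("A", 90.0))  # capped at 30% of capacity, however much A asks for
```

The cap bounds the damage any one selfish or buggy client can do, which is exactly the failure mode described in pitfall 5.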
41.11 Knowledge Check
Test your understanding of resource allocation concepts.
Quiz: Resource Allocation and Game Theory
Interactive: Fog Node Task Offloading
Worked Example: AIMD Fog Resource Allocation Convergence
Scenario: A fog node serves 4 IoT devices (A, B, C, D) competing for 100 units of CPU capacity. Each device independently runs AIMD (Additive Increase, Multiplicative Decrease) without coordination.
AIMD Parameters:
Additive increase: +5 units per interval (when no overload detected)
Multiplicative decrease: ×0.5 (when total demand exceeds capacity)
Initial allocations: A=10, B=20, C=30, D=40 (total: 100 units)
Iteration 1 (t=1):
All devices increase: A=15, B=25, C=35, D=45
Total: 120 units > 100 → overload
All devices cut in half: A=7.5, B=12.5, C=17.5, D=22.5
Total: 60 units (under capacity)
Iteration 2 (t=2):
All devices increase: A=12.5, B=17.5, C=22.5, D=27.5
Total: 80 units (OK, no overload)
Iteration 3 (t=3):
All increase: A=17.5, B=22.5, C=27.5, D=32.5
Total: 100 units (OK, at capacity)
Iteration 4 (t=4):
All increase: A=22.5, B=27.5, C=32.5, D=37.5
Total: 120 units > 100 → overload
All cut: A=11.25, B=13.75, C=16.25, D=18.75
Total: 60 units
Iteration 5 (t=5):
All increase: A=16.25, B=18.75, C=21.25, D=23.75
Total: 80 units (OK)
Iteration 6 (t=6):
All increase: A=21.25, B=23.75, C=26.25, D=28.75
Total: 100 units (OK, at capacity)
Observation: By iteration 6, allocations are converging toward equal shares (25 units each). The system oscillates around 100 units capacity, with devices gradually approaching fair allocation.
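The iteration trace above can be reproduced in a few lines, starting from the unequal allocations (10, 20, 30, 40) noted in the convergence estimate, and assuming each interval applies the additive increase first, then the ×0.5 cut if total demand exceeds capacity:

```python
# Reproduce the 4-device AIMD trace: +5 per interval, x0.5 on overload.
alloc = [10.0, 20.0, 30.0, 40.0]   # initial allocations for A, B, C, D
for t in range(1, 7):
    alloc = [a + 5 for a in alloc]        # additive increase
    if sum(alloc) > 100:                  # overload: total demand > capacity
        alloc = [a * 0.5 for a in alloc]  # multiplicative decrease
    print(f"t={t}: {alloc}, total={sum(alloc)}")
```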
Mathematical Proof of Convergence:
Let \(x_i(t)\) be device \(i\)’s allocation at time \(t\)
AIMD dynamics: \(x_i(t+1) = x_i(t) + \alpha\) if no overload, \(x_i(t+1) = \beta \cdot x_i(t)\) if overload
At equilibrium: \(\sum x_i = C\) (capacity), and all devices have equal allocation (Chiu-Jain theorem, 1989)
Convergence Time Estimate:
From unequal start (10, 20, 30, 40) to near-equal (23, 24, 25, 28): approximately 10-15 AIMD iterations
If interval = 5 seconds, convergence time = 50-75 seconds
Key Insight: AIMD achieves fair resource sharing without coordination. No central controller. No inter-device communication. Each device measures only local signals (overload yes/no) and adjusts independently. The asymmetry (slow increase +1, fast decrease ×0.5) is what drives convergence to fairness.
Decision Framework: Selecting Fairness Metric for Fog Resource Allocation
| Application Context | Use Max-Min Fairness | Use Proportional Fairness | Use Weighted Fairness |
|---|---|---|---|
| All devices have equal priority | ✓ Best choice | Alternative if easier to implement | Not needed |
| Devices have different priorities (medical > environmental) | ✗ Wastes resources on low-priority | ✗ Doesn't respect priorities | ✓ Best choice |
| Bottleneck resource (CPU, bandwidth) | ✓ Guarantees minimum for all | ~ Balances min guarantee + efficiency | ✓ With weights = priority |
| Maximize total throughput | ✗ Sacrifices throughput for fairness | ✓ Balances fairness + throughput | ~ Depends on weight choice |
| Prevent device starvation | ✓ Strongest guarantee | ✓ Prevents starvation (log penalty) | ✓ If weights > 0 for all |
Formulas:
Max-Min Fairness: Maximize \(\min(x_1, x_2, ..., x_n)\) subject to \(\sum x_i \leq C\)
- Result: All devices get equal share (or their max demand, whichever is smaller)
- Example: 4 devices, 100 units capacity → each gets 25 units
Proportional Fairness: Maximize \(\sum \log(x_i)\) subject to \(\sum x_i \leq C\)
- Result: Devices get shares proportional to their demand (within capacity limits)
- Example: Demands (10, 20, 30, 40), capacity 80 → allocations (8, 16, 24, 32)
Weighted Fairness: Maximize \(\sum w_i \log(x_i)\) subject to \(\sum x_i \leq C\)
- Result: Devices get shares proportional to their weights
- Example: Weights (1, 2, 3, 4), capacity 100 → allocations (10, 20, 30, 40)
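Two of these criteria can be turned into small allocators: max-min via progressive filling, and weighted fairness via the closed form \(x_i = \frac{w_i}{\sum_j w_j} C\). This is a sketch with illustrative function names:

```python
def max_min_allocation(demands, capacity):
    """Progressive filling: raise everyone equally, capping at each demand."""
    alloc = [0.0] * len(demands)
    remaining = capacity
    active = list(range(len(demands)))
    while active and remaining > 1e-9:
        share = remaining / len(active)
        still_active = []
        for i in active:
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] > 1e-9:  # demand not yet satisfied
                still_active.append(i)
        active = still_active
    return alloc

def weighted_allocation(weights, capacity):
    """Each device gets capacity in proportion to its weight."""
    total_w = sum(weights)
    return [capacity * w / total_w for w in weights]

print(max_min_allocation([100, 100, 100, 100], 100))  # equal shares of 25
print(weighted_allocation([1, 2, 3, 4], 100))         # shares 10, 20, 30, 40
```

Note how max-min gives satisfied devices only what they can use and redistributes the surplus, which is exactly why it wastes nothing on the low-demand temperature sensor mentioned in the common mistake below.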
Key Decision Rule:
Equal-priority devices? → Max-min fairness (simplest, most fair)
Want to maximize total system utility? → Proportional fairness (balances fairness + efficiency)
Devices have explicit priority levels? → Weighted fairness with weights = priority
Common Mistake: Using max-min fairness when devices have wildly different demands (e.g., video camera needing 100× more bandwidth than temperature sensor). This wastes resources giving the temperature sensor bandwidth it cannot use. Use proportional or weighted fairness instead.
Common Mistake: Ignoring Price of Anarchy in Class-1 Networks (Wi-Fi)
The Mistake: Deploying fog gateways on shared Wi-Fi (Class-1 network with contention-based access) and assuming all devices will cooperate fairly. One selfish device aggressively transmits without backoff, monopolizing the channel.
Real-World Example: A smart office deployed 500 occupancy sensors on shared Wi-Fi. One poorly configured sensor had a firmware bug causing it to retry failed transmissions with zero backoff (violating 802.11 CSMA/CA). This single device consumed 60% of airtime, reducing bandwidth for the other 499 sensors to 40% shared.
Why This Happens (Price of Anarchy):
Class-1 networks (Wi-Fi): Each device’s throughput depends on all other devices’ behavior (contention-based MAC)
Nash equilibrium: Each device maximizes its own throughput → everyone transmits aggressively → collisions → throughput collapse
Price of Anarchy (PoA): Ratio of optimal throughput to selfish throughput = unbounded for Class-1 networks
Proof by Example:
Cooperative: 10 devices, equal share → 10 Mbps each, total 100 Mbps system throughput
Selfish: Each device transmits at max rate → collisions → 1 Mbps each, total 10 Mbps system throughput
PoA: 100 Mbps / 10 Mbps = 10× (90% efficiency loss due to selfishness!)
How to Avoid:
1. Use Class-2 networks (TDMA/scheduled access) for critical fog traffic:
PoA for Class-1: Unbounded (can approach infinity with adversarial behavior)
PoA for Class-2: Bounded at 2× (worst case is 50% efficiency)
Decision Rule: For fog deployments with > 50 devices or any safety-critical traffic, avoid Class-1 networks (Wi-Fi) or use strict admission control + QoS. Prefer Class-2 scheduled access (cellular, proprietary IoT) where PoA is provably bounded.
Interactive Quiz: Match Concepts
Interactive Quiz: Sequence the Steps
🏷️ Label the Diagram
💻 Code Challenge
41.12 Summary
41.12.1 Key Takeaways
This chapter covered fog resource allocation strategies using two powerful theoretical foundations:
| Concept | Key Insight | Practical Application |
|---|---|---|
| Optimization Framework | Balance latency, energy, bandwidth, and cost through multi-objective optimization | Use the four-stage pipeline: inputs, engine, strategies, deployment configuration |
| TCP Principles for Fog | Five proven congestion control principles map directly to fog resource management | Implement AIMD-based task offloading without needing a central coordinator |
| AIMD Algorithm | Slow additive increase + fast multiplicative decrease = fair convergence | Each device independently adjusts offloading rate based on local latency/rejection signals |
| Game Theory | Selfish devices converge to Nash equilibria that are often inefficient | Design mechanisms (pricing, quotas, feedback) to align selfish incentives with system efficiency |
| Network Classes | Class-1 (contention) has unbounded PoA; Class-2 (scheduled) bounds PoA to 2 | Use TDMA for critical IoT traffic, Wi-Fi for best-effort data; hybrid recommended |
| Fairness Metrics | Max-min, proportional, and weighted fairness serve different deployment needs | Choose fairness metric based on device heterogeneity and application requirements |
| Common Pitfalls | Static allocation, ignoring heterogeneity, and centralized bottlenecks are frequent errors | Use dynamic, distributed, priority-aware allocation with appropriate incentive mechanisms |
41.12.2 Design Decision Checklist
When designing a fog resource allocation system, consider:
Do devices need guaranteed minimum resources? Use max-min fairness or TDMA scheduling
Are device priorities heterogeneous? Use weighted fairness with differentiated service classes
Is central coordination feasible? If not, use distributed AIMD-based approaches
What is the acceptable Price of Anarchy? Choose network class accordingly (Class-2 for bounded PoA)
How bursty is the workload? Dynamic allocation (AIMD) outperforms static quotas for variable loads
41.14.3 Example 3: Weighted Fairness with Priorities (Advanced)
Scenario: Medical sensors (priority 3) and environmental sensors (priority 1) share fog node.
Modified AIMD:
```python
def weighted_aimd(rate, priority, overload, multiplicative_factor=0.5):
    """One AIMD update where the additive increase scales with priority."""
    if overload:
        return rate * multiplicative_factor  # Same decrease for all
    return rate + priority * 1               # High priority increases faster
```
Result: Medical sensors converge to 3× allocation of environmental sensors (proportional fairness).
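The 3× convergence claim can be checked with a two-device simulation: same multiplicative cut, additive increments of 3 and 1 (capacity and step count below are illustrative):

```python
# Weighted AIMD: a priority-3 medical sensor increases by +3 per interval, a
# priority-1 environmental sensor by +1; both halve on overload.
medical, env = 1.0, 1.0
capacity = 80.0
for _ in range(300):
    if medical + env > capacity:
        medical, env = medical * 0.5, env * 0.5  # same cut for all
    else:
        medical, env = medical + 3.0, env + 1.0  # weighted increase
print(f"rate ratio: {medical / env:.2f}")  # approaches the 3:1 weight ratio
```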
41.15 Concept Check
Quick Quiz: Resource Allocation Fundamentals
41.16 Try It Yourself
Exercise 1: AIMD Simulation
Implement a simple AIMD simulator for 3 devices competing for 100 CPU units:
```python
# Fog Resource AIMD Simulator
devices = [10, 20, 30]  # Starting rates
capacity = 100
history = []

for iteration in range(20):
    total = sum(devices)
    history.append(devices.copy())
    if total > capacity:
        # Multiplicative decrease
        devices = [max(1, int(d * 0.5)) for d in devices]
        print(f"Iter {iteration}: OVERLOAD {total} > {capacity} → {devices}")
    else:
        # Additive increase
        devices = [min(50, d + 5) for d in devices]
        print(f"Iter {iteration}: OK {total} ≤ {capacity} → {devices}")

# Plot history to see convergence pattern
```
Expected Output: Devices converge to approximately equal shares (33 each) after 10-15 iterations.
Exercise 2: Calculate Jain’s Fairness Index
Given allocations [40, 30, 20, 10], compute fairness:
```python
def jains_fairness_index(allocations):
    n = len(allocations)
    sum_x = sum(allocations)
    sum_x_squared = sum(x**2 for x in allocations)
    return (sum_x**2) / (n * sum_x_squared)

allocations = [40, 30, 20, 10]
J = jains_fairness_index(allocations)
print(f"Jain's Index: {J:.3f}")
# Should output ~0.833 (below 0.9 fairness threshold)
```
Exercise 3: Compare Class-1 vs Class-2 PoA
Research question: For a fog deployment with 50 devices, calculate worst-case throughput under selfish behavior for:
- Class-1 (Wi-Fi contention)
- Class-2 (TDMA scheduled)
Use the PoA bounds from this chapter to quantify the difference.
41.17 Concept Relationships
| Concept | Builds On | Enables | Common Confusion |
|---|---|---|---|
| AIMD Algorithm | TCP congestion control principles | Distributed fair resource sharing without coordination | Thinking symmetric increase/decrease would work (asymmetry is essential) |
| Nash Equilibrium | Game theory foundations | Predicting stable states under selfish behavior | Confusing Nash equilibrium (stable) with Pareto efficiency (optimal) |
| Price of Anarchy | Nash equilibrium, Pareto efficiency | Network class selection (Class-1 vs Class-2) | Assuming all networks have bounded PoA (Class-1 Wi-Fi is unbounded) |
| Jain's Fairness Index | Allocation fairness metrics | Evaluating and comparing resource allocation schemes | Treating J=0.8 as "fair" (threshold is 0.9+) |
| Weighted Fairness | Proportional fairness, device priorities | Priority-aware fog resource allocation | Using max-min fairness when devices have different priorities |
41.18 See Also
Fog Optimization Overview: Context for where resource allocation fits in the overall fog optimization framework