283  SDN OpenFlow Protocol

283.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Configure OpenFlow: Set up basic OpenFlow rules for packet forwarding and network management
  • Understand Flow Tables: Describe flow table entry structure including match fields, actions, and timeouts
  • Analyze Flow Processing: Trace packet processing through OpenFlow switch pipeline
  • Address SDN Challenges: Identify TCAM limitations and controller placement strategies

OpenFlow is like a recipe book that tells kitchen workers exactly what to do with each ingredient!

283.1.1 The Sensor Squad Adventure: The Recipe Book

Remember Connie the Controller from the traffic jam story? Well, Connie needed a way to give instructions to ALL the network switches. It was like being a head chef in a giant kitchen - how do you tell hundreds of cooks what to do?

β€œI’ll use a recipe book!” Connie announced. β€œIt’s called OpenFlow!”

Here’s how it works: Connie writes recipes (called β€œflow rules”) and sends them to each switch. The recipe says things like:

  • β€œIf you see a message from Thermo (temperature sensor), send it to Port 5”
  • β€œIf you see an emergency message, forward it immediately - highest priority!”
  • β€œIf you don’t know what to do, ask me!”

Power Pete was curious: β€œWhat happens when a new message arrives that doesn’t match any recipe?”

β€œGreat question!” said Connie. β€œThe switch says β€˜PACKET_IN!’ which means β€˜Chef, I got a new ingredient I don’t recognize!’ Then I write a new recipe and send a β€˜FLOW_MOD’ message which means β€˜Here’s how to handle that from now on!’”

Now ALL the switches in the network use the same recipe book, and they all know exactly what to do!

283.1.2 Key Words for Kids

| Word       | What It Means                                                                  |
|------------|--------------------------------------------------------------------------------|
| Flow Rule  | A recipe that tells a switch what to do when it sees a specific type of message |
| PACKET_IN  | When a switch says “Help! I don’t know what to do with this!”                   |
| FLOW_MOD   | When the controller says “Here’s a new recipe for you!”                         |
| Flow Table | The recipe book stored in each switch                                           |

283.2 Introduction

~12 min | Advanced | P04.C31.U04

OpenFlow is the standardized southbound protocol for communication between SDN controllers and switches. It defines how controllers program switch forwarding behavior through flow rules.

Figure 283.1: OpenFlow protocol architecture showing control plane and data plane separation

283.3 OpenFlow Switch Components

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#E8F4F8','primaryTextColor':'#2C3E50','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#FEF5E7','tertiaryColor':'#FDEBD0','fontSize':'14px'}}}%%
graph TB
    Packet["Incoming Packet"]

    subgraph Switch["OpenFlow Switch"]
        FlowTable["Flow Table<br/>Match-Action Rules"]
        GroupTable["Group Table<br/>Multicast/Failover"]
        Meter["Meter Table<br/>Rate Limiting"]
        SecureChannel["Secure Channel<br/>(TLS to Controller)"]
    end

    Controller["SDN Controller"]
    Output["Output Port(s)"]

    Packet -->|1. Arrives| FlowTable
    FlowTable -->|2. Match?| Decision{Match<br/>Found?}
    Decision -->|Yes| Action["Execute Actions"]
    Decision -->|No| SecureChannel

    SecureChannel -->|PACKET_IN| Controller
    Controller -->|FLOW_MOD| SecureChannel
    SecureChannel --> FlowTable

    Action --> GroupTable
    Action --> Meter
    GroupTable --> Output
    Meter --> Output

    style FlowTable fill:#16A085,stroke:#2C3E50,color:#fff
    style GroupTable fill:#2C3E50,stroke:#16A085,color:#fff
    style Meter fill:#2C3E50,stroke:#16A085,color:#fff
    style SecureChannel fill:#E67E22,stroke:#2C3E50,color:#fff
    style Controller fill:#16A085,stroke:#2C3E50,color:#fff,stroke-width:3px
    style Decision fill:#FDEBD0,stroke:#E67E22
    style Action fill:#E8F4F8,stroke:#16A085

Figure 283.2: OpenFlow Switch Packet Processing Pipeline with Flow, Group, and Meter Tables

{fig-alt="OpenFlow switch components showing packet processing pipeline: incoming packets match against flow table, execute actions via group/meter tables to output ports, or send PACKET_IN to controller via secure channel for new flow rules"}

283.3.1 Switch Components

An OpenFlow switch contains several key components:

1. Flow Tables
  • Store match-action rules
  • Multiple tables form a pipeline
  • Each table is processed sequentially

2. Group Tables
  • Enable multicast (one packet to multiple ports)
  • Fast failover (backup paths)
  • Load balancing (select action)

3. Meter Tables
  • Rate limiting per flow
  • QoS enforcement
  • Bandwidth management

4. Secure Channel
  • TLS-encrypted connection to the controller
  • Handles OpenFlow messages


283.4 Flow Table Entry Structure

Each flow entry contains the following fields:

1. Match Fields (packet header fields):
  • Layer 2: source/destination MAC, VLAN ID, Ethertype
  • Layer 3: source/destination IP, protocol, ToS
  • Layer 4: source/destination port (TCP/UDP)
  • Input port
  • Metadata

2. Priority:
  • Higher-priority rules are matched first
  • Allows specific rules to override general rules

3. Counters:
  • Packets matched
  • Bytes matched
  • Duration

4. Instructions/Actions:
  • Forward to port(s)
  • Drop
  • Modify header fields (MAC, IP, VLAN)
  • Push/pop VLAN/MPLS tags
  • Send to controller
  • Go to next table

5. Timeouts:
  • Idle timeout: remove the rule if no matching packets arrive for N seconds
  • Hard timeout: remove the rule after N seconds regardless of activity

6. Cookie:
  • Opaque identifier set by the controller

Example Flow Rule:

Match: src_ip=10.0.0.5, dst_ip=192.168.1.10, protocol=TCP, dst_port=80
Priority: 100
Actions: output:port3, set_vlan=100
Idle_timeout: 60
Hard_timeout: 300
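The match-and-priority logic behind entries like the one above can be sketched as a toy Python model (a simplification for illustration, not a real switch implementation; the dictionary field names mirror the example rule):

```python
def match(rule, packet):
    """True when every match field in the rule equals the packet's header value."""
    return all(packet.get(k) == v for k, v in rule["match"].items())

def lookup(flow_table, packet):
    """Highest-priority matching entry, or None (which would trigger PACKET_IN)."""
    hits = [r for r in flow_table if match(r, packet)]
    return max(hits, key=lambda r: r["priority"]) if hits else None

flow_table = [
    # the example rule above
    {"match": {"src_ip": "10.0.0.5", "dst_ip": "192.168.1.10",
               "protocol": "TCP", "dst_port": 80},
     "priority": 100,
     "actions": ["output:port3", "set_vlan:100"]},
    # a more general, lower-priority fallback
    {"match": {"dst_ip": "192.168.1.10"},
     "priority": 10,
     "actions": ["output:port1"]},
]

pkt = {"src_ip": "10.0.0.5", "dst_ip": "192.168.1.10",
       "protocol": "TCP", "dst_port": 80}
best = lookup(flow_table, pkt)   # both rules match; priority 100 wins
```

Note how the priority field resolves the overlap: the web-traffic packet matches both rules, but the specific rule (priority 100) overrides the general fallback (priority 10), exactly as described in the entry structure above.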

283.5 OpenFlow Messages

OpenFlow defines several message types for controller-switch communication:

283.5.1 Controller-to-Switch Messages

| Message           | Purpose                                                  |
|-------------------|----------------------------------------------------------|
| FLOW_MOD          | Add, modify, or delete flow rules                        |
| PACKET_OUT        | Send a packet out a specific port                        |
| BARRIER           | Request confirmation that prior messages were processed  |
| GET_CONFIG        | Query switch configuration                               |
| SET_CONFIG        | Modify switch configuration                              |
| MULTIPART_REQUEST | Request statistics                                       |

283.5.2 Switch-to-Controller Messages

| Message      | Purpose                                          |
|--------------|--------------------------------------------------|
| PACKET_IN    | Send packet to controller (no matching rule)     |
| FLOW_REMOVED | Notify controller of an expired/deleted flow     |
| PORT_STATUS  | Notify controller of port state changes          |
| ERROR        | Report errors                                    |

283.5.3 Symmetric Messages

| Message            | Purpose                          |
|--------------------|----------------------------------|
| HELLO              | Connection establishment         |
| ECHO_REQUEST/REPLY | Keepalive, latency measurement   |
| EXPERIMENTER       | Vendor extensions                |
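The reactive flow setup exchange, where a PACKET_IN is answered with a FLOW_MOD and a PACKET_OUT, can be sketched with a toy controller handler (the `host_ports` map and message tuples are illustrative assumptions, not a real controller API):

```python
host_ports = {"192.168.1.10": 5}   # hypothetical topology knowledge: host -> switch port

def on_packet_in(packet):
    """Toy reactive controller: answer a PACKET_IN with the messages
    it would send back (message names match the tables above)."""
    port = host_ports.get(packet["dst_ip"])
    if port is None:
        return [("PACKET_OUT", "FLOOD")]        # unknown destination: flood
    rule = {"match": {"dst_ip": packet["dst_ip"]},
            "priority": 100,
            "actions": [f"output:{port}"]}
    return [("FLOW_MOD", rule),                 # install a rule for future packets
            ("PACKET_OUT", f"output:{port}")]   # forward the buffered packet now

replies = on_packet_in({"dst_ip": "192.168.1.10"})
```

The key point is the pairing: FLOW_MOD handles all future packets of this flow in the switch, while PACKET_OUT releases the one packet that triggered the PACKET_IN.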

283.6 SDN Challenges

~12 min | Advanced | P04.C31.U05

283.6.1 Rule Placement Challenge

Figure 283.3: Rule placement strategies in SDN switches for efficient flow management

Problem: Switches have limited TCAM (Ternary Content-Addressable Memory) for storing flow rules.

TCAM Characteristics:
  • Fast lookup (single clock cycle)
  • Expensive ($15–30 per Mb)
  • Limited capacity (a few thousand entries)
  • Power-hungry

Challenges:
  • How to select which flows to cache in TCAM?
  • When to evict rules (LRU, LFU, timeout-based)?
  • How to minimize PACKET_IN messages to the controller?

Solutions:
  • Wildcard Rules: match multiple flows with a single rule
  • Hierarchical Aggregation: aggregate at the network edge
  • Rule Caching: intelligent replacement algorithms
  • Hybrid Approaches: TCAM + DRAM for overflow
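The rule-caching idea with LRU eviction can be sketched as a toy Python model (a simplification: real TCAM caching must also respect rule priorities and wildcard overlap, which this sketch ignores):

```python
from collections import OrderedDict

class TcamCache:
    """Toy TCAM flow-rule cache with LRU eviction (capacity in entries).
    A miss stands in for a PACKET_IN round-trip to the controller."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.rules = OrderedDict()   # match -> actions, oldest first
        self.misses = 0

    def lookup(self, match):
        if match in self.rules:
            self.rules.move_to_end(match)      # mark as most recently used
            return self.rules[match]
        self.misses += 1                       # would trigger PACKET_IN
        return None

    def install(self, match, actions):
        if len(self.rules) >= self.capacity:
            self.rules.popitem(last=False)     # evict least recently used rule
        self.rules[match] = actions

tcam = TcamCache(capacity=2)
tcam.install(("flowA",), ["output:1"])
tcam.install(("flowB",), ["output:2"])
tcam.lookup(("flowA",))                  # hit; flowA becomes most recent
tcam.install(("flowC",), ["output:3"])   # evicts flowB, the least recently used
```

After this sequence, a packet for flowB misses in TCAM and must go to the controller, which is exactly the cost that caching strategies try to minimize.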

283.6.2 Controller Placement Challenge

Figure 283.4: Flat SDN architecture with single controller tier
Figure 283.5: Hierarchical SDN architecture with multi-tier controller deployment

Problem: Where to place controllers for optimal performance?

Considerations:
  • Latency: controller-switch delay affects flow setup time
  • Throughput: controller capacity (requests/second)
  • Reliability: controller failure impacts the network
  • Scalability: number of switches per controller

Architectures:

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#E8F4F8','primaryTextColor':'#2C3E50','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#FEF5E7','tertiaryColor':'#FDEBD0','fontSize':'14px'}}}%%
graph TB
    subgraph Centralized["Centralized (Single Controller)"]
        C1["Controller"]
        S1["Switch"] & S2["Switch"] & S3["Switch"]
        C1 --> S1 & S2 & S3
    end

    subgraph Distributed["Distributed (Multiple Controllers)"]
        C2A["Controller A"] & C2B["Controller B"]
        S4["Switch"] & S5["Switch"] & S6["Switch"]
        C2A <-->|Sync| C2B
        C2A --> S4 & S5
        C2B --> S5 & S6
    end

    subgraph Hierarchical["Hierarchical (Tiered Controllers)"]
        C3Root["Root Controller"]
        C3A["Regional A"] & C3B["Regional B"]
        S7["Switch"] & S8["Switch"] & S9["Switch"] & S10["Switch"]
        C3Root --> C3A & C3B
        C3A --> S7 & S8
        C3B --> S9 & S10
    end

    style C1 fill:#16A085,stroke:#2C3E50,color:#fff
    style C2A fill:#16A085,stroke:#2C3E50,color:#fff
    style C2B fill:#16A085,stroke:#2C3E50,color:#fff
    style C3Root fill:#E67E22,stroke:#2C3E50,color:#fff
    style C3A fill:#16A085,stroke:#2C3E50,color:#fff
    style C3B fill:#16A085,stroke:#2C3E50,color:#fff
    style S1 fill:#2C3E50,stroke:#16A085,color:#fff
    style S2 fill:#2C3E50,stroke:#16A085,color:#fff
    style S3 fill:#2C3E50,stroke:#16A085,color:#fff
    style S4 fill:#2C3E50,stroke:#16A085,color:#fff
    style S5 fill:#2C3E50,stroke:#16A085,color:#fff
    style S6 fill:#2C3E50,stroke:#16A085,color:#fff
    style S7 fill:#2C3E50,stroke:#16A085,color:#fff
    style S8 fill:#2C3E50,stroke:#16A085,color:#fff
    style S9 fill:#2C3E50,stroke:#16A085,color:#fff
    style S10 fill:#2C3E50,stroke:#16A085,color:#fff

Figure 283.6: SDN Controller Deployment Models: Centralized, Distributed, and Hierarchical

{fig-alt="Three SDN controller placement architectures: centralized (single controller managing all switches), distributed (multiple synchronized controllers for redundancy), and hierarchical (root controller coordinating regional controllers managing switch groups)"}

Placement Strategies:
  • K-median: minimize average latency to switches
  • K-center: minimize maximum latency (worst case)
  • Failure-aware: ensure backup controller coverage
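The k-center strategy has a classic greedy 2-approximation that can be sketched in a few lines (a simplified model assuming controller sites are chosen from the switch locations themselves, with a symmetric latency matrix):

```python
def k_center(latency, k):
    """Greedy 2-approximation for k-center controller placement:
    repeatedly add the location farthest from every chosen site.
    latency[i][j] = delay between locations i and j (assumed symmetric)."""
    n = len(latency)
    centers = [0]                  # start from an arbitrary site
    while len(centers) < k:
        # distance of each switch to its nearest chosen controller
        dist = [min(latency[c][j] for c in centers) for j in range(n)]
        centers.append(max(range(n), key=lambda j: dist[j]))
    worst = max(min(latency[c][j] for c in centers) for j in range(n))
    return centers, worst

# four switches on a line, 1 ms per hop
L = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
centers, worst = k_center(L, k=2)   # picks the two ends; worst-case latency 1 ms
```

K-median would instead minimize the *sum* of distances in the last step; the greedy skeleton is similar, but the objective changes which sites win.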

This variant shows what happens during a controller failure in a distributed deployment, demonstrating the failover process that maintains network operation.

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#2C3E50','primaryTextColor':'#fff','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#E67E22','tertiaryColor':'#7F8C8D','fontSize':'12px'}}}%%
sequenceDiagram
    participant S as Switch
    participant P as Primary Controller
    participant B as Backup Controller
    participant DB as State Database

    Note over S,DB: Normal Operation
    S->>P: PACKET_IN (new flow)
    P->>DB: Store flow decision
    P->>S: FLOW_MOD (install rule)

    Note over P: Controller Fails
    P--xP: Crash / Network Partition

    Note over S,B: Failover Process (~3-5 seconds)
    S->>P: Heartbeat
    S->>S: No response (timeout 3s)
    S->>B: Connect to backup

    B->>DB: Load latest state
    DB-->>B: Network topology + flows
    B->>B: Become primary (leader election)

    B->>S: HELLO (establish connection)
    S->>B: FEATURES_REQUEST
    B-->>S: FEATURES_REPLY

    Note over S,B: Normal Operation Resumed
    S->>B: PACKET_IN (new flow)
    B->>S: FLOW_MOD (install rule)

    Note over S,DB: Existing flows continued<br/>during entire failover

Figure 283.7: This sequence illustrates SDN’s resilience through distributed controllers. Key insight: existing flow rules in switch memory continue forwarding traffic during controller failover, only new flows are delayed. The 3-5 second failover time comes from heartbeat timeout (3s) plus state synchronization (1-2s). Production deployments use techniques like pre-computed backup paths and proactive rule installation to minimize even this brief disruption.
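The failover arithmetic in the figure can be captured in a tiny timeline helper (illustrative numbers taken from the figure's annotations; real deployments tune these parameters):

```python
def failover_timeline(heartbeat_timeout=3.0, state_sync=1.5, handshake=0.1):
    """Rough failover timeline in seconds, using the figure's numbers.
    Existing flow rules keep forwarding for the whole interval;
    only new-flow setup is delayed."""
    t_detect = heartbeat_timeout             # switch declares the primary dead
    t_elect  = t_detect + state_sync         # backup loads state, wins election
    t_resume = t_elect + handshake           # HELLO/FEATURES handshake completes
    return {"detect": t_detect, "elect": t_elect, "resume": t_resume}

timeline = failover_timeline()   # new-flow outage on the order of 4-5 seconds
```

Proactive rule installation shrinks the visible impact further: if backup paths are pre-installed, even new flows can be forwarded during the outage window.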

This variant presents controller architecture selection as a decision matrix, helping students choose the right approach for their IoT deployment scale.

%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#2C3E50','primaryTextColor':'#fff','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#E67E22','tertiaryColor':'#7F8C8D','fontSize':'14px'}}}%%
flowchart TB
    Start([Network Scale?]) --> Q1{Switches<br/>< 100?}

    Q1 -->|Yes| Centralized["CENTRALIZED<br/>───────────<br/>Simple management<br/>Low cost<br/>Easy debugging<br/>───────────<br/>Single point of failure<br/>Limited scalability<br/>───────────<br/>Small campus<br/>Lab/prototype"]

    Q1 -->|No| Q2{Switches<br/>< 1000?}

    Q2 -->|Yes| Distributed["DISTRIBUTED<br/>───────────<br/>High availability<br/>Geographic spread<br/>Load balancing<br/>───────────<br/>Sync complexity<br/>Consistency delays<br/>───────────<br/>Enterprise<br/>Multi-site IoT"]

    Q2 -->|No| Hierarchical["HIERARCHICAL<br/>───────────<br/>Massive scale<br/>Domain isolation<br/>Regional autonomy<br/>───────────<br/>Complex operations<br/>Multiple failure domains<br/>───────────<br/>Smart city<br/>Carrier network"]

    style Centralized fill:#16A085,stroke:#2C3E50,color:#fff
    style Distributed fill:#E67E22,stroke:#2C3E50,color:#fff
    style Hierarchical fill:#2C3E50,stroke:#16A085,color:#fff

Figure 283.8: This decision matrix guides architecture selection based on network scale. Key insight: IoT deployments often start centralized for simplicity, then migrate to distributed as device count grows. Hierarchical architectures are primarily for city-scale or carrier deployments where regional autonomy is essential. The trade-off is always between operational simplicity (centralized) and resilience/scale (distributed/hierarchical).

283.7 Knowledge Check

Question 1: An OpenFlow switch receives a packet matching this flow rule: β€œMatch: dst_ip=192.168.1.100, priority=100, actions=output:port5, idle_timeout=30, hard_timeout=120”. After 25 seconds, another matching packet arrives. What happens to the rule?

Explanation: OpenFlow flow rules have two independent timeout mechanisms:

  • Idle timeout (30 s): the rule is removed if no matching packets arrive for 30 consecutive seconds. Every matching packet resets the idle timer back to 30 s. Since a packet arrived at 25 s (before the timer expired), the idle timer resets and the rule stays active.
  • Hard timeout (120 s): the rule is removed 120 seconds after installation regardless of activity. This timer never resets. Even if packets continue arriving every second, the rule must be removed at 120 s.

Use cases: the idle timeout removes unused rules to save flow table space (e.g., a TCP connection that ended), while the hard timeout provides absolute expiration for temporary policies (e.g., β€œallow access for the next 2 minutes”). IoT example: a smart building grants visitor Wi-Fi access with hard_timeout=3600 (1-hour maximum stay) and idle_timeout=300 (disconnect after 5 minutes of inactivity).
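The two-timer semantics can be checked with a small Python model (a sketch for reasoning about the question, not switch code):

```python
def rule_alive(packet_times, idle_timeout, hard_timeout, now):
    """packet_times: sorted matching-packet arrival times (seconds since
    rule installation). The rule is gone once hard_timeout passes, or once
    idle_timeout elapses with no packet (each packet resets the idle timer)."""
    if now >= hard_timeout:
        return False                       # hard timeout: counts from install, never resets
    last = 0.0                             # idle timer starts at installation
    for t in packet_times:
        if t > now:
            break
        if t - last >= idle_timeout:       # idle expiry happened before this packet
            return False
        last = t
    return now - last < idle_timeout

# the rule from Question 1: idle_timeout=30, hard_timeout=120
alive_at_26  = rule_alive([25], 30, 120, 26)                  # packet at 25 s reset the idle timer
alive_at_60  = rule_alive([25], 30, 120, 60)                  # idle since 25 s: expired
alive_at_121 = rule_alive(list(range(120)), 30, 120, 121)     # constant traffic: hard timeout wins
```

The model makes the answer concrete: the packet at 25 s keeps the rule alive past 30 s, but nothing can keep it past 120 s.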

Question 2: In SDN security, what is the primary risk if an attacker compromises the SDN controller?

Explanation: SDN controller compromise is catastrophic because the controller is the β€œbrain” of the network, with complete control over all forwarding behavior.

Attack capabilities:
  1. Traffic manipulation: install flow rules redirecting traffic to attacker-controlled servers (e.g., redirect all bank traffic to a phishing site).
  2. Denial of service: install rules that drop packets for specific targets, isolating critical devices, or delete existing flow rules, breaking all connectivity.
  3. Data exfiltration: mirror all traffic to the attacker’s port using β€œoutput:attacker_port” in flow rules.
  4. Lateral movement: use the controller’s network visibility to map infrastructure, identify targets, and plan further attacks.
  5. Persistence: modify controller code/configuration for long-term access.

Why so severe? Traditional network compromise affects individual devices. SDN compromise affects network-wide forwarding decisions for all devices simultaneously.

Mitigations:
  1. Controller hardening: strong authentication (TLS certificates), regular security updates, minimal exposed services.
  2. Network segmentation: isolate the controller on a dedicated management network, not the user network.
  3. RBAC: role-based access control, limiting which applications can modify which flow rules.
  4. Monitoring: detect anomalous controller behavior (unusual flow installations).
  5. Redundancy: multiple controllers, so compromise detection can trigger controller replacement from a verified backup.

Question 3: An SDN controller uses OpenFlow’s group tables for multicast. A sensor broadcasts to 5 subscribers. How does a group table improve this compared to individual flow rules?

Explanation: OpenFlow group tables enable efficient multi-output actions such as multicast, load balancing, and fast failover.

Without group tables, a flow rule’s actions apply to a single packet copy. For multicast to 5 destinations there are two poor options:
  • Five separate flow rules, each outputting to a different port, which wastes flow table entries.
  • A single rule with action=β€œcontroller”, where the controller receives the packet and sends 5 PACKET_OUT messages: extremely inefficient, and it overloads the controller.

With group tables:
  1. Create the group once: Group ID=42, Type=ALL (multicast), Buckets=[port2, port5, port7, port9, port12].
  2. Install a single flow rule: Match: dst_ip=224.0.1.1 (multicast address), Action: group=42.
  3. When a packet arrives, the switch matches the rule, looks up group 42, and replicates the packet to all 5 ports in a single hardware operation.

Benefits:
  1. Efficient: one flow rule instead of five, with replication in switch hardware rather than controller software.
  2. Dynamic membership: update the group (add/remove subscribers) without changing the flow rule.
  3. Atomic operations: all ports are updated simultaneously (consistent multicast tree).

Other group types: SELECT (load balance across ports), INDIRECT (modify then forward), FAST_FAILOVER (backup path if the primary fails). Group tables are essential for IoT multicast scenarios such as over-the-air firmware updates to multiple sensors.
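The indirection from flow rule to group can be sketched as a toy Python model (only the ALL group type is modeled here; SELECT and FAST_FAILOVER would choose one bucket instead of all):

```python
GROUP_TABLE = {
    42: {"type": "ALL",                    # ALL = replicate to every bucket (multicast)
         "buckets": [2, 5, 7, 9, 12]},
}
FLOW_TABLE = [
    {"match": {"dst_ip": "224.0.1.1"}, "actions": ["group:42"]},
]

def forward(packet):
    """Resolve a matching rule's group action into concrete output ports."""
    for rule in FLOW_TABLE:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            out = []
            for act in rule["actions"]:
                if act.startswith("group:"):
                    group = GROUP_TABLE[int(act.split(":")[1])]
                    if group["type"] == "ALL":
                        out.extend(group["buckets"])   # replicate to all buckets
                else:
                    out.append(act)
            return out
    return ["PACKET_IN"]                               # no match: ask the controller

ports = forward({"dst_ip": "224.0.1.1"})   # one rule fans out to five ports
```

Adding a sixth subscriber means appending one bucket to group 42; the flow rule itself never changes.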

Question 4: OpenFlow supports multiple flow tables in a pipeline. Why is this better than a single flow table?

Explanation: A flow table pipeline allows modular, layered packet processing, similar to a software function pipeline. Each table handles a specific network function.

Example pipeline:
  • Table 0 (Security/ACL): Match: src_ip=10.0.0.0/8, dst_port=22; Action: Drop (block SSH from the internal network to external hosts).
  • Table 1 (Routing): Match: dst_ip=192.168.0.0/16; Action: output:port5, Goto Table 2.
  • Table 2 (QoS): Match: dst_port=80 (HTTP); Action: set_queue=2 (medium priority).

Benefits:
  1. Separation of concerns: security policies are independent of routing logic, so ACLs can be updated without touching routing rules.
  2. Fewer rules: a single table requires the cross-product of all combinations. Multi-table: 10 ACL rules + 20 routing rules + 5 QoS rules = 35 total. Single table: 10 Γ— 20 Γ— 5 = 1000 rules!
  3. Logical organization: easier management and debugging.
  4. Efficient hardware utilization: different tables can use different matching mechanisms (TCAM for wildcard matches, SRAM hash tables for exact matches).
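The rule-count argument is simple arithmetic, worth making explicit (the per-function counts are the ones used in the explanation above):

```python
acl_rules, routing_rules, qos_rules = 10, 20, 5

# Multi-table pipeline: each function gets its own table, so rules add up.
multi_table = acl_rules + routing_rules + qos_rules

# Single table: every combination of ACL x routing x QoS decision needs
# its own entry, so rule counts multiply.
single_table = acl_rules * routing_rules * qos_rules
```

The gap (35 vs 1000 entries) grows multiplicatively with each added function, which is why pipelines matter so much for TCAM-constrained switches.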


283.8 Summary

This chapter covered the OpenFlow protocol and SDN challenges:

Key Takeaways:

  1. OpenFlow Switch Components: Flow tables for match-action rules, group tables for multicast/failover, meter tables for rate limiting, and secure channel for controller communication

  2. Flow Table Entry Structure: Match fields (L2-L4 headers), priority, counters, instructions/actions, timeouts (idle and hard), and cookie

  3. OpenFlow Messages: Controller-to-switch (FLOW_MOD, PACKET_OUT), switch-to-controller (PACKET_IN, FLOW_REMOVED), and symmetric (HELLO, ECHO)

  4. Rule Placement Challenge: TCAM limitations require wildcard rules, hierarchical aggregation, and intelligent caching strategies

  5. Controller Placement: Centralized (simple, single point of failure), distributed (high availability, sync complexity), and hierarchical (massive scale, regional autonomy)

Understanding OpenFlow mechanics is essential for implementing and troubleshooting SDN deployments in IoT environments.


283.9 What’s Next?

Explore how SDN is applied specifically to IoT networks, wireless sensor networks, and mobile environments.

Continue to SDN IoT Applications