144  SOA and Microservices Fundamentals

In 60 Seconds

SOA decomposes IoT platforms into loosely coupled services communicating via standardized interfaces. Microservices take this further with independently deployable units, each owning its own data store. Use the “two-pizza team” rule for service boundaries – if one team cannot own, build, and operate a service, it is too large. Start monolithic for MVP (ships 3-6 months faster), then decompose along domain boundaries when scaling requires it.

144.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Evaluate SOA versus Microservices: Distinguish the evolution from Service-Oriented Architecture to microservices and justify when each approach is appropriate for IoT systems
  • Decompose IoT Platforms: Break down IoT platforms into independent, loosely coupled services using domain-driven design principles
  • Determine Service Boundaries: Apply the two-pizza rule and domain analysis to establish optimal service granularity

144.2 Prerequisites

Before diving into this chapter, you should be familiar with:

  • Cloud Computing for IoT: Understanding cloud service models (IaaS, PaaS, SaaS) and deployment patterns provides essential context for where microservices run
  • IoT Reference Architectures: Knowledge of layered IoT architectures helps understand how services map to device, gateway, and cloud tiers
  • Communication and Protocol Bridging: Understanding protocol translation is essential for services that bridge device protocols to standard APIs
  • MQTT Fundamentals: MQTT is the primary messaging protocol for IoT microservices communication

MVU: Service Architecture Selection for IoT

Core Concept: Service architecture is a spectrum from monoliths (one codebase, simple) to microservices (many codebases, complex). The right choice depends on team size, scale requirements, and whether you’re integrating existing systems or building new ones.

Why It Matters: Premature microservices adoption is the #1 architecture mistake in IoT startups. A 5-person team managing 30 microservices spends 60% of time on infrastructure instead of features. Conversely, a 100-person team with a monolith faces deployment bottlenecks and scaling limits. Architecture must match organizational reality.

Key Takeaway: Follow the “Team Topology Rule”: start with a monolith until you have 3+ teams needing independent deployment. Use SOA for enterprise integration of legacy systems. Use microservices for cloud-native greenfield development at scale. Most IoT projects should start monolithic and extract services only when team coordination or scaling forces the split.

Diagram illustrating a multi-step decision process for selecting a service architecture

Figure 144.1: Service Architecture Decision Flow: Start simple, evolve based on constraints

Microservices are like having a team of specialists where each friend does ONE job really well!

144.2.1 The Sensor Squad Adventure: The Pizza Restaurant Problem

Imagine the Sensor Squad wanted to open a pizza restaurant! At first, Sunny the Light Sensor tried to do EVERYTHING alone: take orders, make dough, add toppings, bake pizzas, AND deliver them!

Poor Sunny was exhausted and pizzas were slow. “I can only make 3 pizzas per hour doing everything myself!” Sunny complained.

Then Motion Mo had a brilliant idea: “What if we each do just ONE thing we’re really good at?”

  • Sunny became the Order Taker (just takes orders, nothing else!)
  • Thermo became the Oven Master (just bakes, knows exactly when pizzas are done!)
  • Pressi became the Dough Maker (presses and stretches dough perfectly!)
  • Droppy became the Topping Artist (adds just the right amount of cheese!)
  • Signal Sam became the Delivery Driver (knows all the fastest routes!)

Now they could make 20 pizzas per hour! And when the restaurant got REALLY busy, they just added more Sunnys to take orders, without needing more of everyone else. That’s microservices!

144.2.2 Key Words for Kids

| Word | What It Means |
|------|---------------|
| Service | One helper that does just ONE specific job really well |
| Microservice | A tiny service that only knows how to do one thing (like ONLY taking orders) |
| API | The special language services use to talk to each other (“Hey Thermo, bake pizza #5!”) |
| Container | A special box that has everything a service needs to do its job |

144.2.3 Try This at Home!

The Restaurant Game: Play restaurant with your family! Give each person ONE job only:

  • One person takes orders (writes them down)
  • One person “cooks” (counts to 10 for each order)
  • One person “delivers” (brings the paper to the customer)

Time how long it takes to complete 5 orders. Now try it with one person doing ALL jobs. Which was faster? That’s why computers use microservices!

144.3 Getting Started (For Beginners)

New to Service Architectures? Start Here!

This section explains the basics. If you already understand SOA and microservices concepts, skip to the technical sections below.

144.3.1 What is a Service? (Simple Explanation)

Think of a service as a specialized worker in a factory:

| Traditional App | Service-Based App |
|---|---|
| One giant program does everything | Many small programs, each doing one thing |
| Like one person running a restaurant | Like a team with chef, waiter, cashier |
| Change one thing, redeploy everything | Change one service, deploy just that |

Real IoT Example:

Instead of one monolithic IoT platform:

Comparison diagram of monolithic versus microservices architecture for IoT platforms
Figure 144.2: Monolithic vs microservices comparison for IoT platforms

144.3.2 SOA vs Microservices: The Evolution

Service-Oriented Architecture (SOA) came first (2000s):

  • Services share a common Enterprise Service Bus (ESB)
  • Centralized governance and contracts
  • Designed for enterprise integration

Microservices evolved from SOA (2010s):

  • Each service is fully independent
  • Decentralized data management
  • Designed for cloud-native deployment

| Aspect | SOA | Microservices |
|---|---|---|
| Communication | Enterprise Service Bus (ESB) | Lightweight APIs (REST, gRPC) |
| Data | Shared database (common) | Database per service |
| Size | Larger, coarse-grained | Smaller, fine-grained |
| Governance | Centralized | Decentralized |
| Deployment | Often shared servers | Independent containers |
| Best For | Enterprise integration | Cloud-native apps |

SOA with centralized ESB and shared database compared to microservices with API gateway and database-per-service
Figure 144.3: SOA with centralized ESB and shared database versus microservices with API gateway and database-per-service

144.4 The Business Case for Decomposition

When does the overhead of microservices pay off? Here is a concrete comparison.

Scenario: IoT platform supporting 50,000 devices, growing 30% yearly.

| Factor | Monolith (Year 1) | Monolith (Year 2) | Microservices (Year 2) |
|---|---|---|---|
| Team size | 8 developers | 18 developers | 18 developers (3 teams of 6) |
| Deploy frequency | 3x/week | 1x/week (merge conflicts) | 3x/week per team (9x total) |
| Deploy coordination | 15 min standup | 2-hour release meeting | No cross-team coordination |
| Time to ship feature | 2 weeks | 5 weeks (testing bottleneck) | 2 weeks (independent testing) |
| Incident blast radius | Entire platform | Entire platform | Single service |
| Scaling cost | $800/mo (3 large VMs) | $3,200/mo (12 large VMs) | $1,800/mo (scale only what needs it) |

The tipping point: At 8 developers, the monolith worked well. At 18, coordination overhead consumed 40% of engineering time. After decomposition into 5 services, each team of 6 operated independently, and the coordination overhead dropped to under 10%.

Conway’s Law states that system architecture mirrors team structure. The coordination overhead for microservices scales quadratically:

\[C_{coord} = k \times \frac{N(N-1)}{2} \times \frac{1}{T}\]

where \(N\) = number of services, \(T\) = number of teams, \(k\) = hours per cross-service integration per month.

Example: 30 services managed by 3 teams, 1 hour per integration pair per month:

  • Total possible dependencies: \(\frac{30 \times 29}{2} = 435\) pairs
  • Coordination cost: \(1 \times \frac{435}{3} = 145\) hours/month
  • At $150/hour: $21,750/month in coordination overhead alone

The “two-pizza rule” (5-8 people per service) keeps the \(N/T\) ratio low: when each small team fully owns its services, fewer integration pairs cross team boundaries, which contains this quadratic growth.
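This arithmetic is easy to sanity-check in a few lines of Python (a sketch; the function name is ours):

```python
def coordination_hours(n_services: int, n_teams: int, k_hours_per_pair: float = 1.0) -> float:
    """Monthly coordination overhead: C = k * N(N-1)/2 * 1/T."""
    pairs = n_services * (n_services - 1) / 2  # all possible service dependency pairs
    return k_hours_per_pair * pairs / n_teams

# The chapter's example: 30 services, 3 teams, 1 hour per integration pair per month.
hours = coordination_hours(30, 3)
cost = hours * 150  # at $150/hour
print(hours, cost)  # 145.0 21750.0
```

Halving the number of services (to 15) drops the pair count from 435 to 105, which is why the quadratic term dominates the decision.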

Rule of thumb: Expect microservices overhead to cost ~20% of engineering effort (CI/CD, service mesh, monitoring). This pays off when monolith coordination overhead exceeds 25%, which typically happens at 12-15 developers or when deploy frequency drops below weekly.

144.5 Service Decomposition Strategies

Breaking a system into services requires careful thought. Poor decomposition leads to a “distributed monolith” - all the complexity of microservices with none of the benefits.

Architecture diagram showing IoT microservices with API gateway, message broker, and service mesh

Figure 144.4: IoT Platform Microservices Topology: Each service owns its data and communicates via async messaging

144.5.1 Decomposition Approaches

1. Decompose by Business Capability

Align services with business functions:

| Business Capability | IoT Service | Responsibility |
|---|---|---|
| Device Lifecycle | Device Registry | Provisioning, updates, retirement |
| Data Collection | Telemetry Ingestion | Receive, validate, route sensor data |
| Analysis | Analytics Engine | Process, aggregate, detect patterns |
| User Notification | Alert Service | Trigger and deliver alerts |
| Billing | Usage Metering | Track consumption, generate invoices |

2. Decompose by Subdomain (Domain-Driven Design)

Domain-driven design decomposition showing core, supporting, and generic subdomains for IoT service boundaries
Figure 144.5: DDD subdomain classification: Invest heavily in core domains, use standard solutions for generic domains
Domain-Driven Design Principles

Core Domain: Your competitive advantage. Build custom, invest heavily.
Supporting Domain: Necessary but not unique. Build or customize.
Generic Domain: Same everywhere. Buy off-the-shelf or use SaaS.

For IoT platforms, telemetry processing and analytics are typically core domains, while authentication and billing are generic.

144.5.2 Service Boundaries: The Two-Pizza Rule

Amazon’s “two-pizza rule”: If a service team can’t be fed by two pizzas, the service is too big.

Horizontal spectrum showing service sizes from monolith to nano-services with team size guidelines

Figure 144.6: Service Granularity Spectrum: Finding the sweet spot between monolith and nano-services

Signs your service is too large:

  • Multiple teams work on it
  • Deployments require coordination
  • Changes frequently cause unrelated breakages
  • Different parts have different scaling needs

Signs your service is too small:

  • Excessive inter-service communication
  • Simple operations require multiple service calls
  • Shared data requires distributed transactions
  • Team manages 10+ services

144.6 Common Patterns and Anti-Patterns

Understanding what works and what fails in IoT service architectures helps avoid costly mistakes.

Comparison table showing recommended patterns like API Gateway and database-per-service versus anti-patterns like shared database and synchronous chains

Figure 144.7: IoT Architecture Patterns vs Anti-Patterns: Learn from common successes and failures

144.6.1 Patterns for IoT Success

| Pattern | Description | IoT Application |
|---|---|---|
| API Gateway | Single entry point for all clients | Unified interface for devices, mobile, and web |
| Database-per-Service | Each service owns its data | Device service uses PostgreSQL, telemetry uses InfluxDB |
| Event Sourcing | Store events, not state | Complete audit trail of all sensor readings |
| CQRS | Separate read and write paths | High-throughput ingestion, separate query optimization |
| Saga Pattern | Distributed transactions via events | Device provisioning across multiple services |
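The Event Sourcing row deserves a concrete illustration. A minimal in-memory sketch (class and method names are illustrative, not from any IoT framework) shows how current state is derived by replaying an append-only event log rather than stored directly:

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    device_id: str
    temperature: float

class TemperatureEventStore:
    """Event sourcing in miniature: an append-only log of readings.
    Current state is derived by replaying events, never mutated in place."""
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)  # append-only: the full audit trail survives

    def current_temperature(self, device_id):
        # Derive state by replaying the log (latest reading wins)
        readings = [e.temperature for e in self.events if e.device_id == device_id]
        return readings[-1] if readings else None

store = TemperatureEventStore()
store.record(SensorEvent("thermo-1", 21.5))
store.record(SensorEvent("thermo-1", 22.0))
print(store.current_temperature("thermo-1"))  # 22.0
print(len(store.events))  # 2 -- both readings retained for auditing
```

A production system would persist the log (e.g., in Kafka or a database) rather than a Python list, but the core idea is the same.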

144.6.2 Anti-Patterns to Avoid

| Anti-Pattern | Problem | Solution |
|---|---|---|
| Shared Database | Services coupled through data; changes break multiple services | Each service owns its database |
| Synchronous Chains | Service A calls B calls C; if C fails, A fails | Use async messaging (Kafka, MQTT) |
| Distributed Monolith | Microservices that must deploy together | Ensure independent deployability |
| God Service | One service does too much | Decompose by business capability |

144.7 Practical Example: Smart Building IoT Platform

Let’s trace how a sensor reading flows through a microservices architecture.

Sequence diagram showing temperature data flowing from IoT sensor through API gateway, telemetry service, message broker, analytics service, and notification service to user dashboard

Figure 144.8: Smart Building Data Flow: Temperature reading from sensor to dashboard through microservices

Key Points in This Flow:

  1. API Gateway: Single entry point handles auth and rate limiting
  2. Async Processing: Telemetry Service doesn’t wait for analytics; publishes event and returns
  3. Event-Driven: Message broker decouples services; adding new consumers is easy
  4. Parallel Processing: Analytics and notifications happen simultaneously
  5. Database Ownership: Telemetry Service owns time-series DB; Analytics Service queries via API
Design Insight

Notice that the Telemetry Service doesn’t call the Analytics Service directly. It publishes an event to Kafka. This means:

  • If the Analytics Service is down, telemetry still gets stored
  • New services can subscribe to events without changing the Telemetry Service
  • Services scale independently based on their own load
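The decoupling described above can be sketched with an in-memory publish/subscribe bus standing in for a broker like Kafka (all names here are illustrative):

```python
from collections import defaultdict

class EventBus:
    """In-memory stand-in for a message broker: publishers and
    subscribers know the topic name, never each other."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = EventBus()
aggregates, alerts = [], []

# Two independent consumers; the telemetry publisher never references them,
# so adding a third consumer requires no change to the publishing code.
bus.subscribe("telemetry", lambda r: aggregates.append(r["temp"]))
bus.subscribe("telemetry", lambda r: alerts.append(r) if r["temp"] > 30 else None)

bus.publish("telemetry", {"device": "thermo-1", "temp": 35})
print(aggregates, len(alerts))  # [35] 1
```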

Common Pitfalls

Teams split an IoT platform into services by technical layer (UI service, API service, DB service) rather than by business domain (telemetry service, device management service, alerting service). The result is services that must all deploy together because a change in the API layer requires coordinated changes in the UI and DB layers. Decompose by domain boundary, not technical layer.

Microservices multiply operational complexity: each service needs its own CI/CD pipeline, monitoring, logging, and on-call rotation. A 3-person team building an IoT MVP should start with a well-structured modular monolith. Extract the first service only when a specific component needs to scale independently or be owned by a separate team. Following the ‘monolith-first’ rule ships MVPs 3-6 months faster.

Telemetry processing chains (ingest → validate → enrich → store → alert) implemented as synchronous REST calls create brittle chains where one slow service blocks all downstream processing. Use asynchronous messaging (MQTT, Kafka, NATS) for data flow between processing stages, reserving synchronous REST for queries that require immediate responses.
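The difference between a synchronous chain and queue-decoupled stages can be sketched with Python’s standard `queue` module (stage names and the validation range are illustrative):

```python
from queue import Queue

# Each stage reads from its own inbox and writes to the next stage's queue.
# A slow or failed downstream stage backs up its own queue instead of
# blocking ingestion -- the opposite of a synchronous REST chain.
ingest_q, store_q = Queue(), Queue()

def ingest(reading):
    ingest_q.put(reading)  # returns immediately; no downstream call made

def validate_and_enrich():
    while not ingest_q.empty():
        r = ingest_q.get()
        if -40 <= r["temp"] <= 85:    # validate: plausible sensor range
            r["unit"] = "celsius"     # enrich before forwarding
            store_q.put(r)

ingest({"device": "t1", "temp": 21.0})
ingest({"device": "t1", "temp": 999.0})  # out of range, dropped at validation
validate_and_enrich()
print(store_q.qsize())  # 1
```

In production these queues would be Kafka topics or MQTT subjects, and each stage its own service, but the back-pressure behavior is the same.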

144.8 Summary

This chapter introduced the fundamental concepts of service-oriented architectures for IoT:

144.8.1 Key Concepts Covered

| Concept | Key Insight |
|---|---|
| SOA vs Microservices | SOA for enterprise integration; microservices for cloud-native greenfield |
| Service Decomposition | Align with business capabilities; use two-pizza rule for team size |
| Monolith-First | Start simple; extract services when scaling or team coordination requires |
| Database-per-Service | Each service owns its data; communicate via APIs and events |
| Event-Driven Architecture | Async messaging decouples services and improves resilience |

144.8.2 Architecture Selection Summary

Summary diagram showing three architecture choices: Monolith for small teams and MVPs, SOA for enterprise integration, and Microservices for large teams and cloud-native applications

Figure 144.9: Architecture Selection Quick Reference: Match architecture to your context

Scenario: 200K-line monolithic IoT platform supporting 100K devices needs decomposition due to 18-developer team coordination bottlenecks.

Monolith Structure (before):

  • Device management: registration, authentication, firmware updates (35K lines)
  • Telemetry ingestion: MQTT broker interface, validation, storage (45K lines)
  • Rules engine: event processing, condition evaluation, action dispatch (50K lines)
  • Analytics: aggregation, dashboards, ML inference (40K lines)
  • Notifications: email, SMS, push, webhooks (30K lines)

Decomposition Analysis:

| Component | Deploy Frequency | Scaling Profile | Team Size | Extract? |
|---|---|---|---|---|
| Device Mgmt | 1x/month (stable APIs) | Steady | 3 devs | No (low churn) |
| Telemetry | 2x/week (high iteration) | Spiky (10x daytime) | 5 devs | YES (independent scaling) |
| Rules Engine | 1x/week (customer requests) | Steady | 4 devs | YES (feature velocity) |
| Analytics | 2x/month (research projects) | Batch jobs | 3 devs | No (batch, not latency-sensitive) |
| Notifications | 1x/week (new channels) | Bursty | 3 devs | YES (isolation from failures) |

Recommended Decomposition (3 services extracted):

Service 1: Telemetry Ingestion

  • Reason: 10x scaling need (50 msg/sec nights → 500 msg/sec daytime peaks)
  • Team: 5 developers (largest team, highest iteration)
  • Technology: TimescaleDB (optimized for time-series), Kafka for buffering
  • Benefit: Independent autoscaling via the Kubernetes Horizontal Pod Autoscaler (HPA); deploys 2x/week without cross-team coordination

Service 2: Rules Engine

  • Reason: Customer-facing feature requests drive weekly releases
  • Team: 4 developers
  • Technology: Same PostgreSQL, event-driven via Kafka
  • Benefit: Deploy rule changes without touching telemetry or notifications

Service 3: Notification Service

  • Reason: Failures in email/SMS should NOT crash telemetry ingestion
  • Team: 3 developers
  • Technology: Redis for queuing, circuit breakers for external APIs
  • Benefit: Bulkhead isolation – email provider outage doesn’t affect platform

Retained in Monolith: Device Management + Analytics (6 developers, 75K lines)

  • Lower change frequency (1-2x/month)
  • Shared data model (devices table)
  • No independent scaling need

Migration Timeline:

  • Month 1: Extract Telemetry (highest priority, scaling pain)
  • Month 3: Extract Rules Engine (deploy velocity pain)
  • Month 5: Extract Notifications (failure isolation)
  • Result: 3 services + 1 managed monolith (4 deployable units vs 1)

Key Insight: Don’t extract EVERYTHING – only extract components with clear independent needs (scaling, deploy velocity, fault isolation). Keep related functionality together.

Use this table to evaluate whether to extract a component as a separate service:

| Factor | Threshold for Extraction | Weight | Score Method |
|---|---|---|---|
| Deploy frequency mismatch | >2x difference from platform average | High (3x) | Component deploys / platform avg deploys |
| Scaling profile difference | >5x peak-to-average ratio difference | High (3x) | Component peak/avg vs platform peak/avg |
| Team ownership | ≥1 full team (5-8 devs) | Medium (2x) | Team dedicated to component? |
| Technology mismatch | Requires different datastore or runtime | Medium (2x) | PostgreSQL vs TimescaleDB, JVM vs Node.js |
| Failure isolation need | Component failures affect critical path | High (3x) | Email down → telemetry fails? |
| External dependencies | Depends on slow/unreliable 3rd-party APIs | Medium (2x) | Calls external weather/payment/SMS APIs |
| Change frequency | Changes >1x/month | Low (1x) | Commits per month to component |

Scoring System:

  • Multiply each factor score by weight
  • Sum weighted scores
  • Score >15: Strong candidate for extraction
  • Score 8-15: Consider extraction if team coordination pain exists
  • Score <8: Keep in monolith

Example Calculation (Notification Service):

Deploy frequency: 4x/month vs 2x/month avg → score 2 × weight 3 = 6
Scaling: same profile → score 1 × weight 3 = 3
Team: 3 devs, not a full team → score 0 × weight 2 = 0
Technology: same stack → score 0 × weight 2 = 0
Failure isolation: email down blocks alerts → score 3 × weight 3 = 9
External deps: calls Twilio, SendGrid → score 2 × weight 2 = 4
Change frequency: 15 commits/month → score 1 × weight 1 = 1
TOTAL: 6 + 3 + 0 + 0 + 9 + 4 + 1 = 23 (strong extraction candidate)
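The scoring system is mechanical enough to automate. A sketch (weights taken from the table above; dictionary keys are ours) reproduces the Notification Service total:

```python
# Weights from the extraction-decision table: High = 3, Medium = 2, Low = 1.
WEIGHTS = {
    "deploy_frequency": 3, "scaling": 3, "team": 2, "technology": 2,
    "failure_isolation": 3, "external_deps": 2, "change_frequency": 1,
}

def extraction_score(factor_scores):
    """Sum of per-factor scores multiplied by their weights."""
    return sum(WEIGHTS[f] * s for f, s in factor_scores.items())

notification = {
    "deploy_frequency": 2,   # 4x/month vs 2x/month average
    "scaling": 1,            # same profile as platform
    "team": 0,               # 3 devs, not a full team
    "technology": 0,         # same stack
    "failure_isolation": 3,  # email outage blocks alerts
    "external_deps": 2,      # calls Twilio, SendGrid
    "change_frequency": 1,   # 15 commits/month
}
score = extraction_score(notification)
print(score, "extract" if score > 15 else "keep")  # 23 extract
```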

Advice: Extract services with scores >15 and team >5 developers. Resist extracting low-score components just for “microservices purity.”

Common Mistake: Decomposing by Technical Layers Instead of Business Capabilities

The Error: Splitting monolith horizontally (API layer, business logic layer, data layer) instead of vertically (by domain).

Real Example:

  • Team decomposes IoT platform into 3 “microservices”:
    1. API Gateway Service: All REST endpoints (device, telemetry, rules, analytics)
    2. Business Logic Service: All processing logic
    3. Data Access Service: All database queries

Why This Fails (Distributed Monolith):

Adding new feature "Custom Alert Rules":
1. Update API Gateway (add POST /rules endpoint) → Deploy Service 1
2. Update Business Logic (rule validation logic) → Deploy Service 2
3. Update Data Access (new table queries) → Deploy Service 3

Result: Feature requires coordinated deployment of ALL 3 services (same as monolith)

The Problem: Technical layering creates coupling across service boundaries. Every feature change touches all layers.

Correct Approach (Vertical Slicing by Domain):

Decompose by business capability (device management, telemetry, rules, notifications):

Adding "Custom Alert Rules":
1. Update Rules Service only (API + logic + data all in one service)
2. Single deploy, no coordination

Result: True independent deployability

Comparison:

| Aspect | Layered (Anti-Pattern) | Domain-Based (Correct) |
|---|---|---|
| Change scope | Changes touch all layers | Changes isolated to one service |
| Deploy coordination | 3 services for 1 feature | 1 service for 1 feature |
| Team ownership | No clear owner (“frontend team”, “backend team”) | Clear owner (“rules team”) |
| Database | Shared across services (tight coupling) | Database per service (loose coupling) |
| Failure isolation | API layer down = everything down | Rules service down ≠ telemetry down |

How to Avoid:

  1. Decompose by business capability (device, telemetry, rules, notifications), NOT by technical layer
  2. Each service owns full stack: API + logic + data for its domain
  3. Team owns end-to-end: One team responsible for rules service from API to database
  4. Database-per-service: Rules Service has rules_db, Telemetry Service has telemetry_db

Lesson: “Microservices” split horizontally are just a distributed monolith with network latency. True microservices split vertically by business domain, enabling independent deployment.
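A vertical slice can be made concrete with a toy service class that owns its API entry point, validation logic, and data store in one deployable unit (all names are illustrative, not from a real framework):

```python
class RulesService:
    """One domain, one deployable unit: the 'rules' API, its validation
    logic, and its own data store live together, so a rules feature
    ships with a single deploy and no cross-service coordination."""
    def __init__(self):
        self._rules = {}  # data: this service's private store (rules_db)

    def create_rule(self, rule_id, threshold):
        """API: the handler behind POST /rules."""
        self._validate(threshold)            # logic: rule validation
        self._rules[rule_id] = threshold     # data access: local write only
        return rule_id

    def _validate(self, threshold):
        if not isinstance(threshold, (int, float)):
            raise ValueError("threshold must be numeric")

svc = RulesService()
svc.create_rule("high-temp", 30)
print(len(svc._rules))  # 1 -- one service touched, one deploy, full feature
```

Contrast this with the layered split above, where the same feature would require changes to three separately deployed services.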

Key Takeaway

In one sentence: Choose your service architecture based on team size, existing systems, and scale requirements - not based on what’s trendy.

Remember this rule: For small teams (<10 developers) and moderate scale (<50K devices), a well-structured monolith ships faster and is easier to maintain than premature microservices.

144.8.3 Checklist Before Choosing Microservices

Before adopting microservices, verify you can answer “yes” to these questions:

  • Do you have 3+ teams that need to deploy independently?
  • Does at least one component have a scaling profile sharply different from the rest?
  • Can you absorb ~20% of engineering effort for CI/CD, service mesh, and monitoring overhead?
  • Can each candidate service own its own data store, with no shared database?
  • Is coordination overhead in the monolith already consuming more than 25% of engineering time?

If you answered “no” to most questions, start with a modular monolith.

144.9 Knowledge Check

Challenge: Analyze an existing IoT codebase and identify service boundaries using the decision framework from this chapter.

Scenario: You have access to a GitHub repository for an open-source IoT platform (choose one):

  • ThingsBoard - IoT platform
  • Mainflux - Industrial IoT
  • Home Assistant - Home automation

Tasks:

  1. Clone the repository and explore the code structure
  2. Identify modules/packages - what are the major functional areas?
  3. Apply scoring framework:
    • Deploy frequency difference (check git commit history per module)
    • Scaling profile (which modules are compute vs I/O heavy?)
    • Technology mismatch (does one module need a different database?)
    • Failure isolation needs (which failures should not cascade?)
  4. Calculate scores for top 3 extraction candidates
  5. Justify - would you extract these services? Why or why not?

What to observe:

  • Do high scores correspond to clear architectural boundaries?
  • Are there modules that score high but are tightly coupled (anti-pattern)?
  • How does the existing module structure compare to your service boundaries?

Deliverable:

  • Table of candidate services with scores
  • Recommended extraction order with justification
  • Architectural diagram showing proposed service boundaries

Extension: Fork the repo and create a branch that separates one service (even if you don’t implement it fully, show the boundary).

144.10 What’s Next

| If you want to… | Read this |
|---|---|
| Design RESTful APIs and service discovery for IoT microservices | SOA API Design |
| Build fault-tolerant IoT services with resilience patterns | SOA Resilience Patterns |
| Deploy containerized IoT services with Kubernetes | SOA Container Orchestration |
| Model complex IoT device behavior with state machines | State Machine Patterns |
| Understand IoT reference architecture layers and patterns | IoT Reference Models and Patterns |