1421 Privacy Design Patterns and Data Tiers

1421.1 Learning Objectives

By the end of this chapter, you should be able to:

  • Apply privacy design patterns: data minimization, aggregation, local processing, anonymization
  • Classify IoT data using the Three-Tier Privacy Model
  • Apply proportional protection mechanisms based on data sensitivity
  • Implement tier-aware storage, sharing, and retention policies
  • Design privacy-preserving data flows that minimize tier elevation
  • Configure automated retention and access controls by privacy tier
Note: Key Takeaway

In one sentence: Privacy design patterns provide proven solutions to common privacy challenges, while the Three-Tier Privacy Model ensures proportional protection based on data sensitivity.

Remember this rule: Apply the privacy hierarchy - Eliminate data collection first, then Minimize, then Anonymize, then Encrypt. Not all data requires the same protection level.

1421.2 Prerequisites

Before diving into this chapter, you should be familiar with:

1421.3 Privacy Design Patterns

Privacy design patterns are proven solutions to common privacy challenges in IoT systems.

1421.3.1 The Privacy Hierarchy: Eliminate, Minimize, Protect

Privacy practices, ranked from best to worst:

%% fig-alt: "Privacy hierarchy pyramid showing five levels from best to worst practices: at the top is data elimination (don't collect), followed by data minimization (collect only what's needed), anonymization (remove identifying info), encryption (protect collected data), and at the bottom is unacceptable practice of collecting everything with just a privacy policy promise."
%%{init: {'theme': 'base', 'themeVariables': {'primaryColor':'#2C3E50','primaryTextColor':'#fff','primaryBorderColor':'#16A085','lineColor':'#16A085','secondaryColor':'#E67E22','tertiaryColor':'#7F8C8D'}}}%%
graph TB
    A["Privacy Hierarchy"] --> B["BEST: Don't collect the data"]
    B --> B1["Example: Fitness tracker calculates<br/>calories ON-DEVICE"]

    A --> C["BETTER: Collect minimally"]
    C --> C1["Example: Collect city-level location,<br/>not GPS coordinates"]

    A --> D["GOOD: Anonymize data"]
    D --> D1["Example: Remove names, IDs,<br/>aggregate before storage"]

    A --> E["ACCEPTABLE: Encrypt data"]
    E --> E1["Example: Collect everything,<br/>encrypt it, store encrypted"]

    A --> F["UNACCEPTABLE: Collect everything"]
    F --> F1["Example: Trust us,<br/>we have a privacy policy"]

    style A fill:#2C3E50,stroke:#16A085,stroke-width:2px,color:#fff
    style B fill:#16A085,stroke:#2C3E50,stroke-width:2px,color:#fff
    style C fill:#16A085,stroke:#2C3E50,stroke-width:2px,color:#fff
    style D fill:#7F8C8D,stroke:#2C3E50,stroke-width:2px,color:#fff
    style E fill:#E67E22,stroke:#2C3E50,stroke-width:2px,color:#fff
    style F fill:#E67E22,stroke:#2C3E50,stroke-width:2px,color:#fff

The quadrant chart below compares privacy protection techniques by privacy effectiveness and implementation complexity to help select the right approach:

%% fig-alt: "Privacy technique comparison chart plotting five protection methods against privacy effectiveness on vertical axis and implementation complexity on horizontal axis."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D'}}}%%
quadrantChart
    title Privacy Technique Selection Guide
    x-axis Low Complexity --> High Complexity
    y-axis Low Privacy --> High Privacy
    quadrant-1 ADVANCED OPTIONS
    quadrant-2 IDEAL ZONE
    quadrant-3 USE WITH CAUTION
    quadrant-4 AVOID
    Data Elimination: [0.25, 0.95]
    Data Minimization: [0.35, 0.85]
    Aggregation: [0.45, 0.75]
    Anonymization: [0.55, 0.65]
    Encryption: [0.65, 0.55]
    Policy Only: [0.85, 0.15]

Technique Selection Guide:

| Technique | Privacy Level | Complexity | Best For | Limitations |
|---|---|---|---|---|
| Elimination | Highest | Lowest | Non-essential data | Reduces functionality |
| Minimization | High | Low | Required data fields | Still collects some data |
| Aggregation | High | Medium | Statistical analysis | Loses individual precision |
| Anonymization | Medium | Medium | Research datasets | Re-identification risk |
| Encryption | Medium | High | Compliance needs | Key management burden |
| Policy Only | Lowest | Highest | Legacy systems | Trust-dependent, breach risk |

Privacy by Design asks: “Do we NEED this data, or just WANT it?”

1421.3.2 Pattern 1: Data Minimization

Principle: Collect only what’s absolutely necessary for the specified purpose.

| Thermostat Example | Collect (Necessary) | Don’t Collect (Unnecessary) | Rationale |
|---|---|---|---|
| Core function | Temperature, timestamp | User ID, location, Wi-Fi networks | Anonymous device ID sufficient |
| Purpose | HVAC control | Behavioral profiling | Only collect for stated purpose |
| Granularity | Per-room temperature | Individual occupant tracking | Aggregate is sufficient |

Implementation Checklist:

  • Document purpose for EVERY data point collected
  • Remove ALL optional fields from data collection
  • Use anonymous/pseudonymous identifiers (not user IDs)
  • Periodic review: “Do we still need this data?”
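
To make the checklist concrete, here is a minimal sketch of allowlist-based collection in Python; the field names, documented purposes, and sample values are illustrative assumptions, not a specific product’s schema.

# Minimal sketch: every collected field must appear in an allowlist
# with a documented purpose; everything else is dropped at the source.
ALLOWED_FIELDS = {
    "device_pseudonym": "route the reading to the right thermostat",
    "room_temp_c": "HVAC control decision",
    "timestamp": "schedule-aware control",
}

def minimize(raw_reading: dict) -> dict:
    """Keep only the fields that have a documented purpose."""
    return {k: v for k, v in raw_reading.items() if k in ALLOWED_FIELDS}

raw = {
    "device_pseudonym": "dev_7f3a",
    "room_temp_c": 22.4,
    "timestamp": "2024-05-01T14:00:00Z",
    "user_id": "alice@example.com",         # unnecessary: dropped
    "wifi_ssids": ["HomeNet", "CafeWifi"],  # unnecessary: dropped
}
print(minimize(raw))  # only the three documented fields survive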

1421.3.3 Pattern 2: Aggregation

Principle: Combine individual data points to prevent identification while preserving utility.

| Raw Data (Privacy Risk) | Aggregated Data (Privacy-Preserving) | Utility Preserved? |
|---|---|---|
| [22.1, 22.3, 22.2, 22.4] per minute | Hourly average: 22.25 | Yes (sufficient for optimization) |
| Entry/exit timestamps per person | % occupancy per hour | Yes (HVAC scheduling works) |
| Individual device MAC addresses | Number of devices in network | Yes (bandwidth management) |

Aggregation Techniques:

  • Temporal aggregation: Minute -> Hour -> Day averages
  • Spatial aggregation: Room -> Floor -> Building averages
  • Statistical aggregation: Individual values -> Mean/Median/Count
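
As a minimal sketch of temporal aggregation (the sample readings are illustrative), per-minute values can be reduced to hourly averages before anything leaves the device:

# Minimal sketch of temporal aggregation: per-minute readings in,
# hourly averages out.
from collections import defaultdict
from statistics import mean

def hourly_averages(readings):
    """readings: list of (iso_timestamp, value); returns {hour: mean value}."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts[:13]].append(value)  # "YYYY-MM-DDTHH" is the bucket key
    return {hour: round(mean(vals), 2) for hour, vals in buckets.items()}

per_minute = [("2024-05-01T14:01", 22.1), ("2024-05-01T14:02", 22.3),
              ("2024-05-01T14:03", 22.2), ("2024-05-01T14:04", 22.4)]
print(hourly_averages(per_minute))  # {'2024-05-01T14': 22.25}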

1421.3.4 Pattern 3: Local Processing (Edge Computing)

Principle: Process sensitive data on-device; send only results to cloud.

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'clusterBkg': '#f9f9f9', 'clusterBorder': '#2C3E50', 'edgeLabelBackground':'#ffffff'}}}%%
flowchart LR
    A[Sensor Reading] --> B{Process Locally?}

    B --> |Yes - Privacy First| C[Local Processing<br/>on Device]
    C --> D[Decision: Turn on cooling]
    D --> E[No cloud transmission]

    B --> |No - Cloud Dependent| F[Send Raw Data<br/>to Cloud]
    F --> G[Cloud Processing]
    G --> H[Privacy risk:<br/>All data exposed]

    style C fill:#16A085,stroke:#2C3E50,color:#fff
    style E fill:#16A085,stroke:#2C3E50,color:#fff
    style F fill:#E67E22,stroke:#2C3E50,color:#fff
    style H fill:#E67E22,stroke:#2C3E50,color:#fff

Figure 1421.1: Local vs Cloud Processing: Privacy-First Edge Computing Decision Flow

When to Use Local Processing:

| Scenario | Local Processing | Cloud Processing | Recommendation |
|---|---|---|---|
| Voice wake word detection | Always | Privacy risk | Local only |
| HVAC control decisions | Sufficient | Unnecessary | Local preferred |
| ML model training | Federated learning | If anonymized | Depends on data sensitivity |
| Firmware updates | Needs cloud | Required | Cloud (minimal data shared) |
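
A minimal sketch of the pattern for the HVAC row above; the setpoint, names, and uplink placeholder are illustrative assumptions. The raw reading is consumed on the device, and only the resulting command crosses the network:

# Minimal sketch of local processing: the raw sensor value never
# leaves the device; only the decision (no PII) is transmitted.
COOLING_SETPOINT_C = 24.0

def on_device_decision(room_temp_c: float) -> str:
    """Runs on the device; the raw temperature stays local."""
    return "COOL_ON" if room_temp_c > COOLING_SETPOINT_C else "COOL_OFF"

def transmit(payload: str) -> None:
    # Placeholder for the device's uplink; only the decision is sent.
    print(f"uplink: {payload}")

reading = 25.3                          # raw sensor value, kept on-device
transmit(on_device_decision(reading))   # uplink: COOL_ON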

1421.3.5 Pattern 4: Anonymization and Pseudonymization

Principle: Remove or obscure identifying information so that records cannot be linked back to specific individuals.

K-Anonymity: Make each record indistinguishable from at least k-1 other records.

| Original Data (Identifiable) | K-Anonymized (k=5) | Privacy Gain |
|---|---|---|
| Age: 25, ZIP: 94102 | Age: 20-30, ZIP: 941** | Each record matches >= 5 people |
| Name: John, Email: john@email.com | [Removed], [Removed] | Direct identifiers eliminated |
| Precise location: 37.7749, -122.4194 | City: San Francisco | Coarse location sufficient |

Anonymization Techniques:

| Technique | Example | Use Case | Strength |
|---|---|---|---|
| Generalization | Age 25 -> “20-30” | Demographics | Moderate (can be reversed with auxiliary data) |
| Suppression | Remove name, email | Direct identifiers | Strong (irreversible if done correctly) |
| Pseudonymization | User_12345 -> Random_ABC789 | Temporary unlinkability | Weak (reversible with key) |
| Differential Privacy | Add statistical noise | ML training data | Strong (mathematically proven) |
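
The generalization row can be sketched in a few lines of Python (the sample records are illustrative): bin age into a decade range, truncate ZIP to its 3-digit prefix, then verify that every resulting group holds at least k records:

# Minimal sketch of generalization for k-anonymity, plus a check that
# every (age_range, zip_prefix) group contains at least k records.
from collections import Counter

def generalize(record):
    lo = (record["age"] // 10) * 10
    return {"age_range": f"{lo}-{lo + 10}",
            "zip_prefix": record["zip"][:3] + "**"}

def is_k_anonymous(records, k=5):
    groups = Counter((r["age_range"], r["zip_prefix"]) for r in records)
    return all(count >= k for count in groups.values())

people = [{"age": 25, "zip": "94102"}, {"age": 27, "zip": "94110"},
          {"age": 22, "zip": "94133"}, {"age": 29, "zip": "94107"},
          {"age": 24, "zip": "94103"}]
generalized = [generalize(p) for p in people]
print(is_k_anonymous(generalized, k=5))  # True: all five share one group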

1421.4 Data Privacy Tiers for IoT Systems

IoT systems generate diverse data types with vastly different privacy implications. A Three-Tier Privacy Model provides a structured framework for classifying data and applying appropriate protection mechanisms.

1421.4.1 The Three Privacy Tiers

Principle: Not all data requires the same level of protection. Classify data into privacy tiers and apply proportional safeguards.

| Tier | Data Type | Examples | Protection Level |
|---|---|---|---|
| Tier 1: Public | Aggregate, anonymized | City traffic counts, weather averages, pollution levels | Minimal (transparency focus) |
| Tier 2: Sensitive | Identifiable patterns | Energy usage, location history, device MAC addresses | Encryption, access control |
| Tier 3: Critical | Biometric, health, financial | Heart rate, blood glucose, payment data, video/audio | End-to-end encryption, explicit consent |

1421.4.2 Processing Rules by Tier

Different privacy tiers require different handling throughout the data lifecycle:

| Tier | Storage | Sharing | Retention | Consent Required |
|---|---|---|---|---|
| Tier 1 | Cloud OK | Open data (public APIs) | Indefinite (archival value) | Implicit (opt-out) |
| Tier 2 | Cloud encrypted | Partners only (contractual agreements) | 1-3 years (compliance) | Opt-out (notification required) |
| Tier 3 | Edge preferred (minimize cloud transmission) | Never shared (user controls only) | Minimal (7-30 days) | Explicit opt-in (granular consent) |
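
These rules are easiest to enforce when encoded as data rather than scattered through code. A minimal sketch (the dictionary layout and helper function are illustrative assumptions) mirrors the table above:

# Minimal sketch: encode per-tier handling rules as data so that
# storage, sharing, and retention code can look them up uniformly.
TIER_POLICY = {
    1: {"storage": "cloud", "sharing": "open",
        "retention_days": None,  # indefinite
        "consent": "implicit"},
    2: {"storage": "cloud-encrypted", "sharing": "partners",
        "retention_days": 3 * 365, "consent": "opt-out"},
    3: {"storage": "edge-preferred", "sharing": "never",
        "retention_days": 30, "consent": "explicit-opt-in"},
}

def may_share(tier: int, recipient: str) -> bool:
    rule = TIER_POLICY[tier]["sharing"]
    return rule == "open" or (rule == "partners" and recipient == "partner")

print(may_share(1, "public"))   # True: Tier 1 is open data
print(may_share(3, "partner"))  # False: Tier 3 is never shared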

1421.4.3 Privacy Tier Decision Flowchart

%% fig-alt: "Decision flowchart for classifying IoT data into three privacy tiers based on personally identifiable information and sensitivity level."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'clusterBkg': '#f9f9f9', 'clusterBorder': '#2C3E50', 'edgeLabelBackground':'#ffffff'}}}%%
flowchart TD
    D[IoT Data Collected] --> Q1{Personally<br/>Identifiable?}
    Q1 -->|No<br/>Anonymous/Aggregate| T1[Tier 1: Public Data]
    Q1 -->|Yes<br/>Links to Individual| Q2{Health/Biometric/<br/>Financial Data?}
    Q2 -->|No<br/>Behavioral Patterns| T2[Tier 2: Sensitive Data]
    Q2 -->|Yes<br/>Highly Personal| T3[Tier 3: Critical Data]

    T1 --> P1[Cloud storage acceptable<br/>Open data sharing<br/>Indefinite retention]
    T2 --> P2[Encrypted cloud storage<br/>Partner sharing only<br/>1-3 year retention<br/>Access controls]
    T3 --> P3[Edge processing preferred<br/>Never share externally<br/>7-30 day retention<br/>Explicit consent required]

    style T1 fill:#16A085,stroke:#2C3E50,color:#fff
    style T2 fill:#E67E22,stroke:#2C3E50,color:#fff
    style T3 fill:#8B0000,stroke:#16A085,color:#fff
    style P1 fill:#d4edda,stroke:#16A085
    style P2 fill:#fff3cd,stroke:#E67E22
    style P3 fill:#f8d7da,stroke:#8B0000

Figure 1421.2: Decision flowchart for classifying IoT data into three privacy tiers

1421.4.4 Real-World Examples: Multi-Tier Data in IoT Systems

Most IoT devices generate data across all three tiers. Understanding which tier each data type belongs to is critical for privacy-by-design implementation.

1421.4.4.1 Example 1: Smart Energy Meter

| Data Element | Raw Value | Privacy Tier | Rationale | Protection Applied |
|---|---|---|---|---|
| Total kWh (hourly) | 3.2 kWh at 14:00 | Tier 1 | Aggregate consumption, no behavior inference | Cloud storage OK, public reporting |
| Appliance signatures | Dishwasher: 1.8 kW, 90 min cycle | Tier 2 | Reveals behavioral patterns (when you cook, clean) | Encrypted storage, access logs |
| Occupancy patterns | Nobody home 9am-5pm weekdays | Tier 3 | Security risk (reveals vacancy to burglars) | Edge processing only, explicit consent |

Privacy-by-Design Implementation:

  • Tier 1 data: Transmitted hourly to utility for billing (aggregate OK)
  • Tier 2 data: Processed locally, only share with explicit user consent for “efficiency tips”
  • Tier 3 data: NEVER transmitted; occupancy detection runs on-device only for automation

1421.4.4.2 Example 2: Fitness Tracker

| Data Element | Raw Value | Privacy Tier | Rationale | Protection Applied |
|---|---|---|---|---|
| Daily step count | 8,347 steps | Tier 1 | General activity level, hard to de-anonymize | Cloud sync OK, leaderboard sharing |
| GPS location history | Route from home to office | Tier 2 | Identifies home/work addresses, patterns | Encrypted, user controls sharing |
| Heart rate variability (HRV) | 65 ms RMSSD | Tier 3 | Medical diagnostic data, health status | Edge processing, medical-grade encryption |

Privacy-by-Design Implementation:

  • Tier 1 data: Synced to cloud for goal tracking, social features
  • Tier 2 data: Encrypted before transmission, user controls public/friends/private
  • Tier 3 data: Processed on-device using edge ML, never uploaded without explicit medical consent

1421.4.4.3 Example 3: Smart Home Camera

| Data Element | Raw Value | Privacy Tier | Rationale | Protection Applied |
|---|---|---|---|---|
| Motion event count | 12 events today | Tier 1 | Anonymous activity level | Dashboard display, cloud analytics |
| Motion event timestamps | Motion at 07:23, 08:15, 12:34 | Tier 2 | Reveals daily routines | Encrypted storage, 30-day retention |
| Video footage with faces | Recording of family members | Tier 3 | Biometric identifiers, surveillance data | Local storage only, end-to-end encryption if cloud backup |

Privacy-by-Design Implementation:

  • Tier 1 data: Aggregate motion statistics for “home activity” dashboard
  • Tier 2 data: Encrypted timestamps for forensic review if needed
  • Tier 3 data: Local storage by default, optional encrypted cloud backup with explicit consent

1421.4.5 Privacy Tier Comparison Table

Comprehensive comparison across all privacy dimensions:

| Dimension | Tier 1: Public | Tier 2: Sensitive | Tier 3: Critical |
|---|---|---|---|
| Examples | Traffic counts, weather, pollution | Energy patterns, MAC addresses | Biometrics, health, video |
| Encryption | Optional (integrity) | Required (AES-256 at rest/transit) | End-to-end + hardware security module |
| Storage Location | Cloud preferred | Cloud encrypted acceptable | Edge preferred, cloud only with consent |
| Sharing | Open APIs, public datasets | Contractual partners only | Never (user controls exceptions) |
| Retention | Indefinite (archival) | 1-3 years (compliance) | 7-30 days (minimize exposure) |
| Consent | Implicit (opt-out) | Notification (opt-out) | Explicit opt-in (granular) |
| Access Control | Public | Role-based (RBAC) | Attribute-based (ABAC) + MFA |
| Deletion | On request | Automatic after retention | Automatic + secure erasure |
| Audit Logging | Optional | Required | Mandatory, real-time |
| Anonymization | Already anonymous | K-anonymity (k>=5) | Not sufficient (avoid collection) |
| Regulatory | Minimal compliance | GDPR Article 32 (security) | GDPR Article 9 (special categories) |

1421.4.6 Implementation Guidance: Building Tier-Aware Systems

Step 1: Data Classification at Collection

Tag every data point with its privacy tier when first collected:

# Example: Privacy-aware data collection (runnable sketch)
from enum import Enum
from datetime import datetime, timezone

class PrivacyTier(Enum):
    PUBLIC = 1      # Tier 1: aggregate/anonymous data
    SENSITIVE = 2   # Tier 2: identifiable patterns
    CRITICAL = 3    # Tier 3: biometric/health/financial data

class SensorData:
    def __init__(self, value, data_type):
        self.value = value
        self.data_type = data_type
        self.privacy_tier = self.classify_privacy_tier()
        self.timestamp = datetime.now(timezone.utc)

    def classify_privacy_tier(self):
        if self.data_type in ['aggregate_count', 'anonymous_stats']:
            return PrivacyTier.PUBLIC  # Tier 1
        elif self.data_type in ['location', 'usage_pattern', 'device_id']:
            return PrivacyTier.SENSITIVE  # Tier 2
        elif self.data_type in ['biometric', 'health', 'video', 'audio']:
            return PrivacyTier.CRITICAL  # Tier 3
        else:
            return PrivacyTier.CRITICAL  # Default to most restrictive
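
With tagging done in the constructor, a reading such as SensorData(65, 'biometric') is classified as PrivacyTier.CRITICAL the moment it is created, and any data type the classifier does not recognize falls through to the most restrictive tier instead of silently becoming public.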

Step 2: Automate Retention Policies by Tier

Configure automatic deletion based on privacy tier:

| Privacy Tier | Default Retention | Automated Action | User Override |
|---|---|---|---|
| Tier 1 | Indefinite | Archive after 1 year (compressed) | User can request deletion |
| Tier 2 | 1 year | Auto-delete after 1 year | User can extend to 3 years max |
| Tier 3 | 30 days | Auto-delete after 30 days | User can reduce to 7 days |
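
A minimal sketch of this automation (the record layout and sample ages are illustrative assumptions): a periodic sweep compares each record’s age against its tier’s default and drops what has expired:

# Minimal sketch of tier-driven retention: delete records whose age
# exceeds the default for their privacy tier.
from datetime import datetime, timedelta, timezone

RETENTION = {1: None,                 # Tier 1: indefinite (archive elsewhere)
             2: timedelta(days=365),  # Tier 2: 1 year default
             3: timedelta(days=30)}   # Tier 3: 30 days default

def expired(record, now=None):
    now = now or datetime.now(timezone.utc)
    limit = RETENTION[record["tier"]]
    return limit is not None and now - record["created"] > limit

records = [{"id": 1, "tier": 3,
            "created": datetime.now(timezone.utc) - timedelta(days=45)},
           {"id": 2, "tier": 1,
            "created": datetime.now(timezone.utc) - timedelta(days=400)}]
kept = [r for r in records if not expired(r)]
print([r["id"] for r in kept])  # [2]: the 45-day-old Tier 3 record aged out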

Step 3: Audit Access by Tier (Logging and Accountability)

Tier 3 data requires comprehensive access logging:

| Privacy Tier | Access Logging | Audit Frequency | Alert Threshold |
|---|---|---|---|
| Tier 1 | Optional (performance optimization) | Annual | N/A (public data) |
| Tier 2 | Required (who accessed, when, why) | Quarterly | Unusual access patterns |
| Tier 3 | Mandatory (full audit trail) | Real-time | EVERY access logged + user notification |
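
A minimal sketch of tier-aware access logging using Python’s standard logging module; the notification hook is an illustrative placeholder for whatever channel a real product uses:

# Minimal sketch: Tier 2 access is logged; Tier 3 access is logged
# and additionally surfaced to the user in real time.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
audit_log = logging.getLogger("audit")

def notify_user(record_id: str, actor: str):
    # Placeholder for a real notification channel (push, email, ...).
    print(f"notify: your record {record_id} was accessed by {actor}")

def record_access(tier: int, actor: str, record_id: str, reason: str):
    if tier == 1:
        return  # logging is optional for public data
    audit_log.info("tier=%d actor=%s record=%s reason=%s",
                   tier, actor, record_id, reason)
    if tier == 3:
        notify_user(record_id, actor)  # every Tier 3 access is surfaced

record_access(3, "clinician_42", "hrv_2024_05_01", "care review")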

Step 4: Design Data Flows to Minimize Tier Elevation

Anti-Pattern (Tier Elevation):

Tier 1 (step count) + Tier 2 (location) -> Tier 3 (exact home address)

When combining data creates higher-sensitivity information, the result inherits the highest tier of any input.

Privacy-by-Design Solution:

  • Separate storage: Store Tier 1 and Tier 2 data in separate databases
  • Delayed aggregation: Aggregate location to city-level (Tier 1) before combining
  • User consent gates: Require explicit consent before combining tiers (see the sketch below)
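
A minimal sketch of such a consent gate (the consent store, names, and purpose strings are illustrative assumptions): a join across tiers is refused without an explicit grant, and the result never carries a lower tier than its most sensitive input:

# Minimal sketch of a consent gate for tier combination.
CONSENTS = {("user_17", "steps+location")}  # explicit grants: (user, purpose)

def combine(user_id, a, b, purpose, result_tier=None):
    """Join two tagged datasets; refuse without explicit consent."""
    if (user_id, purpose) not in CONSENTS:
        raise PermissionError("combining datasets requires explicit consent")
    inherited = max(a["tier"], b["tier"])  # result inherits the highest tier
    return {"tier": max(result_tier or inherited, inherited),
            "data": (a["data"], b["data"])}

steps = {"tier": 1, "data": 8347}
route = {"tier": 2, "data": "home->office"}
# The join can reveal a home address, so the caller marks the result Tier 3.
joined = combine("user_17", steps, route, "steps+location", result_tier=3)
print(joined["tier"])  # 3: the combined result is treated as critical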

1421.4.7 Case Study: Smart City Parking System

A smart city deploys parking sensors to optimize urban parking. How should data be classified?

Data Classification:

| Data Type | Privacy Tier | Collection Method | Protection Applied |
|---|---|---|---|
| Total spaces available | Tier 1 | Aggregate count across all sensors | Public API, mobile app display |
| Per-block occupancy | Tier 1 | Block-level aggregation (10+ spaces) | Public dataset for urban planning |
| Individual space status | Tier 2 | Per-sensor binary (occupied/empty) | Encrypted, city parking enforcement only |
| License plate recognition | Tier 3 | Camera + OCR (if deployed) | Edge processing only, no storage |
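
A minimal sketch of the block-level aggregation rule from the table (the sensor layout and threshold constant are illustrative): occupancy is published only for blocks with at least 10 spaces, so no single vehicle can be singled out:

# Minimal sketch: suppress blocks that are too small to aggregate safely.
MIN_BLOCK_SIZE = 10

def publishable_blocks(spaces):
    """spaces: {block: [bool occupied per space]} -> {block: (occupied, total)}"""
    return {block: (sum(s), len(s))
            for block, s in spaces.items() if len(s) >= MIN_BLOCK_SIZE}

sensors = {"Block A": [True] * 12 + [False] * 8,  # 20 spaces: published
           "Alley C": [True, False, False]}       # 3 spaces: suppressed
print(publishable_blocks(sensors))  # {'Block A': (12, 20)}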

Privacy-by-Design Architecture:

%% fig-alt: "Smart city parking system architecture showing three data tiers flowing from sensors through processing to appropriate storage and sharing mechanisms."
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': '#2C3E50', 'primaryTextColor': '#fff', 'primaryBorderColor': '#16A085', 'lineColor': '#16A085', 'secondaryColor': '#E67E22', 'tertiaryColor': '#7F8C8D', 'clusterBkg': '#f9f9f9', 'clusterBorder': '#2C3E50', 'edgeLabelBackground':'#ffffff'}}}%%
flowchart LR
    S1[Parking Sensors] --> A[Aggregation Layer]
    A --> T1[Tier 1: City-Wide Count<br/>847 spaces available]
    A --> T2[Tier 2: Block Occupancy<br/>Block A: 12/20 occupied]

    S2[Optional: Cameras] --> E[Edge Processing]
    E --> T3[Tier 3: License Plate<br/>Processed Locally]
    E --> D[No Storage<br/>Deleted After Processing]

    T1 --> API[Public API<br/>Mobile Apps]
    T2 --> DB[Encrypted Database<br/>City Access Only]
    T3 --> Alert[Enforcement Alert<br/>No PII Stored]

    style T1 fill:#16A085,stroke:#2C3E50,color:#fff
    style T2 fill:#E67E22,stroke:#2C3E50,color:#fff
    style T3 fill:#8B0000,stroke:#16A085,color:#fff
    style D fill:#16A085,stroke:#2C3E50,color:#fff
    style API fill:#d4edda,stroke:#16A085
    style DB fill:#fff3cd,stroke:#E67E22

Figure 1421.3: Smart city parking system architecture with three-tier data classification

Privacy Benefits:

  • Public transparency: Tier 1 data drives citizen apps, reduces circling for parking
  • Operational efficiency: Tier 2 data helps city optimize enforcement routes
  • Privacy protection: Tier 3 license plates NEVER stored, processed only for real-time violation detection

1421.4.8 Best Practices: Tier-Aware Privacy Architecture

Tip: Privacy Tier Implementation Checklist

Design Phase:

  • Document privacy tier for EVERY data element in system design
  • Create data flow diagrams showing tier segregation
  • Design separate storage systems for different tiers (defense in depth)
  • Default to Tier 3 if classification is uncertain

Implementation Phase:

  • Tag data with privacy tier at collection (metadata tagging)
  • Automate retention policies by tier (no manual cleanup)
  • Implement tier-appropriate encryption (Tier 3 requires end-to-end)
  • Configure access controls by tier (Tier 3 requires MFA + audit)

Operations Phase:

  • Monitor for tier violations (Tier 3 data in Tier 1 systems)
  • Audit access patterns (especially Tier 3 access)
  • User transparency (show users what tier each data type belongs to)
  • Periodic review (ensure tier classifications remain accurate)

Compliance Phase:

  • Map tiers to regulatory requirements (GDPR Article 9 = Tier 3)
  • Document tier justifications for auditors
  • Provide tier-specific privacy notices (explain why Tier 3 needs consent)
  • Enable user-initiated deletion by tier (separate controls)

Common Mistakes to Avoid:

| Mistake | Why It’s Wrong | Correct Approach |
|---|---|---|
| Treating all data equally | Over-protects Tier 1 (wasted resources), under-protects Tier 3 (compliance risk) | Proportional protection by tier |
| Tier elevation without consent | Combining Tier 1+2 -> Tier 3 without user awareness | Require consent before combining tiers |
| Single database for all tiers | Tier 1 breach exposes Tier 3 data | Separate databases, separate encryption keys |
| No tier metadata | Cannot automate retention, access control | Tag every data point with tier at collection |
| Default to Tier 1 | Assumes data is safe until proven sensitive | Default to Tier 3, downgrade only with justification |
Warning: Tradeoff - Data Minimization vs Analytics Capability

Option A: Collect only essential data required for core functionality, maximizing privacy protection

Option B: Collect broader data to enable analytics, personalization, and product improvement

Decision Factors: Choose aggressive minimization when processing sensitive categories (Tier 3), when regulatory scrutiny is high, or when breach costs outweigh analytics value. Choose broader collection when analytics directly improve user experience, when users explicitly consent for personalization, or when aggregate insights benefit all users. Key questions: Can the same insight be derived from aggregated or anonymized data? Can processing happen on-device instead of cloud?

Warning: Tradeoff - Edge Processing for Privacy vs Cloud Processing for Capability

Option A: Process sensitive data on-device/edge - eliminates cloud transmission of raw PII, GDPR data minimization compliance, latency ~5-20ms, limited ML model size (50-500MB), higher device cost (+$5-15 per unit)

Option B: Cloud processing with encrypted transmission - enables large ML models (GB-scale), cross-device learning, centralized compliance monitoring, latency ~100-500ms, requires robust consent framework

Decision Factors: Choose edge processing when handling Tier 3 critical data, when regulatory requirements mandate local processing, when network reliability is uncertain, or when real-time response is safety-critical. Choose cloud processing when ML model complexity exceeds edge capability, when cross-user pattern analysis provides significant benefit, or when device cost constraints prevent edge compute hardware.

Note: Key Concepts
  • Privacy Hierarchy: Eliminate > Minimize > Anonymize > Encrypt > Policy Only
  • Data Minimization: Collect only what’s absolutely necessary for the specified purpose
  • Aggregation: Combine individual data points to prevent identification while preserving utility
  • Local Processing: Process sensitive data on-device; send only results to cloud
  • K-Anonymity: Make each record indistinguishable from at least k-1 other records
  • Three-Tier Privacy Model: Public (Tier 1), Sensitive (Tier 2), Critical (Tier 3) with proportional protection
  • Tier Elevation: When combining data creates higher-sensitivity information, result inherits highest tier
Tip: Chapter Summary

Privacy design patterns provide proven solutions to common IoT privacy challenges:

Four Core Patterns:

  1. Data Minimization: Collect only necessary information, remove optional fields, use pseudonymous identifiers
  2. Aggregation: Combine data temporally, spatially, or statistically to prevent individual identification
  3. Local Processing: Process on-device first, send only results to cloud when necessary
  4. Anonymization: Apply K-anonymity, differential privacy, or pseudonymization based on use case

Three-Tier Privacy Model:

  • Tier 1 (Public): Aggregate/anonymous data - cloud storage OK, open sharing, indefinite retention
  • Tier 2 (Sensitive): Identifiable patterns - encrypted cloud, partner sharing only, 1-3 year retention
  • Tier 3 (Critical): Biometric/health/financial - edge processing, never share externally, 7-30 day retention

Key Implementation Guidance:

  • Tag data with privacy tier at collection
  • Automate retention by tier
  • Design separate storage for different tiers
  • Monitor for tier violations (combining data that elevates sensitivity)
  • Default to Tier 3 when classification is uncertain

1421.5 What’s Next

Continue to Privacy Anti-Patterns and Assessment where you’ll learn:

  • Dark patterns and manipulative UX to avoid
  • Privacy theater vs genuine protection
  • Privacy Impact Assessment (PIA) framework
  • LINDDUN threat modeling for privacy
  • Development lifecycle integration
