29  Kits: AI, Wireless & Energy

29.1 Learning Objectives

By the end of this chapter, you will be able to:

  • Evaluate Computer Vision Platforms: Compare OpenMV, NVIDIA Jetson Nano, and Google Coral for edge AI applications
  • Select Wireless Communication Kits: Choose between XBee, LoRa, and cellular kits based on range and bandwidth requirements
  • Understand Energy Harvesting Options: Evaluate EnOcean and SparkFun kits for battery-free sensor deployments
  • Match AI Requirements to Hardware: Determine appropriate GPU/TPU acceleration for ML inference workloads
  • Plan Mesh Network Deployments: Leverage XBee and LoRa for multi-hop sensor networks

29.2 Prerequisites

Before diving into this chapter, you should be familiar with:

Key Concepts

  • Development Kit (DevKit): Pre-assembled board combining microcontroller, sensors, and connectivity for rapid prototyping without custom PCB design.
  • Shield: Plug-in expansion board adding capabilities (Wi-Fi, motor driver, sensor) to a base development board via standard pin headers.
  • STEMMA QT / Qwiic: Standardised 4-pin I2C connector enabling daisy-chaining of compatible sensors without soldering.
  • GPIO Mapping: Documentation of which microcontroller pins connect to which peripheral functions on a development board.
  • Datasheet: Manufacturer’s technical document specifying a component’s electrical characteristics, timing diagrams, and application circuits.
  • Breadboard Prototype: Solderless first-stage prototype allowing rapid component layout changes before committing to PCB design.
  • Reference Design: Manufacturer-provided schematic and PCB layout demonstrating how to correctly integrate a chip or module.

29.3 Introduction

Computer vision, wireless communication, and energy harvesting represent advanced IoT capabilities that enable sophisticated applications. AI kits bring machine learning to the edge for real-time object detection and image classification. Wireless kits enable mesh networks and long-range sensor deployments. Energy harvesting kits eliminate battery replacement for maintenance-free operation. This chapter explores leading platforms in these domains.

Computer Vision/AI Kits add “eyes” to your IoT devices. They typically include:

  • Camera sensors
  • GPU or TPU acceleration
  • Pre-trained ML models
  • Python/TensorFlow support

Wireless Communication Kits connect devices across distances. They typically include:

  • Radio modules (XBee, LoRa, cellular)
  • Mesh networking capability
  • Configuration software
  • Antennas

Energy Harvesting Kits power devices without batteries. They typically include:

  • Solar panels
  • Vibration harvesters
  • Power management circuits
  • Energy storage (supercapacitors)

Example: Jetson Nano detects objects in video at 30 fps. XBee creates a mesh network across a warehouse. EnOcean powers a switch from button press energy alone.

“Three superpowers in one chapter!” cheered Lila the LED. “AI kits give devices eyes to see, wireless kits give them wings to communicate across distances, and energy harvesting kits let them run on sunshine and vibrations!”

Sammy the Sensor was fascinated by the AI cameras. “The Jetson Nano can recognize objects at 30 frames per second? That is like giving a security camera a brain that knows the difference between a person and a cat!” Max added, “And Google Coral does it with even less power using a special TPU chip. Perfect for edge AI where you need instant decisions without sending data to the cloud.”

Bella the Battery was most excited about energy harvesting. “Imagine never needing to be replaced! EnOcean switches harvest energy just from pressing a button. Solar panels charge supercapacitors during the day. Some kits even harvest energy from vibrations in factory machines. For me, that is the dream – infinite power from the environment!”

“And wireless kits like XBee and LoRa create mesh networks,” said Max. “XBee covers a building, LoRa covers a whole farm. They let sensors talk to each other across huge distances without any existing infrastructure.”

29.4 Computer Vision and AI Kits

29.4.1 OpenMV Cam H7 Plus

The OpenMV Cam H7 Plus is a self-contained machine vision module that puts camera sensing and AI processing on a single board the size of a postage stamp. Built around the STM32H743 microcontroller with an OV5640 camera sensor, it runs MicroPython natively and supports TensorFlow Lite models – all for around $75.

The development workflow is refreshingly simple: the OpenMV IDE provides a live camera preview, code editor, and one-click deployment. Pre-trained models for common tasks (face detection, colour tracking, AprilTag recognition) work out of the box, while custom TensorFlow Lite models can be trained externally and deployed to the device. An onboard IMU sensor adds motion awareness for robotics applications.

This makes the OpenMV ideal for visual inspection on production lines, barcode and QR code reading at warehouse stations, gesture recognition interfaces, and adding basic vision to hobby robots. Its standalone operation (no PC or network required) and fast prototyping cycle (edit code, see results immediately on the live preview) lower the barrier to computer vision projects dramatically.

The limitations reflect its microcontroller heritage: image resolution is constrained, AI models must be small enough for the available RAM, and without GPU acceleration, complex neural networks run at less than 1 frame per second. For simple vision tasks, however, the OpenMV’s price-to-capability ratio is hard to beat.

29.4.2 NVIDIA Jetson Nano Developer Kit

The NVIDIA Jetson Nano brings GPU-accelerated deep learning to the edge in a credit-card-sized form factor. Its quad-core ARM CPU paired with a 128-core NVIDIA Maxwell GPU and 4 GB RAM can run object detection models like YOLOv5 at 30+ frames per second – performance that was reserved for desktop workstations just a few years ago.

The development environment runs full Ubuntu Linux with CUDA/cuDNN acceleration, supporting TensorFlow, PyTorch, and NVIDIA’s specialised DeepStream SDK for multi-stream video analytics. The JetPack SDK bundles drivers, libraries, and sample applications into a single installer. A MIPI CSI camera connector supports industry-standard camera modules, while a GPIO header enables integration with sensors and actuators.

This makes the Jetson Nano the platform of choice for AI-powered security cameras, autonomous robot navigation, industrial quality inspection (detecting defects at production-line speed), and smart city analytics (vehicle counting, pedestrian tracking, traffic flow analysis). Critically, NVIDIA’s product line (Nano, Xavier NX, Orin) shares the same software stack, so prototypes built on the Nano can scale to production on more powerful hardware without rewriting code.

The trade-offs are real: the power draw of 10W (20x the OpenMV) makes battery operation challenging, the $100–200 price point puts it above many educational budgets, and the Linux-based setup requires comfort with command-line configuration that simpler platforms avoid.

29.4.3 Google Coral Dev Board

The Google Coral Dev Board takes a different approach to edge AI than the Jetson Nano: instead of a general-purpose GPU, it uses a dedicated Edge TPU (Tensor Processing Unit) coprocessor optimised specifically for neural network inference. Paired with an NXP i.MX 8M SoC, 1 GB RAM, Wi-Fi/Bluetooth, and a MIPI CSI camera connector, it delivers inference performance comparable to the Jetson Nano at 2–4W rather than 10W.

Development uses Mendel Linux with TensorFlow Lite and a Python API. Google provides a library of pre-compiled models (object detection, image classification, pose estimation, speech recognition) that run immediately on the Edge TPU, and custom models trained in TensorFlow can be compiled for the TPU using Google’s toolchain.

The Coral excels where power efficiency meets performance requirements: battery-powered AI cameras, always-on speech recognition devices, and edge inference nodes that must process data locally without cloud connectivity. Google’s ML ecosystem (Colab for training, TFLite for deployment, Coral for inference) provides an end-to-end workflow that simplifies the train-deploy-infer cycle.

The specialisation that gives Coral its efficiency also limits flexibility. Only TensorFlow Lite models run on the Edge TPU (no PyTorch, no custom CUDA kernels). The $150 price point and narrow use case (inference only, not training) mean it suits projects with clearly defined ML workloads rather than open-ended AI experimentation.

29.4.4 Computer Vision Kit Comparison

| Feature | OpenMV Cam H7 | Jetson Nano | Coral Dev Board |
|---|---|---|---|
| Processor | STM32H7 | ARM + GPU | i.MX 8M + TPU |
| AI Acceleration | None | 128-core GPU | Edge TPU |
| Price | ~$75 | $100-200 | ~$150 |
| Power | 0.5W | 10W | 2-4W |
| Framework | TFLite Micro | TF, PyTorch | TFLite |
| FPS (YOLO) | <1 | 30+ | 30+ |
| Best For | Simple vision | Complex AI | Fast inference |

29.5 Knowledge Check

29.6 Wireless Communication Kits

29.6.1 Digi XBee3 Development Kit

The Digi XBee3 Development Kit provides a comprehensive platform for building mesh wireless networks. The kit includes multiple XBee3 modules (supporting Zigbee, BLE, 802.15.4, and cellular variants), USB interface boards for PC configuration, breadboard adapters for prototyping, and antennas optimised for each frequency band.

What distinguishes XBee from simpler radio modules is its built-in mesh networking stack. Nodes automatically discover neighbours, establish routes, and heal the network when nodes fail – capabilities that would require months of custom firmware development on raw radio modules. The XCTU configuration tool provides a graphical interface for network setup and a real-time network analyser that visualises topology and signal strength. For programmatic control, XBee3 supports MicroPython directly on the module and an API mode for integration with external microcontrollers.
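As a concrete illustration of API mode, the sketch below frames a Transmit Request the way Digi's API-mode documentation describes it (0x7E start delimiter, two-byte length, frame data, checksum). The destination address and payload are made-up examples; verify the field layout against your module's firmware documentation before relying on it.

```python
def xbee_api_frame(frame_data: bytes) -> bytes:
    """Wrap frame data in XBee API-mode framing: 0x7E start delimiter,
    2-byte big-endian length, data, then checksum = 0xFF - (sum & 0xFF)."""
    checksum = 0xFF - (sum(frame_data) & 0xFF)
    return b"\x7e" + len(frame_data).to_bytes(2, "big") + frame_data + bytes([checksum])

def transmit_request(dest64: int, payload: bytes, frame_id: int = 1) -> bytes:
    """Build a Transmit Request (frame type 0x10) to a 64-bit address."""
    data = bytes([0x10, frame_id])        # frame type, frame ID
    data += dest64.to_bytes(8, "big")     # 64-bit destination address
    data += b"\xff\xfe"                   # 16-bit address: unknown
    data += bytes([0x00, 0x00])           # broadcast radius, TX options
    data += payload
    return xbee_api_frame(data)

frame = transmit_request(0x0013A20012345678, b"hello")
```

A receiver validates a frame by checking that the frame data plus checksum sums to 0xFF modulo 256 – the same arithmetic in reverse.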

XBee modules are the industry standard for sensor networks, remote monitoring, and multi-hop communication where self-healing mesh topology is essential. Their proven reliability (deployed in millions of commercial products) and excellent range (up to 1.5 km line-of-sight with high-gain antennas) make them a safe choice for professional applications.

The trade-offs are higher per-module cost than commodity radio chips, API complexity inherited from legacy protocol versions, and tight coupling to Digi’s proprietary ecosystem (mixing XBee with non-XBee mesh nodes is difficult).

29.6.2 LoRa Development Kit

LoRa (Long Range) development kits enable wireless communication across kilometres rather than metres, opening up applications that short-range protocols like Bluetooth and Zigbee cannot reach. A typical kit includes LoRa transceiver modules (based on the Semtech SX1276 or SX1262 chipset), Arduino-compatible development boards, antennas, and optionally a gateway for LoRaWAN network connectivity.

Development uses familiar Arduino libraries for device-side programming, with The Things Network providing a free community infrastructure for LoRaWAN connectivity. Devices can also communicate point-to-point using raw LoRa without a gateway, or use AT commands for simple serial integration with any microcontroller.

LoRa kits suit long-range sensor deployments (soil moisture across a farm, water level in remote reservoirs), smart city infrastructure (parking sensors, waste bin monitoring), and remote telemetry where cellular coverage is unavailable or subscription costs are prohibitive. The combination of kilometre-range communication with microamp-level sleep current enables multi-year battery operation – a 2,000 mAh coin cell can power a LoRa sensor transmitting hourly for 5+ years.
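The multi-year battery claim can be sanity-checked with a duty-cycle power budget. The sleep and transmit currents below are illustrative assumptions in the range typical of LoRa-class radios, not measured values:

```python
def battery_life_years(capacity_mah, sleep_ua, tx_ma, tx_time_s, interval_s):
    """Estimate battery life from the average current of a duty-cycled radio."""
    avg_ma = sleep_ua / 1000 + tx_ma * (tx_time_s / interval_s)
    return capacity_mah / avg_ma / (24 * 365)

# 2,000 mAh cell; assumed 2 uA sleep, 120 mA TX for ~1 s (SF12), hourly sends
years = battery_life_years(2000, sleep_ua=2, tx_ma=120,
                           tx_time_s=1.0, interval_s=3600)
```

Even with a full second of transmit time every hour, the average current stays around 35 µA, which is consistent with the 5+ year figure (battery self-discharge, not the radio, often becomes the limiting factor).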

The bandwidth trade-off is significant: LoRa’s maximum data rate of around 50 kbps (and typically 0.3–11 kbps in LoRaWAN configurations) limits it to small sensor payloads. A gateway is required for LoRaWAN network access, and European regulatory duty cycle limits (1% transmit time) constrain how frequently devices can send.
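The duty-cycle limit translates into a minimum interval between transmissions. The sketch below follows the time-on-air formula from Semtech's AN1200.13 application note for SX127x-class radios; treat it as an estimate and cross-check against the datasheet for your chipset and settings.

```python
import math

def lora_airtime_s(payload_bytes, sf=12, bw_hz=125_000, cr=1,
                   preamble_syms=8, explicit_header=True,
                   crc=True, low_dr_opt=True):
    """Packet time-on-air per Semtech AN1200.13 (SX127x family)."""
    t_sym = (2 ** sf) / bw_hz                      # symbol duration (s)
    t_preamble = (preamble_syms + 4.25) * t_sym
    de = 1 if low_dr_opt else 0                    # low data rate optimisation
    ih = 0 if explicit_header else 1
    num = 8 * payload_bytes - 4 * sf + 28 + (16 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return t_preamble + n_payload * t_sym

airtime = lora_airtime_s(10)          # 10-byte payload at SF12 / 125 kHz
min_interval_s = airtime / 0.01       # EU868 1% duty-cycle limit
```

A 10-byte payload at SF12 costs roughly one second of airtime, so the 1% duty cycle forces about 100 seconds between transmissions – a concrete reason LoRa suits infrequent sensor readings rather than streaming.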

29.6.3 Wireless Kit Comparison

| Feature | XBee3 | LoRa |
|---|---|---|
| Range | 100m-1.5km | 2-15 km |
| Data Rate | Up to 250 kbps | Up to 50 kbps |
| Topology | Mesh | Star (LoRaWAN) |
| Power | Low | Very low |
| Price (module) | $20-40 | $10-25 |
| Protocol | Zigbee, 802.15.4 | LoRa/LoRaWAN |
| Best For | Mesh networks | Long-range, low-power |
Figure 29.1: Decision flowchart for selecting wireless communication kits based on range, topology, and power requirements.
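A rough range estimate helps explain the kilometre-scale figures in the table. This sketch inverts the free-space path-loss model; the TX power, SF12 sensitivity, and 40 dB fade margin are illustrative assumptions, and real-world range depends heavily on terrain and obstructions.

```python
import math

def max_range_km(tx_dbm, sensitivity_dbm, freq_mhz, fade_margin_db=40.0):
    """Invert the free-space path loss model:
    FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.45."""
    budget_db = tx_dbm - sensitivity_dbm - fade_margin_db
    log_d = (budget_db - 32.45 - 20 * math.log10(freq_mhz)) / 20
    return 10 ** log_d

# EU868 LoRa: +14 dBm TX, -137 dBm sensitivity at SF12, 40 dB fade margin
lora_km = max_range_km(14, -137, 868)
```

With those assumptions the estimate lands at roughly 10 km, in the middle of the table's 2-15 km range; without the fade margin the free-space model predicts absurdly long distances, which is why a margin term is essential.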

29.7 Energy Harvesting Kits

Before exploring energy harvesting hardware, it helps to understand power budgets: how much energy each environmental source can supply, and how much your device consumes between recharge opportunities.

29.7.1 EnOcean Development Kit

The EnOcean Development Kit represents a fundamentally different approach to IoT power: instead of batteries, devices harvest energy from their environment. The kit includes solar-powered sensor modules, kinetic energy modules (powered by the mechanical energy of pressing a button), temperature sensors, and wireless switches – all communicating via the EnOcean wireless protocol.

The technology is production-proven, not experimental. EnOcean switches are deployed in millions of buildings worldwide, harvesting just enough energy from a button press (50–100 microjoules) to transmit a wireless signal. Solar-powered temperature sensors use indoor lighting to charge a supercapacitor, then transmit readings periodically without any battery at all. Development uses the EnOcean protocol stack with API libraries and gateway integration for connection to building management systems.
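A back-of-envelope energy check shows why a button press can power a transmission. The transmit current and telegram duration below are assumed figures for illustration, not EnOcean specifications:

```python
def transmit_energy_uj(tx_ma, volts, duration_ms):
    """Energy for one radio transmission, in microjoules (E = V * I * t)."""
    return (tx_ma / 1000) * volts * (duration_ms / 1000) * 1e6

# Assumed figures: 25 mA TX at 3 V for a ~1 ms telegram
e_tx_uj = transmit_energy_uj(25, 3.0, 1.0)   # 75 uJ
press_uj = 100                               # upper end of a button press
feasible = e_tx_uj <= press_uj
```

A 75 µJ transmission fits inside the 50-100 µJ harvested per press only if the telegram is kept extremely short – which is exactly how the EnOcean protocol is designed.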

This makes EnOcean the go-to platform for battery-free building automation sensors, self-powered light switches (no wiring needed – just stick on the wall), and maintenance-free deployments where replacing batteries in thousands of sensors would be impractical. The industrial-quality hardware and ISO/IEC 14543-3-10 standardisation provide the reliability guarantees that commercial building projects require.

The drawbacks are cost (modules are significantly more expensive than battery-powered alternatives), limited processing capability (energy harvesting provides microwatts, constraining computation), and a proprietary ecosystem that does not interoperate with Zigbee, Thread, or other mesh protocols.

29.7.2 SparkFun Energy Harvesting Kit

Where EnOcean provides a polished, production-ready energy harvesting solution, the SparkFun Energy Harvesting Kit takes the opposite approach: it provides raw components for experimentation and learning. The kit includes solar panels in various sizes, a vibration harvester (piezoelectric element), a thermoelectric generator (Peltier module used in reverse), energy storage circuits (supercapacitors and charge controllers), and buck/boost converters for voltage regulation.

Everything is Arduino-compatible and open source, with energy measurement tools that let students quantify exactly how much power each source produces under different conditions. You can measure that a small indoor solar panel produces 50 microwatts under office lighting, that a vibration harvester on a running motor generates 200 microwatts, and that a thermoelectric generator across a 10 °C temperature differential delivers 10 milliwatts – turning abstract datasheet numbers into concrete, measurable experience.

The kit excels as a research and educational platform: energy harvesting courses, self-powered sensor prototypes, and remote monitoring feasibility studies where the question is “can this environment provide enough energy?” rather than “deploy 1,000 units next quarter.”

The experimental nature means power output is inconsistent (dependent on environmental conditions that change hourly), integration requires electronics expertise (matching source impedance, managing charge cycles, handling intermittent power), and the resulting systems are not production-ready without significant engineering effort.

29.7.3 Energy Harvesting Comparison

| Feature | EnOcean | SparkFun Kit |
|---|---|---|
| Energy Sources | Solar, kinetic | Solar, vibration, thermal |
| Maturity | Production-ready | Experimental |
| Price | $100-300 | $50-100 |
| Protocol | EnOcean (proprietary) | Open (Arduino) |
| Power Output | µW to mW | µW to mW |
| Best For | Production deployment | Research/learning |

29.7.4 Energy Harvesting Sources

| Source | Power Output | Best Application |
|---|---|---|
| Indoor Solar | 10-100 µW/cm² | Building sensors |
| Outdoor Solar | 1-10 mW/cm² | Outdoor sensors |
| Vibration | 10-1000 µW | Industrial monitoring |
| Thermal (ΔT=10°C) | 10-100 mW | Machine monitoring |
| RF Harvesting | 1-100 µW | Passive RFID, NFC |
| Kinetic (button) | 50-100 µJ/press | Switches, controls |
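The table values can feed a first-pass feasibility check: does the harvested power, derated for conversion losses, cover the load's average draw with some margin? The 75% conversion efficiency and 2x margin below are assumptions, not datasheet figures:

```python
def harvest_feasible(harvest_uw, load_avg_uw, conversion_eff=0.75,
                     min_margin=2.0):
    """True if derated harvested power covers the load with margin."""
    return harvest_uw * conversion_eff >= load_avg_uw * min_margin

# Vibration harvester on a motor (200 uW) feeding a 20 uW average sensor
ok = harvest_feasible(200, 20)   # 150 uW usable vs 40 uW required
```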

Solar panel sizing for a sensor node drawing 100 µA average requires accounting for panel efficiency and winter insolation. A 50 mm × 50 mm (25 cm²) panel at 15% efficiency under 800 W/m² outdoor light:

\[ P_{\text{max}} = 800\text{ W/m}^2 \times 0.0025\text{ m}^2 \times 0.15 = 0.3\text{ W} = 300\text{ mW} \]

With 4 hours effective sunlight daily (winter worst case):

\[ E_{\text{daily}} = 300\text{ mW} \times 4\text{ h} = 1200\text{ mWh} = 1.2\text{ Wh} \]

The sensor node at 100 µA (3.3V) consumes:

\[ E_{\text{node}} = 0.0001\text{ A} \times 3.3\text{ V} \times 24\text{ h} = 0.008\text{ Wh} \]

The panel provides 150× margin, sufficient for cloudy periods and charge/discharge inefficiency (typically 70-80%). Smaller 10 cm² panels ($1-2) can sustain µW-level loads indefinitely.
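The worked calculation above translates directly into a reusable sizing check:

```python
def panel_daily_wh(irradiance_w_m2, area_cm2, efficiency, sun_hours):
    """Daily harvested energy from a solar panel, in Wh."""
    p_w = irradiance_w_m2 * (area_cm2 / 10_000) * efficiency
    return p_w * sun_hours

def node_daily_wh(avg_current_a, voltage_v):
    """Daily energy consumed by a node at a constant average current, in Wh."""
    return avg_current_a * voltage_v * 24

supply_wh = panel_daily_wh(800, 25, 0.15, sun_hours=4)   # 1.2 Wh
demand_wh = node_daily_wh(100e-6, 3.3)                   # ~0.008 Wh
margin = supply_wh / demand_wh                           # ~150x
```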

29.8 Matching Requirements to Kits

29.8.1 Computer Vision Selection

The three computer vision platforms serve distinct niches along a complexity-cost spectrum. OpenMV is the right choice when your vision task is straightforward (colour tracking, QR code reading, simple object counting), you need standalone operation without a PC, your budget is around $75, and you prefer MicroPython’s simplicity. Jetson Nano becomes necessary when the application demands complex AI models (YOLOv5, ResNet, custom CNNs), real-time performance at 30+ fps, the full Linux ecosystem for development flexibility, or multi-camera setups. Coral occupies the middle ground: choose it when you need Jetson-class inference speed but at 2–4W instead of 10W, your workflow is already built around TensorFlow Lite, or Google Cloud integration is part of your architecture.
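The selection logic can be condensed into a rule-of-thumb helper. The thresholds below are drawn from this chapter's comparison table, not vendor guidance, so treat the boundaries as illustrative:

```python
def pick_vision_platform(fps_needed, power_budget_w, needs_full_linux=False):
    """Rough platform choice following the chapter's comparison table."""
    if fps_needed <= 1 and power_budget_w < 1:
        return "OpenMV Cam H7 Plus"      # simple vision, standalone, ~0.5 W
    if needs_full_linux or power_budget_w >= 10:
        return "Jetson Nano"             # complex models, full Linux, ~10 W
    return "Coral Dev Board"             # fast TFLite inference at 2-4 W

choice = pick_vision_platform(fps_needed=30, power_budget_w=4)
```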

29.8.2 Wireless Selection

The wireless kit choice depends primarily on range and topology. XBee is the right platform when your deployment requires self-healing mesh networking (warehouse monitoring, building-wide sensor networks), you need multi-protocol flexibility (Zigbee, BLE, 802.15.4 on the same module family), or professional reliability is non-negotiable. LoRa becomes the clear winner when distances exceed 1 km, ultra-low power consumption is critical for multi-year battery life, data payloads are small and infrequent (a few bytes every few minutes), or the deployment is in a rural or agricultural setting where no cellular infrastructure exists.

29.8.3 Energy Harvesting Selection

Energy harvesting kit selection splits cleanly between deployment stage and learning stage. EnOcean suits production deployments where proven technology, industrial quality, building automation integration, and standards compliance matter more than cost. SparkFun suits research, experimentation, and education where the goal is understanding energy harvesting principles across multiple source types (solar, vibration, thermal) rather than deploying a finished product.

29.9 Advanced Quiz Questions

29.10 Worked Example: From Jetson Nano Prototype to Production at Verkada

Verkada, a building security company, used the prototyping-to-production pipeline to develop their AI-powered security camera line. Their journey illustrates the cost, timeline, and engineering decisions involved in transitioning from a development kit to a shipping product.

Phase 1: Proof of Concept (Jetson Nano, 4 weeks, $500)

The engineering team used 3 Jetson Nano kits ($150 each) with Raspberry Pi Camera Module V2 ($25) to prototype real-time person detection. Using NVIDIA’s pre-trained PeopleNet model on the Jetson Nano, they achieved 28 fps at 1080p with 92% detection accuracy. Total hardware cost: $525. Development time: 4 weeks (1 engineer).

Phase 2: Prototype Refinement (Jetson Xavier NX, 8 weeks, $3,200)

The Nano’s 10W power draw and passive cooling were inadequate for an enclosed outdoor camera (thermal throttling above 40 °C ambient). The team moved to the Xavier NX ($399), which provided the same inference performance at 15W but with hardware-accelerated video encode (needed for recording) and support for 2 simultaneous camera streams. They added LoRaWAN connectivity via a Semtech SX1262 module ($12) for low-bandwidth alert notifications when Wi-Fi was unavailable.

Phase 3: Custom Hardware Design (12 weeks, $180,000)

| Component | Dev Kit | Production | Cost Change |
|---|---|---|---|
| SoC | Xavier NX module ($399) | Ambarella CV25 ($18) | -95% |
| Memory | 8 GB LPDDR4x (module) | 2 GB LPDDR4 ($6) | Custom |
| Camera | RPi Camera V2 ($25) | Sony IMX327 ($8) | -68% |
| Storage | 128 GB microSD ($20) | 32 GB eMMC ($4) | -80% |
| Wireless | USB Wi-Fi dongle ($15) | Integrated QCA6174A ($5) | -67% |
| Enclosure | 3D printed ($8) | Injection moulded ($1.20) | -85% |
| PCB | Dev board ($399) | Custom 4-layer ($12) | -97% |
| Total BOM | ~$870 | ~$54 | -94% |

The transition from NVIDIA’s Jetson platform to Ambarella’s CV25 was the most significant engineering decision. The Ambarella chip provided equivalent neural network inference performance for person detection at $18 vs. $399, but required 12 weeks of firmware porting – the TensorFlow models had to be recompiled for Ambarella’s CVflow architecture, and the video pipeline had to be rewritten from NVIDIA’s DeepStream SDK to Ambarella’s AMBA SDK.

Phase 4: Manufacturing Ramp (8 weeks, $350,000)

  • NRE (non-recurring engineering): $180,000 for PCB design, firmware, and mechanical engineering
  • Tooling: $85,000 for injection mould (enclosure + lens housing)
  • Certification: $45,000 (FCC, CE, UL)
  • First production run: 5,000 units at $54 BOM + $18 assembly = $72/unit
  • Retail price: $299 (4.2x markup covering R&D amortisation, warranty, cloud services, margin)
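The per-unit economics can be checked with a quick amortisation calculation. Spreading all fixed costs over the first 5,000-unit run is a simplifying assumption for illustration; real amortisation spans the product's lifetime volume:

```python
def unit_economics(bom, assembly, fixed_costs, units, retail):
    """Per-unit cost and gross margin with fixed costs amortised over a run."""
    unit_cost = bom + assembly + fixed_costs / units
    return {"unit_cost": unit_cost,
            "gross_margin": (retail - unit_cost) / retail}

# NRE + tooling + certification amortised over the first 5,000-unit run
econ = unit_economics(bom=54, assembly=18,
                      fixed_costs=180_000 + 85_000 + 45_000,
                      units=5_000, retail=299)
```

Fixed costs add $62 per unit on this run, giving a fully loaded cost of $134 against the $299 retail price – a gross margin of about 55%, before cloud service and warranty costs.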

Key Decision Points for Kit-to-Production Transition:

| Decision | Prototyping Phase | Production Phase | Why It Changes |
|---|---|---|---|
| Processor | Use dev kit SoC | Switch to application-specific SoC | 10-20x cost reduction at volume |
| Software | Use vendor SDK (DeepStream, JetPack) | Port to production platform | Licensing, power, cost constraints |
| Connectivity | USB dongles, breakout boards | Integrated wireless SoC | Size, reliability, cost |
| Enclosure | 3D printed or off-shelf | Injection moulded | $8 -> $1.20 at 5,000+ units |
| Power | USB/barrel jack, external supply | Custom PMIC, PoE integrated | Reliability, form factor |

Timeline Summary: Proof of concept (4 weeks) -> Prototype refinement (8 weeks) -> Custom hardware (12 weeks) -> Manufacturing ramp (8 weeks) = 32 weeks total from first prototype to production units. The prototyping kits saved an estimated 6-8 weeks of initial development by providing working hardware and pre-trained models on day one.


29.11 Summary

  • OpenMV Cam H7 Plus provides affordable standalone machine vision with MicroPython programming, suitable for simple object detection, barcode reading, and robotics vision applications
  • NVIDIA Jetson Nano offers powerful GPU-accelerated edge AI with full TensorFlow/PyTorch support, enabling real-time object detection (30+ fps) for autonomous robots and industrial inspection
  • Google Coral Dev Board delivers fast ML inference with Edge TPU acceleration at lower power consumption, ideal for TensorFlow Lite deployments requiring efficient edge processing
  • Digi XBee3 enables professional mesh networking with Zigbee/802.15.4/BLE support, self-healing topology, and excellent reliability for sensor networks requiring multi-hop communication
  • LoRa Development Kits provide km-range communication with ultra-low power consumption, perfect for agricultural monitoring, smart city, and remote telemetry applications
  • EnOcean offers production-ready energy harvesting for battery-free sensors in building automation, while SparkFun kits enable energy harvesting research and experimentation with multiple sources

In 60 Seconds

AI kits bring vision and ML inference to the edge (OpenMV for simple standalone tasks, Jetson Nano for complex models, Coral for efficient TFLite inference); XBee and LoRa kits connect devices across buildings and kilometres via mesh and star topologies; and energy harvesting kits (EnOcean, SparkFun) power sensors from light, motion, and heat, eliminating battery maintenance.

29.12 Concept Relationships

Prerequisites: Specialized Prototyping Kits Overview - Understanding kit ecosystem architecture. Edge Computing - AI and ML concepts for computer vision kits. Wireless Protocols - LoRa, Zigbee, and mesh networking fundamentals.

Related Concepts: Kit Selection Guide - Framework for choosing AI/wireless kits. Energy-Aware Considerations - Power budgeting for energy harvesting systems. Computer Vision - AI algorithms for vision applications.

Builds Toward: Industrial and Wearable Kits - Specialized domain applications. Production Transition - From prototype kit to manufactured product.

29.13 See Also

AI/Vision Resources: NVIDIA Jetson Developer Zone - Official Jetson documentation and tutorials. Google Coral Documentation - Edge TPU guides and pre-trained models. OpenMV Documentation - MicroPython computer vision tutorials. Edge Impulse - Platform for training and deploying edge ML models.

Wireless Communication: Digi XBee Documentation - XBee module configuration and mesh networking. The Things Network - Free LoRaWAN network and community. LoRa Alliance - LoRaWAN specifications and certification. Semtech LoRa Developer Portal - Technical resources for LoRa development.

Energy Harvesting: EnOcean Alliance - Energy harvesting standards and products. Texas Instruments Energy Harvesting Design Guide - Solar and thermal energy harvesting circuits. Cymbet Energy Harvesting - Thin-film battery and harvesting solutions.

Community: Hackster.io Edge AI Projects - Community edge AI projects and tutorials. LoRa Developer Forum - LoRa technical discussions. Reddit r/EdgeComputing - Edge AI and computing community.

29.14 What’s Next

| If you want to… | Read this |
|---|---|
| Explore smart building energy management | Smart Buildings and Homes |
| Understand demand response IoT systems | Smart Cities and Urban IoT |
| Learn about cellular IoT for metering | Application Domains Overview |