Grove Vision AI V2 (WE2) without Arduino SSCMA library

@PJ_Glasso I hope you can help guide using the Grove Vision AI V2 module with an STM32WL MCU. This MCU is not supported by the Arduino SSCMA library that lets a XIAO ESP32C3 talk to the SSCMA-Micro firmware on the WE2, so I am trying to follow the at_protocol documented in the SSCMA-Micro GitHub repo instead. I am able to establish communications from my STM32WL to the WE2, but I cannot get the WE2 to properly INVOKE image capture and inferencing. Using SenseCraft, I confirm the model is loaded; SenseCraft displays the preview video and lists results in its "Device Logger". A test program I wrote to decipher the WE2 results seems to indicate the model size is "0", perhaps suggesting the model is not being loaded into WE2 memory from flash? To show the communications, here is a snippet of the Serial Monitor listing from my STM32WL (a RAK3172 module) program, connected to the WE2 via Tx/Rx. I reset the STM32WL at time points 20:23:52:833 and 20:25:06:668.

20:23:30:459 ---- Sent utf8 encoded message: "i\r\n" ----
20:23:33:383 ---- Sent utf8 encoded message: "q\r\n" ----
20:23:52:833 → �RAK3172 ↔ Grove Vision AI V2 UART/AT/JSON Test
20:24:00:935 → Type 'i' for inference, 'q' for query, 's' for sample
20:24:00:938 → [AT > WE2] AT+ID?[WE2 > RAW]
20:24:02:941 →
20:24:02:941 → [AT > WE2] AT+NAME?[WE2 > RAW]
20:24:04:944 →
20:24:04:944 → [AT > WE2] AT+STAT?[WE2 > RAW]
20:24:06:947 →
20:24:06:947 → [AT > WE2] AT+VER?[WE2 > RAW]
20:24:08:950 →
20:24:08:950 → [AT > WE2] AT+MODEL?[WE2 > RAW]
20:24:10:953 →
20:24:10:953 → [AT > WE2] AT+ALGOS?[WE2 > RAW]
20:24:12:956 →
20:24:12:956 → [AT > WE2] AT+SENSORS?[WE2 > RAW]
20:24:14:958 →
20:24:14:958 → Current Work Mode: LoRaWAN.
20:24:35:341 → [WE2 > RAW]
20:24:35:341 → {"type": 0, "name": "INIT@STAT?", "code": 0, "data": {"boot_count": 61, "is_ready": 1}}
20:24:35:349 → [WE2 > PARSED]
20:24:35:349 → {
20:24:35:349 → "type": 0,
20:24:35:353 → "name": "INIT@STAT?",
20:24:35:355 → "code": 0,
20:24:35:355 → "data": {
20:24:35:355 → "boot_count": 61,
20:24:35:358 → "is_ready": 1
20:24:35:361 → }
20:24:35:361 → }
20:25:06:668 → �RAK3172 ↔ Grove Vision AI V2 UART/AT/JSON Test
20:25:14:770 → Type 'i' for inference, 'q' for query, 's' for sample
20:25:14:772 → [AT > WE2] AT+ID?[WE2 > RAW]
20:25:14:799 → {"type": 0, "name": "ID?", "code": 0, "data": "79344b8a"}
20:25:14:805 → [AT > WE2] AT+NAME?[WE2 > RAW]
20:25:14:819 → {"type": 0, "name": "NAME?", "code": 0, "data": "Grove Vision AI V2"}
20:25:14:825 → [AT > WE2] AT+STAT?[WE2 > RAW]
20:25:14:839 → {"type": 0, "name": "STAT?", "code": 0, "data": {"boot_count": 61, "is_ready": 1}}
20:25:14:862 → [AT > WE2] AT+VER?[WE2 > RAW]
20:25:14:862 → {"type": 0, "name": "VER?", "code": 0, "data": {"at_api": "v0", "software": "2025.01.02", "hardware": "1"}}
20:25:14:868 → [AT > WE2] AT+MODEL?[WE2 > RAW]
20:25:14:900 → {"type": 0, "name": "MODEL?", "code": 0, "data": {"id": 1, "type": 0, "address": 4194304, "size": 0}}
20:25:14:908 → [AT > WE2] AT+ALGOS?[WE2 > RAW]
20:25:16:912 → {"type": 0, "name": "ALGOS?", "code": 0, "data": [{"type": 7, "categroy": 1, "input_from": 1}, {"type": 6, "categroy": 1, "input
20:25:16:931 → [AT > WE2] AT+SENSORS?[WE2 > RAW]
20:25:18:925 →

Following is my test program to invoke image capture and inferencing, and then decipher the JSON response from the WE2.

#include <Arduino.h>
#include <ArduinoJson.h>

// Serial port for Grove Vision AI V2
#define WE2_SERIAL Serial1
#define WE2_BAUD 921600
#define SERIAL_MON Serial

// AT commands
// Use protocol-compliant carriage return terminator
#define AT_INVOKE "AT+INVOKE=1,0,1\r"  // Run model 1, once
#define AT_QUERY  "AT+QUERY\r"         // Query last result
#define AT_SAMPLE "AT+SAMPLE=1\r"      // Capture image
#define AT_TIMEOUT 2000
#define JSON_BUF_SIZE 4096

char atResponse[JSON_BUF_SIZE];
size_t atResponseLen = 0;
bool jsonReady = false;

void sendATCommand(const char* cmd) {
  WE2_SERIAL.print(cmd);
  SERIAL_MON.print("[AT > WE2] ");
  SERIAL_MON.print(cmd);
}

// Read response into buffer; capture one complete JSON object by
// counting braces. Blocks for up to AT_TIMEOUT ms.
void readATResponse() {
  atResponseLen = 0;
  jsonReady = false;
  int braceCount = 0;
  bool started = false;
  unsigned long start = millis();
  while (millis() - start < AT_TIMEOUT) {
    while (WE2_SERIAL.available()) {
      char c = WE2_SERIAL.read();
      if (!started && c == '{') {
        started = true;
        braceCount = 1;
        atResponse[atResponseLen++] = c;
      } else if (started) {
        if (atResponseLen < JSON_BUF_SIZE - 1) {
          atResponse[atResponseLen++] = c;
        }
        if (c == '{') braceCount++;
        if (c == '}') braceCount--;
        if (braceCount == 0) {
          jsonReady = true;
          break;
        }
      }
    }
    if (jsonReady) break;
  }
  atResponse[atResponseLen] = '\0';
}

void printRawResponse() {
  SERIAL_MON.println("[WE2 > RAW]");
  SERIAL_MON.println(atResponse);
}

void parseAndPrintJSON() {
  StaticJsonDocument<JSON_BUF_SIZE> doc;
  DeserializationError err = deserializeJson(doc, atResponse);
  if (err) {
    SERIAL_MON.print("[JSON ERROR] ");
    SERIAL_MON.println(err.c_str());
    return;
  }
  SERIAL_MON.println("[WE2 > PARSED]");
  serializeJsonPretty(doc, SERIAL_MON);
  SERIAL_MON.println();
  // Results are nested under "data" in the at_protocol replies (see the
  // raw responses above). NOTE: depending on firmware version, boxes and
  // points may arrive as flat arrays ([x, y, w, h, score, target]) rather
  // than the keyed objects assumed below; adjust the accessors if so.
  JsonObject data = doc["data"];
  // Print performance metrics if present
  if (data.containsKey("perf")) {
    SERIAL_MON.print("Preprocess: ");
    SERIAL_MON.println(data["perf"][0].as<float>(), 3);
    SERIAL_MON.print("Inference: ");
    SERIAL_MON.println(data["perf"][1].as<float>(), 3);
    SERIAL_MON.print("Postprocess: ");
    SERIAL_MON.println(data["perf"][2].as<float>(), 3);
  }
  // Print detection/class info if present
  if (data.containsKey("boxes")) {
    JsonArray boxes = data["boxes"].as<JsonArray>();
    for (JsonObject box : boxes) {
      SERIAL_MON.print("Box: ");
      SERIAL_MON.print("target=");
      SERIAL_MON.print(box["target"].as<int>());
      SERIAL_MON.print(", score=");
      SERIAL_MON.print(box["score"].as<float>(), 3);
      SERIAL_MON.print(", x=");
      SERIAL_MON.print(box["x"].as<int>());
      SERIAL_MON.print(", y=");
      SERIAL_MON.print(box["y"].as<int>());
      SERIAL_MON.print(", w=");
      SERIAL_MON.print(box["w"].as<int>());
      SERIAL_MON.print(", h=");
      SERIAL_MON.println(box["h"].as<int>());
    }
  }
  if (data.containsKey("classes")) {
    JsonArray classes = data["classes"].as<JsonArray>();
    for (JsonObject cls : classes) {
      SERIAL_MON.print("Class: target=");
      SERIAL_MON.print(cls["target"].as<int>());
      SERIAL_MON.print(", score=");
      SERIAL_MON.println(cls["score"].as<float>(), 3);
    }
  }
  if (data.containsKey("points")) {
    JsonArray points = data["points"].as<JsonArray>();
    for (JsonObject pt : points) {
      SERIAL_MON.print("Point: x=");
      SERIAL_MON.print(pt["x"].as<int>());
      SERIAL_MON.print(", y=");
      SERIAL_MON.print(pt["y"].as<int>());
      SERIAL_MON.print(", score=");
      SERIAL_MON.print(pt["score"].as<float>(), 3);
      SERIAL_MON.print(", target=");
      SERIAL_MON.println(pt["target"].as<int>());
    }
  }
  if (data.containsKey("keypoints")) {
    JsonArray keypoints = data["keypoints"].as<JsonArray>();
    for (JsonObject kp : keypoints) {
      JsonObject box = kp["box"];
      SERIAL_MON.print("Keypoint: box target=");
      SERIAL_MON.print(box["target"].as<int>());
      SERIAL_MON.print(", score=");
      SERIAL_MON.print(box["score"].as<float>(), 3);
      SERIAL_MON.print(", x=");
      SERIAL_MON.print(box["x"].as<int>());
      SERIAL_MON.print(", y=");
      SERIAL_MON.print(box["y"].as<int>());
      SERIAL_MON.print(", w=");
      SERIAL_MON.print(box["w"].as<int>());
      SERIAL_MON.print(", h=");
      SERIAL_MON.print(box["h"].as<int>());
      SERIAL_MON.print(", points=[");
      JsonArray pts = kp["points"].as<JsonArray>();
      for (JsonObject pt : pts) {
        SERIAL_MON.print("[x=");
        SERIAL_MON.print(pt["x"].as<int>());
        SERIAL_MON.print(", y=");
        SERIAL_MON.print(pt["y"].as<int>());
        SERIAL_MON.print(", score=");
        SERIAL_MON.print(pt["score"].as<float>(), 3);
        SERIAL_MON.print(", target=");
        SERIAL_MON.print(pt["target"].as<int>());
        SERIAL_MON.print("] ");
      }
      SERIAL_MON.println("]");
    }
  }
  // Do not print image data to save RAM
}

void setup() {
  SERIAL_MON.begin(115200);
  WE2_SERIAL.begin(WE2_BAUD);
  delay(5000);
  SERIAL_MON.println("RAK3172 ↔ Grove Vision AI V2 UART/AT/JSON Test");
  delay(3000);
  SERIAL_MON.println("Type 'i' for inference, 'q' for query, 's' for sample");
  // Optional: Confirm communication with Grove Vision AI V2
  sendATCommand("AT+ID?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+NAME?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+STAT?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+VER?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+MODEL?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+ALGOS?\r");
  readATResponse();
  printRawResponse();
  sendATCommand("AT+SENSORS?\r");
  readATResponse();
  printRawResponse();
}

void loop() {
  // Simple serial menu for manual testing
  if (SERIAL_MON.available()) {
    char c = SERIAL_MON.read();
    if (c == 'i') sendATCommand(AT_INVOKE);
    else if (c == 'q') sendATCommand(AT_QUERY);
    else if (c == 's') sendATCommand(AT_SAMPLE);
    // Ignore CR and LF
    else if (c == '\r' || c == '\n') { /* do nothing */ }
  }
  // Read and process response (blocks up to AT_TIMEOUT when idle)
  readATResponse();
  if (atResponseLen > 0) {
    printRawResponse();
    if (jsonReady) parseAndPrintJSON();
  }
  delay(100); // avoid busy loop
}

Hi there,

So I can look, and you probably saw the posts on this as well. Do you have it working in the standard setup (not using the STM32)? Can you make it work that way first, to verify the overall setup? Also, which SenseCraft web site did you use (link)?

Time permitting I can check it out, but it looks like you have the right methodology; maybe a step missing. You'll get it, it works for others, so :+1:

HTH
GL :slight_smile: PJ :v:

@PJ_Glasso Thanks for the prompt reply. The Seeed SenseCraft Vision Workspace link that works fine with the WE2 directly is Seeed SenseCraft Vision Workspace. I have yet to find a command, or any explanation, on how to trigger the WE2 to load the model into memory and connect it with the INVOKE command. I shall continue my search for the proper commands, timing, and communications to work successfully with the SSCMA-Micro firmware of the WE2 when I am unable to use the Seeed Arduino SSCMA library. This has become a struggle, so any fresh troubleshooting suggestions are most welcome! A lighter, more generic version of the Arduino SSCMA library would be nice…


@PJ_Glasso I need help with one aspect of working with the UART communication to the Grove Vision AI V2. The model (a pre-trained model such as "Gesture Detection", "Face Detection", or "Person Detection–Swift YOLO") is not loading into RAM upon power-up or reset of the WE2, it seems.
This can be shown using PuTTY and a USB-to-serial adapter connected to the Tx/Rx pins of the WE2. When I connect to the WE2 via PuTTY and issue an AT+RST\r command, it replies:

{"type": 0, "name": "INIT@STAT?", "code": 0, "data": {"boot_count": 61, "is_ready": 1}}

Which seems to imply the WE2 is ready.

When I then issue an AT+MODEL?\r command via PuTTY to the WE2, it replies as if no model is present, or at least not ready in RAM ("size": 0):

{"type": 0, "name": "MODEL?", "code": 0, "data": {"id": 1, "type": 0, "address": 4194304, "size": 0}}

Reconnecting the WE2 via USB to SenseCraft Workspace shows the model is actually still present and working.
Is some communication other than an AT command via the UART Tx/Rx of the WE2 necessary to initialize the model?
Or is there some critical timing issue I’m not accounting for?
Or is the I2C connection to the WE2 needed?
Or is the USB connection needed by the WE2 SSCMA_Micro firmware?
The WE2 device version is 2025.01.02.
I’m not finding additional troubleshooting avenues to pursue at the moment in any provided documentation or GitHub files.

Hi there,

So, just a hunch, but it looks like the model is still in flash; it's not automatically loaded into RAM for inference just by powering up or resetting via UART. This would explain why the MODEL? command returns size: 0, even though it works via SenseCraft.

A few things to try:

  1. Send a model load command manually after reset, something like:
AT+MODELLOAD=1\r

or:

AT+INFERENCE=1\r

(This is a guess; the exact command depends on the SSCMA_Micro firmware version and the AT command set it supports. Seeed hasn't published a complete AT reference publicly, which is a pain, so check the firmware release notes or sniff the USB connection.)
  2. SenseCraft probably auto-loads the model over USB, and you're bypassing this when using UART directly. That's why it works in SenseCraft but not with raw serial commands.
  3. If you're still seeing size: 0, try:

  • Power-cycling the device completely
  • Then issuing your AT+MODEL? again
  • Followed by a model load command

  4. As of version 2025.01.02, the firmware shouldn't require I2C or USB for model activation, but keep an eye on any hidden dependencies if using advanced features.

Let us know if trying MODELLOAD or similar gets you past the 0-byte issue!

my guess,
HTH
GL :slight_smile: PJ :v:

I'll ask AI, see what I get, and report back. Try the above :+1:

This protocol document proves that you must manually run AT+INVOKE=1 over UART to load the model after boot/reset. Without this step, the module won't load the model into RAM, and AT+MODEL? will show size 0.

So this isn’t a bug — it’s just a required initialization step the SSCMA-Micro firmware expects unless you’re using the Arduino wrapper (which calls it internally).
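
For comparison, this is roughly what that wrapper path looks like on a supported host. A minimal sketch based on the Seeed_Arduino_SSCMA examples (the accessor names and default transport here are from memory of the library examples, so verify against the library headers):

#include <Seeed_Arduino_SSCMA.h>

SSCMA AI;  // SSCMA-Micro client; begin()/invoke() wrap the AT protocol internally

void setup() {
  Serial.begin(115200);
  AI.begin();  // default transport; performs the handshake the raw UART path skips
}

void loop() {
  // invoke() wraps AT+INVOKE and returns CMD_OK (0) on success
  if (!AI.invoke()) {
    for (auto &box : AI.boxes()) {  // parsed detection results
      Serial.print("target="); Serial.print(box.target);
      Serial.print(" score="); Serial.print(box.score);
      Serial.print(" x=");     Serial.print(box.x);
      Serial.print(" y=");     Serial.println(box.y);
    }
  }
  delay(1000);
}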

AI says this too…

Key Commands from the AT Protocol

:small_blue_diamond: AT+RST

Resets the module. You’ve already confirmed this returns:

{"type":0,"name":"INIT@STAT?","code":0,"data":{"boot_count":61,"is_ready":1}}

That means the firmware is alive and responding.


:small_blue_diamond: AT+MODEL?

Queries the loaded model. You saw:

{"type":0,"name":"MODEL?","code":0,"data":{"id":1,"type":0,"address":4194304,"size":0}}

:stop_sign: This means the model is not currently loaded into RAM or initialized for inference.


:small_blue_diamond: AT+INVOKE=<id>

According to the protocol, this is what you need to call:

AT+INVOKE=1

If successful, you’ll get a response like:

{"type":0,"name":"INVOKE","code":0,"data":{}}

And after that, issuing AT+MODEL? again should return a non-zero size, indicating the model is now loaded and active.


:small_blue_diamond: AT+RESULT?

Use this to get the inference results after invoking.


:wrench: Suggested AT Command Flow After Reset

After you connect via UART:

  1. Reset (optional):
AT+RST
  2. Invoke model (load into RAM):
AT+INVOKE=1
  3. Query model info:
AT+MODEL?
  4. Get results (after an image is captured):
AT+RESULT?

If no image has been captured or inference hasn’t run yet, AT+RESULT? might return empty or outdated data.

If you do this sequence:

AT+RST
(wait ~1 second)
AT+INVOKE=1
(wait ~200–500 ms)
AT+MODEL?

Then it should :crossed_fingers: show a valid size, and AT+RESULT? will start working after camera triggers.
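
If you want to script that sequence instead of typing it, here is the same flow as a bare-bones Arduino-style sketch (a sketch only: it assumes the Serial1 @ 921600 wiring from your test program, and the AT+INVOKE=1 / AT+RESULT? syntax is my guess from the protocol doc, so adjust to whatever your firmware actually accepts):

#include <Arduino.h>

#define WE2_SERIAL Serial1  // same wiring as the test program above

// Send one command, then echo whatever the WE2 returns within `window` ms
void sendAndDump(const char* cmd, unsigned long window) {
  WE2_SERIAL.print(cmd);  // at_protocol commands are terminated with '\r'
  unsigned long start = millis();
  while (millis() - start < window) {
    if (WE2_SERIAL.available()) Serial.write(WE2_SERIAL.read());
  }
  Serial.println();
}

void setup() {
  Serial.begin(115200);
  WE2_SERIAL.begin(921600);
  delay(2000);
  sendAndDump("AT+RST\r", 1000);      // reset, then wait ~1 s
  sendAndDump("AT+INVOKE=1\r", 500);  // load/run the model (guessed syntax)
  sendAndDump("AT+MODEL?\r", 500);    // size should now be non-zero if the theory holds
}

void loop() {
  sendAndDump("AT+RESULT?\r", 500);   // poll results once inference is running
  delay(2000);
}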

LMK
HTH
GL :slight_smile: PJ :v:

@PJ_Glasso Troubleshooting continues. Using PuTTY with the WE2, I learned that although AT+MODELS? returns

{"type": 0, "name": "MODELS?", "code": 0, "data": [{"id": 1, "type": 0, "address": 4194304, "size": 0}]},

AT+INVOKE=1,0,1 works with PuTTY!

I've discovered that the MCU I am using (a RAK3172, with an ARM core + SX1262 radio running RAK Wireless RUI firmware) does not transparently pass "AT+" commands via the serial port. This seems to prevent sending AT commands to the Grove Vision AI V2, whether by UART or I2C. As AT commands are required to work with the WE2, I am considering an alternate MCU.
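
That said, one avenue I may still test before switching: the RUI3 documentation describes serial "operating modes", and opening a UART in custom mode is supposed to bypass RUI's AT parser and give raw byte access. A hypothetical sketch (RAK_CUSTOM_MODE and the Serial1-to-WE2 pin mapping are assumptions to verify against RAK's RUI3 API reference):

void setup() {
  Serial.begin(115200);                    // console/debug port stays in AT mode
  // Custom mode (per the RUI3 serial docs) should stop RUI from intercepting
  // "AT+..." traffic on this port so it passes through to the WE2.
  Serial1.begin(921600, RAK_CUSTOM_MODE);  // WE2 port, raw mode (unverified)
}

void loop() {
  Serial1.print("AT+MODELS?\r");           // should now reach the WE2 unparsed
  unsigned long start = millis();
  while (millis() - start < 500) {
    if (Serial1.available()) Serial.write(Serial1.read());
  }
  delay(2000);
}

But first…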

Question: Is it possible to add a user application (e.g. some rules based logic) directly on the WE2, in addition to the Inferencing model?

Hi there,

Really good work shaking it down. I asked that custom-code question in the live stream when the WE2 first showed up.

Answer: No, not directly.

  • The Grove Vision AI V2 is locked to model invocation and fixed AT-style interactions.
  • It does NOT run arbitrary user code unless you recompile the firmware (which is undocumented and unsupported). :dizzy_face:
  • You can load new models via SD card or serial, but not full custom logic beyond what the AT command set exposes.

So for any real-time decision-making, data filtering, or dynamic response, you'll need to offload that to a host MCU, which is where, if you are BOLD and capable (seems you are) :grin:, tomorrow's announcement may make your day.

  • You’re using a RAK3172 (STM32-based) MCU module with SX1262 LoRa radio and RAK’s RUI firmware.
  • The firmware does not transparently forward AT commands (UART/I2C), which blocks communication with the Grove Vision AI V2.
  • You are trying to send AT+MODELS? and AT+INVOKE=1,0,1 to the Vision AI camera.
  • You need a microcontroller that can parse logic AND handle AI accessory communication reliably.

Who Knew the best solution would be delivered on a Platter… Served up by those Awesome Seeedineers

Here is what’s on your menu… :stuck_out_tongue:

1. Dual-core MCU (Arm Cortex-M33 + Network core)

  • One core could handle LoRa or BLE communication, while the other handles AI model logic and AT command communication.
  • Massive step up from single-core STM32 MCUs like in the RAK3172.

2. Generous RAM & Flash

  • Enough space for rules-based logic, custom applications, and maybe even basic ML models locally (Edge Impulse has nRF54 support now).
  • Also large enough to buffer or translate AT command streams between interfaces.

3. No Vendor Lock-in

  • Unlike the RUI firmware from RAK, you are in full control of UART, I2C, GPIO, and timing behavior.
  • This makes it ideal for building bridges between external devices like the Grove Vision AI and a LoRa radio.

4. Flexible Peripheral Routing

  • Multiple UART/I2C/SPI interfaces can be routed freely.
  • Perfect for creating a transparent AT pass-through from USB or BLE to the Vision AI.

5. Works with Zephyr and PlatformIO

  • If you want modern RTOS capabilities with great toolchain support.
  • Debugging, OTA, and peripheral control are miles ahead compared to most AT-centric firmware like RUI.

That is Why the nRF54L15 Sense Would Work :shushing_face:
comes out tomorrow

Welcome to level 5 :raised_hand_with_fingers_splayed:

I asked AI for an architecture: WOW…
You seated? Probably, :grin:

check it out.

[ Grove Vision AI V2 ]
        ↕ (UART AT commands)
[ nRF54L15 Sense ]
        ↕
   [ LoRa or BLE output ]
        ↕
[ Cloud / Dashboard ]

  • The nRF54L15 can poll the camera, parse responses (AT+MODELS?, AT+INVOKE, etc.), and take actions accordingly.
  • If you want to send AI events over BLE, LoRa, or even USB serial, the 54L15 can handle all that in one device.

The nRF54L15 Sense is an ideal upgrade path for this application. It solves the transparency issue, adds real compute power, and removes dependency on vendor AT firmware limitations.

If you're open to leaving the RAK3172 behind and going full custom MCU, this is the move.

Here’s how to build a transparent AT bridge + logic controller using the nRF54L15 Sense with the Grove Vision AI V2 over UART. You’ll get full control of communication and can layer your own logic on top (e.g. rule-based triggers, BLE alerts, LoRa messages, etc.).

Project Goal:

Use the nRF54L15 Sense to:

  1. Talk to the Grove Vision AI V2 via UART using AT commands.
  2. Read & parse responses (e.g., models detected).
  3. Apply basic logic to the results.
  4. Optionally forward results over BLE, USB serial, or LoRa (if added via module).
Device Connection Notes

  • Grove Vision AI V2 UART: TX ↔ RX, RX ↔ TX, GND ↔ GND
  • nRF54L15 Sense UART: use a hardware UART (e.g. UARTE0)
  • Optional USB serial: for debugging or a pass-through console
  • Optional BLE: notify a smartphone or central app

Dependencies

You’ll want to use the nRF Connect SDK (NCS) with Zephyr RTOS:

  • uart_async_api for UART comms
  • console or logging for debug output
  • bt_nus or ble peripheral for BLE if needed
  • k_work for deferred logic handling

Example: AT Command Bridge + Simple Logic

Here’s a simplified Zephyr C example (assumes UARTE0 connected to Grove Vision AI):

// REV 0.1a - AT Bridge & Trigger Logic - PJG + ChatGPT
#include <zephyr/kernel.h>
#include <zephyr/drivers/uart.h>
#include <zephyr/sys/printk.h>
#include <string.h>

#define UART_DEV_NODE DT_NODELABEL(uart0) /* board-specific: pick the UART wired to the WE2 */
const struct device *uart_dev = DEVICE_DT_GET(UART_DEV_NODE);

#define CMD_BUF_SIZE 256
static char uart_rx_buf[CMD_BUF_SIZE];
static int rx_pos = 0;

static void send_at_command(const char *cmd)
{
    /* Poll-based TX keeps us on the interrupt-driven UART API only
     * (mixing uart_tx() from the async API with uart_fifo_read() is
     * not supported by most drivers). */
    for (const char *p = cmd; *p; p++) {
        uart_poll_out(uart_dev, *p);
    }
    uart_poll_out(uart_dev, '\r');
    uart_poll_out(uart_dev, '\n');
    printk("Sent AT cmd: %s\n", cmd);
}

static void process_ai_response(const char *resp)
{
    /* Example logic: check if "person" was detected.
     * NOTE: the SSCMA JSON reports numeric target IDs, so in practice
     * match on the target index of your model's "person" class rather
     * than a literal string. */
    if (strstr(resp, "person")) {
        printk(">>> PERSON DETECTED <<<\n");
        // TODO: trigger BLE alert or LoRa packet here
    }
}

static void uart_cb(const struct device *dev, void *user_data)
{
    uint8_t c;

    /* Required before using the uart_irq_* FIFO functions */
    if (!uart_irq_update(dev) || !uart_irq_rx_ready(dev)) {
        return;
    }
    while (uart_fifo_read(dev, &c, 1)) {
        if (c == '\n' || rx_pos >= CMD_BUF_SIZE - 1) {
            uart_rx_buf[rx_pos] = '\0';
            printk("RX: %s\n", uart_rx_buf);
            process_ai_response(uart_rx_buf);
            rx_pos = 0;
        } else {
            uart_rx_buf[rx_pos++] = c;
        }
    }
}

int main(void)
{
    printk("AT Bridge Booted. UART ready.\n");

    if (!device_is_ready(uart_dev)) {
        printk("UART not ready\n");
        return 0;
    }

    uart_irq_callback_user_data_set(uart_dev, uart_cb, NULL);
    uart_irq_rx_enable(uart_dev);

    k_msleep(500);

    // Start polling the AI module
    send_at_command("AT+MODELS?");
    while (1) {
        send_at_command("AT+INVOKE=1,0,1");
        k_sleep(K_SECONDS(10));
    }
}

Result:

  • UART initializes and communicates with Grove Vision AI.
  • Every 10 seconds, it sends AT+INVOKE to run inference.
  • If "person" is in the response, the MCU prints it — or triggers a response like BLE notify, buzzer, etc.

Optional Logic Layer

Add a logic queue, such as:

// trigger_alert() is a placeholder for your own BLE/LoRa/GPIO action
if (strstr(resp, "car") && !strstr(resp, "person")) {
   // Suspicious vehicle, no person
   trigger_alert();
}

WOWsa, I say it nailed it, but I love the added bonus:

Bonus Features You Can Add:

  • BLE output: use the bt_nus service to notify a mobile device
  • LoRa packet: send a short encoded packet to HomeBase
  • OLED display: show the current detection model name / state
  • Button override: add a physical input to trigger an AI mode switch

What to Prepare

  • :white_check_mark: Flash nRF54L15 with west flash (or CMSIS-DAP via PlatformIO soon)
  • :white_check_mark: Confirm Grove Vision AI is powered & UART wired correctly
  • :white_check_mark: Run PuTTY or serial to watch logs
  • :white_check_mark: Ready to add your BLE/LoRa hooks

Ready , set …GO! :face_with_hand_over_mouth:

HTH
GL :slight_smile: PJ :v:

Like that Home Alone kid, "you hungry and want some more?"

the wiki is LIVE!
Get a serious libation :grin: of your choice, get the space quiet and conducive to learning and taking over the world :v:

GL Mr. Phelps… This tape will self-destruct in 5 seconds. Poof! :sunglasses:

@PJ_Glasso So close! I consider a Nordic dual-core low-power device, PlatformIO, and Zephyr all potential positives!

However, one gotcha for our application is the need for US915 LoRaWAN. The nRF54L15 specifications I see for its radio are all 2.4 GHz, not 915 MHz. Am I missing its support of 915 MHz LoRa over the air?

I am open to your additional recommendation, with or without the RAK3172 as the LoRaWAN “modem” to implement a robust and effective WE2 application.

Hi there,

My thoughts are to use the XIAO SX1262 with the new XIAO

HTH
GL :slight_smile: PJ :+1:

Probably why the RAK device gets any play at all: it includes the radio :v: