I have some confusion regarding the use of the XIAO ESP32-S3 Sense on the SenseCraft AI platform. After deploying a model to the board, does it continue working when it is not connected to the platform? I haven’t been able to make it work.
I have had no issues testing pre-trained models or the ones I trained myself while the board is connected. Everything works correctly, and the board flashes without errors. In the various tests I conducted, I set the built-in LED to turn on as a trigger, and it worked without errors.
However, if I disconnect the board and power it through the pins or the USB connector, I no longer see the LED turning on as it did before, as if the model were not running. Am I doing something wrong?
I have seen some videos showing the Grove Vision AI 2 board working standalone, but I have never seen an ESP32-S3 Sense running without being connected.
Just curious: do you have a “while serial” conditional in your code that waits for the USB to connect? Comment out that statement and try again…
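For reference, the pattern being asked about looks like this in a typical Arduino sketch (a minimal sketch for illustration, not the poster's actual code):

```cpp
void setup() {
  Serial.begin(115200);

  // This line blocks until a USB serial connection is opened, so on
  // battery power the sketch never reaches loop(). Comment it out
  // for standalone operation:
  // while (!Serial) { delay(10); }
}

void loop() {
  // main code runs here
}
```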
I don’t have any code, just the ESP32-S3 Sense with the model loaded in SenseCraft. What I expected was to flash the model, disconnect the board from the PC running SenseCraft, insert it into another board, and have the model continue working just as it did when I was using SenseCraft.
For example, training a model in SenseCraft to recognize my face and activate a GPIO pin when it detects it, testing it, loading it onto the ESP32-S3 Sense, connecting the ESP32-S3 to another board with that GPIO controlling a relay, and having the relay activate when I bring my face close to the camera.
Is this how it should work, or am I misunderstanding something?
Meanwhile, I ran the same tests with a Grove Vision AI 2 module and a XIAO RP2040. Using the Arduino IDE, I installed the Seeed_Arduino_SSCMA and ArduinoJson libraries according to the instructions and modified an example program to control the XIAO’s RGB LED based on the data returned by the model. The model recognizes faces, and I understand that it runs on the Vision module, while the RP2040 only receives the results and controls the LED.
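For readers following along, the setup described above can be sketched roughly like this, assuming the Seeed_Arduino_SSCMA I2C API (begin/invoke/boxes) and the XIAO RP2040's onboard WS2812 LED driven with Adafruit_NeoPixel. The pin numbers, score threshold, and colors are assumptions, not the poster's actual code:

```cpp
#include <Seeed_Arduino_SSCMA.h>   // Grove Vision AI 2 driver (I2C by default)
#include <Adafruit_NeoPixel.h>     // driver for the XIAO RP2040's onboard WS2812

// Pin numbers assumed for the XIAO RP2040; check the board schematic.
const int NEOPIXEL_POWER = 11;  // enables power to the onboard RGB LED
const int NEOPIXEL_PIN   = 12;  // WS2812 data pin

SSCMA AI;
Adafruit_NeoPixel pixel(1, NEOPIXEL_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  pinMode(NEOPIXEL_POWER, OUTPUT);
  digitalWrite(NEOPIXEL_POWER, HIGH);
  pixel.begin();
  AI.begin();  // Grove Vision AI 2 over I2C
}

void loop() {
  // invoke() runs one inference on the Vision module; returns 0 on success
  if (!AI.invoke()) {
    bool faceSeen = false;
    for (size_t i = 0; i < AI.boxes().size(); i++) {
      if (AI.boxes()[i].score > 70) {  // confidence threshold is an assumption
        faceSeen = true;
      }
    }
    pixel.setPixelColor(0, faceSeen ? pixel.Color(0, 80, 0)    // green: recognized
                                    : pixel.Color(80, 0, 0));  // red: not recognized
    pixel.show();
  }
  delay(50);
}
```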
In this video, you can see how it works (when the camera moves out of frame, it’s because it’s pointing at my face). The green LED means it recognizes me, and the red LED means it doesn’t. Both boards are powered by a battery.
So, the question is: how can I do something similar with the ESP32-S3 Sense? (even if it’s a regular LED and not an RGB one)
Yes, I believe you are confused or mixing up two things.
There needs to be some code, flashed locally, for the XIAO to run that manipulates the GPIOs (the RGB LED included).
Post the code…
Start there. If I can compile it for an S3 and it runs… which BSP are you (or the instructions) using? I have a suspicion that could be at issue.
What do you mean here?
First: your original model (face detection to GPIO) works on the web page while connected to the web, or just over USB to the computer.
And you want to run SenseCraft locally on a XIAO, use the face detection to drive a GPIO locally, with that GPIO connected to a second XIAO with a relay?
OK, also give a link to what you are following… we can help figure it out.
My confusion may come from the fact that I expect the model to run standalone on the ESP32-S3 Sense. However, based on my tests, it seems that this is not the case: when using the ESP32-S3 Sense, the inference is performed in SenseCraft and not on the board itself. That’s why it stops working when I disconnect the board from the computer and power it with a battery.
Hi!
I’m still running tests to better understand how it works. This time, I used output via MQTT, as I found the following notice while carefully reading the documentation on the Wiki:
Caution: Keep in mind that the GPIO output functionality relies on the web-based connection between the SenseCraft AI platform and your XIAO ESP32S3 Sense board. If the connection is lost or interrupted, the GPIO level changing feature based on model detection will stop working. Ensure a stable connection throughout the process.
I was able to configure the EMQX broker as explained in the same Wiki and also managed to send requests and receive responses through the MQTTX client. To start sending inference results, the command AT+INVOKE=-1,0,0 must be sent.
With the ESP32-S3 Sense connected to SenseCraft, it works perfectly. If I close SenseCraft, it stops sending results, but if I send the command AT+INVOKE=-1,0,0 again, it starts sending responses once more. However, if I disconnect the board and power it with a battery, it reconnects to the MQTT broker and continues responding to my queries, but even if I send AT+INVOKE=-1,0,0, it no longer sends inference results.
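As a sketch of how this test can be reproduced from a PC with the Eclipse Mosquitto CLI clients instead of MQTTX. The public broker matches the wiki's EMQX example, but the sscma/v0/&lt;client_id&gt;/tx and /rx topic pattern is an assumption taken from the SSCMA documentation; substitute the client ID that SenseCraft shows for your board:

```shell
# Watch inference results published by the board:
mosquitto_sub -h broker.emqx.io -t "sscma/v0/<client_id>/tx" -v &

# Ask the board to start continuous inference (-1 = run until stopped):
mosquitto_pub -h broker.emqx.io -t "sscma/v0/<client_id>/rx" \
  -m 'AT+INVOKE=-1,0,0'
```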
I will keep testing, but this seems to indicate that the model does work on the ESP32-S3 Sense, but when power is removed, some information or configuration stored in RAM is lost, and the model no longer runs.
I agree it would be GREAT if the model ran locally. I checked out the raw camera-based face detection demo and its code, which detects a face and draws a box around it (as-is). It’s the demo for the S3 camera; the wiki has more on it as well.
After reading a lot about SSCMA-Micro (the firmware loaded on the ESP32-S3 Sense), everything indicates that the models reside in the board’s Flash and run locally. However, it seems that the firmware is not designed for the board to function standalone, and even though the model remains stored, some configurations are kept in RAM and are lost when power is removed.
The last test I did was using the board “as a sensor” over UART, as indicated in the Wiki (or at least I tried). I flashed a model onto the Sense and added a XIAO C3 running the Seeed_Arduino_SSCMA library with the same program from the Wiki.
I made the TX/RX connections as shown, connected both GNDs, and plugged the USB-C into the C3 (and the Arduino IDE). Additionally, I connected the 5V pin of the C3 to the 5V pin of the Sense to power it.
I ran the program in the IDE, but no output appeared in the serial monitor. Apparently, when connected this way, the Sense also does not function standalone.
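For comparison, the C3 side of that wiki setup looks roughly like this. The UART instance (it may be Serial1 depending on the board definition and core version) and the exact Seeed_Arduino_SSCMA begin() overload are assumptions to double-check against the library headers:

```cpp
#include <Seeed_Arduino_SSCMA.h>
#include <HardwareSerial.h>

// UART0 of the ESP32-C3. The C3's TX/RX pins must be crossed to the
// Sense board's RX/TX, with a common GND; the library sets the baud rate.
HardwareSerial atSerial(0);

SSCMA AI;

void setup() {
  Serial.begin(9600);    // USB serial monitor
  AI.begin(&atSerial);   // talk to the Sense over UART instead of I2C
}

void loop() {
  // If nothing ever prints here, the AT link is down: check the TX/RX
  // crossing, the shared GND, and that the Sense has the SSCMA firmware.
  if (!AI.invoke()) {
    Serial.print("Boxes: ");
    Serial.println(AI.boxes().size());
    for (size_t i = 0; i < AI.boxes().size(); i++) {
      Serial.print("  target=");
      Serial.print(AI.boxes()[i].target);
      Serial.print(" score=");
      Serial.println(AI.boxes()[i].score);
    }
  }
  delay(100);
}
```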
I would try the debugger if you can set that up.
Everyone at Seeed is on vacation, so no input from them.
I’m wondering if the models will run local on the reCamera?