Grove Vision AI V2 - getting images from the camera

Hi,

I have a Grove Vision AI V2 board with an OV5647 camera and the bundled XIAO ESP32C3. I would like to collect image data for training a new neural network in a remote, Wi-Fi-accessible location.

Is it possible to transfer live image data from the camera attached to the Grove Vision AI V2 to the XIAO ESP32C3? I’d like to collect training images; once the data is available to the ESP32C3, I know how to upload those to a server where I can later label and train on the dataset.

If this is possible, what API calls can I use to transfer images to the ESP32C3?

Thanks,

Jared

Hello, it seems that the Grove Vision AI V2 only sends image data to a computer through the USB port, where it can be viewed on the serial monitor after conversion. To get it to the ESP32 board instead, you would have to tap the serial signal before it reaches the USB-serial chip, but the bit rate may be too high for the ESP32, and you would need to write a program to process the image frames. I don't think it is feasible.

Hi there,
I would agree with SeeedFrank. Perhaps you could describe in more detail what you are trying to achieve.
A XIAO ESP32C3 mounted on the Grove Expansion Base, fitted with the additional flash chip, could perhaps be used for frame buffering via SPI master/slave and DMA? Maybe?
Just hip-shooting, so please give more info.
HTH
GL :slight_smile: PJ :v:

Actually, I think I’ve solved my problem!

By setting up an Arduino program on the XIAO ESP32C3 to communicate with the Grove Vision AI V2 over I2C, I can send commands that get a (processed) image back:

SEND: AT+SAMPLE=1?
RECEIVE: 
{"type": 1, "name": "SAMPLE", "code": 0, "data": {"count": 1, "image": "/9j/4AAQSkZJRgABAQEASABIAAD/ ... "}}

This gives me a base64-encoded 240x240x3 JPEG image from the camera on the XIAO board, which I can then upload to a web server. It's not full 1080p@30fps, but in my application I'm looking to collect images for training over the course of minutes (or hours).
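For anyone following along, here is a minimal sketch of how that reply could be handled once it reaches the ESP32C3 or a host machine (Python shown for readability; the function name and the stub payload below are my own illustrations, not part of the Grove Vision AI firmware):

```python
import base64
import json

def decode_sample_reply(reply: str) -> bytes:
    """Parse the JSON reply to AT+SAMPLE and return the raw JPEG bytes."""
    msg = json.loads(reply)
    if msg.get("name") != "SAMPLE" or msg.get("code") != 0:
        raise ValueError(f"unexpected reply: {msg}")
    jpeg = base64.b64decode(msg["data"]["image"])
    # A valid JPEG begins with the SOI marker FF D8.
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("decoded payload is not a JPEG")
    return jpeg

# Stub payload for illustration (a real reply carries a full 240x240 JPEG):
stub = json.dumps({
    "type": 1, "name": "SAMPLE", "code": 0,
    "data": {"count": 1,
             "image": base64.b64encode(b"\xff\xd8\xff\xe0fake").decode()},
})
```

From there the returned bytes can be written straight to a `.jpg` file or POSTed to a collection server, since the base64 decoding has already been done.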

I’m hoping this is close enough to what gets passed from the image sensor to a trained neural network on the U55.

Jared

Hi there,
Nice work… That is really cool, and it's like a peripheral at 30 FPS? Wow, that's not bad at all.
GL :slight_smile: PJ :+1:

Hi PJ,

I doubt it's 30 FPS, but it's certainly good enough for collecting images for training. My first goal is to collect and label a large number of sample images over the course of several days, train a new YOLO neural network on them, and then upload that trained network to the Grove Vision AI V2.

Later on, inference can run at whatever frame rate the Ethos-U55 can sustain. 30 FPS would be neat, but I’m planning to track chickens while laying eggs - they tend to stay put while doing that - so even one inference per minute would be interesting for me.

Jared

Hi there,
Ah, sweet! I meant 3 FPS :face_with_peeking_eye: LOL, although 30 would be GREAT!
Sounds like a cool project. I like those blue eggs :grin:
GL :slight_smile: PJ :v:

My objective is similar: transfer live image data from the camera to the ESP32C3 for further processing and eventual upload to a remote server. I'm open to unconventional methods of data transfer. Has anyone experimented with alternative approaches or custom APIs to achieve this kind of image transfer?