SenseCAP A1101 - Convert TensorFlow Lite to a UF2 file

Hello everyone, and thank you in advance for your help. I have a problem configuring the Vision AI SenseCAP A1101 camera. After training, I obtained the file "best_model.tflite", but I am stuck at the step described in the Seeed Studio tutorial, in the section "Can I train an AI model on my PC?", at "Step 9. Convert TensorFlow Lite to a UF2 file":

"UF2 is a file format, developed by Microsoft. Seeed uses this format to convert .tflite to .uf2, allowing tflite files to be stored on the AIoT devices launched by Seeed."

I cannot get that conversion to work. Is there a tool to convert .tflite to .uf2? Is this file extension necessary to be able to load the model into the camera's storage?

Hi there,

I'm curious whether you searched the term "UF2"; if so, there are several results. One I created for the XIAO nRF52840, but you can convert any HEX, TensorFlow Lite, or BIN file to the UF2 format. It's basically a bootloader file format. What is the MCU target? I don't see it mentioned.
Read up on those threads and ask whatever you need. BTW, there are many converters out there as well, even web pages.
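
If you're curious what's inside one, here is a minimal sketch of a single UF2 block, assuming the standard Microsoft layout (512-byte blocks: a 32-byte header, a 476-byte data area, and a 4-byte end magic):

import struct

# Standard UF2 block layout (Microsoft spec): 512 bytes total.
UF2_MAGIC_START0 = 0x0A324655  # "UF2\n"
UF2_MAGIC_START1 = 0x9E5D5157
UF2_MAGIC_END = 0x0AB16F30
FLAG_FAMILY_ID = 0x00002000    # family ID stored in the last header field

def make_uf2_block(payload, target_addr, block_no, num_blocks, family_id):
    # 32-byte header: two magics, flags, target address, payload size,
    # block counters, and the family ID.
    header = struct.pack(
        "<IIIIIIII",
        UF2_MAGIC_START0, UF2_MAGIC_START1,
        FLAG_FAMILY_ID, target_addr,
        len(payload), block_no, num_blocks, family_id,
    )
    data = payload.ljust(476, b"\x00")  # pad the data area to 476 bytes
    return header + data + struct.pack("<I", UF2_MAGIC_END)

A converter like uf2conv.py essentially chops the input file into payload chunks and emits one such block per chunk.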

HTH
GL :slight_smile: PJ :v:

Thank you very much for your interest, PJ Glasso. I am not an expert in the matter, but I have to configure a Seeed Studio camera (SenseCAP A1101 Vision AI Sensor). I have had no problem with the two models already converted to .uf2 that they provide on GitHub, but I want to use my own trained model, and following the Seeed Studio wiki for this camera I get to point 9, which says to convert the .tflite to .uf2. At first I was not able to find the required script, "uf2conv.py"; I then found it on GitHub ("yolov5-swift/uf2conv.py at master · Seeed-Studio/yolov5-swift · GitHub"). It mentions a "-f VISIONAI" option; I tried it and a conversion was achieved, but when loading the result into the camera the process stopped, saying that no device was found.
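For reference, this is the kind of invocation I tried (the file names here are placeholders for my own paths):

import subprocess

# Convert the trained TFLite model to UF2 with yolov5-swift's uf2conv.py.
# "-f VISIONAI" selects the Vision AI target; "-t 1" is, as I understand
# it, the model index on the device. File names are placeholders.
subprocess.run(
    [
        "python", "uf2conv.py",
        "-f", "VISIONAI",
        "-t", "1",
        "-c", "best_model.tflite",
        "-o", "model-1.uf2",
    ],
    check=True,
)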
In short, I need a script or program that can convert the trained model.
Thanks

Hi there,

I tried converting my model and deploying it to the A1101 using the 'uf2conv.py' file you mentioned, and it looks like it's working correctly and shows the recognition box!

I did these steps:

  • Flash it to the A1101.
  • Test the model via the demo link.

It may look a bit strange working with models this way, but you can find your model listed as 'user defined 1' on the demo page.
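
For the flashing step itself, a minimal sketch, assuming the A1101 shows up as a USB mass-storage drive after double-pressing its Boot button (the drive path below is an assumption; adjust it for your OS):

import shutil
from pathlib import Path

# Copy the converted model onto the device's mass-storage drive.
# Double-press the Boot button first so the A1101 mounts as a drive;
# the mount point is an assumption -- adjust for your system.
uf2_file = Path("model-1.uf2")
drive = Path("/Volumes/SENSECAP")  # e.g. a drive letter on Windows

shutil.copy(uf2_file, drive / uf2_file.name)
print("Copy finished; the device reboots and loads the model.")

The drive typically disconnects on its own once the copy completes, since the device reboots into the new model.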

Hi,

I have a UF2 file built from an INT8 TFLite model generated with SSCMA (train.py → quantize.py → export.py, then uf2conv.py from yolov5-swift).

I updated the firmware to 2.0.1 and dropped erase_model.uf2 onto my A1101.

In the app I can see my "user defined 1" in the model list, but when I try to load it I get: "Algorithm Mismatch With the Model".

I don't understand which setting I need to change to make my model compatible with the A1101. Do I need to replace the category names in my coco_annotation.json with numbers?

The model was trained on the coco_mask dataset available in the SSCMA public datasets: Public Datasets | SSCMA. I used these settings for 100 epochs of training: batch=32, workers=4, lr=0.01, img=96x96, classes=2.
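
As a sketch, assuming yolov5-swift keeps the standard YOLOv5 training flags (the dataset YAML name is a placeholder, and the learning rate would normally come from the hyperparameter YAML rather than a command-line flag):

import subprocess

# Hypothetical training call, assuming the standard YOLOv5 flags also
# apply to yolov5-swift; "coco_mask.yaml" is a placeholder dataset config.
subprocess.run(
    [
        "python", "train.py",
        "--img", "96",
        "--batch", "32",
        "--epochs", "100",
        "--workers", "4",
        "--data", "coco_mask.yaml",
    ],
    check=True,
)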

My model needs to use image classification, to return a binary result indicating whether or not a person is wearing a mask. The goal is a return value of 0 or 1.
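
To sanity-check that binary output locally, a minimal sketch with the TFLite interpreter (the model file name is a placeholder, and I assume a two-class softmax head):

import numpy as np
import tensorflow as tf

# Load the quantized classifier, feed one 96x96 input, and reduce the
# two-class output to 0/1 with argmax.
interpreter = tf.lite.Interpreter(model_path="best_model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy input in the tensor's own dtype (int8 for a fully quantized model);
# a real test would load and preprocess an actual image here.
image = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])[0]
print("mask worn:", int(np.argmax(scores)))  # 0 or 1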

Hi there,

Wow, that is awesome. You are steps away from having your own model on there. That is great! I hope Seeed can reply and shed some light on this…
Good luck and keep going, this is good stuff. :+1:

HTH
GL :slight_smile: PJ :v:

Yes PJ, I'm close.

I am also testing "Image detection" to process multiple people with the camera, and I tried another approach with MobileNetV2 and Keras.
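
Roughly what that Keras approach looks like, as a sketch (the classifier head is simplified, and the representative dataset below uses random data where real calibration images should go):

import numpy as np
import tensorflow as tf

# MobileNetV2 backbone at 96x96 with a two-class head, exported as a
# full-integer TFLite model to match the INT8 pipeline.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # mask / no mask
])

def representative_data():
    # Placeholder calibration data -- replace with real images.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("mobilenetv2_int8.tflite", "wb") as f:
    f.write(converter.convert())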

I get results when I process an image (on my local server), but when I load the UF2 file produced by this script, I lose my "user defined 1" entry in the "model" select, even though I keep the same settings for uf2conv.py in my script:

import subprocess

# Same converter settings as before: Vision AI target, model index 1.
command = [
    "python", UF2CONV_SCRIPT,      # path to yolov5-swift's uf2conv.py
    "-f", "VISIONAI",              # target family for the Vision AI module
    "-t", "1",                     # model index on the device
    "-c", TFLITE_MODEL_PATH,       # input .tflite
    "-o", UF2_MODEL_PATH,          # output .uf2
]
subprocess.run(command, check=True)

I also hope Seeed will update the wiki on how to use your own model on the A1101. Some detail about SSCMA and TensorFlow 2.18 would be great.

Hi there,

Excellent work, I’m rooting for it… :grin: :+1:

GL :slight_smile: PJ :v:

Hi,

I managed to resolve my problem.

I got an answer from Seeed support; they confirm we need to continue working with yolov5-swift.

To export as UF2, use the uf2conv.py from this repository: GitHub - Seeed-Studio/sscma-example-vision-ai: Example of Edgelab AI model deployment related to Vision AI
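
A sketch of the call, assuming that repository's converter takes the same style of flags as the yolov5-swift one (the script path and file names are placeholders):

import subprocess

# Assumed invocation -- same style as the yolov5-swift converter; the
# script path and file names below are placeholders.
subprocess.run(
    [
        "python", "uf2conv.py",
        "-t", "1",
        "-c", "best_model_int8.tflite",
        "-o", "model-1.uf2",
    ],
    check=True,
)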

Here is the A1101-compliant documentation: yolov5-swift/notebooks/Google_Colab_Digital_Meter_Example.ipynb at master · Seeed-Studio/yolov5-swift · GitHub