Just placed my pre-order! I have two very basic inquiries:
Is it possible to upload a generative adversarial network (GAN) model, trained on a custom image set, and have video/audio data processed in real time? I would like to start with 720p video and 44 kHz audio, for a combination of voice alteration, face detection, object recognition, and finally image cropping. Does the SDK allow the use of custom-programmed models?
Another application I would like to explore is semantic segmentation (i.e., separating a live video scene into objects, foreground, background, etc.).
How does the compute power of the Sipeed MAix compare to other deep-learning hardware like GPUs? Do you have a benchmark comparison, e.g. against an NVIDIA gaming GPU?
The wiki page is available now:
http://wiki.seeedstudio.com/Grove_AI_HAT_for_Edge_Computing
As for the camera, you can use the OV2640 and OV5640:
http://seeedstudio.com/OV2640-Fisheye-Camera-p-4048.html
Have a nice day.
Yes, the SDK allows the use of custom-programmed models.
The Kendryte K210 only has 0.3 TOPS of compute power. There's only so much you can do.
The narration in the note you mention is for Arduino-based solutions. I was seeking Raspberry Pi-based solutions, since the HAT is mounted on a Raspberry Pi. Otherwise, wouldn't it be better to leverage the Maixduino for the Arduino-style examples?
I understand that Raspberry Pi Raspbian supports the Arduino IDE, but what about Python (and its rich set of AI/ML libraries) for the AI HAT? Do we need to install the toolchain along the lines of the Maixduino, with some different binaries? Is there any guidance for this type of exercise?
You can refer to
https://project.seeedstudio.com/SeeedStudio/face-count-and-display-using-grove-ai-hat-and-pi-3e100f
Thanks.
I wish you luck. I have been trying since I received my Grove AI HAT to do anything with it. So far, this is what I have learned:
Thanks again for your feedback. I too have learnt the hard way and can understand your predicament.
After reading the Wiki I thought it would be flawless and so patiently spun my wheels trying to improve my self-documentation so that I could blog about it. I was about to place orders for the camera and TFT display but since I have a Maixduino I said to myself that I should do work systematically and not simply splash even nominal sums (camera, TFT) around.
If I had read your note earlier I could have saved myself some time and gone back to work on the Maixduino for now till things are fixed for the AI HAT.
Please do post advice in the future if you get things working. I am not an expert at your level so I need advice as much as I can soak. Many, many thanks for the details.
Tried to follow the instructions you provided at
https://project.seeedstudio.com/SeeedStudio/face-count-and-display-using-grove-ai-hat-and-pi-3e100f
No luck! The install did not do anything, and here is the output for the second variation you suggested:
$ sudo ./face-detected.sh
Anything else that I could be checking, testing, or installing? I would like to put the AI HAT to some use.
As I said before, I managed to build the Kendryte toolchain for the Jetson Nano. Last night I tried building the hello_world example in the standalone SDK and was successful. Haven't tested it though.
As for the Pi 3: hours of trial and error to get the Kendryte toolchain to build.
I was eventually successful. Install and build gcc-5, g++-5, and gcc-5-base.
After doing the configure dance, make it with sudo -H make CC=gcc-5 CXX=g++-5 -j2
You can add more cores if you want but I was having thermal issues so I went for two cores.
I haven’t had a chance to test the standalone SDK yet. I’m phone posting so I hope the make came out right on here.
GCC 5 is the earliest version that worked for me (the build needs C++11 support). GCC 6 was segfaulting. Clang tripped over a file that didn't have a default method or something defined. I don't really remember.
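For anyone repeating this on a Pi 3, the steps above sketch out roughly as follows. The repo URL, dependency list, and configure invocation are my assumptions based on the upstream kendryte-gnu-toolchain project, not taken verbatim from this thread, so check the project's README before running.

```shell
# Rough sketch of the Pi 3 toolchain build described above.
# Assumptions: upstream repo URL, prerequisite packages, and
# configure defaults; verify against the project's README.
build_kendryte_toolchain() {
    # GCC 5 as the host compiler (GCC 6 segfaulted, clang failed).
    sudo apt-get install -y gcc-5 g++-5 gcc-5-base git texinfo \
        autoconf automake libmpc-dev libmpfr-dev libgmp-dev

    git clone --recursive https://github.com/kendryte/kendryte-gnu-toolchain
    cd kendryte-gnu-toolchain || return 1

    # "The configure dance" -- prefix matches where the toolchain
    # is copied from in the later posts.
    ./configure --prefix=/opt/kendryte-gnu-toolchain

    # -j2: more cores build faster, but a Pi 3 hits thermal issues.
    sudo -H make CC=gcc-5 CXX=g++-5 -j2
}
```

Expect this to take hours on a Pi 3; a heatsink or fan helps if you raise the job count.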
I’ve had some progress.
I grabbed the files which the board definition downloads from the ~/.arduino15 directory on my PC, removed the kendryte-gnu-toolchain files, and copied the rest over to my Pi 3.
I took the toolchain which was built in /opt/kendryte-gnu-toolchain/ and copied it to ~/.arduino15/packages/Seeeduino/tools/riscv64-unknown-elf-gcc/8.2.0/
It is not enough to symlink because arduino seems to add things to the toolchain directories.
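In shell terms, that copy looks something like the sketch below. The source and destination paths are examples from my setup; the Seeeduino package name and the 8.2.0 version directory will differ if your board package differs.

```shell
# Copy (not symlink) a locally built toolchain into the Arduino tree.
# Paths are examples; adjust the package name and version to yours.
install_toolchain() {
    src="$1"   # e.g. /opt/kendryte-gnu-toolchain
    dest="$2"  # e.g. "$HOME/.arduino15/packages/Seeeduino/tools/riscv64-unknown-elf-gcc/8.2.0"
    mkdir -p "$dest"
    # A full copy is needed: the IDE writes files into the toolchain
    # directories, which a symlink back to /opt would not survive.
    cp -a "$src/." "$dest/"
}
```

Usage: `install_toolchain /opt/kendryte-gnu-toolchain "$HOME/.arduino15/packages/Seeeduino/tools/riscv64-unknown-elf-gcc/8.2.0"`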
The Arduino IDE complains about it not being a supported platform, but it installed anyway according to the terminal output.
Using the Arduino IDE I was able to build some examples using the k210 toolchain. Unfortunately I haven’t got k-flash working yet. Java gets a whole bunch of exceptions.
So it’s not a magic bullet, but it is progress.
I’m working on it.
So far I have a working JSON for armhf and aarch64. I can make an archive of my toolchain and manually dump it to the correct directory in the .arduino15 tree, but there's something that doesn't work right with the special archive I made for the .json.
Hopefully I’ll get that worked out.
After using my JSON, then deleting the faulty toolchain and dumping the original in the same place, it works. I just built the blink sketch for the Grove AI HAT on my Pi 3 and uploaded it! So there is hope.
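For context, the archive entry the IDE checks lives under "tools" in the board-package JSON. Below is a minimal, hypothetical sketch of that shape; the URL, checksum, and size are placeholders. The IDE verifies the downloaded archive against the checksum and size fields exactly, which is a likely culprit when a hand-rolled archive is rejected.

```json
{
  "packages": [{
    "name": "Seeeduino",
    "tools": [{
      "name": "riscv64-unknown-elf-gcc",
      "version": "8.2.0",
      "systems": [{
        "host": "arm-linux-gnueabihf",
        "url": "https://example.com/kendryte-toolchain-armhf.tar.gz",
        "archiveFileName": "kendryte-toolchain-armhf.tar.gz",
        "checksum": "SHA-256:0000000000000000000000000000000000000000000000000000000000000000",
        "size": "123456789"
      }]
    }]
  }]
}
```

A second entry with host "aarch64-linux-gnu" would cover arm64 boards like the Jetson Nano.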
edit: I ran the character analysis example sketch and it worked fine too.
edit again: I can't post links yet. If you feel like a gamble, go to GitHub, search for user "experimentech" and navigate to "kendryte-toolchain-arm".
Thanks for sharing in this thread! Keep it up!
I haven't uploaded it yet, but I succeeded in building nncase (ncc) for arm64. I built it on my Jetson Nano. I did this because I wanted a single solution for training, converting, and flashing neural nets for the K210. At least I know it is possible to build nncase on other platforms now.
I can say it needs a few things installed from source to work:
Two things from QuantStack: xtl and xtensor, from GitHub.
Another library called clipp (muellan/clipp on GitHub).
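Assuming default CMake installs to /usr/local, fetching and installing those three header-only libraries looks roughly like this. The helper function is my own invention, not from the original post; note that the QuantStack repos (xtl, xtensor) have since moved to the xtensor-stack organization on GitHub.

```shell
# Sketch: install nncase's from-source dependencies on arm64.
# install_from_github is a hypothetical helper; install prefixes
# and CMake options are defaults, not from the original post.
install_from_github() {
    repo="$1"                      # e.g. xtensor-stack/xtl
    name=$(basename "$repo")
    git clone --depth 1 "https://github.com/$repo" "$name"
    cmake -S "$name" -B "$name/build"
    sudo cmake --build "$name/build" --target install
}

install_nncase_deps() {
    install_from_github xtensor-stack/xtl      # was QuantStack/xtl
    install_from_github xtensor-stack/xtensor  # depends on xtl
    install_from_github muellan/clipp          # CLI parsing used by ncc
}
```

Install xtl before xtensor, since xtensor's CMake config looks for it.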