How to run custom model inference with non-image input on Grove Vision AI Module v2?

I want to run a heavy computer vision neural network as a sequential pipeline across a group of Grove Vision AI Module v2 boards, by splitting the network into a set of smaller models so that each module runs only a few of the original layers.
I plan to connect the Grove Vision AI Module v2 boards to a XIAO over I2C in a master/multi-slave topology for communication. I will then build custom TensorFlow Lite models, each containing a contiguous slice of the original model's layers, so that a given model processes the output of the previous vision module and feeds its own output to the next one (rough sketch below).
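For reference, this is roughly how I intend to split the network and convert each slice to TensorFlow Lite. It is only a sketch assuming a linear Keras model: the file name, layer indices, and input shapes are placeholders, and the int8 quantization details are left out here.

```python
# Rough sketch: split the original network into per-module sub-models
# and convert each slice to TensorFlow Lite.
# File name, layer indices, and shapes are placeholders.
import tensorflow as tf

original = tf.keras.models.load_model("heavy_model.h5")  # the full network

def slice_to_submodel(layers, input_shape):
    """Re-wire a contiguous slice of layers into a standalone model."""
    inp = tf.keras.Input(shape=input_shape)
    x = inp
    for layer in layers:
        x = layer(x)  # reuses the already-trained weights
    return tf.keras.Model(inp, x)

# The first sub-model takes the camera image; later ones take the
# intermediate feature matrices produced by the previous module.
sub_a = slice_to_submodel(original.layers[1:5], input_shape=(96, 96, 3))
sub_b = slice_to_submodel(original.layers[5:9], input_shape=(24, 24, 16))

for name, sub in [("sub_a", sub_a), ("sub_b", sub_b)]:
    converter = tf.lite.TFLiteConverter.from_keras_model(sub)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Full int8 quantization with a representative dataset would go here.
    with open(f"{name}.tflite", "wb") as f:
        f.write(converter.convert())
```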
The problem I'm facing is that these TensorFlow Lite models take matrices (intermediate feature tensors) as input rather than images, except for the first model in the pipeline. Whenever I try to flash a vision module with a model that doesn't take an image as input, it doesn't work. A minimal example of the kind of model that fails for me is below.
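This is a simplified second-stage model of the sort I'm trying to flash; the layers, shapes, and random representative dataset are placeholders, and I quantize to full int8 since, as far as I understand, the module expects fully quantized models.

```python
# Minimal example of a non-image-input model that fails on the module
# after flashing. Shapes and layers are simplified placeholders.
import numpy as np
import tensorflow as tf

# Input is a 24x24x16 feature matrix from the previous module, not an image.
inp = tf.keras.Input(shape=(24, 24, 16))
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inp)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
out = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inp, out)

def representative_data():
    # Random data stands in for real intermediate features.
    for _ in range(100):
        yield [np.random.rand(1, 24, 24, 16).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("stage2_matrix_input.tflite", "wb") as f:
    f.write(converter.convert())
```

Is there a supported way to run inference on a model like this, whose input is a plain tensor rather than a camera frame, and to feed that input over I2C from the XIAO?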