Would it be possible to train a model on a small dataset containing only the commands applicable to an app, and then run it offline on a mobile device as an inference API?
It would convert speech to text and then extract the intents and parameters.
With such a limited command set the model should stay small and inference should be fast, right?
Some sample utterances:
Show me all customers of today
Create a registration of 10 minutes
Update status of customer 1
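To make the question concrete, here is a minimal rule-based sketch (not a trained model) of the kind of intent/parameter output such a pipeline would produce for the sample utterances above. The intent names and slot names are assumptions for illustration; a real solution would replace the regexes with a small trained classifier plus slot extractor.

```python
import re

# Hypothetical intent patterns covering the sample utterances.
# Intent and slot names are illustrative, not from any real framework.
PATTERNS = [
    ("show_customers",
     re.compile(r"show me all customers of (?P<period>\w+)", re.I)),
    ("create_registration",
     re.compile(r"create a registration of (?P<minutes>\d+) minutes?", re.I)),
    ("update_customer_status",
     re.compile(r"update status of customer (?P<customer_id>\d+)", re.I)),
]

def parse(utterance):
    """Return (intent, params) for a recognized utterance, else (None, {})."""
    for intent, pattern in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return None, {}

print(parse("Show me all customers of today"))
# ('show_customers', {'period': 'today'})
print(parse("Create a registration of 10 minutes"))
# ('create_registration', {'minutes': '10'})
print(parse("Update status of customer 1"))
# ('update_customer_status', {'customer_id': '1'})
```

Because the output is just an intent label plus a small parameter dictionary, a trained model only needs to reproduce this mapping for the app's fixed command set, which is what keeps it small enough for on-device inference.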