On-device STT support for Mycroft

Hello, what is Mycroft AI’s plan to support on-device (offline) STT? I read that Mycroft is teaming up with Mozilla on DeepSpeech, which isn’t quite ready for production use. So, in the absence of DeepSpeech, Google STT is being used as the STT component of Mycroft. In a privacy-preserving use case where users don’t want their speech sent to any server for transcription, Mycroft fails to preserve users’ privacy.

Does the roadmap include plans to support offline/on-device STT, where none of the user’s voice data is transferred to any server?

Thanks!

With the release of DeepSpeech 0.6 there is also support for TensorFlow Lite, which allows “realtime inference” on a Raspberry Pi 3/4. And Mycroft is already prepared to use a (local) DeepSpeech server.
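For anyone who wants to try it, here is a rough sketch of fully offline transcription with the `deepspeech` Python package and the 0.6 TFLite model. The model filename and the exact argument lists are assumptions based on the 0.6 release and changed in later versions, so check the docs for whatever version you install:

```python
# Sketch: offline STT with the DeepSpeech 0.6 Python package and the TFLite model.
# Exact constructor/stt() signatures are version-specific (assumed 0.6 here).
import wave
import numpy as np
from deepspeech import Model

MODEL_PATH = "deepspeech-0.6.0-models/output_graph.tflite"  # from the 0.6 model release (assumed path)
BEAM_WIDTH = 500  # 0.6 still takes a beam width at construction time

model = Model(MODEL_PATH, BEAM_WIDTH)

# The released models expect 16 kHz, 16-bit mono PCM audio.
with wave.open("utterance.wav", "rb") as wav:
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)
    sample_rate = wav.getframerate()

print(model.stt(audio, sample_rate))  # 0.6 signature; newer releases drop the sample-rate argument
```

Nothing here leaves the device; the model file and the audio stay local, which is the whole point of the TFLite build on a Pi 3/4.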

For me, the major roadblock to using DeepSpeech is that there is not yet a DeepSpeech model with a word error rate low enough for real-world use.


I agree with you on this. I actually ran on-device inference with DeepSpeech on Android yesterday, and I must say it is painfully slow and fantastically inaccurate, but that’s understandable. It was only released in v0.6 and is still under active development. It can only get better with time.
