So, I apologize if this has been asked before. I was looking at selene-backend and I didn’t see anything about a Speech-to-Text API. Is that currently not a thing for personal backends or is there something extra we need to do in order to set it up?
Edit: I’m dumb and didn’t read far enough. I saw the part about Google STT API key. Can we use our own STT setup instead? Keeping data private from big tech companies like Google is a large reason for looking into Mycroft.
I just did this recently, but I have to tell you, the accuracy of DeepSpeech is terrible at the moment, basically unusable. For me this is a combination of two things: my Australian accent, and a lack of modelling data for DeepSpeech. The former is here to stay, but the latter will get better with time as more people contribute data to the project. If you’re in any kind of position to encourage others to contribute to the Mozilla Common Voice project, do so:
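For reference, pointing Mycroft at your own STT is done in `mycroft.conf` (e.g. `~/.mycroft/mycroft.conf`). This is a sketch of the shape my config took, assuming you have a DeepSpeech server running locally — the port and exact module name may differ depending on your mycroft-core version and how you launched the server, so double-check against your install:

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}
```

With that in place, audio goes to your local server instead of Google, so nothing leaves your network.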