I use Mycroft on a PI3.
When I say the wake word, and I can see in the CLI that it was recognized, it takes about 1.3 seconds until I hear the sound over my 3.5mm headphones telling me that Mycroft is listening. In some videos I can see it reacting faster.
Can someone tell me whether the Pi 3 is the bottleneck, or what I can improve?
but this takes ages (still installing)… I already have a working API key, since I use it with my OpenHAB system.
By the way, is it possible to send the text recognized by Mycroft to a URL? I'm thinking of this scenario: Mycroft passes the recognized text to the REST URL of OpenHAB, which then decides via a rule whether or not to play the text via TTS (OpenHAB's own) to the PulseAudio speakers.
(I know there is an OpenHAB skill, but I would like to have the text recognized by Mycroft available as text in an OpenHAB rule.)
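One way to get at the recognized text without writing a full skill would be to listen on the Mycroft messagebus for `recognizer_loop:utterance` messages and forward them to an OpenHAB String item. A minimal sketch, assuming the standard messagebus message format and a hypothetical OpenHAB item called `MycroftUtterance` (adjust the URL and item name to your setup):

```python
import json

# Hypothetical OpenHAB endpoint: posting plain text to a String item.
OPENHAB_URL = "http://openhab.local:8080/rest/items/MycroftUtterance"


def extract_utterance(raw_message: str):
    """Pull the recognized text out of a Mycroft messagebus message.

    Mycroft emits 'recognizer_loop:utterance' messages whose data
    contains a list of candidate utterances; the first entry is the
    best match. Returns None for any other message type.
    """
    msg = json.loads(raw_message)
    if msg.get("type") != "recognizer_loop:utterance":
        return None
    utterances = msg.get("data", {}).get("utterances", [])
    return utterances[0] if utterances else None


def forward_to_openhab(raw_message: str):
    """POST the recognized text as plain text to the OpenHAB item."""
    import requests  # pip install requests
    text = extract_utterance(raw_message)
    if text:
        requests.post(OPENHAB_URL, data=text.encode("utf-8"),
                      headers={"Content-Type": "text/plain"})
```

You could hook `forward_to_openhab` up to the messagebus websocket (by default `ws://localhost:8181/core`) with a websocket client, and then write a normal OpenHAB rule that triggers on the item receiving an update.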
This is a great tip on the time-based scheduler. Will add it to the docs.
Yeah, the Mark II prototypes are using the Google Cloud Streaming STT. There is also built-in support for DeepSpeech streaming STT, and the new IBM Watson service provides streaming STT, but Mycroft Core hasn't yet been updated to support it.
For every answer, Mycroft generates a new MP3 that is then played through the PulseAudio server.
For example, mplayer needs around 2 seconds to open the connection to the PulseAudio server and then play the MP3. How can we improve this?
One way would be to keep a stream open the whole time Mycroft is running, playing an inaudible sound. Then, once there is a new answer to play through the speakers, the connection is already up and running and the new MP3 can be injected into the same stream…
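The "keep the stream warm" idea above could be sketched roughly like this: keep a single `pacat` process open for the lifetime of Mycroft, feed it PCM silence while idle, and push decoded answer audio into the same pipe when it arrives. This is only a sketch under stated assumptions: it assumes PulseAudio's `pacat` is installed, and that answers have already been decoded from MP3 to raw signed 16-bit PCM (e.g. with `mpg123 -s answer.mp3`):

```python
import queue
import subprocess
import threading

RATE, CHANNELS, WIDTH = 22050, 1, 2   # 16-bit mono PCM
CHUNK = RATE * CHANNELS * WIDTH // 10  # bytes per 100 ms of audio


def silence(n_bytes: int) -> bytes:
    """n_bytes of signed 16-bit PCM silence (all-zero samples)."""
    return b"\x00" * n_bytes


class PersistentStream:
    """One long-lived PulseAudio playback stream.

    A background thread writes either queued answer audio or silence,
    so the connection to the PulseAudio server is never torn down and
    there is no per-answer setup cost.
    """

    def __init__(self):
        self.q = queue.Queue()
        self.proc = subprocess.Popen(
            ["pacat", "--raw", f"--rate={RATE}",
             f"--channels={CHANNELS}", "--format=s16le"],
            stdin=subprocess.PIPE)
        threading.Thread(target=self._pump, daemon=True).start()

    def _pump(self):
        while True:
            try:
                data = self.q.get(timeout=0.05)
            except queue.Empty:
                data = silence(CHUNK)  # keep the stream open while idle
            self.proc.stdin.write(data)
            self.proc.stdin.flush()

    def play(self, pcm: bytes):
        """Queue raw PCM (decoded answer audio) for playback."""
        self.q.put(pcm)
```

Whether injecting silence is acceptable depends on the sink: it keeps the latency down but also keeps the audio device permanently active, which may block other applications or prevent the device from sleeping.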