I was wondering when the cloud actually comes into play? I have mentioned that I watched the video about the architecture by Steve Penrod, but I am still a bit confused.
Which components of the architecture interact with the cloud, and how?
By default the big thing that interacts with the cloud is Speech to Text (STT). Other things that also do are Skill Settings and Device Configuration (settings can be changed on the home.mycroft.ai website).
There are also a couple of skills that use various cloud services: weather (Open Weather Map) and general queries (Wolfram Alpha).
Most of these things can be disabled: a local STT provider can be set up using DeepSpeech or Kaldi, the weather / Wolfram skills can be disabled, etc.
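For what it's worth, pointing Mycroft at a local STT server is just a change in mycroft.conf. A rough sketch (the module name and URI below are illustrative, they depend on which STT backend you actually run, so check the docs for your plugin):

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}
```

After editing the config you'd restart the mycroft services for it to take effect.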
I am coming back to this after some time @forslund, but I was wondering if the cloud usage is symmetric. If the cloud is used for STT, is the TTS component also using the cloud?
I forgot to mention my thanks last time so thank you for your previous answer!
If you use the voice British Male, it is a local mimic voice, so TTS is done on your device. The other voices are generated in the cloud: the American Male is mimic2 and is generated on the Mycroft backend, and the Google voices are…made by Google.
It is also possible to configure other TTS engines, so if you run a TTS locally, you can set up Mycroft to use that.
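To make that concrete, selecting the local mimic voice in mycroft.conf looks roughly like this (a sketch; "ap" is the British Male voice, and the exact keys for other local TTS engines will differ, so consult the docs for the engine you use):

```json
{
  "tts": {
    "module": "mimic",
    "mimic": {
      "voice": "ap"
    }
  }
}
```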