Easiest way to use Mycroft completely offline


#1

I have backed the new Mycroft and would ideally like to use it completely offline.
I have a central home automation server that fetches all the important data from the sensors and APIs that should be controlled by Mycroft.

However, I want to avoid having my speech sent to a server outside my controlled home subnet.
So the question is: what is the easiest way to make Mycroft work offline (without an extra server, or with a self-hosted one)?


#2

Hi there @skeltob, this is one of our most requested features - completely offline use. Unfortunately we don’t have good documentation on how to do this at the moment. I do know that some of our Community members have done this in the past, including @Jarbas_Ai, and they may have some guidance.


#3

I’ve been waiting for a year on this and it still has not happened, which is a complete shame considering the recent post about Google, Alexa and Siri:
Alexa_Siri_Google Hidden command attacks


#4

Are there plans to implement such functionality in the future (even if it were an optional setting), @KathyReid?


#5

It’s definitely something we want to do, @gregory.opera, however it requires a bit of work on our side. Our existing home.mycroft.ai platform is scaled to support tens of thousands of users and runs across several virtual hosts, so it is probably not all that usable as a local / personal backend. We need to work on scaling that down.

The other layers to this problem are:

  • Speech to text - really, this is the biggest blocker at the moment. Until we can get DeepSpeech to a point where it can run (or at least a vocabulary subset can run) on an embedded device, we’re going to be stuck with cloud-based STT, irrespective of which cloud it runs on. There have been some substantive efforts by the DeepSpeech community toward this objective.

  • Skill support - most Skills need some form of internet connectivity, as they’re connecting to third-party APIs.

  • Configuration settings - at the moment, configuration of Devices is done via Skill Settings at home.mycroft.ai, so we would need to find a way to do configuration locally (see the sketch after this list).
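
For context on that last point, Skills already read their settings from a local, dict-like store that mycroft-core caches in each Skill directory as settings.json; the missing piece is a local replacement for the home.mycroft.ai sync. A minimal sketch of the Skill side, assuming the standard MycroftSkill API (the skill class and the api_key setting are made-up examples):

    from mycroft import MycroftSkill

    class ExampleSkill(MycroftSkill):
        def initialize(self):
            # self.settings behaves like a dict. Values are normally synced
            # down from home.mycroft.ai, but they are also cached in the
            # Skill's local settings.json, which is what a self-hosted
            # backend (or a manual edit) would have to populate instead.
            self.api_key = self.settings.get('api_key', '')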


#6

I’ve been waiting for a year on this and it still has not happened, which is a complete shame considering the recent post about Google, Alexa and Siri:
Alexa_Siri_Google Hidden command attacks

If I understand it correctly, that attack could still work on a completely offline solution.


#7

It is possible to run it offline, but there is no out-of-the-box solution…

If you want to get your hands dirty:

  • remove all metrics (see the config sketch below)
  • disable pairing
  • disable remote config
  • find an offline STT (PocketSphinx is not good…)
  • accept that many Skills need internet and won’t work

Compare the changes from my fork (slightly outdated, no Python 3).
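
As a rough sketch, the metrics upload is controlled by a single flag in the user-level mycroft.conf (a JSON file, typically ~/.mycroft/mycroft.conf). This assumes the opt_in key used by current mycroft-core; pairing and the remote-config fetch are driven by code and the Pairing Skill rather than a plain setting, which is what the fork changes patch out:

    {
      "opt_in": false
    }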


#8

I have recently been running DeepSpeech locally with the ‘pretrained’ model on a separate computer in my house.

It was fairly easy to set up and to point Mycroft at it. The server does not have a GPU, so it’s not as fast as it could be, but I think the gain in local network speed makes it not that different from the cloud service, which is also kind of slow, in my opinion. I will probably get a GPU-based server at some point, but I don’t expect a huge improvement in speed, because non-GPU is already usable for the short commands I use.
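
For anyone wanting to reproduce this, the Mycroft side is just an stt section in mycroft.conf. A minimal sketch, assuming mycroft-core’s deepspeech_server STT module and a made-up LAN address for the machine running DeepSpeech:

    {
      "stt": {
        "module": "deepspeech_server",
        "deepspeech_server": {
          "uri": "http://192.168.1.50:8080/stt"
        }
      }
    }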

The big hit I’m taking is with accuracy. I have to speak slowly, right in front of the Mycroft, and leave gaps of silence between words.

I’m currently starting to research ways to better train the local service. I have not gotten very far.

My pipe dream would be for the Mycroft community to be able to share and assimilate incremental training gains without sharing any audio. That’s way over my head at this point, though.


#9

Do you know about the DeepSpeech trainer for improving accuracy?
https://home.mycroft.ai/#/deepspeech


#10

I do now haha - thanks

Will that training somehow be accessible to local DeepSpeech services? Or would I be able to install the trainer software itself locally?

I’ll definitely take some time to listen and train, whether or not it’s doable locally - great project.


#11

If you manage to release something on Windows any time soon, I’m fairly certain that it’d be really easy to do something with C# or VB.NET that uses the System.Speech.Recognition namespace in the Common Language Runtime. Its accuracy isn’t the best, but it is definitely a functional baseline. And it doesn’t seem to need pauses between each word like DeepSpeech apparently does.


#12

I think a well-trained DeepSpeech doesn’t require pauses between words. It’s just the combination of my voice and the ‘pretrained’ model that Mozilla distributes that seems to result in that.


#13

You can also contribute to Project Common Voice. That is the data DeepSpeech uses in the end, so this way it will at least get used to your accent and tone of voice.