Will Mycroft have anything set up for disillusioned Snips makers due to the Sonos buyout?

Wondering if there will be something like a quick-start for snips.ai users looking to switch, since we know Sonos won’t be giving us the support and tools we need to keep making cool stuff. What a slap in the face…


I only had a brief look at Snips, but I think the major differences are:

- There is no equivalent of the Snips console in Mycroft.
- Mycroft does not support satellites (yet; maybe some time in the future).

There is no Snips-to-Mycroft migration document, but a good starting point would be the Mycroft documentation: https://mycroft-ai.gitbook.io/docs/


Another difference is that STT/TTS are provided by cloud services at the moment. Offline usage is possible, but it requires a large amount of computing power (more than any Pi can currently provide) and is not as easy to set up as it was in Snips.

Welcome mushu!

It is sad to hear they are closing repositories and shutting people out. Mycroft is 100% open source so you can be confident that whatever happens to the company, you can continue to use the entire stack forever more!

I haven’t used Snips myself, but from conversations with other users it sounds like Snips and Mycroft aren’t directly comparable. Mycroft is intended to be a complete voice assistant, meaning it handles any speech it receives rather than a limited range of pre-trained phrases.

As someone checking out Mycroft for the first time, I’d love to hear of any roadblocks you run into, things that don’t quite make sense, or that could just be clarified further.


Thanks for the welcome. The thing I liked about Snips was that everything was done without Google/Amazon/Bing/etc. getting any of your queries or learning activity. It did require the servers at Snips central to do the heavy lifting (which are sadly being shut down now), but otherwise it was completely private. I guess Mycroft doesn’t have any learning set up like that, so we would have to “crunch our own numbers”. I’m also looking at Rhasspy https://github.com/synesthesiam/rhasspy for the privacy aspect.

Mycroft is a universal voice assistant: it should be able to “understand” anything you tell or ask it, not only the “pre-compiled” utterances that Snips does/did.
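To illustrate the difference, here is a toy sketch (not the real Snips or Mycroft/Adapt code; all function names are made up): exact-phrase matching only recognizes utterances seen at training time, while keyword-based parsing in the spirit of Mycroft’s Adapt intent parser matches any utterance that contains the required keywords:

```python
def match_exact(utterance, trained_phrases):
    """Snips-style sketch: only phrases known at training time match."""
    return utterance.lower() in trained_phrases


def match_keywords(utterance, required_keywords):
    """Adapt-style sketch: any utterance containing all keywords matches."""
    words = set(utterance.lower().split())
    return required_keywords.issubset(words)


trained = {"what is the weather"}
keywords = {"weather"}

# An unseen phrasing fails exact matching but passes keyword matching.
print(match_exact("what's the weather like today", trained))     # False
print(match_keywords("what's the weather like today", keywords))  # True
```

Real intent parsers are of course far more sophisticated (entity slots, confidence scores, fuzzy matching), but this is the basic reason Mycroft can handle phrasings it has never seen before.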

You can run Mycroft completely locally, e.g. using espeak for TTS and Kaldi or DeepSpeech for STT. This comes at the price of a very robotic-sounding voice and a high word error rate, which makes it not quite “production ready” yet. The available language models for local solutions are simply not good enough compared to Google or Amazon cloud services - but the latest DeepSpeech release looks very promising…
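For anyone curious what that looks like in practice, here is a rough sketch of the relevant part of `~/.mycroft/mycroft.conf` for a local setup. The exact module names and the server URI here are assumptions on my part - check the Mycroft documentation for the values supported by your mycroft-core version:

```json
{
  "tts": {
    "module": "espeak"
  },
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}
```

This assumes you are running a DeepSpeech server on the same machine; the STT side is the hard part, since that is where the heavy models live.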
