The Mycroft Personal Server - Starting the Conversation


There are a bunch of different things to review, but maybe just document it in the wiki for now, along with the certificate chain details.


Most folks will also want to get


I had to append my PEM file to the venv's certifi cacert.pem to get it to verify the cert chain in the browser without issue:
cat fullchain.pem >> /opt/mycroft/.venv/lib/python3.x/site-packages/certifi/cacert.pem

You can find your path with something like this:
source /path/to/mycroft/.venv/bin/activate && python3 -c "import requests; print(requests.certs.where())"
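If you'd rather script the append than do it by hand, here's a minimal sketch. The paths below are temp-file stand-ins for illustration only; in practice you'd use your real fullchain.pem and the cacert.pem path printed by the command above.

```python
# Sketch: append a site's certificate chain onto a CA bundle.
# Temp files stand in for the real fullchain.pem / cacert.pem paths.
import tempfile
from pathlib import Path

def append_chain(chain_path, bundle_path):
    """Append the contents of chain_path to the end of bundle_path."""
    chain = Path(chain_path).read_text()
    with open(bundle_path, "a") as bundle:
        bundle.write("\n" + chain)

tmp = Path(tempfile.mkdtemp())
(tmp / "fullchain.pem").write_text(
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
)
(tmp / "cacert.pem").write_text("# existing root certificates\n")

append_chain(tmp / "fullchain.pem", tmp / "cacert.pem")
print("BEGIN CERTIFICATE" in (tmp / "cacert.pem").read_text())
```

This is just the Python equivalent of the `cat ... >>` one-liner above, handy if you want to automate it after each cert renewal.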


Hi All,
Interesting reading this thread.
A friend of mine was playing with Picroft not so long ago and it got me thinking.
I’ve been striving for a while to stop using as many “cloud” services as I can in favour of my own services.
To that end, I now have an implementation of Nextcloud with Certbot.
This allows me to do all the collaboration and access to personal files etc that I wish.
Also, SSH tunnels are great for interacting with services at home, like remote desktop to my main pc.
Anyway, I digress.
I stumbled across this thread after reading the Mycroft docs and searching for “mycroft server”.
I’m looking for something that I could install on my main server, which is based on an i7-4790K.
I also have a GTX 550 kicking around in case I need to add some GPU power to the mix.
Once that’s deployed, there would need to be a client application that I could install on:
aforementioned desktop, a laptop and mobile phone, that would connect back to my home server
(either through SSH tunnel, I could get off my ass and finally setup a vpn, or maybe even a HTTPS tunnel?)
Am I in the right place? Is this the sort of thing you’re talking about? If so, where do I start?
I see some of you are talking about a “personal backend/frontend”, which sounds like the sort of thing I’m looking for.

Anyway, thanks for reading.


I have a newly refurbished server already ordered to play around with. Adding a local Mycroft server would make things faster and easier. I am definitely up for beta testing the environment on my server.


Yes, you are in the right place… I am new to Mycroft, but from what I am reading, it’s in the works, primarily if people would like it. I’m not sure how much interest they’d want to see before they do that kind of coding.


Hey, welcome to the Forum byteback :slight_smile:

The “personal-backend” project is a community-run initiative, and I know they are always looking for more contributors. You can see the current state of the project on GitHub. However, please take note of the big warning that this is still “UNDER CONSTRUCTION”.

A few of our long term contributors have it running however it is not yet a ‘plug and play’ solution. You will need to configure various aspects, and debug as you go. If you aren’t familiar with Linux and Python then it may not be the right time to dive into it just yet.

If none of that deters you, let us know how it goes :grinning:


I’ve been playing more with DeepSpeech (0.5 alpha) lately. It works well on CPU (i7-4770, 8 GB), doing 9+ transcriptions per second on a customized model. I will retract my advice about needing a GPU for that.


For now, yes, that’s correct: a GPU would need to be NVIDIA-based. Later this year, it looks like some TensorFlow Lite GPU support for OpenGL ES 3.1-capable GPUs will be released, but various tools will still have to be updated to support that.


Having read the forum on the Mycroft personal server, I have to admit that I am not running any version of Mycroft at all. My only computing device is my Android phone, a Motorola Moto E4.

I expect to buy a new phone in the second half of this year, one that can do trillions of machine learning operations per second. It is possible that billions of new phones like this will be purchased worldwide in the next few years. The digital agents that are running on these devices may define what machine intelligence on planet Earth is.

My interest in Mycroft right now is as a potential benchmarking tool. Can a version of Mycroft be packaged as a benchmarking tool that can be run on all of the devices that are being designed now, and that will be manufactured by the end of this year?

If so, it should be publicized in the mass media to high heaven.

With more attention, maybe more people will begin contributing and testing modules for Mycroft so it grows: speech recognition, language translation, identity verification, privacy protection, an intelligent software-defined radio module, an intelligent IPv6 router module, and more.

It will define machine intelligence on planet Earth.



You can make a version of mycroft be whatever you can program it to be.


Sorry I’ve been absent from the community, I’ve had other stuff going on… I saw this was updated so I came to read it. I have not checked the rest of the forum yet, but I am running Debian 9 on a ProLiant server and would like to know about getting the server files added to it. Also, could I run multiple Raspberry Pis linked to the central server and keep them synced as well? For example, I’d have a Raspberry Pi in different parts of my house with all the info synced: I say “hey Mycroft, play a song”, then I move to another room and can say “hey Mycroft, play another song”, or something like that.


I personally just want to be able to run the whole stack on my own. That sounds awesome! I am not everyone, but I don’t really care about making it more pared down; keeping setup parallel with the existing process sounds OK to me.

One thing to me is I would still like to contribute to the open-data set.

I think it should be exactly the same, except that I’d have to run some more Docker apps and change mycroft-core’s config to point to loopback:${whateverPortWeUse}
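For concreteness, mycroft-core reads its backend location from the `server` section of `mycroft.conf`, so a local override might look something like this sketch (the port here is just a placeholder for whatever the personal backend ends up using):

```json
{
  "server": {
    "url": "http://127.0.0.1:8080",
    "version": "v1",
    "update": true,
    "metrics": false
  }
}
```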

I like the roadmap. The item about simplified STT and TTS is one I care about. Having a pre-trained DeepSpeech model would be awesome; that would really add accessibility.


There’s a pre-trained DS model available with each release they do, and you can contribute to the Mozilla open voice dataset, which is used for training DS, as well. TTS engines are all around; you can still run one of the cloud ones, or your own locally if you’d like. Getting the higher quality local TTS does take some more resources.

@Draveyn while you can connect multiple devices to a local backend, they’re still treated separately. That level of coordination doesn’t exist quite yet.


I guess my ignorance might be showing, but I believe you can opt in to the “Mycroft Open Dataset” from the homepage (for Precise, and eventually DeepSpeech as well), as well as contributing directly. That is what I mean.


Yes. If you run your own local stuff, then you won’t be sending back to the mycroft servers.


I will be tackling skill settings this month. I might add a way to forward data to Mycroft’s servers if you opt in; this way, even if you use the personal server, you can still share STT and wake word files.


@baconator What about having the Raspberry Pis run websockets to a central AI system, possibly having the server do everything? My server is a quad octa-core at 3.6 GHz each; I’m sure it could handle everything.
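For what it’s worth, each Mycroft instance already exposes its messagebus as a websocket (by default on port 8181 at the `/core` route), so a thin Pi client could forward messages to a central server’s bus. Here is a hedged sketch of building such a message; the `build_bus_message` helper is my own illustration, not an existing API.

```python
# Sketch: serialize a message in the Mycroft messagebus JSON format
# ({"type": ..., "data": ..., "context": ...}). A websocket client
# connected to ws://<server>:8181/core could then send it.
import json

def build_bus_message(msg_type, **data):
    """Build a JSON-encoded messagebus message of the given type."""
    return json.dumps({"type": msg_type, "data": data, "context": {}})

# e.g. ask the central instance to speak from another room's Pi
msg = build_bus_message("speak", utterance="Hello from the other room")
print(msg)
```

Actually coordinating state across rooms (which song is playing where) would still need logic on top of this, as noted above; the bus just gives you the transport.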


If you can get that working, then please share!


I will be looking into it, just not this week. I will also be looking into 3D modeling, probably having the Raspberry Pi host a client version of the 3D model that pulls everything from the server.