The Mycroft Personal Server - Starting the Conversation


#106

I finally got around to installing it onto my server, so now I can take a look. I will let you know when I get something.


#107

I keep getting a 404 error when trying to install the backend, saying pypi.org/simple/personal-mycroft-backend is not available. I am using the Mycroft git repo. Was something changed?


#108

What step gives the error? Can you share logs from that piece?


#109

That is when doing the install via pip. When installing via git, the error I get is:

draveyn@Draveyn-Server:~/personal-backend$ from personal_mycroft_backend.backend import start_backend
from: can’t read /var/mail/personal_mycroft_backend.backend
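For what it's worth, that particular message is a classic symptom of typing a Python `import` statement at the bash prompt: the shell interprets `from` as the mail utility's `from` command, which tries to read a mailbox under /var/mail/. A quick illustration using only a standard-library module (the backend import quoted above would be run the same way, inside Python rather than at the shell):

```shell
# Typing Python's import syntax at a bash prompt invokes the mail 'from'
# command, producing an error like:
#   from: can't read /var/mail/<module name>
# Running the statement through the Python interpreter works as expected:
python3 -c "from os.path import join; print(join('var', 'mail'))"   # prints var/mail
```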


#110

Why is the backend trying to read from a mail directory that is empty to begin with?


#111

I believe you can set that in the configuration.


#112

I looked at settings.py, but it just asks for SMTP details for a mail server. Also, when looking at making a GUI for Mycroft, the problem I see is that it would be a lot easier to make a stand-alone GUI system, due to the pairing process and sending and receiving commands. Though I do understand it’s better to send commands to a DB that hosts all the responses.


#113

A GUI for the backend? Or for the front end? There’s already a GUI project for Mycroft itself; you can find it elsewhere on their GitHub.


#114

Is there a How to Start guide that details the build steps to get up and running?


#115

Clone the repo.
Edit the config to suit your local setup.
Use the example scripts to start the front end and back end pieces.
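As a sketch, the three steps above might look like the following. The repository URL, config path, and script names here are assumptions for illustration, not verified against the actual project layout; check the repo's README and examples directory for the real ones.

```shell
# Sketch of the setup steps above; URL, paths, and script names are
# placeholders and may differ from the actual project.
git clone https://github.com/MycroftAI/personal-backend.git   # assumed URL
cd personal-backend

# Edit the configuration to match your local setup before starting anything.
nano personal_mycroft_backend/settings.py

# Start the back end and front end using the provided example scripts.
python3 examples/start_backend.py
python3 examples/start_frontend.py
```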


#116

A GUI for the client. The only other one I’ve seen is paid; I was going to do one using Unreal Engine.


#117

There are instructions to run the Mycroft GUI in KDE Neon Unstable, which is what the Mark II displays will use. However, it sounds like you’re talking about more detailed 3D avatars?


#118

@gez-mycroft I will definitely check that out, thanks. Does the backend require a mail server?

I just want the backend to be… well, the backend for Mycroft itself. I don’t need it for anything else.


#119

You should be able to use an external mail service by entering the relevant SMTP configuration. Whatever mail service you use should have some documentation on setting up a mail client that will contain these details, e.g. server, port number, and username structure (e.g. whether they want “draveyn” or “draveyn@mailservice.com”).
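As a rough sketch, the SMTP details a typical external mail service needs might look like this. The hostname, port, and credential values are placeholders, and the exact key names in the backend's settings.py may differ:

```python
# Illustrative SMTP settings for an external mail service; key names and
# values are placeholders and may not match the backend's settings.py exactly.
SMTP_CONFIG = {
    "server": "smtp.mailservice.com",       # your provider's SMTP host
    "port": 587,                            # 587 = STARTTLS; some providers use 465
    "username": "draveyn@mailservice.com",  # some providers want the full address
    "password": "app-specific-password",
}

# Quick sanity check that the required fields are present before starting up.
required = {"server", "port", "username", "password"}
assert required <= SMTP_CONFIG.keys()
```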


#120

Hi. I’m new to Mycroft and I’m very interested in the personal server because I don’t have reliable, always-on Internet.

I haven’t read all 114 posts, but I wonder if a personal server light might be a useful goal. In the section re: What a personal server isn’t, I read “You don’t have to be a High Geek but it will require some significant computational resources, like an always-on PC with a high-quality GPU.” A number of the other posts mention implementing TensorFlow and the massive compute resource required. But I have an old Samsung S5 and Motorola G6 on which I can run Sky Code UK’s voice offline translator when I travel. It takes a few seconds for the translation to appear, but the STT response is almost instantaneous. Now if a G6 with its teeny, tiny processor can do STT with almost no latency, why can’t such a functionality be incorporated into a light-weight server? I think most users would be willing to pay the less than 10 USD for a license.

Clearly, the personal server light I envision would have to occasionally go out to the web for some information (e.g. Is flight AA 1234 on time?), but most of the time no web access would be needed (e.g. do something with my home automation system, or play one of my music files).

Thoughts?


#121

Your tolerance for latency and accuracy will drive the majority of choices: less accuracy / more latency == fewer resources needed. A high-end server isn’t necessary for many things; it just makes some of them faster. Better STT engines require more local resources, and a GPU helps the two major ones (Kaldi/DeepSpeech) but isn’t necessary. More life-like TTS engines can use a GPU as well, but again, it isn’t necessary. Wikipedia can be run locally on minimal resources if one is so motivated. I run on an old desktop box with a low-end GPU. It does pretty well.


#122

Another couple of things I should have mentioned earlier:

The STT capability for a “personal server light” would only have to be able to understand the utterances from perhaps two or three people, not the entire range that DeepSpeech has to process, so theoretically it can be much simpler and require fewer resources.

The second required feature of the PS light STT would be to allow the user to correct the STT’s responses such that it improves its accuracy over time. Since it’s dealing with a much, much smaller sample group, accuracy might improve rather quickly.