Build an open future with us.

Invest in Mycroft and become a community partner.

South west UK joins the party


#1

Hi Everyone,

My name is Dave, from the county of Wiltshire in the UK. I’ve just joined, and I’ve spent the last couple of weeks tinkering with the Mycroft application for Linux. It’s running on Ubuntu 16.04.2 and is amazing me with what I can do with it and what it can do.

So to give a bit of background, I am an electronics engineer who has migrated into the world of systems engineering and now into embedded software engineering, which I have been doing for the last 25 years. My primary language is C, but I have dabbled in Python on a few projects and am picking it up fairly quickly.

I have written three skills which are very mission-specific, so at the moment they have no value for the GitHub community. At least not yet. This is what I have so far.

Light Bar
I bought a string of 20 RGB LED globes from eBay a while ago. Each one has a 2811 controller chip in it on a daisy-chained bus. Using an Arduino looky-likey (ESP8266) I created a controller for the lights, which included a web-based interface for control. I then fitted the modules onto a wooden batten which is now high up above me in my office. The programming allows various colours and brightnesses. The obvious step was to create a skill to send form POST commands to the light bar by spoken command. This was my first skill, and I completed it by first writing the Python code to drive the web interface outside of Mycroft, before integrating it into the skill template.
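Dave doesn’t share the code itself here, but the pre-Mycroft step he describes (plain Python driving the web interface with form data) might look something like this. The device URL and the field names (`cmd`, `colour`, `brightness`) are invented for illustration, not the actual project’s API:

```python
# Standalone sketch of driving a light bar's web interface with form
# data, before wrapping it in a Mycroft skill. The URL and field names
# below are assumptions for illustration.
from urllib import parse, request

LIGHTBAR_URL = "http://192.168.1.50/control"  # assumed device address


def build_command(action, colour=None, brightness=None):
    """Build the form-data payload for a spoken command."""
    data = {"cmd": action}
    if colour is not None:
        data["colour"] = colour
    if brightness is not None:
        data["brightness"] = str(brightness)
    return data


def send_command(action, **kwargs):
    """POST the payload as ordinary URL-encoded form data."""
    body = parse.urlencode(build_command(action, **kwargs)).encode()
    req = request.Request(LIGHTBAR_URL, data=body)
    with request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()
```

A skill’s intent handler then only has to map the recognised utterance onto something like `send_command("off")`.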

Shower room control
I had problems with my shower room, whereby members of the family would leave the room wet and not open a window, etc. Mould problems occurred as a result. I recently refitted the entire room and didn’t want the same problem to recur. So, another ESP8266 and some more electronics later, and I have a setup whereby the temperature and humidity in the room are monitored. As necessary, the gas-fired central heating is brought on, along with an extraction fan. The design goes through several stages of treatment to dry the room out, along with the towels. Again, the design has a LAN-based interface to allow monitoring and to apply overrides to both the central heating and extraction commands, if required. And yet again, Mycroft offered the opportunity for spoken requests for the status of the system and commands to the overrides.
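As a rough illustration of the kind of staged drying logic described above, with hysteresis so the outputs don’t cycle rapidly and manual overrides that win when set, here is a sketch. The thresholds and names are made up for the example, not taken from the actual controller:

```python
# Illustrative humidity-driven control step with hysteresis and
# LAN-issued overrides. Threshold values are invented for the example.
DAMP_ON = 70.0   # % RH at which drying starts
DAMP_OFF = 55.0  # % RH at which drying stops (lower, for hysteresis)


def decide(humidity, drying, heat_override=None, fan_override=None):
    """Return (drying, heating, fan) for one control step.

    `drying` is the state from the previous step; the hysteresis band
    means the room must fall well below the trigger humidity before
    the outputs drop out. Overrides, when not None, take priority.
    """
    if drying:
        drying = humidity > DAMP_OFF
    else:
        drying = humidity >= DAMP_ON
    heating = drying if heat_override is None else heat_override
    fan = drying if fan_override is None else fan_override
    return drying, heating, fan
```

Each sensor reading feeds the previous `drying` state back in, so a reading of 60% RH keeps an active drying cycle running but won’t start a new one.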

Weather station
I have an interest in the weather and operate a fairly advanced private weather station. This is connected to a server which runs the Cumulus software by Sandaysoft. The output from this is a data logger, a website interface and injection into various global weather communities. One of the website features is a set of gauges that read a simple JSON file from the server every 30 seconds. Opportunity here. My skill reads the JSON file and then allows various requests to verbalise the data from the weather station. You can ask for specific information or for a full report. The full report takes Mycroft about 2.5 minutes to verbalise, as there is a lot of information there. Granted, this one could be useful for others to play with.
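The gauge-file approach might be sketched like this. The URL and the key names (`temp`, `hum`, `wlatest`) are guesses at a Cumulus-style realtime JSON layout, not the actual fields the skill uses:

```python
# Sketch of a weather skill's data path: fetch the gauges' JSON file
# from the Cumulus server and turn readings into a speakable sentence.
# The URL and the key names are assumptions for illustration.
import json
from urllib import request


def fetch_gauges(url="http://weather.local/realtimegauges.txt"):
    """Read the same JSON file the website gauges poll every 30 s."""
    with request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def speak_summary(data):
    """Format selected readings as a sentence for Mycroft to speak."""
    return ("The outside temperature is {temp} degrees, humidity {hum} "
            "percent, wind speed {wlatest} miles per hour.").format(**data)
```

A full report would simply walk many more keys and hand each sentence to the skill’s `speak` call, which is presumably why it runs to 2.5 minutes.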

In general, Mycroft runs on my PC with the wake words “Hey Jarvis”. It’s got to be fun! I have also installed the system onto my Ubuntu laptop.

Where I want to go next is bringing the voice processing local, as the “satellite delay” does ruin the experience somewhat. I have a couple of pretty high-spec HP Enterprise servers to hand, which are crying out for something to get their teeth into. They both run Ubuntu Server 16.04. Progress on this front would therefore be of great interest to me.


#2

I think you underestimate the value of your work. I find it extremely handy to be able to look at something another person has done and either utilize it or just base other projects on the concepts therein.
Particularly as ESP8266s and 2811s are fairly common parts, skills that demonstrate their use are something I think could have value to others.

The weather one looks interesting, in that it might be usable as a replacement for the current weather skill for those with locally enabled devices. Though I might reach the point of exhaustion after two and a half minutes of that being read to me. :slight_smile:

DeepSpeech benefits more from a serious GPU than a CPU if you’re going that route. Also look into building your own Precise model, though I suspect your wake word will get a larger project-hosted effort soon enough.


#3

A huge welcome, @Darmain, great to have you here and hope you’re keeping cool in the heatwave over there at the moment. You’ve done some amazing integration work with Mycroft! We’d love to hear more!

In terms of processing voice locally, that will be a challenge due to the compute resources required. Over time we want to be able to run DeepSpeech locally, but that is some time away.

It’s worth having a look at our Roadmaps to see where STT and standalone server is going.


#4

Hi Baconator. Thanks for your reply and thoughts. It is more a case of finding the level of my work. By all means, I am happy to share; after all, it is in the spirit of open source and community contribution. I have a GitHub account now. I just have to figure out how to push code up to it.


#5

Hi @KathyReid, thank you for the welcome. Yes, it is warm here. We have regularly seen 29 °C (about 84 °F) over the last two months. We have seen hotter in recent years, but it’s the length of time the heat has been here that makes this different. The weather station is watching and recording for me.

With regard to the speech processing hardware: granted, most people wouldn’t own machines with the horsepower required. Being a bit of a geek, I have recently purchased two HP ProLiant powerhouses. One sports 16 × 2.55 GHz cores plus 48 GB RAM; the other has 24 × 2.93 GHz cores and 32 GB RAM. I am curious how that matches up to the equivalent processing power required for one user on your data centre. In practice there is one fundamental flaw in the plan for local processing, and that is electrical power consumption. Each of these machines draws 300 watts, which all finds its way out as heat, not to mention the roar of the 8 cooling fans. Needless to say, in the home this is not desirable for 24-hour running. One project I was playing with saw one machine “always on”. It worked out that it was using the same amount of power per day as the rest of the house. That plan got rapidly changed as a result.


#6

Latest news: I have created a GitHub account under the name “DarmainTheDonkey”. It’s a name… I have also successfully pushed my light bar skill project. While this does not contain the light bar sketch for the NodeMCU ESP8266, or the circuit diagram for it, it does demonstrate the technique of writing to a webpage using form data. It might be of value to others.

Repo address - https://github.com/DarmainTheDonkey/mycroft-lightbar.mycroftai


#7

Super helpful! Thanks for sharing!


#8

Hi. Am I doing something wrong here? Speech to text keeps having “moments”.

Me - Switch off the light bar please.
Mycroft - I don’t understand.
Mycroft debug console - Switch off the lights bar fireplace.

Yeah, I can kind of understand why it had issues.

I can come up with many examples, but when it has this kind of fit it takes me about 10 attempts to get the result I was looking for. I have tried speaking as clearly as I can, speaking naturally, speaking slowly, and so on. It seems to work best with natural speech.

Other times it works first time, every time.

The really odd thing is that it will not wake up for my wife or daughter, no matter how hard they try. Then my mother-in-law came round. My wife grumbles about me locking her out of my project!!

Mother-in-law then says “Hey Jarvis”.
Mycroft - “BEEP”.
Me - {Oh no, I’m dead!!}

Before I need my last will and testament, is there any way of addressing this?


#9

Hi @Darmain,
If I understand correctly, you are using “Hey Jarvis” as your wake word.

As far as I know, “Hey Jarvis” has not been trained for the Precise wake word engine yet. Only “Hey Mycroft” is trained for Precise (with “Hey Christopher” currently in training), while all other wake words use the PocketSphinx engine, which is not as good as Precise…

Maybe change the wake word to “Hey Mycroft” and see if that yields better results?
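For reference, the wake word is set in the user’s mycroft.conf (typically `~/.mycroft/mycroft.conf`). The exact schema may differ between versions, but it looks roughly like this:

```json
{
  "listener": {
    "wake_word": "hey mycroft"
  }
}
```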

Best regards,
Dominik


#10

Thanks for that Dominik, I will certainly give that a try.


#11

@Dominik Thanks for the advice. I can gladly report that, following the change, Mycroft is now responding to both wife and daughter. The rumours of my death have been greatly exaggerated.


#12

Good to hear! We definitely need more female voices, and younger voices, as part of our open data training set, in order to train the neural network to correctly recognise them :+1: