Creating my first skill with essentially no experience - Mycroft MagicMirror skill

Good points, @dmwilsonkc.

Although the installation is not straightforward, and can’t be achieved using just requirements.sh or requirements.txt, anyone far enough along to be building a Magic Mirror type project is probably comfortable with a manual build process.

The other alternative I see here is to make an image of the RPi with Magic Mirror and mycroft-core installed, but that’s a lot of work.

@KathyReid Making an image of the Pi with both installed should not be that hard. The only catch is that I will have to do it from scratch, since all of the images I have made for myself as “backups” contain my personal information and keys to different APIs. Another tough thing will be finding time; it’s summer now and my kids are very active. I’ll see what I can do. Oh, and the two installs require a bit more space than will fit on an 8GB card, so I’ll have to learn how to shrink the partition of a 16GB card to just the right size to make the image as small as possible.

@KathyReid @forslund I’ve finally spent a little time correctly configuring the ipWhitelist in the config.js of my MagicMirror, and can include the instructions in the README for the magic-mirror-voice-control-skill repo.

I should be able to change the skill’s code to let users of any Mycroft (Mark 1, Picroft, Mark 2, etc.) set the IP of their MagicMirror, so the Mycroft instance no longer has to be located on the same RPi. I have not done that yet, but I will work on the code changes some this weekend. This will make the skill available to a much wider audience.

It will probably take some time to figure out the best way for users to “tell” Mycroft what the IP address of their MagicMirror is. Obviously the CLI would be easiest, but I would really like Mycroft to first check whether the IP has been set when it loads the skill. If the IP has not been set, Mycroft could ask the user by saying something like “To activate the magic-mirror-voice-control-skill you must first tell me your mirror’s IP address” with expect_response=True. Then the user could utter “192 dot 168 dot 1 dot 120”, for example, or just type it into the CLI. If the IP has already been set, just continue to load the skill, but add error handling for the case where the IP changes or the mirror is unavailable.
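As a sketch of that flow, the spoken address could be normalized and validated before being saved. This is a hypothetical helper, not the skill’s actual code; the function name and the handling of the “dot” phrasing are my assumptions:

```python
import ipaddress

def parse_spoken_ip(utterance):
    """Convert a spoken address like '192 dot 168 dot 1 dot 120' into
    '192.168.1.120'. Returns the dotted-quad string, or None if the
    result is not a valid IPv4 address."""
    candidate = utterance.lower().replace(" dot ", ".").replace(" ", "")
    try:
        ipaddress.IPv4Address(candidate)
        return candidate
    except ipaddress.AddressValueError:
        return None
```

If this returns None, the skill could re-prompt the user instead of saving a bad address to ip.json.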

Any thoughts?

There is also the option of using the skill settings at home to enter the ip address via text. Not as cool but should be functional.

That’s a good point, Ake - more info here

@KathyReid @forslund I have successfully written the code changes to the magic-mirror-voice-control skill. The skill will now work with any instance of Mycroft, whether it be a Picroft, a Mark 1, or any other version. The skill checks whether the MagicMirror’s IP has been configured by looking in the skill’s directory for the ip.json file. If it does not exist, Mycroft tells the user that to activate the skill they need to tell Mycroft the MagicMirror’s IP, and then saves the address the user utters to ip.json. Of course, the user could always type the appropriate command into the CLI if they wish.

I have included error handling that verifies the address is a valid IP, and that the MagicMirror is accessible at that address through the MMM-Remote-Control module.
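A minimal sketch of that kind of check, assuming the mirror’s web server listens on the MagicMirror default port 8080. The function name and the plain TCP probe are my assumptions; this only confirms something is answering on the port, not that MMM-Remote-Control itself is installed:

```python
import ipaddress
import socket

def mirror_reachable(ip, port=8080, timeout=3):
    """Return True if `ip` is a valid IPv4 address and something accepts
    connections on the mirror's web port; False otherwise."""
    try:
        ipaddress.IPv4Address(ip)
    except ipaddress.AddressValueError:
        return False
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The real skill would follow this up with an actual request to an MMM-Remote-Control endpoint to confirm the module is responding.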

I am still unsure how to test the skill so it can be added to the Mycroft skills repo, short of someone trying it out on an existing Mycroft platform that also has a MagicMirror, with a properly installed MMM-Remote-Control module, available on their home network.

While I appreciate the suggestion of using home.mycroft.ai to allow users to configure the IP, I personally would rather do that locally, by speaking directly to Mycroft, instead of having to store that information somewhere else. The simplicity of having Mycroft automatically reload the skill when I update the IP by voice command also seemed preferable to me.

What do you suggest my next steps should be?

Cheers!
Dave

I’ve posted an example of the magic-mirror-voice-control-skill on the MagicMirror Forum:

Yet another AI for MagicMirror: This time it’s Mycroft.

I’m hoping to get more feedback about other improvements that can be made.

FYI

Incredible work, @dmwilsonkc!

Testing

This is a challenge that we’ve been discussing a lot within the Skills Team. Several Mycroft Skills use third-party devices or services that cannot easily be tested without either having additional equipment available or knowing the third-party API well. To work around this, we decided that if there was one other person (i.e. not the Skill author) in the community to “vouch” for the Skill, then we would accept it. I don’t know whether there are any other people in the community running Magic Mirror or interested in doing so - don’t suppose you have anyone in mind? If we can get them to “vouch” for the Skill, then the next step would be to submit this Skill to the mycroft-skills repo, 18.02 branch.

Updating the IP address

Using Skill Settings it would be possible to allow the user to configure the IP address on home.mycroft.ai, but I don’t know whether this setting is mutable using any methods from the MycroftSkill class within a Skill. @forslund do you know if this is possible?

@KathyReid I may have been a little unclear in my earlier post. I have already configured the skill to update the MagicMirror’s IP address without using home.mycroft.ai. All the user needs to do is say “Hey Mycroft… set mirror IP address to 192 dot 168 dot X dot XXX”, or type it into the CLI, and Mycroft will verify that it is a legitimate IP address, save the ip.json file in the skill’s directory, and reload the skill. When the skill reloads, it verifies that Mycroft can actually communicate with the MagicMirror at that IP address. If not, it asks the user to verify the IP address and then stops. If it does connect, it tells the user that it has successfully connected to the MagicMirror. So there is no need for the user to go to home.mycroft.ai.

This assumes that the only time a user issues commands to the MagicMirror is when they are at home on their local network. This skill will not work over the Internet, unless the user can send commands to the Mycroft at home via text or an app on a mobile device. Mycroft and the MagicMirror need to be on the same local network, and the MagicMirror’s ipWhitelist needs to be configured to accept incoming requests from Mycroft’s IP address.

I just wanted to avoid using home.mycroft.ai if possible. This way, it’s all stored on the local machine, and can be changed just by telling Mycroft to change it if need be.

@forslund @KathyReid I just wanted to give you another update. I had a chance to attend the Maker Faire Kansas City this weekend. My son and I had a lot of fun hanging out with @eric-mycroft and @steve.penrod and talking with others about Mycroft. We worked to get the magic-mirror-voice-control-skill working on a Mark 1, but figured out that I needed to make a small code change, which has now been completed. So we should be good to go; we just need to have it tested.

The next step for this skill will be to create a visual interface on the MagicMirror itself, with the Mycroft logo and text showing what Mycroft hears and how Mycroft responds. I’m thinking something like MMM-Kalliope. Here is a short video. I think I can use the existing MMM-Remote-Control module to send notifications to an “MMM-Mycroft” module to do something similar.

My only stumbling block is really how to send all utterances and responses to the MMM-Mycroft module. I know how to send them from within the magic-mirror-voice-control-skill, but how do I send all of them from every skill that gets triggered, or when the wake word is heard? Any thoughts, anyone?
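One possible approach, sketched here under some assumptions: rather than hooking each skill, listen on the Mycroft messagebus (ws://localhost:8181/core by default) for the recognizer_loop:wakeword, recognizer_loop:utterance, and speak messages, and forward each one to the mirror. The function names are mine, and the wiring function requires the third-party websocket-client and requests packages:

```python
import json

def kalliope_payload(raw_message):
    """Turn a raw messagebus JSON string into an MMM-Kalliope POST payload,
    or return None for message types we don't care about."""
    msg = json.loads(raw_message)
    if msg.get("type") == "recognizer_loop:wakeword":
        text = "Listening"
    elif msg.get("type") == "recognizer_loop:utterance":
        text = msg["data"]["utterances"][0]
    elif msg.get("type") == "speak":
        text = msg["data"]["utterance"]
    else:
        return None
    return {"notification": "KALLIOPE", "payload": text}

def forward_bus_to_mirror(mirror_url="http://192.168.3.126:8080/kalliope",
                          bus_url="ws://localhost:8181/core"):
    """Wiring sketch: subscribe to the messagebus and POST matching
    messages to the mirror. Requires websocket-client and requests."""
    import requests
    import websocket

    def on_message(ws, raw):
        payload = kalliope_payload(raw)
        if payload is not None:
            requests.post(mirror_url, data=payload)

    websocket.WebSocketApp(bus_url, on_message=on_message).run_forever()
```

A skill runs against that same bus, so it may be possible to register handlers for these message types from within a skill rather than patching mycroft-core; I haven’t verified that against the current skill API.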

Sounds really cool. Looking forward to hearing more of your progress!

Hi there @dmwilsonkc, thank you so much for the update!

Just to confirm, do you mean this PR on the mycroft-skills repo? If so, we’ve marked that one as an ‘override’ so we can bypass the normal testing that Skills go through, because the Skill is difficult to replicate without installing the Magic Mirror component. Normally what we do here is get someone else from the Skills Team, or a Community member, to vouch for the Skill if we can’t test it, but I’m not sure whether anyone else out there is running the Magic Mirror Skill?

I’m not sure how to go about sending all the Utterances and responses to the MMM-Mycroft module. @forslund may have some ideas on this one?

@KathyReid Thanks for the reply. So… I had a chance to visit the PlexPod and the Mycroft office space in Kansas City, which was totally cool. I have been talking with @eric-mycroft, and he asked if I could drop by Maker Faire Kansas City and visit with others about my experience with writing a Mycroft skill (with essentially no experience). Eric took the time to get a working copy of the MagicMirror installed on a RPi. He also installed the magic-mirror-voice-control-skill on a Mark 1, which was attached to the same network as the MagicMirror on the RPi. It did connect, but there was an issue that has since been corrected in the code. At some point, Eric can do a git pull from my repo to the Mark 1 and test it. He’s a busy guy, and I’m not in any hurry, but he will test it at some point.

I am now looking to extend the MagicMirror - Mycroft connection by creating an MMM-Mycroft module for the MagicMirror, which will provide a better Mycroft experience on the mirror, similar to the MMM-Kalliope module I referenced in the earlier post. My hope is to get some of the MagicMirror builders interested in Mycroft, and using it for a robust AI experience on their mirror builds. It seems to me that the MagicMirror community is exactly the type of crowd that would be interested in an open-source, privacy-centric AI.

I am stuck, however. I’m not sure of the best way to get essentially the same interaction the bottom half of the CLI shows, but displayed on the MagicMirror. So, once the wake word is triggered, display the text of what Mycroft hears, and then the text of its response. I know how to get the text to the mirror, by sending a notification to the MMM-Remote-Control module, but how do I get Mycroft to send the notifications every time the wake word is triggered and every time Mycroft responds?

@forslund do you have any suggestions?

Super helpful! Thanks @dmwilsonkc, I will see if @eric-mycroft can test it for us - although I know he has some vacation time coming up, so it may need to wait until he returns from holidays.

Yeah, he’s been super helpful. I’m in no hurry; I’m sure it is a much-deserved vacation. He was working pretty hard on Sunday. :grinning:

Hey friends. I have every intention of adding Magic Mirror to our dashboard TV here at the office and linking up the Mark I for control. I am OOO Thursday through Thursday coming up. I may carve out some time tonight to give it a go; otherwise it’ll probably be July 6 or later.

@KathyReid @forslund @eric-mycroft Here’s another update on the magic-mirror-voice-control-skill. Well… actually I didn’t make any changes to the skill itself, but I did add a Mycroft icon on the MagicMirror, along with text of what Mycroft hears and of what Mycroft responds. At this point, it’s a complete hack, not a skill. I’m attaching a short video to give you an idea of what the interaction between the user and Mycroft looks like on the MagicMirror.

On the MagicMirror, I am using the MMM-Kalliope module to receive the text from Mycroft, via a requests.post to the MagicMirror’s IP address at the default port, path /kalliope.

Now here’s the hack. I would of course prefer another way, but I’m kind of stuck on how to do this in a skill. I added a few lines of code to /home/pi/mycroft-core/mycroft/client/speech/main.py:

def handle_wakeword(event):
    LOG.info("Wakeword Detected: " + event['utterance'])
    ws.emit(Message('recognizer_loop:wakeword', event))
    # Added: post "Listening" to the mirror. Requires `import requests`
    # at the top of the file; substitute your own mirror's IP.
    voiceurl = 'http://192.168.3.126:8080/kalliope'
    voice_payload = {"notification": "KALLIOPE", "payload": "Listening"}
    r = requests.post(url=voiceurl, data=voice_payload)

def handle_utterance(event):
    LOG.info("Utterance: " + str(event['utterances']))
    context = {'client_name': 'mycroft_listener'}
    if 'ident' in event:
        ident = event.pop('ident')
        context['ident'] = ident
    ws.emit(Message('recognizer_loop:utterance', event, context))
    # Added: send what Mycroft heard to the mirror, stripping the list
    # formatting so ['example'] becomes example. The URL is hard coded
    # for now; again, I know this is not ideal.
    utterance = str(event['utterances'])
    utterance = utterance.replace("['", "")
    utterance = utterance.replace("']", "")
    voiceurl = 'http://192.168.3.126:8080/kalliope'
    voice_payload = {"notification": "KALLIOPE", "payload": utterance}
    r = requests.post(url=voiceurl, data=voice_payload)

It does work, as you can see from the video, but the timing is slow. It is convenient, though, in that if Mycroft doesn’t hear you quite right, you can see what it heard.

The other hack was to /home/pi/mycroft-core/mycroft/client/text/main.py:

def handle_speak(event):
    global chat
    utterance = event.data.get('utterance')
    utterance = TTS.remove_ssml(utterance)
    # Added: forward Mycroft's spoken response to the mirror.
    # Requires `import requests`; the URL is hard coded for now.
    voiceurl = 'http://192.168.3.126:8080/kalliope'
    voice_payload = {"notification": "KALLIOPE", "payload": utterance}
    r = requests.post(url=voiceurl, data=voice_payload)
    if bSimple:
        print(">> " + utterance)
    else:
        chat.append(">> " + utterance)
    draw_screen()

Again, a hard-coded URL; at this point it is just a hack.
Is this even possible using a skill?
If so, how would the skill get triggered?
I’m looking for a little advice.

Cheers!
Dave

I am creating a skill that forwards the remainder of my utterance to another computer over a UDP connection. A “daemon” on that computer runs the Adapt parser to parse the remainder. The Mycroft skill parses the original command (“ask the computer to…”) and then passes on the remainder.
Not sure whether this is helpful or not. Good luck! If I ever get some time I will be playing with the MagicMirror project.
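That UDP hand-off can be sketched roughly like this. The addresses, port, and function names here are hypothetical, and the daemon side is reduced to receiving the text (the real daemon would feed each message into its own Adapt parser):

```python
import socket

def forward_remainder(remainder, daemon_addr):
    """Send the remainder of an utterance to the daemon over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(remainder.encode("utf-8"), daemon_addr)

def receive_one(sock):
    """Receive a single forwarded utterance on an already-bound UDP socket."""
    data, _addr = sock.recvfrom(1024)
    return data.decode("utf-8")
```

UDP is fire-and-forget, which suits the broadcast-to-many-mirrors idea below, but it also means the skill gets no confirmation the daemon heard it.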

@pcwii Thanks for the idea. I’m currently trying to capture every utterance, including when the wake word is detected; I’m just not sure whether that can be done in a skill. The UDP idea could really be useful with multiple devices, I think. In my case, multiple mirrors: I could say “show weather on the entryway mirror”, for example, broadcast the message to all mirrors, and have each one programmed to listen for its own notification. Cool idea.

How would you see UDP working to display “Listening” for example, every time Mycroft hears the wake word? That’s my current dilemma.

The only way I can see this working is to modify the wake word listener code to send a UDP packet once the wake word is triggered, maybe around where the listening sound is played. I’m not sure where in the code this is; if we can find a clean way to enable/disable it in the Mycroft config, we may be able to submit a pull request for it. I actually feel there is some value in creating a process to “echo” the STT to other devices so that it can be processed there. This would provide a workable communications channel to and from other “Mycroft-compatible” devices. It is fairly easy to send Mycroft commands remotely using websockets (https://github.com/pcwii/testing_stuff/blob/master/send_to_mycroft.py), but I don’t think it is as easy to intercept the commands it is processing.
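For reference, the websocket route mentioned above boils down to emitting a recognizer_loop:utterance message on the Mycroft messagebus (port 8181, route /core by default), as if the user had spoken the text. A sketch, assuming the third-party websocket-client package and a placeholder Mycroft IP; the function names are mine:

```python
import json

def mycroft_command(utterance):
    """Build the messagebus envelope that injects an utterance into
    Mycroft's intent pipeline."""
    return json.dumps({
        "type": "recognizer_loop:utterance",
        "data": {"utterances": [utterance]},
        "context": {}
    })

def send_to_mycroft(utterance, bus_url="ws://192.168.1.100:8181/core"):
    """Send one command to a remote Mycroft over its messagebus.
    Requires websocket-client; substitute your Mycroft's IP."""
    from websocket import create_connection
    ws = create_connection(bus_url)
    ws.send(mycroft_command(utterance))
    ws.close()
```

Intercepting traffic going the other way would mean subscribing to the same bus and filtering on message types, which is the harder part pcwii points out.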