Creating my first skill with essentially no experience - Mycroft MagicMirror skill


#1

So, as you can tell from the title, I have very little experience in writing code. I don’t know Python or Java. But I am a fast learner/tinkerer and am committed to seeing this through. I am looking for any suggestions that might help me or anyone willing to put in a little time to answer a few questions when I run into problems. I figure this is the best place to start.

I have a Raspberry Pi 3 B running Raspbian Jessie (not Lite) with MagicMirror and Mycroft both installed and operating without issues. Both start automatically on boot and operate independently. For the skill I am proposing to work, a user would need to have a number of things set up on their Pi.

First I started with a Raspberry Pi 3 B and a 16 GB microSD card,
then got the Raspbian Jessie image here:

http://downloads.raspberrypi.org/raspbian/images/raspbian-2017-07-05/

After using Etcher to write the image to the SD card, put it in the Pi and boot. In the Linux terminal window, run:

  • sudo apt-get update

  • sudo apt-get upgrade

Then, from the command line or Linux terminal window, follow the installation instructions for MagicMirror:
https://github.com/MichMich/MagicMirror

Once you have followed all the instructions and have MagicMirror working, install Mycroft-Core.
Files and directions are here: https://github.com/MycroftAI/mycroft-core/tree/master

I did run into an issue with the libfann-dev library, explained here with a solution: Trying to install both Mycroft-Core and MagicMirror on the same PI

So… I thought I would try to build on an existing MagicMirror module called MMM-voice with the Hello-Lucy modifications to receive commands from a Mycroft skill.

That’s where I am now. The MMM-voice module for MagicMirror is installed and works if I stop Mycroft. It’s an issue with having one microphone: (Error) audio open error: Device or resource busy. Essentially, two applications can’t share the microphone without a solution for that (possibly using ALSA’s dsnoop). But my thinking is to just have Mycroft use the microphone and build a skill that passes commands to the MMM-voice module (JavaScript).

I believe building the Mycroft skill to do that will be rather straightforward, using commands like the Hello-Lucy modifications, e.g.:
Hide Clock
Hide Email
Show Clock
Show Email
Swipe Left
Swipe Right
Show Page One
Hide Page One
Show Modules
Hide Modules
etc. etc.

The .voc files would probably be:
ActionKeyword.voc
containing:
Show
Display
Hide
Conceal

Modules.voc
containing:
Alarm
Clock
Email
News
Weather
etc. (putting all of the module names in the Modules.voc file)

sample1.intent.json
contains:
    {
        "utterance": "Hide Clock",
        "intent_type": "ActionKeywordIntent",
        "intent": {
            "ActionKeyword": "HIDE",
            "Module": "CLOCK"
        }
    }

As far as dialog, Mycroft could respond “Hiding Clock”.
So that’s kind of where I am. Two really big questions:

  1. Will I need to build an intent for each command, or would there be a way to use an array?

  2. What would be the best way to have Mycroft send the command “HIDE_CLOCK” to the MMM-voice module to be processed? A socket message? Or should I reuse the code written by fewieden and bring it into the Mycroft environment?

Whatever advice the community is willing to provide would be extremely helpful.

Cheers!


#2

Hi,

I would use a single intent using the voc files you’ve suggested and just check which word was actually used. Something like:

    from adapt.intent import IntentBuilder
    from mycroft import intent_handler

    # inside your MycroftSkill subclass:
    @intent_handler(IntentBuilder('').require('ActionKeyword').require('Modules'))
    def handle_mm_command(self, message):
        # the matched words arrive in message.data under the .voc file names
        action = message.data['ActionKeyword'].lower()
        module = message.data['Modules'].lower()

        if action in ['show', 'display']:
            self.show(module)
        elif action in ['hide', 'conceal']:
            self.hide(module)

I can’t speak to the best way to connect to the MagicMirror though. It looks like there is some websocket API (not sure about this, I’m not very good with JS) that could maybe be used directly, or you could go through the Hello-Lucy code.


#3

Thanks! That is awesome! Makes sense to me. I just need to wrap my head around what each of the pieces of code does.

What are the sample1.intent.json files used for?

My plan is to use Mycroft for the voice recognition portion, given the microphone resource-busy problem. The MMM-voice module has all the code to control the MagicMirror display properties, so I see no reason to re-invent the wheel there. My thought is to have the MMM-voice module listen on a port instead of the microphone, and just send the commands to the port from Mycroft. The MMM-voice module uses PocketSphinx for voice recognition, and its wake word detection is nowhere near as good as Mycroft’s. So my idea is to take fewieden’s code and modify it to listen on a port for the commands, bypassing the PocketSphinx portion. Hello-Lucy is just a set of modifications to fewieden’s code that allows for individual module controls and pages of modules.
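
If that works, the Mycroft side of the port approach could be quite small. A minimal sketch (the port number and the plain-text command format are my assumptions, since the modified MMM-voice doesn’t exist yet):

    # Send a command string to a (hypothetical) modified MMM-voice
    # node_helper listening on a local TCP port. Port 2525 and the
    # plain-text command format are assumptions for illustration.
    import socket

    def send_mirror_command(command, host='localhost', port=2525):
        with socket.create_connection((host, port)) as sock:
            sock.sendall(command.encode('utf-8'))

    send_mirror_command('HIDE_CLOCK')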

So here’s a question that will show how much I need to learn. Regarding what sample1.intent.json contains:
    {
        "utterance": "Hide Clock",
        "intent_type": "ActionKeywordIntent",
        "intent": {
            "ActionKeyword": "HIDE",
            "Module": "CLOCK"
        }
    }

Will I need to create a number of sample*.intent.json files to match the number of commands for the modules, or will I just need a few?
In other words, how does that work?

Thanks!


#4

The sample1.intent.json files are basically used for automatic skill testing and don’t define the intent.

You don’t need to create these; the intents are defined together with the code. In my example above, the decorator on the method declares the intent.

    @intent_handler(IntentBuilder('').require('ActionKeyword').require('Modules'))
    def handle_mm_command(self, message):

This means that if an utterance contains a word from both ActionKeyword.voc and Modules.voc the handle_mm_command method should be called.
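
For example, for the utterance “hide the clock” the message handed to the handler would contain data roughly like this (illustrative values; the real dict has a few more bookkeeping fields):

    # Roughly the message.data the handler sees for "hide the clock";
    # the keys match the names of the .voc files required above
    message_data = {
        'utterance': 'hide the clock',
        'ActionKeyword': 'hide',
        'Modules': 'clock',
    }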

Check out this doc for some more details: https://mycroft.ai/documentation/skills/introduction-developing-skills/


#5

@forslund So I just wanted to post a little bit about where I am with this. The creator of the MMM-voice module for MagicMirror helped me understand more about how the Mycroft MagicMirror skill should work. There are options as to how to have Mycroft control modules, for example (SHOW_CLOCK), (HIDE_CLOCK) or (UPDATE_MIRROR). The MMM-voice module has that functionality, but there is a problem trying to run both the MMM-voice module and Mycroft at the same time, referred to above: the two processes trying to use the microphone resource at the same time.

He gave me a few suggestions. The idea that makes the most sense to me is to have Mycroft act as a “remote control” of sorts. There is an existing MMM-Remote-Control module that works by having a remote.html webpage running on the MagicMirror webserver at localhost:8080/remote.html. Mycroft can pass commands to the MMM-Remote-Control module using the GET method. For example:

    # importing the requests library
    import requests

    # api-endpoint
    URL = "http://localhost:8080/remote"

    # parameters given here
    action = "HIDE"
    module = "module_3_CLOCK"

    # defining a params dict for the parameters to be sent to the API
    PARAMS = {'action': action, 'module': module}

    # sending the GET request and saving the response as a response object
    r = requests.get(url=URL, params=PARAMS)

    # extracting the response data in json format
    data = r.json()

I have had issues with MagicMirror’s ipWhitelist when trying to get the MMM-Remote-Control module to work from my cell phone, for example, but using the Chromium browser on the Pi’s PIXEL desktop, accessing remote.html and passing commands via the browser like localhost:8080/remote?action=HIDE&module=module_3_CLOCK
works perfectly.

Obviously, part of the requirements of this skill would be to have MMM-Remote-Control installed and working, and the requests package for Python installed via pip install requests.
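
Putting that together with forslund’s handler from post #2, I imagine the skill method could end up looking something like this rough sketch (the MODULE_IDS mapping and the spoken responses are my assumptions; the real module identifiers depend on the order of modules in your config.js):

    # Rough sketch: Adapt intent handler from post #2 driving the
    # MMM-Remote-Control GET endpoint. MODULE_IDS is a hypothetical
    # mapping from spoken names to MagicMirror module identifiers.
    import requests
    from adapt.intent import IntentBuilder
    from mycroft import MycroftSkill, intent_handler

    REMOTE_URL = 'http://localhost:8080/remote'

    class MagicMirrorSkill(MycroftSkill):
        MODULE_IDS = {
            'clock': 'module_3_CLOCK',  # assumed IDs; check your own mirror
            'email': 'module_4_EMAIL',
        }

        @intent_handler(IntentBuilder('').require('ActionKeyword')
                                         .require('Modules'))
        def handle_mm_command(self, message):
            action = message.data['ActionKeyword'].lower()
            module = message.data['Modules'].lower()
            remote_action = 'SHOW' if action in ('show', 'display') else 'HIDE'
            params = {'action': remote_action,
                      'module': self.MODULE_IDS.get(module, module)}
            requests.get(REMOTE_URL, params=params)
            verb = 'Showing' if remote_action == 'SHOW' else 'Hiding'
            self.speak('{} {}'.format(verb, module))

    def create_skill():
        return MagicMirrorSkill()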

Before I really start writing the skill, are there any suggestions or better ideas for this skill?
No matter what, it seems there will have to be a module installed on the MagicMirror for Mycroft to pass commands to, unless there is a way to start Mycroft from the node_helper.js as fewieden suggests here.