Creating my first skill with essentially no experience - Mycroft MagicMirror skill

So, as you can tell from the title, I have very little experience in writing code. I don’t know Python or Java. But I am a fast learner/tinkerer and am committed to seeing this through. I am looking for any suggestions that might help me or anyone willing to put in a little time to answer a few questions when I run into problems. I figure this is the best place to start.

I have a Raspberry Pi 3 B running Raspbian Jessie (not Lite) with MagicMirror and Mycroft both installed and operating without issues. Both start automatically on boot and operate independently. For the skill I am proposing to work, a user would need to have a number of things set up on their Pi.

First, start with a Raspberry Pi 3 B and a 16 GB microSD card,
then get the Raspbian Jessie image here:

http://downloads.raspberrypi.org/raspbian/images/raspbian-2017-07-05/

After using Etcher to put the image on the SD card, insert the card into the Pi and boot. In a Linux terminal window, run:

  • sudo apt-get update

  • sudo apt-get upgrade

Then, from the command line or a Linux terminal window, follow the installation instructions for MagicMirror:
https://github.com/MichMich/MagicMirror

Once you have followed all the instructions and have MagicMirror working, install mycroft-core. Files and directions are here: https://github.com/MycroftAI/mycroft-core/tree/master

I did run into an issue with the libfann-dev library, explained (with solution) here: Trying to install both Mycroft-Core and MagicMirror on the same PI

So… I thought I would try to build on an existing MagicMirror module called MMM-voice with the Hello-Lucy modifications to receive commands from a Mycroft skill.

That’s where I am now. The MMM-voice module for MagicMirror is installed and works if I stop Mycroft. It’s an issue with having only one microphone: "audio open error: Device or resource busy". Essentially, two applications can’t share the microphone without some workaround (possibly using ALSA’s dsnoop). But my thinking is just to have Mycroft use the microphone and build a skill that passes commands to the MMM-voice module (which is JavaScript).
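From what I’ve read, the dsnoop route would mean something like this in /etc/asound.conf, so that two processes can capture from the one mic at the same time. This is just a sketch I pieced together from the ALSA docs, and the card/device numbers are guesses that would need adjusting:

    # /etc/asound.conf -- hypothetical dsnoop device so two processes
    # can capture from the same microphone at once
    pcm.sharedmic {
        type dsnoop
        ipc_key 1024           # any unique integer
        slave {
            pcm "hw:1,0"       # adjust to your mic's card,device (see arecord -l)
        }
    }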

I believe building the Mycroft skill to do that will be rather straightforward using commands like the Hello-Lucy modifications, e.g.:
Hide Clock
Hide Email
Show Clock
Show Email
Swipe Left
Swipe Right
Show Page One
Hide Page One
Show Modules
Hide Modules
etc. etc.

The .voc files would probably be:
ActionKeyword.voc
containing:
Show
Display
Hide
Conceal

Modules.voc
containing:
Alarm
Clock
Email
News
Weather
etc. (putting all of the module names in the Modules.voc file)

sample1.intent.json
contains:
{
  "utterance": "Hide Clock",
  "intent_type": "ActionKeywordIntent",
  "intent": {
    "ActionKeyword": "HIDE",
    "Module": "CLOCK"
  }
}

As far as dialog goes, Mycroft could respond "Hiding Clock".
So that’s kind of where I am. Two really big questions:

  1. Will I need to build an intent for each command, or is there a way to use an array?

  2. What would be the best way to have Mycroft send the command "HIDE_CLOCK" to the MMM-voice module to be processed? A socket message? Or should I reuse the code written by fewieden and bring that into the Mycroft environment?

Whatever advice the community is willing to provide would be extremely helpful.

Cheers!


Hi,

I would use a single intent with the voc files you’ve suggested and just check which word was actually used. Something like:

    @intent_handler(IntentBuilder('').require('ActionKeyword').require('Modules'))
    def handle_mm_command(self, message):
        action = message.data['ActionKeyword']
        module = message.data['Modules']

        if action in ['show', 'display']:
            self.show(module)
        elif action in ['hide', 'conceal']:
            self.hide(module)
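For completeness, here’s roughly how that handler could sit in a full skill file. This is just a sketch: the show/hide helpers and the dialog file names are placeholders I’ve made up.

    from adapt.intent import IntentBuilder
    from mycroft.skills.core import MycroftSkill, intent_handler


    class MagicMirrorSkill(MycroftSkill):
        @intent_handler(IntentBuilder('').require('ActionKeyword')
                        .require('Modules'))
        def handle_mm_command(self, message):
            # the matched words from ActionKeyword.voc and Modules.voc
            action = message.data['ActionKeyword']
            module = message.data['Modules']

            if action in ['show', 'display']:
                self.show(module)
            elif action in ['hide', 'conceal']:
                self.hide(module)

        def show(self, module):
            # placeholder: forward the show command to the mirror here
            self.speak_dialog('showing', {'module': module})

        def hide(self, module):
            # placeholder: forward the hide command to the mirror here
            self.speak_dialog('hiding', {'module': module})


    def create_skill():
        return MagicMirrorSkill()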

I can’t speak to how best to connect to the MagicMirror, though. It looks like there is some websocket API (not sure about this, I’m not very good with JS) that could maybe be used directly, or you could go through the Hello-Lucy code.


Thanks! That is awesome! Makes sense to me. I just need to wrap my head around what each of the pieces of code does.

What are the sample1.intent.json files used for?

My plan is to use Mycroft for the voice recognition portion, given the microphone resource-busy problem. The MMM-voice module has all the code to control the MagicMirror display properties, so I see no reason to re-invent the wheel there. My thought is to have the MMM-voice module listen on a port instead of the microphone, and just send the commands to that port from Mycroft. The MMM-voice module uses pocketsphinx for voice recognition, and its wake word detection is nowhere near as good as Mycroft’s. So my idea is to take fewieden’s code and modify it to listen on a port for the commands, bypassing the pocketsphinx portion of his code. Hello-Lucy is just a set of modifications to fewieden’s code that allows for individual module controls and pages of modules.
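Just to sketch what I mean (the port number and the command format here are totally made up, and this would only work if MMM-voice were actually modified to listen on that port):

    import socket

    def send_mirror_command(command, host='localhost', port=9999):
        # open a TCP connection to the (hypothetical) port a modified
        # MMM-voice would listen on, and send one command string
        sock = socket.create_connection((host, port), timeout=5)
        try:
            sock.sendall(command.encode('utf-8'))
        finally:
            sock.close()

    # e.g. send_mirror_command('HIDE_CLOCK')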

So here’s a question that will show how much I need to learn. The
sample1.intent.json
contains:

{
  "utterance": "Hide Clock",
  "intent_type": "ActionKeywordIntent",
  "intent": {
    "ActionKeyword": "HIDE",
    "Module": "CLOCK"
  }
}

Will I need to create a number of sample*.intent.json files to match the number of commands for the modules? Or will I just need a few?
In other words, how does that work?

Thanks!

The sample1.intent.json files are basically used for automatic skill testing; they don’t define the intent.

You don’t need to create these; the intents are defined together with the code. In my example above, the decorator on the method declares the intent.

    @intent_handler(IntentBuilder('').require('ActionKeyword').require('Modules'))
    def handle_mm_command(self, message):

This means that if an utterance contains a word from both ActionKeyword.voc and Modules.voc the handle_mm_command method should be called.

Check out this doc for some more details: https://mycroft.ai/documentation/skills/introduction-developing-skills/


@forslund So I just wanted to post a little bit about where I am with this. The creator of the MMM-voice module for MagicMirror helped me understand more about how the Mycroft MagicMirror skill should work. There are options as to how to have Mycroft control modules, for example (SHOW_CLOCK), (HIDE_CLOCK) or (UPDATE_MIRROR). The MMM-voice module has that functionality, but there is a problem trying to run both the MMM-voice module and Mycroft at the same time, referred to above, with the two processes trying to use the microphone resource at once. He gave me a few suggestions. The one idea that makes the most sense to me is to have Mycroft act as a “remote control” of sorts. There is an existing MMM-Remote-Control module that works by having a remote.html webpage running on the MagicMirror webserver at localhost:8080/remote.html. Mycroft can pass commands to the MMM-Remote-Control module using the GET method. For example:

# importing the requests library
import requests

# api endpoint for the MMM-Remote-Control module
URL = "http://localhost:8080/remote"

# parameters given here
action = "HIDE"
module = "module_3_CLOCK"

# defining a params dict for the parameters to be sent to the API
PARAMS = {'action': action, 'module': module}

# sending the GET request and saving the response as a response object
r = requests.get(url=URL, params=PARAMS)

# extracting the data in json format
data = r.json()

I have had issues with MagicMirror’s ipWhitelist getting the MMM-Remote-Control module to work with my cell phone, for example, but using the Chromium browser on the Pi’s PIXEL desktop, accessing remote.html and passing commands via the browser, like localhost:8080/remote?action=HIDE&module=module_3_CLOCK, works perfectly.

Obviously, part of the requirements of this skill would be to have MMM-Remote-Control installed and working, and the requests package for Python (via pip install requests).

Before I really start writing the skill, are there any suggestions or better ideas for it?
No matter what, it seems that there will have to be a module installed on the MagicMirror for Mycroft to pass commands to, unless there is a way to start Mycroft from the node_helper.js as fewieden suggests here.

@KathyReid @forslund Sorry for tagging the two of you, but you have been so helpful in the past, I thought you’d be the ones to ask. I am almost finished with the Magic-Mirror-Voice-Control-skill. There is very little code left to write, only minor clean-up and commenting really. So my question is: how do I test the skill on my Mycroft before I submit it? I have read Automatic testing of your Mycroft skill, but I am very confused.

Any guidance would be much appreciated!

Cheers!

Hi,

The automatic testing framework was just merged, so update your Mycroft installation (git pull in the mycroft-core directory).

(If you haven’t updated in a while this will update to python3 and you’ll need to run dev_setup.sh)

OK, at this point you should be able to run the tests. First, activate the virtualenv:

source .venv/bin/activate

Now you should be able to run the skill tests located in test/integrationtests/skills

The main one is discover_tests.py, which will search the system for skills and run all of their tests. This should be run using pytest:

pytest discover_tests.py

(since the upgrade to python3 a couple of the tests are broken but will be fixed shortly)

There’s also single_test.py, which can be used to test a single skill. Currently it is run using:

python single_test.py PATH_TO_SKILL

Creating tests:
Each skill has a test folder where the tests are added. The simplest type of tests are the intent tests; these are described by json files in the intent folder.

These look something like this:

{
  "utterance": "what's your ip address",
  "intent_type": "IPIntent",
  "intent": {
    "IP": "IP address"
  },
  "expected_response": ".*[0-9]+ dot [0-9]+ dot [0-9]+ dot [0-9]+.*"
}

This test will send the utterance "what's your ip address" to the skill system. It will then verify that an adapt intent called IPIntent is triggered and that a keyword called "IP" contains the text "IP address".
Then it will make sure the skill responds with something matching ".*[0-9]+ dot [0-9]+ dot [0-9]+ dot [0-9]+.*", a regular expression that will match spoken IPv4 addresses.

In the documentation you mention, further examples are available, like "expected_dialog", which makes sure a specific dialog file is used as a response.
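As a made-up example for your skill, an intent test using expected_dialog could look something like this. The intent_type here assumes the intent falls back to the handler’s name (I believe that’s what happens with an unnamed IntentBuilder('')), and "hiding" assumes you have a hiding.dialog file:

{
  "utterance": "hide clock",
  "intent_type": "handle_mm_command",
  "intent": {
    "ActionKeyword": "hide",
    "Modules": "clock"
  },
  "expected_dialog": "hiding"
}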

The skill acceptance auto-tester will be offline until tomorrow or Wednesday while I upgrade it for Python 3 and the new msm.


I tried to git pull from the mycroft-core folder, but I got this error:

remote: Counting objects: 8, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 8 (delta 6), reused 3 (delta 2), pack-reused 0
Unpacking objects: 100% (8/8), done.
From https://github.com/MycroftAI/mycroft-core
   1f630f6..a353c43  dev        -> origin/dev

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: empty ident name (for <pi@mycroft.(none)>) not allowed

What does that mean?

I have never seen this before with a git pull to update.

It may be that it’s doing a merge for some reason. That requires the email/name when tagging the merge commit.

Have you done any local changes to mycroft-core? Hmmm, no, you shouldn’t have, since you don’t have any email/name set.

Are you on master or dev? (check with git branch)


Yeah, I figured that out after I set the global values with email and user name and ran the update. My Mycroft installation was from the master branch. The merge did occur, as the update was from the dev branch. So far it hasn’t seemed to cause a problem; however, I haven’t put Mycroft through its paces. What do you suggest I do now?

I do have an image that I could revert to without losing much. Or I could just continue on with what I’m doing.
What do you suggest?

For now I think you can keep going with the current state. Otherwise you can probably do a git checkout dev && git pull to move over to a fresh dev branch.


@forslund Well, as I mentioned in the title of this post, I have no experience writing code.

I loaded my skill onto the pi in the /opt/mycroft/skills/ folder and right away it found some simple mistakes. Most of them were easy to fix thanks to the cli pointing them out to me, but this one stymied me.

0:02:59.006 - mycroft.skills.core:load_skill:144 - ERROR - Failed to load skill: Magic-Mirror-Voice-Control-skill
Traceback (most recent call last):
  File "/home/pi/mycroft-core/mycroft/skills/core.py", line 120, in load_skill
    skill = skill_module.create_skill()
  File "/opt/mycroft/skills/Magic-Mirror-Voice-Control-skill/__init__.py", line 316, in create_skill
    return MagicMirrorVoiceControlskill()
  File "/opt/mycroft/skills/Magic-Mirror-Voice-Control-skill/__init__.py", line 39, in __init__
    self.load_data_files(dirname(__file__))
  File "/home/pi/mycroft-core/mycroft/skills/core.py", line 885, in load_data_files
    self.load_vocab_files(join(root_directory, 'vocab', self.lang))
  File "/home/pi/mycroft-core/mycroft/skills/core.py", line 894, in load_vocab_files
    load_vocabulary(vocab_dir, self.emitter, self.skill_id)
AttributeError: 'MagicMirrorVoiceControlskill' object has no attribute 'emitter'

Not sure what I did wrong, but any help would be appreciated.

Here’s a link to a temporary repository for the skill:
Magic-Mirror-Voice-Control-skill

You’re loading the data files in the __init__ method. The emitter needed to complete this is not available at this stage.

Also, this is something you don’t need to do yourself; it’s done during skill loading for you. So just remove line 40 and it should work.
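In other words, the start of the class can be as simple as this sketch (assuming the usual MycroftSkill import):

    class MagicMirrorVoiceControlskill(MycroftSkill):
        def __init__(self):
            super(MagicMirrorVoiceControlskill, self).__init__()
            # no self.load_data_files(...) here -- mycroft-core loads the
            # vocab/ and dialog/ folders itself after constructing the skill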


@forslund What about the .json files? Again, this is just my inexperience, but I am not sure how to make sure the skill knows where they are. They are required for the skill to work properly. Will they be accessible to the skill without adding additional code to point to where they are? I’ve been looking at the other skills to get ideas from what other developers have done, but I’m still learning.

Also, I can’t thank you enough for your help. You’ve been so helpful and I sincerely want you to know how much I appreciate it.

Thanks!

the AvailableModules.json?

The code is executed from wherever the main Mycroft instance was started, so you need to specify the complete path:

    with open(join(self._dir, 'AvailableModules.json')) as f:
        ...

(join can be imported from os.path)
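So, assuming AvailableModules.json holds a JSON object, the whole read could look something like:

    import json
    from os.path import join

    # self._dir is the skill's install directory, so this works no
    # matter which directory mycroft-core was started from
    with open(join(self._dir, 'AvailableModules.json')) as f:
        available_modules = json.load(f)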


@forslund Thanks!!! I will make these code changes and let you know how it goes. Hopefully I will report back to you a fully functioning skill. I won’t be able to make the changes for a couple of days, spare time is getting harder to come by. Once I’ve completed the skill and have it functioning I may reach out to you to see if the skill acceptance auto tester is up and running and how I submit the skill.

Thanks again!


@forslund Well, no such luck. Changing to Python 3.4 is going to be more work than I thought.
New load skill error:

:56:37.972 - mycroft.skills.core:load_skill:144 - ERROR - Failed to load skill: Magic-Mirror-Voice-Control-skill
Traceback (most recent call last):
  File "/home/pi/mycroft-core/mycroft/skills/core.py", line 127, in load_skill
    skill.initialize()
  File "/opt/mycroft/skills/Magic-Mirror-Voice-Control-skill/__init__.py", line 62, in initialize
    r = requests.get(url=url, params=payload)
  File "/home/pi/mycroft-core/.venv/lib/python3.4/site-packages/requests/api.py", line 70, in get
    return request('get', url, params=params, **kwargs)
  File "/home/pi/mycroft-core/.venv/lib/python3.4/site-packages/requests/api.py", line 56, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/pi/mycroft-core/.venv/lib/python3.4/site-packages/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/pi/mycroft-core/.venv/lib/python3.4/site-packages/requests/sessions.py", line 603, in send
    adapter = self.get_adapter(url=request.url)
  File "/home/pi/mycroft-core/.venv/lib/python3.4/site-packages/requests/sessions.py", line 685, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)
requests.exceptions.InvalidSchema: No connection adapters were found for 'localhost:8080/remote' 

I will look into the differences between the requests module for 2.7 and 3.4.
2.7 syntax:

url = 'localhost:8080/remote'
payload = {'action': action, 'module': module}
r = requests.get(url=url, params=payload)

I have no idea how the syntax changes for 3.4.

Will keep you posted. Suggestions welcome.

Thanks

@forslund Well… I feel like a schmuck. I forgot the http:// in front of the URL.

I’ve typed it into the web browser on the RPi so many times without the http:// to check the responses from the MMM-Remote-Control module that my laziness carried over into the code.

I will keep you posted.

Thanks

The requests call seems right; are you sure there’s not just a missing http:// in the URL?

Looks a bit like it:

requests.exceptions.InvalidSchema: No connection adapters were found for 'localhost:8080/remote'

Edit:

I see you found it :slight_smile: (sorry for not reading all the way down)


@forslund Changing the url to url='http://localhost:8080/remote' fixed that problem. But it introduced me to a new problem that has to do with the data that is returned from the MMM-Remote-Control module. According to the README for the Remote-Control module:

Format of module data response

The response will be in the JSON format, here is an example:

{
  "moduleData": [
    {"hidden": false, "name": "alert", "identifier": "module_0_alert"},
    {"hidden": true, "name": "clock", "identifier": "module_1_clock", "position": "bottom_right"},
    {"hidden": false, "name": "currentweather", "identifier": "module_2_currentweather", "position": "top_right"}
  ],
  "brightness": 40,
  "settingsVersion": 1
}

so when this code is run:

    url = 'http://localhost:8080/remote'
    payload = {'action': 'MODULE_DATA'}
    r = requests.get(url=url, params=payload)
    data = r.json
    with open ('Moudule_Data.json', 'w') as f:
        json.dump(data, f, indent = 2)

It returns this error:

Traceback (most recent call last):
  File "/home/pi/mycroft-core/mycroft/skills/core.py", line 127, in load_skill
    skill.initialize()
  File "/opt/mycroft/skills/Magic-Mirror-Voice-Control-skill/__init__.py", line 65, in initialize
    json.dump(data, f, indent = 2)
  File "/usr/lib/python3.4/json/__init__.py", line 178, in dump
    for chunk in iterable:
  File "/usr/lib/python3.4/json/encoder.py", line 429, in _iterencode
    o = _default(o)
  File "/usr/lib/python3.4/json/encoder.py", line 173, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: <bound method Response.json of <Response [200]>> is not JSON serializable

I have searched Google for this issue.

Some of what I found suggests that it could be as simple as adding:
data = str(data)
before the json.dump call.

Other solutions suggest:
data = r.json()
instead of:
data = r.json
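If I’m understanding it right, the second suggestion makes sense: r.json without parentheses is the bound method object itself (which is exactly what the "is not JSON serializable" error names), while r.json() actually calls it and returns the parsed data. So presumably the corrected block would be something like:

    import json
    import requests

    url = 'http://localhost:8080/remote'
    payload = {'action': 'MODULE_DATA'}
    r = requests.get(url=url, params=payload)
    data = r.json()  # call the method to get the parsed response

    with open('Module_Data.json', 'w') as f:
        json.dump(data, f, indent=2)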

Have you run into this with the conversion to 3.4 yet? Any ideas?

Thanks!