Creating a new skill walkthrough

I’ve had a play with what’s available so far and it’s all looking good. Now I’d like to start adding skills.

While I’m happy to poke around with it until it breaks in an effort to understand it, I have limited time available these days, and I think this area of the documentation could use some improvement. That’s not a criticism; the docs look great, and I can see that they also have a repository for folks to make pull requests to (which is where I’m going with this), but for now they’re just a little devoid of any actual content :slight_smile:

So I thought I’d write down my experiences with the goal that other people will be able to follow them. However, first I’d like some pointers myself.

For the sake of having an example to work with, let’s say I wanted to create a skill that simply repeats back what I said (my kids have a toy that does this already and they love it, seriously). So what I want is to be able to say: “Mycroft, say hello” and Mycroft will reply “Hello”. In fact, let’s make that “Hello world” so we can use the traditional over-simplified example.

As I understand it, I will need to write an Adapt intent that listens for the word “say”, then write a skill that takes everything after the word “say” and has Mycroft repeat it. I will then need to place this in a folder with an __init__.py file and put that in the mycroft/skills/ directory.
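To make that concrete, here is a minimal sketch of what I imagine such a skill could look like. Everything Mycroft-specific here is an assumption from early mycroft-core (the MycroftSkill base class, Adapt’s IntentBuilder, the “SayKeyword” vocabulary name), so the import paths and class names may well differ in your version; the except branch only stubs the classes so the text-handling logic can be read and tried without a Mycroft install:

```python
# __init__.py for a hypothetical "say" skill. API names are assumptions
# from early mycroft-core and may differ in your version.
try:
    from adapt.intent import IntentBuilder
    from mycroft.skills.core import MycroftSkill
except ImportError:
    # No mycroft-core installed: minimal stubs so the sketch still runs.
    class MycroftSkill(object):
        def __init__(self, name=None):
            self.name = name

        def register_intent(self, intent, handler):
            pass

        def speak(self, text):
            print(text)

    class IntentBuilder(object):
        def __init__(self, name):
            pass

        def require(self, keyword):
            return self

        def build(self):
            return None


def text_after_say(utterance):
    """Everything after the first 'say' in the utterance, or '' if absent."""
    _, sep, rest = utterance.lower().partition("say")
    return rest.strip() if sep else ""


class SaySkill(MycroftSkill):
    def __init__(self):
        super(SaySkill, self).__init__(name="SaySkill")

    def initialize(self):
        # Adapt intent: fires whenever the "say" vocabulary keyword is heard
        intent = IntentBuilder("SayIntent").require("SayKeyword").build()
        self.register_intent(intent, self.handle_say)

    def handle_say(self, message):
        utterance = message.data.get("utterance", "")
        self.speak(text_after_say(utterance) or "Hello world")


def create_skill():
    return SaySkill()
```

The folder would presumably also need a vocabulary file defining the SayKeyword entry (something like a vocab file containing the word “say”); check the SDK examples for the exact layout, since I’m guessing here.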

This isn’t going to be quick (nothing to do with the difficulty or otherwise of the task, more that I don’t have a lot of free time these days), so before I get started, is there anything else I will need to do or be aware of?

@Autonomouse: Go ahead, Sensei! Show us the light :bulb:

Like you, I’d like to start adding Skills. There is a Framework/SDK for that here. But I don’t know how to start, so your help is welcome :pray:.

I had planned to start with a Skill that can communicate with an ESP8266. Since I want to make it easy to use, it’ll need a configuration file that’s easy to modify, and I thought that Mycroft could handle that.
So, to begin with, I’ll make a Skill that can read and write a configuration file. Any configuration file, actually, like ~/mycroft/mycroft.ini.

Example :

  • Me: “Mycroft, what’s my location?”
  • Mycroft: “You are in Lawrence.”
  • Me: “Set my location in Belgium.”
  • Mycroft: “You are now in Belgium.”

I think this Skill could help other Skill-makers.
With this Skill, I hope I can easily add a new ESP8266 module just by asking Mycroft. :wink:
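The read/write part of that doesn’t actually need anything Mycroft-specific: .ini-style files like mycroft.ini can be handled with Python’s standard configparser module. A rough sketch (the path, section, and key names below are illustrative only; check your install for the real layout):

```python
# Read and update values in an .ini-style config file such as
# ~/mycroft/mycroft.ini. Section and key names below are made up.
import configparser  # named ConfigParser on Python 2


def get_setting(path, section, key):
    config = configparser.ConfigParser()
    config.read(path)  # silently skips files that don't exist yet
    return config.get(section, key)


def set_setting(path, section, key, value):
    config = configparser.ConfigParser()
    config.read(path)  # load existing values so they are preserved
    if not config.has_section(section):
        config.add_section(section)
    config.set(section, key, value)
    with open(path, "w") as f:
        config.write(f)
```

So “Set my location in Belgium” would boil down to something like set_setting(path, "location", "city", "Belgium"), and the “what’s my location” reply to get_setting(path, "location", "city").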

That sounds cool! Maybe take a look at the enclosure client: mycroft-core/mycroft/client/enclosure at master · MycroftAI/mycroft-core · GitHub

The source is not out yet for the Arduino, but it’s an example of a client that interprets events on the message bus, and communicates via a serial UART as well.
I’m not sure if this will help or not:

Mycroft core services communicate via a Tornado websocket message bus. This service can be started from the source with:
./start.sh service

Skills are loaded into another service that is run by the script:
./start.sh skills

The skills service is a client that attaches to the message bus. It listens for events, parses their intent, and triggers the matching skills.

The wake word recognizer, speech-to-text, and text-to-speech handling reside in the voice client:
./start.sh voice

I’ve not tried it yet, but the skills SDK will give you a framework to write another client that will attach to the Mycroft message bus service, listening for and sending events to the rest of the stack. On the device, the enclosure client runs and does just this.
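To make the bus part concrete: messages on it are JSON objects with a "type" and a "data" field (per the mycroft-core source; verify against your version). A sketch of building one and pushing it to the running service — the port 8181, the /core path, and the "speak" message type are all defaults I’ve seen in mycroft-core’s config, so double-check them too:

```python
# Build a message for the Mycroft websocket message bus. The envelope
# {"type": ..., "data": ...} matches what early mycroft-core expects;
# verify against your version before relying on it.
import json


def make_message(msg_type, data=None):
    return json.dumps({"type": msg_type, "data": data or {}})


# With the third-party websocket-client package installed, sending a
# message to a running "./start.sh service" would look roughly like:
#
#   import websocket
#   ws = websocket.create_connection("ws://localhost:8181/core")
#   ws.send(make_message("speak", {"utterance": "Hello world"}))
#   ws.close()
```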

Feel free to correct any errors or add emphasis if you know better than I.

I’m a little confused about how to even get the skills environment working. The Skills SDK page says “install the skills SDK via the instructions here”, but that leads to a page on GitHub that just says “welcome to the wiki, work in progress”. The next “here” link in the instructions leads to the same place. I can’t get “sudo /etc/init.d/mycroft-skills stop” to work, and all of the mycroft-skills-container commands assume a working mycroft skills container. I can do ./start.sh skills and launch bash to watch the log, but the sdkdoc doesn’t work, and you can’t do anything from the bash skills window except read the process tree summary.

I’m also having trouble understanding the way the current skills are laid out. I am new to Python, so that may be part of it, but it affects both understanding the existing skills and writing new ones in that format, which was what I understood from a skills example off that site.

Looking at the current skill tree, the skills don’t seem to leave a lot of room for Mycroft to apply changes to its own skills, let alone store those changes or edit other skill commands later. For example, when I say “set my name as Phate”, it searches Wikipedia and comes back with the movie Faith, or a definition of my name with a lot of numbers and statistics, or a definition of what a name is. If I ask it what my name is, it asks me for my name, which is all well and good, but it doesn’t store the value or the relation. A correction interface would also be nice, something like “no, don’t do that, do this: (insert example)”.

I wanted to start working on those relations while learning Python: build a relation tree, plus the ability to store the users Mycroft is working with (and passwords), so it knows each person’s answers, preferences, and common commands and has them readily available on tier two or three, while basic unused commands would sit at the lower end, waiting to be added to the preferences and moved up or down the scale depending on frequency of use.

I think that you can start a skill in a container by using ./start.sh skill_container. That should be better documented though, I agree.

As for context and modifying configuration attributes, we don’t currently have anything in place for that. I think one of our team members may be working on context soon, but I’m not entirely sure. If you have ideas for that, feel free to share them.

Yeah, just did that; it ran in the shell, but the mycroft-skills-container commands still don’t work. I’m going to save my files, delete Mycroft, and try again. Could be the fact that I set it up under phatez/Mycroft_AI/mycroft/mycroft-core as the tree.

I thought I changed all the relevant path data in all the files, or at least most of them. I’ll try it this time with it being phatez/mycroft-core, as recommended in the gnome shell readme file. Wouldn’t mind having the GUI interface that is seen briefly in the background here, but being able to debug Mycroft from the terminal during interaction is key, so that I can see exactly where the problem is when I am tinkering with his brain (muah haha…) lol, sorry, couldn’t think of a better phrasing for that, and I’m running late… again… just like every other day ^.^


*** edit: added update ***

Yup, tried using the docker instructions this time and wasn’t able to get that to work; tried the Mycroft-AI-gnome-shell-extension and that failed too. So I think I’ll revamp an old Dell Dimension 9200 I got when I fixed someone’s computer, and dedicate it solely to Mycroft.

Same here. I’m a JavaScript guy and don’t know Python. I just want to create some skills using JavaScript; are there any examples? And since Mycroft is open, how can I have it send the user’s utterance to my existing skills in Watson instead?