Multi-turn skills in Mycroft


Does Mycroft support multiturn/conversational skills? Any examples or documentation on this for me to learn from?




This is a work in progress. In the current mainline you can tell Mycroft to listen immediately after it finishes speaking. Inside a skill you would invoke this with self.speak("utterance", expect_response=True) … see

That just allows the user to talk back to Mycroft without having to say “Hey Mycroft”.
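To make the mechanics concrete, here is a minimal sketch of that flag. The skill class and handler name are made up, and speak() is stubbed out so the flow is visible without a running mycroft-core; in the real framework speak() emits a message on the bus and expect_response=True re-opens the microphone when text-to-speech finishes:

```python
class WeatherSkill:
    """Stand-in for a MycroftSkill subclass; speak() is stubbed for illustration."""

    def __init__(self):
        self.spoken = []  # records (text, listen_after) pairs

    def speak(self, utterance, expect_response=False):
        # In mycroft-core this emits a "speak" message; expect_response=True
        # makes the device start listening as soon as TTS finishes,
        # so the user can reply without the wake word.
        self.spoken.append((utterance, expect_response))

    def handle_weather(self):
        self.speak("It is raining. Do you want the forecast for tomorrow?",
                   expect_response=True)

skill = WeatherSkill()
skill.handle_weather()
```

Note that the reply is treated as a brand-new utterance and goes through normal intent parsing; nothing routes it back to the skill that asked the question.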

For real conversational interaction, there is some ongoing work to implement a converse() method. That is in a forthcoming pull request:

This will allow a Skill to have a preview of utterances, before invoking the Adapt intent parser. Only recently used skills receive the converse() notification and it is passed to them in order of use. So you could code in a skill that does something like this:

def handle_cancel_alarm(self):
    self.speak("Are you sure you want to cancel?", expect_response=True)

def converse(self, utterances, lang="en-us"):
    # utterances is a list of transcription candidates, not a single string
    if "yes" in utterances:
        # do whatever
        self.speak("Alarm canceled")
        return True  # tell Mycroft this skill consumed the utterance
    return False

That is obviously simple-minded, but with this code the following interaction would work:

User: Hey Mycroft, cancel the alarm please
Mycroft: Are you sure you want to cancel?
User: Yes
Mycroft: Alarm canceled

There will be more tools to make building and managing conversations easier in the future, but this is the foundation.



Thank you.

What I am aiming at is exactly what your example dialog does. For example, in the ISS skill I wrote, I would like to modify it to:

  1. Only give the name associated with the coordinates, and then ask if the user wants to know the coordinates. I feel providing both is a bit of cognitive overload.

  2. I would also like to have Mycroft ask the user if they want to know more about the location identified:

Mycroft: The ISS is over the Sulu Sea, would you like to know more about the Sulu Sea?
User: Yes
Mycroft then does a wiki, google, whatever, and speaks back some info.
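That interaction could be sketched with the converse() method described above. Everything here is hypothetical: the class and method names are made up, speak() is stubbed so the logic runs standalone, and the lookup is a placeholder for whatever wiki/search call you end up using:

```python
class ISSLocationSkill:
    """Stand-in for a MycroftSkill subclass; speak() is stubbed for illustration."""

    def __init__(self):
        self.spoken = []
        self.last_location = None  # remember what we offered to explain

    def speak(self, text, expect_response=False):
        self.spoken.append(text)

    def handle_iss_location(self):
        self.last_location = "the Sulu Sea"  # placeholder for the real lookup
        self.speak("The ISS is over {}. Would you like to know more?".format(
            self.last_location), expect_response=True)

    def converse(self, utterances, lang="en-us"):
        if self.last_location and any("yes" in u for u in utterances):
            # Placeholder: fetch a summary from Wikipedia or similar here.
            self.speak("Here is some information about {}.".format(
                self.last_location))
            self.last_location = None  # question answered, stop intercepting
            return True  # consume the utterance
        return False  # let normal intent parsing handle it
```

Clearing last_location after answering matters: otherwise the skill would keep swallowing every later "yes" the user says.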

As I see the current status, there are two ways to do this:

Option 1: with the expect_response flag set to True, I could let the user call another skill without saying "Hey Mycroft". But this would be rather messy if I just need a "yes/no" answer, as I would have to implement a separate "yes/no" skill.

Option 2: the converse() method, but this is not yet deployed, so I will be patient. :slight_smile:




I just wanted to show interest in this, as I would be very keen to test any improvements here. I would expect to wake Mycroft with "hey mycroft" and, after some interaction, put Mycroft back to sleep with something like "thanks mycroft".

We will try basic interaction first: let's say have Mycroft run some skills until we say "thanks mycroft", and then we can improve the conversation skill, which I find to be the cornerstone of any AI.


I too am very keen to see this implemented and rolled out into the Mycroft release.

Let me share my motivation for this: I'm interested in building (OK, not so much interested as pestered/encouraged by a client who wants) an interface between Mycroft and a software layer that can either form and execute pandas commands or interact with databases. Not so much a dictation tool as something that takes queries and performs the appropriate analysis. Mycroft is currently the leading candidate engine for this, largely because it looks promising as a long-term solution that can run without cloud services (once OpenSTT is available).

But, what I have in mind definitely requires a system that supports an ongoing conversation.

So, without being pushy or anything…any ETA on the aforementioned multi-turn feature?

Who knows, maybe I can get the client who wants to talk to databases to pay me to help with implementation…


Multi-turn skills are possible:

  - by Adapt context -
  - by the converse() method - TODO get example
  - by the new get_response PR -
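For the third approach, get_response() speaks a prompt and hands the user's reply straight back to the skill as a string, instead of routing it through intent parsing. The sketch below stubs the method (and a canned reply) so the flow runs standalone; the dialog text and class name are my own, not from the PR:

```python
class AlarmSkill:
    """Stand-in for a MycroftSkill subclass; get_response() is stubbed."""

    def __init__(self, canned_reply):
        self._reply = canned_reply  # simulates what the user says back
        self.spoken = []

    def speak(self, text):
        self.spoken.append(text)

    def get_response(self, dialog):
        # In a real skill this would speak `dialog`, open the microphone,
        # and return the transcribed reply (or None on timeout).
        self.speak(dialog)
        return self._reply

    def handle_cancel_alarm(self):
        reply = self.get_response("Are you sure you want to cancel?")
        if reply and "yes" in reply.lower():
            self.speak("Alarm canceled")
            return True
        return False
```

Compared to converse(), this keeps the question and the answer handling in one place, which suits simple yes/no confirmations.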


Thanks @Jarbas_Ai!

I just did a git pull to get my core up to date. Hopefully I'll be hacking on a multi-turn skill soon!