Skills which have the same trigger

I currently have problems with several skills that share the same trigger word as other skills, “play” for example. How could we solve that? The interaction currently looks like this:
play Bon Jovi:
do you want to use youtube, mpd or Pandora:
pandora:
play Bon Jovi on pandora:

Hi Andreas,

It’s a great question; we are implementing some common frameworks for exactly this purpose. Common Play and Common Query skills are already implemented.

The basic way it works is that if a user invokes a specific player, e.g. “play Bon Jovi on Pandora”, the request goes straight to Pandora. If the request is more ambiguous, or more than one skill might be able to respond, then Mycroft retrieves a confidence score from each skill: how confident that service is that it can provide what the user wants. So Spotify might have a greater range of music, but if you have a pre-made “work out station” on Pandora, that should be prioritised over some random workout playlist on Spotify.
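A minimal sketch of this confidence-based selection, under stated assumptions: the class, method names, and scores below are illustrative only, not the real Mycroft CommonPlaySkill API.

```python
# Hypothetical sketch (not the actual Mycroft API): each skill reports
# how confident it is that it can serve the request, and the framework
# routes the request to the highest-scoring skill.

class MusicSkill:
    def __init__(self, name, premade_stations):
        self.name = name
        self.premade_stations = premade_stations

    def match_confidence(self, phrase):
        # A pre-made station is a strong match; anything else is only a
        # generic "we can probably search for that" match.
        if phrase.lower() in self.premade_stations:
            return 0.9
        return 0.5


def select_skill(phrase, skills):
    """Ask every skill to score the phrase and return the best match."""
    return max(skills, key=lambda skill: skill.match_confidence(phrase))


pandora = MusicSkill("Pandora", {"work out station"})
spotify = MusicSkill("Spotify", set())

best = select_skill("work out station", [pandora, spotify])
print(best.name)  # the pre-made Pandora station outranks Spotify's generic match
```

The key design point is that the routing decision is made centrally from per-skill scores, so no skill needs to know which other skills exist.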

Check out the Common Play Framework documentation for details: https://mycroft.ai/documentation/skills/common-play-framework/

I’ve found that some general phrases like “Play Queen” don’t work, so there’s no doubt still some tweaking to do. It would be great to hear how well this is working in practice for others too.

What’s been your experience?

I use Mycroft in German, so maintaining the translations is difficult. If you put a hard limit on the accepted phrasings, you move away from natural language. There are also other skills with similar content, like Wiki and Wolfram Alpha … or Fhem and Hue … You could also set a priority flag, similar to Fallback, which the user could adjust via Mycroft.
I don’t want it to work like Alexa with “Switch TV on Harmony”, but being asked occasionally because I didn’t remember the right sentence is fine by me.


I have to read:
https://community.mycroft.ai/t/multiple-intents-match-an-utterance/5533/18

Hi @gras64. There are two ways of approaching this problem. The Common Play and Common Query mechanisms (and, I understand, a coming Common Turn On/Off) act at the meta-skill level, as “super” skills that select between actual intents. Alternatively, what I was looking at in issue 5533 was bringing this mechanism into the core Mycroft code, doing the selection as part of the normal intent-selection process.

The Adapt engine is asked to find the single best intent to match an utterance. (Actually, the best for each possible parse, but let’s just go with one parse.) I’ve added code to return a list of possible matching intents, ordered by confidence in the syntactic pattern match. From that, there are at least two possible choices:
a) try each intent in turn until one succeeds; or
b) ask each intent for a semantic confidence and try them based on that

I’ve tried the first approach, as I haven’t yet investigated a use case for the second. Intents can be modified to signal success or failure by adding an explicit return. Current intents all succeed (normal ones, not fallback ones), so no changes are needed for backward compatibility. It adds a new message type on the bus and relatively small changes to the intents code. Beta code is at Github.