Multiple wake words to help define intent

I’ve been messing with Mycroft for about a week now, focusing mostly on using it for my home automation setup. While using it, I thought it’d be great if it were possible to assign a number of wake words, and have mycroft pass that wake word along when talking with other services.

As a use case, if I had a mycroft in each bedroom, the intent of me saying ‘turn on the fan’ is dependent on that room. If in my bedroom I called it Jarvis, it could turn on my fan. If the guest room is being used I could have them refer to it as mycroft, and it would operate the fan in the guest room using the same ‘turn on the fan’ command.

A similar setup would be neat for a number of other applications. I just started using mycroft with the Remember The Milk app. If I could add things to my list by calling it Jarvis, and my better half could add things to her list by calling it Mycroft, that’d be super useful. I like to hear the Fox news report, and she wants to hear CNN. My Pandora stations are different from hers. So on and so forth. I imagine there are a ton of similar skills that are tailored for an individual but are being used in a household setting.
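If each household member had their own wake word, the dispatch could be as simple as a mapping from the recognized wake word to a per-user profile. A minimal sketch in Python (all names, profiles, and the `profile_for_wake_word` helper are hypothetical, not part of any Mycroft API):

```python
# Hypothetical mapping from a trained wake word to per-user preferences.
USER_PROFILES = {
    "jarvis":  {"user": "pat",     "news": "Fox", "todo_list": "pat's list"},
    "mycroft": {"user": "partner", "news": "CNN", "todo_list": "partner's list"},
}

def profile_for_wake_word(wake_word: str) -> dict:
    """Look up the profile for the wake word that triggered the interaction.

    Falls back to the default "mycroft" profile for unknown wake words.
    """
    return USER_PROFILES.get(wake_word.lower(), USER_PROFILES["mycroft"])

# A news skill could then pick the right station without any extra dialog:
print(profile_for_wake_word("Jarvis")["news"])   # Fox
print(profile_for_wake_word("Mycroft")["news"])  # CNN
```

Each skill would only need to receive the wake word (or a user id derived from it) alongside the utterance to personalize its behavior.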

My thoughts on this:
If you have multiple mycrofts, then each mycroft would have its own home.mycroft.ai page. You could write a skill (fan-control-skill) with a web configuration component that includes a “location” setting, so each installation of the skill would use the default “location” configured on its own home.mycroft.ai page. This would still give you the option to specify the location in the utterance if, for some reason, you needed to turn off the fan in a different room from another mycroft. If a location was specified in the utterance, it would override the default from the home.mycroft.ai page and either send the command to the correct mycroft via the messagebus or control that fan directly.
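The override logic described above can be sketched in plain Python. This is only an illustration of the proposal, not real Mycroft skill code; the `resolve_location` helper and the list of known locations are made up:

```python
# Sketch of the proposed location resolution: a location spoken in the
# utterance overrides the default configured on that device's
# home.mycroft.ai page.
KNOWN_LOCATIONS = {"bedroom", "guest room", "living room"}  # hypothetical

def resolve_location(utterance: str, default_location: str) -> str:
    """Return the location named in the utterance, else the device default."""
    text = utterance.lower()
    for location in KNOWN_LOCATIONS:
        if location in text:
            return location
    return default_location

# Device installed in the guest room, configured with that default:
print(resolve_location("turn on the fan", "guest room"))                 # guest room
print(resolve_location("turn on the fan in the bedroom", "guest room"))  # bedroom
```

In a real skill, the default would come from the skill's settings (the web configuration component), and the resolved location would be used to pick which fan to control or which mycroft to forward the command to.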
Just my ramblings…:rofl:
have fun and welcome aboard.

I don’t think having one home.mycroft.ai for each device in the house would be the way to go. It would easily become a mess and a lot of work to administer.

I think each device should have a room location, and skills could (if they needed it) get that location. Then the automation skill could use the room location when deciding what to do.

Thanks @pat1121, great suggestion.

At a broader level, I think the problem you’re trying to solve with the multiple Wake Words feature request is really -

can my voice assistant recognize different people in a household

Having different Wake Words is one way to solve this general problem.

Google have gone down a different route here, and they are instead using vocal pattern recognition to identify different members of a household. IMHO this is probably a more elegant way to solve the problem, because it allows multiple household members to use multiple Devices, but still be recognised. For example, Diana and Nadia live together and Diana likes rock music while Nadia likes classical music. If Diana says “Hey Mycroft, play Discover Weekly in the living room” then her Discover Weekly is likely to be different to Nadia’s.

At an even broader level, a voice interaction occurs with a rich context:

  • The user has performed previous actions. Should context be inferred from this? We do some of this in Conversational Context.
  • The user has preferences, a location, a time of day, a day of the week. Should context be inferred from this? For example, usage patterns are likely to be different on weekdays and non-weekdays.
  • The user has a pattern of usage, based on days, weeks and months of using a voice assistant. How should this inform context?

Best, Kathy

Yes, this is exactly what I’d like to do, in combination with location awareness (like my fan example). The location seems like it’ll be easier to accomplish since there will likely be a completely separate mycroft device located in the area it’s receiving instructions for.

Correct me if I’m wrong, but using vocal pattern recognition seems like it would require storing a lot of information about the person using mycroft, and it would go against a lot of what made mycroft appealing to me. If I didn’t mind a creepy recording device in my house keeping tabs on me I’d already have a google or amazon device. I like mycroft a lot more because it isn’t the other options.

Is there a planned path forward for mycroft to have features that maintain the privacy of users but can still differentiate between them?

Hi there @pat1121, some great questions. There are a couple of approaches here.

One approach to vocal pattern recognition is to “train” a new Wake Word for each person in a household. This would require storage of vocal samples, but not necessarily a lot of other personal or identifying information.

If we were to store this information on a server, then it raises all sorts of privacy issues, particularly if the voice samples can identify the user.

You can see our forward technical planning for features on our Roadmaps.