Wake words for multiple devices

Greetings. I found out about Mycroft earlier today and am stoked about it. Since I’m a big fan of Linux but dubious about “smart” devices from Amazon, Apple, Google, Facebook, etc., this seems like a great project for me to sink my teeth into.

I’ve got a few questions, though. I have a laptop, a home office PC, and a media center PC. Would it be better to set a different wake word for each device, or the same wake word for all of them? I guess I’m asking whether to treat each device as its own entity, or act as if they are parts of a greater whole (e.g. HAL, JARVIS)?

Any thoughts?

Hi there @lafnlab,

Wake Word is a per-account setting rather than a per-device setting, so if all of your devices are under the same account, they can only be set to the same Wake Word through home.mycroft.ai.

From a voice user interface perspective, a single Wake Word that you’re used to tends to work best.

Interesting use case actually!

Kind regards,
Kathy

I can see where a single wake word would be easier from the voice user perspective, but it would be nice to have this configurable as well. There’s already a device configuration for each Mycroft instance on the portal. It would be nice to have a set of “defaults”, where all devices start with the wake word “Hey Mycroft” and blue eyes, plus a configuration section for changing those defaults and the ability to override them on individual units. I would love to have two Mycroft Mark I units sitting right next to each other, each with a different voice, a different eye color, and a different wake word. Maybe that’s just me being nerdy, but it sounds like so much fun.

While it’s not user friendly, you can change the wake word per device by logging into the device via ssh and inserting the following JSON into /home/mycroft/.mycroft/mycroft.conf:

{
  "listener": {
    "threshold": 1e-90,
    "phonemes": "HH EY . M AY K R AO F T",
    "wake_word": "hey mycroft"
  },
  "tts": {
    "module": "mimic",
    "mimic": {
      "voice": "ap"
    }
  }
}
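For example, to give a second unit its own identity, you could in principle swap in a different phrase with a matching CMUdict-style phoneme string. The name and phoneme string below are just an illustration (untested), and the threshold usually needs per-phrase tuning:

{
  "listener": {
    "threshold": 1e-90,
    "phonemes": "HH EY . JH AA R V IH S",
    "wake_word": "hey jarvis"
  }
}

You’ll likely need to restart the Mycroft services (or simply reboot the unit) for the change to take effect.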

You can also change the eye color per device via voice (“Hey Mycroft, set the eye color to {color}”).


I have this exact same issue. In this case I have two units sat side by side. Yep, it gets confusing, especially when I paired the second unit: it started saying “you can say things like HEY MYCROFT…”, and the first unit went “What?” :rofl:

Anyway, is the fix above still valid with Mimic 2? I tried it on my second Mark 1 and it didn’t work. Moreover, the config file had been rewritten back to what it was.

The fix above will not work with your current setup, unfortunately. In April 2018 we switched from the phoneme-based PocketSphinx to Precise as our Wake Word engine, and the example above is for PocketSphinx.

Precise works differently from PocketSphinx, in that Precise is trained on recordings of the Wake Word being spoken (the waveform), whereas PocketSphinx listens for a sequence of phonemes.

It is possible to have a different Precise model on different devices, but we don’t have an easy-to-use, easy-to-configure way to do this yet. Eventually we want to get to the point where people can train their own Precise model. For example, I’d use a model trained on women with an Australian accent, as it would be more accurate for me than the current model, which has a lot of male voice samples.
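For the curious, such a per-device override would live in the same /home/mycroft/.mycroft/mycroft.conf file. A rough sketch of what it might look like, assuming your mycroft-core build reads a “hotwords” section and that you already have a trained model at the (hypothetical) path shown:

{
  "listener": {
    "wake_word": "hey computer"
  },
  "hotwords": {
    "hey computer": {
      "module": "precise",
      "local_model_file": "/home/mycroft/.mycroft/precise/hey-computer.pb",
      "sensitivity": 0.5
    }
  }
}

The phrase “hey computer”, the file path, and the sensitivity value are all illustrative; check the mycroft-core source for the exact keys your version supports.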


Hi @KathyReid, that’s a shame. As you might imagine, having the same “wake word” gets highly entertaining with these two. :joy:


I got a good laugh out of that!

Seriously though, you might want to consider training your own Precise model. I’ve done a bit of training on a model that says “G’day Mycroft” in an Aussie Lady Voice, because my voice isn’t as well recognised by the default model as, say, a mid-Western American man’s; we have far more voice samples from mid-Western American men.

Hi @KathyReid, my plan is that the left unit will be the front-line unit with the latest official releases and a few of my own skills. The right one will be the test bed for anything and everything, so on that basis I would need to train the right one. Of course, the fundamental question is “How?” I see the documentation about Precise, but I suspect this is a little more advanced.
Thanks,
Dave

Yes, it’s definitely a bit more advanced, I’m afraid. For now, what I would do is turn the microphone pickup on your dev unit down a little; that makes it less likely to hear the Wake Word.
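A software-side alternative, assuming the “hotwords” section from the earlier sketch is supported on your image, would be to lower the sensitivity value on the dev unit only, for example:

{
  "hotwords": {
    "hey mycroft": {
      "module": "precise",
      "sensitivity": 0.3
    }
  }
}

Again, the key names and the value 0.3 are illustrative rather than verified on a current Mark 1 image.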

That said, if you want to give training Precise a try, we’re very willing to help with that too.

Best, K.


Let’s go for it @KathyReid. Twins I can work with. Clones are impossible. :joy:

Sure, my colleague @Wolfgange had some documentation on how to do this, I’m just hunting it up :slight_smile:

Here’s how to train your own Wake Word. If you have any trouble, feel free to let me know or ask in the ~machine-learning Mattermost channel.


Thank you for that. I have a job lined up for the weekend. :grinning:
Once the model is converted to TensorFlow, is it then committed for use?

This is on my to-do list as well; let us know how it goes and any pitfalls/traps I should watch out for. I’m currently collecting wake word samples with a voice recording app on my phone, from whoever will contribute.
