Mycroft Community Forum

"Porcupine" wake-word detection extremely sensitive

Hi,

I tried training a custom Precise wake word, but after struggling for a while I decided to try out other options.

  1. I trained a Porcupine model, and on their web interface the model performs very well.
  2. I installed the mycroft-porcupine-plugin by Åke Forslund.
  3. I then added the new config to my user config.

Regardless of the sensitivity threshold I set, Mycroft now wakes up at every sound, even faint background noise.

Thanks for any insights.

Hey there, can you post the config you have used?

@forslund can you confirm that plugin is working?

{
  "max_allowed_core_version": 21.2,
  "listener": {
    "wake_word": "subject"
  },
  "hotwords": {
    "subject": {
      "module": "porcupine_wakeword_plug",
      "local_model_file": "/home/[user]/.mycroft/[model].ppn",
      "sensitivity": 0.1,
      "trigger_level": 9
    }
  }
}

I had the same problem when using the nyumaya or snowboy plugin. The fix was to change lines 306-307 of mycroft-core/mycroft/client/speech/listener.py from this:

self.microphone = MutableMicrophone(device_index, rate,
                                    mute=self.mute_calls > 0)

To this:

self.microphone = MutableMicrophone(device_index, rate,
                                    mute=self.mute_calls > 0,
                                    chunk_size=2048)

For some reason the default chunk size of 1024 caused extreme sensitivity. I wonder if your problem has the same cause.
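I don't know the plugin's internals, but Porcupine-style engines process audio in fixed-length frames, so a mismatch between the microphone chunk size and the engine's expected frame length can misalign detection. A minimal sketch of regrouping mic chunks into fixed frames (the `frames` helper, the 512-sample frame length, and the sample data are all illustrative, not the plugin's actual code):

```python
def frames(stream, frame_length):
    """Regroup arbitrarily sized audio chunks into fixed-length frames.

    Porcupine-style engines expect exactly frame_length samples per call;
    feeding them the microphone's raw chunks directly can misalign the
    detection windows.
    """
    buf = []
    for chunk in stream:
        buf.extend(chunk)
        while len(buf) >= frame_length:
            yield buf[:frame_length]
            buf = buf[frame_length:]

# Illustrative: mic delivers 1024-sample chunks, engine wants 512-sample frames
mic_chunks = [[0] * 1024, [0] * 1024, [0] * 512]
print(sum(1 for _ in frames(mic_chunks, 512)))  # → 5
```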

Hi, thanks for sharing. Unfortunately that didn't work for me. Now I just get a stream of "high audio latency" messages in the client, and the hotword is as sensitive as before.

Posting the solution from @forslund (who was very helpful) here, in case somebody else ends up in the same situation. He updated the plugin (now 0.2.0) to work with the latest version of Porcupine, and the user config should look like this:

{
  "listener": {
    "wake_word": "[your wakeword]"
  },
  "hotwords": {
    "[your wakeword]": {
      "module": "porcupine_wakeword_plug",
      "_module": "porcupine",
      "keyword_file_path": "[path to model]"
    }
  }
}
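One gotcha: the file must be valid JSON with plain ASCII double quotes (forum posts often render them as curly quotes, which the parser rejects). A quick sanity check you can run on your config, with an illustrative wake word and model path substituted in:

```python
import json

# Illustrative config; "computer" and the .ppn path are placeholders
conf = '''
{
  "listener": {
    "wake_word": "computer"
  },
  "hotwords": {
    "computer": {
      "module": "porcupine_wakeword_plug",
      "_module": "porcupine",
      "keyword_file_path": "/home/user/.mycroft/computer.ppn"
    }
  }
}
'''

parsed = json.loads(conf)  # raises json.JSONDecodeError on curly quotes etc.
print(parsed["hotwords"]["computer"]["module"])  # → porcupine_wakeword_plug
```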

Resolved! Thanks everyone!