How to implement a trained Precise model

Hi all,

As described here: Train your own model, I've trained a Precise model with my own wake word.
Now I have the following files:

  • <my_file>.pb
  • <my_file>.pb.params

Where should I put these? I've searched for similar existing files to replace, but didn't find anything.

Any suggestions?
Thank you in advance.

This needs to be specified in your mycroft.conf file - see here for a default

and here for documentation on mycroft.conf.

You will also need to host the model somewhere - at the moment we use rawgit, but this service is shutting down.
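
Roughly, the relevant bit of your mycroft.conf would look something like this - a sketch only, with the URL as a placeholder for wherever you end up hosting the packaged model:

// Settings used for any precise wake words
"precise": {
    "model_url": "https://<your-server>/{wake_word}.tar.gz"
}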

@Wolfgange - is there anything I’ve missed?

Best, Kathy


Is it possible to develop a repository for community-developed wake words? It might be handy for people who want to change their wake word but not train their own model.

Thank you, it's working!
I just edited these lines to point at my own file-serving service:

// Settings used for any precise wake words
"precise": {
    "dist_url": "https://github.com/MycroftAI/precise-data/raw/dist/{arch}/latest",
    "model_url": "https://<my_own_server>/{wake_word}.tar.gz"
},

The files are then downloaded into the ~/.mycroft/precise folder in the home directory.
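
If you want to double-check that the download worked, you can list that folder (the exact filenames depend on your wake word name):

ls ~/.mycroft/precise/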


Excellent! Thanks for confirming @Hexliath

There’s a recent update to precise to make packaging the models up a bit easier as well.

So, there are two ways to publish/use the model:

  • Locally: Add {"hotwords": {"my hotword": {"local_model_file": "~/someplace/model.pb"}}} to ~/.mycroft/mycroft.conf. This points it to a local file when you use the wake word my hotword (you can specify the wake word in the config or via the web UI). See the sketch after this list for a fuller example.
  • Remote: Package the model as my-hotword.tar.gz and my-hotword.tar.gz.md5 on a GitHub repo. By default, it looks here, but you can change that using {"precise": {"model_url": "https://<server-or-raw-github-repo-url>/{wake_word}.tar.gz"}} (Note: keep the {wake_word} part as is; it will be replaced by the program).
    • Also: for an easy way to deploy a model.net into a .tar.gz and .md5 on a public GitHub repo, you can use the export.sh script in the repo. I just pushed a new version so you can do the following: ./export.sh models/my-model.net https://github.com/myusername/my-data-repo branch-name to upload the tar.
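
For reference, a minimal sketch of the local option in ~/.mycroft/mycroft.conf, assuming a wake word called "my hotword" and a model at ~/someplace/model.pb (both placeholders):

{
  "listener": {
    "wake_word": "my hotword"
  },
  "hotwords": {
    "my hotword": {
      "module": "precise",
      "local_model_file": "~/someplace/model.pb"
    }
  }
}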

Hello! We are trying to achieve the same thing. We trained the model and created the .pb file. From my understanding, what has to be done now is to prevent the configuration from being overridden by the web server, and then tell mycroft.conf which model we want to use. We are working in a .venv and Mycroft runs in a Docker container. Our first problem is that there are many mycroft.conf files in the system. At the moment we are editing the one inside the .venv, located at /home/pi/mycroft-core/mycroft/configuration. This file contains a section that says: if you want to use a local model, edit this line. We did this, but without success. In the system there is an old configuration with a custom word via PocketSphinx that is not used, a hotword named finance-cloud via Precise that has no trained model, and now a trained one via Precise named hello_financecloud. Unfortunately, there was a typo in the .pb model's name (hello_fiancecloud.pb). I am attaching a screenshot of the respective .conf file in mycroft/configuration.

Hi, have you found a solution? I am facing the same problem. The hotword config is overwritten by some Mycroft server setting and the standard Mycroft wake word is being used. I have a perfect model trained for Precise, but I cannot use it :frowning:

This is now available: https://github.com/MycroftAI/Precise-Community-Data

For a local wake word config, you have to add the listener config and hotword bits. If it's system-wide, you can put it in the /etc/mycroft/mycroft.conf file if you prefer:

    "wake_word": "yourwordhere",
    "wake_word_upload": {
      "disable": false,
      "url": "http://127.0.0.1:4000/precise/upload"
    },
  // Hotword configurations
  "hotwords": {
    "yourwordhere": {
        "module": "precise",
        "phonemes": "U R FO NE M Z HE R E",
        "threshold": "1e-30",
        "local_model_file": "/home/pi/.mycroft/precise/yourwordhere.pb"
        }
    }

Thanks for the prompt answer. I have the following section in the config:
“listener”: {
  “device_name”: “pulse”,
  “sample_rate”: 16000,
  “record_wake_words”: true,
  “save_utterances”: true,
  “wake_word_upload”: {
    “disable”: true,
    “url”: “https://training.mycroft.ai/precise/upload”
  },
  “mute_during_output”: true,
  “duck_while_listening”: 0.3,
  “phoneme_duration”: 120,
  “multiplier”: 1.0,
  “energy_ratio”: 1.5,
  “wake_word”: “hallo scotty”,
  “stand_up_word”: “wake up”
},
“precise”: {
  “dist_url”: “https://github.com/MycroftAI/precise-data/raw/dist/{arch}/latest”,
  “model_url”: “https://raw.githubusercontent.com/MycroftAI/precise-data/models/{wake_word}.tar.gz”
},
“hotwords”: {
  “hallo scotty”: {
    “module”: “precise”,
    “lang”: “en-us”,
    “local_model_file”: “~/Projects/mycroft-precise/hallo-scotty2.pb”,
    “sensitivity”: 0.5,
    “trigger_level”: 2
  },
  “wake up”: {
    “module”: “pocketsphinx”,
    “phonemes”: “W EY K . AH P”,
    “threshold”: 1e-20,
    “lang”: “en-us”
  }
},

But it does not work. In the PreciseHotword init config I still see:
{'module': 'precise', 'phonemes': 'HH EY . M AY K R AO F T', 'threshold': '1e-90'}
Where does it come from? I cannot see any 'phonemes' definition like that in my config.

I am using mycroft.conf in: mycroft-core/mycroft/configuration/mycroft.conf

Try adding the wake word to the listener config.
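
i.e. making sure something like this is present (using the wake word from the config above):

"listener": {
  "wake_word": "hallo scotty"
}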

I have created a skill for it. There are still some bugs, but it gets better every day: Wake Word Skill Trainer. All settings can also be set in it.

There is 'wake_word' in the listener config, at the end.

Hm, so my local config is always overwritten by the remote config. When I commented out the remote config in the Configure class, the configuration suddenly contained the wake_word I defined. Something tells me that this is not the right way…

Not sure if they’re in the original, or if it’s just a formatting thing on the forums here, but the quote marks seem to be non-standard characters. The trailing comma at the end will cause problems, so make sure that’s cut out. Also I’m not sure whether you’re running this on a desktop, Picroft or Mark 1, but I’d try using the full path for the model file location just to be safe.

{
  "listener":{
    "device_name":"pulse",
    "sample_rate":16000,
    "record_wake_words":true,
    "save_utterances":true,
    "wake_word_upload":{
      "disable":true,
      "url":"https://training.mycroft.ai/precise/upload"
    },
    "mute_during_output":true,
    "duck_while_listening":0.3,
    "phoneme_duration":120,
    "multiplier":1.0,
    "energy_ratio":1.5,
    "wake_word":"hallo scotty",
    "stand_up_word":"wake up"
  },
  "hotwords":{
    "hallo scotty":{
      "module":"precise",
      "lang":"en-us",
      "local_model_file":"/home/user/Projects/mycroft-precise/hallo-scotty2.pb",
      "sensitivity":0.5,
      "trigger_level":2
    },
    "wake up":{
      "module":"pocketsphinx",
      "phonemes":"W EY K . AH P",
      "threshold":1e-20,
      "lang":"en-us"
    }
  }
}

If there's anything wrong in the JSON, the file will be ignored, so it's good to use a validator tool.
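
For example, a quick check of the user config from the command line (assuming it's plain JSON; if you've added // comments, strip them first, since a strict validator will reject them):

python3 -m json.tool ~/.mycroft/mycroft.conf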

I am assuming this is at ~/.mycroft/mycroft.conf?

The Configure class merges all available configs, including the remote one. When I commented out the remote one, my wake_word started appearing in the config. If I had a problem with the JSON, there would be an error immediately. I learned that lesson a few weeks ago :slight_smile:

SOLVED. I was modifying the default config. The default config is overwritten by the remote config, which is overwritten by the user config… So putting the listener config into the user config solved the problem. Ugly…
