Replacing cloud STT services with local ones


A simple solution could be:

  1. download only the “lclient” directory # local client
  2. mv client rclient # remote client
  3. ln -s lclient client

This way, switching from one client to the other is as easy as pointing the symlink at “rclient” or “lclient”.


Hello @PasabaPorAqui,

What about the subfolder lspeech within your client folder (lclient)? Is it used automatically, or do I need symbolic links there too?

I have applied PR184 from the mycroft-core repository, which may also modify client files. Maybe that was not a good idea.


@PasabaPorAqui I guess you are starting lspeech/ within the ‘ voice’ command?

There have been so many changes to mycroft-core in the last two months that I would advise anyone who wants to try your code to clone your complete repository. Maybe you can update your repository sometime in the future to match those changes?

First of all, we need to confirm that this works well for others too.


Updated my branch with the latest official source; the first regression test passed.

The differences between my branch and the official one are:

  1. addition of the “mycroft/client/lspeech” directory containing the local client

  2. in, changed the line:

"voice") SCRIPT=${TOP}/mycroft/client/speech/ ;;

to:

"voice") SCRIPT=${TOP}/mycroft/client/lspeech/ ;;

Changes not related to local STT:

  1. mycroft/skills/: an optional base class for skills, with some improvements.

  2. msm is removed (to enforce locality)

I do not recommend cloning my branch; I do not guarantee any stability or continuity. Clone the official one and merge the previous changes manually.

Changes in configuration (in bold, the ones mandatory for local STT):

"lang": "es",

"url": "",
"update": false

"wake_word": "vivienda",
"threshold": 1e-20,
"standup_word": "vivienda",
"standup_phonemes": "b i b i e n d a",
"standup_threshold": 1e-30,
"producer": "pocketsphinx",
"grammar": "jsgf",
"wake_word_ack_cmnd": "aplay /home/pma/actual/tools/R2D2a.wav",
"msg_not_catch": false,
"debug": true

"module": "espeak",
"espeak": {
"lang": "es",
"voice": "m1"
}

As you can see, in order to increase the recognition success rate, I initially use a restricted (non-free-speech) grammar, stored in the file “es.jsgf”. Skills can switch this grammar to any other during their execution. Its current content is:

#JSGF V1.0;

grammar prueba;

public <prueba> = <cmnd1> | <cmnd2> | <cmnd3> | <cmnd4> | <cmnd5> ;
<cmnd1> = apaga la música ;
<cmnd2> = pon música ;
<cmnd3> = avisa <when> | avisa ;
<cmnd4> = graba un mensaje ;
<cmnd5> = televisión pon canal <n_0_100> ;
<when> = en <n_0_100> ( minuto | minutos ) ;

<n_0_9> = cero | un | dos | tres | cuatro | cinco | seis | siete | ocho | nueve ;
<n_10_29> = diez | once | doce | trece | catorce | quince | dieciséis | diecisiete | dieciocho | diecinueve | veinte | veintiuno | veintidós | veintitrés | veinticuatro | veinticinco | veintiséis | veintisiete | veintiocho | veintinueve ;
<n_10n> = treinta | cuarenta | cincuenta | sesenta | setenta | ochenta | noventa ;
<n_0_100> = <n_0_9> | <n_10_29> | <n_10n> [y <n_0_9>] ;
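
For context, a grammar like the one above reaches pocketsphinx through its command-line style options: when a JSGF grammar is given via "-jsgf", it replaces the statistical language model ("-lm"). Here is a minimal, hypothetical sketch of how the decoder options might be assembled (the function name and directory layout are my own, not the actual lspeech code):

```python
# Hypothetical sketch: collect the option strings that would be handed to
# pocketsphinx's Decoder. A JSGF grammar ("-jsgf") and a statistical language
# model ("-lm") are mutually exclusive ways of constraining recognition.
def build_decoder_options(model_dir, lang, jsgf_path=None):
    opts = {
        "-hmm": "%s/%s" % (model_dir, lang),        # acoustic model directory
        "-dict": "%s/%s.dict" % (model_dir, lang),  # pronunciation dictionary
        "-logfn": "/tmp/pocketsphinx.log",          # send decoder noise to a log file
    }
    if jsgf_path:
        opts["-jsgf"] = jsgf_path                   # restricted command grammar
    else:
        opts["-lm"] = "%s/%s.lm" % (model_dir, lang)  # free-speech language model
    return opts

# With pocketsphinx installed, these options would be applied roughly like:
#   from pocketsphinx import Decoder
#   config = Decoder.default_config()
#   for key, value in build_decoder_options("model", "es", "es.jsgf").items():
#       config.set_string(key, value)
#   decoder = Decoder(config)
```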


Thank you for your help, which I appreciate, but this is turning out to be a big mess for me :-)

  • It starts with installing pocketsphinx using mycroft-core git:

    • One has to uncomment the pocketsphinx part in the so that it installs correctly into the mycroft virtualenv, but you will get further error messages when installing and running
      • you have to uncomment the TOP directory in scripts/, otherwise it will crash
      • and the speech recognizer is now located in a subfolder called ‘recognizer’, which you need to modify there too
  • with the default settings (linked to client/speech), hot word recognition works, but everything else fails, as there is no working local pocketsphinx configuration

  • using your lspeech and configuration modifications 1) to 3): this does not work for the English language either:

    • one has to link the *.dict and *.lm files to those found in the pocketsphinx model folder, but I am not sure, as the lm is called *.lm.bin in this case. Do we need to name them en-us.{dict,lm,bin} or en.{dict,lm,bin}?
    • it looks like your pocketsphinx audio consumer has Spanish hardcoded?

Independent of that, I am getting error messages like:

Traceback (most recent call last):
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 196, in <module>
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 188, in main
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 250, in run 
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 209, in start_async
    self.config, self.lang, self.state, self)
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 76, in __init__
    self.wake_word_ack_cmnd = s.split(' ')
AttributeError: 'NoneType' object has no attribute 'split'

I am still sorting things out. It would be cool if they would at least make local recognition work for the English language in their own git code.


Yes, adding new features to any piece of software can be a bit messy; it is only for programmers who decide to spend enough time on it. The same goes for using and testing “beta” features like this one.

About your points:

  • the first one is about installing official mycroft; it is better to handle that as it is. As pocketsphinx is always used by the wake word feature, it should be installed by default.

  • I’ve developed and tested local STT in Spanish. English must follow the same steps that were initially done for Spanish and have now been done for German: install the pocketsphinx package for the English language, etc.

  • the last point is caused by the lack of a “wake_word_ack_cmnd” entry in the config. I fixed the error to make this entry fully optional. This entry tells mycroft how to acknowledge the wake word. In my case, after the wake word is recognized, mycroft makes the famous R2D2 sound.
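
The optional-entry fix described above can be sketched like this (the class and config handling here are my own simplification, not the actual lspeech code): the crash happened because a missing config entry yields None, which has no split method.

```python
import subprocess

# Hedged sketch of making "wake_word_ack_cmnd" optional, so a missing entry no
# longer crashes with "'NoneType' object has no attribute 'split'".
class WakeWordAck(object):
    def __init__(self, config):
        cmnd = config.get("wake_word_ack_cmnd")  # None when the entry is absent
        self.wake_word_ack_cmnd = cmnd.split(" ") if cmnd else None

    def acknowledge(self):
        # Run the acknowledgement command only when one was configured,
        # e.g. playing the R2D2 sound with aplay.
        if self.wake_word_ack_cmnd:
            subprocess.call(self.wake_word_ack_cmnd)
```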


@Thorsten: Really good additions to the wiki page. It is a pity we cannot change the title.

Once it has been tested a bit more, we can think about adding a new chapter about the local client or opening a dedicated wiki page.


@PasabaPorAqui Thank you for this easy fix in your code.

I need to figure out the issue below and probably need more debug information. With the English language model, it could be related to the dump format used (*.lm.bin), as German and Spanish do not use the dumped format:

ERROR: "pocketsphinx.c", line 233: Cannot redirect log output
Traceback (most recent call last):
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 196, in <module>
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 188, in main
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 250, in run
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 209, in start_async
    self.config, self.lang, self.state, self)
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 82, in __init__
    self.decoder = Decoder(self.create_decoder_config(model_lang_dir))
  File "/home/user/.virtualenvs/mycroft/local/lib/python2.7/site-packages/pocketsphinx/", line 271, in __init__
    this = _pocketsphinx.new_Decoder(*args)
RuntimeError: new_Decoder returned -1

On the other hand, the wiki seems to refer to standard Linux installation paths for pocketsphinx, like /usr/share/pocketsphinx. Why not use mycroft-core/pocketsphinx-python/pocketsphinx/ instead, as all installations seem to be available locally?

P.S.: The above error message is discussed in the thread “Mycroft on raspberry pi gives me this error and stops working”.


The previous error message (“new_Decoder returned -1”) and call stack appear whenever there is any error initializing pocketsphinx.

In this case, the error is due to the “-logfn” line in the file “”, which points to a path that does not exist in your environment (“scripts/logs/decoder.log”). Fixed; it now uses “/tmp/pocketsphinx.log”, whose directory should exist on any Linux machine.

About “/usr/share/…”: some official pocketsphinx packages install into this directory. However, those files are not really used; the wiki explains that they must be copied/linked to certain mycroft paths.

If you think the wiki explanation should be improved, feel free to edit it.


@PasabaPorAqui Thanks, the log is useful for the pocketsphinx side; the grammar stuff is not logged there.

I will certainly contribute to the wiki once things work. I think all the language wikis should be merged into the one @Thorsten made, with a single table for the pocketsphinx downloads.

Now, on to new issues:

Traceback (most recent call last):
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 196, in <module>
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 188, in main
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 250, in run
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 209, in start_async
    self.config, self.lang, self.state, self)
  File "/home/user/mycroft-core/mycroft/client/lspeech/", line 84, in __init__
    self.decoder.set_keyphrase('wake_word', self.wake_word)
  File "/home/user/.virtualenvs/mycroft/local/lib/python2.7/site-packages/pocketsphinx/", line 403, in set_keyphrase
    return _pocketsphinx.Decoder_set_keyphrase(self, name, keyphrase)
RuntimeError: Decoder_set_keyphrase returned -1

2017-07-09 19:24:13,215 - - INFO - Connected
2017-07-09 19:24:13,215 - Skills - DEBUG - {"type": "enclosure.reset", "data": {}, "context": null}

  File "/home/user/.virtualenvs/mycroft/local/lib/python2.7/site-packages/requests/", line 382, in prepare_url
    raise MissingSchema(error)
MissingSchema: Invalid URL '/v1/device//setting': No schema supplied. Perhaps you meant http:///v1/device//setting?

The second one seems related to setting “url” to “”. Currently, I am switching between English and German. The former uses a binary language model (*.lm.bin) and might lead to issues; for the latter, there are no language files for the skills (yet).


The keyphrase specified in the “wake_word” entry of the configuration is not accepted by pocketsphinx. Check that all words in the keyphrase exist in the pocketsphinx dictionary (.dict file). Add the missing ones (“mycroft”, perhaps?) if necessary, or select another phrase.

For errors in calls to the pocketsphinx library, it is useful to check its log file (“/tmp/pocketsphinx.log”), which usually contains a better description of the error.


I am surprised that ‘mycroft’ is not in the pocketsphinx dictionary. I changed it to a known keyword. Now I run into the issue below with lspeech, where nothing happens; with speech, I at least get keyword recognition. For test purposes, I only started the mycroft services and voice, but no skills:

2017-07-12 21:57:46,318 - mycroft.client.lspeech.pocketsphinx_audio_consumer - DEBUG - device_index=None
2017-07-12 21:57:46,328 - mycroft.session - INFO - New Session Start: 69d66955-db1d-4d63-bf67-ef182ede99eb
2017-07-12 21:57:46,328 - mycroft.client.lspeech.pocketsphinx_audio_consumer - DEBUG - Waiting for wake word...
2017-07-12 21:57:46,329 - - INFO - Connected

I assume lspeech has issues recognizing the mic when several audio devices are available. On the other hand, one has to modify your audio consumer manually, for instance for the dictionary, as it always uses es.dict, which can be seen in the pocketsphinx log.


@PasabaPorAqui Maybe we can try to break down the arising issues on the mycroft IRC channel, or could you try the en-us language with your own code and without the JSGF stuff?

Currently I don't know how to handle the different issues:

  • mycroft does not seem to have implemented the basic conversation stuff for other languages/localizations yet
  • I don't know how to proceed with the skills; maybe I'll drop most of them, check only one or two, and translate them to German myself later
  • how to succeed with lspeech; maybe read more on JSGF…


Mozilla has an interesting open-source STT they are working on, if anyone is interested and wants to check it out.


So many new speech recognition projects out there, MyCroft is not alone :slight_smile:


Replaced the hardcoded “es.dict” with “<lang>.dict”, and some other similar cases.
Thanks for noticing it.
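
The fix above amounts to deriving the model file names from the configured language. A minimal illustration (the function name and directory layout are my own assumptions, not the actual lspeech code):

```python
# Sketch: build model file paths from the configured language instead of
# hardcoding "es.dict" / "es.lm", so "de", "en-us", etc. work unchanged.
def model_files(model_dir, lang):
    return {
        "dict": "%s/%s.dict" % (model_dir, lang),  # pronunciation dictionary
        "lm": "%s/%s.lm" % (model_dir, lang),      # language model
    }
```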


Hi Thorsten,
do you already have the Matrix Voice or Matrix Creator? How does it work with Mycroft?
I’m thinking of buying one too, but everything I have read so far says that it won’t work with Mycroft, so I would be very interested in your experience.
Thanks in advance


Hi @remac.
I’m still waiting for my Matrix Voice to be delivered. I heard that the Matrix Voice does not support PyAudio out of the box, so this is a problem for use in Python-based products.
But that is just theory.


Hi all.
I just received an email update on the Matrix Voice delivery containing really good news I want to share with you :slight_smile:.

In addition, we are finishing our support for Pulse Audio so that you will be able to easily use Google Assistant and several other voice services such as Mycroft and on the MATRIX Voice.


Could you share the Matrix specs?

  • Range of effective voice capture
  • Transmission protocol or interface
  • Type/structure of microphones array
  • Price

I know Amazon is giving public access to the “Echo” microphone array. Does anyone have experience with it?

Thanks a lot.