For the next dev sync, so @ken-mycroft can beat his own record of fastest status update…
Fairly sure Ken’s seen this. However I believe the intention is to run it with the leanest possible tf-lite runner, rather than the current approach of using the pre-built binary. I’ll mention it again just in case he missed it though.
Huge thanks also to andreselizondo-adestech who did the bulk of the work porting Precise to TF 2.x!
Just had a good laugh at it; @ken-mycroft “Busy working on precise-lite, so leave me alone / to it”
Anyhow, I believe Ken and Jarbas already had a chat about it.
Bear in mind that the precise-lite wake word plugin actually runs on only a very minimal precise-lite-runner, without the full precise binary. So yes, very minimal, although it still requires tflite_runtime to be installed, of course.
However, simply pip-installing the precise-lite wake word plugin takes care of all of that: it pulls in precise-lite-runner, which automagically installs the right tflite_runtime package for the CPU/architecture it runs on.
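For reference, once the plugin is installed, switching to it should just be a matter of pointing the hotwords section of mycroft.conf at the new module. A rough sketch — the "precise-lite" module name comes from this thread, but the exact keys (like "model") and the model path are illustrative and may differ per plugin version:

```json
{
  "hotwords": {
    "hey mycroft": {
      "module": "precise-lite",
      "model": "/path/to/hey_mycroft.tflite"
    }
  }
}
```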
But you are right, special thanks to andreselizondo for initiating it all
Also, special thanks for all the work and research that Bart Moss and the SecretSauceAI project/community have put in: https://github.com/secretsauceai/secret_sauce_ai/wiki/Wake-Word-Project
Yes, I spoke with Jarbas. The concern was that I had already written a runner, included in client/speech/tensorflow, so it was part of the stack. I was not aware of the runner Jarbas put up until this last weekend. That runner is a module in a separate repository, so the question was: do we want it as a module, or as part of the base install (so that if the internet is down we still have a working wake word, if that is a desirable feature)? We concluded that we want a module in one of our repositories which gets installed as part of the build process.

Another question was whether we want a Precise-specific runner, or whether we want to abstract out vectorization and support any tflite model. There was also the question of whether we want to support TF proper (along with Keras .net models) or just tflite. It was determined that we just want Precise on tflite for now. We will leave the old binary install working, but it will not install at all if we change the default module in the config to precise-lite. In other words, by default you get Precise running on tflite, but you could switch back to the old binary.

One final note: the original code I saw ran tflite and .net models using full-blown TensorFlow, which we did not want, so my runner specifically used tflite, which I did not see in the original code that was shared with me. That code used tensorflow, not tflite_runtime. Perhaps, when I was given a link to the code last month, I missed the part that was using tflite_runtime rather than tensorflow. I hope all of that makes sense.
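To make the "Precise-specific runner vs. generic tflite runner" question concrete, here is a purely illustrative sketch of the split being discussed: keep the Precise-specific vectorization and the generic TFLite inference behind separate seams, so any .tflite hotword model could be dropped in. All class and method names here are hypothetical (not the actual mycroft-core or precise-lite-runner API), and the inference step is stubbed with a plain callable so the sketch runs without tflite_runtime installed:

```python
# Hypothetical sketch, not the real mycroft-core code.

class Vectorizer:
    """Precise-specific step: raw audio bytes -> feature vector.
    (Real Precise computes MFCCs over a sliding window; this is a stand-in.)"""
    def vectorize(self, audio_chunk: bytes) -> list:
        return [b / 255.0 for b in audio_chunk]


class TFLiteRunner:
    """Generic inference step. In real code this would wrap
    tflite_runtime.interpreter.Interpreter (allocate_tensors,
    set_tensor, invoke, get_tensor); here it takes any callable."""
    def __init__(self, model):
        self.model = model

    def predict(self, features: list) -> float:
        return self.model(features)


class HotwordEngine:
    """Glue layer: the only part that knows about both halves."""
    def __init__(self, vectorizer, runner, threshold=0.5):
        self.vectorizer = vectorizer
        self.runner = runner
        self.threshold = threshold

    def found_wake_word(self, audio_chunk: bytes) -> bool:
        score = self.runner.predict(self.vectorizer.vectorize(audio_chunk))
        return score > self.threshold


# Stub "model": mean activation of the features, just for demonstration.
stub_model = lambda feats: sum(feats) / len(feats)
engine = HotwordEngine(Vectorizer(), TFLiteRunner(stub_model))
print(engine.found_wake_word(bytes([250] * 10)))  # loud chunk -> True
print(engine.found_wake_word(bytes([10] * 10)))   # quiet chunk -> False
```

The point of the seam is that swapping Precise for any other tflite model only touches the Vectorizer, while supporting full TF or Keras .net models later would only touch the runner.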
And my apologies as there is often a great deal of back and forth regarding design decisions and long term planning which may not show up just in our dev syncs. We need to get better at that and being more transparent on some of these decisions.
Well @ken-mycroft I just tested out the precise-lite wake word on OpenVoiceOS and the performance is amazing!!!
@JarbasAl said that has a lot, if not everything, to do with the model that you created. So a big thank you and shout-out for that.
My Mark 2 dev kit is about 1 to 1.5 meters away from me on my table, and even when I say "Hey Mycroft" at my lowest possible volume (so low I can't even understand myself), the device actually starts listening to me!
@ken-mycroft A quick showcase of the performance utilizing precise-lite-runner / TensorFlow Lite runtime 2.5.
Really worth the effort of including it, in whatever way you decide to include it going forward, if you ask me.
Please forgive me, as I am not always sure of what's being asked and shown. If I am correct, you are demonstrating the tflite model using TensorFlow. Is that correct, or is this using the TensorFlow Lite runtime? If it is demonstrating the use of TensorFlow, are you encouraging me to include support for TensorFlow and Keras models? If so, I agree; however, I was thinking of first releasing just a tflite runner and then adding support for .pb and .net files. But first I want to get some eyes on my tflite runner, as I did things a bit differently and am looking for feedback from the community.
Apologies @ken-mycroft I am really not the most perfect communicator.
That is indeed the performance of ONLY the precise-lite-runner (no full precise-engine, nor precise-engine-lite).
So basically what you had in mind to release first.
We are more than happy to help you out, look at your code, and test it if you want us to, but I guess for that it will be best to reach out to @JarbasAl, as he split the precise-lite-runner out of the precise-engine-lite code.