OpenVoiceOS - A minimal, production-type OS based on Buildroot

I am really interested in both the seamless “Hey Mycroft, do something” solution and probably the DSP processing you mentioned too (I use Snapcast to play audio on the Mycroft device and may need it to process that sound source if that isn’t happening automatically).

Also, would it be best to discuss the CPU load here or as an issue on GitHub?
I notice other things like failing to connect to PulseAudio and comments about a missing Threshold (cc @AIIX)

The CPU usage issue I am seeing may not be related, as I believe the Picroft image is arm64 (I am not quite sure yet, but you can run “uname -a” to check). The platform I am trying to get the Precise Lite runner working on is the armhf architecture, running Python 3.8 on Ubuntu Focal. I have already tried installing libatlas3-base and building both NumPy and SciPy from scratch, but it didn’t seem to help, whereas I have no CPU usage issues on arm64 following the above guide with the same Python and distro combination.

I may have used “Picroft” misleadingly. I’m running vanilla Raspberry Pi OS 10 (Lite) with Mycroft installed on top.
It has the 32-bit armv7l kernel
RpiOS is arm-linux-gnueabihf
Python is 3.7.3

The seamless solution you mention is called “Instant Listening”. It is not (yet) part of mycroft-core, but the code changes are here;


Ah right - so it “just” starts listening at once. I thought it might be integrated into Precise, so that Precise kind of stopped overwriting its ring buffer when it was activated and streamed the post-wakeword sound to be STT’ed.

I guess there is no need for that if the wake recognition → start recording latency is low enough

(and I also suspect that the DSP remove-any-playing-sound feature would help improve STT if it were active?)

precise kinda stopped overwriting its ringbuffer when it was activated and streamed…

The day after it came up, @gez-mycroft and I almost simultaneously arrived at (basically) the same solution in different channels on MycroftAI’s chat =P

That’s a core change, though, as it would require 1) switching to a single, persistent input stream, 2) adding a secondary buffer, so the WW can put a few frames back before the handoff, and 3) rewriting the Mycroft-side routine that handles the listen/record/stop cycle.

A good proposal, and we should all run with it, but it’s more than can be done in a plugin.
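A rough sketch of that handoff, with all names hypothetical: a single persistent input stream feeds a small lookback buffer, and the wake word event promotes those buffered frames to the front of the recording so speech that overlapped the wake word is not dropped.

```python
from collections import deque

class LookbackStream:
    """Hypothetical sketch of the proposed core change: one persistent
    input stream plus a secondary buffer so the wake word engine can put
    a few frames back before the handoff to the recorder."""

    def __init__(self, lookback_frames=15):
        # Secondary buffer: always holds the most recent N frames.
        self.lookback = deque(maxlen=lookback_frames)
        self.recording = []
        self.listening = False

    def feed(self, frame):
        # The stream never stops; frames go to one buffer or the other.
        if self.listening:
            self.recording.append(frame)
        else:
            self.lookback.append(frame)

    def on_wakeword(self):
        # Handoff: seed the recording with the buffered frames.
        self.recording = list(self.lookback)
        self.lookback.clear()
        self.listening = True

    def stop(self):
        # End of the listen/record/stop cycle; return captured audio.
        self.listening = False
        audio, self.recording = self.recording, []
        return audio
```

The point of the secondary buffer is that the frames containing the tail of the wake word (and any immediately following speech) survive the transition instead of being overwritten.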


Worth noting that Instant Listen (above) gets you 99% of the way there in practice. What we’ve just described would be a good “permanent” solution.


Just tested precise-lite within Docker mycroft-core containers on a Raspberry Pi 4B 64-bit, and the result is really impressive. CPU and memory usage on the system are reduced by more than 50%. :scream:

The overall experience with “Hey Mycroft” has been improved! :heart_eyes:

CPU usage for mycroft_voice container (6 hours sample):

Memory usage for mycroft_voice container (6 hours sample):

Great job guys!


Development is a bit slow at the moment. Anyhow, slowly getting my shit together, so playing the catch-up game with the other devs on the OpenVoiceOS team.

This is what @JarbasAl has been up to lately…

Still quite some work to do on the OS level from my side to support all the different aspects, performance and fine-tuning. Focus for now is on AirPlay and Spotify support out of the box.


Same same, but different. Spotify MPRIS support within the audio service.


November 8th casual status report:

Core spun off from Chatterbox/HolmesV
As some of you know, OpenVoiceOS runs a derivative of Mycroft-core called Holmes, which is the version at the heart of Chatterbox, a version of Mycroft for kids. Jarbas has been working for Chatterbox for a long time, and their core is sometimes ahead on community PRs, so it was a no-brainer for us to build around Holmes.

Some of our employers, however, feel weird about seeing us work for other software companies. It’s doable, but people have to sign stuff, and that’s a hassle. Because of that, we weren’t able to work together as closely on Holmes as we usually do.

Hence, we’ve moved our core to the previously empty repository at OpenVoiceOS/OVOS-core (possibly unstable during migration), and will now accept PRs directly instead of going through Chatterbox.

UI and Media Handling

Have you seen our search bar?

That’s right! A search bar! For the Common Play system, that is. We don’t want to overlook the non-voice elements. OpenVoiceOS offers full touchscreen support, and selecting media for playback is no exception.

We’re still working on virtual keyboard support, but once we have one, the rest is ready. If you have a USB keyboard attached to your smart speaker (why?) you’re already good to go! Simply tap and type, and you’re searching your media Skills’ catalogs for something to play.

Meanwhile, as I type this, @AIIX has just shown us his latest prototype for notifications, as in toast notifications. You know, like on your phone or desktop: the little thingies that pop up, you click 'em and they take you to the app, sometimes with the logo of the app that threw the notification.

A critical component of any self-respecting OS. OVOS is overflowing with self-respect.

Je suis l’anguage support

Pleased to meet you. Jarbas’ recent progress on multiple wake word support, made easier by Precise Lite’s general awesomeness, brings us one massive step closer to a truly multilingual assistant. It’s the less-preferred solution, and surely temporary, but it should hypothetically work today: just start talking in the language you want to speak! And by hypothetically, I mean, I am told Jarbas has actually done it, but I haven’t tried it myself.
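To make the idea concrete, here is a minimal, purely hypothetical sketch of how per-language wake words could select the STT language. None of these names come from the actual implementation; they just illustrate the "start talking in the language you want" mechanic.

```python
# Hypothetical config: each wake word carries a language tag.
WAKEWORDS = {
    "hey_mycroft_en": {"lang": "en-us"},
    "hey_mycroft_pt": {"lang": "pt-pt"},
    "hey_mycroft_de": {"lang": "de-de"},
}

def stt_lang_for(wakeword, default="en-us"):
    """Pick the STT language based on which wake word fired.

    Unknown wake words fall back to the default language."""
    return WAKEWORDS.get(wakeword, {}).get("lang", default)
```

With multiple wake word engines running in parallel, whichever one triggers determines the language the rest of the pipeline uses for that utterance.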

So, nothing to show yet. I just wanna hype it a little bit. I’ll hype it a little bit louder next time, and then we’ll show you.


As you guys might remember, years ago I installed the pure Mycroft version on several Raspis throughout my flat. I was enjoying that it was also an easy way for me to get Snapcast on all those devices. Well, by now Mycroft doesn’t work anymore - it doesn’t update, and the installed version is veeeery old. So now the whole system only exists as distributed Snapcast players. That just works, so I did not change anything for a long time. But actually, I would like to get Mycroft working again, so I would like to install the OVOS Mycroft edition on the Raspis.

Now my questions:

  • Can I still easily activate the snapcast player services on the devices?
  • Do you (plan to) support …? This would be very handy, as this is something else the Raspis could do while they are around anyhow…
  • Raspi 3B+ and 4 both work, right?
  • Should I rather wait for the next proper release? :wink:

Thanks for an answer and your dedication in this project!

Already communicated within the OpenVoiceOS - Development Matrix channel but going to leave this here as well.

The Raspberry Pi 3 version of OpenVoiceOS is on par with the Raspberry Pi 4 version, and an early development release has been pushed for the “brave” people to give it a try.

@Alcapond and others: in the upcoming days the rpi3 and rpi4 branches are going to be merged, and a new Raspberry Pi 4 version will be released based on this same code base. Going forward, both rpi3 and rpi4 development images will be released.

Below are some quick-and-dirty performance indicators of the device running.

OpenVoiceOS - Running on a rpi3b with GUI running.

OpenVoiceOS - Running on the same rpi3b without the GUI running (Headless)

As you can see, more than enough CPU power. The 1 GB of memory is about enough with all default skills installed. Perhaps with a lot more skills installed, or with heavy skills, there might be some task waiting, but time will tell.

We have to say that at boot, 1/3 of available memory is converted into zram compressed swap (in memory) to squeeze a little bit more memory out of it.
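For the curious, the generic recipe behind such a zram swap looks roughly like this. The device name, compression algorithm and size here are illustrative (1/3 of a 1 GB Pi is about 341 MB), not necessarily what the image itself uses.

```shell
# Generic zram swap recipe (run as root); all values are illustrative.
modprobe zram num_devices=1
echo lz4 > /sys/block/zram0/comp_algorithm   # pick a compression algorithm
echo 341M > /sys/block/zram0/disksize        # ~1/3 of a 1 GB Pi's RAM
mkswap /dev/zram0
swapon -p 100 /dev/zram0                     # high priority: use zram before disk swap
```

Because the pages are compressed in RAM, the effective memory gain depends on how compressible the workload is; text-heavy Python processes tend to compress well.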

If you have a Raspberry Pi 3 lying around and some time at hand, please feel free to give it a spin. As with the rpi4 images, you can flash and boot this image from both an SD card and a USB drive; however, as the rpi3 only has USB 2.0, we recommend a proper SD card.


It has been a while again since the last update, so it is about time to show and tell you guys what we have been up to lately. Development still goes slower than we all would have wanted, but we are all just juggling the little free time we have.

MPRIS Support

In a previous post we already showcased the MPRIS listening architecture that we implemented within the audio architecture of OpenVoiceOS.

A quick reminder of what that was all about;


Basically, MPRIS-supported players and/or skills such as Spotify will be picked up by OpenVoiceOS. So when you start to play something on your mobile device, OpenVoiceOS picks up what is playing, shows that information on its screen, and allows it to be controlled by either the GUI or voice.

However, we have now also included MPRIS broadcasting support, making the audio architecture of OpenVoiceOS an MPRIS player itself. This means that if you start playing something on OpenVoiceOS through whatever skill (as long as it uses the OCP system), it can be controlled over MPRIS.

This allows things like, for instance, KDE Connect (which will soon be included within the image as well) to control the media player of OpenVoiceOS. As an example, KDE Connect on your Android mobile device;

This will also let your OpenVoiceOS device mute its audio when your phone rings. :smiley:

Different MPRIS enabled players, including OpenVoiceOS all connected and talking to each other.
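For reference, the object path and interface names below are the standard MPRIS2 ones; the small wrapper just illustrates what “controlled by MPRIS” means on the wire (it builds a dbus-send command line rather than talking to the bus directly).

```python
# Standard MPRIS2 D-Bus names (these come from the MPRIS specification).
MPRIS_OBJECT = "/org/mpris/MediaPlayer2"
PLAYER_IFACE = "org.mpris.MediaPlayer2.Player"

def mpris_command(bus_name, method):
    """Build a dbus-send invocation for an MPRIS player method
    (Play, Pause, PlayPause, Next, Previous, Stop, ...)."""
    return [
        "dbus-send", "--print-reply",
        f"--dest={bus_name}",          # the player's well-known bus name
        MPRIS_OBJECT,                  # fixed MPRIS object path
        f"{PLAYER_IFACE}.{method}",    # interface-qualified method name
    ]
```

For example, `mpris_command("org.mpris.MediaPlayer2.spotify", "Pause")` yields a command line that pauses a running Spotify client; any player that broadcasts MPRIS, OpenVoiceOS included, can be driven the same way.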

Scalable GUI
Our philosophy differs from the Mycroft team’s on this point. Where the Mycroft team goes for a fixed size, resolution and orientation, we believe the GUI should be able to run on any device size, resolution and orientation.

The audio player

The homescreen

Timer skill

fully scalable GUI on desktop

So you decide on which screen enabled device you run OpenVoiceOS

  • Landscape resolution
  • Portrait resolution
  • Or even a square resolution, which will be interesting later on with those nifty round screens that are available nowadays.

Wave visualization support within the GUI mediaplayer
Work has been done on a wave visualization within the GUI media player, instead of the bar visualization.

App drawer
Have we already showcased the app drawer within OpenVoiceOS? Swipe up from the bottom to quickly select your most commonly used apps.

Homescreen updates
Even more work has been done on the home screen. Swiping from left to right enables night mode, which is nice when your OpenVoiceOS device is in your bedroom. Swiping from right to left opens the quick-access boxes, which let you quickly access certain skill features when using your voice is not the preferred option.

So let me think, what else have we been up to? Ah right, not sure if we already communicated this, but below are examples of the different notifications that will be included within the notification framework of OpenVoiceOS.

Coming up
At the moment we are all working hard on fixing all the loose ends and wrapping it all together into a kind of stable, or at least more stable, form so we can get a release out, allowing you all to have a look at everything yourself and help us test all the new goodies. As shown in a previous post, both the Raspberry Pi 3 and Pi 4 are now supported and kept in sync, so whenever that release is ready and pushed, anyone with an rpi3 or rpi4 can download the image and start tinkering around.

That will be it for this update. Hopefully you all are as excited as we are.


I’ll take the chance to show a couple of new OCP goodies.

First of all, we got a dedicated OCP logo now! Many thanks to @AIIX !


We now also have “featured tracks” functionality in OCP skills, allowing you to browse media in the GUI before playback. It’s up to each skill to provide a list of featured entries.
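As a plain-data illustration of what such a featured list might contain. The field names here are assumptions for illustration only; check the OCP skill template for the real ones.

```python
# Hypothetical featured-tracks provider for an OCP-style skill.
# Field names ("title", "uri", "image", "playback") are illustrative.
def featured_media():
    """Return the entries a skill wants to surface in the GUI browser."""
    return [
        {
            "title": "Morning News",
            "uri": "https://example.com/stream.mp3",
            "image": "https://example.com/cover.png",
            "playback": "audio",
        },
        {
            "title": "Evening Jazz",
            "uri": "https://example.com/jazz.mp3",
            "image": "https://example.com/jazz.png",
            "playback": "audio",
        },
    ]
```

The GUI can then render these entries as browsable tiles, and tapping one hands the URI to the media player, no voice interaction required.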

I have also been running OCP in my retro plasma bigscreen setup for some time now, here are some videos I’m sure you didn’t expect to see!

We now have a centralized list of all known OCP skills in one place.

I have made a significant number of OCP skills over the past few years; those were used to bootstrap the skill list but are very focused on Plasma Bigscreen. Please feel free to send pull requests if you make or find an OCP skill in the wild! This repository will likely be parsed as an appstore by OSM in the future.

Some won’t be very interesting, but I’m sure there’s something for everyone!

OCP is packaged as an audio service plugin and can be found on GitHub.


Introduction Post

Hello! I'm Strongthany, and I recently became community manager for OVOS! I'll be posting updates on releases, changes, news, and other things I or other members of the team feel like sharing!

This post is also my attempt to get Discourse to recognize I’m a legit user and let me make posts with more than two links :laughing:


Announcing the Release of OVOS 0.0.3

What’s in a name?
This was supposed to be release 0.0.2, but due to a technical issue it has been renamed to 0.0.3.

A real fixer upper

This release contains over 30 fixes for various components!
  • fix/handle bad VAD plugin [#117]
  • fix/audio service duplicate code [#109]
  • Fix/plugins shutdown [#110]
  • py3.10 compat [#108]
  • remove skill settings bad migration path [#107]
  • Fix additional settings UI [#106]
  • fix/skill settings using wrong xdg path [#104]
  • fix/remove skill updater lock [#103]
  • Add is_alive service check support to GUI Service [#102]
  • Fix bug in streaming STT audio stream handling [#98]
  • Fix/tts logs and shutdown [#96]
  • fix/fallback_tts [#95]
  • monitor all logs [#94]
  • fix/loop_race_condition [#93]
  • fix/mycroft_ready [#92]
  • fix/double_stt [#91]
  • Fix/ready timeout [#88]
  • Handle empty priority/blacklist configuration params [#85]
  • call device prepare after skill manager has started [#80]
  • Mismatched converse event names [#68]
  • Fix/unify converse namespace [#71]
  • Fix/changelog text [#70]
  • Copyleft detected - list of deps & util to detect them [#51]
  • Port mk2 GUI service with additional improvements to core [#66]
  • fix/skill_manager_events [#62]
  • Fix/adapt lang missing [#61]
  • Fix/intent issues [#59]
  • fix/skill_settings backwards compat [#55]
  • fix/log config [#48]
  • fix/speech [#47]
  • speech client systemd hooks [#39]
  • feat/ovos_lf + fix timezone issues [#37]
  • Fix: show text delegate as per autofit label refactor [#36]
  • fix/default_xdg [#31]
  • fix/priority_skills [#23]
  • fix/RemoteTTS+remove_requests_futures_dep [#14]
  • refactor/better_sleep_mode [#10]

bigger, better, three'er

The release of zero dot zero dot THREE comes with improvements on top of the fixes! These include awesome language support, file system refactoring (done for XDG spec compliance), and GUI improvements.
  • full offline support (default)
  • skill plugins, skills can now be packaged like regular python packages (install with pip etc)
  • NO MSM integration
  • improved listener, instant_listen flag + VAD plugins
  • multiple wakewords
  • configurable fallback STT, talk to your assistant during internet outages!
  • migration to our fork of lingua franca, more languages supported, timezone issues fixed all over
  • full XDG compliance, no more permission issues
  • enclosure has been deprecated in favor of PHAL!
  • refactor/default_offline [#113]
  • Implement Color Schemes and Settings UI updates [#97]
  • Port/vad [#81]
  • Rewrite MutableStream class for microphone. [#82]
  • SmartSpeaker Extension: Add GUI interface for additional settings [#86]
  • feat/fallback_stt [#77]
  • Add GUI Extensions Support [#73]
  • port - mk2 gui refactor [#42][#67]
  • feat/configurable_fallback_tts [#49]
  • msm disabled by default [#24] [#46][#44]
  • feat/ovos_conf from ovos_utils [#34]
  • Refactor/converse [#32]
  • Refactor/standardize skill id usage [#28]
  • Refactor/ simplify skill loading [#27]
  • refactor/combo_lock [#15]
  • refactor/settings [#11]
  • Feat/skill plugins [#9]
  • refactor/gui_launcher [#8]
  • feat/extra_skill_dirs [#7]
  • refactor/ovos_utils [#5]
  • feat/multiple_wakewords [#2]
  • feat/ovos_plugin_manager [#1]
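As an illustration of the skill-plugin item above: a pip-installable skill boils down to a normal Python package that exposes an entry point for the skill loader to discover. The entry-point group name "ovos.plugin.skill" and every other name in this sketch are assumptions for illustration, not taken from the actual docs.

```python
# Hypothetical packaging metadata for a pip-installable skill.
# A real setup.py would pass SETUP_KWARGS to setuptools.setup().
PLUGIN_ENTRY_POINT = "my-hello-skill.myname=my_hello_skill:HelloSkill"

SETUP_KWARGS = dict(
    name="my-hello-skill",
    version="0.1.0",
    packages=["my_hello_skill"],
    # The loader scans this entry-point group to discover installed skills;
    # the group name here is an assumption.
    entry_points={"ovos.plugin.skill": [PLUGIN_ENTRY_POINT]},
)
# In a real setup.py: from setuptools import setup; setup(**SETUP_KWARGS)
```

Once published, `pip install my-hello-skill` is all a user needs; no MSM, no git clones into a skills folder.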

All these changes and more can be found on the GitHub release page. Stop on by and give it a read! I would have included links to each of the release notes; however, because this account is considered new on the forum, it cannot post more than two links. That said, if you have any suggestions for how these posts can be better going forward, please let me know!

Want to get involved? Stop by our Matrix chat, say hi, and see what we have going on.


@strongthany Excellent news and welcome :slight_smile: I am desperate to test - so far I have been told to wait for … to show a download, but it hasn’t so far. Any instructions for me to play around with it? I am not too technical, though I try to compensate for that with interest…


As of now, I believe the best method for getting it installed is to pull the git repo, enter the directory, and run pip install ovos-core[all].

From there you have a bunch of other commands you can run, defined as console scripts in setup.py, for example:

'mycroft-messagebus =mycroft.messagebus.service.__main__:main',

i.e. the mycroft-messagebus command.

Or as @JarbasAl told me, “alternatively you can use the start/stop scripts in ovos-core repo, equivalent to the mycroft scripts”
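Putting those steps together. The repository URL is assumed from the core-migration post above, and from a checkout `pip install .[all]` is the equivalent of the pip command mentioned.

```shell
# Assumed repo location, based on the core migration mentioned earlier.
git clone https://github.com/OpenVoiceOS/ovos-core
cd ovos-core
# Install the package with all optional extras from the checkout.
pip install .[all]
# Then run one of the installed console scripts, e.g.:
mycroft-messagebus
```

Doing this inside a virtualenv keeps the install isolated from the system Python, which makes cleanup easier if something goes wrong.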

If you run into issues, feel free to hit us on the matrix chat, or if you find a bug file one on the github

Indeed, the first “real” release of the OS itself for the broader public will be linked to on our website; however, now and then we do link to so-called rolling development releases within our Matrix rooms, mostly our Development channel;

We have some very intense weeks behind us, bumping almost every package involved, and the latest rolling development release linked in that channel is now ready to be shared again.

For your convenience, and anybody else’s, I will leave the link to our shared gdrive folder here;

New releases will be pushed there for anyone that would like to have a look where we are at.

Quick summary for now:

  • Buildroot 2022.02.x LTS
  • Linux kernel 5.15.x LTS
  • Mesa3d 21.3.5
  • Ovos-core 0.0.3 and all associated packages/plugins announced above.
  • Ovos-shell (minimal eglfs-based shell to launch the GUI)
  • KF5 5.91 framework
  • Qt 5.15.8

and probably much more I can’t think of now.

Current issues we are working on now;

  • WiFi setup: supporting screenless setups and trying to find a way to also support the new captive-portal detection in Android 11+ and iOS 14+. (WiFi is a bit hit and miss at the moment.)
  • Inclusion of webrtcvad to support the VAD plugin that uses it.
  • Some small issues here and there, and HAT support fixes.

We invite anyone that would like to have a peek to do so and let us know what you think so far.