Yeah, I had some fun evenings and gained a few more white hairs. I really enjoyed learning about the buildx feature, which enables multi-arch support, but overall I'm still a Podman fanboy ^^
@mdverhoeven that would be great! I mostly tested the containers on ARM 64-bit, so some surprises could happen!
As far as I understand, the image assumes that there is a device running Docker with audio output and a microphone.
In my stack, we rely entirely on a Docker container in the cloud, i.e. there is no need for audio. Will this image also work nicely in that setup, or is there a configuration where I can exclude the audio?
I made some updates to the Docker images; please find the list below:
Implemented the precise-lite plugin from OpenVoiceOS: the footprint of the mycroft_voice container has been reduced by 50% for CPU [1] and memory [2], and wake word detection is faster
Used tmpfs for /tmp/mycroft to reduce I/O activity on the disk (mostly useful for a Raspberry Pi and its SD card); overall IOPS on the Raspberry Pi have been reduced by 50% [3]
Reduced Docker volume usage to mycroft_skill* only
Used environment files (.env*) with docker-compose to simplify its usage and configuration
Integrated the numpy Python library into the mycroft_base image because it is now used by the mycroft_voice and mycroft_skills containers. This greatly speeds up the startup process after pairing by avoiding compiling the library (especially on a Raspberry Pi…)
Documentation updates
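To give an idea of how the tmpfs, env file, and skills-only volume changes combine, here is a minimal docker-compose sketch. The service name, image name, and paths below are illustrative assumptions, not the exact contents of the real docker-compose.yml:

```yaml
# Hypothetical excerpt, for illustration only
services:
  mycroft_voice:
    image: example/mycroft-voice:latest   # placeholder image name
    env_file:
      - .env                              # shared configuration via .env* files
    tmpfs:
      - /tmp/mycroft                      # kept in RAM to spare the SD card from IO
    volumes:
      - mycroft_skills:/opt/mycroft/skills  # only mycroft_skill* data is persisted

volumes:
  mycroft_skills:
```

The idea is that everything ephemeral lives in RAM via tmpfs, while only the skills data survives container restarts through the named volume.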
[1] CPU usage for mycroft_voice container (6 hours sample):
But the major update is support for the Mycroft GUI: a new Docker image, mycroft-gui, has been added to Docker Hub, and a new container, mycroft_gui, has been added to the docker-compose.yml file.
The GUI has been tested successfully on a Raspberry Pi 4B with OpenGL acceleration. On my desktop with an Nvidia GPU, I had to change QT_QUICK_BACKEND to software to get the GUI to start (which means no OpenGL).
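If you hit the same issue, one way to force the Qt Quick software renderer is to set the variable on the GUI container; this is a sketch assuming the mycroft_gui service picks up QT_QUICK_BACKEND from its environment:

```yaml
# Hypothetical compose override: force software rendering for the GUI
services:
  mycroft_gui:
    environment:
      - QT_QUICK_BACKEND=software   # disables OpenGL-based rendering
```

Keep in mind that software rendering is noticeably slower, so only use it where OpenGL acceleration does not work.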