Running the software stack completely airgapped

Greetings, community! I asked on the Mattermost chat too, but the forums seem to be much more active than the chat.

I'm looking to turn an ARM or x86 box into a complete smart home manager with voice commands that is totally airgapped. I heard Mycroft has good latency on voice prompts and decided to give it a try. I fired up Docker and just got bombarded with pairing requests and timers. It seems to require mandatory cloud access, at least during the initial setup phase. I don't want that at all.
I've had moderate success running OpenAI Whisper on-prem as an STT engine, but the latency is bad: even with CUDA acceleration enabled, a transcription takes around 15 seconds.
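For comparing latency across STT back-ends, it helps to time each engine the same way. This is just a minimal sketch of my own (the function names and the `dummy_engine` stand-in are mine, not from any library); swap in a thin wrapper around Whisper, FasterWhisper, or whatever else you're testing:

```python
import time

def time_stt(transcribe, audio_path):
    """Time one STT call. `transcribe` is any callable that takes a path
    to an audio file and returns the recognized text."""
    start = time.perf_counter()
    text = transcribe(audio_path)
    elapsed = time.perf_counter() - start
    return text, elapsed

# Stand-in engine for illustration; replace with a real one.
def dummy_engine(path):
    return "turn on the lights"

text, seconds = time_stt(dummy_engine, "command.wav")
print(f"{seconds:.2f}s: {text}")
```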
The most important parts are flipping lights, monitoring sensors, opening/closing windows and doors, etc. Quality-of-life skills like telling a joke and whatnot are not a primary concern.

Would Mycroft be a good choice for my project, or should I keep looking?

Hi there and welcome! Mycroft has unfortunately been reduced to a skeleton crew and the software hasn’t been touched in a while. Neon, their partner, has taken over software operations for the Mark 2 device. Neon is built on OVOS, which is a fork of Mycroft with many of the features you’re looking for.

Give this a try: GitHub - OpenVoiceOS/ovos-docker: Open Voice OS container images and docker-compose.yml files for x86_64 and aarch64 CPU architectures.

Also, I run FasterWhisper on GPU with the large-v2 model at home and it’s plenty fast and accurate. Works on both Neon and OVOS. Mike Gray | Running A FasterWhisper STT Server Note that the settings in this post assume no GPU - to use it, change use_CUDA to true in the config.
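For reference, the GPU switch is a single flag in the STT plugin's settings. The surrounding structure below is only an assumed shape of a mycroft.conf-style STT section (module and section names vary between Neon and OVOS installs, so check the linked post for yours); the `use_CUDA` key itself is the one mentioned above:

```json
{
  "stt": {
    "module": "...",
    "...": {
      "model": "large-v2",
      "use_CUDA": true
    }
  }
}
```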

Please reach out with questions as you get going and feel free to join us on Matrix for more rapid responses: You're invited to talk on Matrix


Very useful information, thank you :smiling_face_with_three_hearts: