I have Mycroft set up in the living room. I’ve got a script on multiple desktops and laptops bound to a hotkey that runs STT on my speech and then sends the utterance to my Mycroft via websocket. It’s working well, but the issue is that I don’t need an audible response from the speaker. This could be a parameter for the message bus to process requests silently and just return the textual response.
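For reference, the hotkey script’s core step can be sketched as below. This is a minimal sketch, not the poster’s actual script: the message type `recognizer_loop:utterance` and the default bus address `ws://localhost:8181/core` are Mycroft’s standard defaults, but your host will differ, and `send_utterance` assumes the third-party `websocket-client` package.

```python
import json

def build_utterance(text):
    """Build a recognizer_loop:utterance message for the Mycroft bus."""
    return {
        "type": "recognizer_loop:utterance",
        "data": {"utterances": [text]},
        "context": {},
    }

def send_utterance(text, url="ws://localhost:8181/core"):
    """Send STT output to the Mycroft message bus over its websocket."""
    # assumes `pip install websocket-client`; imported lazily so the
    # message-building helper above works without the dependency
    from websocket import create_connection
    ws = create_connection(url)
    ws.send(json.dumps(build_utterance(text)))
    ws.close()
```

With a setup like this, Mycroft processes the text exactly as if it had been heard by its own microphone, which is also why it speaks the reply aloud.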
see docs here
Just wondering whether HiveMind-core might be ‘overkill’. My request was more for the message bus service in the core itself. Implementing it as part of the core might be beneficial.
With that said, I’m already considering utilizing HiveMind-core, specifically for its voice satellite features. I have a couple of additional speakerphones that I want to set up in multiple rooms, connected to some very low-powered SBCs (Pi Zero or similar), to communicate with Mycroft in my living room.
The link above is about mycroft-core: if you set `destination` in the message context, TTS will not be executed.
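A sketch of what that looks like from the client side, building on the message format above. The `client_name` value is an arbitrary example; the key point, per the reply above, is that `destination` in the context is set to something other than the audio output, so mycroft-core skips TTS. The exact routing behavior depends on your mycroft-core version, so treat this as an assumption to verify against the linked docs.

```python
def build_silent_utterance(text, client_name="my-hotkey-client"):
    """Build a bus utterance whose reply should not be spoken aloud.

    client_name is a hypothetical identifier for your script; skills
    echo the context back, so replies are addressed to this client
    rather than to the "audio" output.
    """
    return {
        "type": "recognizer_loop:utterance",
        "data": {"utterances": [text]},
        "context": {
            "client_name": client_name,
            "source": client_name,
            # anything that does not include "audio" means no spoken reply
            "destination": [client_name],
        },
    }
```

To get the textual response back, the same websocket connection can be kept open and watched for `speak` messages, whose `data["utterance"]` carries the reply text.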
The core concept turns every bus client into a node, and everything that has clients into a mini-master.
That’s not finished yet, but I would like to discourage thinking of HiveMind-core as “the big repo.” It’s one of the two repos that will define “master” nodes. Your smartphone will not be running Mycroft-core, but it will be running a slim version of HiveMind-core, so that all your apps can be HiveMind clients.
Ahh, thanks for the clarification. It’s working.