Now we’re all looking to place a Mycroft device in our living rooms to replace all those Echos and Homes, aren’t we?
Mic + speaker need to sit in a central, prominent place for the sake of good acoustics, so the look/design matters, too. Nobody wants a visible RPi or similar there, and not everyone wants to use a Mark I.
It needs to be stylish enough to meet your (and your wife’s) needs, but it also needs to provide good audio. The mic is of particular importance for Mycroft’s speech recognition to work well; preferably it’s even a mic array.
I think the best option would be to get a wireless mic/speaker combo like this one.
Does anyone have experience with wireless mics in general on an RPi? What about processing latency?
What’s the best technology: Bluetooth? Proprietary 2.4 GHz w/ a USB connector? Plain IP over WiFi?
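On the latency question: whatever the transport, the capture buffering alone adds a delay you can compute up front, before even considering radio or codec overhead. A quick back-of-the-envelope helper (the buffer sizes below are illustrative assumptions, not measurements from any particular device):

```python
def buffer_latency_ms(frames, sample_rate):
    """Delay (in ms) added by one capture buffer of `frames` frames."""
    return 1000.0 * frames / sample_rate

# A typical ALSA-ish period of 1024 frames at 16 kHz (the rate most
# STT engines expect) already costs 64 ms per buffered period, so a
# few buffers in a pipeline add up quickly:
# buffer_latency_ms(1024, 16000) -> 64.0
# buffer_latency_ms(480, 48000)  -> 10.0
```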
(btw, using wireless audio I/O would also allow you to run Mycroft on a server with decent horsepower)
Any recommendations for a wireless mic/speaker combo? The one above costs twice as much as an Echo.
Hi, I think that this issue with the speaker and mic (setup, device size, power, latency, lag, etc.) is the biggest hurdle to overcome in the setup and effective functioning of a home assistant. If you look at Amazon’s Alexa, I am sure they spent quite a lot of R&D on making the mic very effective… and it is.
I don’t have experience with the wireless mic and speaker you linked to, but I do applaud you for highlighting what is, in my opinion, one of the essential bits of tech to get right to make Mycroft work.
I’m imagining a light bulb that contains a wireless mic and speaker…every room has a light socket. If anyone gets that right, they will make a killing!
Another approach might be to get a couple of stylish, cheap, _non-_array Bluetooth mic/speaker combos like this one, each linked to the same Mycroft device via BT, and then do some sort of parallel processing there.
Not sure on the details. Can we have multiple active incoming BT streams? As I believe speech recognition itself only uses a single processor core (although I might be mistaken there), we could run a couple of recognizer instances on a single RPi and place as many mics around the room(s) as we need to get good recognition quality.
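The “one recognizer per mic, keep the best result” idea could be sketched roughly like this. Everything here is hypothetical: `recognize_stream` is a stand-in for a real STT call on one mic’s audio (with fake confidences so the selection logic is visible), and for genuinely CPU-bound recognition you’d swap `ThreadPoolExecutor` for `ProcessPoolExecutor` to actually use multiple cores:

```python
from concurrent.futures import ThreadPoolExecutor

def recognize_stream(mic_id):
    # Placeholder: in reality, read audio from mic `mic_id` and run STT.
    # Faked (utterance, confidence) results so the selection is testable.
    fake = {0: ("turn on the lights", 0.62),
            1: ("turn on the lights", 0.91),
            2: ("turn off the lights", 0.40)}
    text, confidence = fake[mic_id]
    return mic_id, text, confidence

def best_transcript(mic_ids):
    # Run one recognizer per mic in parallel, keep the most confident hit.
    with ThreadPoolExecutor(max_workers=len(mic_ids)) as pool:
        results = list(pool.map(recognize_stream, mic_ids))
    return max(results, key=lambda r: r[2])

# best_transcript([0, 1, 2]) -> (1, "turn on the lights", 0.91)
```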
I’m really interested in any progress on this topic. I plan (humh… when I find enough spare time to play with it) to reuse a Nabaztag as a remote mic + speaker. But the last time I took a look inside Mycroft core, it seemed non-trivial to add some sort of web-service API to submit a recorded file and get a response.
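For what it’s worth, the “submit a recorded file, get a response” part doesn’t have to live inside Mycroft core; a thin shim in front of whatever STT you use could look something like this stdlib-only sketch. `recognize()` is a stub, not a real Mycroft API — you’d hand the bytes to your actual engine there:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def recognize(wav_bytes):
    # Placeholder: pass wav_bytes to a real STT backend here.
    return {"utterance": "turn on the lights",
            "bytes_received": len(wav_bytes)}

class SubmitHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/submit":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        wav_bytes = self.rfile.read(length)
        body = json.dumps(recognize(wav_bytes)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

# To run it:
#   HTTPServer(("0.0.0.0", 8765), SubmitHandler).serve_forever()
# then POST a WAV to http://<host>:8765/submit and read the JSON reply.
```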