Maybe somebody should suggest using a Mycroft unit to this guy?

Especially regarding the part where he says:

For my Echo based implementation, I had the constraint of using Echo’s “Alexa, turn on X” and “Alexa, turn off X” commands. Having this relatively long phrase makes control more cumbersome than simple moment to moment LEFT’s and RIGHT’s or just naming a destination. That combined with Echo not responding instantly (voice is processed on Amazon’s servers) and sometimes not recognizing speech correctly makes a real-time micromanaged system not practical.

With the right intent models, couldn’t we design a similar system to be significantly less verbose? e.g. “Wheelchair - forward!” (especially if Adapt cuts down on the time between finishing speaking and processing).
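For what it’s worth, here is a rough sketch of what such a terse command grammar could look like with Adapt’s intent parser. The keyword names ("WheelchairKeyword", "DirectionKeyword") and the vocabulary are just made up for illustration, not taken from the article’s implementation:

```python
# Rough sketch of a terse wheelchair intent using Adapt's intent parser.
# Keyword names and vocabulary here are illustrative only.
import json

from adapt.engine import IntentDeterminationEngine
from adapt.intent import IntentBuilder

engine = IntentDeterminationEngine()

# Register the short command vocabulary we want to match.
for word in ["forward", "back", "left", "right", "stop"]:
    engine.register_entity(word, "DirectionKeyword")
engine.register_entity("wheelchair", "WheelchairKeyword")

# An intent that only needs "wheelchair" plus a direction word,
# so a phrase as short as "wheelchair forward" triggers it.
move_intent = IntentBuilder("MoveWheelchairIntent") \
    .require("WheelchairKeyword") \
    .require("DirectionKeyword") \
    .build()
engine.register_intent_parser(move_intent)

for intent in engine.determine_intent("wheelchair forward"):
    if intent and intent.get("confidence", 0) > 0:
        print(json.dumps(intent, indent=2))
        # intent["DirectionKeyword"] would be "forward" here,
        # which is what you'd hand off to the motor controller.
```

The intent matching itself runs locally, so the remaining latency would mostly be the speech-to-text step rather than a round trip to Amazon’s servers.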

Slashdot link discussing the same article

Also, as mentioned in the /. comments, an Amazon Echo isn’t a mobile device, but an Ubuntu phone running Mycroft is. Might be a good opportunity for some free advertising amongst the FOSS-friendly developer crowd that /. attracts :slight_smile:

The issue here is that Mycroft isn’t available until at least 2016. Echo is here now.

Too true. Still, it does show just one of the many potential uses for a truly open voice assistant. A small wheelchair manufacturer may not be able to get a licence for Siri or Alexa or Cortana, but could easily and freely put Mycroft on there. Who knows what other great things are out there and just waiting to benefit from this?