IMHO, the most important thing Mycroft could possibly do is move to a local speech-to-text model. Get us all off "the cloud" and the WAN every time we say something.
Hard, you say? My Garmin GPS, which I bought about ten years ago, understands me pretty well, and its only connection to the outside world is receive-only listening to the GPS satellite constellation. I can name any town, street, or address combination, and as long as I speak clearly, it gets it right almost all of the time. So the tech is out there, and has been for a while. If Garmin (and Dragon, etc.) could do it (and they definitely did), so can others.
Speaking as a developer, my interest in Mycroft would skyrocket if it worked reasonably well in a LAN-only context. Speaking as a user, I've already got devices that look to the cloud for speech. Adding yet another, and thereby further entrenching the cloud model, has very little appeal.
Local STT is the killer feature that could (finally) make Mycroft stand out even among the big shots. I'd love to see that: the combination of an open development system and real privacy and security from LAN-only operation seems unbeatable to me.