I don’t know if this has been discussed here, but I am working with a speech therapist who is very excited about the potential of Mycroft.
She has been working with Siri to encourage correct speech (the device only gives the reward if the word is said properly), but it can be too picky and offers no flexibility at all. If someone struggles with r’s, they are out of luck.
Being able to change the wake word and allow for flexibility in pronunciation and volume (to the extent possible) could be huge for her and her peers.
Just thought I’d mention that in case it hasn’t come up yet.
I can’t stop thinking about your post and the potential here. It also seems very topical, given that Mycroft has announced a move to Mozilla’s speech-recognition work (https://research.mozilla.org/machine-learning/). It would be interesting to explore whether a therapist could use this technology to build a model of exemplar training words customized to the patient’s needs. This would be separate from the routine skill/STT recognition. I’ll bet that on the back end a score is assigned to candidate word matches, much like Clarifai assigns scores to concepts “observed” in an image. These words and scores could be displayed to the therapist and patient. Then, during a hands-on session, the therapist and patient together could provide scoring feedback to the STT engine (supervised learning).
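To make the scoring idea concrete, here is a toy sketch of what surfacing a match score to the therapist and patient might look like. It assumes the recognizer returns a text hypothesis for the patient’s attempt; `difflib`’s similarity ratio stands in for the real acoustic confidence a speech engine would assign, and `score_attempt` is a hypothetical helper, not part of Mycroft.

```python
# Toy sketch: score a patient's attempt against a therapist-chosen target word.
# SequenceMatcher's ratio is only a stand-in for the confidence score a real
# speech-recognition back end would attach to its word hypotheses.
from difflib import SequenceMatcher

def score_attempt(target: str, hypothesis: str) -> float:
    """Return a 0-100 score for how closely the recognized text matches the target."""
    ratio = SequenceMatcher(None, target.lower(), hypothesis.lower()).ratio()
    return round(100 * ratio, 1)

# Example: a patient practicing "rabbit" who substitutes w for r
print(score_attempt("rabbit", "rabbit"))  # 100.0
print(score_attempt("rabbit", "wabbit"))  # 83.3 -- close, but not a full reward
```

The therapist could then set a per-patient threshold (e.g. start rewarding at 80 and raise it over time), which is exactly the flexibility Siri lacks.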
The patient could then go home and practice with Mycroft (voice gamification as an incentive here). With the patient’s permission/consent, the results of these home sessions could be shared with the therapist to track progress and refine therapy.
Sorry if all of this is quite obvious already, but your post just got me thinking…