Custom speech-to-text and intent recognizer model for skills

How would one go about training and using a custom speech-to-text and intent recognizer model in a skill, at skill runtime?

For domains with a lot of custom vocabulary, words that aren’t in any standard dictionary, this would be very useful.
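
For context, the closest existing mechanism I’m aware of is registering skill-level vocabulary with the intent parser. A minimal sketch, assuming a standard mycroft-core skill layout with `locale/en-us/compound.entity`, `locale/en-us/lookup.compound.intent`, and `locale/en-us/compound.info.dialog` files (the skill, entity, and dialog names here are just placeholders):

```python
from mycroft import MycroftSkill, intent_handler


class ChemistrySkill(MycroftSkill):
    """Placeholder skill showing today's vocabulary registration."""

    def initialize(self):
        # Feed the terms in locale/en-us/compound.entity to the Padatious
        # intent parser so utterances containing them match the intent below.
        self.register_entity_file("compound.entity")

    @intent_handler("lookup.compound.intent")
    def handle_lookup_compound(self, message):
        # {compound} is whatever the parser matched against the entity file.
        compound = message.data.get("compound")
        self.speak_dialog("compound.info", {"compound": compound})


def create_skill():
    return ChemistrySkill()
```

But as far as I understand, this only influences intent matching; it doesn’t give the speech-to-text stage any better chance of transcribing out-of-dictionary words in the first place.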

Is something like this possible in either mycroft-core or dinkum?

If it isn’t possible at skill runtime, is it possible at skill “build” time (i.e., trained by the skill developer and bundled with the skill)?
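
To make the question concrete, this is roughly the kind of skill-level hook I’m imagining. The `register_stt_model` / `register_intent_model` calls below are purely hypothetical, they don’t exist in mycroft-core or dinkum, and the model file names are placeholders for whatever the skill developer trained and bundled under the skill directory:

```python
from mycroft import MycroftSkill


class BundledModelSkill(MycroftSkill):
    """Hypothetical sketch: a skill shipping models it trained at build time."""

    def initialize(self):
        # HYPOTHETICAL API (does not exist today): point the STT stage at a
        # domain language model bundled with the skill, e.g. <skill>/models/domain.lm
        # self.register_stt_model(os.path.join(self.root_dir, "models", "domain.lm"))

        # HYPOTHETICAL API: load a pre-trained intent model built by the skill
        # developer, instead of training Padatious from .intent files each time
        # the skill loads.
        # self.register_intent_model(os.path.join(self.root_dir, "models", "domain_intents.bin"))
        pass


def create_skill():
    return BundledModelSkill()
```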