Can you clarify which part of the tech stack you mean?
If you’d like to replace Mycroft’s voice with one based on your own, this would require a third-party text-to-speech system. There are several, and they are not hard to plug in, but gathering a large enough corpus to generate a good voice is a tremendous undertaking. The no-longer-Mozilla one uses a single, public corpus to generate only a handful of voices. Untold thousands of samples, from many different people, repeatedly blended with ML… lots of work.
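For the “plugging in” part specifically: Mycroft picks its TTS engine via the `tts` section of `mycroft.conf`, so swapping engines is mostly a config change once the plugin is installed. A minimal sketch — the module name `my_custom_tts` and its options are hypothetical placeholders, since they depend entirely on which plugin you install (the stock default module is `mimic`):

```json
{
  "tts": {
    "module": "my_custom_tts",
    "my_custom_tts": {
      "voice": "my-cloned-voice"
    }
  }
}
```

This gets merged over the system defaults, so you only need to specify the keys you’re overriding. The hard part, as above, isn’t this wiring — it’s producing a voice model worth pointing it at.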
If you’d like to tune Mycroft to understand your voice better, again, making it use just your voice would be a huge undertaking and probably impossible. However, you could consider contributing samples to the no-longer-Mozilla speech-to-text system, which is quite functional. This will make your samples part of the dataset that system uses and refines to interpret what you say.
If you’d like to teach Mycroft to recognize your voice, as of today, I don’t think that exists. Mycroft itself doesn’t have the feature, and the only community plugin I know of that ever tried never got anywhere (afaik) and has been abandoned for ages.