General questions about Mycroft

Hello, Mycroft community! I’ve just learned about Mycroft and I’m pretty excited about it! Like many people, I have serious privacy concerns about Amazon/Alexa so I LOVE the local nature of Mycroft. And though I’m pretty entrenched in the Alexa ecosystem, I’m also pretty frustrated and ready and willing to jump to another platform if I find the right one.

I have various smarthome goals that involve voice control - and for various reasons, Alexa keeps presenting me with brick walls and dead ends. Here are my hopes for Mycroft…

  1. Easy to create skills - step-by-step tutorials available, helpful developer community, lots of example code that can be copied, pasted, and modified. As a shining example, I would offer up AutoHotkey - their help file has sample code for EVERYTHING. There’s almost nothing a novice can’t do with the reference material provided. On the opposite end of the spectrum, there’s Alexa skills. They change things so quickly that every time I find a detailed, comprehensive tutorial, I never get more than a few steps into it before I can’t go any further because a change has been made since the tutorial was created.
  2. The ability to differentiate between the people in the household. If my wife says, “Hey, Mycroft, call my phone,” it should know which phone to call.
  3. Conversational skills - ability to create skills that will lead users down a path with consecutive prompts e.g. “Hey, Mycroft! Help me remember something.” “Sure, Donnie! What question will you be asking me?” “Where did I hide Krista’s birthday present?” “Got it. And what answer should I give you?” “It’s in the front pocket of my bass travel case.” “Got it. Is this confidential?” “Yes.” “OK. We’ll keep this between the two of us.”
  4. It seems like EVERYTHING “works with Alexa”. That’s great except that NOTHING (except a spoken command or the Alexa app) triggers Alexa. If Amazon would create any way for Home Assistant to trigger an Alexa action, it would instantly be the most powerful home automation hub on the planet. Hopefully Mycroft improves on this huge restriction.
  5. Good integration with Home Assistant - Mycroft should be able to trigger automations and automations should be able to include spoken Mycroft announcements and question/(branching) answer logic.
  6. The ability for multiple Mycroft devices to work together/collectively as a whole-home system while still providing one/same-room functionality for commands/interactions.

There are probably more but that’s all I can think of for now. How realistic are my wishpoints above?

#1:
#2: is on the roadmap, I believe.
#3: Part of skills, will be refined more over time.
#4: There’s a Home Assistant skill already - check the skills repo. https://github.com/btotharye/mycroft-homeassistant/tree/9b137ce257880f23b651fcbbdf462acc246310ef
#5: See 4.
#6: Roadmap.


Thanks @baconator for responding so quickly on this one.

Easy to create Skills

I’m the first to admit our Skills documentation is not perfect - but it’s certainly improved a lot in the last year or so. In particular, the Mycroft Skills Kit utility - msk - is designed to reduce the frustration of making a first Skill - that is, to reduce the time to “First Hello World”. Several of our Skills docs could use a lot more examples - one of our challenges is that our platform is moving so quickly that the docs rapidly become obsolete - at least we’re in good company there!
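For a sense of what msk scaffolds, here’s a minimal sketch of a first Skill - the intent and dialog file names are illustrative:

```python
# Minimal "Hello World" Skill sketch - roughly the shape `msk create`
# generates. Assumes hello.world.intent and hello.world.dialog exist in
# the Skill's vocab/ and dialog/ directories (illustrative names).
from mycroft import MycroftSkill, intent_file_handler


class HelloWorldSkill(MycroftSkill):

    @intent_file_handler('hello.world.intent')
    def handle_hello_world(self, message):
        # Speaks a random line from hello.world.dialog
        self.speak_dialog('hello.world')


def create_skill():
    return HelloWorldSkill()
```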

Differentiating between users by voice analysis

This is a very difficult task, but it is on our long-term Roadmap.

Conversational Skills

Mycroft already has the ability to implement a basic dialogue tree using the Conversational Context functionality. We aim to improve this over time.
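As a sketch of how the “help me remember” exchange above could be built today - hedged, since the intent and dialog names here are made up - a Skill can chain prompts with get_response():

```python
# Hedged multi-turn sketch using self.get_response(), which speaks a
# prompt and waits for the user's spoken reply. All file names here are
# illustrative, not from an existing Skill.
from mycroft import MycroftSkill, intent_file_handler


class RememberSkill(MycroftSkill):

    @intent_file_handler('remember.something.intent')
    def handle_remember(self, message):
        # "What question will you be asking me?"
        question = self.get_response('ask.question')
        # "And what answer should I give you?"
        answer = self.get_response('ask.answer')
        if question and answer:
            # The Skill settings store persists across restarts
            self.settings[question.lower()] = answer
            self.speak_dialog('confirm.stored')


def create_skill():
    return RememberSkill()
```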

Triggering a Skill

We’re a lot more extensible than Alexa, because we’re open source. That’s a key point of difference for us, and one that we feel really resonates with our maker and developer community. Is there something specific you’re looking for Mycroft to do here? I can check whether it’s already implemented, or whether it’s flagged on a Roadmap.
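On the “NOTHING triggers Alexa” point: because Mycroft’s components talk over an open websocket messagebus, anything on your network can inject an utterance or make Mycroft speak. A minimal sketch, assuming the default bus address (the host name here is a placeholder):

```python
# Trigger Mycroft from any script by posting to its messagebus.
# Assumes the default bus endpoint ws://<host>:8181/core is reachable;
# 'mycroft.local' is a placeholder host name.
import json
from websocket import create_connection  # pip install websocket-client

ws = create_connection('ws://mycroft.local:8181/core')

# Inject text exactly as if it had been spoken after the wake word
ws.send(json.dumps({
    'type': 'recognizer_loop:utterance',
    'data': {'utterances': ['turn on the kitchen lights']},
}))

# Or simply have Mycroft say something
ws.send(json.dumps({
    'type': 'speak',
    'data': {'utterance': 'The garage door is open.'},
}))

ws.close()
```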

Home Assistant and automations

Are you able to give us some examples of what you’re looking for here? Examples would help us raise feature requests / Issues / PRs against this Skill.

Mycroft Devices working as a whole-home system

Great suggestion. One of our challenges in this space is: what if two or more Mycroft Devices hear an Utterance? Which one has responsibility for handling it? And where should the response be sent? There are a couple of ways to do this - for example, based on the sound level at each Device’s microphone, using volume as a proxy for the user’s distance from the Device - or alternatively, picking the Device that responds fastest packet-wise.
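To make the volume heuristic concrete, here’s a toy arbitration sketch - the report format and field names are purely illustrative, not anything Mycroft implements today:

```python
# Toy arbitration: each Device reports how loudly it heard the wake
# word, and the loudest one answers. Field names are illustrative.
def choose_responder(reports):
    """reports: e.g. [{'device': 'kitchen', 'rms_db': -18.2}, ...]
    Higher (less negative) dB suggests the user was closer."""
    return max(reports, key=lambda r: r['rms_db'])['device']


reports = [
    {'device': 'kitchen', 'rms_db': -18.2},
    {'device': 'bedroom', 'rms_db': -31.7},
]
assert choose_responder(reports) == 'kitchen'
```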

There are related problems to solve here as well - for example, how do you define a “household”? This requires some sort of “grouping” mechanism for Devices - particularly in industrial or large-scale deployments.

Great suggestions, and I hope this has provided some clarity around each of them.

Best, Kathy

Thanks for the responses!

It seems like it could be made less difficult by using some sort of a “voice training” program where each user speaks certain phrases…?

Nothing specific; I’d just like to be able to tell Mycroft to say [whatever] via a Home Assistant automation.

Ultimately, I’d like to have a whole-house audio system with speakers in each room that can play music but also be available for Mycroft announcements any time it might be necessary. Some examples (with a rough sketch of the glue code after them) would be:
Home Assistant checks the weather (and/or an outdoor rain sensor) and the open/closed status of all the windows in the house then has Mycroft announce, “Looks like it’s about to rain - you might want to close the dining room and bedroom windows.”
Home Assistant checks the time/sunset and the garage door open/closed status and sees that the garage door is open and there has been no motion detected for 10 minutes, so it triggers Mycroft to announce, “The garage door is open.”
Home Assistant runs an automation for watching a movie, setting a system variable to basically say, “Don’t bother me while I’m watching a movie.” Mycroft checks this variable before making any regularly scheduled announcements (“It’s 9 o’clock - time to start your nightly routine.”). Also, during the movie, a call comes in from someone who is not on the “critical callers” list. Once the movie ends and Home Assistant shuts the home theater system down and turns off the “Do Not Disturb” flag, it notifies Mycroft of the status change and Mycroft then announces, “You missed a call from Dave. Also, don’t forget to start your nightly routine.”
In general, if I had the aforementioned whole-home audio system, I’d like to be able to configure Mycroft to announce a variety of notifications such as, “You just received an email from Google Voice” or “You just received the following text message from Janet - Hey, we’re throwing some ribs on the BBQ, want to come over for dinner?” I know my phone will let me know these things, but what if I accidentally left it on vibrate? Or I’m bathing the dog and can’t look at my phone?
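Here’s roughly the kind of glue I have in mind - just a sketch, with a hypothetical entity ID, placeholder host names, and a placeholder access token:

```python
# Sketch: poll Home Assistant's REST API, then push an announcement to
# Mycroft's messagebus. Entity ID, hosts, and token are placeholders.
import json

import requests
from websocket import create_connection  # pip install websocket-client

HA_URL = 'http://homeassistant.local:8123'
HA_TOKEN = 'your-long-lived-access-token'  # placeholder


def garage_door_is_open():
    resp = requests.get(
        HA_URL + '/api/states/cover.garage_door',  # hypothetical entity
        headers={'Authorization': 'Bearer ' + HA_TOKEN},
    )
    return resp.json().get('state') == 'open'


def mycroft_announce(text, host='mycroft.local'):
    ws = create_connection('ws://' + host + ':8181/core')
    ws.send(json.dumps({'type': 'speak', 'data': {'utterance': text}}))
    ws.close()


if garage_door_is_open():
    mycroft_announce('The garage door is open.')
```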

The amplitude of the utterance should be strongest at the Mycroft closest to the speaker, so that Device should handle it by default - and that’s where the response should be sent. If you make an utterance in the kitchen but you’re on your way to the bedroom, a simple “Mycroft, repeat, please” from the bedroom should repeat the response in the bedroom.

I think the conversation component of Home Assistant works with the mycroft-homeassistant skill.

And Home Assistant is able to make notifications through Mycroft.

So I think combining the two, you should be able to get some sort of spoken announcements and question/(branching) answer logic.

But I see where you want to go - and that could be really fun to use.

It seems like it could be made less difficult by using some sort of a “voice training” program where each user speaks certain phrases…?

Sure - that’s what our Open Dataset is all about. If you’re opted in to the Open Dataset, we record - in a de-identified way - your Utterances so that we can train our machine learning algorithms to have better confidence in what you’re saying.

Nothing specific; I’d just like to be able to tell Mycroft to say [whatever] via a Home Assistant automation.

Right, so there are a couple of fairly mature Skills for this already.

Home Assistant checks the weather (and/or an outdoor rain sensor) and the open/closed status of all the windows in the house then has Mycroft announce, “Looks like it’s about to rain - you might want to close the dining room and bedroom windows.”
Home Assistant checks the time/sunset and the garage door open/closed status and sees that the garage door is open and there has been no motion detected for 10 minutes, so it triggers Mycroft to announce, “The garage door is open.”

Great suggestion. We’re working on the ability to “chain” different Skills together at the moment - in the meantime, @Christopher_Rogers has this Routine Skill available, which may be what you’re looking for: https://github.com/ChristopherRogers1991/mycroft_routine_skill (create, run, and schedule routines with Mycroft).

One of our challenges in this space is: what if two or more Mycroft Devices hear an Utterance? Which one has responsibility for handling it? And where should the response be sent?

The amplitude of the utterance should be strongest at the Mycroft closest to the speaker, so that Device should handle it by default - and that’s where the response should be sent. If you make an utterance in the kitchen but you’re on your way to the bedroom, a simple “Mycroft, repeat, please” from the bedroom should repeat the response in the bedroom.

Yeah, it sounds super easy in theory, but there are a lot of technical details that would need to be worked through to make this happen in reality, including things like echo cancellation. We’d also have to find a way to compare both the timestamps and the sounds, and do it in real time. Not insurmountable challenges, but not straightforward either :wink:
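Purely as an illustration of the “compare the sounds” part - a toy cross-correlation that estimates which of two Devices heard the utterance first, ignoring clock sync and echo cancellation entirely:

```python
# Toy lag estimate between two Devices' recordings of the same sound.
# Real systems would need synchronized clocks, resampling, and echo
# cancellation; this just shows the core idea.
import numpy as np


def lag_samples(a, b):
    """Positive result: `a` lags `b`, i.e. the Device that captured
    `b` heard the utterance first."""
    corr = np.correlate(a, b, mode='full')
    return int(np.argmax(corr)) - (len(b) - 1)


# Demo: the same click, arriving one sample later at Device A
b = np.array([0.0, 1.0, 0.0, 0.0])  # closer Device
a = np.array([0.0, 0.0, 1.0, 0.0])  # farther Device
print(lag_samples(a, b))  # 1 -> Device B heard it first
```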
