I’d love to have a cloudy Mycroft: the same Mycroft that knows you on your phone, computer, and TV, and that can interact with any of them.
And a Mycroft that recognizes you by listening to your voice, or by watching your pretty face.
Actually, this is not that far-fetched. We’ll have a cloud backend that will let you manage your Mycroft instances (opt-in). Voice and face recognition are also something we’re interested in. (Spoiler: @jdorleans may know a bit about this)
It would be great if Mycroft could act as a DLNA Digital Media Controller. This is effectively equivalent to the ‘Play Media’ skill outlined above, but for locally stored media rather than streaming services. It’s a well-defined standard that’s already widely supported. For example, I can currently use an app on my phone to instruct my TV to play a movie from my media server—it would be nice if I could just ask Mycroft instead.
I don’t see why this would be a problem; in fact, it’s one of my own use cases. I’ve been working on the best way to tackle this and am always open to feedback in this area. Also, I’m looking to put together a media team of community members to help tackle these challenges, so if you have expertise in this area…
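For a sense of what the first step of a DLNA controller skill could look like, here’s a rough, stdlib-only sketch of SSDP discovery, the multicast handshake a Digital Media Controller uses to find renderers on the LAN. The addresses and search target come from the UPnP spec; everything else (function names, timeouts) is just illustration, and actually controlling playback (the AVTransport service) would be a further step on top of this.

```python
import socket

# SSDP multicast address and port defined by the UPnP Device Architecture
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
# Search target for DLNA renderers (TVs, speakers) per the UPnP AV spec
RENDERER_ST = "urn:schemas-upnp-org:device:MediaRenderer:1"

def build_msearch(st: str = RENDERER_ST, mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH request asking devices of type `st` to reply."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",            # seconds a device may wait before replying
        f"ST: {st}",
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def parse_ssdp_response(raw: bytes) -> dict:
    """Parse the header lines of an SSDP response into a dict (keys upper-cased)."""
    headers = {}
    for line in raw.decode("ascii", "replace").split("\r\n")[1:]:
        if ":" in line:
            key, _, value = line.partition(":")
            headers[key.strip().upper()] = value.strip()
    return headers

def discover(timeout: float = 3.0) -> list:
    """Multicast an M-SEARCH and collect responding renderers' LOCATION URLs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    locations = []
    try:
        while True:
            data, _ = sock.recvfrom(65507)
            headers = parse_ssdp_response(data)
            if "LOCATION" in headers:
                locations.append(headers["LOCATION"])
    except socket.timeout:
        pass
    finally:
        sock.close()
    return locations
```

Each LOCATION URL points at the device’s XML description, which is where a controller would look up the control endpoints for issuing play/pause commands.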
Actually, if you want the AI to carry out a command on a trigger word, you can do this through AutoHotkey scripting. I did this with my older AI, Denise. The script contains variables that watch for certain words to fire the trigger. It works very well. I can show an example of this if you wish; just let me know.
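The original poster used AutoHotkey, but the underlying idea, scan the recognized text for known phrases and fire the matching action, is the same in any language. Here is a tiny Python sketch; the trigger phrases and action names are made up purely for illustration.

```python
# Hypothetical trigger phrases mapped to action identifiers.
# In a real skill these would dispatch to actual handlers.
TRIGGERS = {
    "lights on":  "turn_on_lights",
    "lights off": "turn_off_lights",
    "play music": "start_player",
}

def match_trigger(utterance: str):
    """Return the action whose trigger phrase appears in the utterance, if any."""
    text = utterance.lower()
    for phrase, action in TRIGGERS.items():
        if phrase in text:
            return action
    return None
```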
Voice and image recognition are very hard problems to solve. While there are several techniques out there, a machine-learning algorithm can usually solve these problems with good accuracy. To do that, a huge data set of input information, such as audio or images, has to be used to train the model.
The strength of an ML algorithm is its ability to adapt in real time. But for that, real-time data has to be processed, which usually takes some time depending on the algorithm and the infrastructure behind the magic. That’s why a cloud application is also very important.
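To make the “train on a data set, then classify new input” idea concrete, here is a deliberately toy sketch: a nearest-centroid classifier over made-up 2-D feature vectors standing in for the high-dimensional features a real system would extract from audio or images. The speaker names and numbers are invented; real voice recognition involves far more sophisticated models, but the train/predict shape is the same.

```python
import math

# Toy "training set": fake 2-D feature vectors per speaker.
TRAINING = {
    "alice": [(0.9, 0.1), (0.8, 0.2), (1.0, 0.0)],
    "bob":   [(0.1, 0.9), (0.2, 0.8), (0.0, 1.0)],
}

def centroid(vectors):
    """Mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

# "Training" here is simply computing one centroid per speaker.
MODEL = {name: centroid(vecs) for name, vecs in TRAINING.items()}

def classify(sample):
    """Return the speaker whose centroid is nearest to the sample."""
    return min(MODEL, key=lambda name: math.dist(sample, MODEL[name]))
```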
^^ Just thought of a million privacy implications for half of this stuff. Things like the Facebook integration and sending text messages from a phone must only be possible if requested by the right person (e.g. “Mycroft, do I have any notifications from social media?”).
Is there any plan to have voice recognition to tell users apart? Otherwise I can imagine a situation where one teenage son uses this skill to send a message from the other one’s phone, for example.
Will Z-Wave products be manageable by Mycroft? In that case, it would be possible to turn everything connected to a Z-Wave plug on or off, or open and close windows, doors, etc., from Mycroft.
Right now Mycroft talks to hubs that manage Z-Wave devices, but we’ve been looking at how Z-Wave could be implemented. I will post more if we move towards having it on the device itself.
Well, if it can connect to any Z-Wave hub and interact with it, that’s a great step. I was afraid it couldn’t connect, because the Z-Wave logo isn’t shown among the IoT provider logos.
Looking forward to hearing more on this topic!
I don’t have any programming experience yet, but I would eventually like to create a study/office-worker aid that employs the Pomodoro Technique (25 minutes of work, a 5-minute break, plus support for extended breaks for lunch), and during each break it would remind you to stand up, walk around, and get away from the screen. Sometimes when you’re working, time just flies (and you don’t notice), and getting away from the desk for a few minutes is a simple yet important thing everyone should do.
I’d imagine it would be something like:
Me: “Mycroft, I’m going to start studying now”
Mycroft: “Ok Amar, your next break is in 25 minutes.”
-5 minute break time-
Mycroft: “Amar, it’s time for a short break, step away from the desk!”
And then I could potentially get a notification on my phone once the mini-break reaches 5 minutes, reminding me to get back to the desk.
I suppose this way, you could get Mycroft to log your total time spent on specific tasks too, which can be useful information (like the aTimeLogger app on android).
For example:
Me: “Mycroft, how much time have I spent on studying, gaming and exercising this week?”
Mycroft: “This week, you’ve completed 40 hours of study, 10 hours of gaming, and 0 hours at the gym.”
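The time-logging half described above is mostly bookkeeping: record when a task starts, add the elapsed time when it stops, and sum per task. Here’s a minimal sketch of that; the class and method names are invented, and the injectable clock is just a trick to make the logic testable without waiting real minutes. The Pomodoro durations at the bottom match the 25/5 split from the post.

```python
import time

class TaskLog:
    """Tracks total time per task; an injectable clock makes it testable."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.totals = {}          # task name -> accumulated seconds
        self._current = None      # (task, start_time) while a task runs

    def start(self, task):
        self.stop()               # close any running task first
        self._current = (task, self.clock())

    def stop(self):
        if self._current:
            task, started = self._current
            self.totals[task] = self.totals.get(task, 0.0) + self.clock() - started
            self._current = None

    def total_hours(self, task):
        return self.totals.get(task, 0.0) / 3600

POMODORO_WORK = 25 * 60   # seconds of focused work
POMODORO_BREAK = 5 * 60   # seconds of break
```

A skill built on this would call `start("studying")` on “I’m going to start studying now”, schedule a break reminder `POMODORO_WORK` seconds later, and answer the weekly summary question from `total_hours`.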
That seems easy enough to do, @A-Doal. Well, the reminder portion of it, anyway. I wouldn’t know how to log your total time.
I think I’d make a lot of use of Radio / Podcast functionality.
Notification when a new episode is available for a podcast you’re subscribed to.
Use of tags “Mycroft please play me a podcast about science”
This is absolutely something I would like to pursue. As an avid podcast listener (and guest), I listen to them daily, every morning and often in the evening.
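The new-episode notification part can be surprisingly simple, since podcasts are plain RSS: fetch the feed, and report any items that appeared above the last GUID you notified about. Here’s a stdlib-only sketch; the feed content is invented for illustration, and it assumes (as most podcast feeds do) that the newest item comes first.

```python
import xml.etree.ElementTree as ET

# An invented sample feed, newest episode first, as podcast feeds usually are.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Science Show</title>
  <item><title>Episode 3</title><guid>ep-3</guid></item>
  <item><title>Episode 2</title><guid>ep-2</guid></item>
  <item><title>Episode 1</title><guid>ep-1</guid></item>
</channel></rss>"""

def new_episodes(feed_xml, last_seen_guid):
    """Return titles of items newer than the last GUID we notified about."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        if item.findtext("guid") == last_seen_guid:
            break                         # everything below this is old news
        fresh.append(item.findtext("title"))
    return fresh
```

A real skill would pair this with per-feed tags ("science", "news") so that “play me a podcast about science” can pick a matching subscription.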
VOIP integration would be cool. I can imagine having Mycroft in different locations, not on the same network, like at work and at home. With VOIP integration, each Mycroft unit could have a public address that could be called from any other internet-enabled device.
EDIT — ahh just saw @WillCooke had suggested this already.
Password or other two-factor security. Given the security issues, perhaps some functions should require a secondary passcode/confirmation. Imagine a guest saying, “Mycroft, read my email,” and then having your private emails read. Mycroft could prompt for a secondary private wake-word before performing the action.
Me: Mycroft, read my email
Mycroft: Please confirm reading email.
Me: Make-it-so.
Mycroft: [proceeds to read email]
Even a two-factor/one-time password could be implemented, though this would require the one-time password to be passed over the public internet.
Me: Mycroft, read my email
Mycroft: Please confirm reading email.
Me: [reads two factor pass from phone] Alpha-Bravo-Five.
Mycroft: [parses ‘AB5’, checks it and then proceeds to read email]
[edit: the name of this app should be Make-It-So]
On further thought, the two-factor code might not have to go out over the internet. If Mycroft can recognize a good, but limited, set of wake-words, then the two-factor code could be composed of those words. So, let’s assume that Mycroft can recognize the digits zero through nine; 0–9 is good enough for a one-time passcode.
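Sticking to a ten-word digit vocabulary makes the check easy to sketch. Below is a hedged Python illustration of the flow: generate a random digit code, convert the spoken digit words back into digits, and compare in constant time. The function names are made up; a real skill would display the code on the phone and listen for the user reading it back.

```python
import secrets
import hmac

# The small, fixed vocabulary the recognizer would need to handle.
DIGIT_WORDS = {
    "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
    "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
}

def generate_code(length=4):
    """Generate a one-time code using a cryptographically secure RNG."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def heard_to_code(utterance):
    """Convert spoken digit words ('one five three seven') into '1537'."""
    return "".join(DIGIT_WORDS.get(word, "") for word in utterance.lower().split())

def verify(expected, utterance):
    """Constant-time comparison, so the check leaks nothing via timing."""
    return hmac.compare_digest(expected, heard_to_code(utterance))
```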
Not necessarily. They’ve already said that they’ll be using PocketSphinx to detect the wake-word (i.e. “Mycroft”, or whatever you set it to). I suppose it could be configured to recognize two-factor authentication codes as well…?