What "Skills" Will You Make for Mycroft?


@nogre: For @WillCooke’s proposed “Mycroft to Mycroft voice IM between houses” thing, it should be feasible to use webRTC audio-only


The question is the capability of pocketsphinx versus the sophistication of the two-factor code. Since the project has stated that we can change wake-words, having a special confirmation word should be within reach. The issue is whether we can have lots of wake-words, one for each symbol used in passcodes.

For instance, the two-factor authenticator on my phone produces six-digit numerical codes every thirty seconds, e.g. 754707 … 903992 … etc. So if pocketsphinx can recognize spoken numerals, that would be sufficient to enter this sort of code.
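Once the recognizer produces the digit words as text, the rest is trivial. A minimal sketch (the word list and function name are my own, not a pocketsphinx or Mycroft API):

```python
# Hypothetical sketch: turn a transcript of spoken digits (e.g. from
# pocketsphinx) into a numeric 2FA code string. "oh" is included since
# people often say it for zero.
DIGIT_WORDS = {
    "zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3",
    "four": "4", "five": "5", "six": "6", "seven": "7",
    "eight": "8", "nine": "9",
}

def spoken_digits_to_code(transcript: str) -> str:
    """Map 'seven five four seven zero seven' to '754707'."""
    return "".join(DIGIT_WORDS[w] for w in transcript.lower().split()
                   if w in DIGIT_WORDS)
```

The vocabulary is only ten words (plus "oh"), which is exactly the kind of small, closed grammar an offline recognizer handles well.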

But if someone wants to use a code like `5vV#Sj^]3 then there would be an issue. “Close square bracket” would likely have to be parsed online, as would “lowercase v” and “uppercase V”. Also this symbol, “#”, is referred to as both a hash tag and a pound sign (at least in the USA). And what are “ ` ” and “ ^ ” called again (backtick and caret, as it turns out)? So the complexity can grow quickly… which is kind of the point in security, but not so good in language parsing.
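To make the naming problem concrete, here is an illustrative (and deliberately incomplete) fragment of the mapping a symbol grammar would need; note how several spoken phrases collapse onto the same character, which is what makes arbitrary passcodes hard to parse offline:

```python
# Illustrative only: a few of the many spoken names a symbol grammar
# would have to cover. This is not any recognizer's actual grammar
# format, just a plain dict showing the ambiguity.
SYMBOL_NAMES = {
    "hash": "#", "hashtag": "#", "pound sign": "#",
    "caret": "^", "hat": "^",
    "backtick": "`", "grave accent": "`",
    "close square bracket": "]", "right bracket": "]",
}
```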

My hope is that the numerical processing is within the reach of the system. No special grammar is needed and it would work with the existing authenticator programs for people who want extra security.


Ah, right. I see. I always assumed that 2FA codes were numerical. Guess not then.


I am, myself, not convinced that reading the codes out loud is a good thing, even given that they’re only temporary.


Agreed. But the idea was to give people the option to have whatever sort of confirmation system they would like.

Initially I was thinking about Gmail’s anti-drunk-emailing feature: Gmail could be made to pose simple captchas or math problems before sending an email if it thought you were drunk. Likewise, Mycroft could be set to require simple barriers for different reasons. Then I expanded that out to include general security.
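A barrier like that is simple to sketch. The following is a toy example in the spirit of Gmail’s old Mail Goggles lab, not a Mycroft API; all names are hypothetical:

```python
import random

def make_challenge(rng=random):
    """Generate a simple 'are you sure?' arithmetic challenge the user
    must answer before a sensitive action runs."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    question = f"What is {a} times {b}?"
    answer = str(a * b)
    return question, answer

def barrier_passed(spoken_reply: str, answer: str) -> bool:
    """Compare the (transcribed) reply against the expected answer."""
    return spoken_reply.strip() == answer
```

A skill could swap in harder challenges, or a passphrase check, depending on how high the user wants the barrier.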


How is Mycroft different than Jasper? Most of these things are already available in Jasper. What’s new here?


Jasper is a neat project @G10DRA, but there are some key differentiators between Mycroft and Jasper.

The first is that we are not simply a client that connects to APIs. We have the Adapt Intent Parser and now our OpenSTT initiative, key projects within the Mycroft effort that Jasper can make use of, but that we are pioneering.

As we release our voice loop you will find that our approach to keyword recognition, and how Mycroft is structured, is different as well. Lead dev @seanfitz can probably speak to this more in-depth. But while similar, we are not the same.

Finally, there is the goal. I have reached out to some Jasper devs to see if our two projects can collaborate. We explored using Jasper early on, but had issues getting it set up and running, and we very much want to ensure anyone is able to contribute with relative ease. We would still love to work with the Jasper community. As of right now, though, our goal is to create something that is easy to contribute to and use, and that can integrate into as many devices and as much software as possible; if the Jasper community wants to join us in that, we would love it.


@ryanleesipe, is there any plan to put a camera on Mycroft in the first cut?
That would open the door for a lot more ‘skills’.


I love Jasper; it definitely got the ball rolling on these types of automation projects. That being said, it does have its limitations. As @ryanleesipes called out, their intent determination implementation, while functional, is brittle. It’s based on regular expressions, which, aside from being difficult to write well, are error-prone and difficult to debug when conflicts arise. Adapt will allow for more skills, allow users to target specific implementations of the same skill (for example, playing music on Pandora vs. Spotify when both skills are enabled), and allow developers to determine when their skill will potentially cause ambiguity within an operating Mycroft environment. I’m hoping to call out more of this in a future blog post, and to be able to point to code samples as Adapt is released publicly.
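To illustrate the difference in approach (this is a toy sketch of keyword-weighted intent determination in plain Python, not Adapt’s actual API, which had not been released when this was written): each intent declares required and optional keywords, and the best-scoring intent wins, rather than the first regex that happens to match.

```python
# Toy keyword-based intent determination. All names are illustrative.
def score_intent(tokens, required, optional):
    """Score 0 unless all required keywords are present; optional
    keywords break ties between intents with the same requirements."""
    tokens = set(tokens)
    if not required <= tokens:
        return 0.0
    return 1.0 + 0.1 * len(optional & tokens)

def determine_intent(utterance, intents):
    """Return the name of the best-scoring intent, or None."""
    tokens = utterance.lower().split()
    best = max(intents,
               key=lambda i: score_intent(tokens, i["required"], i["optional"]))
    if score_intent(tokens, best["required"], best["optional"]) > 0:
        return best["name"]
    return None

INTENTS = [
    {"name": "PandoraPlay", "required": {"play"}, "optional": {"pandora"}},
    {"name": "SpotifyPlay", "required": {"play"}, "optional": {"spotify"}},
]
```

With this shape, two music skills can coexist: “play something on spotify” targets one implementation, “play my pandora station” the other, and a developer can see at registration time which keywords overlap.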

Also, sadly, it appears Jasper has not continued active development (last commit in July). We’re hoping to build a strong community around both Adapt and Mycroft to keep the good times rolling :smile:

@ryanleesipes is also right in stating that we’re building more than just the bits on the box. Initial versions of Mycroft will likely look very similar to Jasper; the whole thing will run on your Raspi. As time goes on, we’ll pull things into the cloud (or allow you to run your own backend) to provide better experiences than can be achieved in the limited-resource world of the Pi. We want to collect data (from our generous community) to train better speech and learning algorithms, and release those algorithms, code, and data back into the community. There is a moon, and we are shooting for it.

Kill all the moons!


Mean time in Japan


I would also like Mycroft to support:

  1. Deezer for playing music, maybe through Chromecast as is done for Netflix.
  2. Smart lights compatible with MiLight/LimitlessLED.
  3. Executing SSH commands.

I can help develop these modules once the code is released for developers.
I decided to pre-order a unit; I suppose I will be able to create modules then, if they aren’t already there out-of-the-box.
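Item 3 above is the easiest to sketch. Here is a minimal version that shells out to the system ssh client, assuming key-based auth is already set up; the host, function names, and options are placeholders, not part of any released Mycroft API:

```python
import subprocess

def build_ssh_argv(host, command):
    """Assemble the argv for the system ssh client. BatchMode prevents
    ssh from hanging on a password prompt when no key is available."""
    return ["ssh", "-o", "BatchMode=yes", host, command]

def run_remote(host, command, timeout=10):
    """Run a command on a remote host and return its stdout."""
    result = subprocess.run(build_ssh_argv(host, command),
                            capture_output=True, text=True,
                            timeout=timeout, check=True)
    return result.stdout
```

A real skill would want a whitelist of allowed commands; letting a voice assistant run arbitrary shell commands on remote machines is exactly the kind of thing the two-factor discussion earlier in this thread applies to.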


We are talking about this now: https://mycroft.ai/how-mycroft-plans-to-make-building-skills-accessible/


I’ve not been participating enough on the forums here! :smiley: I’ve been thinking a bit about applications that need escalated privileges. I really like the two-factor ideas, and a custom confirmation/password would get the job done (as long as no one else hears the passphrase). I’m left wondering about voice fingerprinting…
EDIT: This is another reason I’m interested in Mycroft on the desktop. Who else here has a GPU with OpenCL or CUDA support for those extra-heavy processing tasks?!


This would be awesome! Walkie-talkie over IP :slight_smile:


The ability to play music from a local source such as a USB stick would be handy, as would playing from an existing local networked media server.


Considering the USB ports are exposed, this could be done. The local media server is an idea we had been working on, but I hadn’t considered someone just plugging a USB stick into the back of the unit. Good idea @kongwak!
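The skill side of that is mostly a directory walk. A sketch, assuming the stick gets auto-mounted somewhere known (the mount point and names here are assumptions, not a Mycroft API):

```python
import os

# Common audio extensions; extend as needed.
AUDIO_EXTENSIONS = {".mp3", ".ogg", ".flac", ".wav"}

def find_audio_files(mount_point="/media/usb"):
    """Scan a mounted USB stick for playable audio files. A real skill
    would watch udev or the platform's auto-mount location rather than
    hard-coding a path."""
    found = []
    for root, _dirs, files in os.walk(mount_point):
        for name in files:
            if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                found.append(os.path.join(root, name))
    return sorted(found)
```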


Or integrate an MMC slot in the housing, connected to a USB port. I am making such a device for my car, to play music from a memory card instead of CDs, using a Raspberry Pi.


The ability to put Mycroft into an active listening mode where it stays “quiet” and only prints the answer to a log. I am starting a web series with friends, and it would be awesome to add instant lookup for the chat room. You could link it to all sorts of back ends for video games, which would be great for the Twitch community.
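The quiet mode itself is a small switch at the response step. A toy sketch, with all names hypothetical (not part of any released Mycroft API):

```python
def respond(text, quiet=False, log_path="mycroft_answers.log", speak=print):
    """Speak the answer normally, or, in quiet mode, append it to a log
    file instead so a chat overlay can pick it up."""
    if quiet:
        with open(log_path, "a") as log:
            log.write(text + "\n")
    else:
        speak(text)  # stand-in for the real TTS call
```

Tailing that log file (or pointing a chat bot at it) would give the instant-lookup overlay without any audio interrupting the stream.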


The USB port is exposed on the back of the device. We’re looking at a way to allow Mycroft to play local media like that fairly easily.


One could see it replacing the phone over time, if you could hold conversations with contacts at another distant location!