Skills consuming private LAN services (intranet instead of "cloud" services)

Is there currently support for, or are there plans to support, Skills that can connect to resources on the LAN that Mycroft is attached to? A related question: which device fulfills the intent? Does the little Mycroft unit itself issue the network requests once it has received the processed intent? Or does intent fulfillment also happen “in the cloud”, with the local Mycroft terminal merely given a status and a message to speak to the user? If the latter, could there be some special-case extension that would allow local network actions as well?

I ask this with the understanding that speech recognition would still send data outside the LAN, so there is no fully isolated, no-leakage option for the foreseeable future. In some cases that might be tolerable for a corporation, while publicly exposing internal services would not be.

Scenario: a corporate office, with an accountant wandering down the hall to the IT corner. Everyone in IT is already out and about fixing things. The accountant is greeted by Mycroft and uses the “file service desk assistance request” skill to describe the current problem with their email client and web browser (which is why they couldn’t file a ticket from their desktop workstation in the first place). Mycroft reviews the details, says, “Got it, thanks,” and submits the collected data to a RESTful service on a server on the private LAN, not the public internet. The RESTful service replies with a confirmation and an issue record ID. Mycroft then says, “I’ve queued your assistance request in the Service Desk issue tracker. You may wish to jot down this issue ID, although I will also email it to you: ITR#0234-332.”

Great question. The awesome @steve.penrod and @forslund are best placed to respond here :slight_smile:

Hi!

I’ll try to hit all the questions above.
LAN resources:
The skill framework is very open and allows connecting to basically any resource using standard Python calls and modules. This means a skill can integrate with a LAN resource without too much trouble; a rough sketch of your service-desk scenario is below.
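To make that concrete, here is a minimal, untested sketch of the service-desk idea. The endpoint URL, the vocabulary name, and the fields in the response are all hypothetical placeholders, and it uses the decorator-based skill API from mycroft-core; treat it as an illustration rather than a finished skill:

```python
# A rough sketch only -- the endpoint URL, vocabulary name ("FileTicket"),
# and the "id" field in the response are hypothetical placeholders.
import requests
from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill, intent_handler


class ServiceDeskSkill(MycroftSkill):
    """Files a help-desk ticket against a REST service on the private LAN."""

    @intent_handler(IntentBuilder("FileTicketIntent").require("FileTicket"))
    def handle_file_ticket(self, message):
        # Ask the user to describe the problem and wait for the reply.
        description = self.get_response("What seems to be the problem?")

        # Submit the collected data to a server on the private LAN --
        # nothing here touches the public internet.
        resp = requests.post(
            "http://servicedesk.corp.lan/api/issues",  # hypothetical endpoint
            json={"description": description},
            timeout=10,
        )
        issue_id = resp.json()["id"]  # hypothetical response field

        self.speak("I've queued your assistance request in the Service Desk "
                   "issue tracker. Your issue ID is {}.".format(issue_id))


def create_skill():
    return ServiceDeskSkill()
```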

Intent processing:
Intents are processed locally on the device, as are skills. The process is (a small example follows the list):
1. The mic records sound.
2. The audio is sent to STT in the cloud.
3. STT replies with a text string.
4. The text string is passed to the intent parsers (there are two on the Mycroft device).
5. The matching intent parser triggers the appropriate skill.
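For example, you can exercise steps 4 and 5 without the mic or the cloud STT by injecting a text utterance directly onto the message bus. This sketch assumes the default bus address used by mycroft-core (ws://localhost:8181/core) and the websocket-client package; adjust if your configuration differs:

```python
# Push a text utterance straight to the intent parsers (steps 4 and 5),
# bypassing the mic and the cloud STT entirely.
import json

from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://localhost:8181/core")  # default bus address
ws.send(json.dumps({
    "type": "recognizer_loop:utterance",
    "data": {"utterances": ["what time is it"]},
}))
ws.close()
```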

Fully isolated:
There is the option to use a local Kaldi server; however, it is really messy to set up and not as accurate as, for example, Google STT, so it's not something we recommend.

To isolate the device from the Mycroft servers, a few things should be turned off (configuration updates and pairing, for example), and you'd need to get an account with some STT service.
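For illustration only: selecting a different STT backend is done in the user configuration file, ~/.mycroft/mycroft.conf. The key names below match recent mycroft-core releases, but verify them against your version's default configuration; the exact switches for disabling updates and pairing vary by release:

```json
{
  "stt": {
    "module": "kaldi"
  }
}
```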

The scenario you describe should be achievable with the current state of Mycroft. If you're interested in this kind of setup, I'm sure we can help you out.

Also worth mentioning: nothing is sent to the external STT until the “Hey Mycroft” wake word is triggered, so there is no passive recording of everything you say.

If I missed something, or if you have further questions, I'd be happy to help.

/Åke

Thank you, that was a very clear and helpful answer, @forslund and @KathyReid. And good news from my point of view.
One quick follow-up, though kind of a tangent: is there currently a method to trigger the wake sequence with something like a passive infrared sensor or a webcam? No recording or anything, just a simple “visual” way to prompt the Mycroft device to say “Hello, how may I help you?” Perhaps for the scenario in which the wandering accountant doesn't realize we offer such a nifty AI service for their convenience.

You can trigger it by sending a message to the message bus, which can be done from pretty much anything.
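As a quick sketch (assuming the default bus address and the websocket-client package), the message mycroft-core uses to start listening, as if the wake word had been heard, is mycroft.mic.listen:

```python
# Make Mycroft start listening, just as if the wake word had been heard.
import json

from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://localhost:8181/core")  # default bus address
ws.send(json.dumps({"type": "mycroft.mic.listen", "data": {}}))
ws.close()
```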

You could use a digital input on the 40-pin header on the back, then wire up a pressure switch. You could do the same with an IR sensor, a motion sensor, etc. You could also use the network and, say, scan for a specific MAC address (your phone), and when it appears in the ARP table the unit would know you are home (and could welcome you by name!).
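For instance, here is an unpolished sketch tying a switch (or a PIR sensor's digital output) on the Pi's header to that same bus message. The pin number is an arbitrary choice, and it assumes the RPi.GPIO and websocket-client packages are installed:

```python
# Poke the message bus whenever a switch (or PIR output) pulls a GPIO low.
# Pin 17 is an arbitrary choice; wire the switch between the pin and GND.
import json
import time

import RPi.GPIO as GPIO
from websocket import create_connection  # pip install websocket-client

PIN = 17  # BCM numbering

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        # Block until the input goes low, then send the wake-up message.
        GPIO.wait_for_edge(PIN, GPIO.FALLING)
        ws = create_connection("ws://localhost:8181/core")
        ws.send(json.dumps({"type": "mycroft.mic.listen", "data": {}}))
        ws.close()
        time.sleep(0.5)  # crude debounce so one press fires once
finally:
    GPIO.cleanup()
```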

Totally cool. Those are some intriguing and fun possibilities.
I am looking at the message bus overview (https://docs.mycroft.ai/core.overview/messagebus). I was expecting to see a message type of “Digital I/O” (or something like it) where I could register a “handler function”. Maybe I'm thinking about this wrong. Or maybe it's a specific case of the generic “Message” type with some identifying attributes?

I’m not really sure, but it will be fun to learn how to do it together (as CEO I rarely get to do fun stuff like dig into the software).

Can you walk me through the specifics of your use case? I'll build one at my house at the same time.

Why don’t we start with something simple: a momentary switch that, when you press it, triggers the “giggle” skill.

When the giggle skill is triggered, Mycroft laughs (i.e., the switch tickles it). You game?

I’d surely love to do that, but I don’t have the hardware yet. I have a plain-vanilla Raspberry Pi 3 on order; I will certainly give this a shot once I have it all PiCroft-ed.

By the by, I saw the recent news that you’ve completed the initial Kickstarter fulfillment cycle; congratulations! Where can I find a schedule or estimated shipping date for the next round? The store currently shows the Mark 1 as a “preorder” item, but I didn’t see any dates or a description of the pre-order process.

We have them in stock. We’re in the process of switching over to WooCommerce, but until then we are taking “pre-orders” that ship the next day. So pre-order one and @Darren-Mycroft will get your shipment out right away.
