Mycroft Home: using more than one device

Hi,

I am using Picroft 18.8.2

To learn skill development, I created a skill that sends commands to a device controlling my home's rolling shutters. Thanks to settingsmeta.json I get a form on the Mycroft Home Skills page where I can enter the device's IP address. The skill works fine. I can now control my rolling shutters manually, from a computer through the web, from my smartphone with an Android app when I am away, and now by voice. Interacting by voice is fascinating. The limitation is that you have to stay within earshot, so multiple devices are needed.
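For reference, a minimal settingsmeta.json along these lines produces such a form on home.mycroft.ai (the section title and the `ip_address` field name are just an illustration, not necessarily my actual file):

```json
{
  "skillMetadata": {
    "sections": [
      {
        "name": "Shutter controller",
        "fields": [
          {
            "name": "ip_address",
            "type": "text",
            "label": "IP address of the shutter device",
            "value": ""
          }
        ]
      }
    ]
  }
}
```

The skill then reads the value with `self.settings.get("ip_address")` once the settings have been synchronised from Home.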

Since I always have in mind working on French language support, I thought it would be better to keep one Mycroft device working in English and use another one for experiments. I replaced the SD card in the Raspberry Pi, followed the boot process as if it were a new device, and paired it. I now have two Mycroft devices, but a single Skills page and a single Settings page for both. This configuration is not easy to handle.

When using device n°2, and despite the fact that the skills work well on it, the owner of the skills is always device n°1, as shown by the (i) icon. Getting the settings of my custom skill onto device n°2 was difficult because the form was already filled in for device n°1. I removed it, and after multiple tries I finally got my custom skill's form back on the Skills page. The owner is now device n°2. I won't retry with device n°1!

The easiest solution would be to create one account per device, but I don't think that is the usual way; I have certainly missed something. How is Mycroft Home meant to work when we use more than one device and need different parameters for each?

Hi there @henridbr, this is a really strong problem description - thank you.

At the moment the only solution we have is to go to the Skill Settings page on home.mycroft.ai and hit the reload icon 🔄 to delete the Skill and reload it from the new Device. The owner is then reset to the new Device.

Going forward, we’d like to build in more functionality to better support synchronization and communication between multiple Devices - but we haven’t developed that yet. For future development plans, please do have a look at our Roadmaps.

Thank you @KathyReid

It's the same problem as with sound devices: either one device driving multiple speakers, all playing the same sound, or multiple devices each playing its own sound.
The first solution would be more appropriate as a personal assistant for elderly people, and the second for families with many different uses, or for offices.
Good to dream a little!


Hi there

I'm also facing the same issue. I own a house with multiple rooms in Belgium; we are a family with, depending on the moment, between 0 and 7 children, and I have always wanted a so-called Digital Servant. I will not describe the whole concept in this discussion, but privacy is part of it. So I set out to find the best suited and accepted configuration and design for the hardware to place in the house: what to place, and where.

I finally came to a design with one central decision-processing unit and multiple satellites. I built the central unit (the master) on an Nvidia Jetson TX2. The satellites are cheaper but good-looking boxes, like a good old radio, each with quality loudspeakers, a Raspberry Pi + HiFiBerry, a screen, a JeVois A33 camera, a microphone array and some electronics. The satellites gather voice commands and face pictures, which are streamed to the master. The master plays the same role as a satellite, except that it provides the CPU + GPU power needed to process face recognition and voice processing for itself and all the satellites. PocketSphinx runs on each satellite for keyword spotting, and the rest is streamed to the master.

I have installed Mycroft and Kaldi on the Jetson TX2 (the master) and I am currently writing the streaming processes for video and voice between the master and the satellites. I don't know Mycroft well yet, but after installing it on the TX2 and quickly trying to understand the basic concepts behind skills and the messaging bus, I think Mycroft is not designed at this stage to handle multiple entities, and I am wondering, for example, how to "route" voice responses back to the correct satellite.

I just finished installing Mycroft on the Jetson TX2 and I don't know the Mycroft structure yet, still a lot to learn :slight_smile: but I think that supporting multiple satellites is not just a matter of writing a new skill; it needs diving into the core system and the communication process.
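To illustrate the routing question, here is a rough sketch (the framing, names and port are purely illustrative, not Mycroft code) of how each streamed audio chunk could be tagged with a satellite id, so the master knows which box to send the spoken response back to:

```python
# Hypothetical satellite <-> master audio framing, assuming a plain TCP
# link. Each frame is: 2-byte satellite id + 4-byte payload length + PCM.
import socket
import struct

MASTER_PORT = 5555  # assumed port on the master, illustrative only

def send_frame(sock: socket.socket, pcm_chunk: bytes, satellite_id: int) -> None:
    """Satellite side: length-prefix the chunk and tag it with our id."""
    header = struct.pack("!HI", satellite_id, len(pcm_chunk))
    sock.sendall(header + pcm_chunk)

def recv_frame(sock: socket.socket):
    """Master side: read one tagged chunk, return (satellite_id, payload).
    The id tells the master where to route the voice response."""
    satellite_id, length = struct.unpack("!HI", _recv_exact(sock, 6))
    return satellite_id, _recv_exact(sock, length)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since TCP recv() may return partial data."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf
```

Inside Mycroft itself the response would then have to be emitted on the messagebus with that satellite id attached, which is exactly the part that seems to require changes in the core rather than a skill.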

On the other hand, I'm convinced that the only viable "digital servant" is one built in the spirit of a real servant: someone accepted by the whole family, someone who is there (almost) everywhere, someone who respects the private life of each member of the family, both at home and over the internet, and someone who can potentially get to know each member of the family very well, through continuous improvement of the neural network parameters used for face and voice recognition, for example, or for some decision models.
Trusting the system is the goal of Mycroft, so why would you want to hide your conversations and yourself from the camera and the microphone (just as with a real servant, by the way :wink:)?

But, and this is the key point, because the framework is dedicated to and trusted by its users, the precious data (the parameters of the neural network models, continuously fine-tuned) become more and more accurate and unique over time. Even Google and Amazon are not able to build such accurate data, because they cannot build a trusted relationship with you, at least concerning faces, personal features of you and your family, and your habits inside the house. You don't need to worry about privacy: you know it will keep your secrets, it is designed to do so. Nevertheless, security matters. Trained face and voice model parameters will become really sensitive data over time, because another model could use them to perfectly identify the person they belong to. On the other hand, no picture of you is kept in the system: pictures are used only for the time needed to improve the neural network model, and are deleted right after.

Kathy, we need this feature! :sunny:

I take this opportunity to thank the whole Mycroft team and the community for developing and supporting such an initiative, and I hope to acquire, in the near future, enough Mycroft knowledge and know-how to contribute somehow as well.


Hi Jeeves,

It sounds like a great setup you have in mind, and I very much agree with your focus on respecting the privacy of each family member. The sensitivity of our biometric data is also a critical point.

We do have some important updates coming very soon to provide better handling of multiple Devices. However, there are a few aspects of your vision that aren't yet possible, such as speaker recognition: the ability for Mycroft to discern who is speaking to it and to use their data and settings as distinct from another family member's.

It is theoretically possible to use the satellites as thin clients that all communicate with a central Mycroft instance; however, that would require significant work. The standard setup is for each satellite to run its own instance and communicate with a central server for the processor-intensive tasks. There is a community project for this at https://github.com/MycroftAI/personal-backend, which is in effect a localised version of home.mycroft.ai, so everything can run within the confines of your LAN. This is, however, very much a work in progress.
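As a sketch of how that would plug in, assuming the personal backend is reachable at 192.168.1.10:5000 on your LAN (address and port are illustrative), each satellite's mycroft.conf would point its server section there instead of at api.mycroft.ai:

```json
{
  "server": {
    "url": "http://192.168.1.10:5000",
    "version": "v1"
  }
}
```

Pairing, Skill settings and the other Home services would then be served locally, subject to how much of the API the personal-backend project currently implements.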