A Service for Executing Intents on Visible Components

Let's discuss implementing visual results for Mycroft queries. This would open up whole new categories of skills and use cases for Mycroft. Since Mycroft is being integrated into everything possible, we have an opportunity to take advantage of the unique features of each form factor: desktops generally have a screen that is always available to display query results, the Mark-1/PiCroft are basically audio only, and on mobile a screen is sometimes available and sometimes it's audio only.

There could be a Mycroft daemon that tracks the relative location of each of your Mycroft devices and which output contexts are available to you. Using this, we could respond to intents in different ways depending on whether the user is near a screen, driving, listening over headphones with their phone screen off, etc.
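As a rough sketch of what such a daemon might track, consider the record below. Everything here (`DeviceContext`, `pick_response_mode`, the proximity threshold) is hypothetical, not existing mycroft-core code:

```python
from dataclasses import dataclass

# Hypothetical per-device record the context daemon would keep updated.
@dataclass
class DeviceContext:
    device_id: str
    has_screen: bool          # can display visual results
    has_audio: bool           # can speak results
    user_proximity_m: float   # estimated distance from the user, in meters
    driving: bool = False     # e.g. inferred from phone sensors

def pick_response_mode(ctx: DeviceContext) -> str:
    """Decide whether an intent result should be visual or audio-only."""
    if ctx.driving or not ctx.has_screen:
        return "audio"
    if ctx.user_proximity_m <= 3.0:   # close enough to read a screen
        return "visual"
    return "audio"

# A phone whose user is driving stays audio-only even though it has a screen.
phone = DeviceContext("phone-1", has_screen=True, has_audio=True,
                      user_proximity_m=0.5, driving=True)
assert pick_response_mode(phone) == "audio"
```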

Adapt would then use this device-availability context when selecting an appropriate skill response.
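Adapt has no notion of device capabilities today, so at minimum this would surface at the skill level. A minimal sketch, assuming the proposed daemon injects a hypothetical `device_context` dict into each skill; only `MycroftSkill`, `intent_handler`, `IntentBuilder`, and `speak_dialog` are real mycroft-core/Adapt APIs here, the rest is invented for illustration:

```python
from mycroft import MycroftSkill, intent_handler
from adapt.intent import IntentBuilder

class WeatherSkill(MycroftSkill):
    @intent_handler(IntentBuilder("WeatherIntent").require("WeatherKeyword"))
    def handle_weather(self, message):
        forecast = {"condition": "sunny", "high": "72"}  # stand-in data
        # device_context is hypothetical: supplied by the proposed daemon.
        if self.device_context.get("has_screen"):
            self.render_forecast(forecast)     # hypothetical visual output path
        else:
            self.speak_dialog("forecast", data=forecast)

def create_skill():
    return WeatherSkill()
```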

Implementation TODO List:

  • Fields added to home.mycroft.ai to store (see the first sketch after this list):
    – device location and capabilities
    – the current user's estimated proximity to each device
    – other compatible devices within range of Mycroft (Chromecast, Zigbee, etc.)
  • Service to process GPS location on mobile (see the second sketch after this list)
    – use GPS location to update nearby devices
    – use WiFi network joining to update nearby devices
    – use BLE proximity tags to update nearby devices
    – pool nearby devices into a ‘Global Message Bus’ capable of receiving and dispatching remote commands?
  • Updating Adapt to be aware of the additional capabilities of these remote targets
  • Existing intents will continue to work; new intents will be added over time
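For the home.mycroft.ai fields in the first TODO item, the stored per-device record might look roughly like this; every field name is illustrative, not an existing home.mycroft.ai schema:

```python
# Illustrative shape only; home.mycroft.ai does not define these fields today.
device_record = {
    "device_id": "mark-1-kitchen",
    "location": {"lat": 37.78, "lon": -122.41, "room": "kitchen"},
    "capabilities": ["audio_out", "led_matrix"],      # no full screen
    "user_proximity_m": 2.5,                          # estimated, kept fresh by the daemon
    "nearby_devices": [                               # compatible non-Mycroft targets
        {"type": "chromecast", "id": "living-room-tv"},
        {"type": "zigbee", "id": "hall-light"},
    ],
}
```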
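And for the mobile proximity service and the ‘Global Message Bus’ idea, a second sketch. The signal-fusion helpers are invented for illustration; the bus calls assume the mycroft-messagebus-client package, and the message type is hypothetical:

```python
from mycroft_bus_client import MessageBusClient, Message

def on_wifi_join(ssid, known_networks, nearby):
    """Joining a WiFi network implies proximity to the devices on it."""
    nearby.update(known_networks.get(ssid, []))

def on_ble_tag_seen(tag_id, tag_to_device, nearby):
    """Seeing a BLE proximity tag implies its paired device is in range."""
    if tag_id in tag_to_device:
        nearby.add(tag_to_device[tag_id])

def dispatch_to_nearby(nearby, bus_hosts, msg_type, data):
    """Pool nearby devices and forward a command to each one's message bus.

    A real implementation would keep connections open rather than
    reconnecting per message; this is just the shape of the idea.
    """
    for device_id in nearby:
        client = MessageBusClient(host=bus_hosts[device_id])
        client.run_in_thread()
        client.emit(Message(msg_type, data=data))
        client.close()

# Example: a BLE tag puts the kitchen Mark-1 in range, then we dispatch to it.
nearby = set()
on_ble_tag_seen("tag-42", {"tag-42": "mark-1-kitchen"}, nearby)
dispatch_to_nearby(nearby, {"mark-1-kitchen": "192.168.1.20"},
                   "proposal.remote.display",   # hypothetical message type
                   {"text": "Hello"})
```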