Avatar Link | Middleware between Mycroft and Avatars

I want to build avatars - digital characters - for Mycroft. My focus is on the character’s appearance and behavior (or Lower Mind), while Mycroft provides the advanced AI being embodied (or Higher Mind) (1).

I know there are a few other projects here where people are doing similar things (2). People are also making avatars for other voice assistants and for other types of AI (like chatbots).

Robustly compatible avatars are appealing, but it’s impractical to build and maintain interfaces for the various combinations of Avatar and Higher Mind.

Avatar Link could be a way to solve that. It’s a separate piece of open-source software to sit between Mycroft and the Avatar, which will provide a consistent interface no matter what changes occur with the Higher Mind - or even which Higher Mind is active. As long as the interface layer is maintained, all avatars that implement it won’t break with changes to Higher Minds.
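To make that concrete, here’s a sketch of the kind of mapping layer I have in mind. The Mycroft-side names are, as far as I can tell, real message bus types; the avatar-side vocabulary is purely illustrative, not an existing API:

# Sketch: map Higher-Mind-specific message types onto a small,
# stable vocabulary that avatars implement once.

# Avatar-facing events (illustrative, not an existing API).
AVATAR_EVENTS = {"listen_start", "listen_end", "speak_start", "speak_end"}

# Per-Higher-Mind mapping; only this table changes when Mycroft changes.
MYCROFT_TO_AVATAR = {
    "recognizer_loop:record_begin": "listen_start",
    "recognizer_loop:record_end": "listen_end",
    "recognizer_loop:audio_output_start": "speak_start",
    "recognizer_loop:audio_output_end": "speak_end",
}

def translate(mycroft_type):
    # Return the stable avatar event for a Mycroft message type, or None.
    return MYCROFT_TO_AVATAR.get(mycroft_type)

As long as the avatar-side vocabulary stays stable, swapping or updating the Higher Mind only means updating the mapping table.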

Here’s the repo, which currently just contains documentation.

Avatar-Link README

Design Document

  1. Is there already a project doing what I propose with Avatar Link?

  2. How can I receive data from the App Channel?

My search has turned up nothing on the first question. It has to be open source and decoupled from any particular Higher Mind and any particular Avatar implementation (so it should work with an Arduino-based robot or whatever).

For the second question, I could really use some guidance. I’ve done a bit of web development, but for this project I’m just now learning about WebSockets and Python’s web libraries. Similarly, I’ve only set up and run an Ubuntu Mycroft client, and I don’t know which of Mycroft’s inner workings I need to understand or interact with to make progress on Avatar Link. The Message Bus looks useful, but I’m not sure what to do with it.

I welcome any feedback or other assistance you can provide on these two questions or the project in general, either here or in the [Mattermost chat channel](https://chat.mycroft.ai/community/channels/avatar-link).

Thanks!


(1) Higher / Lower Minds. Apologies for introducing new terms. They’re the best names I could come up with for the two levels of AI behind this type of embodied avatar.

(2) Here are some relevant discussions about Mycroft avatars

[Tree Industries Avatars](https://community.mycroft.ai/t/mycroft-gets-a-3d-avatar/4623)

[Q.bo Avatar](https://mycroft.ai/blog/mycroft-on-the-q-bo-one-robot/)

[Some thoughts about what Avatars might need](https://community.mycroft.ai/t/what-do-i-need-for-a-3d-avatar/3988/5)



Regarding the architecture and “inner workings” of Mycroft, you’ll probably find this video by @steve.penrod helpful: https://www.youtube.com/watch?v=pyc3wWYoI8E. Although it’s two years old, it should still be valid.

As you probably know, Mycroft supports different hardware designs, among them the Mark-1 with a pixel ring (“eyes”) and an LED matrix (“mouth”), and the (future) Mark-2, which will have a touch-sensitive LCD. You should have a look at the mycroft-core source code to see how the enclosure mechanism, including the display manager, works: https://github.com/MycroftAI/mycroft-core/tree/dev/mycroft/enclosure
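For example, enclosure commands are just messages on the bus, so an alternative “enclosure” can emit or react to them itself. A minimal sketch, assuming the default bus address; the enclosure.eyes.blink type and its side payload are what I see in that source tree and may differ between versions:

import asyncio
import json
import websockets

async def blink():
    # Connect to the Mycroft message bus (default: port 8181, path /core).
    async with websockets.connect("ws://localhost:8181/core") as ws:
        # Emit the same message the Mark-1 enclosure reacts to.
        await ws.send(json.dumps({
            "type": "enclosure.eyes.blink",
            "data": {"side": "b"},  # "b" = both eyes, going by the source
            "context": {},
        }))

asyncio.run(blink())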


Great suggestions.

@steve.penrod’s video was exactly the sort of broad perspective I’d been trying to pull together from the documentation. Very helpful to step through the process from speech waveform to action. I’m curious how the improvements he mentioned are coming along, but they probably won’t affect what I’m doing at the moment.

I think you’re spot on with the Enclosure API providing relevant messages. That means the Avatar Link would be an Enclosure, as far as Mycroft is concerned.

I’m pretty much in the dark about how that’s done, but I have a clearer direction now. Thanks!

I was stuck on how to read messages from the message bus.

After playing with the example programs here, I believed that WebSocket was a point-to-point protocol (it runs over TCP). It turns out that Mycroft broadcasts to everything connected to the Message Bus.

Here’s the code I used to watch Message Bus traffic, including messages meant for the enclosure:

#!/usr/bin/env python

# WS Client Listener

import asyncio
import websockets

async def ListenMycroftMB():
    IP = "1.2.3.4"  # substitute your Mycroft IP address
    uri = f"ws://{IP}:8181/core"
    async with websockets.connect(uri) as websocket:
        while True:
            message = await websocket.recv()
            print(f"||| I heard {message}")

# Event Loop

loop = asyncio.get_event_loop()

try:
    asyncio.ensure_future(ListenMycroftMB())
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally: 
    print("closing Loop")
    loop.close()

I got outputs like this:

||| I heard {"type": "enclosure.mouth.events.activate", "data": {}, "context": {}}
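Since every message follows that same envelope (type, data, context), decoding it only takes a couple of lines. For example, the recv loop above could hand each raw message to something like this to pick out the enclosure-directed traffic:

import json

def handle(raw):
    # Each bus message is a JSON envelope: "type", "data", "context".
    msg = json.loads(raw)
    if msg["type"].startswith("enclosure."):
        print(f"enclosure command: {msg['type']} data={msg['data']}")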

Hey Derek,

Welcome to the Community, it looks like some exciting stuff you have planned!

On the message bus front - if you haven’t already seen it, check out our documentation on this:

WebSocket is actually a protocol in its own right: it runs over TCP but is quite different.

Each of Mycroft’s services listens to the message bus and decides whether there’s a message it needs to respond to. So a message emitted to the bus can be heard by any service that is listening.
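Here’s a sketch of that listen-and-filter pattern, building on the listener code above (the handler registry is just illustrative, not how mycroft-core implements it):

import asyncio
import json
import websockets

HANDLERS = {}  # message type -> handler function

def on(msg_type):
    # Register a handler for one message type; all others are ignored.
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@on("enclosure.mouth.events.activate")
def activate(msg):
    print("mouth events activated")

async def listen(uri="ws://localhost:8181/core"):
    async with websockets.connect(uri) as ws:
        async for raw in ws:
            msg = json.loads(raw)
            handler = HANDLERS.get(msg["type"])
            if handler:  # services simply skip messages they don't handle
                handler(msg)

asyncio.run(listen())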


Hi Derek!

I’m working on a Pi-based magic mirror enclosure for Mycroft, using the Mycroft API and Pi3D to render on Raspberry Pi hardware. I’ve got the back-end stuff running (no GitHub project yet), but building an animated model has been a challenge.

I was looking at your Avatar-Link for inspiration, but I can’t figure out what it forwards to (on port 9494). I’ve got the Mycroft Messagebus working.
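For what it’s worth, here’s roughly what I imagined the forwarder might do; this is purely a guess on my part (only port 9494 comes from your docs): relay bus traffic out to any avatar clients connected on a second WebSocket port.

import asyncio
import websockets

avatar_clients = set()

async def avatar_server(ws, path=None):
    # Avatars connect here (port 9494) and receive forwarded bus events.
    avatar_clients.add(ws)
    try:
        await ws.wait_closed()
    finally:
        avatar_clients.discard(ws)

async def forward_bus():
    # Read from the Mycroft message bus and rebroadcast to avatar clients.
    async with websockets.connect("ws://localhost:8181/core") as bus:
        async for raw in bus:
            for client in list(avatar_clients):
                await client.send(raw)

async def main():
    async with websockets.serve(avatar_server, "0.0.0.0", 9494):
        await forward_bus()

asyncio.run(main())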
