There are 3 different Kodi skills. This one seems to be the most recently added/updated. https://github.com/k3yb0ardn1nja/mycroft-skill-kodi
Yes! And among them all I loved the Mycroft remote Kodi skill. It really made the Kodi media player easy to use. Sometimes, when switching between Kodi add-ons or shows, it was a pain to get up and change them manually. Now this is cool.
Old thread, but I am facing the same issue: does anyone know if there is an up-to-date (and/or working) Kodi skill for Mycroft?
If not, I'll give it a try and develop one.
Not that I'm aware of @Luke - none of the Kodi skills have been updated for Python 3.
@Luke, I am attempting to get something working. This has been on my wish list for a while, but it doesn't look like any of the existing skills are being updated. I am doing some testing and will see what I can come up with in the next couple of days. If you want to give it a go, feel free as well. I have forked this one https://github.com/Cadair/mycroft-kodi as it uses the kodipydent interface, which looks pretty straightforward.
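For anyone following along: kodipydent is a thin wrapper around Kodi's JSON-RPC API, so a call like `my_kodi.Input.Up()` ultimately just POSTs a small JSON payload. Here is a rough sketch of the wire format, handy for debugging with curl (host and port here are assumptions; Kodi's web server has to be enabled for any of this to work):

```python
import json

# Build a Kodi JSON-RPC v2 request. kodipydent constructs these payloads
# for you (e.g. my_kodi.Input.Up()), but seeing the raw format helps
# when poking at the API with curl or writing tests.
def make_rpc(method, params=None, request_id=1):
    payload = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params:
        payload["params"] = params
    return json.dumps(payload)

print(make_rpc("Input.Up"))
# To actually send it (assuming Kodi's web server listens on port 8080):
#   requests.post("http://kodi-host:8080/jsonrpc",
#                 data=make_rpc("Input.Up"),
#                 headers={"Content-Type": "application/json"})
```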
I'm perfectly fine with you going for it. If you need a tester, I would be glad to give it a shot.
I will give it a go! At this point I am thinking of implementing:
- Play / stop / pause / resume functions for movies
- Search for movies
- Cursor control (up, down, left, right, select, back)
@pcwii I have another possible addition, though I have not really thought it through yet.
Similar to the work for KDE Plasma and the Mark II: being able to see / send info on the screen. With the right skin and such, Kodi could then act as the frontend for Mycroft. A bit similar to the new Alexa cube thing.
@j1nx, Might try to get that portion working first as it will greatly assist in my debugging. I hope I have not bitten off more than I can chew.
Saw your fork and initial work. I might join you in your efforts shortly (need a few more days for my WordPress screw-up).
Need the exact same, so…
Excellent. I am a bit of a GitHub noob and have not quite got the push / pull / merge / branch thing going yet. This is an area I will need to learn if we are going to collaborate on this. I 100% welcome the collaboration, though, so I will learn as we go. I know there are many people interested in this skill.
Ah cool, perfect combo, because I know Git but lack proper Python knowledge.
Just keep going; we can branch, cherry-pick and squash your commits later, making it nice for a PR.
Would you post the GitHub link, so everybody who reads the thread has it? (And because I am also a GitHub noob and have trouble finding the fork.)
Very much a work in progress (more like nothing working yet, but in progress). I have made some changes and have been doing some testing (tests.py) just to understand the kodipydent module. I have also included an API.txt for my reference. The repo can be found here.
Here is the progress so far…
- Notifications are working (albeit a bit slow).
- Cursor control is working (sort of). Have an issue where STT thinks "move kodi up" is "food co-op" (might try "move cursor up").
- Can't figure out how to get the conversational context to work for the cursor control, i.e. once I tell Mycroft to move the cursor, I would like to just say "up", "up", "right", "select".
- Have an issue trying to get the other kodi.py module to import, so this is causing issues with the play and search requests. Might need to move the functions into __init__.py if I can't figure it out.
Might have more time to play later this evening but that is it for now.
End of day update on the new Kodi Skill
- Notifications: "mycroft: turn kodi notifications on / off"
- Cursor control: "mycroft: move the cursor up / down / left / right". Still an issue with conversational context.
- Pausing a running movie: "mycroft: pause the movie"
- Restarting a paused movie: "mycroft: restart the movie"
- Stopping a running movie: "mycroft: stop the movie"
- Playing a movie: may need some assistance on how to extract the movie name from an utterance.
Wow, looks great for such little time!
About this issue:
"Can't figure out how to get the conversational context to work for the cursor control, i.e. once I tell Mycroft to move the cursor, I would like to just say up, up, right, select."
Shouldn't a simple while loop be sufficient?
When the user tells Mycroft to move the cursor, go into a while loop like this:
    while True (or while the user does not trigger stop):
        do something with the cursor move, and if cursor_move equals stop, then break
Something like this, just my 2 cents
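The loop idea could be sketched roughly like this. The `listen` callable stands in for something like Mycroft's `self.get_response()`; the names here are made up for illustration, not from any real skill:

```python
# Rough sketch: keep asking for a one-word direction until the user
# says "stop" (or nothing is heard). In a real skill `listen` would be
# self.get_response() and `move_cursor` would fire a Kodi Input call.
VALID_MOVES = {"up", "down", "left", "right", "select", "back"}

def cursor_loop(listen, move_cursor):
    """Repeatedly read a one-word command and act on it until 'stop'."""
    while True:
        command = (listen() or "").strip().lower()
        if not command or command == "stop":
            break  # user ended the interaction, or nothing was heard
        if command in VALID_MOVES:
            move_cursor(command)  # e.g. send Input.Up to Kodi

# Example with stubbed I/O:
heard = iter(["up", "up", "right", "select", "stop"])
moves = []
cursor_loop(lambda: next(heard), moves.append)
print(moves)  # ['up', 'up', 'right', 'select']
```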
Keep up the good work!
@pcwii Nice, running strong! If this is just "learning as you go", then I look forward to "getting the hang of it".
@forslund, I thought I would poke you to see if you could help with an issue I am having with the expect_response=True option. In my skill I want to be able to ask Mycroft: "move the cursor down"; at this point I want to set the conversational context and permit one-word requests: "down", "down", "right", "select". I am pretty sure I don't fully understand the way this context is expected to work, but here is my code snippet for reference.
def handle_move_kodi_intent(self, message):
    # (indentation restored; the if-branch bodies were lost in the paste,
    # so they are filled in here with kodipydent's Input.* calls)
    direction = message.data.get("DirectionKeyword")
    if direction == "up":
        self.kodi.Input.Up()
    if direction == "down":
        self.kodi.Input.Down()
    if direction == "left":
        self.kodi.Input.Left()
    if direction == "right":
        self.kodi.Input.Right()
    if direction == "select" or direction == "enter":
        self.kodi.Input.Select()
    if direction == "back":
        self.kodi.Input.Back()
    move_kw = message.data.get('MoveKeyword')
    kodi_kw = message.data.get('KodiKeyword')
    self.set_context('MoveKeyword', move_kw)
    self.set_context('KodiKeyword', kodi_kw)
    self.speak("o-k, next", expect_response=True)
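For reference, my mental model of what context injection should do is something like this toy sketch. This is a deliberately simplified stand-in, not the real Mycroft/Adapt API; the vocab table and class here are invented for illustration:

```python
# Toy model of Adapt-style conversational context: a one-word reply
# like "down" can satisfy an intent that normally needs "move kodi down",
# because the missing keywords get injected from remembered context.
class ContextManager:
    def __init__(self):
        self.context = {}  # keyword name -> remembered word

    def set_context(self, keyword, word):
        self.context[keyword] = word

    def match(self, utterance, required):
        """Fill required keywords from the utterance, else from context."""
        vocab = {"move": "MoveKeyword", "kodi": "KodiKeyword",
                 "up": "DirectionKeyword", "down": "DirectionKeyword",
                 "left": "DirectionKeyword", "right": "DirectionKeyword"}
        data = {}
        for word in utterance.lower().split():
            if word in vocab:
                data[vocab[word]] = word
        for keyword in required:
            if keyword not in data and keyword in self.context:
                data[keyword] = self.context[keyword]  # injected from context
        return data if all(k in data for k in required) else None

cm = ContextManager()
required = ["MoveKeyword", "KodiKeyword", "DirectionKeyword"]
print(cm.match("move kodi up", required))  # full utterance matches on its own
cm.set_context("MoveKeyword", "move")      # remember after handling the intent
cm.set_context("KodiKeyword", "kodi")
print(cm.match("down", required))          # bare "down" now matches too
```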
That looks kind of correct. (Technically you shouldn't need to send the second parameter of set_context, but this is still correct.)
What is the actual issue you have? Is it just not detecting the follow-up commands, or is some other skill triggering?