Stopping play_wav()?

I’ve been messing about writing a skill and it’s mostly going well (although the documentation hasn’t been much help, I’m afraid). However I can’t seem to figure out if there’s a way to stop the play_wav() function once it’s started.

Alternatively, what other ways are there to get a skill to play sound, and which formats are supported (other than play_mp3())?

I’ve also tried getting VLC to play, and whilst it works from a normal command line, it’s not interested when run via Mycroft. I figured native functionality would be better though, as I had assumed I’d be able to yell “Stop!” or similar to end the noise :grin:

It seems like I’m supposed to instead be using the Mycroft Audio Service (I’d assumed that play_wav() was part of that). I’m now trying to implement that.

I’ve imported the right thing and my initialize self thingy looks like this:

    def initialize(self):
        play_video_audio_intent = IntentBuilder("PlayYoutubeAudioIntent").require("play_youtube_audio").build()
        self.register_intent(play_video_audio_intent, self.play_video_audio_intent)
        self.audio_service = AudioService(self.emitter)

Of course the skill doesn’t work and throws an error:

File "/opt/mycroft/skills/skill-youtube-audio.flamekebab/__init__.py", line 31, in initialize
self.audio_service = AudioService(self.emitter)
AttributeError: 'YoutubeAudioSkill' object has no attribute 'emitter'

I haven’t a clue what that error means and there doesn’t appear to be anything in the documentation about it. Any idea what basic mistake I’m making here?

I’m running the unstable branch.

The audio service hands off the playing to other programs that may not have an easy way to stop.

Hi there,

Have you seen our AudioService documentation?
https://mycroft-core.readthedocs.io/en/master/source/mycroft.html#audioservice-class

These technical docs aren’t well linked from the primary user docs, which is something we really need to change.

Any thoughts on what this emitter business is about?

I can see it mentioned in these two skills:
https://github.com/forslund/event_skill/blob/master/__init__.py
https://github.com/skeledrew/brain-skill/blob/master/__init__.py

But beyond that I can’t find any mention of what it’s about or how to use it.

So for the AudioService I believe you want to be passing it a reference to the messagebus, e.g.:

    self.audio_service = AudioService(self.bus)

It then knows where to send the appropriate messages.
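
Roughly, wiring that up in a skill might look something like the sketch below (the skill name, intent file and wav path are made up for illustration, they aren’t from your code):

    from os.path import dirname, join

    from mycroft import MycroftSkill, intent_handler
    from mycroft.skills.audioservice import AudioService


    class PlayNoiseSkill(MycroftSkill):
        def initialize(self):
            # Hand the AudioService the skill's messagebus connection
            self.audio_service = AudioService(self.bus)

        @intent_handler('play.noise.intent')
        def handle_play_noise(self, message):
            # play() accepts a file path or URL, or a list of them
            self.audio_service.play(join(dirname(__file__), 'noise.wav'))

        def stop(self):
            # Returning True tells Mycroft this Skill handled "stop"
            self.audio_service.stop()
            return True


    def create_skill():
        return PlayNoiseSkill()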

The emitter is what sends messages to the bus. However, its format has changed over the years as Mycroft evolved and became more stable. So instead of self.emitter.emit we now have something like:

    self.bus.emit(Message("speak", {"utterance": "words to be spoken", "lang": "en-us"}))

However, if you’re writing a Skill you generally shouldn’t need to send messages to the bus directly. We instead use simpler helper functions, like:

    self.speak_dialog("words to be spoken")

And instead of the old self.emitter.on, we can call:

    self.add_event('mycroft.audio.service.stop', some_function)

which would trigger some_function every time the AudioService emitted a stop message.
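
For example, inside your Skill class it might look something like this (the handler name is just for illustration):

    def initialize(self):
        # The event name is a plain string on the messagebus
        self.add_event('mycroft.audio.service.stop',
                       self.handle_audio_stopped)

    def handle_audio_stopped(self, message):
        # Called whenever something asks the audio service to stop
        self.log.info('Audio service stop requested')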


Thanks, I’ll look into it. Where should I be looking to find out about this stuff? (It’s not currently mentioned in the documentation)

The various messages are documented at:

However I don’t think the bus.emit and add_event methods are currently covered. It’s one (of many) areas we need to expand on in the docs.

If you feel like writing something to help solidify what you’re learning please let us know :grin:

I’m asking because this page contains the instructions I followed. It doesn’t cover any of the emitter stuff and as a result the code doesn’t work at all.

I generally comment my code, like, a lot. I’m not a professional coder and I work in quite a few different languages so I don’t tend to remember how things work syntactically - I comment stuff to help my future self!

Which also makes me wonder - what happened to the comments in the Hello World skill? I’ve seen projects that use it as a basis that have the comments still intact but the current incarnation has nothing. Getting started creating a skill is a nightmare as a result. I know how to get Python to do what I want but getting to the stage where I can actually start writing a skill has been horrible.

Changing

    self.audio_service = AudioService(self.emitter)

to

    self.audio_service = AudioService(self.bus)

fixed the problem. It immediately worked as intended.

Related question:

    self.audio_service.play()

Which formats does that support? MP3 and WAV, sure, but anything else?

Thanks for the heads up on that documentation. I’ve fixed that up and added a link to the technical docs at least.

The other piece you might want to get familiar with is the Common Play Framework. This is what lets us choose between Skills when a user says a common phrase like “play metallica”, as there are likely multiple Skills able to respond to it. So, for example, if a user has an existing Pandora station called Metallica, then that should probably win out over a random Metallica playlist from Spotify.
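
If it helps, the rough shape of a Common Play Skill is something like this sketch (find_track is an imaginary helper here, not part of the framework):

    from mycroft.skills.common_play_skill import CommonPlaySkill, CPSMatchLevel


    class MyMusicSkill(CommonPlaySkill):
        def CPS_match_query_phrase(self, phrase):
            # Return a match if this Skill can handle "play <phrase>"
            track = self.find_track(phrase)
            if track:
                return (phrase, CPSMatchLevel.TITLE, {'uri': track})
            return None  # let other Skills answer instead

        def CPS_start(self, phrase, data):
            # CommonPlaySkill sets up self.audioservice for you
            self.audioservice.play(data['uri'])

        def find_track(self, phrase):
            # Imaginary lookup, replace with a real search of your library
            return None


    def create_skill():
        return MyMusicSkill()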

In terms of formats, we just use aplay, so whatever that supports should work. Other skills, like iheartradio, have explicitly used VLC to handle a broader range of streams.
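
If you want to double-check what your install actually calls, the command lines live in mycroft.conf. Something like this should show them (a sketch, assuming the default play_wav_cmdline / play_mp3_cmdline keys):

    from mycroft.configuration import Configuration

    config = Configuration.get()
    # The commands play_wav() / play_mp3() shell out to, e.g. 'aplay %1'
    print(config.get('play_wav_cmdline'))
    print(config.get('play_mp3_cmdline'))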


Thanks yourself! With the information you’ve provided I’ve slowly been able to make sense of how this stuff fits together and now my skill works! There’s plenty of room for improvement but it actually… works!

I started trying to wrap my head around the Common Play Framework and gave up. It’s easier to define a keyword to be said before the relevant query than try to figure that thing out.

aplay, eh? That helps a great deal. It doesn’t help that their format list on the man page was apparently written for people whose second language is 6502 :grin: I’ll just assume MP3 and WAV.

Is there some documentation on VLCservice?

Yeah, the Play Framework is another hurdle, and I wouldn’t tackle that unless you really need it.

MP3 and WAV are, I think, the safest bet.

Haven’t had time to do VLCservice docs yet, but the code is fairly straightforward if you check out the repo.

If you look at iheartradio, you’ll see it passes a 'duck': true option. This is a fairly new, somewhat experimental feature that drops the audio level when it detects the user speaking.
