fixed and improved it a little.
My dev machine is an Ubuntu VM; does anyone have a Mark I at hand to test whether everything works as expected?
- New function sendMycroftSay: it takes a message, prepends "say", and sends it to the message bus as an utterance. (Why go through the Say skill instead of sending the text to the message bus directly? If the Say skill gets an update, the Telegram skill does not need to be modified, and the phrase "say SOMETHING" is unlikely to change in the future.)
- When loaded, the skill no longer only sends you a message; Mycroft also says "Telegram skill loaded" out loud.
- Muting should be more reliable (previously it sometimes happened that only part of a sentence was muted).
- If Mute is enabled (at home.mycroft.ai) but does not work (because of problems with alsaaudio), the skill now says out loud "There is a Problem with alsa Audio, mute is not working" when the skill is loaded.
- If the skill was not able to match the Mycroft device name with the name you entered on home.mycroft.ai, Mycroft will say out loud "No or incorrect Device Name specified! Your DeviceName is: YOURUNITNAME".
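To illustrate the idea behind sendMycroftSay, here is a minimal, self-contained sketch. The FakeBus class stands in for Mycroft's real message bus client, and the function name and parameters are illustrative assumptions, not the skill's actual code; "recognizer_loop:utterance" is the message type Mycroft uses for spoken input.

```python
class FakeBus:
    """Stand-in for Mycroft's message bus client (illustration only)."""
    def __init__(self):
        self.sent = []

    def emit(self, msg_type, data):
        self.sent.append((msg_type, data))


def send_mycroft_say(bus, message, lang="en-us"):
    # Prepend "say" so the text is routed through the Say skill,
    # exactly as if the user had spoken "say <message>" aloud.
    # Going through the Say skill means the Telegram skill keeps
    # working even if the Say skill's internals change.
    utterance = "say " + message
    bus.emit("recognizer_loop:utterance",
             {"utterances": [utterance], "lang": lang})


bus = FakeBus()
send_mycroft_say(bus, "hello from telegram")
print(bus.sent[0][1]["utterances"][0])  # -> say hello from telegram
```

In the real skill the bus object would be the skill's own message bus connection rather than this stub.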
Multi-sentence answers from Mycroft are not yet supported by the Telegram skill:
User: Hey Mycroft, what can you do?
Mycroft: I can tell you the time, what the weather is like... (first sentence: sent back to the user via Telegram, mute works)
Mycroft: I have 33 skills installed... (second sentence: not sent to the user, no mute)
The best solution would be to send an utterance with a context tag (something like telegram-message) so that everything Mycroft does or says in response to that utterance gets the same context tag.
If anyone has an idea how to achieve this, I would appreciate it.