Hi all,
I am creating a skill that echoes back what the user has said, and I want to display the recognized utterance on a screen. How do I do this? Do I need to use socket communication? I ask because the skill runs as a service. I could write the output to a text file and have a reader program read it, but that kind of thing is tricky to get right. I have looked at the recording skill and gather that I need to use something called a scheduled skill; however, there is no documentation on this kind of skill. I have some code below. The scheduling is failing because the line
self.schedule()
is getting a null value. I am also uncertain about how to stop the skill. Do I have to do something with threading and static methods? I can see this has been done in the record skill, but I do not know how to apply that knowledge to my code.
By the way, the use case for this skill is an application for deaf people that would output conversation to a screen.
import time
from adapt.intent import IntentBuilder
from mycroft.skills.scheduled_skills import ScheduledSkill
from mycroft.util import record, play_wav
from mycroft.util.log import getLogger
There is no mechanism to display things in other enclosures.
Maybe of interest: https://github.com/JarbasAl/skill-parrot (it speaks back every utterance to the user); you could probably adapt it to do something with the text instead.
I'm not sure how you want to display the text, but I would make a simple flask endpoint to receive it and then decide what to do after that, no longer constrained by mycroft itself.
@Jarbas_Ai
The parrot skill is exactly what I need, and I am happy to create a flask endpoint. However, how do I get the utterance to that endpoint? Should I use sockets, or should I write the utterances to a file and have the program read that file?
In the converse method you capture all utterances before intent parsing. I check each utterance against the stop-word list to know whether the stop intent will trigger; if it will, I return False, which means the converse method did not do anything with the utterance, so adapt processes the utterance and triggers the stop intent.
If the stop intent will not trigger, I capture the utterance and speak it back, then return True to tell adapt not to do anything with the utterance.
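That converse flow can be sketched framework-free. The stop words, the speak callback, and the function shape below are stand-ins for the real skill's .voc file and MycroftSkill methods, not mycroft API:

```python
# Hypothetical stand-in for stop words loaded from a .voc file.
STOP_WORDS = ["stop echoing", "quiet"]

def converse(utterances, should_echo, speak):
    """Return (consumed, should_echo).

    consumed=False hands the utterance back to adapt for intent parsing,
    which is how the stop intent gets its chance to trigger.
    """
    text = utterances[0]
    if any(stop in text for stop in STOP_WORDS):
        return False, should_echo   # let adapt fire the stop intent
    if should_echo:
        speak(text)                 # parrot the utterance back
        return True, should_echo    # tell adapt we consumed it
    return False, should_echo       # not echoing; let adapt handle it
```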
The flask endpoint was just an example. If you use it, you should be able to simply use the requests library to post data there, something like sending a request to http://127.0.0.1:9876/ and then doing whatever you want with the utterance in flask.
Flask may not be the best tool for this; it depends on where you want to show the utterance. Flask is nice because you can just open the webpage on your phone, which would be desirable for deaf users.
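A minimal sketch of that flask idea, assuming flask is installed; the route, port 9876, and the "utterance" form field are assumptions carried over from the example URL above, not part of any mycroft API:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def receive_utterance():
    # The skill would POST the recognized text here, e.g. with:
    #   requests.post("http://127.0.0.1:9876/", data={"utterance": text})
    text = request.form.get("utterance", "")
    print(text)  # or hand the text to whatever display you like
    return "OK"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=9876)
```

The skill side then needs only one `requests.post(...)` call per utterance from its converse method.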
@Jarbas_Ai
Many thanks for your message. I looked at your parrot skill, and it is exactly what I need. However, I tried incorporating your techniques into my own code without success. My skill launches and then appears to die immediately. I think I have removed all the runtime errors, but I am unable to figure out the flaw in my logic.
from adapt.intent import IntentBuilder
from mycroft.skills.core import MycroftSkill
from mycroft.util.log import getLogger
from os.path import dirname, join

__author__ = 'Pranav Lal'

LOGGER = getLogger(__name__)


class EchoUtteranceSkill(MycroftSkill):
    def __init__(self):
        super(EchoUtteranceSkill, self).__init__("EchoUtteranceSkill")
        self.should_echo = False
        self.stop_words = []
        # load stop words from .voc file
        path = join(dirname(__file__), "vocab", self.lang,
                    "EchoUtteranceSkillStopKeyword.voc")
        with open(path, 'r') as voc_file:
            for line in voc_file.readlines():
                parts = line.strip().split("|")
                entity = parts[0]
                self.stop_words.append(entity)
                for alias in parts[1:]:
                    self.stop_words.append(alias)

    def initialize(self):
        intent = IntentBuilder("EchoUtteranceSkillIntent").require(
            "EchoUtteranceSkillKeyword").build()
        self.register_intent(intent, self.handle_echo_utterance)
        intent = IntentBuilder("EchoUtteranceSkillIntent").require(
            "EchoUtteranceSkillStopKeyword").build()
        self.register_intent(intent, self.handle_echo_stop_utterance)
        LOGGER.info("echo skill initialized")

    def handle_echo_utterance(self, message):
        LOGGER.info("echo back on")
        self.should_echo = True
        self.speak_dialog("echo.on", expect_response=True)

    def handle_echo_stop_utterance(self, message):
        self.should_echo = False
        self.speak_dialog("echo.off", expect_response=False)
        LOGGER.info("echo back off")

    def converse(self, utterances, lang="en-us"):
        if utterances is not None:
            for stop in self.stop_words:
                if stop in utterances[0]:
                    self.should_echo = False
                else:
                    self.should_echo = False
        if self.should_echo == True:
            LOGGER.info("about to speak")
            self.speak(utterances[0], expect_response=True)
        else:
            pass
        return self.should_echo
Hi all,
I have my skill working. The problem was in the intents: I had two intents with the same name.
intent = IntentBuilder("EchoUtteranceSkillIntent").require(
    "EchoUtteranceSkillKeyword").build()
self.register_intent(intent, self.handle_echo_utterance)
intent = IntentBuilder("EchoUtteranceSkillIntent").require(
    "EchoUtteranceSkillStopKeyword").build()
Once I changed the names of the intents, the skill began to work. I am sending out the data via HTTP requests. I have made a basic HTTP server in Python and am using it to display the data on the console. I do not know enough about dynamic web applications and AJAX to create webpages that display the updated content, but the console output is working for me.
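For anyone following along, a basic console-displaying HTTP server like the one described can be sketched with the standard library alone; the "utterance" field name and the port are assumptions carried over from the earlier examples in this thread:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class UtteranceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        # print each POSTed utterance to the console
        print(fields.get("utterance", [""])[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, fmt, *args):
        pass  # keep per-request logging off the console

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 9876), UtteranceHandler).serve_forever()
```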