This new release contains a more dynamic dialog and a new feature. Check out https://github.com/CarstenAgerskov/skill-the-cows-lists#readme for details.
The feature: You can now ask what tasks are due.
The dialog: The cows lists works with lists and tasks. When you mention a list or a task in a command to the cows lists, it is remembered for a short time; the task or list is said to be in context. Within this time, further commands to the cows lists will refer to the list and/or task in context. For instance:
You: "Hey Mycroft, find bananas on the grocery list"
Mycroft: "I found bananas on list grocery"
You: "Hey Mycroft, complete it"
Mycroft: "bananas on list grocery was marked complete"
You: "Hey Mycroft, read the list"
Mycroft: "List grocery has 3 tasks on it, potatoes, apples, oranges"
You: "Hey Mycroft, complete oranges"
Mycroft: "oranges on list grocery was marked complete"
Under the hood:
The automatic test runner is used, and a second layer utterance parser is introduced.
In short, the second layer utterance parser can return which regex was matched, and can force the evaluation order of the regular expressions for the skill, which allows a “narrower” regex to be tested before a “broader” one. I would propose adding these two features to Adapt (to avoid a second parsing pass).
Reason for the second layer parsing: With the new dynamic dialog, utterances with different meanings become more and more similar. Using Adapt, context goes a long way toward making distinctions. However, it is difficult to handle “complete call home on the to do list”, “complete call home” (in the context of a list) and “complete all tasks”. Furthermore, while utterance normalization helps in many ways to deduce the intent, I want the exact wording of a task reflected; for instance, “add the matrix to the movie list” should add “the matrix”, not just “matrix”.
In the skill, Adapt and contexts are used to activate an intent. But the first thing the intent code does is run yet another parser. This second parser simply tries to match the original utterance against an ordered list of regular expressions. At the first match it returns, with a key identifying the matching regex.
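The idea can be sketched roughly like this (names and patterns are my own illustration, not the skill's actual code): an ordered list of (key, regex) pairs, with the narrower pattern placed before the broader one, matched against the original (un-normalized) utterance so the exact wording of the task is preserved.

```python
import re

# Ordered list of (key, regex). The narrow pattern, which names both a
# task and a list, must be tried before the broad one, which would
# otherwise swallow the whole phrase into the task group.
PATTERNS = [
    ("task_on_list", re.compile(r"complete (?P<task>.+) on the (?P<list>.+) list")),
    ("task_only", re.compile(r"complete (?P<task>.+)")),
]

def second_layer_parse(utterance):
    """Return (key, captured groups) for the first matching regex,
    or (None, None) if nothing matches."""
    for key, regex in PATTERNS:
        match = regex.match(utterance)
        if match:
            return key, match.groupdict()
    return None, None
```

For example, "complete call home on the to do list" matches the narrow pattern and yields the key "task_on_list" with task "call home" and list "to do", while "complete call home" falls through to "task_only", leaving the list to be resolved from context.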
This release utilizes the new test runner, and all intents are tested, some more than once, using different wordings. However, the cows lists depends on the Remember The Milk back end. The test runner alone cannot authenticate with the back end; it requires a user to log on and allow the skill to use RTM initially.
Because of that, all intents, and the back end library, are also tested with Python unittest; when those tests are run, the skill is authenticated, and the tests run end to end, with much better coverage.
The strategy is to use the test runner to validate that spoken utterances hit the correct intent method and are understood. The ordinary unit tests take over from there, testing the functionality inside the skill.
To get the most out of the test runner under these circumstances, I do as much processing as possible in the intent before starting to use the RTM back end. In the test case file for the test runner, I set a context (_TestContext); then in the intent, before the RTM back end is called, I test for _TestContext. If the context is set, the skill speaks the parameters (status) it has, for the test runner to verify.
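In rough outline, the guard might look like this (a simplified sketch with hypothetical names; the real skill's intent handlers, context handling and speak calls differ):

```python
def complete_task_intent(message, speak, rtm_backend=None):
    """Sketch of the _TestContext guard inside an intent handler.

    `message` stands in for the parsed utterance data, `speak` for the
    skill's text-to-speech callback, and `rtm_backend` for the Remember
    The Milk client, which is never reached under the test runner.
    """
    task = message["task"]
    list_name = message.get("list", "inbox")
    # When the test case file has set _TestContext, report the parsed
    # parameters instead of calling the backend, so the test runner can
    # verify them from the spoken output alone, without authentication.
    if message.get("_TestContext"):
        speak("complete task {} on list {}".format(task, list_name))
        return
    # Normal path: only reached outside tests, requires a logged-in user.
    rtm_backend.complete(task, list_name)
    speak("{} on list {} was marked complete".format(task, list_name))
```

This keeps the test runner useful for everything up to the backend call, while the end-to-end unittest suite exercises the real RTM path.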