Methodical? Looking for exposure to Python? Be a Mycroft Skill Tester!

Originally published at: Be a Mycroft Skill Tester! Get exposure to Mycroft and Python

Are you a methodical sort of person? Do you like to find bugs that no one else can spot? Do you like things to work cleanly and without error?

A code tester walks into a bar... #coding #testing #geekhumour #geekhumor #programming pic.twitter.com/NIy61IinlS

— Matthew J Scott (@GeezerD205) May 15, 2015

Perhaps you’re in college and looking to get some exposure to Python and voice user interfaces. Perhaps you’re thinking of developing Mycroft Skills and want to see some other Skills in action first. Maybe you’re looking for a way to contribute to the Mycroft ecosystem, to find where you fit and where your unique talents will be appreciated.

Being a Mycroft Skill Tester is for you!

This volunteer role is critical to assuring the quality and integrity of Skills submitted to the mycroft-skills repo, upholding standards, and giving Skill Authors constructive, diplomatic feedback that encourages high-quality Skill development.

This might be the right place for you in the Mycroft ecosystem if:

  • You have some exposure to programming, and in particular Python
  • You have some exposure to Mycroft, and have some idea of Skill Intents, and how dialog, vocab and locale files work
  • You're passionate about helping to realize the vision of Mycroft as an open voice assistant for everyone
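For anyone new to the terms above: a Skill is a small Python package plus plain-text resource files. A hypothetical minimal layout (file and directory names here are illustrative, not taken from an actual Skill) might look like:

```
hello-world-skill/
├── __init__.py              # Python entry point: a MycroftSkill subclass with intent handlers
├── vocab/en-us/
│   └── hello.world.voc      # keyword phrases that trigger the intent, one per line
└── dialog/en-us/
    └── hello.world.dialog   # spoken responses; Mycroft picks one variant per reply
```

A tester typically installs the Skill, speaks or types utterances matching the `.voc` entries, and checks that Mycroft responds with lines from the corresponding `.dialog` file; some Skills keep these resources under a single `locale/` directory instead.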

In return, we can provide:
  • A well documented Skills Acceptance Process
  • Mentoring from Kathy, who currently leads our Skill Testing efforts and can provide one-on-one coaching
  • A rundown on common errors found during Skill Testing so you know exactly what to be on the lookout for

If you're interested, drop @kathy-mycroft a line in Mycroft Chat.

That’s a lovely job description, Kathy. I hope that people respond.

Interesting that you would post that just now. ‘Expanding search in the space of empirical ML’ is the title of a paper posted on arxiv-sanity.com on December 4, 2018. The author, Bronwyn Woods, points out that the rate of algorithmic innovation is already so high that it is difficult to keep up with it.

Even as that innovation continues, it is important to direct at least as many resources toward testing and synthesizing the models in which the new algorithms are being used.

The job may be too much for any single entity to do alone. Imagine where we could be by the end of 2019 if there were a global collaborative effort to do this.

I’m not sure what that has to do with Skill Testing?