I love Jasper; it definitely got the ball rolling on these types of automation projects. That being said, it does have its limitations. As @ryanleesipes called out, their intent determination implementation, while functional, is brittle. It's based on regular expressions, which, aside from being difficult to write well, are error-prone and hard to debug when conflicts arise. Adapt will allow for more skills, let users target specific implementations of the same skill (for example, playing music on Pandora vs. Spotify when both skills are enabled), and let developers detect when their skill might cause ambiguity within a running Mycroft environment. I'm hoping to cover more of this in a future blog post, and to point to code samples once Adapt is released publicly.
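To make the difference concrete, here's a rough sketch (Adapt isn't public yet, so the function names and vocabulary registration below are my own illustration, not Adapt's actual API). The regex approach hard-codes word order, so a slightly rephrased utterance silently fails; a keyword-based approach matches registered vocabulary anywhere in the utterance, and the same "MusicService" slot lets you disambiguate Pandora vs. Spotify:

```python
import re

# Jasper-style: one regex per skill. Brittle -- rephrasing the
# utterance ("on spotify play hey jude") breaks the match entirely.
JASPER_PATTERN = re.compile(r"^play (?P<song>.+) on (?P<service>pandora|spotify)$")

def regex_intent(utterance):
    """Return (song, service), or None if the pattern doesn't match."""
    m = JASPER_PATTERN.match(utterance.lower())
    return (m.group("song"), m.group("service")) if m else None

# Keyword-style sketch: each skill registers vocabulary, and an intent
# fires when the required vocab terms appear, regardless of word order.
VOCAB = {
    "PlayVerb": {"play", "listen"},
    "MusicService": {"pandora", "spotify"},
}

def keyword_intent(utterance):
    """Return matched vocab, e.g. {'PlayVerb': 'play', 'MusicService': 'spotify'},
    or None if any required vocab category is missing."""
    words = set(utterance.lower().split())
    matches = {name: next(iter(words & terms))
               for name, terms in VOCAB.items() if words & terms}
    # Require every vocab category before claiming the intent.
    return matches if len(matches) == len(VOCAB) else None
```

With this sketch, `regex_intent("on spotify play hey jude")` returns None, while `keyword_intent` still resolves both the verb and the target service, so the right skill can be picked even when two music skills are enabled.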
Also, sadly, it appears Jasper is no longer under active development (last commit in July). We're hoping to build a strong community around both Adapt and Mycroft to keep the good times rolling.
@ryanleesipes is also right in stating that we're building more than just the bits on the box. Initial versions of Mycroft will likely look very similar to Jasper; the whole thing will run on your Raspi. As time goes on, we'll pull things into the cloud (or allow you to run your own backend) to provide better experiences than can be achieved in the limited-resource world of the Pi. We want to collect data (from our generous community) to train better speech and learning algorithms, and release those algorithms, code, and data back into the community. There is a moon, and we are shooting for it.
Kill all the moons!