The Machine Learning Team

Hey, I’m also up for some coding! I’m a C and C++ developer with 4 years’ experience in autonomous cars. Nothing to do with text, but I’m a software architect and I know a thing or two about good code structure.
Looking forward to helping out.

1 Like

Hi, I have a PhD in Computational Science and Informatics (Computational Statistics), and have developed and extended several machine learning methods through my research. I don’t have a huge amount of free time, but am willing to contribute and evaluate methodology as well as do some coding.

One issue that might be out-of-scope is the intent parser. Right now, it’s just a multinomial likelihood ratio, with some comments about improving it some day. I don’t think this method will get you beyond the “command line / text adventure” approach to voice queries. I have no issue with some sort of likelihood ratio approach, but the keyword parameterization is severely limited beyond just creating a proof-of-concept.
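For readers unfamiliar with the approach being critiqued, here is a minimal sketch of a keyword-parameterized multinomial classifier of the kind described. The intents, utterances, and function names are all hypothetical toy examples, not Mycroft’s actual parser; note how out-of-vocabulary words are simply dropped, which is exactly the limitation at issue:

```python
import math
from collections import Counter

# Hypothetical toy training data: intent -> example utterances.
TRAIN = {
    "weather": ["what is the weather today", "will it rain tomorrow"],
    "timer":   ["set a timer for ten minutes", "start a timer"],
}

def train_multinomial(train, alpha=1.0):
    """Per-intent smoothed word log-probabilities (Laplace smoothing)."""
    vocab = {w for utts in train.values() for u in utts for w in u.split()}
    models = {}
    for intent, utts in train.items():
        counts = Counter(w for u in utts for w in u.split())
        total = sum(counts.values())
        models[intent] = {
            w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
            for w in vocab
        }
    return models, vocab

def classify(utterance, models, vocab):
    """Pick the intent with the highest multinomial log-likelihood.
    Words outside the keyword vocabulary are ignored entirely --
    the parameterization weakness discussed above."""
    words = [w for w in utterance.split() if w in vocab]
    scores = {
        intent: sum(logp[w] for w in words)
        for intent, logp in models.items()
    }
    return max(scores, key=scores.get)

models, vocab = train_multinomial(TRAIN)
print(classify("set a timer", models, vocab))  # -> timer
```

Anything not memorized as a keyword contributes nothing to the score, so paraphrases and novel phrasings fall back to whatever intent happens to share the most surface tokens.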

The next time I have some free time I’ll do a bit of a lit review and try to put together an alternative intent parser, probably written in R or C. Is there any documentation on the interfaces between STT -> intent parser -> TTS?

Thanks,

1 Like

Hi Jonathan,
Linagora is an open-source company located in France. We are developing hubl.in (https://hubl.in/), an open-source video-conference service. We are working on STT to add new features such as real-time recommendation, automatic summaries, and subtitles. We are also working with Kaldi and developing models for English, French, and Arabic. Perhaps we could collaborate?
By the way, I will be at ICML in NYC this month. Will you be there too? Perhaps we can talk?
Jean-Pierre
Innovation director at Linagora
@jplorre

2 Likes

@jplorre yes! Let’s see if we can collaborate. I’ll PM you along with @jdorleans.

I’d be interested in helping where I can.

I work as a lead visual designer at Canonical on Ubuntu Touch/Desktop, and in my spare time I’m playing with machine learning. I’ve got roughly a year of experience with Caffe and am currently learning TensorFlow.

I have an Amazon Echo Dot - I think the hardware is great and my kids love it (mostly for jokes and Spotify), but I find the software integrations very limiting. I am also hoping we can come up with accurate speech recognition that runs completely on the device instead of in the cloud, on current (or near-future) mobile GPUs. Shrinking model sizes by pruning and quantizing weights to 8 bits looks promising for mobile use. We also need better CUDA alternatives for mobile GPUs. Has anyone looked at Vulkan/SPIR-V for deep learning? This implements some basic deep learning on Metal (and their paper has pointers for doing the same with SPIR-V) - http://deeplearningkit.org/
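To make the quantization idea concrete, here is a minimal sketch of symmetric linear int8 weight quantization (not any particular framework’s API; the tensor here is a random stand-in for trained weights):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of a float weight tensor to int8.
    Returns the quantized tensor plus the scale needed to dequantize."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))
print(q.nbytes / w.nbytes)  # -> 0.25, a quarter of the float32 storage
```

Storage drops to a quarter of float32 and the per-weight rounding error is bounded by half a quantization step, which is why 8-bit weights usually cost little accuracy.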

On the hardware side I think we also need readily available array microphones (and OSS drivers that do beamforming) for far-field audio. Has anyone found a source for affordable MEMS array mics?
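For anyone curious what such a driver would actually compute, here is a minimal time-domain delay-and-sum beamformer sketch. The array geometry, signals, and function names are synthetic illustrations, and delays are rounded to whole samples for simplicity:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_positions, direction, fs):
    """Align each mic channel for a plane wave arriving from
    `direction` (a unit vector), then average the channels.
    Sound from the look direction adds coherently; sound from
    other directions is attenuated.

    signals:       (n_mics, n_samples) synchronized recordings
    mic_positions: (n_mics, 3) coordinates in metres
    fs:            sample rate in Hz
    """
    # arrival delay of the wavefront at each mic, relative to the origin
    tau = -(mic_positions @ direction) / SPEED_OF_SOUND
    # convert to nonnegative integer sample shifts
    shifts = np.round((tau - tau.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out += sig[s:s + n]
    return out / len(signals)

fs = 16000
n = fs  # one second
tone = np.sin(2 * np.pi * 440 * np.arange(n + 10) / fs)
mics = np.array([[0.0, 0, 0], [0.05, 0, 0]])  # two mics 5 cm apart
look = np.array([1.0, 0, 0])                  # source along +x
lag = int(round(0.05 / SPEED_OF_SOUND * fs))  # ~2 samples at 16 kHz
# mic 1 is closer to the source, so it hears the tone `lag` samples early
signals = np.stack([tone[:n], tone[lag:lag + n]])
out = delay_and_sum(signals, mics, look, fs)
```

Real far-field drivers add fractional-sample delays, calibration, and adaptive weighting on top of this, but the core alignment step is just this geometric delay per mic.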

You might be interested in this - It’s based on Kaldi but has its own CTC implementation

6 Likes

I’m experienced with JavaScript, and did a couple of hobby projects in Python, Java, and C. Not very experienced with ML - familiar with basic concepts like linear/logistic regression. Would love to contribute.

This is an awesome project! I have experience in Python, C, and Java, and I have about 2 years of experience in ML, most of which is in text NLP. In the past I’ve experimented a bit with Theano and currently I’m learning TensorFlow. I would love to contribute.

Greg

I have worked with C/C++ for close to 3 years professionally. Currently pursuing graduate studies and working on an R&D project to control an IoT hub through voice commands. No experience in ML or AI, but I have experience developing APIs. Would love to contribute! Adaptable to new programming languages and technologies, too.

Hi

I’ve got many years of experience in development (web, application and IoT) but my main interest in machine learning and AI is the idea of building emotional intelligence into machines.

Computers are great at computation and analysis and have an extremely high level of logical intelligence (IQ), but at present very low emotional intelligence (EQ), and it’s EQ that would enable a machine to interact with a human and have that person enjoy the interaction.

Now in order for a computer to know whether an interaction with a person is fulfilling that person’s requirement for social interaction, the computer would need sensors. Our senses are sight, hearing, touch and intuition. At present Mycroft only has sound (we speak to it) or typed text by which to gauge our response, but we could add sight with the use of OpenCV.

In any case, give it more sensors and it could gauge our response and tailor its interaction to better suit our needs.

I’m rambling… hope I can be of use.

Regards
Heinz

I’m currently in the last year of my undergraduate Software Engineering degree. I have some work experience developing data-analysis applications that use ML for prediction, plus some research experience in Reinforcement Learning and Computer Vision.

I’m very interested in creating machines that we can interact with almost effortlessly. My experience includes C#, C++, Python and Java. I’d love to contribute.

Thanks,
Dai

Hey @jdorleans,

I just learned about your project on Coder Radio EP 217 with @ryanleesipes and would love to help out. I develop in Java and Python in my day job and I am doing the GATech OMSCS part-time where I have taken some ML and knowledge-based AI classes.

Let me know how I can contribute!

Steve

1 Like

Hi,

I’m in the 2nd year of a Master’s degree in Electrical Engineering.
I’m working on a smart-home project which uses voice control through IoT.
I have done a face-recognition project in TensorFlow.

I’m very fond of Mycroft.

I would love to contribute.

Trung.

Hi, I’m an iOS & Python programmer from China. Thanks.

Hi,

I’m a Linux System Architect working in India. I’m currently pursuing a Machine Learning NanoDegree from Udacity. Is there any project related to Machine Learning in Mycroft that I could work on as part of my capstone project for my nanodegree?

Details: https://www.udacity.com/course/machine-learning-engineer-nanodegree--nd009

Thanks

I am very eager to participate in the AI XPRIZE challenge. Regarding experience, I have ~6 years under my belt in stacks ranging from J2EE, PHP, Python, and .NET to embedded systems. I was part of a Google XPRIZE competing team, where as a research assistant intern I developed the rover navigation (the team won a 1 million prize from XPRIZE). I have a good understanding of most machine learning algorithms and of basic TensorFlow and CNN model development. Do let me know if you are interested and we can talk more about it.

Hi all,

I have over 40 years of experience and I’m a developer who can write in Java, C#, and C. I have several years of experience with Linux, Android, and Windows. I am looking for a system that can start thinking for itself rather than just replying to commands. We can talk if you want to hear more.

Hi @jdorleans, @wolfv and other friends at Mycroft, have you contacted anyone from OpenML yet? I am involved in the community and maybe we could Skype some time. Also, there is a hackathon coming up soon. Maybe you want to join? http://openml2017dev.openml.org/

1 Like

I used to work in a computational neuroscience lab while getting my degree. We created NCS (Neuro Cortical Simulator), with our sister university in Lausanne creating a more advanced version. Is the Open Brain Initiative similar? I would like to chat with you if it is.

I was also thinking the same thing. I am going to start creating my own ML system, probably sourced from OpenML at first. My project is still a secret for now, though.

I am interested in Golang but I don’t know the maturity of the Go ML libraries. A Google search led me to https://github.com/sjwhitworth/golearn/blob/master/README.md

Any idea if it’s good?