I've had a lot of time to think about this case over the past few days. Unfortunately, I don't think this mod could be created easily.
In fact, what we need instead is a real, full car computer based on Snappy running on an RPi2, with Mycroft integration.
That could be an exciting project to launch on a crowdfunding platform for a team that wants to design this kind of car computer.
I took a look at similar Pi-based projects, and this is what a MycroftKitt would need at minimum:
- Ubuntu Snappy as the base OS
- The Mir display server
- Unity8 with a car-dedicated scope
- A snap of XBMC for all the multimedia features
- A snap of Marcos Costales's uNav for navigation
- A snap of a FLOSS car diagnostic tool
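For whoever picks this up, here is a hedged sketch of what packaging one of these pieces as a snap might look like. This assumes snapcraft's declarative format, and every name, command, and plug below is invented for illustration; it is not the real packaging of any of these projects:

```yaml
# Hypothetical snapcraft.yaml sketch for a navigation snap on the
# car computer. All names and parts are illustrative placeholders.
name: unav-car
version: "0.1"
summary: uNav navigation for a MycroftKitt car computer
description: GPS turn-by-turn navigation packaged for Ubuntu Snappy.
confinement: strict

apps:
  unav:
    command: bin/unav
    plugs: [network, location-observe]

parts:
  unav:
    plugin: nil
    source: .
```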
Because MycroftKitt would need an Internet connection, the best approach could be to activate tethering on the phone when the driver enters the car.
“- I detected that an Internet connection is possible through your phone. Would you like to activate tethering?”
“- Yes, please!”
Now the driver can say:
“- Mycroft, I want to go home”
“- Your home is 50 minutes away. The traffic is dense. I checked the best way to get there. Let's go home, follow me!”
“- Mycroft, I want to listen to this song from …”
“- The song is not present locally on any of your devices. Would you like me to play it from the Internet?”
“- I detected an anomaly with your equipment. You should bring your vehicle to the garage. Would you like to make the call now, or do I need to remind you to do it later?”
“- Your gasoline level is very low. I advise you to stop at the next station. Would you like to add a stop to your itinerary?”
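The low-fuel advisory could be driven by the car's own diagnostics. A hedged sketch: the fuel-level decoding follows the standard OBD-II PID 0x2F formula (A × 100 / 255 percent), while the 10% threshold and the advisory phrasing are made up for illustration:

```python
# Sketch of the low-fuel advisory. The decoding is the standard
# OBD-II PID 0x2F formula; threshold and wording are illustrative.

LOW_FUEL_PERCENT = 10.0  # arbitrary cutoff for this sketch

def decode_fuel_level(a_byte):
    """Decode OBD-II PID 0x2F (fuel tank level input) to a percentage."""
    return a_byte * 100.0 / 255.0

def fuel_advisory(a_byte):
    """Return an advisory string when fuel is low, else None."""
    level = decode_fuel_level(a_byte)
    if level < LOW_FUEL_PERCENT:
        return ("Your fuel level is very low. I advise you to stop at "
                "the next station. Would you like to add a stop to "
                "your itinerary?")
    return None
```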
Any suggestions ?
Maybe warnings about police radars (not sure of legality in some states)?
Well, here in France it's forbidden, but it would be great if MycroftKitt could alert us about all the other things Waze can do.
“Warning, there's a car stopped at the side of the road in 200 m. Can you confirm the car is still there?”
Maybe add Bluetooth for calls, sending SMS, streaming music, another Internet access point…
The driver could say:
“- Mycroft, turn on Bluetooth”
“- OK, would you like me to connect to XXXXXX?”
“- Yes, please”
“- Mycroft, call XXXXXX XXXXXX”
As I understand it, you're allowed to identify accident blackspots and areas of concern to drivers as you approach them (and French law would infringe human-rights legislation if it stopped you doing this). What you're not allowed to do is identify police speed cameras (although I'm not clear how effectively this has been challenged). As the police tend to carry out checks near accident blackspots, you can simply get around this by changing the alert to “You are entering an accident blackspot area”; just as with fixed cameras in the UK, you never actually know whether they're working or not.
The second, more difficult one to get around is reports of police cameras; however, as these are areas where motorists commonly speed, the reports can state “You are entering an area with a high probability of high-speed traffic”. Of course, you can't operate the relevant server in France, but it would be very easy for an AI system to identify police speed-report locations, assess whether these are blackspot or high-speed-traffic areas, and report accordingly. With a server outside Europe (no EU arrest-warrant applicability), it is then very difficult for the authorities to show that police reports are being used to identify “high-speed traffic areas”.
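The re-labelling idea above can be sketched as a tiny classifier that maps raw community reports to legally safer alert categories. All report types and rules below are invented for illustration, not drawn from any real product:

```python
# Toy sketch of re-labelling raw reports into safe alert categories.
# Report types, rules, and wording are all hypothetical.

SAFE_ALERTS = {
    "accident_blackspot": "You are entering an accident blackspot area.",
    "high_speed_area": ("You are entering an area with a high "
                        "probability of high-speed traffic."),
}

def classify_report(report_type, near_blackspot):
    """Map a raw report to a safe alert category, or None to suppress."""
    if report_type == "police_check" and near_blackspot:
        return "accident_blackspot"
    if report_type in ("police_check", "speed_camera"):
        return "high_speed_area"
    if report_type == "accident":
        return "accident_blackspot"
    return None  # anything else is not announced
```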
It is time to revisit this topic.
I think the type of thinking here in the initial post is a step in the right direction. @Winael is thinking beyond the home.
This type of entity is something I have wanted for some time now: A.I. in my vehicle.
However, a huge plus would be if I could have that A.I. connected to my home as one entity. Perhaps with the new Raspberry Pi 3's extra processing power it will be easier to develop such a thing.
If you all crowdfund this, I am in.
@Dominique we are looking at how best to integrate with a wide range of devices, but are always open to suggestions. It would be fantastic to have an open platform be the base for voice-enabling products that we already use (like Bluetooth speakers in the home), but there is no reason why we couldn’t voice enable many other things.
If you guys have any ideas or connections in these spaces, please let us know. We’d love to see if any OEMs are already looking for this tech for use in their products.
I think there should be a way to communicate with Mycroft from individual rooms even if you do not have a unit dedicated to each room. However, this raises the problem of what hardware would have to be bought to accomplish it.
How difficult would it be to develop a sort of “dummy” Mycroft: a small device with a lower-quality microphone and speaker, used for communicating with Mycroft from different rooms via Bluetooth or WiFi? If placed in each room, it could allow communication between rooms without a full Mycroft in each. My concern would be that the Bluetooth range would be insufficient and that the circuitry would be too expensive to make it economically practical.
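For what it's worth, the wireless-bridge part of such a dummy unit is conceptually simple: it only needs to forward raw audio chunks to the main device. A toy Python/UDP sketch; the packet size, port choice, and loopback addressing are arbitrary for the sketch, and a real bridge would add compression, buffering, and device discovery:

```python
# Minimal sketch of a "wireless voice bridge": a dummy unit forwards
# raw audio chunks over UDP to the main Mycroft device. All choices
# here (chunk size, loopback address) are illustrative.
import socket

CHUNK = 1024  # bytes of raw PCM per packet (arbitrary)

def send_chunk(sock, audio_bytes, addr):
    """Forward one audio chunk from the dummy unit to the main device."""
    sock.sendto(audio_bytes, addr)

def make_bridge_pair(port=0):
    """Create a receiver socket and return (receiver, its address)."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", port))  # loopback stands in for the LAN here
    return rx, rx.getsockname()
```

On a real network the receiver would run on the main Mycroft device and the sender on each room's dummy unit.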
And, Ryan Sipes, you mentioned using existing Bluetooth speakers. Is it possible, with the Raspberry Pi 3, to have Mycroft pair with these Bluetooth speakers and output to each one individually? (That would allow placing a speaker permanently in another room and talking across rooms with it.) If so, that would definitely be something to look into.
@Wolfgange good question! We started doing that just this week with a Raspberry Pi + Bluetooth speaker and it is pretty badass. So we will work really hard to get everything out there that people need to enable that and use the setup. You'll probably see a blog post about it soon.
Nice, I'm eager to see that.
@Wolfgange when dealing with speech recognition you cannot go cheap on the microphone. There really is just no way around that.
Yeah, you bring up a good point; however, the dummy Mycroft unit's microphone could have the sole purpose of two-way communication between rooms. So although the dummy units wouldn't be able to initiate a conversation with Mycroft or use speech recognition, a person in another room could still hold a conversation through the unit.
Mycroft in the car is an awesome idea! I am very much interested in this; as a matter of fact, in Mycroft portability, period. The hard part I see is how Mycroft will interface with other devices, like, say, all aspects of my car's computer. I know I am still very new to the community, but these are the things I would like to understand and figure out. If we could develop an array of interfacing devices running some sort of Mycroft API, wouldn't that be a great start?
I am interested and would like to connect my car to Mycroft, or have a version that runs in the car. But it needs to communicate with the main server. A lot of work. If you are interested, let's talk.
We've been talking about the value and challenges of working in a car environment. One advantage is that modern cars usually have a pretty good microphone system for picking out the driver's voice, plus a nice set of speakers. The obvious challenge for Mycroft or any voice assistant is how to deal with Internet connectivity. I've been thinking about “offline” options for the Speech-to-Text (definitely possible), but I think most of what you really want Mycroft for still requires reaching out to the Internet at the Skills level. Do you agree? Or is there much value in a purely offline Mycroft that can talk to the car's CAN bus?
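On the CAN-bus side, a purely offline Mycroft could at least decode standard OBD-II data. A sketch of decoding engine RPM: the (256 × A + B) / 4 formula is the standard PID 0x0C decoding, but the frame layout shown is a simplified mode-01 response and real buses vary:

```python
# Offline sketch of reading one value off the car's diagnostics:
# decoding engine RPM from an OBD-II mode-01 response frame.
# (256*A + B) / 4 is the standard PID 0x0C formula.

PID_ENGINE_RPM = 0x0C

def decode_rpm(frame):
    """Decode engine RPM from a simplified OBD-II response frame.

    frame: bytes laid out as [num_bytes, 0x41, pid, A, B, ...]
    """
    if len(frame) < 5 or frame[1] != 0x41 or frame[2] != PID_ENGINE_RPM:
        raise ValueError("not a PID 0x0C response")
    a, b = frame[3], frame[4]
    return (256 * a + b) / 4.0
```

Whether Skills beyond this kind of local decoding are worth much without Internet is exactly the open question above.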
I don't think it would be that difficult, given the explosion in wireless, cheap, but capable ARM-based SoCs: http://hackaday.com/2014/10/25/an-sdk-for-the-esp8266-wifi-chip/
That is an older article on the now-ubiquitous ESP8266; however, a newer Bluetooth 4.1/WiFi module is available. These are literally five bucks (and that's not in volume) and have enough on-board CPU/memory to handle this simple “proxy”/“wireless voice bridge” type of application, right?
I think the design question is whether you want the car system to be reliant on your phone for its Internet connection, or whether you want to build some kind of 3G/LTE support right into the system. For a system to be successful, and not just a toy that becomes irritating, the car's Internet connection would need to be really solid.
A completely disconnected Mycroft would definitely require much more than an ESP8266. The storage and processing power needed for Speech-to-Text far exceed that: almost certainly more than even the Pi 3 offers.
I've been thinking about Mycroft in the car as well; it seems a logical place for a voice-controlled AI to be.
I think it's probably best to first reach out to this project, http://i-carus.com/, as they've already got a Linux car product and hardware. I'm not sure how active the project still is, but I've been planning to purchase the components from them and have a go at getting Mycroft running, or shoehorning in the Mycroft internals once the backer devices are shipping.
To me it would seem to save time and money if someone already has the components ready and just needs the Mycroft AI software.
I've heard mixed reports about this company's service, though, so I dunno, but I thought it was worth mentioning in this thread.
It would be great to know your thoughts too.
Automotive Grade Linux is the reference distro.
They seem to need a Speech Services subteam, btw.