It’s Pi Day again so let’s celebrate Picroft!
We’d love to hear your story around Picroft. Did hearing about it bring you to the Community? Maybe you saw one of the Kickstarters but were so excited you got going ahead of time with a Pi?
My favorite thing is that Picroft can be used with the AIY Voice Kit, the Matrix Voice arrays, and ReSpeaker arrays, all of which were Community contributions.
Ten points to whoever has had the most Picrofts running in their house at the same time!
I love Mycroft and my Raspberry Pis. In fact, I have 4 different Picroft devices running, each with a different mic and speaker setup. I also have 2 different Mycroft devices running, one with Ubuntu Studio and one with Arch Linux. I have submitted a skill, PR #890 on GitHub. It is available here: tmdb-skill. Test it out and leave a response. Maybe I can get the developers to add it to the Mycroft Skills Marketplace.
Good luck to everybody out there that gives Mycroft a go!!
Damn - 4 Picrofts! I don’t know if anyone is going to top that.
I have 3 Pi’s, but don’t know that I’ve ever had all three running at once. I’m always managing to ‘tweak’ one a little too much haha.
I enjoy Raspberry Pi projects, so Picroft was a must. I only have one running at this time, but I'm planning on adding a Picroft to several rooms in my home to control lights, fans, or whatever.
3 of them run constantly, and the other I use for testing. I love them.
I LOOOOVE Mycroft. I love a hackable voice assistant, and I definitely love the community and the people at Mycroft. Every day I look forward to doing some mycrofting, going to the forum to read and write, and chatting on the chat channels.
I have a Mark 1 in my kitchen speaking English. Then I have at least one Picroft with Google AIY for hacking, testing, and playing. For the time being it's set up running Danish, to complete the translation.
I have more AIY kits waiting until I get the time and inclination to set up more devices around the house. But I also need my Home Assistant and all the IoT stuff working (again) so I can get full use of Mycroft to control lights, TV, media, etc.
Most of my mycrofting is done on the Picrofts using the Theia IDE from my laptop running Windows. I also have a Mycroft running on WSL, but only for text, as I haven't had the time or inclination to get audio working on WSL. It should be possible, I think.
I like the Personal Server project, but I don't see myself wanting to run all that stuff myself; I'm too old to keep servers running at home anymore. In the old days I would have run everything myself. But the project is very interesting.
I would like to see better unity between the Mycroft devices: setting an alarm from one device on another, intercom, only advertising stuff in rooms with people in them, etc. But I'm sure that will come sooner or later.
Those are things that I am looking for also. I want interoperation between devices. I would love to use them as a whole-home intercom and home assistant.
Nice!! I will check it out @brrn. Thanks
About a year ago I was demonstrating a “toy” I’d built (incorporating Picroft) to a blind friend, and she had an epiphany that led to me building (and her testing) a network of several devices attuned specifically to helping blind and low-vision people navigate around, and communicate with, their own homes. We formulated a business case and did our user-research homework, and as an ABQid/SFid business accelerator grad with multiple “Certified Accessibility Professional” certifications under my belt, finding an angel investor was pretty easy. We also got component design and manufacturing assistance as part of that package. Some local coders heard what we are doing, and as long as we keep all code open source, we have excellent help coming out our ears. Personally, once Joyce had her epiphany, it’s been a no-brainer ever since: we (and our angels) like open source and not-for-profit.
Don’t know if this is a good place to post this in these forums but I did start with a Pi, Picroft and a few peripherals, right around last Pi Day.
And then I reread the beginning of this thread: so far I’ve reached 8 Pi’s at once in this network, interconnected via wifi with an Ubuntu server to process the larger I/O flows in a more timely manner. In my back bedroom is a wall with 15 monitors attached to 15 Pi’s wifi-attached to another Ubuntu box. Each screen has its own digital character speaking in its own accent, using its own slang, responding to different spoken and digital inputs. Should you ask one the “wrong” question it will auto-refer you to another character designed to respond to queries like that. All 15 can carry on separate conversations at the same time, or all can act as one. My proof-of-concept had characters talking back and forth on their separate screens until a person approached the array too closely. Cross the first proximity line and one character would turn and say “You’re getting too close to us.” Cross the second proximity line and all would turn and say “Back off! You’re too close!” (My 8-year-old loves it.) Further development of that is another beast, one I’m really looking forward to wrestling with but first things first: the 8-node network has a more pressing use case and things are already well along the track to fulfillment.
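The two-proximity-line behavior described above boils down to a small bit of zone logic: map each distance reading to a zone, and fire a different response depending on which line has been crossed. Here's a minimal sketch in Python; the thresholds, character names, and spoken lines are illustrative stand-ins (on a real Pi the distance would come from an actual sensor rather than being passed in):

```python
# Sketch of the two-line proximity trigger described above.
# Thresholds and responses are hypothetical; on a real install,
# distance_m would come from a proximity sensor poll.

FIRST_LINE = 2.0   # metres: one character turns and reacts
SECOND_LINE = 1.0  # metres: the whole array reacts

def zone(distance_m):
    """Map a distance reading to a named zone."""
    if distance_m <= SECOND_LINE:
        return "too_close"
    if distance_m <= FIRST_LINE:
        return "close"
    return "far"

def react(distance_m):
    """Return the line(s) the characters should speak for this reading."""
    z = zone(distance_m)
    if z == "close":
        return ["One character: You're getting too close to us."]
    if z == "too_close":
        return ["All characters: Back off! You're too close!"]
    return []  # nobody reacts; the characters keep chatting among themselves
```

In practice you'd also want some hysteresis, so a visitor hovering right on a threshold doesn't re-trigger the same line on every polling cycle.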
Holy moly. That sounds incredible! And, it sounds prime for a Brady Bunch intro reference!
If you have picture or video of that, it would be awesome to see.
If you ever want to connect on your business case, feel free to drop me a line at email@example.com
That does sound pretty damn amazing. What have you got the local Ubuntu server handling?
I think you definitely win the most Pi’s award!
Eric, I like that you went for the Brady Bunch option, my mind instantly thought of some super creepy Return to Oz running through the hallways style scenario…
Unfortunately I’m a government contractor of retirement age, and my employer is busy putting a couple million contractors like me out of work. The future for people of my age in the US is pretty bleak, that’s partially why I’ve been focused on smaller implementations since last summer. I’ve also had to reduce my living footprint and am now looking at moving abroad within the next few months (contract ends in June). So I’m down to only a couple Pi’s and a couple monitors (my grandson took most of my collection with him to college this year). I have to reduce my footprint further yet, soon.
To get where I was, I experimented with MS Cognitive Services and Google Voice (I do a lot of my work in STT and TTS) and Mycroft/Picroft. I also pruned my own versions of Pocketsphinx and its relatives. My problem was the time it takes to process and respond, so real interactive I/O ran through the Linux box, as did some of the larger image files. A lot of the speech was pre-recorded, and I set triggers for playback. Generating 3D figures was fun but such a time suck! I ended up playing with one figure in multiple skins, repeated across the monitor array in slightly different poses. My Python scripts were great fun, but I’m not a great Python programmer: a lot more could have been done with a cleaner code stream (and something like TensorFlow). I didn’t get there because of Joyce’s epiphany: I have long been looking for a device to create that could generate a long-term income stream for my impending retirement. I’m on the cusp of that.
Once set up in an affordable destination, I will probably turn back to what was most fun: creating and animating the 3D figures for the large interactive video array.
I live in the home town of Meow Wolf and I see the video array as a powerful competitor for them: their stuff is almost purely built-world and takes forever to construct and/or change anything. My system would be almost instantly portable: a series of monitors, Pi’s and peripherals controlled by a Linux box. Once an array is set up and operating, reloading the array with a whole new “show” would take a couple hours at most. And once a library of “shows” is built up…
The US is littered with declining retail malls, some of which are trying to pivot to become boutique malls with community centers and social centers. The owners of those malls are searching the world for entertainment options to, hopefully, increase their customer flows and bring some social value back to their dinosaurs. That’s only one profitable use case.
Imagine a group of people being able to join in a “conversation” with one (or more) of the characters in the array. Each of those characters being relatively independent of each other and yet able to refer queries to each other as appropriate. One that I was looking to develop was “Evening at the Improv,” complete with a rowdy audience and potential for human interaction. Another is the “Greeting Wall” in most children’s museums around the world. I installed a simple system in my granddaughter’s elementary school and the kids wiped my MS hobbyist account in a couple days, just triggering the simplest of things repeatedly to see the video response. That caused me to get into Mycroft/Picroft and play with that some. All that said, I’m not married to anyone or anything for any of it. So I may only have a couple Pi’s in my drawer these days but there are a couple dozen micro-SDs in there with them (don’t even think about the updates I’m behind - each has something of value from where I stopped that development stream).
All that said, I’m now only working with a proximity detector triggering a specific set of playback functions in a very small form device. Light-powered with battery backup, wifi connected, programmable (and custom message recordable) via phone app. Proof-of-concept runs on a single Pi with a proximity detector and audio speaker. Hopefully I’ll get finished with this before my next life on this world…