Coral support is limited. I would not pick one up at this point. Wait for a more generally usable version or updates to current software to take advantage of its capabilities.
As for “N GPUs can handle X Mycroft units”, that’s not a very clear metric either. There are significantly more variables involved before that becomes a coherent equation.
Yeah it seems it works ok with the object identification demo.
It’s my noobness, as I thought TensorFlow Lite support would mean it would support TensorFlow Lite models in general?!?
e.g. “1 GTX 1080 can handle X Mycroft units” or “1 FPGA (brand and type need to be specced) can handle Y Mycroft units”
It was from the Mycroft roadmap and only quoted in a similar vein of curiosity, that X or Y of a given brand and type could be tested.
I always struggle getting through any ML documentation, as it always seems extremely long-winded and painful.
I was hoping it was going to be purchase-and-install with TensorFlow Lite (which I keep calling tensorcore), but it doesn’t matter; it’s currently a cul-de-sac.
Apols for all the questions, but I’m just getting my bearings and direction.
Not so sure about the sausages in the casing; from personal experience it seems to be us outside the casing trying to develop.
There is a disconnect in the knowledge hierarchy of ML, but up at the top, close to the source, people get results.
I played around with Unity and their OpenML stuff, and the samples worked perfectly, but my usual approach to hacking (what I call programming) was a pointless disaster.
Model selection and hyperparameters seem more like an arcane art than the simple brute force of “it works / it doesn’t, check the error msg”.
TFL and GPU support seem to be currently x86_64 only, where the Arm/Raspberry version is native_client only.
I read enough of lissyx’s posts that if they are failing to compile, then I am not even going to bother.
Hello, can I get faster CNN training times using a Google Coral dev board rather than a PYNQ-Z1? Can I get faster CNN training times with a Google Coral dev board compared to a Jetson Nano? Can anyone who uses these give advice?
Yes, but many only offer a subset of TensorFlow compatibility.
The Coral works with the Google image project, but I haven’t seen another project for it.
They are coming down in price, but given how restrictive their subset is, unless you know of a working project, prob don’t bother.
Prob not, as they offer only a subset of TensorFlow Lite; have you seen another project for it?
Or any mention anywhere of another project that supports it?
If you can rewrite using its specifics then yes, but no one seems to.
Deepspeech might be, now that they are using TensorFlow 1.15, but prob not likely.
No as they are all very similar with different compatibility issues that you will have to research yourself.
The Google Coral is as good as any, and you could take the image kit and feed it Mel-Frequency Cepstral Coefficients, or MFCCs.
Basically voice “images”, and standard image classification with that input should work.
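To make the “voice images” idea concrete, here is a minimal MFCC sketch in plain NumPy/SciPy (a real pipeline would use something like librosa; all the parameter values here are illustrative defaults, not anything from the thread):

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
    # Slice the waveform into overlapping frames and window them
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop: i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hamming(n_fft)
    # Power spectrum per frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular mel filterbank
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)
    # Log mel energies, then a DCT to decorrelate -> MFCCs
    mel_energy = np.log(power @ fbank.T + 1e-10)
    return dct(mel_energy, type=2, axis=1, norm="ortho")[:, :n_mfcc]

# A one-second 440 Hz tone becomes a (frames x coefficients) "image"
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(tone)
print(feats.shape)  # (97, 13)
```

The resulting 2-D array of coefficients is what you would feed into a standard image-style classifier.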
I think Google were/are in the process of improving what’s available; not sure what the state of play is.
Asus say they are going to ‘support’ it, but whether it’s any better or worse than the Pi offering via the Google image project, dunno.
Just don’t expect to grab Deepspeech, compile, and fly, as Deepspeech even runs a fork of TensorFlow 1.15, and I have no idea what stage accelerators such as that are at with it.
You can try, but I think it’s best to say that actual compatibility and what’s available might be big constraints.
I want one, but think it prob might be a disappointment in what I can run.
The best overall compatibility is with the new Nvidia RTX cards, and after that it’s all downhill, with earlier cards often needing earlier versions of TensorFlow and performance badly affected.
My graphics card is a meh GTX 780, pretty old now, and I don’t even bother trying to use it.
Deepspeech prob would benefit from an accelerator if it would work, as on a Pi at least it’s single-threaded.
That’s literally a Google Coral TPU plus an SBC, so a direct competitor from a different vendor.
Other than a Jetson TX2 or Xavier board from Nvidia, there’s not much in the SBC space that’s viable for anything but custom or very specific ML work yet. If you’re looking to train, get an add-in board for a desktop or go the cloud route. SBCs are inference boards.
TPU = Tensor Processing Unit - this can be seen as a GPU that is specialized/optimized for tensor operations (vector and matrix multiplication)
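To illustrate what “specialized for tensor operations” means: the core workload a TPU accelerates is the multiply-accumulate pattern behind a dense neural-net layer, which is just a matrix multiply plus a bias and activation. A toy NumPy version (all shapes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_in, n_out = 32, 256, 128

x = rng.standard_normal((batch, n_in))   # a batch of input vectors
W = rng.standard_normal((n_in, n_out))   # layer weights
b = rng.standard_normal(n_out)           # layer bias

# Forward pass of one dense layer: y = relu(xW + b).
# The x @ W matrix multiply is the operation a TPU's matrix unit
# (or a GPU's tensor cores) is built to do in hardware.
y = np.maximum(x @ W + b, 0.0)
print(y.shape)  # (32, 128)
```

Convolutions are typically lowered to the same matrix-multiply primitive, which is why one specialized unit covers both vision and speech models.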
NPU = Neural Processing Unit (based on FPGA), where you can load the model directly onto the processing unit. This may give excellent performance, but as a drawback, programming is quite complicated. Most available models/algorithms are for visual processing (object detection), so this would not be my choice when it comes to speech recognition.
In absolute numbers: the Rockchip RK3399Pro’s NPU is rated up to 2.4 TOPS, the Google Coral TPU up to 4 TOPS.
@Dominik actually there is practically no difference between a TPU and an NPU in the parts that process models.
As currently the de facto processing unit is the tensor core, the choice of interface has much to do with whether the model or the current training step is loaded, but both are batch-to-unit-RAM processes.
The USB sticks, in comparison, have a smaller compatibility subset and fewer register types, mainly due to cost.
Google Cloud TPUs are NPU-based, not GPU, if you can accept the approximation of terminology, as it’s the tensor cores of a GPU without the GPU stuff it doesn’t need.
Yes, there is a lot of marketing speak involved here. To my understanding the Rockchip NPU is FPGA-based and supports “reprogramming”, while the Coral Edge TPU is a “hardcoded” ASIC.
I think the rockchip is somewhere between the 2.
You can hire a Cloud TPU v2 for $4.50 / TPU-hour, or $1.35 / TPU-hour preemptible (i.e. kicked if someone wants to pay $4.50, until there is spare capacity).
A full v2 pod is 11.5 petaflops, and for training it doesn’t seem to make sense to purchase what you can hire as hardware.
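A back-of-envelope cost check for the rates quoted above (the 100-hour training run is a made-up figure purely for illustration):

```python
# Cloud TPU v2 rental rates quoted in the thread
on_demand = 4.50      # $/TPU-hour
preemptible = 1.35    # $/TPU-hour
hours = 100           # hypothetical training run length

cost_on_demand = on_demand * hours
cost_preemptible = preemptible * hours

print(f"on-demand:   ${cost_on_demand:.2f}")    # $450.00
print(f"preemptible: ${cost_preemptible:.2f}")  # $135.00
```

Either figure is far below the purchase price of comparable hardware, which is the point being made: for occasional training runs, renting wins.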
Most common voice models run happily at faster than real-time on a CPU, but training can be an almighty endurance chore.
But you can hire server space and have done in minutes what would take hours even with some considerable hardware.
But the little sticks are quite impressive, seeing as an RTX 2080 is about 12 Tflops, and the vision models are pretty excellent; thankfully voice streams are much slower than visual ones.