Google Coral Edge TPU

No, as they are all very similar, with different compatibility issues that you will have to research yourself.
The Google Coral is as good as any, and you could take the image kit and feed it Mel-Frequency Cepstral Coefficients (MFCCs).
Basically voice images, and standard image classification with that input should work.
I think Google were/are in the process of improving what’s available; not sure what the state of play is.
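The MFCC-as-image idea is easy to sketch. Below is a minimal numpy-only version (the framing, filterbank, and DCT parameters are arbitrary illustrative choices; a real pipeline would more likely use a library such as librosa):

```python
import numpy as np

def mfcc_image(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_mfcc=13):
    """Turn a 1-D audio signal into a (time, n_mfcc) 'voice image'."""
    # Frame the signal and apply a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II to decorrelate the log-mel energies -> MFCCs
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
img = mfcc_image(np.sin(2 * np.pi * 440 * t), sr)
print(img.shape)  # (frames, coefficients) -- a 2-D "image" for a classifier
```

The resulting 2-D array is exactly the kind of input you could hand to a standard image-classification model on the Coral.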

Asus say they are going to ‘support’ it, but whether it’s any better or worse than the Pi offering via the Google image, dunno.

Just don’t expect to grab DeepSpeech, compile, and fly, as DeepSpeech even runs a fork of TensorFlow 1.5, and I have no idea where accelerators such as that are with it.

You can try, but I think it’s best to say that actual compatibility and what’s available might be big constraints.

I want one, but think it prob might be a disappointment in what I can run.

The best overall compatibility is the new Nvidia RTX cards, and after that it’s all downhill, with earlier cards often needing earlier versions of TensorFlow, as performance is badly affected.
My graphics card is a meh GTX 780, pretty old now, and I don’t even bother trying to use it.

DeepSpeech prob would benefit from an accelerator if it would work, as on a Pi at least it’s single-threaded.

That’s literally a Google Coral TPU plus an SBC, so a direct competitor from a different vendor.

Other than a Jetson TX2 or Xavier board from Nvidia, there’s not much in the SBC space that’s viable for anything but custom or very specific ML work yet. If you’re looking to train, get an add-in board for a desktop or go the cloud route. SBCs are inference boards.

Does the NPU that this https://tinker-board.asus.com/prod_tinker-edge-r.html contains have anything to do with a TPU? Is it faster than the Google Coral Dev / Asus Tinker Edge T?

TPU = Tensor Processing Unit. This can be seen as a GPU that is specialized/optimized for tensor operations (vector and matrix multiplication).
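Concretely, the core operation these units hardwire is a quantized matrix multiply: int8 inputs, a wide int32 accumulator, then requantization back to int8. A minimal numpy sketch (the shapes and the requantization scale are made-up illustrative values):

```python
import numpy as np

# The quantized GEMM at the heart of a conv/dense layer on a TPU/NPU:
# int8 operands, int32 multiply-accumulate, requantize back to int8.
rng = np.random.default_rng(0)
A = rng.integers(-128, 128, (4, 8), dtype=np.int8)   # activations
W = rng.integers(-128, 128, (8, 3), dtype=np.int8)   # weights
acc = A.astype(np.int32) @ W.astype(np.int32)        # wide accumulator, no overflow
scale = 0.05                                         # hypothetical requantization scale
out = np.clip(np.round(acc * scale), -128, 127).astype(np.int8)
print(out.shape)  # (4, 3)
```

The accelerator’s advantage is doing thousands of those multiply-accumulates in parallel in fixed-function hardware rather than one at a time.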

NPU = Neural Processing Unit (based on FPGA), where you can load the model directly onto the processing unit. This may give excellent performance, but as a drawback, programming is quite complicated. Most available models/algorithms are for visual processing (object detection), so this would not be my choice when it comes to speech recognition.

In absolute numbers: the NPU in the Rockchip RK3399Pro is rated up to 2.4 TOPS; the Google Coral TPU is rated up to 4 TOPS.

@Dominik actually there is practically no difference between the parts that process models in a TPU and an NPU.

As currently the de facto processing unit is the tensor core, the choice of interface has much to do with how the model or current training step is loaded, but both are batch-to-unit-RAM processes.

The USB sticks in comparison have a smaller compatibility subset and fewer register types, mainly due to cost.
Google Cloud TPUs are NPU-based, not GPU, if you can accept the approximation of terminology, as it’s the tensor cores of a GPU without the GPU stuff it doesn’t need.

But a TPU is an ASIC, right? Another digital-design implementation, instead of FPGA, right?

Yes, there is a lot of marketing speak involved here. To my understanding, the Rockchip NPU is FPGA and supports “reprogramming”, while the Coral Edge TPU is a “hardcoded” ASIC.

So, that’s why Google Coral Dev is faster…I got it!

Dunno where you get FPGA for the RK1808, but they are all much the same, apart from lacking the GPU part of a GPU :slight_smile:

http://opensource.rock-chips.com/images/4/43/Rockchip_RK1808_Datasheet_V1.2_20190527.pdf

They are all just accelerators, with GPUs (Nvidia) having the most compatibility due to Nvidia’s head start with CUDA.

I think the Rockchip is somewhere between the two.
You can hire a Cloud TPU v2 for $4.50 / TPU-hour, or $1.35 / TPU-hour preemptible (i.e. you get kicked if someone wants to pay $4.50, until there is spare capacity again).
A single v2 is 180 teraflops (a full v2 pod is 11.5 petaflops), and for training it doesn’t seem to make sense to purchase hardware for what you can hire.
Most common voice models run happily at > realtime on a CPU, but training can be an almighty endurance chore.
But you can hire server space and have returned in minutes what would take hours even with some considerable hardware.
But the little sticks are quite impressive, seeing as an RTX 2080 is about 12 Tflops; the vision models are pretty excellent, and thankfully voice streams are much slower than visual ones.
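As a rough rent-vs-buy check against those rates (the GPU price below is a made-up placeholder, not a quote):

```python
# Back-of-envelope: how many hours of rented TPU time equal one desktop card.
preempt_rate = 1.35        # $/TPU-hour, preemptible Cloud TPU v2 (from the post above)
desktop_gpu_cost = 700.0   # hypothetical add-in-board price, for comparison only
break_even_hours = desktop_gpu_cost / preempt_rate
print(round(break_even_hours), "TPU-hours before buying wins")
```

Unless you are training more or less continuously, renting comes out ahead, which is the point being made above.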

Arm are also in on the action with the Ethos-N78 neural processing unit, at 1–10 TOPS.

Guess they will be coming thick and fast, and they will be just as standard as current tensor-core GPUs.

I am trying to install the following packages

pandas
numpy
scikit-learn
scikit-image
click
tqdm

on the Google Coral TPU, in order to work with TensorFlow Lite, and I get these errors:

File "numpy/core/setup.py", line 422, in generate_config_h
    moredefs, ignored = …

Command "… --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel Cython>=0.29.13 "numpy==1.13.3; python_version=='3.6' and platform_system!='AIX'" "numpy==1.14.5; python_version=='3.7' and platform_system!='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system!='AIX'" "numpy==1.16.0; python_version=='3.6' and platform_system=='AIX'" "numpy==1.16.0; python_version=='3.7' and platform_system=='AIX'" "numpy==1.17.3; python_version>='3.8' and platform_system=='AIX'"" failed with error code 1 in None

Any ideas??

https://coral.ai/docs/

I think you have to delegate to libedgetpu.so.1, otherwise you are just running on the CPU.
But as I said above: “No, as they are all very similar, with different compatibility issues that you will have to research yourself.”

Don’t think many, if any, are using them here; you are likely to find better answers elsewhere.

Add the delegate when constructing the Interpreter.

For example, your TensorFlow Lite code will ordinarily have a line like this:

interpreter = tflite.Interpreter(model_path)

So change it to this:

interpreter = tflite.Interpreter(model_path, experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])

The file passed to load_delegate() is the Edge TPU runtime library, and you installed it when you first set up your device. The filename you must use here depends on your host operating system, as follows:
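The Coral docs list one runtime library filename per host OS; a small platform-dispatch sketch (the filename mapping is taken from coral.ai, and the commented usage assumes tflite_runtime is installed with a compiled Edge TPU model):

```python
import platform

# Edge TPU runtime library name per host OS (per the Coral docs)
EDGETPU_SHARED_LIB = {
    "Linux": "libedgetpu.so.1",
    "Darwin": "libedgetpu.1.dylib",
    "Windows": "edgetpu.dll",
}[platform.system()]
print(EDGETPU_SHARED_LIB)

# Usage (requires tflite_runtime and a Coral device; model name is hypothetical):
# from tflite_runtime import interpreter as tflite
# interpreter = tflite.Interpreter(
#     "model_edgetpu.tflite",
#     experimental_delegates=[tflite.load_delegate(EDGETPU_SHARED_LIB)])
```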

You wrote at the end “as follows:” but I do not see the rest of the sentence. Do I miss something?

Yes, it would seem the docs are missing it.

I have run a CNN model on the Google Coral TPU Dev Board, but I am not sure it actually accelerates the model… Is there a command to see the ASIC load specifically? And more importantly, the time TensorFlow Lite takes when the CNN model is accelerated?

I dunno, but at a guess, if

interpreter = tflite.Interpreter(model_path, experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])

is included in your code, then it should use the TPU.

If

interpreter = tflite.Interpreter(model_path)

it’s just going to use the CPU.

So try testing your code and see if it makes a difference.
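One way to “see if it makes a difference” is simply to time invoke() both ways. A small generic timing harness (the tflite lines in the comment are the hypothetical usage; the demo at the bottom just times a dummy workload):

```python
import time

def mean_invoke_ms(invoke, warmup=3, runs=50):
    """Average wall-clock time of an interpreter.invoke-style callable, in ms."""
    for _ in range(warmup):   # let caches / the delegate settle first
        invoke()
    t0 = time.perf_counter()
    for _ in range(runs):
        invoke()
    return (time.perf_counter() - t0) * 1000.0 / runs

# Hypothetical CPU-vs-TPU comparison (requires tflite_runtime and a Coral device):
# cpu = tflite.Interpreter("model.tflite")
# tpu = tflite.Interpreter("model_edgetpu.tflite",
#         experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
# for name, it in [("cpu", cpu), ("tpu", tpu)]:
#     it.allocate_tensors()
#     print(name, mean_invoke_ms(it.invoke))

print(mean_invoke_ms(lambda: sum(range(10_000))))  # demo with a dummy workload
```

If the delegate is actually in use, the Edge TPU-compiled model should report a markedly lower per-invoke time.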

You need to find a Google Coral TPU group or forum, as really no one is using them here.

I was wondering if there is open-source software to blur faces, license plates, and company labeling on vehicles on-the-fly in video streams?

This would make the stream GDPR-compliant as long as it is not recording private property.
Ideal would be an additional feature to black out private property if a camera is fixed-mounted in one position.

If there is such software, how performant would the Coral Edge TPU be? One or more streams? What resolution and frame rate?

Recommended hardware | Frigate is prob the one to check, as it’s built around a Coral Edge TPU.

A single Coral can handle many cameras and will be sufficient for the majority of users. You can calculate the maximum performance of your Coral based on the inference speed reported by Frigate. With an inference speed of 10, your Coral will top out at 1000/10 = 100, or 100 frames per second. If your detection fps is regularly getting close to that, you should first consider tuning motion masks. If those are already properly configured, a second Coral may be needed.
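The rule of thumb in that quote is just inference-time inversion; in code (numbers from the quote):

```python
def coral_max_fps(inference_speed_ms):
    # Frigate reports inference speed in ms; 1000 / ms = max detections per second
    return 1000.0 / inference_speed_ms

print(coral_max_fps(10))  # → 100.0, matching the quoted example
```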

A Pi, though, comes at the bottom of the list as the base board.

A Coral M.2 is half the price of the USB version, with maybe the dual E-key one being the best value, even though I have never tried it.
GDPR is more about the use, disclosure, and retention of data, and stopping the misuse and sharing of that data.
If you comply correctly, then those faces, license plates, and company labeling are not a problem, but yeah, there are ML segmentation models to capture those, so likely the same could be used to obscure them.
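Assuming a detector (e.g. one of those detection/segmentation models compiled for the Edge TPU) hands you bounding boxes, the obscuring step itself is cheap. A numpy-only pixelation sketch (the frame and the box coordinates are synthetic placeholders, not real detections):

```python
import numpy as np

def anonymize(frame, boxes, block=16):
    """Pixelate each (x, y, w, h) region of an HxWxC uint8 frame."""
    out = frame.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        small = roi[::block, ::block]                        # downsample the region
        big = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
        out[y:y + h, x:x + w] = big[:h, :w]                  # upsample -> blocky, unreadable
    return out

# Synthetic test frame; the box stands in for a face/plate detection
frame = (np.arange(128 * 128 * 3) % 256).astype(np.uint8).reshape(128, 128, 3)
masked = anonymize(frame, [(32, 32, 64, 64)])
print(masked.shape)
```

The expensive part per frame is the detection inference on the Coral; the pixelation itself is a trivial CPU operation, so stream count is governed by the inference-speed arithmetic from the Frigate docs above.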

If you do some googling and find the perfect int8 model for edge ML, then performance is extremely impressive.