Arm have an interesting ArmNN tutorial whose results are pretty awful, but I'm probably going to play with it to see if I can get the Mali delegate working.
https://developer.arm.com/documentation/102603/2108/Device-specific-installation/Install-on-Raspberry-Pi
rock@rock-5b:~/workspace/armnn/python/pyarmnn/examples/speech_recognition$ time python3 run_audio_file.py --audio_file_path tests/testdata/quick_brown_fox_16000khz.wav --model_file_path tflite_int8/wav2letter_int8.tflite --preferred_backends CpuAcc CpuRef
Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import.
Preferred backends: ['CpuAcc', 'CpuRef']
IDeviceSpec { supportedBackends: [CpuAcc, CpuRef]}
Optimization warnings: ()
Processing Audio Frames...
the quick brown fox juhmpe over the llazy dag
real 0m2.693s
user 0m8.031s
sys 0m0.282s
Next step is just to see if I can switch to the GPU, as the Mali G610 MP4 is supposed to have pretty good ML performance.
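In principle that should only need the backend preference list changed. Assuming the ArmNN build was compiled with the GpuAcc (OpenCL/Mali) backend, the same demo invocation would look something like this (a sketch, not yet verified on this board):

```shell
# Hypothetical: ask ArmNN to try the Mali GPU first, then fall back to CPU
# backends for any unsupported layers.
python3 run_audio_file.py \
  --audio_file_path tests/testdata/quick_brown_fox_16000khz.wav \
  --model_file_path tflite_int8/wav2letter_int8.tflite \
  --preferred_backends GpuAcc CpuAcc CpuRef
```

If GpuAcc is missing from the `IDeviceSpec { supportedBackends: [...] }` line the script prints, the library was built without GPU support and everything will silently fall back to CPU.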
# Copyright © 2021 Arm Ltd and Contributors. All rights reserved.
# SPDX-License-Identifier: MIT
"""Automatic speech recognition with PyArmNN demo for processing audio clips to text."""
import sys
import os
import numpy as np

script_dir = os.path.dirname(__file__)
sys.path.insert(1, os.path.join(script_dir, '..', 'common'))

from argparse import ArgumentParser
from network_executor import ArmnnNetworkExecutor
from utils import prepare_input_data
from audio_capture import AudioCaptureParams, capture_audio
from audio_utils import decode_text, display_text
from wav2letter_mfcc import Wav2LetterMFCC, W2LAudioPreprocessor
from mfcc import MFCCParams

# Model Specific Labels
labels = {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e', 5: 'f', 6: 'g', 7: 'h', 8: 'i',
          9: 'j', 10: 'k', 11: 'l', 12: 'm', 13: 'n', 14: 'o', 15: 'p', 16: 'q',
          17: 'r', 18: 's', 19: 't', 20: 'u', 21: 'v', 22: 'w', 23: 'x', 24: 'y',
          25: 'z', 26: "'", 27: ' ', 28: '$'}


def parse_args():
    parser = ArgumentParser(description="ASR with PyArmNN")
    parser.add_argument(
        "--audio_file_path",
        required=True,
        type=str,
        help="Path to the audio file to perform ASR",
    )
    parser.add_argument(
        "--model_file_path",
        required=True,
        type=str,
        help="Path to ASR model to use",
    )
    parser.add_argument(
        "--preferred_backends",
        type=str,
        nargs="+",
        default=["CpuAcc", "CpuRef"],
        help="""List of backends in order of preference for optimizing
        subgraphs, falling back to the next backend in the list on unsupported
        layers. Defaults to [CpuAcc, CpuRef]""",
    )
    return parser.parse_args()


def main(args):
    # Read command line args
    audio_file = args.audio_file_path

    # Create the ArmNN inference runner
    network = ArmnnNetworkExecutor(args.model_file_path, args.preferred_backends)

    # Specify model specific audio data requirements
    audio_capture_params = AudioCaptureParams(dtype=np.float32, overlap=31712,
                                              min_samples=47712, sampling_freq=16000,
                                              mono=True)

    buffer = capture_audio(audio_file, audio_capture_params)

    # Extract features and create the preprocessor
    mfcc_params = MFCCParams(sampling_freq=16000, num_fbank_bins=128, mel_lo_freq=0,
                             mel_hi_freq=8000, num_mfcc_feats=13, frame_len=512,
                             use_htk_method=False, n_fft=512)

    wmfcc = Wav2LetterMFCC(mfcc_params)
    preprocessor = W2LAudioPreprocessor(wmfcc, model_input_size=296, stride=160)
    current_r_context = ""
    is_first_window = True

    print("Processing Audio Frames...")
    for audio_data in buffer:
        # Prepare the input Tensors
        input_data = prepare_input_data(audio_data, network.get_data_type(),
                                        network.get_input_quantization_scale(0),
                                        network.get_input_quantization_offset(0),
                                        preprocessor)

        # Run inference
        output_result = network.run([input_data])

        # Slice and Decode the text, and store the right context
        current_r_context, text = decode_text(is_first_window, labels, output_result)
        is_first_window = False
        display_text(text)

    print(current_r_context, flush=True)


if __name__ == "__main__":
    args = parse_args()
    main(args)
Wav2Letter seems to be exactly what the name says, raw letters with no context dictionary, but it was ArmNN that was of interest, as unlike a Pi many Arm boards now have quite capable GPUs and NPUs.
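The "juhmpe"/"llazy dag" errors above are exactly what greedy CTC decoding with no language model looks like. A minimal sketch of what the decode stage is doing, assuming (as the label map suggests) that '$' at index 28 is the CTC blank, so repeats are collapsed and blanks dropped:

```python
import numpy as np

# Same label map as the demo script; index 28 ('$') assumed to be the CTC blank.
labels = {i: c for i, c in enumerate("abcdefghijklmnopqrstuvwxyz' $")}

def greedy_ctc_decode(logits, blank_idx=28):
    """Pick the argmax label per frame, collapse repeats, then drop blanks."""
    best = np.argmax(logits, axis=1)
    prev = None
    chars = []
    for idx in best:
        if idx != prev and idx != blank_idx:
            chars.append(labels[int(idx)])
        prev = idx
    return ''.join(chars)

# One-hot frames spelling "fox": the repeated 'f' collapses, blanks separate.
frames = np.eye(29)[[5, 5, 28, 14, 28, 23]]
print(greedy_ctc_decode(frames))  # fox
```

Every frame's letter is taken at face value, so a single misclassified frame lands straight in the transcript; that is where a dictionary or beam-search language model would normally clean things up.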
The 8GB $149.00 Rock5b might sound expensive compared to a Pi4, but on the CPU it ran Whisper roughly 5x faster, and it has the most powerful Mali-based GPU I have seen plus a supposed 6 TOPS NPU (TOPS is not a good metric, though).
These boards are never going to compete with the latest and greatest GPUs, but they can partition models and use system RAM, whereas with a dedicated GPU you may want to allocate the VRAM to a single model already loaded there.
But either way, a server-based system shared across clients (satellites) is a far superior infrastructure: the diversification of commands is inherently client-server, with the big load of ASR & TTS idle the majority of the time and a usage pattern where queued clashes are low.
The Rock5b is still only shipping to early adopters, but apparently OKDO will be stocking them; the carrot was $50 off, as the distro images are still extremely raw.