FileNotFoundError: [Errno 2] No such file or directory: 'aplay'

I use docker image on RaspberryPi4+.
It runs, but when I select "Play audio on server" and try the English example text, I get this error:
FileNotFoundError: [Errno 2] No such file or directory: 'aplay'

root@b2qt-raspberrypi4-64:~# docker run        -it        -p 59125:59125        -v "${HOME}/.local/share/mycroft/mimic3:/home/mimic3/.local/share/mycroft/mimic3"        'mycroftai/mimic3'
INFO:__main__:Starting web server
[2022-12-05 17:42:52 +0000] [1] [INFO] Running on http://0.0.0.0:59125 (CTRL + C to quit)
INFO:hypercorn.error:Running on http://0.0.0.0:59125 (CTRL + C to quit)
INFO:mimic3_tts.tts:Loaded voice from /usr/share/mycroft/mimic3/voices/en_UK/apope_low
ERROR:mimic3_http.app:[Errno 2] No such file or directory: 'aplay'
Traceback (most recent call last):
  File "/home/mimic3/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1673, in full_dispatch_request
    result = await self.dispatch_request(request_context)
  File "/home/mimic3/app/.venv/lib/python3.9/site-packages/quart/app.py", line 1718, in dispatch_request
    return await self.ensure_async(handler)(**request_.view_args)
  File "/home/mimic3/app/mimic3_http/app.py", line 225, in app_tts
    subprocess.run(play_cmd, input=wav_bytes, check=True)
  File "/usr/lib/python3.9/subprocess.py", line 505, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/usr/lib/python3.9/subprocess.py", line 951, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.9/subprocess.py", line 1823, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'aplay'

But I do have it on the host:
root@b2qt-raspberrypi4-64:~# whereis aplay
aplay: /usr/bin/aplay

But aplay is not inside your Docker container, only on your host OS.
Look into mounting it into the container, or installing aplay within the container.
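One option is to bind-mount the host binary and pass through the sound devices. This is only a sketch, not guaranteed to work: aplay may also need shared libraries and ALSA config that live outside /usr/bin, and the exact flags depend on your setup:

```shell
# Mount the host's aplay binary read-only and expose the
# host sound devices to the container (example flags only):
docker run -it \
    -p 59125:59125 \
    -v /usr/bin/aplay:/usr/bin/aplay:ro \
    --device /dev/snd \
    -v "${HOME}/.local/share/mycroft/mimic3:/home/mimic3/.local/share/mycroft/mimic3" \
    'mycroftai/mimic3'
```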

Docker containers are ephemeral, so you have the problem that on restart, anything you installed inside the container will be gone.

Dunno how good this is as a first Google result, but:

What you can do with a running container is a docker commit, which creates a new image from what you are running, including your changes.
Just Google the above, because simply sharing /usr/bin will very likely run into problems: configuration and data are stored elsewhere, and as on a normal host you need to install a package rather than just copy a binary.
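The install-then-commit workflow could look something like this. It's a hedged sketch: it assumes the mycroftai/mimic3 image is Debian-based (where aplay comes from the alsa-utils package), and the image name mimic3-with-aplay is just an example:

```shell
# Install aplay inside the running container (as root):
docker exec -it -u root <running_container_name_or_id> \
    sh -c 'apt-get update && apt-get install -y alsa-utils'

# Snapshot the modified container as a new image:
docker commit <running_container_name_or_id> mimic3-with-aplay

# From then on, run the new image instead of mycroftai/mimic3:
docker run -it -p 59125:59125 --device /dev/snd \
    -v "${HOME}/.local/share/mycroft/mimic3:/home/mimic3/.local/share/mycroft/mimic3" \
    mimic3-with-aplay
```

Note the container will still need access to the host's sound hardware, hence the `--device /dev/snd` flag on the final run.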

You could try just adding another volume to the docker run command, but I think a new image created with commit is likely better.
Docker often confuses people, but you have to think of containers as the separate virtual computers they are: you get a shell inside one much like you would ssh into a remote machine, with docker exec -it <running_container_name_or_id> /bin/bash. You will probably need sudo to run docker itself, and inside the container you will likely end up as root, so it's often better to leave Docker requiring sudo for that permission elevation.