@baconator
Oh OK, now that I’ve dug a little deeper I see that the article talks about two servers with the second model already packaged, so I hadn’t recognized it as such.
So, STT aside: is the TTS serving (described in the how-to) still viable? Or what would you suggest?
Is an STT model sourced from a single speaker beneficial?
We first want to iron out the shortcomings mentioned above, e.g. “stop attention” and voice quality. After that the TTS & vocoder models will be published.
As discussed in the Mycroft chat with @SGee, I’ve uploaded the sample phrases from the first post, rendered with a new “vocoder” (WaveGrad) model in training. @Dominik and I are currently playing around with different vocoders.
It’s based on the same Taco2 model as the first samples (460k steps), so the voice flow is identical, but the pronunciation differs. The random background noise will (hopefully) go away with more training steps (the WaveGrad training is currently at 350k steps).
Thx a lot for all the hard work. It works great, but sadly it is quite slow on my machines and therefore almost unusable, and I’m not sure how to reduce the time it takes to generate the wav.
I’d check whether the model can easily be run with the Griffin-Lim vocoder. That should be faster, but at lower quality.
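For intuition on the trade-off: Griffin-Lim is not a neural network at all — it just iterates between the magnitude spectrogram and the time domain to estimate a plausible phase, which is why it is fast but sounds more metallic than WaveGrad or HifiGAN. A minimal numpy/scipy sketch of the algorithm (an illustration of the idea, not Coqui’s implementation):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=32, nperseg=256):
    """Reconstruct a time-domain signal from a magnitude spectrogram
    by iteratively re-estimating the phase (Griffin-Lim)."""
    rng = np.random.default_rng(0)
    # start from random phase with unit magnitude
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    for _ in range(n_iter):
        # go to the time domain with the current phase estimate...
        _, signal = istft(magnitude * phase, nperseg=nperseg)
        # ...and take the phase of that signal's spectrogram as the new estimate
        _, _, spec = stft(signal, nperseg=nperseg)
        phase = np.exp(1j * np.angle(spec))
    _, signal = istft(magnitude * phase, nperseg=nperseg)
    return signal
```

Each iteration is just two FFT passes, so even dozens of iterations are far cheaper than a neural vocoder forward pass — the price is the typical robotic artifacts.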
I’ll check and give feedback once I know whether it’s working.
@Dominik and I are still working on the next release of the “Thorsten” voice to be used as an offline TTS for Mycroft. (Thanks @Olaf for supporting the HifiGAN training with compute power.)
Some German sentences taken from Mycroft skills can be heard here:
Even if this thread is a little outdated because of @synesthesiam’s fantastic work on Mimic 3, I still work on providing a free, high(er)-quality TTS voice.
I’ve trained a new model and am unsure which of these two variations should be released.
Please give the samples a listen and let me know which variation you like more.
I have the newest (0.8.0) coqui-tts Docker image successfully deployed (GPU-enabled). I can’t speak for CPU-only yet, but that should be straightforward with the given Dockerfile.
GPU support, on the other hand, needs changes to the Dockerfile and setup.py directly in the source code if nvcr.io/nvidia/pytorch:22.08-py3 is used. I only got it to work with a conda install (NVIDIA uses conda); otherwise PyTorch will throw exceptions left and right.
The plugin for the Coqui server is in the making. (But you would need to build Coqui TTS from a fork of mine; it needs an API addition and the possibility to define a conf.json loaded at startup.)
You don’t necessarily need @synesthesiam’s template. Just clone Coqui TTS and try building it from their Dockerfile:
```
docker build -t <somename> .
```

Run it with:

```
docker run -it -v "<dir/from/your/host>:/root/.local/share/tts" -p 5002:5002 --entrypoint 'tts-server' "<somename>"
```
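Once the container is up, the server answers plain HTTP on port 5002. A small Python client sketch — it assumes the defaults from the command above (localhost, port 5002) and the `/api/tts` GET endpoint that Coqui’s demo server exposes:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

def tts_request_url(text, base="http://localhost:5002"):
    # tts-server takes the text as a URL-encoded query parameter
    return f"{base}/api/tts?{urlencode({'text': text})}"

if __name__ == "__main__":
    # fetch synthesized speech and save it as a wav file
    with urlopen(tts_request_url("Hallo Welt")) as resp:
        with open("hallo.wav", "wb") as f:
            f.write(resp.read())
```

The same request works from a browser or with `curl`, which is a quick way to check that the container is serving before wiring it into Mycroft.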
Thx a lot for your help. docker build builds me an image, but when I try to start it I get this error:
```
Traceback (most recent call last):
  File "/venv/bin/tts-server", line 5, in <module>
    from TTS.server.server import main
  File "/root/TTS/server/server.py", line 13, in <module>
    from TTS.config import load_config
  File "/root/TTS/config/__init__.py", line 10, in <module>
    from TTS.config.shared_configs import *
  File "/root/TTS/config/shared_configs.py", line 5, in <module>
    from trainer import TrainerConfig
  File "/venv/lib/python3.8/site-packages/trainer/__init__.py", line 3, in <module>
    from trainer.model import *
  File "/venv/lib/python3.8/site-packages/trainer/model.py", line 4, in <module>
    import torch
  File "/venv/lib/python3.8/site-packages/torch/__init__.py", line 655, in <module>
    from ._tensor import Tensor
  File "/venv/lib/python3.8/site-packages/torch/_tensor.py", line 15, in <module>
    from torch.overrides import (
  File "/venv/lib/python3.8/site-packages/torch/overrides.py", line 33, in <module>
    from torch._C import (
ImportError: cannot import name '_set_torch_function_mode' from 'torch._C' (/venv/lib/python3.8/site-packages/torch/_C.cpython-38-x86_64-linux-gnu.so)
```
Probably a simple problem, but I first need to dig deeper and learn a lot…