Fine-tuning a 🐸 TTS model
Fine-tuning takes a pre-trained model and retrains it to improve its performance on a different task or dataset. In 🐸TTS we provide pre-trained models in different languages, each with its own pros and cons. You can take one of them and fine-tune it on your own dataset. This will help you in two main ways:
Faster convergence

Since a pre-trained model has already learned features that are relevant to the task, it converges faster on a new dataset. This reduces the cost of training and lets you experiment faster.
Better results with small datasets
Deep learning models are data hungry, and they perform better with more data. However, such abundance is not always possible, especially in specific domains. For instance, the LJSpeech dataset, which we used to release most of our English models, is almost 24 hours long, and collecting that amount of data with the help of a voice talent takes weeks.
Fine-tuning comes to the rescue in this case. You can take one of our pre-trained models, fine-tune it on your own speech dataset, and achieve reasonable results with only a couple of hours of data in the worst case.
However, note that fine-tuning does not guarantee great results. The model performance still depends on the dataset quality and the hyper-parameters you choose for fine-tuning. Therefore, it still demands a bit of tinkering.
Steps to fine-tune a 🐸 TTS model
Set up your dataset.
You need to format your target dataset in a certain way so that the 🐸TTS data loader can load it for training. Please see this page for more information about formatting.
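For example, a common layout is LJSpeech-style: a `wavs/` folder of audio clips next to a `metadata.csv` with pipe-separated rows mapping each clip to its transcription. A minimal sketch of producing that file (the directory name and sample rows are made up for illustration):

```python
from pathlib import Path

root = Path("my_dataset")          # hypothetical dataset root
(root / "wavs").mkdir(parents=True, exist_ok=True)

# Each line: <audio file id>|<transcription>
rows = [
    ("utt_0001", "Hello there."),
    ("utt_0002", "Fine-tuning needs clean transcriptions."),
]

with open(root / "metadata.csv", "w", encoding="utf-8") as f:
    for file_id, text in rows:
        f.write(f"{file_id}|{text}\n")

print((root / "metadata.csv").read_text(encoding="utf-8"))
```

Here each `utt_XXXX` id is expected to match a `wavs/utt_XXXX.wav` file; check the formatting page linked above for the exact columns your chosen formatter expects.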
Choose the model you want to fine-tune.
You can list the available models in the terminal with the `tts --list_models` command. It lists the models in the naming format `<model_type>/<language>/<dataset>/<model_name>`.
Or you can manually check the `.model.json` file in the project directory.
You should choose the model based on your requirements. Some models are fast, and some have better speech quality. One lazy way to check a model is to run it on the hardware you want to use and see how it works. For simple testing, you can use the `tts` command on the terminal. For more info see here.
Download the model.
You can download the model with the `tts` command. If you run `tts` with a particular model name, it is downloaded automatically and the model path is printed on the terminal.
```bash
tts --model_name tts_models/es/mai/tacotron2-DDC --text "Ola."

> Downloading model to /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts ...
```
In the example above, we called the Spanish Tacotron model, and the sample output shows the path where the model was downloaded.
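As the download log suggests, each model gets its own cache directory whose name appears to be derived from the model name by replacing the slashes with double dashes (the cache root, `~/.local/share/tts` in the log above, may differ on your system). A small sketch of that mapping, which is an observation from the printed path rather than a documented API:

```python
def model_name_to_dirname(model_name: str) -> str:
    """Map a model name like 'tts_models/en/ljspeech/glow-tts' to the
    directory name used in the download cache (slashes become '--')."""
    return model_name.replace("/", "--")

print(model_name_to_dirname("tts_models/en/ljspeech/glow-tts"))
# → tts_models--en--ljspeech--glow-tts
```

This is handy for locating the downloaded `config.json` and checkpoint files referenced in the commands below.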
Set up the model config for fine-tuning.
You need to change certain fields in the model config. You have 3 options for playing with the configuration:
Edit the fields in the `config.json` file if you want to use `TTS/bin/train_tts.py` to train the model.
Edit the fields in one of the training scripts in the `recipes` directory if you want to use Python.
Use the command-line arguments to override the fields, like `--coqpit.lr 0.00001` to change the learning rate.
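For the first option, editing `config.json` can also be scripted. A minimal sketch, assuming a plain JSON config containing the fields discussed below (the stand-in config content and values here are made up; in practice you would load the file shipped with the downloaded model):

```python
import json
from pathlib import Path

cfg_path = Path("config.json")  # path to the downloaded model config

# Stand-in config so the sketch is self-contained; replace with the real file.
cfg_path.write_text(json.dumps({"run_name": "ljspeech-glow-tts", "lr": 0.001}))

cfg = json.loads(cfg_path.read_text())
cfg["run_name"] = "glow-tts-finetune"   # name of this fine-tuning run
cfg["lr"] = 1e-5                        # smaller learning rate for fine-tuning
cfg["output_path"] = "finetune_output"  # where checkpoints are saved

cfg_path.write_text(json.dumps(cfg, indent=2))
print(cfg["lr"])
```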
Some of the important fields are as follows:
- `datasets` field: This is set to the dataset you want to fine-tune the model on.
- `run_name` field: This is the name of the run. It is used to name the output directory and the entry in the logging dashboard.
- `output_path` field: This is the path where the fine-tuned model is saved.
- `lr` field: You may need to use a smaller learning rate for fine-tuning so that big update steps do not impair the features learned by the pre-trained model.
- `audio` fields: Different datasets have different audio characteristics. You must check the current audio parameters and make sure that the values reflect your dataset. For instance, your dataset might have a different audio sampling rate.
Apart from the fields above, you should check the whole configuration file and make sure that the values are correct for your dataset and training.
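One of these checks that is easy to automate is the audio sampling rate: compare your dataset's wav files against the rate the config expects. A sketch using only the standard library (the file name and the 22050 Hz value are assumptions for illustration; a dummy wav is generated so the snippet is self-contained):

```python
import wave

# Write a dummy 1-second silent mono wav standing in for a dataset file.
with wave.open("sample.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(22050)
    w.writeframes(b"\x00\x00" * 22050)

expected_rate = 22050  # value from the config's audio section

with wave.open("sample.wav", "rb") as w:
    actual_rate = w.getframerate()

assert actual_rate == expected_rate, (
    f"resample your data: config expects {expected_rate} Hz, got {actual_rate} Hz"
)
print("sample rate OK:", actual_rate)
```

In practice you would loop this check over every file in your `wavs/` folder before starting a run.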
Whether you use one of the training scripts under the `recipes` folder or `train_tts.py` to start your training, you should use the `--restore_path` flag to specify the path to the pre-trained model.
```bash
CUDA_VISIBLE_DEVICES="0" python recipes/ljspeech/glow_tts/train_glowtts.py \
    --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth.tar
```
```bash
CUDA_VISIBLE_DEVICES="0" python TTS/bin/train_tts.py \
    --config_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/config.json \
    --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth.tar
```
As stated above, you can also use command-line arguments to change the model configuration.
```bash
CUDA_VISIBLE_DEVICES="0" python recipes/ljspeech/glow_tts/train_glowtts.py \
    --restore_path /home/ubuntu/.local/share/tts/tts_models--en--ljspeech--glow-tts/model_file.pth.tar \
    --coqpit.run_name "glow-tts-finetune" \
    --coqpit.lr 0.00001
```