
WaveGlow

WaveGlow: a Flow-based Generative Network for Speech Synthesis

Ryan Prenger, Rafael Valle, and Bryan Catanzaro

In our recent paper, we propose WaveGlow: a flow-based network capable of generating high quality speech from mel-spectrograms. WaveGlow combines insights from Glow and WaveNet in order to provide fast, efficient and high-quality audio synthesis, without the need for auto-regression. WaveGlow is implemented using only a single network, trained using only a single cost function: maximizing the likelihood of the training data, which makes the training procedure simple and stable.
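
As a point of reference, the cost function can be written in a few lines of PyTorch. The sketch below is ours, not the repo's API (names and signature are illustrative); it computes the negative log-likelihood from the quantities a normalizing flow produces:

   import torch

   def waveglow_style_loss(z, log_s_list, log_det_W_list, sigma=1.0):
       # Negative log-likelihood of the audio under the flow.
       # z: latents from pushing audio through the flow
       # log_s_list: per-coupling-layer log-scale terms
       # log_det_W_list: log-determinants of the invertible 1x1 convolutions
       loss = torch.sum(z * z) / (2 * sigma * sigma)  # spherical Gaussian prior
       for log_s in log_s_list:                       # change-of-variables terms
           loss = loss - torch.sum(log_s)
       for log_det_W in log_det_W_list:
           loss = loss - torch.sum(log_det_W)
       return loss / z.numel()                        # per-element normalization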

Our PyTorch implementation produces audio samples at a rate of 4850 kHz on an NVIDIA V100 GPU. Mean Opinion Scores show that it delivers audio quality as good as the best publicly available WaveNet implementation.

Visit our website for audio samples.

Setup

  1. Clone our repo and initialize submodule
   git clone https://github.com/NVIDIA/waveglow.git
   cd waveglow
   git submodule init
   git submodule update
  2. Install requirements
   pip3 install -r requirements.txt
  3. Install Apex
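
A quick way to confirm the setup succeeded is to check that PyTorch sees the GPU and that Apex imports. This snippet is our own sanity check, not part of the repo:

   import torch

   print("PyTorch:", torch.__version__)
   print("CUDA available:", torch.cuda.is_available())
   try:
       from apex import amp  # used for mixed-precision training
       print("Apex: OK")
   except ImportError:
       print("Apex: missing (only needed for fp16 runs)")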

Generate audio with our pre-existing model

  1. Download our published model
  2. Download mel-spectrograms
  3. Generate audio
   python3 inference.py -f <(ls mel_spectrograms/*.pt) -w waveglow_256channels.pt -o . --is_fp16 -s 0.6

N.B.: use convert_model.py to convert older models to the current model with fused residual and skip connections.
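
Under the hood, inference amounts to loading the checkpoint and calling the model's infer method. The sketch below is a simplified reading of inference.py, with illustrative file names:

   import torch

   # Load the published checkpoint (file name is illustrative)
   waveglow = torch.load("waveglow_256channels.pt")["model"]
   waveglow = waveglow.remove_weightnorm(waveglow)  # fuse weight norm for inference
   waveglow.cuda().eval()

   mel = torch.load("mel_spectrograms/LJ001-0001.wav.pt").cuda()
   if mel.dim() == 2:
       mel = mel.unsqueeze(0)  # infer expects a batch dimension

   with torch.no_grad():
       audio = waveglow.infer(mel, sigma=0.6)  # sigma matches the -s flag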

Train your own model

  1. Download LJ Speech Data. In this example it's in data/

  2. Make a list of the file names to use for training/testing

   ls data/*.wav | tail -n+10 > train_files.txt
   ls data/*.wav | head -n10 > test_files.txt
  3. Train your WaveGlow networks
   mkdir checkpoints
   python train.py -c config.json

For multi-GPU training, replace train.py with distributed.py. This has only been tested with a single node and NCCL.

For mixed precision training, set "fp16_run": true in config.json.
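
Assuming the layout of the shipped config.json, the flag lives in the train_config block; the relevant excerpt looks like this (other keys omitted):

   {
       "train_config": {
           "fp16_run": true
       }
   }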

  4. Make test set mel-spectrograms
   python mel2samp.py -f test_files.txt -o . -c config.json
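
Each output file is a mel-spectrogram tensor saved with torch.save. A quick way to inspect one (the file name here is illustrative; output names follow the source .wav files):

   import torch

   mel = torch.load("LJ001-0001.wav.pt")
   print(mel.shape)  # (n_mel_channels, n_frames), e.g. 80 x T for LJ Speech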

  5. Do inference with your network
   ls *.pt > mel_files.txt
   python3 inference.py -f mel_files.txt -w checkpoints/waveglow_10000 -o . --is_fp16 -s 0.6