A PyTorch implementation of the method described in the paper Voice Synthesis for in-the-Wild Speakers via a Phonological Loop.
VoiceLoop is a neural text-to-speech (TTS) model that is able to transform text to speech in voices that are sampled in the wild. Some demo samples can be found here.
Follow the instructions in the setup section below, then simply execute:
python generate.py --npz data/vctk/numpy_features_valid/p318_212.npz --spkr 13 --checkpoint models/vctk/bestmodel.pth
Results will be placed in models/vctk/results. It will generate two samples:
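The --npz argument points at a preprocessed vocoder-feature file. A minimal sketch of inspecting such a file with numpy (the key names and shapes below are illustrative assumptions, not the repo's actual schema):

```python
import numpy as np

# Build a small stand-in .npz mimicking a preprocessed feature file.
# The keys "audio_features" and "text_features" are hypothetical.
np.savez("demo_features.npz",
         audio_features=np.zeros((100, 63), dtype=np.float32),  # frames x vocoder dims
         text_features=np.zeros((40, 1), dtype=np.int64))       # phoneme id sequence

# Loading shows what arrays a feature file carries.
data = np.load("demo_features.npz")
for key in data.files:
    print(key, data[key].shape, data[key].dtype)
```

The real files under data/vctk/numpy_features* follow whatever schema the Merlin/WORLD preprocessing step produced.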
You can also generate the same text but with a different speaker, specifically:
python generate.py --npz data/vctk/numpy_features_valid/p318_212.npz --spkr 18 --checkpoint models/vctk/bestmodel.pth
Which will generate the following sample.
Here is the corresponding attention plot:
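The --spkr flag selects which voice to condition on; conceptually this is a lookup of a learned per-speaker embedding vector by index. A rough numpy sketch of the idea (table size and embedding dimension are illustrative, not the model's actual values):

```python
import numpy as np

n_speakers, spkr_dim = 22, 256  # illustrative sizes only
rng = np.random.default_rng(0)
# In the real model this table is learned during training.
speaker_table = rng.normal(size=(n_speakers, spkr_dim))

def speaker_embedding(spkr_id):
    """Return the conditioning vector for a given speaker index."""
    return speaker_table[spkr_id]

e13 = speaker_embedding(13)
e18 = speaker_embedding(18)
print(e13.shape)              # (256,)
print(np.allclose(e13, e18))  # False: different speakers, different vectors
```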
Requirements: Linux/OSX, Python 2.7 and PyTorch 0.1.12. The current version of the code requires CUDA support for training. Generation can be done on the CPU.
git clone https://github.com/facebookresearch/loop.git
cd loop
pip install -r scripts/requirements.txt
The data used to train the models in the paper can be downloaded via:
bash scripts/download_data.sh
The script downloads and preprocesses a subset of VCTK. This subset contains speakers with an American accent.
The dataset was preprocessed using Merlin: from each audio clip, vocoder features were extracted using the WORLD vocoder. After downloading, the dataset will be located under subfolder data
as follows:
loop
├── data
│   └── vctk
│       ├── norm_info
│       │   └── norm.dat
│       ├── numpy_features
│       │   ├── p294_001.npz
│       │   ├── p294_002.npz
│       │   └── ...
│       └── numpy_features_valid
The preprocessing pipeline can be executed using the following script by Kyle Kastner: https://gist.github.com/kastnerkyle/cc0ac48d34860c5bb3f9112f4d9a0300.
Pretrained models can be downloaded via:
bash scripts/download_models.sh
After downloading, the models will be located under subfolder models
as follows:
loop
├── data
└── models
    ├── vctk
    │   ├── args.pth
    │   └── bestmodel.pth
    └── vctk_alt
Finally, speech generation requires SPTK 3.9 and the WORLD vocoder, as done in Merlin. To download the executables:
bash scripts/download_tools.sh
This results in the following subdirectories:
loop
├── data
├── models
└── tools
    ├── SPTK-3.9
    └── WORLD
To train a new model on vctk, first train the model using a noise level of 4 and an input sequence length of 100:
python train.py --expName vctk --data data/vctk --noise 4 --seq-len 100 --epochs 90
Then, continue training the model using a noise level of 2, with full sequences:
python train.py --expName vctk_noise_2 --data data/vctk --checkpoint checkpoints/vctk/bestmodel.pth --noise 2 --seq-len 1000 --epochs 90
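The --noise flag sets the standard deviation of Gaussian noise added to the ground-truth features fed back during teacher-forced training, and --seq-len caps the number of frames per training sequence; the second stage lowers the noise from 4 to 2 and trains on full sequences. A rough numpy sketch of what these two options do to a training batch (not the repo's actual training loop):

```python
import numpy as np

def corrupt_teacher_input(features, noise_level, seq_len, rng):
    """Truncate to seq_len frames and add Gaussian noise of the given
    standard deviation, mimicking the --seq-len and --noise options."""
    clip = features[:seq_len]
    return clip + rng.normal(scale=noise_level, size=clip.shape)

rng = np.random.default_rng(0)
gt = np.zeros((1000, 63))  # fake ground-truth vocoder frames
stage1 = corrupt_teacher_input(gt, noise_level=4, seq_len=100, rng=rng)
stage2 = corrupt_teacher_input(gt, noise_level=2, seq_len=1000, rng=rng)
print(stage1.shape, stage2.shape)  # (100, 63) (1000, 63)
```

Training on noisy teacher inputs makes the model tolerant of its own prediction errors at generation time; the noise is reduced in the second stage so the model can fit the data more tightly.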
If you find this code useful in your research, please cite:
@article{taigman2017voice,
title = {Voice Synthesis for in-the-Wild Speakers via a Phonological Loop},
author = {Taigman, Yaniv and Wolf, Lior and Polyak, Adam and Nachmani, Eliya},
journal = {ArXiv e-prints},
archivePrefix = {arXiv},
eprint = {1707.06588},
primaryClass = {cs.CL},
year = {2017},
month = {July},
}
Loop has a CC-BY-NC license.
Code: https://github.com/facebookresearch/loop
Paper: https://arxiv.org/abs/1707.06588