Most modern text-to-speech architectures use a WaveNet vocoder to
synthesize high-fidelity waveform audio, but its ancestral sampling scheme
leads to high inference time, which has limited its practical application.
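To make the bottleneck concrete, ancestral sampling (the standard WaveNet
formulation; the notation here is ours, not the paper's) factorizes the
waveform autoregressively and draws one sample at a time:
\[
  p(x) = \prod_{t=1}^{T} p\left(x_t \mid x_{<t}\right), \qquad
  x_t \sim p\left(x_t \mid x_{<t}\right),
\]
so synthesis requires $T$ sequential network evaluations, where $T$ is the
number of waveform samples (tens of thousands per second of audio).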
The recently proposed Parallel WaveNet and ClariNet achieve real-time audio
synthesis by incorporating an inverse autoregressive flow (IAF) for parallel
sampling. However, these approaches require a two-stage training pipeline with
a well-trained teacher network, and they produce natural sound only through
probability distillation combined with auxiliary loss terms.
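As a minimal sketch of why an IAF samples in parallel yet is awkward to train
directly (standard IAF notation, ours rather than the paper's): each output
conditions on the latent noise instead of on previously generated audio,
\[
  z \sim \mathcal{N}(0, I), \qquad
  x_t = z_t \cdot \sigma_t\left(z_{<t}\right) + \mu_t\left(z_{<t}\right),
  \quad t = 1, \dots, T,
\]
so all $x_t$ are computed in a single parallel pass once $z$ is drawn. The
inverse direction $z = f^{-1}(x)$, which maximum likelihood training would
require, is sequential, which is why these models are instead trained by
distilling an autoregressive teacher.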
We propose FloWaveNet, a flow-based generative model for raw audio synthesis.
FloWaveNet requires only a single-stage training procedure and a single
maximum likelihood loss, without any additional auxiliary terms, and it is
inherently parallel due to the characteristics of generative flow.
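As a sketch of that single objective (the standard change-of-variables
likelihood for normalizing flows; notation ours): for an invertible mapping
$f$ that transforms audio $x$ into noise $z = f(x)$,
\[
  \log p_X(x) = \log p_Z\left(f(x)\right)
  + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,
\]
which is maximized directly on data, with no teacher network or distillation
stage. Because the flow is composed of transformations whose forward and
inverse passes are both parallel over time steps, sampling reduces to
$x = f^{-1}(z)$ with $z$ drawn from the prior.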
The model can efficiently sample raw audio in real time, with clarity
comparable to that of previous two-stage parallel models. The code and samples
for all models,
including our FloWaveNet, are publicly available.