Waveform Generation for Text-to-speech Synthesis Using Pitch-synchronous Multi-scale Generative Adversarial Networks
The state-of-the-art in text-to-speech synthesis has recently improved
considerably due to novel neural waveform generation methods, such as WaveNet.
However, these methods suffer from their slow sequential inference process,
while their parallel versions are difficult to train and even more expensive
computationally. Meanwhile, generative adversarial networks (GANs) have
achieved impressive results in image generation and are making their way into
audio applications; parallel inference is among their lucrative properties. By
adopting recent advances in GAN training techniques, this work studies
waveform generation for TTS in two domains (speech signal and glottal
excitation). Listening test results show that while direct waveform generation
with GAN still falls far behind WaveNet, a GAN-based glottal excitation model can
achieve quality and voice similarity on par with a WaveNet vocoder.
Comment: Submitted to ICASSP 201
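The inference-cost contrast the abstract draws (WaveNet's slow sequential sampling vs. a GAN generator's parallel forward pass) can be illustrated with a toy numpy sketch. This is not the paper's model; the filter length `k`, latent size, and weight matrices below are arbitrary stand-ins chosen only to show the structural difference between the two inference styles.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16                                   # toy autoregressive receptive field
w = rng.standard_normal(k) * 0.1         # arbitrary AR filter weights

def ar_generate(n):
    # WaveNet-style inference: each sample depends on the previous k samples,
    # so generation is an O(n) sequential loop that cannot be parallelized.
    x = np.zeros(n + k)
    for t in range(k, n + k):
        x[t] = np.tanh(w @ x[t - k:t]) + 0.01 * rng.standard_normal()
    return x[k:]

def gan_generate(z, W):
    # GAN-style feed-forward generator: the whole waveform is produced
    # from a latent vector in one parallel matrix operation.
    return np.tanh(W @ z)

n = 1000
y_seq = ar_generate(n)                   # n sequential steps
z = rng.standard_normal(64)              # toy latent code
W = rng.standard_normal((n, 64)) * 0.1   # toy single-layer "generator"
y_par = gan_generate(z, W)               # one parallel pass
print(y_seq.shape, y_par.shape)          # both produce n samples
```

Both routes emit the same number of samples, but the second does so in a single batched operation, which is the "lucrative property" of GAN-based vocoders the abstract refers to.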
Audio representations for deep learning in sound synthesis: A review
The rise of deep learning algorithms has led many researchers to move away from classic signal processing methods for sound generation. Deep learning models have achieved expressive voice synthesis, realistic sound textures, and musical notes from virtual instruments. However, the most suitable deep learning architecture is still under investigation, and the choice of architecture is tightly coupled to the audio representation. A sound’s original waveform can be too dense and rich for deep learning models to handle efficiently, and this complexity increases training time and computational cost. Moreover, the raw waveform does not represent sound in the manner in which it is perceived. Therefore, in many cases, the raw audio is transformed into a compressed and more meaningful form using down-sampling, feature extraction, or a higher-level representation of the waveform. Depending on the representation chosen, additional conditioning representations, different model architectures, and numerous metrics for evaluating the reconstructed sound have been investigated. This paper provides an overview of audio representations applied to sound synthesis using deep learning. Additionally, it presents the most significant methods for developing and evaluating a sound synthesis architecture using deep learning models, always depending on the audio representation.
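One widely used compressed, perception-oriented representation of the kind this review surveys is a log-magnitude spectrogram. As a minimal numpy-only sketch (frame length, hop size, and the test tone are illustrative choices, not values from the paper), the raw waveform is framed, windowed, and mapped to log-magnitude spectra:

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=128):
    # Frame the signal, apply a Hann window, and take the magnitude
    # spectrum of each frame (a basic short-time Fourier transform).
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)   # 1 s of a 440 Hz test tone
S = stft_mag(x)
log_S = np.log1p(S)               # log compression, closer to loudness perception
print(x.shape, log_S.shape)       # many raw samples -> far fewer time frames
```

The 16000-sample waveform collapses to 122 time frames of 257 frequency bins, illustrating the trade the abstract describes: a denser raw signal exchanged for a smaller, more perceptually meaningful input to the model.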