113 research outputs found

    Spanish statistical parametric speech synthesis using a neural vocoder

    During the 2000s, unit-selection-based text-to-speech was the dominant commercial technology, while the TTS research community made a considerable effort to push statistical parametric speech synthesis toward similar quality with more flexibility in the synthetically generated voice. In recent years, advances in deep learning applied to speech synthesis have closed the gap, especially where neural vocoders replace traditional signal-processing-based vocoders. In this paper we propose to replace the waveform-generation vocoder of MUSA, our Spanish TTS system, with SampleRNN, a recently proposed deep autoregressive raw-waveform generation model. MUSA uses recurrent neural networks to predict vocoder parameters (MFCC and logF0) from linguistic features; the Ahocoder vocoder then recovers the speech waveform from the predicted parameters. Here, SampleRNN is extended to generate speech conditioned on the Ahocoder parameters (MFCC and logF0), and two training configurations are considered: first, the parameters extracted from the signal with Ahocoder are used; second, the system is trained with the parameters predicted by MUSA, so that SampleRNN and MUSA are jointly optimized. The subjective evaluation shows that the second system outperforms both the original Ahocoder and SampleRNN as an independent neural vocoder.
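
    To make the conditioning setup concrete, here is a minimal sketch (not the authors' code: all module names and dimensions are illustrative assumptions, and a plain GRU stands in for SampleRNN's hierarchical tiers) of an acoustic model predicting vocoder parameters from linguistic features, plus a sample-level network conditioned on them:

```python
# Hypothetical sketch: MUSA-style acoustic model + conditional sample-level RNN.
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """RNN that predicts vocoder parameters (MFCC + logF0) per frame."""
    def __init__(self, n_linguistic=300, n_hidden=256, n_mfcc=40):
        super().__init__()
        self.rnn = nn.GRU(n_linguistic, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_mfcc + 1)  # MFCCs plus logF0

    def forward(self, linguistic_feats):            # (B, frames, n_linguistic)
        h, _ = self.rnn(linguistic_feats)
        return self.out(h)                          # (B, frames, n_mfcc + 1)

class ConditionalSampleRNN(nn.Module):
    """Autoregressive waveform model conditioned on acoustic frames."""
    def __init__(self, n_cond=41, n_hidden=256, q_levels=256, frame_rate=80):
        super().__init__()
        self.frame_rate = frame_rate                # samples per acoustic frame
        self.embed = nn.Embedding(q_levels, 64)     # quantized previous samples
        self.rnn = nn.GRU(64 + n_cond, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, q_levels)    # distribution over next sample

    def forward(self, prev_samples, cond_frames):
        # Upsample acoustic frames to the sample rate by repetition.
        cond = cond_frames.repeat_interleave(self.frame_rate, dim=1)
        x = torch.cat([self.embed(prev_samples), cond], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h)                          # logits per waveform sample

# Mirrors the paper's second configuration: the vocoder is trained on
# parameters *predicted* by the acoustic model rather than ground truth.
acoustic, vocoder = AcousticModel(), ConditionalSampleRNN()
ling = torch.randn(2, 10, 300)                      # (batch, frames, features)
params = acoustic(ling)                             # predicted MFCC + logF0
prev = torch.randint(0, 256, (2, 10 * 80))          # quantized waveform history
logits = vocoder(prev, params)                      # next-sample distributions
```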

    Tacotron: Towards End-to-End Speech Synthesis

    A text-to-speech synthesis system typically consists of multiple stages, such as a text-analysis frontend, an acoustic model, and an audio synthesis module. Building these components often requires extensive domain expertise and may contain brittle design choices. In this paper, we present Tacotron, an end-to-end generative text-to-speech model that synthesizes speech directly from characters. Given <text, audio> pairs, the model can be trained completely from scratch with random initialization. We present several key techniques to make the sequence-to-sequence framework perform well for this challenging task. Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English, outperforming a production parametric system in terms of naturalness. In addition, since Tacotron generates speech at the frame level, it is substantially faster than sample-level autoregressive methods.
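
    As an illustration of the core idea only, the following heavily simplified sketch is not Tacotron itself: it keeps just a character encoder and an attention-based decoder that emits acoustic frames, omitting the CBHG modules, prenet, and reduction factor; all names and sizes are assumptions.

```python
# Minimal character-to-spectrogram seq2seq with content-based attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharToSpec(nn.Module):
    def __init__(self, n_chars=64, d=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d)
        self.encoder = nn.GRU(d, d, batch_first=True, bidirectional=True)
        self.decoder = nn.GRUCell(n_mels + 2 * d, 2 * d)
        self.attn = nn.Linear(4 * d, 1)             # score(decoder state, enc state)
        self.out = nn.Linear(2 * d, n_mels)

    def forward(self, chars, n_frames):
        enc, _ = self.encoder(self.embed(chars))    # (B, T, 2d)
        B, T, H = enc.shape
        state = enc.new_zeros(B, H)
        frame = enc.new_zeros(B, self.out.out_features)
        frames = []
        for _ in range(n_frames):
            # Attend over encoder states given the current decoder state.
            scores = self.attn(torch.cat(
                [state.unsqueeze(1).expand(-1, T, -1), enc], dim=-1))
            ctx = (F.softmax(scores, dim=1) * enc).sum(dim=1)   # (B, 2d)
            # Feed the previous frame and context back in, autoregressively.
            state = self.decoder(torch.cat([frame, ctx], dim=-1), state)
            frame = self.out(state)
            frames.append(frame)
        return torch.stack(frames, dim=1)           # (B, n_frames, n_mels)

model = CharToSpec()
mels = model(torch.randint(0, 64, (2, 20)), n_frames=50)  # (2, 50, 80)
```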

    Speech synthesis using recurrent neural networks

    Recurrent neural networks are useful tools for modeling data with sequential structure. In this work, we describe how to use them for speech synthesis. We start with an introduction to machine learning and neural networks in Chapter 1. In Chapter 2, we develop an automatic stochastic gradient algorithm which reduces the burden of extensive hyper-parameter search for the optimizer. Our proposed algorithm exploits a lower-variance estimator of the curvature of the cost function and uses it to obtain an automatically tuned adaptive learning rate for each parameter. In Chapter 3, we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules (namely autoregressive multilayer perceptrons) and stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in temporal sequences over very long time spans, on three datasets of different nature. Human evaluation of the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
In Chapter 4, we present Char2Wav, an end-to-end model for speech synthesis. Char2Wav has two components: a reader and a neural vocoder. The reader is an encoder-decoder model with attention: the encoder is a bidirectional recurrent neural network (RNN) that accepts text or phonemes as input, while the decoder is an RNN with attention that produces vocoder acoustic features. The neural vocoder is a conditional extension of SampleRNN that generates raw waveform samples from intermediate representations. We show results in English and Spanish. Unlike traditional models for speech synthesis, Char2Wav learns to produce audio directly from text. Finally, we reflect on the results obtained in this work and propose future directions of research in the area.
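
    As a rough illustration of Chapter 2's theme, below is a generic sketch of a per-parameter adaptive learning rate driven by running gradient statistics, in the spirit of vSGD-style methods; it is not the thesis's actual algorithm, whose curvature estimator is not reproduced here, and the class and parameter names are hypothetical.

```python
# Generic per-parameter adaptive learning rate from running gradient moments.
import torch

class AdaptiveSGD:
    def __init__(self, params, beta=0.99, eps=1e-8):
        self.params = list(params)
        self.beta, self.eps = beta, eps
        self.g = [torch.zeros_like(p) for p in self.params]   # EMA of grad
        self.g2 = [torch.zeros_like(p) for p in self.params]  # EMA of grad^2

    @torch.no_grad()
    def step(self):
        for p, g, g2 in zip(self.params, self.g, self.g2):
            if p.grad is None:
                continue
            g.mul_(self.beta).add_(p.grad, alpha=1 - self.beta)
            g2.mul_(self.beta).add_(p.grad ** 2, alpha=1 - self.beta)
            # Per-parameter rate: large when the gradient is consistent
            # (E[g]^2 close to E[g^2]), small when it is noisy.
            lr = g.pow(2) / (g2 + self.eps)
            p.add_(-lr * p.grad)

# Usage: opt = AdaptiveSGD(model.parameters()); loss.backward(); opt.step()
```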