Exploring efficient neural architectures for linguistic-acoustic mapping in text-to-speech
Conversion from text to speech relies on the accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure with intermediate affine transformations tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only on feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, designed without recurrent connections but emulating them with attention and positioning codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, whilst being significantly faster in terms of CPU and GPU inference time. The best-performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2× on CPU and 3.3× on GPU.
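As a rough sketch of the first mechanism, the PyTorch layer below keeps the expensive affine transformations on the feed-forward direction (a causal convolution over the input sequence) and leaves only an element-wise gated pooling on the temporal connections. The fo-style pooling, gate layout and sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class QRNNLayer(nn.Module):
    """Quasi-recurrent-style layer: affine transforms run along the
    feed-forward direction; the recurrence is element-wise only."""
    def __init__(self, input_size, hidden_size, kernel_size=2):
        super().__init__()
        self.hidden_size = hidden_size
        # One causal convolution produces candidate, forget and output gates.
        self.conv = nn.Conv1d(input_size, 3 * hidden_size, kernel_size,
                              padding=kernel_size - 1)

    def forward(self, x):                      # x: (batch, time, input_size)
        t = x.size(1)
        z, f, o = self.conv(x.transpose(1, 2))[:, :, :t].chunk(3, dim=1)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        c = x.new_zeros(x.size(0), self.hidden_size)
        outputs = []
        for step in range(t):                  # cheap element-wise recurrence
            c = f[:, :, step] * c + (1 - f[:, :, step]) * z[:, :, step]
            outputs.append(o[:, :, step] * c)
        return torch.stack(outputs, dim=1)     # (batch, time, hidden_size)

out = QRNNLayer(80, 256)(torch.randn(4, 100, 80))   # -> (4, 100, 256)
```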
Tacotron: Towards End-to-End Speech Synthesis
A text-to-speech synthesis system typically consists of multiple stages, such
as a text analysis frontend, an acoustic model and an audio synthesis module.
Building these components often requires extensive domain expertise and may
contain brittle design choices. In this paper, we present Tacotron, an
end-to-end generative text-to-speech model that synthesizes speech directly
from characters. Given <text, audio> pairs, the model can be trained completely
from scratch with random initialization. We present several key techniques to
make the sequence-to-sequence framework perform well for this challenging task.
Tacotron achieves a 3.82 subjective 5-scale mean opinion score on US English,
outperforming a production parametric system in terms of naturalness. In
addition, since Tacotron generates speech at the frame level, it's
substantially faster than sample-level autoregressive methods.
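To make the setup concrete, here is a minimal, hypothetical character-to-spectrogram sketch in PyTorch. It only illustrates the frame-level idea (attend over character encodings and emit one acoustic frame per decoder step); the actual Tacotron encoder, attention and multi-frame decoder differ from this toy model.

```python
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Toy character-to-mel seq2seq: encoder, additive attention, GRU decoder
    that predicts one mel frame per step. Not the Tacotron architecture."""
    def __init__(self, n_chars, n_mels=80, dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.Linear(2 * dim, 1)
        self.decoder = nn.GRUCell(n_mels + dim, dim)
        self.to_mel = nn.Linear(dim, n_mels)

    def forward(self, chars, n_frames):
        memory, _ = self.encoder(self.embed(chars))        # (B, N, dim)
        B, N, dim = memory.shape
        h = memory.new_zeros(B, dim)
        frame = memory.new_zeros(B, self.to_mel.out_features)
        outputs = []
        for _ in range(n_frames):
            # Score each encoder position against the current decoder state.
            scores = self.attn(torch.cat(
                [memory, h.unsqueeze(1).expand(-1, N, -1)], dim=-1))
            context = (torch.softmax(scores, dim=1) * memory).sum(dim=1)
            h = self.decoder(torch.cat([frame, context], dim=-1), h)
            frame = self.to_mel(h)
            outputs.append(frame)
        return torch.stack(outputs, dim=1)                 # (B, n_frames, n_mels)

mel = TinyTTS(n_chars=40)(torch.randint(0, 40, (2, 17)), n_frames=50)  # (2, 50, 80)
```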
Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
Natural language generation (NLG) is a critical component of a spoken dialogue
system, and it has a significant impact on both usability and perceived quality. Most
NLG systems in common use employ rules and heuristics and tend to generate
rigid and stylised responses without the natural variation of human language.
They are also not easily scaled to systems covering multiple domains and
languages. This paper presents a statistical language generator based on a
semantically controlled Long Short-term Memory (LSTM) structure. The LSTM
generator can learn from unaligned data by jointly optimising sentence planning
and surface realisation using a simple cross entropy training criterion, and
language variation can be easily achieved by sampling from output candidates.
An objective evaluation in two differing test domains showed that the proposed
method, despite relying on fewer heuristics, improved performance compared to previous methods.
Human judges scored the LSTM system higher on informativeness and naturalness
and overall preferred it to the other systems.
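The "semantically controlled" part can be sketched as an LSTM cell that carries a dialogue-act (DA) vector, gradually consumes it through a reading gate, and injects it into the memory cell so that the generated words are steered by the semantic input. The formulation below follows the commonly cited description of such a cell; the exact gating and dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SCLSTMCell(nn.Module):
    """Sketch of a semantically controlled LSTM cell: a standard LSTM cell
    plus a dialogue-act vector d that a reading gate consumes over time."""
    def __init__(self, input_size, hidden_size, da_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        self.read = nn.Linear(input_size + hidden_size, da_size)
        self.da_to_cell = nn.Linear(da_size, hidden_size, bias=False)

    def forward(self, w, state):               # w: current word embedding
        h, c, d = state
        x = torch.cat([w, h], dim=-1)
        i, f, o, g = self.gates(x).chunk(4, dim=-1)
        i, f, o, g = (torch.sigmoid(i), torch.sigmoid(f),
                      torch.sigmoid(o), torch.tanh(g))
        r = torch.sigmoid(self.read(x))        # reading gate
        d = r * d                              # consume part of the DA vector
        c = f * c + i * g + torch.tanh(self.da_to_cell(d))
        h = o * torch.tanh(c)
        return h, (h, c, d)

cell = SCLSTMCell(64, 128, 10)
h, state = cell(torch.randn(2, 64),
                (torch.zeros(2, 128), torch.zeros(2, 128), torch.ones(2, 10)))
```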
Forward Attention in Sequence-to-sequence Acoustic Modelling for Speech Synthesis
This paper proposes a forward attention method for the sequence-to-sequence
acoustic modeling of speech synthesis. This method is motivated by the nature
of the monotonic alignment from phone sequences to acoustic sequences. Only the
alignment paths that satisfy the monotonic condition are taken into
consideration at each decoder timestep. The modified attention probabilities at
each timestep are computed recursively using a forward algorithm. A transition
agent for forward attention is further proposed, which helps the attention
mechanism decide whether to move forward or to stay at each decoder
timestep. Experimental results show that the proposed forward attention method
achieves faster convergence speed and higher stability than the baseline
attention method. In addition, forward attention with a transition agent also
helps to improve the naturalness of the synthetic speech and to control its
speed effectively.
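The recursion described above can be written down directly: at decoder step t, the content-based attention y_t(n) is reweighted by the probability mass that could have reached encoder position n monotonically, alpha_t(n) ∝ (alpha_{t-1}(n) + alpha_{t-1}(n-1)) * y_t(n), followed by renormalisation. The NumPy sketch below assumes softmax-normalised content attention and omits the transition agent.

```python
import numpy as np

def forward_attention(energies):
    """energies: (T_dec, N_enc) unnormalised attention scores.
    Returns attention weights restricted to monotonic alignment paths."""
    T, N = energies.shape
    y = np.exp(energies - energies.max(axis=1, keepdims=True))
    y /= y.sum(axis=1, keepdims=True)          # content-based attention y_t(n)
    alpha = np.zeros((T, N))
    prev = np.zeros(N)
    prev[0] = 1.0                              # alignment starts at the first phone
    for t in range(T):
        shifted = np.concatenate(([0.0], prev[:-1]))   # alpha_{t-1}(n-1)
        a = (prev + shifted) * y[t]            # forward recursion
        alpha[t] = a / a.sum()                 # renormalise
        prev = alpha[t]
    return alpha

weights = forward_attention(np.random.randn(20, 8))
assert np.allclose(weights.sum(axis=1), 1.0)
```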
Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking
The natural language generation (NLG) component of a spoken dialogue system
(SDS) usually needs a substantial amount of handcrafting or a well-labeled
dataset to be trained on. These limitations add significantly to development
costs and make cross-domain, multi-lingual dialogue systems intractable.
Moreover, human languages are context-aware. The most natural response should
be directly learned from data rather than depending on predefined syntaxes or
rules. This paper presents a statistical language generator based on a joint
recurrent and convolutional neural network structure which can be trained on
dialogue act-utterance pairs without any semantic alignments or predefined
grammar trees. Objective metrics suggest that this new model outperforms
previous methods under the same experimental conditions. Results of an
evaluation by human judges indicate that it produces not only high-quality but
also linguistically varied utterances, which are preferred over n-gram and
rule-based systems.
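A hypothetical sketch of the reranking step in PyTorch: candidates sampled from the recurrent generator are rescored by a small convolutional sentence model conditioned on the dialogue act, and the highest-scoring utterance is kept. Sizes, pooling and the scoring head are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CNNReranker(nn.Module):
    """Convolutional sentence model that scores a candidate utterance
    against the input dialogue-act vector (illustrative sizes)."""
    def __init__(self, vocab_size, emb_size=64, da_size=32, n_filters=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)
        self.conv = nn.Conv1d(emb_size, n_filters, kernel_size=3, padding=1)
        self.score = nn.Linear(n_filters + da_size, 1)

    def forward(self, tokens, da):             # tokens: (B, T), da: (B, da_size)
        e = self.emb(tokens).transpose(1, 2)   # (B, emb, T)
        h = torch.relu(self.conv(e)).max(dim=2).values   # max-over-time pooling
        return self.score(torch.cat([h, da], dim=-1)).squeeze(-1)

def rerank(candidates, da, model):
    """Return the sampled candidate the CNN scores highest for this dialogue act."""
    scores = torch.stack([model(c.unsqueeze(0), da).squeeze(0) for c in candidates])
    return candidates[int(scores.argmax())]

da = torch.zeros(1, 32)
cands = [torch.randint(0, 100, (n,)) for n in (7, 9, 5)]
best = rerank(cands, da, CNNReranker(vocab_size=100))
```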