Expressive TTS Training with Frame and Style Reconstruction Loss
We propose a novel training strategy for Tacotron-based text-to-speech (TTS)
systems to improve the expressiveness of speech. One of the key challenges in
prosody modeling is the lack of a reference, which makes explicit modeling
difficult. The proposed technique requires no prosody annotations in the
training data, nor does it attempt to model prosody explicitly; instead, it
encodes the association between input text and its prosody styles within a
Tacotron-based TTS framework. Our proposed idea marks a departure from the
style token paradigm, in which prosody is explicitly modeled by a bank of
prosody embeddings. The proposed training strategy adopts a combination of two
objective functions: 1) a frame-level reconstruction loss, calculated between
the synthesized and target spectral features; and 2) an utterance-level style
reconstruction loss, calculated between the deep style features of the
synthesized and target speech. The style reconstruction loss is formulated as
a perceptual loss to ensure that utterance-level speech style is taken into
consideration during training.
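
To make the two-term objective concrete, the following minimal PyTorch-style
sketch combines a frame-level L1 reconstruction loss with an utterance-level
perceptual loss over deep features from a frozen style encoder. The
style_encoder, its list-of-activations interface, the L1/MSE choices, and the
weight alpha are illustrative assumptions, not the paper's exact
configuration.

```python
import torch
import torch.nn.functional as F


def combined_tts_loss(pred_mel, target_mel, style_encoder, alpha=1.0):
    """Two-term objective: frame-level reconstruction plus
    utterance-level style reconstruction (perceptual) loss.

    pred_mel, target_mel: (batch, frames, n_mels) spectral features.
    style_encoder: a frozen, pretrained network returning a list of
        intermediate activations ("deep style features") per utterance.
    alpha: relative weight of the style term (hypothetical value).
    """
    # 1) Frame-level reconstruction loss between the synthesized and
    #    target spectral features.
    frame_loss = F.l1_loss(pred_mel, target_mel)

    # 2) Utterance-level style reconstruction loss, formulated as a
    #    perceptual loss: compare deep style features of the synthesized
    #    and target speech. Gradients flow only through the synthesized
    #    branch; the target features act as a fixed reference.
    with torch.no_grad():
        target_feats = style_encoder(target_mel)
    pred_feats = style_encoder(pred_mel)
    style_loss = sum(F.mse_loss(p, t)
                     for p, t in zip(pred_feats, target_feats))

    return frame_loss + alpha * style_loss
```

In training, such a combined loss would replace the frame-level term alone,
so that utterance-level style mismatches also drive the gradient updates.
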
Experiments show that the proposed training strategy outperforms a
state-of-the-art baseline in both naturalness and expressiveness. To the best
of our knowledge, this is the first study to incorporate utterance-level
perceptual quality as a loss function into Tacotron training for improved
expressiveness.

Comment: Submitted to IEEE/ACM Transactions on Audio, Speech and Language
Processing