Improving Prosody for Cross-Speaker Style Transfer by Semi-Supervised Style Extractor and Hierarchical Modeling in Speech Synthesis
Cross-speaker style transfer in speech synthesis aims to transfer a style from a source speaker to synthesized speech in a target speaker's timbre. In most previous methods, the synthesized fine-grained prosody features tend to represent the source speaker's average style, owing to the one-to-many problem (i.e., multiple prosody variations correspond to the same text). To address this problem, a strength-controlled, semi-supervised style extractor is proposed to disentangle style from content and timbre, improving the representation and interpretability of the global style embedding and alleviating the one-to-many mapping and data imbalance problems in prosody prediction. A hierarchical prosody predictor is proposed to improve prosody modeling. We find that better style transfer is achieved by using the source speaker's prosody features, which are easier to predict. Additionally, a speaker-transfer-wise cycle consistency loss is proposed to help the model learn style-timbre combinations that are unseen during the training phase. Experimental results show that the method outperforms the baseline. We provide a website with audio samples. Comment: Accepted by ICASSP202
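The abstract does not specify how the speaker-transfer-wise cycle consistency loss is computed. The following is a minimal sketch, assuming PyTorch, of one plausible realization: re-extract the global style embedding from the transferred speech and pull it back toward the source style embedding. The callables `style_extractor` and `synthesize` and all tensor shapes are hypothetical placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(style_extractor, synthesize,
                           src_mel, tgt_speaker_emb, text):
    """Encourage the style of transferred speech to match the source style.

    style_extractor: maps a mel-spectrogram to a global style embedding.
    synthesize: maps (text, style embedding, speaker embedding) to a mel.
    """
    src_style = style_extractor(src_mel)                        # style of source speech
    transferred = synthesize(text, src_style, tgt_speaker_emb)  # unseen style-timbre combo
    cyc_style = style_extractor(transferred)                    # re-extracted style
    # Pull the re-extracted style back toward the source style embedding.
    return 1.0 - F.cosine_similarity(src_style, cyc_style, dim=-1).mean()

# Toy usage with stub callables (real encoder/synthesizer models would replace these):
stub_style = lambda mel: mel.mean(dim=-1)                       # (batch, n_mels)
stub_synth = lambda text, style, spk: torch.randn(2, 80, 100)   # fake transferred mel
loss = cycle_consistency_loss(stub_style, stub_synth,
                              torch.randn(2, 80, 100), torch.randn(2, 192), None)
```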
Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts
Zero-shot text-to-speech aims to synthesize voices from unseen speech prompts. Previous large-scale multi-speaker TTS models have achieved this goal with an enrolled recording of less than 10 seconds, but most of them are designed to use only short speech prompts. The limited information in short speech prompts significantly hinders fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multi-speaker TTS model capable of synthesizing speech for unseen speakers from prompts of arbitrary length. Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference utterances, and 2) train a prosody language model with arbitrary-length speech prompts. With these designs, our model handles prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverage probabilities derived from multiple P-LLM outputs to produce expressive and controllable prosody. Furthermore, we propose a phoneme-level autoregressive duration model that brings in-context learning capabilities to duration modeling. Experiments demonstrate that our method can not only synthesize identity-preserving speech from a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found at https://mega-tts.github.io/mega2_demo/
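A multi-reference timbre encoder must aggregate timbre information from an arbitrary number of reference clips into a single vector. Below is a minimal sketch, assuming PyTorch, of one way this aggregation step could be done with attention pooling over per-utterance embeddings; the module name, dimensions, and pooling design are illustrative assumptions, not the Mega-TTS 2 implementation.

```python
import torch
import torch.nn as nn

class MultiReferenceTimbrePooling(nn.Module):
    """Pool per-utterance timbre embeddings into one speaker-level vector."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learned pooling query
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, ref_embeddings: torch.Tensor) -> torch.Tensor:
        # ref_embeddings: (batch, n_refs, dim), one embedding per reference clip.
        q = self.query.expand(ref_embeddings.size(0), -1, -1)
        pooled, _ = self.attn(q, ref_embeddings, ref_embeddings)
        return pooled.squeeze(1)  # (batch, dim) aggregated timbre vector

# Usage: pool timbre vectors extracted from any number of prompt utterances.
pooler = MultiReferenceTimbrePooling(dim=256)
refs = torch.randn(2, 7, 256)   # 2 speakers, 7 reference clips each
timbre = pooler(refs)           # -> shape (2, 256)
```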
Minimally-Supervised Speech Synthesis with Conditional Diffusion Model and Language Model: A Comparative Study of Semantic Coding
Recently, there has been growing interest in text-to-speech (TTS) methods that can be trained with minimal supervision by combining two types of discrete speech representations and using two sequence-to-sequence tasks to decouple TTS. To address the challenges of high dimensionality and waveform distortion in discrete representations, we propose Diff-LM-Speech, which models semantic embeddings into mel-spectrograms with a diffusion model and introduces a prompt-encoder structure based on variational autoencoders and prosody bottlenecks to improve prompt representation. Autoregressive language models often suffer from missing and repeated words, while non-autoregressive frameworks suffer from expression averaging caused by duration prediction models. To address these issues, we propose Tetra-Diff-Speech, which introduces a duration diffusion model to achieve diverse prosodic expression. While we expect the information content of semantic coding to lie between that of text and acoustic coding, existing models extract semantic codes with considerable redundancy and dimensionality explosion. To verify that semantic coding is not necessary, we propose Tri-Diff-Speech. Experimental results show that our proposed methods outperform baseline methods. We provide a website with audio samples.
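To make the "models semantic embeddings into mel-spectrograms with a diffusion model" step concrete, here is a minimal sketch, assuming PyTorch, of a generic DDPM-style training step for a denoiser conditioned on semantic embeddings. The `denoiser` callable, the noise schedule, and all shapes are illustrative assumptions and not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, mel, semantic_cond, num_steps=1000):
    # Linear beta schedule and the corresponding cumulative alpha products.
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    t = torch.randint(0, num_steps, (mel.size(0),))        # random step per sample
    a = alpha_bar[t].view(-1, 1, 1)                        # broadcast to mel shape
    noise = torch.randn_like(mel)
    noisy_mel = a.sqrt() * mel + (1.0 - a).sqrt() * noise  # forward noising q(x_t | x_0)

    pred_noise = denoiser(noisy_mel, t, semantic_cond)     # conditional denoiser network
    return F.mse_loss(pred_noise, noise)                   # simple epsilon-prediction loss

# Toy usage with a denoiser stub that ignores its conditioning:
mel = torch.randn(4, 80, 120)
loss = diffusion_training_step(lambda x, t, c: torch.zeros_like(x), mel, None)
```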