2 research outputs found
Adversarial Feature Learning and Unsupervised Clustering based Speech Synthesis for Found Data with Acoustic and Textual Noise
Attention-based sequence-to-sequence (seq2seq) speech synthesis has achieved
extraordinary performance. However, training such seq2seq systems requires a
studio-quality corpus with manual transcriptions. In this paper, we propose an
approach to build a high-quality and stable seq2seq-based speech synthesis
system from challenging found data, where the training speech contains noisy
interference (acoustic noise) and the texts are imperfect speech recognition
transcripts (textual noise). To deal with the text-side noise, we propose a
VQVAE-based heuristic method that compensates for erroneous linguistic features with phonetic
information learned directly from speech. As for the speech-side noise, we
propose to learn a noise-independent feature in the auto-regressive decoder
through adversarial training and data augmentation, without requiring an
extra speech enhancement model. Experiments show the effectiveness of the
proposed approach in dealing with both text-side and speech-side noise.
Surpassing the denoising approach based on a state-of-the-art speech
enhancement model, our system built on noisy found data can synthesize clean
and high-quality speech with a MOS close to that of the system built on the
clean counterpart.
Comment: submitted to IEEE SP
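For illustration only (a minimal NumPy sketch, not the authors' implementation): the discrete phonetic representation at the heart of a VQ-VAE comes from quantizing encoder features to their nearest codebook entries, which is a simple nearest-neighbour lookup:

```python
import numpy as np

def vq_lookup(features, codebook):
    """Quantize each frame-level feature to its nearest codebook entry
    (squared Euclidean distance), returning the quantized features and
    the chosen code indices."""
    # features: (T, D) frame features; codebook: (K, D) learned codes
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)  # (T,) index of the nearest code per frame
    return codebook[idx], idx

# Toy example: two frames snap to their closest of three codes
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
features = np.array([[0.9, 1.2], [4.8, 5.1]])
quantized, idx = vq_lookup(features, codebook)
```

In a full VQ-VAE the codebook is learned jointly with the encoder (typically via a straight-through gradient estimator and a commitment loss); the lookup above is only the quantization step.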
Data Efficient Voice Cloning from Noisy Samples with Domain Adversarial Training
Data efficient voice cloning aims at synthesizing target speaker's voice with
only a few enrollment samples at hand. To this end, speaker adaptation and
speaker encoding are two typical methods, both built on a base model trained
on multiple speakers. The former uses a small set of target speaker data to
transfer the multi-speaker model to the target speaker's voice through direct
model updates, while in the latter, only a few seconds of the target speaker's
audio are passed through an extra speaker encoding model along with the
multi-speaker model to synthesize the target speaker's voice without any model
update. However, both methods need clean target speaker data, whereas the
samples provided by users in real applications inevitably contain acoustic
noise. Generating the target voice from such noisy data remains challenging.
In this paper, we study the data efficient voice cloning problem from
noisy samples under the sequence-to-sequence based TTS paradigm. Specifically,
we introduce domain adversarial training (DAT) to speaker adaptation and
speaker encoding, which aims to disentangle noise from speech-noise mixture.
Experiments show that for both speaker adaptation and encoding, the proposed
approaches consistently synthesize clean speech from noisy speaker samples,
clearly outperforming the method that adopts a state-of-the-art speech
enhancement module.
Comment: Accepted to INTERSPEECH 202
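As a minimal sketch (not the authors' code), the key building block of domain adversarial training is a gradient reversal layer: it is the identity in the forward pass, but negates (and scales) the gradient in the backward pass, so the encoder feeding a noise/domain classifier learns features the classifier cannot separate:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer for domain adversarial training:
    identity in the forward pass, sign-flipped (and scaled) gradient
    in the backward pass, pushing the upstream encoder to produce
    domain-invariant (here, noise-independent) features."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip the gradient sign

# Toy example: gradients from the domain classifier are reversed
grl = GradientReversal(lam=0.5)
out = grl.forward(np.array([1.0, -2.0]))        # unchanged
grad = grl.backward(np.array([2.0, 4.0]))       # becomes [-1.0, -2.0]
```

In practice the layer sits between the shared encoder and the domain (noise) classifier, so the classifier is trained to detect noise while the reversed gradient trains the encoder to discard it.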