You Do Not Need More Data: Improving End-To-End Speech Recognition by Text-To-Speech Data Augmentation
Data augmentation is one of the most effective ways to make end-to-end
automatic speech recognition (ASR) perform close to the conventional hybrid
approach, especially when dealing with low-resource tasks. Using recent
advances in speech synthesis (text-to-speech, or TTS), we build our TTS system
on an ASR training database and then extend the data with synthesized speech to
train a recognition model. We argue that, when the amount of training data is
relatively small, this approach can allow an end-to-end model to reach the
quality of hybrid systems. For an artificial low-to-medium-resource setup, we compare
the proposed augmentation with a semi-supervised learning technique. We also
investigate the influence of vocoder usage on final ASR performance by
comparing the Griffin-Lim algorithm with our modified LPCNet. When combined with an
external language model, our approach outperforms a semi-supervised setup on
LibriSpeech test-clean and is only 33% worse than a comparable supervised setup.
Our system establishes a competitive result for end-to-end ASR trained on the
LibriSpeech train-clean-100 set, with a WER of 4.3% on test-clean and 13.5% on
test-other.
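
As a rough illustration of the pipeline this abstract describes (train a TTS
system on the ASR corpus, synthesize speech for additional transcripts, and
train the recognizer on the mix), here is a minimal Python sketch. The helpers
train_tts, Tts.synthesize, and the Utterance container are hypothetical
stand-ins for a real toolkit, not the authors' code; in the paper's setup, the
synthesized side would be vocoded with either Griffin-Lim or the modified
LPCNet before being mixed into training.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Utterance:
        text: str               # reference transcript
        audio: bytes = b""      # raw waveform (placeholder)
        synthetic: bool = False

    class Tts:
        """Hypothetical TTS model trained on the ASR training corpus."""
        def synthesize(self, text: str) -> bytes:
            # Placeholder for an actual acoustic model + vocoder pipeline.
            raise NotImplementedError

    def train_tts(corpus: List[Utterance]) -> Tts:
        """Hypothetical: fit a TTS system on the (audio, text) pairs
        of the ASR training set."""
        return Tts()

    def augment_with_tts(real: List[Utterance],
                         extra_texts: List[str]) -> List[Utterance]:
        """Extend the real ASR training set with synthesized speech,
        following the data flow described in the abstract."""
        tts = train_tts(real)
        synthetic = [Utterance(text=t, audio=tts.synthesize(t), synthetic=True)
                     for t in extra_texts]
        # The recognizer is then trained on real + synthetic data together.
        return real + synthetic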
MSM-VC: High-fidelity Source Style Transfer for Non-Parallel Voice Conversion by Multi-scale Style Modeling
In addition to conveying the linguistic content from source speech to
converted speech, maintaining the speaking style of source speech also plays an
important role in the voice conversion (VC) task, which is essential in many
scenarios with highly expressive source speech, such as dubbing and data
augmentation. Previous work has generally used explicit prosodic features or a
fixed-length style embedding extracted from source speech to model the speaking
style of source speech, which is insufficient to achieve comprehensive style
modeling and target speaker timbre preservation. Inspired by the multi-scale
nature of speaking style in human speech, this paper proposes a multi-scale
style modeling method for the VC task, referred to as MSM-VC. MSM-VC models the
speaking style of source speech at different levels. To effectively convey
the speaking style and meanwhile prevent timbre leakage from source speech to
converted speech, each level's style is modeled by a specific representation.
Specifically, prosodic features, bottleneck features from a pre-trained ASR
model, and features extracted by a model trained with a self-supervised
strategy are adopted to model frame-level, local-level, and global-level style,
respectively.
Besides, to balance the performance of source style modeling and target speaker
timbre preservation, an explicit constraint module consisting of a pre-trained
speech emotion recognition model and a speaker classifier is introduced to
MSM-VC. This explicit constraint module also makes it possible to simulate the
style transfer inference process during training to improve the
disentanglement ability and alleviate the mismatch between training and
inference. Experiments performed on a highly expressive speech corpus
demonstrate that MSM-VC is superior to state-of-the-art VC methods at
modeling source speech style while maintaining good speech quality and speaker
similarity.
Comment: This work was submitted on April 10, 2022 and accepted on August 29, 202
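
To make the three-scale design concrete, below is a minimal PyTorch sketch of
one way such conditioning could be wired up. The three feature streams are
assumed to be precomputed (per-frame prosodic features, ASR bottleneck
features, self-supervised features); all dimensions, the fusion by summation,
and the module names are illustrative assumptions rather than the paper's
actual architecture, and the explicit constraint module (pre-trained speech
emotion recognition model and speaker classifier) would add auxiliary losses on
top of this and is not shown.

    import torch
    import torch.nn as nn

    class MultiScaleStyleEncoder(nn.Module):
        def __init__(self, prosody_dim=4, bnf_dim=256, ssl_dim=768, hidden=256):
            super().__init__()
            # Frame level: explicit prosodic features (e.g. F0, energy).
            self.frame_proj = nn.Linear(prosody_dim, hidden)
            # Local level: bottleneck features from a pre-trained ASR model,
            # convolved over a short window to capture local style.
            self.local_conv = nn.Conv1d(bnf_dim, hidden, kernel_size=5, padding=2)
            # Global level: one utterance-wide vector pooled from the
            # representations of a self-supervised speech model.
            self.global_proj = nn.Linear(ssl_dim, hidden)

        def forward(self, prosody, bnf, ssl):
            # prosody: (B, T, prosody_dim); bnf: (B, T, bnf_dim); ssl: (B, T, ssl_dim)
            frame = self.frame_proj(prosody)                              # (B, T, H)
            local = self.local_conv(bnf.transpose(1, 2)).transpose(1, 2)  # (B, T, H)
            global_ = self.global_proj(ssl.mean(dim=1))                   # (B, H)
            # Sum the three scales; the global vector broadcasts over time.
            return frame + local + global_.unsqueeze(1)                   # (B, T, H)

    # Example: style conditioning for a 100-frame batch of two utterances.
    # enc = MultiScaleStyleEncoder()
    # style = enc(torch.randn(2, 100, 4), torch.randn(2, 100, 256),
    #             torch.randn(2, 100, 768))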