SIG-VC: A Speaker Information Guided Zero-shot Voice Conversion System for Both Human Beings and Machines
Nowadays, as more and more systems achieve good performance on traditional
voice conversion (VC) tasks, attention is gradually turning to VC tasks under
extreme conditions. In this paper, we propose a novel method for zero-shot
voice conversion. We aim to obtain intermediate representations for
speaker-content disentanglement of speech, so as to better remove speaker
information and obtain pure content information. Accordingly, our proposed
framework contains a module that removes speaker information from the acoustic
features of the source speaker. Moreover, speaker information control is added
to the system to maintain voice cloning performance. The proposed system is
evaluated with subjective and objective metrics. Results show that it
significantly reduces the trade-off problem in zero-shot voice conversion,
while also achieving high spoofing power against a speaker verification
system.
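The abstract gives no code, so the following is a minimal, hypothetical PyTorch sketch of the two-stage idea it describes: a module that strips speaker information from the source features, followed by speaker-information control that re-injects a target-speaker embedding. All module names, dimensions, and the x-vector-style speaker embedding are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of speaker removal + speaker-information control,
# not the SIG-VC authors' code. Shapes and names are assumptions.
import torch
import torch.nn as nn

class SpeakerRemover(nn.Module):
    """Maps source acoustic features to a (nominally) speaker-free content space."""
    def __init__(self, feat_dim=80, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, mel):            # mel: (batch, frames, feat_dim)
        return self.net(mel)           # content embedding, speaker info suppressed

class SpeakerConditioner(nn.Module):
    """Re-injects target-speaker information to control voice cloning."""
    def __init__(self, hidden=256, spk_dim=192, feat_dim=80):
        super().__init__()
        self.proj = nn.Linear(hidden + spk_dim, feat_dim)

    def forward(self, content, spk_emb):
        # Broadcast the single speaker embedding across every frame.
        spk = spk_emb.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.proj(torch.cat([content, spk], dim=-1))

# Zero-shot conversion: content from the source, identity from the target.
remover, conditioner = SpeakerRemover(), SpeakerConditioner()
src_mel = torch.randn(1, 120, 80)   # source utterance (mel spectrogram)
tgt_spk = torch.randn(1, 192)       # target speaker embedding (e.g. an x-vector)
converted = conditioner(remover(src_mel), tgt_spk)   # -> (1, 120, 80)
```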
PMVC: Data Augmentation-Based Prosody Modeling for Expressive Voice Conversion
Voice conversion, the style-transfer task applied to speech, refers to
converting one person's speech into new speech that sounds like another
person's. A great deal of research has been devoted to better implementations
of VC tasks. However, a good voice conversion model should match not only the
timbre of the target speaker but also expressive information such as prosody,
pace, and pauses. In this context, prosody modeling is crucial for achieving
expressive voice conversion that sounds natural and convincing. Unfortunately,
prosody modeling is important yet challenging, especially without text
transcriptions. In this paper, we propose a novel voice conversion framework
named 'PMVC', which effectively separates and models content, timbre, and
prosodic information from speech without text transcriptions. Specifically, we
introduce a new speech augmentation algorithm for robust prosody extraction,
and building upon this, a mask-and-predict mechanism is applied to disentangle
prosody and content information. Experimental results on the AIShell-3 corpus
support our improvement in the naturalness and similarity of converted speech.
Comment: Accepted by the 31st ACM International Conference on Multimedia
(MM 2023)
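As a rough illustration of the augmentation-plus-mask-and-predict idea described above, here is a hedged PyTorch sketch assuming mel-spectrogram inputs. The time-stretch augmentation, masking ratio, and function names are illustrative guesses, not the PMVC recipe.

```python
# Illustrative sketch of augmentation for prosody extraction and a
# mask-and-predict step; all names are hypothetical, not PMVC's code.
import torch
import torch.nn.functional as F

def augment_for_prosody(mel, max_stretch=0.2):
    """Randomly time-stretch the mel so fine content cues become unreliable
    while coarse prosodic contours survive (one possible augmentation)."""
    rate = 1.0 + (torch.rand(1).item() * 2 - 1) * max_stretch
    frames = max(1, int(mel.size(1) * rate))
    # interpolate along the time axis: (B, T, D) -> (B, D, T) -> back
    return F.interpolate(mel.transpose(1, 2), size=frames,
                         mode="linear", align_corners=False).transpose(1, 2)

def mask_frames(x, mask_ratio=0.5):
    """Zero out a random subset of frames; a predictor must then reconstruct
    them from the remaining context, discouraging content leakage."""
    keep = torch.rand(x.size(0), x.size(1), 1, device=x.device) > mask_ratio
    return x * keep, keep

mel = torch.randn(1, 120, 80)
aug = augment_for_prosody(mel)
masked, keep = mask_frames(aug)
# A model trained to predict `aug` from `masked` would carry prosody, while
# a separate encoder on the unmasked input would carry content.
```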
Disentangled Feature Learning for Real-Time Neural Speech Coding
Recently, end-to-end neural audio/speech coding has shown great potential to
outperform traditional signal-analysis-based audio codecs. This is mostly
achieved by following the VQ-VAE paradigm, in which blind features are
learned, vector-quantized, and coded. In this paper, instead of blind
end-to-end learning, we propose to learn disentangled features for real-time
neural speech coding. Specifically, global-like speaker identity features and
local content features are learned with disentanglement to represent speech.
Such a compact feature decomposition not only achieves better coding
efficiency by exploiting bit allocation among the different features, but also
provides the flexibility to edit audio in the embedding space, such as voice
conversion in real-time communications. Both subjective and objective results
demonstrate its coding efficiency, and we find that the learned disentangled
features achieve performance on any-to-any voice conversion comparable to
modern self-supervised speech representation learning models, with far fewer
parameters and lower latency, showing the potential of our neural coding
framework.
Comment: Submitted to ICASSP202