Robust One-Shot Singing Voice Conversion
Recent progress in deep generative models has improved the quality of voice
conversion in the speech domain. However, high-quality singing voice conversion
(SVC) of unseen singers remains challenging due to the wider variety of musical
expressions in pitch, loudness, and pronunciation. Moreover, singing voices are
often recorded with reverb and accompaniment music, which make SVC even more
challenging. In this work, we present a robust one-shot SVC (ROSVC) that
performs any-to-any SVC robustly even on such distorted singing voices. To this
end, we first propose a one-shot SVC model based on generative adversarial
networks that generalizes to unseen singers via partial domain conditioning and
learns to accurately recover the target pitch via pitch distribution matching
and AdaIN-skip conditioning. We then propose a two-stage training method called
Robustify that trains the one-shot SVC model on clean data in the first stage to
ensure high-quality conversion, and in the second stage adds enhancement modules
to the model's encoders to improve feature extraction from distorted singing
voices. To further improve the voice quality and pitch
reconstruction accuracy, we finally propose a hierarchical diffusion model for
singing voice neural vocoders. Experimental results show that the proposed
method outperforms state-of-the-art one-shot SVC baselines for both seen and
unseen singers and significantly improves the robustness against distortions
- …
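As background on the AdaIN-skip conditioning mentioned in the abstract: adaptive instance normalization (AdaIN) normalizes each channel of a content feature map and rescales it with statistics derived from a style (here, target-singer) embedding. The sketch below is a minimal, generic AdaIN in NumPy, not the paper's implementation; the function name, shapes, and style statistics are illustrative assumptions.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization (generic sketch, not the ROSVC code).

    Normalizes each channel of `content` (shape: channels x time) to zero
    mean and unit variance, then rescales with the style statistics.
    """
    mean = content.mean(axis=1, keepdims=True)
    std = content.std(axis=1, keepdims=True)
    normalized = (content - mean) / (std + eps)
    return style_std * normalized + style_mean

# Illustrative usage: a 4-channel feature map conditioned on
# hypothetical target-singer statistics (mean 1.0, std 2.0 per channel).
np.random.seed(0)
x = np.random.randn(4, 100)
y = adain(x, style_mean=np.ones((4, 1)), style_std=2.0 * np.ones((4, 1)))
```

After the operation, each channel of the output carries the style statistics rather than the content's own, which is how a decoder can be steered toward an unseen singer's voice characteristics from a single reference.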