Recognition-synthesis-based methods have become popular in voice conversion (VC). By introducing linguistic features with good disentanglement properties, extracted from an automatic speech recognition (ASR) model, these methods have achieved considerable improvements in VC performance. Recently, self-supervised learning (SSL) models trained on large-scale unannotated speech corpora have been applied to downstream tasks that focus on content information, which makes their representations well suited to VC. However, the large amount of speaker information retained in SSL representations significantly degrades both the timbre similarity and the quality of the converted speech. To address this problem, we propose a high-similarity any-to-one voice conversion method that takes SSL representations as input. We incorporate an adversarial training mechanism into the
synthesis module, leveraging external unannotated corpora. Two auxiliary discriminators are trained: one distinguishes whether a mel-spectrogram sequence has been converted by the acoustic model, and the other determines whether a content-embedding sequence contains speaker information from the external corpora. Experimental results show that the proposed method achieves similarity comparable to, and naturalness higher than, a supervised method that requires a large amount of annotated corpora for training. The proposed method is also applicable to improving similarity for VC methods that take other SSL representations as input.

Comment: Accepted by ICME 202
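To make the adversarial setup concrete, below is a minimal PyTorch sketch of the two auxiliary discriminators and their losses. The module name SeqDiscriminator, the feature dimensions (80-bin mel-spectrograms, 768-dimensional SSL content embeddings), and the LSGAN-style objectives are illustrative assumptions; the abstract does not specify the architectures or training objectives actually used.

```python
# Sketch only: module names, dimensions, and losses are assumptions,
# not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqDiscriminator(nn.Module):
    """1-D convolutional discriminator over a feature sequence (B, T, D)."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_dim, hidden, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, hidden, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(hidden, 1, 3, padding=1),
        )

    def forward(self, x):                    # x: (B, T, D)
        return self.net(x.transpose(1, 2))   # frame-level logits: (B, 1, T')

# Discriminator 1: real vs. acoustic-model-converted mel-spectrograms.
d_mel = SeqDiscriminator(in_dim=80)
# Discriminator 2: does a content-embedding sequence still carry speaker
# information (external-corpus embeddings vs. target-speaker embeddings)?
d_spk = SeqDiscriminator(in_dim=768)  # assumed SSL feature dimension

def d_loss(d, real, fake):
    """LSGAN-style discriminator loss: push real -> 1, fake -> 0."""
    real_logits = d(real)
    fake_logits = d(fake.detach())  # do not backprop into the generator
    return (F.mse_loss(real_logits, torch.ones_like(real_logits)) +
            F.mse_loss(fake_logits, torch.zeros_like(fake_logits)))

def g_adv_loss(d, fake):
    """Generator-side loss: make fakes look real to the discriminator."""
    logits = d(fake)
    return F.mse_loss(logits, torch.ones_like(logits))

# Illustrative training-step usage with dummy tensors:
mel_real = torch.randn(4, 200, 80)   # target-speaker mels (B, T, n_mels)
mel_fake = torch.randn(4, 200, 80)   # mels converted by the acoustic model
c_tgt = torch.randn(4, 200, 768)     # content embeddings, target speech
c_ext = torch.randn(4, 200, 768)     # content embeddings, external corpora
loss_d = d_loss(d_mel, mel_real, mel_fake) + d_loss(d_spk, c_tgt, c_ext)
loss_g = g_adv_loss(d_mel, mel_fake) + g_adv_loss(d_spk, c_ext)
```

In such a scheme, the synthesis module would minimize the generator-side losses (alongside its reconstruction terms) while the discriminators minimize their own, so that converted mel-spectrograms become indistinguishable from real ones and the content embeddings are pushed to shed residual speaker cues.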