Adversarially Trained Autoencoders for Parallel-Data-Free Voice Conversion
We present a method for converting voices among a set of speakers. Our
method trains multiple autoencoder paths, with a single speaker-independent
encoder and multiple speaker-dependent decoders. The autoencoders are trained
with an additional adversarial loss, provided by an auxiliary classifier, that
guides the output of the encoder to be speaker independent. The training of
the model is unsupervised in the sense that it requires neither collecting the
same utterances from the speakers nor time alignment over phonemes. Due to the
use of a single encoder, our method can generalize to converting the voices of
out-of-training speakers to speakers in the training dataset. We present
subjective tests corroborating the performance of our method.
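A minimal PyTorch sketch of the training objective this abstract describes is given below, assuming frame-level spectral features; every module size, the loss weight `lam`, and names such as `train_step` are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: shared encoder, per-speaker decoders, and an
# adversarial speaker classifier on the latent code. All sizes and
# hyperparameters below are assumed, not taken from the paper.
import torch
import torch.nn as nn

FEAT_DIM, LATENT_DIM, N_SPEAKERS = 80, 64, 4   # assumed dimensions

encoder = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(),
                        nn.Linear(256, LATENT_DIM))         # speaker-independent
decoders = nn.ModuleList([nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                        nn.Linear(256, FEAT_DIM))
                          for _ in range(N_SPEAKERS)])      # one per speaker
classifier = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                           nn.Linear(128, N_SPEAKERS))      # the adversary

opt_ae = torch.optim.Adam([*encoder.parameters(), *decoders.parameters()], lr=1e-4)
opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-4)
ce, mse, lam = nn.CrossEntropyLoss(), nn.MSELoss(), 0.1     # lam is assumed

def train_step(frames, spk_ids):
    """frames: (B, FEAT_DIM) acoustic frames; spk_ids: (B,) speaker labels."""
    # 1) Train the classifier to identify the speaker from the latent code.
    opt_cls.zero_grad()
    ce(classifier(encoder(frames).detach()), spk_ids).backward()
    opt_cls.step()

    # 2) Train the autoencoders to reconstruct while *fooling* the classifier,
    #    pushing the latent code toward speaker independence.
    z = encoder(frames)
    recon = torch.stack([decoders[int(s)](z[i]) for i, s in enumerate(spk_ids)])
    loss = mse(recon, frames) - lam * ce(classifier(z), spk_ids)
    opt_ae.zero_grad()
    loss.backward()
    opt_ae.step()
    return loss.item()
```

At conversion time, one would encode an utterance (from a seen or unseen speaker, since the encoder is shared) and decode it with the target speaker's decoder.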
Rhythm-Flexible Voice Conversion without Parallel Data Using Cycle-GAN over Phoneme Posteriorgram Sequences
Speaking rate refers to the average number of phonemes per unit time,
while rhythmic patterns refer to the duration distributions of realizations of
different phonemes within different phonetic structures. Both are key
components of prosody in speech and differ from speaker to speaker.
Models such as the cycle-consistent adversarial network (Cycle-GAN) and the
variational auto-encoder (VAE) have been successfully applied to voice
conversion tasks without parallel data. However, given the network
architectures and feature vectors chosen in these approaches, the length of
the predicted utterance must match that of the input utterance, which limits
the flexibility in mimicking the target speaker's speaking rate and rhythmic
patterns. On the other hand, sequence-to-sequence learning models remove this
length constraint, but they require parallel training data. In this paper, we
propose an approach that uses a sequence-to-sequence model trained with an
unsupervised Cycle-GAN to transform phoneme posteriorgram sequences between
speakers. In this way, the length constraint mentioned above is removed,
offering rhythm-flexible voice conversion without requiring parallel data.
Preliminary evaluation on two datasets showed very encouraging results.
Comment: 8 pages, 6 figures, Submitted to SLT 201
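As a rough illustration of the idea, the sketch below pairs seq2seq generators with a cycle-consistency loss over posteriorgram sequences, so that output length is decoupled from input length. The architecture, the WGAN-style critic, the choice of output length, and the loss weight are all assumptions; discriminator updates and the reverse direction are omitted for brevity.

```python
# Hypothetical sketch: Cycle-GAN over phoneme posteriorgram (PPG) sequences
# with seq2seq generators, so output length is not tied to input length.
# All shapes, names, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PPG_DIM, HID = 144, 256   # assumed posteriorgram dimension, hidden size

class Seq2Seq(nn.Module):
    """GRU encoder-decoder mapping a PPG sequence to one of a chosen length."""
    def __init__(self):
        super().__init__()
        self.enc = nn.GRU(PPG_DIM, HID, batch_first=True)
        self.dec = nn.GRUCell(PPG_DIM, HID)
        self.out = nn.Linear(HID, PPG_DIM)

    def forward(self, x, out_len):
        _, h = self.enc(x)                      # summarize the source sequence
        h = h.squeeze(0)
        y = torch.zeros(x.size(0), PPG_DIM)     # start frame
        frames = []
        for _ in range(out_len):                # autoregressive decoding
            h = self.dec(y, h)
            y = torch.softmax(self.out(h), dim=-1)   # next posteriorgram frame
            frames.append(y)
        return torch.stack(frames, dim=1)

G_ab, G_ba = Seq2Seq(), Seq2Seq()               # speaker A->B and B->A
D_b = nn.Sequential(nn.Linear(PPG_DIM, HID), nn.ReLU(),
                    nn.Linear(HID, 1))          # frame-wise critic for domain B

def generator_loss(ppg_a, out_len_b, lam_cyc=10.0):
    """One direction of the cycle objective; B->A is symmetric. Choosing
    out_len_b is itself a modeling decision; a fixed value is assumed here,
    whereas a real model would decide when to stop emitting frames."""
    fake_b = G_ab(ppg_a, out_len_b)             # convert with a new rhythm
    rec_a = G_ba(fake_b, ppg_a.size(1))         # map back for the cycle term
    adv = -D_b(fake_b).mean()                   # fool the critic (WGAN-style)
    cyc = F.l1_loss(rec_a, ppg_a)               # cycle consistency
    return adv + lam_cyc * cyc
```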
Voice Conversion Based on Cross-Domain Features Using Variational Auto Encoders
An effective approach to non-parallel voice conversion (VC) is to utilize
deep neural networks (DNNs), specifically variational auto-encoders (VAEs), to
model the latent structure of speech in an unsupervised manner. A previous
study confirmed the effectiveness of a VAE using STRAIGHT spectra for VC.
However, VAEs using other types of spectral features, such as mel-cepstral
coefficients (MCCs), which are related to human perception and have been
widely used in VC, have not been properly investigated. Instead of using one
specific type of spectral feature, a VAE may benefit from using multiple
types of spectral features simultaneously, thereby improving its capability
for VC. To this end, we propose a novel VAE framework (called cross-domain
VAE, CDVAE) for VC. Specifically, the proposed framework utilizes both
STRAIGHT spectra and MCCs by explicitly regularizing multiple objectives in
order to constrain the behavior of the learned encoder and decoder.
Experimental results demonstrate that the proposed CDVAE framework
outperforms the conventional VAE framework in terms of subjective tests.
Comment: Accepted to ISCSLP 201
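The following sketch illustrates one plausible reading of the cross-domain objective: two VAE encoder/decoder pairs, one per feature type, tied together by cross-domain reconstruction terms so that either latent code must explain both feature types. Dimensions, network shapes, and equal loss weights are assumptions, and the speaker conditioning needed for actual conversion is omitted.

```python
# Hypothetical sketch of a cross-domain VAE objective: one encoder/decoder
# pair per feature type (STRAIGHT spectra and MCCs), tied by cross-domain
# reconstruction terms. Dimensions and equal loss weights are assumptions,
# and the speaker conditioning used for actual conversion is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

SP_DIM, MCC_DIM, Z_DIM = 513, 40, 16   # assumed feature/latent dimensions

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_out))

enc_sp, enc_mcc = mlp(SP_DIM, 2 * Z_DIM), mlp(MCC_DIM, 2 * Z_DIM)  # mean, logvar
dec_sp, dec_mcc = mlp(Z_DIM, SP_DIM), mlp(Z_DIM, MCC_DIM)

def encode(enc, x):
    mu, logvar = enc(x).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return z, kl

def cdvae_loss(sp, mcc):
    """sp: (B, SP_DIM) STRAIGHT spectra; mcc: (B, MCC_DIM) MCCs, same frames."""
    z_sp, kl_sp = encode(enc_sp, sp)
    z_mcc, kl_mcc = encode(enc_mcc, mcc)
    # Within-domain and cross-domain reconstructions: each latent code must
    # explain *both* feature types, constraining the two encoders to agree.
    rec = (F.mse_loss(dec_sp(z_sp), sp) + F.mse_loss(dec_mcc(z_sp), mcc) +
           F.mse_loss(dec_sp(z_mcc), sp) + F.mse_loss(dec_mcc(z_mcc), mcc))
    return rec + kl_sp + kl_mcc
```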
Voice Conversion Using Sequence-to-Sequence Learning of Context Posterior Probabilities
Voice conversion (VC) using sequence-to-sequence learning of context
posterior probabilities is proposed. Conventional VC using shared context
posterior probabilities predicts target speech parameters from the context
posterior probabilities estimated from the source speech parameters. Although
such conventional VC can be built from non-parallel data, it has difficulty
converting aspects of speaker individuality, such as phonetic properties and
speaking rate, contained in the posterior probabilities, because the source
posterior probabilities are used directly to predict the target speech
parameters. In this work, we assume that the training data partly include
parallel speech data and propose sequence-to-sequence learning between the
source and target posterior probabilities. The conversion models perform a
non-linear, variable-length transformation from the source probability
sequence to the target one. Further, we propose a joint training algorithm
for the modules. In contrast to conventional VC, which separately trains the
speech recognition module that estimates posterior probabilities and the
speech synthesis module that predicts target speech parameters, our proposed
method trains these modules jointly along with the proposed probability
conversion modules. Experimental results demonstrate that our approach
outperforms conventional VC.
Comment: Accepted to INTERSPEECH 201
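A compact sketch of the joint training idea follows: recognition, probability conversion, and synthesis modules optimized through a single loss. The GRU stands in for the paper's variable-length seq2seq conversion model, and the equal-length parallel pair, the module shapes, and the alpha-weighted auxiliary term are all assumptions made for brevity.

```python
# Hypothetical sketch of joint training: a recognition module estimating
# context posterior probabilities, a conversion module mapping source to
# target posteriors, and a synthesis module predicting speech parameters,
# all updated through one loss. Shapes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT, N_CTX, HID = 40, 128, 256   # assumed feature dim, context classes

recog = nn.Sequential(nn.Linear(FEAT, HID), nn.ReLU(),
                      nn.Linear(HID, N_CTX))        # frame-wise posteriors
convert = nn.GRU(N_CTX, N_CTX, batch_first=True)    # stand-in for the seq2seq
synth = nn.Sequential(nn.Linear(N_CTX, HID), nn.ReLU(),
                      nn.Linear(HID, FEAT))         # posteriors -> speech

opt = torch.optim.Adam([*recog.parameters(), *convert.parameters(),
                        *synth.parameters()], lr=1e-4)

def joint_step(src_feats, tgt_feats, alpha=1.0):
    """One step on a time-aligned, equal-length parallel pair -- an assumption
    made only to keep this sketch short; the paper's seq2seq conversion
    performs variable-length transformation."""
    src_post = torch.softmax(recog(src_feats), dim=-1)   # recognition module
    conv_out, _ = convert(src_post)                      # probability conversion
    tgt_post = torch.softmax(conv_out, dim=-1)
    pred = synth(tgt_post)                               # synthesis module
    with torch.no_grad():                                # reference posteriors
        ref_post = torch.softmax(recog(tgt_feats), dim=-1)
    # Gradients flow through all three modules at once (joint training), with
    # an alpha-weighted auxiliary posterior-matching term (assumed).
    loss = F.mse_loss(pred, tgt_feats) + alpha * F.mse_loss(tgt_post, ref_post)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```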