Developing Sparse Representations for Anchor-Based Voice Conversion
Voice conversion is the task of transforming speech from one speaker so that it sounds as if it were produced by another speaker, changing the identity while retaining the linguistic content. Many methods exist for voice conversion, but they often have onerous training requirements or fail when one speaker has a nonnative accent. To address these issues, this dissertation presents and evaluates a novel “anchor-based” representation of speech that separates speaker content from speaker identity by modeling how speakers form English phonemes.
We call the proposed method Sparse, Anchor-Based Representation of Speech (SABR), and we explore methods for optimizing the parameters of this model in native-to-native and native-to-nonnative voice conversion. We begin the dissertation by demonstrating, in objective and subjective tests, how sparse coding over a compact, phoneme-based dictionary can separate speaker identity from content. The formulation of the representation then raises several research questions. First, we propose a method for improving synthesis quality by applying a frequency warping algorithm to the sparse coding residual, converting it from the source to the target speaker’s space and adding it to the target speaker’s estimated spectrum. Experimentally, we find that this transform significantly improves synthesis quality. Second, we propose and evaluate two methods for selecting and optimizing SABR anchors in native-to-native and native-to-nonnative voice conversion. We find that the proposed methods significantly improve synthesis quality over baseline algorithms, especially in native-to-nonnative voice conversion. In a detailed analysis, we find that the algorithms focus on phonemes that are difficult for nonnative speakers of English or that naturally have multiple acoustic states. Following this, we examine methods for adding temporal constraints to SABR via the Fused Lasso. The proposed method significantly reduces the inter-frame variance of the sparse codes compared with other methods that incorporate temporal features into sparse coding representations.
Finally, in a case study, we examine the use of the SABR methods and optimizations in a computer-aided pronunciation training system for building “Golden Speakers”, ideal models from which nonnative learners of a second language can learn correct pronunciation. Under the hypothesis that the optimal “Golden Speaker” is the learner’s own voice synthesized with a native accent, we used SABR to build voice models for nonnative speakers and evaluated the resulting synthesis in terms of quality, identity, and accentedness. We found that even when deployed in the field, the SABR method generated synthesis with low accentedness and an acoustic identity similar to the target speaker, validating the use of the method for building “Golden Speakers”.
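As a concrete illustration of the anchor-based coding idea in the abstract above, the following is a minimal sketch: a spectral frame is encoded as a sparse, non-negative combination of a source speaker’s phoneme anchors, and the same weights are reused over the target speaker’s anchors. The dictionary layout, frame representation, and the use of scikit-learn’s Lasso as the sparse coder are assumptions for illustration, not details of the dissertation, which further adds the residual warping and Fused Lasso refinements described above.

```python
# Sketch of anchor-based sparse coding for voice conversion (illustrative only):
# D_src / D_tgt hold one column per phoneme anchor, frames are spectral vectors,
# and scikit-learn's Lasso plays the role of the sparse coder. Residual warping
# and Fused Lasso smoothing are omitted.
import numpy as np
from sklearn.linear_model import Lasso

def convert_frame(x_src, D_src, D_tgt, alpha=0.05):
    """Encode a source frame over the source anchors, then reuse the sparse
    weights (the speaker-independent 'content') on the target anchors."""
    coder = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=10000)
    coder.fit(D_src, x_src)      # columns = anchors, target = the frame
    w = coder.coef_              # sparse activation over phoneme anchors
    return D_tgt @ w, w          # estimated target-speaker spectrum

# Toy usage: 40 anchors over 257 spectral bins.
rng = np.random.default_rng(0)
D_src = np.abs(rng.normal(size=(257, 40)))
D_tgt = np.abs(rng.normal(size=(257, 40)))
x_src = D_src @ np.maximum(rng.normal(size=40), 0.0)
x_tgt_hat, weights = convert_frame(x_src, D_src, D_tgt)
```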
Parallel and Limited Data Voice Conversion Using Stochastic Variational Deep Kernel Learning
Typically, voice conversion is regarded as an engineering problem with limited training data. The reliance on massive amounts of data hinders the practical applicability of deep learning approaches, which have been extensively researched in recent years. Statistical methods, on the other hand, are effective with limited data but have difficulty modelling complex mapping functions. This paper proposes a voice conversion method that works with limited data and is based on stochastic variational deep kernel learning (SVDKL). SVDKL combines the expressive capability of deep neural networks with the high flexibility of the Gaussian process, a Bayesian, non-parametric method. Combining a conventional kernel with a deep neural network makes it possible to estimate non-smooth and more complex functions. Furthermore, the model's sparse variational Gaussian process addresses the scalability problem and, unlike the exact Gaussian process, allows a global mapping function to be learned for the entire acoustic space. One of the most important aspects of the proposed scheme is that the model parameters are trained by marginal likelihood optimization, which accounts for both data fit and model complexity. Accounting for model complexity reduces the amount of training data required by increasing resistance to overfitting. To evaluate the proposed scheme, we examined the model's performance with approximately 80 seconds of training data. The results indicated that our method obtained a higher mean opinion score, smaller spectral distortion, and better preference test results than the compared methods.
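The SVDKL setup described above (a neural feature extractor feeding a sparse variational Gaussian process, trained by maximizing the marginal likelihood via its ELBO approximation) can be sketched with GPyTorch. This is a minimal, single-output illustration under assumed dimensions and network sizes, not the paper's configuration; an actual converter would regress full spectral feature vectors (e.g., one GP per output dimension) on aligned source-target frames.

```python
# Minimal SVDKL sketch with GPyTorch: a small neural network maps source
# features into a latent space where a sparse variational GP regresses a
# single target feature. Dimensions, layer sizes, and the single-output
# setup are illustrative assumptions, not the paper's configuration.
import torch
import gpytorch

class GPLayer(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        var_strat = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist, learn_inducing_locations=True)
        super().__init__(var_strat)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

class SVDKLRegressor(torch.nn.Module):
    def __init__(self, in_dim, latent_dim=4, num_inducing=32):
        super().__init__()
        self.feature_extractor = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, latent_dim))
        self.gp = GPLayer(torch.randn(num_inducing, latent_dim))

    def forward(self, x):
        return self.gp(self.feature_extractor(x))  # GP over learned features

# Toy aligned data: 200 source frames (40-dim) -> one target feature each.
X = torch.randn(200, 40)
y = torch.randn(200)

model = SVDKLRegressor(in_dim=40)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model.gp, num_data=y.numel())
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01)

model.train(); likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    output = model(X)
    loss = -mll(output, y)   # negative ELBO: data fit plus complexity penalty
    loss.backward()
    optimizer.step()
```

The negative ELBO objective is what gives the "data fit plus model complexity" trade-off mentioned in the abstract: the likelihood term rewards fitting the aligned frames while the KL term penalizes overly complex posteriors, which is what makes training with very little data (tens of seconds) plausible.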
- …