4 research outputs found
Accurate glottal model parametrization by integrating audio and high-speed endoscopic video data
The aim of this paper is to evaluate the effectiveness of using video data for voice source parametrization in the representation of voice production through physical modeling. Laryngeal imaging techniques can be effectively used to obtain vocal fold video sequences and to derive time patterns of relevant glottal cues, such as fold edge position or glottal area. In many physically based numerical models of the vocal folds, these parameters are estimated from the inverse-filtered glottal flow waveform, obtained from audio recordings of the sound pressure at the lips. However, this model inversion process is often problematic and affected by accuracy and robustness issues. It is discussed here how video analysis of fold vibration might be effectively coupled with parametric estimation algorithms based on voice recordings, to improve the accuracy and robustness of model inversion.
GLOTTAL EXCITATION EXTRACTION OF VOICED SPEECH - JOINTLY PARAMETRIC AND NONPARAMETRIC APPROACHES
The goal of this dissertation is to develop methods to recover glottal flow pulses, which contain biometric information about the speaker. The excitation information estimated from an observed speech utterance is modeled as the source of an inverse problem. Windowed linear prediction analysis and inverse filtering are first used to deconvolve the speech signal and obtain a rough estimate of the glottal flow pulses. Linear prediction and its inverse filtering can largely eliminate the vocal-tract response, which is usually modeled as an infinite impulse response filter. Some remaining vocal-tract components that reside in the estimate after inverse filtering are then removed by maximum-phase and minimum-phase decomposition, implemented by applying the complex cepstrum to the initial estimate of the glottal pulses. The additive and residual errors from inverse filtering can be suppressed by higher-order statistics, which are used to calculate the cepstrum representations. Features provided directly by the glottal source's cepstrum representation, together with fitting parameters for the estimated pulses, form feature patterns that were applied to a minimum-distance classifier to realize a speaker identification system with a very limited number of subjects.
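The first stage described above, linear prediction followed by inverse filtering, can be illustrated with a minimal sketch. This is not the dissertation's implementation; it assumes the standard autocorrelation method for LPC and a hypothetical 12th-order analysis, and it omits the later cepstral decomposition stage entirely.

```python
import numpy as np

def lpc_coeffs(frame, order):
    """LPC via the autocorrelation method (Yule-Walker normal equations)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    # Toeplitz system R a = r[1..order], R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    # Inverse-filter coefficients: A(z) = 1 - sum_k a_k z^-k
    return np.concatenate(([1.0], -a))

def inverse_filter(speech, order=12):
    """Rough glottal-source estimate: pass speech through the FIR filter A(z)."""
    windowed = speech * np.hanning(len(speech))   # windowed LP analysis
    a = lpc_coeffs(windowed, order)
    # Residual e[n] = sum_k a[k] * s[n - k]; the all-pole vocal-tract
    # response modeled by 1/A(z) is (approximately) removed.
    return np.convolve(speech, a, mode="same")
```

Because the vocal tract is modeled as an all-pole (IIR) filter, its inverse is the FIR polynomial A(z); applying it to the speech frame leaves a residual that serves as the rough glottal-pulse estimate refined in the later stages.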
HMM-based speech synthesis using an acoustic glottal source model
Parametric speech synthesis has received increased attention in recent years following
the development of statistical HMM-based speech synthesis. However, the speech
produced using this method still does not sound as natural as human speech and there
is limited parametric flexibility to replicate voice quality aspects, such as breathiness.
The hypothesis of this thesis is that speech naturalness and voice quality can be
more accurately replicated by an HMM-based speech synthesiser using an acoustic glottal
source model, the Liljencrants-Fant (LF) model, to represent the source component
of speech instead of the traditional impulse train.
Two different analysis-synthesis methods were developed during this thesis, in order
to integrate the LF-model into a baseline HMM-based speech synthesiser, which is
based on the popular HTS system and uses the STRAIGHT vocoder. The first method,
which is called Glottal Post-Filtering (GPF), consists of passing a chosen LF-model
signal through a glottal post-filter to obtain the source signal and then generating
speech, by passing this source signal through the spectral envelope filter. The system
which uses the GPF method (HTS-GPF system) is similar to the baseline system,
but it uses a different source signal instead of the impulse train used by STRAIGHT.
The second method, called Glottal Spectral Separation (GSS), generates speech by
passing the LF-model signal through the vocal tract filter. The major advantage of the
synthesiser which incorporates the GSS method, named HTS-LF, is that the acoustic
properties of the LF-model parameters are automatically learnt by the HMMs.
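Both methods above start from an LF-model signal, so it may help to sketch how one period of the LF glottal-flow-derivative waveform is generated. This is a simplified illustration, not the thesis's code: the parameter values are hypothetical, and the two standard implicit conditions (the return-phase equation for epsilon and the area balance for alpha) are solved numerically by fixed-point iteration and bisection.

```python
import numpy as np

def lf_pulse(T0=0.008, Te=0.006, Tp=0.0045, Ta=0.0003, Ee=1.0, fs=16000):
    """One period of the LF glottal-flow-derivative waveform (simplified sketch).

    T0: pitch period, Tp: flow-peak instant, Te: main-excitation instant,
    Ta: return-phase time constant, Ee: amplitude of the negative peak.
    """
    wg = np.pi / Tp  # angular frequency of the open-phase sinusoid

    # Return-phase condition: eps * Ta = 1 - exp(-eps * (T0 - Te)).
    eps = 1.0 / Ta
    for _ in range(100):
        eps = (1.0 - np.exp(-eps * (T0 - Te))) / Ta

    # Fine grids for numerical integration of the area-balance condition.
    tg = np.linspace(0.0, Te, 4000)
    tr = np.linspace(Te, T0, 4000)
    ret_area = np.trapz(
        -(Ee / (eps * Ta)) * (np.exp(-eps * (tr - Te)) - np.exp(-eps * (T0 - Te))), tr)

    def net_area(alpha):
        # Net flow over the period must vanish (flow returns to baseline).
        E0 = -Ee / (np.exp(alpha * Te) * np.sin(wg * Te))
        return np.trapz(E0 * np.exp(alpha * tg) * np.sin(wg * tg), tg) + ret_area

    # net_area decreases with alpha for these parameters: bisect for the root.
    lo, hi = 0.0, 5000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if net_area(mid) > 0 else (lo, mid)
    alpha = 0.5 * (lo + hi)

    # Sample one period at fs: exponentially growing sinusoid, then return phase.
    t = np.arange(int(round(T0 * fs))) / fs
    E0 = -Ee / (np.exp(alpha * Te) * np.sin(wg * Te))
    return np.where(
        t <= Te,
        E0 * np.exp(alpha * t) * np.sin(wg * t),
        -(Ee / (eps * Ta)) * (np.exp(-eps * (t - Te)) - np.exp(-eps * (T0 - Te))))
```

In the GPF method such a pulse would replace STRAIGHT's impulse train as the source signal; in GSS it is passed directly through the vocal tract filter, so that the HMMs can model the LF parameters themselves.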
In this thesis, an initial perceptual experiment was conducted to compare the LF-model
to the impulse train. The results showed that the LF-model was significantly
better, both in terms of speech naturalness and replication of two basic voice qualities
(breathy and tense). In a second perceptual evaluation, the HTS-LF system was better
than the baseline system, although the difference between the two had been expected to
be more significant. A third experiment was conducted to evaluate the HTS-GPF system
and an improved HTS-LF system, in terms of speech naturalness, voice similarity
and intelligibility. The results showed that the HTS-GPF system performed similarly
to the baseline. However, the HTS-LF system was significantly outperformed by the
baseline. Finally, acoustic measurements were performed on the synthetic speech to
investigate the speech distortion in the HTS-LF system. The results indicated that a
problem in replicating the rapid variations of the vocal tract filter parameters at transitions
between voiced and unvoiced sounds is the most significant cause of speech
distortion. This problem motivates future work to further improve the system.