Relating Objective and Subjective Performance Measures for AAM-based Visual Speech Synthesizers
We compare two approaches for synthesizing visual speech using Active Appearance Models (AAMs): one that takes acoustic features as input, and one that takes a phonetic transcription as input. Both synthesizers are trained on the same data, and performance is measured using both objective and subjective testing. We investigate the impact of likely sources of error in the synthesized visual speech by introducing typical errors into real visual speech sequences and subjectively measuring the perceived degradation. When only a small region (e.g. a single syllable) of ground-truth visual speech is incorrect, we find that the subjective score for the entire sequence is lower than that of sequences generated by our synthesizers. This observation motivates further consideration of an often-ignored issue: to what extent do subjective measures of performance correlate with objective measures? Significantly, we find that the most commonly used objective measures of performance are not necessarily the best indicators of the viewer's perception of quality. We empirically evaluate alternatives and show that the cost of a dynamic time warp between the synthesized visual speech parameters and the respective ground-truth parameters is a better indicator of subjective quality.
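As a minimal sketch of the proposed objective measure, the following computes the cumulative cost of a dynamic time warp between two parameter trajectories. The trajectories here are hypothetical stand-ins for AAM parameter sequences (simple sinusoids, one a time-shifted copy of the other), not data from the paper; a lower cost indicates a closer match to the ground truth.

```python
import numpy as np

def dtw_cost(a, b):
    """Cumulative cost of a dynamic time warp aligning two
    parameter trajectories a (n x d) and b (m x d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # local Euclidean cost
            D[i, j] = d + min(D[i - 1, j],      # insertion
                              D[i, j - 1],      # deletion
                              D[i - 1, j - 1])  # match
    return D[n, m]

# Hypothetical trajectories: "ground truth" vs. a time-shifted version.
t = np.linspace(0, 1, 50)
truth = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
synth = np.column_stack([np.sin(2 * np.pi * (t - 0.05)),
                         np.cos(2 * np.pi * (t - 0.05))])

print(dtw_cost(truth, truth))  # 0.0 for identical sequences
print(dtw_cost(truth, synth))  # positive cost for the shifted sequence
```

Because the warp absorbs small timing differences, this cost penalises shape errors in the trajectory more than minor misalignments, which is one plausible reason it tracks subjective quality better than frame-by-frame error.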
Modelling talking human faces
This thesis investigates a number of new approaches for visual speech
synthesis using data-driven methods to implement a talking face.
The main contributions of this thesis are the following. The accuracy of a shared Gaussian process latent variable model (SGPLVM) built from active appearance model (AAM) and relative spectral transform-perceptual linear prediction (RASTA-PLP) features is improved by employing a more accurate AAM. This is the first study to report that using a more accurate AAM improves the accuracy of the SGPLVM. Objective evaluation via reconstruction error is performed to compare the proposed approach against existing methods. In addition, it is shown experimentally that the accuracy of the AAM can be improved by using a larger number of landmarks and/or a larger number of samples in the training data.
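The shape component of an AAM is typically a PCA model of landmark coordinates, so the effect of training-set size can be illustrated with a toy experiment: held-out reconstruction error of a PCA shape model fitted to increasing numbers of samples. The data here are synthetic shapes generated from a few random modes of variation, standing in for real annotated faces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "landmark shapes": 30 landmarks (60 coords) generated
# from 5 true modes of variation plus noise.
n_landmarks, n_modes = 30, 5
modes = rng.normal(size=(n_modes, 2 * n_landmarks))

def sample_shapes(n):
    return (rng.normal(size=(n, n_modes)) @ modes
            + 0.05 * rng.normal(size=(n, 2 * n_landmarks)))

test = sample_shapes(100)

def pca_recon_error(train, test, k=5):
    """Mean squared error of reconstructing held-out shapes
    from a k-mode PCA model fitted on the training shapes."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:k]  # top-k shape modes
    recon = (test - mean) @ basis.T @ basis + mean
    return np.mean((test - recon) ** 2)

for n_train in (10, 50, 500):
    print(n_train, pca_recon_error(sample_shapes(n_train), test))
```

With few training shapes the estimated modes are noisy and held-out error is higher; as the training set grows, the modes stabilise, mirroring the thesis finding that more training samples yield a more accurate AAM.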
The second research contribution is a new method for visual speech synthesis based on a fully Bayesian technique, manifold relevance determination (MRD), which models dynamical systems through probabilistic non-linear dimensionality reduction. This is the first time MRD has been used to generate talking faces from an input speech signal. The expressive power of this model lies in its ability to capture non-linear mappings between audio and visual features within a Bayesian framework. An efficient latent space is learnt using a fully Bayesian latent representation that relies on a conditional non-linear independence framework. In the SGPLVM the structure of the latent space cannot be estimated automatically because a maximum likelihood formulation is used; in contrast, the Bayesian approach allows the dimensionality of the latent spaces to be determined automatically. The proposed method compares favourably against several other state-of-the-art methods for visual speech generation, as shown by quantitative and qualitative evaluation on two different datasets.
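The automatic determination of latent dimensionality in MRD rests on the automatic relevance determination (ARD) principle: irrelevant dimensions receive vanishing weights during Bayesian inference. A linear analogue of this idea is sketched below with scikit-learn's `ARDRegression` (MRD itself applies ARD inside Gaussian process kernels, which is beyond a short example). The data are synthetic: ten candidate dimensions, of which only the first two drive the output.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)

# Ten candidate dimensions; only dims 0 and 1 matter.
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = ARDRegression().fit(X, y)
print(np.round(model.coef_, 2))  # large weights on dims 0-1, near zero elsewhere
```

The Bayesian treatment shrinks the weights of the eight irrelevant dimensions towards zero without any manual model selection; this is the same mechanism, lifted to non-linear GP mappings, that lets MRD infer the latent dimensionality that a maximum-likelihood SGPLVM cannot.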
Finally, the possibility of incremental learning of AAMs for inclusion in the proposed MRD approach to visual speech generation is investigated. The quantitative results demonstrate that combining MRD with incrementally trained AAMs produces only slightly less accurate results than batch training. These results support training such models on computers with limited resources, for example in mobile computing.
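Since the AAM is built around PCA, the incremental-versus-batch trade-off can be illustrated with scikit-learn's `IncrementalPCA`, which updates the model from small chunks of data and never holds the full training set in memory. The data below are synthetic low-rank vectors standing in for concatenated landmark coordinates, not the thesis datasets.

```python
import numpy as np
from sklearn.decomposition import PCA, IncrementalPCA

rng = np.random.default_rng(2)

# Stand-in for AAM training vectors: rank-4 structure plus noise.
X = (rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 60))
     + 0.05 * rng.normal(size=(1000, 60)))

batch = PCA(n_components=4).fit(X)

incremental = IncrementalPCA(n_components=4)
for chunk in np.array_split(X, 10):  # stream the data in small chunks
    incremental.partial_fit(chunk)

# Both models should explain almost the same variance.
print(batch.explained_variance_ratio_.sum())
print(incremental.explained_variance_ratio_.sum())
```

The incremental model matches the batch model almost exactly while processing only 100 samples at a time, which is the memory profile that makes this kind of training feasible on resource-limited devices.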
Overall, this thesis proposes several improvements to the current state of the art in generating talking faces from a speech signal, leading to perceptually more convincing results.