
Modelling ‘talking head’ behaviour

By Craig Hack and Chris Taylor

Abstract

We describe a generative model of ‘talking head’ facial behaviour, intended for use in both video synthesis and model-based interpretation. The model is learnt, without supervision, from talking head video, parameterised by tracking with an Active Appearance Model (AAM). We present an integrated probabilistic framework for capturing both the short-term visual dynamics and the longer-term behavioural structure. We demonstrate that the approach leads to a compact model, capable of generating realistic and relatively subtle talking head behaviour in real time. The results of a forced-choice psychophysical experiment show that the quality of the generated sequences is significantly better than that obtained using alternative approaches, and is indistinguishable from that of the original training sequence.
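
The abstract gives no implementation details, so the following is only a hedged sketch of one generic way such behaviour could be modelled: a second-order autoregressive (AR) model over the AAM parameter vectors obtained from tracking, fitted by least squares and rolled forward to synthesise new parameter trajectories. The function names, the AR(2) choice, and the Gaussian noise model are illustrative assumptions, not the authors' method.

    # Illustrative sketch only: a generic AR(2) model over AAM parameter vectors,
    # NOT the model described in the paper (whose details are not given here).
    import numpy as np

    def fit_ar2(params):
        """Fit x_t = A @ [x_{t-1}, x_{t-2}] + noise by least squares.

        params: (T, d) array of AAM parameter vectors from tracking (assumed input).
        Returns (A, noise_cov) with A of shape (d, 2d).
        """
        X = np.hstack([params[1:-1], params[:-2]])   # regressors [x_{t-1}, x_{t-2}]
        Y = params[2:]                               # targets x_t
        A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # (2d, d) coefficient matrix
        resid = Y - X @ A                            # residuals give the noise model
        return A.T, np.cov(resid.T)

    def sample(A, noise_cov, x0, x1, n_frames, rng=None):
        """Roll the AR(2) model forward to synthesise a new parameter trajectory."""
        rng = rng or np.random.default_rng()
        d = x0.shape[0]
        out = [x0, x1]
        for _ in range(n_frames - 2):
            prev = np.concatenate([out[-1], out[-2]])
            out.append(A @ prev + rng.multivariate_normal(np.zeros(d), noise_cov))
        return np.array(out)

A model of this kind only captures short-term dynamics; the longer-term behavioural structure the abstract refers to would require an additional layer (for example, switching between such local models), which is not sketched here.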

Year: 2003
OAI identifier: oai:CiteSeerX.psu:10.1.1.413.840
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://www.bmva.org/bmvc/2003/...

