An interactive talking head

Abstract

We demonstrate a novel method for producing a synthetic talking head. The method is based on earlier work in which the behaviour of a synthetic individual is generated by reference to a probabilistic model of interactive behaviour within the visual domain; such models are learnt automatically from typical interactions. We extend this work into a combined visual and auditory domain and employ a state-of-the-art facial appearance model. The result is a real-time synthetic talking head that responds appropriately, and with correct timing, to simple forms of greeting, with variations in facial expression and intonation.
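The core idea of driving behaviour from a model learnt from typical interactions can be illustrated with a minimal sketch. The sketch below is not the authors' method: it assumes a simple first-order Markov model over invented behaviour-state labels, estimates transition probabilities from example interaction sequences, and picks the most probable response to an observed input state.

```python
from collections import defaultdict

def learn_transitions(sequences):
    """Estimate P(next state | current state) from example sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        model[state] = {b: n / total for b, n in nxt.items()}
    return model

def respond(model, observed):
    """Return the most probable response to an observed behaviour state."""
    dist = model.get(observed)
    return max(dist, key=dist.get) if dist else None

# Hypothetical training data: alternating user/head behaviour states.
sequences = [
    ["user_greets", "head_greets", "user_smiles", "head_smiles"],
    ["user_greets", "head_greets", "user_waves", "head_waves"],
]
model = learn_transitions(sequences)
print(respond(model, "user_greets"))  # prints "head_greets"
```

In a real system the states would be learnt representations of audiovisual behaviour rather than hand-written labels, and the response would drive the facial appearance model with appropriate timing.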

This paper was published in White Rose E-theses Online.
