We demonstrate a novel method for producing a synthetic talking head. The method is based on earlier work in which the behaviour of a synthetic individual is generated by reference to a probabilistic model of interactive behaviour within the visual domain; such models are learnt automatically from typical interactions. We extend this work into a combined visual and auditory domain and employ a state-of-the-art facial appearance model. The result is a real-time synthetic talking head that responds appropriately, and with correct timing, to simple forms of greeting, with variations in facial expression and intonation.