MERL -- A Mitsubishi Electric Research Laboratory

Matthew Brand and Ken Shan
MERL, a Mitsubishi Electric Research Lab
201 Broadway, Cambridge, MA 02139
[email protected]

Abstract

We introduce a method for learning a mapping between signals, and use it to drive facial animation directly from vocal cues. Instead of depending on heuristic intermediate representations such as phonemes or visemes, the system learns its own representation, which includes dynamical and contextual information. In principle, this allows the system to make optimal use of context to handle ambiguity and relatively long-lasting facial co-articulation effects. The output is a series of facial control parameters, suitable for driving many different kinds of animation ranging from photo-realistic image warps to 3D cartoon characters.

1. From lip-syncing to facial animation

Psychologists and storytellers alike have observed that there is a good deal of mutual information between vocal and facial gesture [23]. Facial information can add significant
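The abstract describes learning a direct mapping from vocal features to facial control parameters, using temporal context rather than phoneme or viseme labels. As a rough illustration only, and not the paper's actual learned dynamical model, the sketch below stands in a windowed ridge regression that maps per-frame audio features (plus neighboring frames for context) to a vector of facial control parameters; all names, dimensions, and the synthetic data are hypothetical.

```python
import numpy as np

# Hypothetical dimensions: T frames, A audio features per frame,
# F facial control parameters, W-frame context window.
rng = np.random.default_rng(0)
T, A, F, W = 200, 13, 8, 5

audio = rng.normal(size=(T, A))   # stand-in per-frame vocal features
face = rng.normal(size=(T, F))    # stand-in facial control parameters

def stack_context(x, w):
    """Concatenate w neighboring frames so the map can exploit temporal context."""
    shifts = [np.roll(x, s, axis=0) for s in range(-(w // 2), w // 2 + 1)]
    return np.concatenate(shifts, axis=1)

X = stack_context(audio, W)       # (T, A*W) contextual inputs
lam = 1e-2                        # ridge regularizer
Wmap = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ face)

pred = X @ Wmap                   # predicted facial control trajectory, shape (T, F)
```

The per-frame output `pred` plays the role of the control-parameter stream the abstract mentions: it could drive an image warp or a 3D character rig. The real system replaces this linear stand-in with a representation that is itself learned from data.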
