Expressive speech-driven facial animation

By Yong Cao and Wen C. Tien

Abstract

Speech-driven facial motion synthesis is a well-explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine learning approach that relies on a database of speech-related high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control while maintaining accurate lip-synching. The emotional content of the input speech can be manually specified by the user or automatically extracted from the audio signal using a Support Vector Machine classifier.
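The final step of the abstract, classifying the emotional content of an audio signal with a Support Vector Machine, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical version of that step, assuming MFCC summary statistics (via librosa) as per-utterance features and scikit-learn's SVC as the classifier; the emotion label set, feature choice, and synthetic training data are placeholders for illustration, not the paper's actual setup.

# A minimal sketch of SVM-based emotion classification from audio,
# assuming MFCC summary statistics as features. The label set and
# training data here are illustrative stand-ins, not the paper's.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

EMOTIONS = ["neutral", "happy", "angry", "sad"]  # hypothetical label set

def utterance_features(wav_path: str) -> np.ndarray:
    """Summarize one utterance as the mean and std of its MFCC frames."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # shape (26,)

# Stand-in training data: in practice these vectors would come from
# utterance_features() applied to a labeled expressive-speech corpus.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 26))
y_train = rng.integers(0, len(EMOTIONS), size=200)

# Standardize features, then fit an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

def classify_emotion(wav_path: str) -> str:
    """Return the predicted emotion label for a single utterance."""
    features = utterance_features(wav_path)[None, :]  # add batch dimension
    return EMOTIONS[int(clf.predict(features)[0])]

The predicted label would then play the role of the manually specified emotion tag: it selects which expressive variant of the generative facial-motion model drives the animation for that utterance.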

Topics: Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation; General Terms: Animation; Additional Key Words and Phrases: facial animation, lip-synching, expression synthesis
Year: 2005
OAI identifier: oai:CiteSeerX.psu:10.1.1.352.8670
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • http://people.cs.vt.edu/~yongc... (external link)

