CORE
A face-to-muscle inversion of a biomechanical face model for audiovisual and motor control research
Authors
Kevin G. Munhall
Michel Pitermann
Publication date
1 January 2001
Publisher
HAL CCSD
Abstract
Conference paper with proceedings and peer review; international audience. Muscle-based models of the human face produce high-quality animation but rely on recorded muscle activity signals, or on synthetic muscle signals often derived by trial and error. In this paper we present a dynamic inversion of a muscle-based model that allows the animation to be created from kinematic recordings of facial movements. Using a nonlinear optimizer (Powell's algorithm), the inversion produces a muscle activity set for 16 muscle groups in the lower face that minimizes the root-mean-square error between kinematic data recorded with OPTOTRAK and the corresponding nodes of the modeled facial mesh. This inverted muscle activity is then used to animate the facial model. The results of a first experiment showed that the inversion-synthesis method can accurately reproduce a synthetic facial animation, even from a partial sampling of the face. The results of a second experiment showed that the method is equally successful for an OPTOTRAK recording of a talker uttering a sentence. The animation was of high quality.
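The inversion described in the abstract can be illustrated with a toy sketch: a hypothetical linear muscle-to-marker forward model stands in for the biomechanical face model, and a simple derivative-free coordinate search (in the spirit of Powell's method, not the paper's actual implementation) adjusts muscle activations to minimize the RMS error between model marker positions and target kinematic data. The gain matrix, marker counts, and search parameters below are all illustrative assumptions, not values from the paper.

```python
import math

def rmse(a, b):
    # root-mean-square error between two equal-length marker vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def forward_model(activations, gains):
    # toy linear muscle-to-marker mapping (stand-in for the face model):
    # marker m displacement = sum over muscles of activation * gain[muscle][m]
    n_markers = len(gains[0])
    return [sum(act * g[m] for act, g in zip(activations, gains))
            for m in range(n_markers)]

def invert(target, gains, n_muscles, iters=200, step=0.2):
    # derivative-free coordinate search: try +/- step on each muscle
    # activation, keep the move only if it lowers the RMS error,
    # and shrink the step geometrically (a crude Powell-style search)
    acts = [0.0] * n_muscles
    for _ in range(iters):
        for i in range(n_muscles):
            best = rmse(forward_model(acts, gains), target)
            for delta in (step, -step):
                acts[i] += delta
                err = rmse(forward_model(acts, gains), target)
                if err < best:
                    best = err
                else:
                    acts[i] -= delta  # revert the trial move
        step *= 0.9
    return acts

# usage: recover hypothetical activations from "recorded" marker data
gains = [[1.0, 0.2, 0.1, 0.0],   # 3 muscles x 4 markers (illustrative)
         [0.0, 1.0, 0.3, 0.2],
         [0.2, 0.0, 1.0, 0.5]]
true_acts = [0.5, 0.3, 0.7]
target = forward_model(true_acts, gains)     # synthetic "OPTOTRAK" data
estimated = invert(target, gains, n_muscles=3)
```

The estimated activations can then drive the forward model to re-synthesize the animation, mirroring the paper's inversion-synthesis pipeline; the real method optimizes 16 lower-face muscle groups against OPTOTRAK marker trajectories.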
Available Versions
HAL-Rennes 1
oai:HAL:inria-00100554v1
Last updated on 21/06/2024
INRIA a CCSD electronic archive server
oai:HAL:inria-00100554v1
Last updated on 10/11/2016