Facial Expression Retargeting from Human to Avatar Made Easy
Facial expression retargeting from humans to virtual characters is a useful
technique in computer graphics and animation. Traditional methods use markers
or blendshapes to construct a mapping between the human and avatar faces.
However, these approaches require a tedious 3D modeling process, and the
performance relies on the modelers' experience. In this paper, we propose a
brand-new solution to this cross-domain expression transfer problem via
nonlinear expression embedding and expression domain translation. We first
build low-dimensional latent spaces for the human and avatar facial expressions
with variational autoencoders. We then construct correspondences between the two
latent spaces guided by geometric and perceptual constraints. Specifically, we
design geometric correspondences to reflect geometric matching and utilize a
triplet data structure to express users' perceptual preferences for avatar
expressions. We also propose a user-friendly system that automatically generates
candidate triplets, allowing users to annotate the correspondences easily and
efficiently. Using both geometric and perceptual correspondences, we train a
network for expression domain translation from human to avatar.
Extensive experimental results and user studies demonstrate that even
nonprofessional users can apply our method to generate high-quality facial
expression retargeting results with less time and effort. (Comment: IEEE Transactions on Visualization and Computer Graphics (TVCG), to appear.)
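To make the two-part objective concrete, below is a minimal PyTorch sketch of how geometric and triplet-based perceptual correspondences could be combined when training the latent-space translation network. The module, loss weights, and tensor shapes are illustrative assumptions, not the authors' implementation:

```python
# Sketch: training objective combining geometric and perceptual terms.
# `translator` maps human latent codes to avatar latent codes; geometric
# correspondences give paired codes (z_h, z_a); a triplet (anchor, pos, neg)
# encodes a user's perceptual preference between two avatar expressions.
import torch
import torch.nn as nn

translator = nn.Sequential(        # hypothetical human-to-avatar latent mapping
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 64),
)
triplet_loss = nn.TripletMarginLoss(margin=0.2)
mse = nn.MSELoss()

def training_step(z_h, z_a, anchor_h, pos_a, neg_a, w_geo=1.0, w_perc=0.5):
    """One step combining geometric and perceptual correspondence terms."""
    # Geometric term: translated human codes should match the avatar codes
    # paired with them by geometric matching.
    loss_geo = mse(translator(z_h), z_a)
    # Perceptual term: the translated anchor expression should lie closer
    # to the user-preferred avatar code than to the rejected one.
    loss_perc = triplet_loss(translator(anchor_h), pos_a, neg_a)
    return w_geo * loss_geo + w_perc * loss_perc
```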
Easy Rigging of Face by Automatic Registration and Transfer of Skinning Parameters
Preparing a facial mesh to be animated requires a laborious manual rigging process. The rig specifies how the input animation data deforms the surface and allows artists to manipulate a character. We present a method that automatically rigs a facial mesh based on Radial Basis Functions and a linear blend skinning approach. Our approach transfers the skinning parameters (feature points and their envelopes, i.e., point-vertex weights) of a reference facial mesh (source), already rigged, to the chosen facial mesh (target) by computing an automatic registration between the two meshes. There is no need to manually mark the correspondence between the source and target mesh. As a result, inexperienced artists can automatically rig facial meshes and start animating their 3D characters right away, driven for instance by motion capture data.
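As a rough illustration of the transfer step, the following sketch fits an RBF warp between corresponding landmarks and uses it to carry the source rig's feature points, and, crudely, its point-vertex weights, onto the target mesh. The function names and the nearest-vertex weight copy are assumptions, not the paper's registration pipeline:

```python
# Sketch: RBF-based transfer of rig feature points and skinning weights,
# assuming a handful of corresponding landmarks on source and target meshes.
from scipy.interpolate import RBFInterpolator
from scipy.spatial import cKDTree

def transfer_feature_points(src_landmarks, tgt_landmarks, src_feature_points):
    """Warp the source rig's feature points into the target mesh's space.

    src_landmarks, tgt_landmarks: (k, 3) corresponding landmark positions.
    src_feature_points: (n, 3) rig feature points defined on the source face.
    """
    # Fit a smooth 3D warp from source landmark space to target landmark space.
    warp = RBFInterpolator(src_landmarks, tgt_landmarks,
                           kernel='thin_plate_spline')
    return warp(src_feature_points)  # (n, 3) feature points on the target face

def transfer_weights(src_vertices, tgt_vertices, src_weights):
    """Copy per-vertex skinning weights via the nearest registered source
    vertex, a crude stand-in for the paper's dense automatic registration."""
    nearest = cKDTree(src_vertices).query(tgt_vertices)[1]
    return src_weights[nearest]      # (m, n_feature_points) weights on target
```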
A framework for automatic and perceptually valid facial expression generation
Facial expressions are facial movements that reflect a character's internal emotional state or respond to social communication. Realistic facial animation should consider at least two factors: a believable visual effect and valid facial movements. However, most research treats these two issues separately. In this paper, we present a framework for generating 3D facial expressions that considers both the visual and the dynamic effects. We propose a facial expression mapping approach based on local geometry encoding, which encodes deformation over the 1-ring neighborhood of each vertex. This method can map subtle facial movements without being restricted by shape or topological constraints. Facial expression mapping is achieved in three steps: correspondence establishment, deviation transfer, and movement mapping. Deviation is transferred to the conformal face space by minimizing an error function formed from the source neutral and deformed face models, which are related by transformation matrices over the 1-ring neighborhoods. The 1-ring transformation matrix is independent of the face shape and the mesh topology. After the facial expression mapping, dynamic parameters, generated with psychophysical methods, are integrated with the facial expressions to produce valid facial expressions. The efficiency and effectiveness of the proposed methods have been tested on various face models with different shapes and topological representations.
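To make the 1-ring encoding concrete, here is a minimal sketch, under assumed data layouts, of fitting a per-vertex 1-ring transformation between the neutral and deformed source faces and reusing it on a target face. The plain least-squares solve stands in for the paper's full error-function minimization:

```python
# Sketch: per-vertex 1-ring deformation encoding and transfer.
import numpy as np

def one_ring_transform(neutral, deformed, vertex, neighbors):
    """Fit a 3x3 matrix T with T @ e_neutral ~ e_deformed over 1-ring edges."""
    E0 = neutral[neighbors] - neutral[vertex]    # (k, 3) neutral 1-ring edges
    E1 = deformed[neighbors] - deformed[vertex]  # (k, 3) deformed 1-ring edges
    # Least-squares solve of E0 @ T^T ~ E1, i.e. minimize ||T @ e0_i - e1_i||.
    Tt, *_ = np.linalg.lstsq(E0, E1, rcond=None)
    return Tt.T

def apply_transform(tgt_neutral, vertex, neighbors, T):
    """Reuse a source 1-ring transform to predict deformed target positions."""
    edges = tgt_neutral[neighbors] - tgt_neutral[vertex]   # (k, 3)
    return tgt_neutral[vertex] + edges @ T.T               # deformed 1-ring
```

Because the fitted matrix describes only the local edge deformation, it carries no information about the global face shape or the mesh connectivity, which is what makes the mapping independent of shape and topology.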
Anatomically Constrained Implicit Face Models
Coordinate based implicit neural representations have gained rapid popularity
in recent years as they have been successfully used in image, geometry and
scene modeling tasks. In this work, we present a novel use case for such
implicit representations in the context of learning anatomically constrained
face models. Actor specific anatomically constrained face models are the state
of the art in both facial performance capture and performance retargeting.
Despite their practical success, these anatomical models are slow to evaluate
and often require extensive data capture to be built. We propose the anatomical
implicit face model; an ensemble of implicit neural networks that jointly learn
to model the facial anatomy and the skin surface with high-fidelity, and can
readily be used as a drop-in replacement for conventional blendshape models.
Given an arbitrary set of skin surface meshes of an actor and only a neutral
shape with estimated skull and jaw bones, our method can recover a dense
anatomical substructure which constrains every point on the facial surface. We
demonstrate the usefulness of our approach in several tasks, including shape
fitting, shape editing, and performance retargeting.
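For a sense of what a coordinate-based implicit face network looks like, here is a minimal PyTorch sketch of one ensemble member: an MLP that, given a query point on the neutral surface and a per-frame expression code, predicts the deformed skin position. The anatomical (skull/jaw) constraint is only hinted at; layer sizes and the conditioning scheme are assumptions, not the paper's architecture:

```python
# Sketch: coordinate-based implicit network for skin-surface deformation.
import torch
import torch.nn as nn

class ImplicitFaceField(nn.Module):
    def __init__(self, code_dim=32, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),   # per-point displacement of the skin
        )

    def forward(self, points, code):
        """points: (n, 3) neutral-surface samples; code: (code_dim,)."""
        code = code.expand(points.shape[0], -1)
        # Deformed position = neutral point + predicted displacement.
        return points + self.mlp(torch.cat([points, code], dim=-1))

# Usage: query the field densely on the neutral mesh, much like evaluating
# a blendshape rig at every vertex.
field = ImplicitFaceField()
pts = torch.rand(1024, 3)
deformed = field(pts, torch.zeros(32))
```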
- …