Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this limitation, and character riggers can make a rig as complicated as necessary to achieve realistic deformations. However, increasing rig complexity slows rig evaluation, and the animators working with it become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5×-10×. This significant savings allows us to run complex, film-quality rigs in real time even when using a CPU-only implementation on a mobile device.
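The linear/nonlinear split described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's production implementation: the array shapes, function names, and the tiny MLP standing in for the learned nonlinear approximator are all assumptions for the example.

```python
import numpy as np

def linear_blend_skinning(rest_verts, skin_weights, bone_mats):
    """Linear portion: deform rest-pose vertices directly from the
    skeleton's bone transforms (standard linear blend skinning)."""
    # rest_verts: (V, 3), skin_weights: (V, B), bone_mats: (B, 4, 4)
    homo = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
    per_bone = np.einsum("bij,vj->vbi", bone_mats, homo)[..., :3]  # (V, B, 3)
    return np.einsum("vb,vbi->vi", skin_weights, per_bone)

def nonlinear_residual(pose_params, W1, b1, W2, b2, num_verts):
    """Nonlinear portion: a small MLP maps pose parameters to per-vertex
    displacements, approximating bulges, wrinkles, and similar effects."""
    hidden = np.tanh(pose_params @ W1 + b1)
    return (hidden @ W2 + b2).reshape(num_verts, 3)

def approximate_rig(rest_verts, skin_weights, bone_mats, pose_params,
                    W1, b1, W2, b2):
    """Full approximation = linear skinning + learned nonlinear residual."""
    linear = linear_blend_skinning(rest_verts, skin_weights, bone_mats)
    return linear + nonlinear_residual(pose_params, W1, b1, W2, b2,
                                       len(rest_verts))
```

Because the skinning term is cheap and the residual is a fixed-size network forward pass, the total cost is independent of the original rig's complexity, which is what makes the speedup possible.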
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements over prior work, enabling much greater flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g. Presented at Siggraph '18.
Comparing and Evaluating Real Time Character Engines for Virtual Environments
As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters likewise become vital parts of virtual-environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features commonly found in them. This taxonomy can be used as a tool for comparison and evaluation of different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open-source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.
A Majorization-Minimization Based Method for Nonconvex Inverse Rig Problems in Facial Animation: Algorithm Derivation
Automated methods for facial animation are a necessary tool in the modern
industry since the standard blendshape head models consist of hundreds of
controllers and a manual approach is painfully slow. Different solutions have
been proposed that produce output in real-time or generalize well for different
face topologies. However, all these prior works consider a linear approximation of the blendshape function and hence do not provide a high enough level of detail for modern realistic human face reconstruction. We build a method for
solving the inverse rig in blendshape animation using quadratic corrective
terms, which increase accuracy. At the same time, due to the proposed
construction of the objective function, it yields a sparser estimated weight
vector compared to the state-of-the-art methods. The former feature means lower
demand for subsequent manual corrections of the solution, while the latter
indicates that the manual modifications are also easier to include. Our
algorithm is iterative and employs a Majorization Minimization paradigm to cope
with the increased complexity produced by adding the corrective terms. The
surrogate function is easy to solve and allows for further parallelization on
the component level within each iteration. This paper is complementary to an
accompanying paper, Racković et al. (2023), which provides detailed experimental results and discussion, including highly realistic animation data, and shows a clear superiority of the results over state-of-the-art methods.
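The quadratic blendshape function at the heart of this objective can be sketched as follows. The array layout and sizes here are illustrative assumptions, not the paper's notation, and the plain sum over pairs stands in for whichever corrective pairs the rig actually defines.

```python
import numpy as np

def quadratic_blendshape(w, b0, B, C):
    """Blendshape function with quadratic corrective terms:
        f(w) = b0 + B @ w + sum_{i<j} w_i * w_j * C[i, j]
    b0: (3V,) neutral mesh, B: (3V, m) blendshape deltas,
    C: (m, m, 3V) pairwise corrective deltas (illustrative layout)."""
    mesh = b0 + B @ w
    m = w.shape[0]
    for i in range(m):
        for j in range(i + 1, m):
            mesh = mesh + w[i] * w[j] * C[i, j]
    return mesh

def inverse_rig_objective(w, target, b0, B, C, lam):
    """Nonconvex objective: mesh-fit error plus an L1 term encouraging a
    sparse weight vector. A majorization-minimization scheme replaces
    this with an easier surrogate at each iteration."""
    r = quadratic_blendshape(w, b0, B, C) - target
    return 0.5 * r @ r + lam * np.sum(np.abs(w))
```

The quadratic terms make the fit nonconvex in `w`, which is why a surrogate that majorizes the objective (and decouples across components, enabling per-component parallelism) is attractive.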
Rig Inversion by Training a Differentiable Rig Function
Rig inversion is the problem of finding the rig parameter vector that best approximates a given input mesh. In this paper we propose to solve this problem by first obtaining a differentiable rig function: we train a multi-layer perceptron to approximate the rig function. This differentiable approximation can then be used to train a deep learning model for rig inversion.
Comment: Presented at Siggraph Asia '22 in Daegu, South Korea.
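The two-stage idea can be illustrated with a toy stand-in: a small MLP plays the role of the trained differentiable rig, and inversion is done here by plain gradient descent through it (the paper trains a second network for inversion instead; the sizes, seed, and step size below are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained differentiable rig approximation: an MLP
# mapping 5 rig parameters to 9 mesh coordinates (toy sizes).
W1 = 0.5 * rng.normal(size=(5, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 9)); b2 = np.zeros(9)

def rig_approx(p):
    """Differentiable rig surrogate: parameters -> mesh coordinates."""
    return np.tanh(p @ W1 + b1) @ W2 + b2

def loss_grad(p, target):
    """Gradient of 0.5 * ||rig_approx(p) - target||^2 w.r.t. p,
    i.e. backpropagation through the differentiable rig."""
    h = np.tanh(p @ W1 + b1)
    residual = h @ W2 + b2 - target
    return ((residual @ W2.T) * (1.0 - h ** 2)) @ W1.T

def invert_rig(target, steps=2000, lr=0.005):
    """Find rig parameters whose mesh best matches the target mesh."""
    p = np.zeros(5)
    for _ in range(steps):
        p -= lr * loss_grad(p, target)
    return p
```

In the paper's setting, this inner optimization loop is itself replaced by a trained inversion model, so inference reduces to a single forward pass.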
Distributed Solution of the Inverse Rig Problem in Blendshape Facial Animation
The problem of rig inversion is central in facial animation as it allows for
a realistic and appealing performance of avatars. With the increasing
complexity of modern blendshape models, execution times increase beyond
practically feasible solutions. A possible approach towards a faster solution
is clustering, which exploits the spatial nature of the face, leading to a
distributed method. In this paper, we go a step further, involving cluster
coupling to get more confident estimates of the overlapping components. Our
algorithm applies the Alternating Direction Method of Multipliers, sharing the
overlapping weights between the subproblems. The results obtained with this
technique show a clear advantage over the naive clustered approach, as measured
in different metrics of success and visual inspection. The method applies to an
arbitrary clustering of the face. We also introduce a novel method for choosing
the number of clusters in a data-free manner. The method tends to find a
clustering such that the resulting clustering graph is sparse but without
losing essential information. Finally, we give a new variant of a data-free
clustering algorithm that produces good scores with respect to the mentioned
strategy for choosing the optimal clustering.
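The idea of coupling clusters through their overlapping weights can be sketched with a minimal consensus-ADMM loop. The local least-squares solver, penalty, and sizes below are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def consensus_admm(A_list, y_list, idx_list, n, rho=1.0, iters=300):
    """Each cluster k fits its own subset of weights (idx_list[k]) by
    least squares; the consensus variable z pulls the overlapping
    weights of different clusters into agreement."""
    x = [np.zeros(len(idx)) for idx in idx_list]
    u = [np.zeros(len(idx)) for idx in idx_list]   # scaled dual variables
    z = np.zeros(n)
    for _ in range(iters):
        # x-update: argmin 0.5*||A x - y||^2 + (rho/2)*||x - z[idx] + u||^2
        for k, (A, y, idx) in enumerate(zip(A_list, y_list, idx_list)):
            lhs = A.T @ A + rho * np.eye(len(idx))
            rhs = A.T @ y + rho * (z[idx] - u[k])
            x[k] = np.linalg.solve(lhs, rhs)
        # z-update: average each weight over the clusters that share it
        num = np.zeros(n); cnt = np.zeros(n)
        for k, idx in enumerate(idx_list):
            num[idx] += x[k] + u[k]
            cnt[idx] += 1.0
        z = num / np.maximum(cnt, 1.0)
        # dual update: accumulate each cluster's disagreement with z
        for k, idx in enumerate(idx_list):
            u[k] += x[k] - z[idx]
    return z
```

The x-updates are independent per cluster and can run in parallel; only the z- and dual updates touch the shared (overlapping) weights, which is what makes the distributed formulation attractive.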
Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms
We propose a new model-based algorithm solving the inverse rig problem in
facial animation retargeting, exhibiting higher accuracy of the fit and
sparser, more interpretable weight vector compared to SOTA. The proposed method
targets a specific subdomain of human face animation - highly-realistic
blendshape models used in the production of movies and video games. In this
paper, we formulate an optimization problem that takes into account all the
requirements of targeted models. Our objective goes beyond a linear blendshape
model and employs the quadratic corrective terms necessary for correctly
fitting fine details of the mesh. We show that the solution to the proposed
problem yields highly accurate mesh reconstruction even when general-purpose
solvers, like SQP, are used. The results obtained using SQP are highly accurate
in the mesh space but do not exhibit favorable qualities in terms of weight
sparsity and smoothness, and for this reason, we further propose a novel
algorithm relying on a MM technique. The algorithm is specifically suited for
solving the proposed objective, yielding a high-accuracy mesh fit while
respecting the constraints and producing a sparse and smooth set of weights
easy to manipulate and interpret by artists. Our algorithm is benchmarked with
SOTA approaches, and shows an overall superiority of the results, yielding a
smooth animation reconstruction with a relative improvement of up to 45 percent in
root mean squared mesh error while keeping the cardinality comparable with
benchmark methods. This paper gives a comprehensive set of evaluation metrics
that cover different aspects of the solution, including mesh accuracy, sparsity
of the weights, and smoothness of the animation curves, as well as the appearance of the produced animation, as evaluated by human experts.
Lessons from digital puppetry - Updating a design framework for a perceptual user interface
While digital puppeteering is largely used just to augment full-body motion capture in digital production, its technology and traditional concepts could inform a more naturalized multi-modal human-computer interaction than is currently used with new perceptual systems such as Kinect.
Emerging immersive social media networks with their fully live
virtual or augmented environments and largely inexperienced
users would benefit the most from this strategy. This paper defines digital puppeteering as it is currently understood and summarizes its broad shortcomings based on expert evaluation. From this evaluation it suggests updates and experiments that apply current perceptual technology and concepts from cognitive processing to the existing human-computer interaction taxonomy. This updated framework may be more intuitive and better suited to developing extensions to an emerging perceptual user interface for the general public.