Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5× to 10×. This significant savings allows us to run the complex, film-quality rigs in real time even when using a CPU-only implementation on a mobile device.
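The split this abstract describes, a cheap linear term driven directly by the skeleton plus a learned nonlinear residual, can be sketched in a few lines. This is a minimal numpy illustration under assumed shapes, not the paper's implementation; the residual function is a stand-in for the trained deep network.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Linear part of the deformation: each vertex is a weighted blend
    of its rest position transformed by the bones that influence it.

    rest_verts:       (V, 3) rest-pose positions
    weights:          (V, B) per-vertex skinning weights
    bone_transforms:  (B, 4, 4) world transforms of the skeleton
    """
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])             # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

def approximate_deformation(rest_verts, weights, bone_transforms, residual_fn):
    """Full approximation: fast linear skinning plus a learned nonlinear
    residual evaluated from the bone transforms (here a stand-in)."""
    linear = linear_blend_skinning(rest_verts, weights, bone_transforms)
    return linear + residual_fn(bone_transforms)
```

In the paper's setting, `residual_fn` would be a neural network trained on poses sampled from the original film-quality rig; here it is left abstract so the structure of the split stays visible.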
Fast Simulation of Skin Sliding
Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter space resampling. In order to evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method, which solves for an energy minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition it is flexible, simple to use, and can be incorporated into any animation pipeline.
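The force density method mentioned above reduces form finding to a single linear solve: assign a force density to every edge, assemble the weighted graph Laplacian D, and solve D_ff x_f = -D_fc x_c for the free nodes given the fixed ones. The following is a minimal dense-matrix sketch of that idea, not the paper's code; a real implementation would use a sparse solver.

```python
import numpy as np

def force_density_positions(n_nodes, edges, q, fixed, fixed_pos):
    """Equilibrium positions of a force-density network.

    edges:     list of (i, j) node index pairs
    q:         one force density per edge
    fixed:     indices of constrained nodes
    fixed_pos: (len(fixed), d) coordinates of the constrained nodes
    """
    # Assemble the force-density (weighted Laplacian) matrix D.
    D = np.zeros((n_nodes, n_nodes))
    for (i, j), qe in zip(edges, q):
        D[i, i] += qe; D[j, j] += qe
        D[i, j] -= qe; D[j, i] -= qe
    free = [i for i in range(n_nodes) if i not in fixed]
    Dff = D[np.ix_(free, free)]
    Dfc = D[np.ix_(free, fixed)]
    fixed_pos = np.asarray(fixed_pos)
    # One linear solve gives the energy-minimizing free-node positions.
    xf = np.linalg.solve(Dff, -Dfc @ fixed_pos)
    pos = np.zeros((n_nodes, fixed_pos.shape[1]))
    pos[fixed] = fixed_pos
    pos[free] = xf
    return pos
```

With unit force densities this reduces to each free node sitting at the average of its neighbors, which is why the method yields smooth, energy-minimizing parametrizations.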
Implicit Skinning: Real-Time Skin Deformation with Contact Modeling
SIGGRAPH 2013 Conference Proceedings. Geometric skinning techniques, such as smooth blending or dual quaternions, are very popular in the industry for their high performance, but fail to mimic realistic deformations. Other methods make use of physical simulation or control volumes to better capture the skin behavior, yet they cannot deliver real-time feedback. In this paper, we present the first purely geometric method handling skin contact effects and muscular bulges in real time. The insight is to exploit the advanced composition mechanism of volumetric, implicit representations for correcting the results of geometric skinning techniques. The mesh is first approximated by a set of implicit surfaces. At each animation step, these surfaces are combined in real time and used to adjust the position of mesh vertices, starting from their smooth skinning position. This deformation step is done without any loss of detail and seamlessly handles contacts between skin parts. As it acts as a post-process, our method fits well into the standard animation pipeline. Moreover, it requires no intensive computation step such as collision detection, and therefore provides real-time performance.
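The core post-process, moving each skinned vertex to a target iso-value of the composed implicit field, can be illustrated with a Newton-style projection along the field gradient. This is only a sketch of that projection step using a finite-difference gradient; the actual method tracks per-vertex iso-values and handles the composed field's gradient discontinuities far more carefully.

```python
import numpy as np

def adjust_to_isosurface(verts, field, iso, steps=20, eps=1e-4):
    """March each vertex of a skinned mesh toward the iso-value `iso`
    of a scalar field, starting from its smooth-skinning position.

    verts: (V, 3) vertex positions
    field: callable mapping (V, 3) positions to (V,) field values
    """
    verts = verts.copy()
    for _ in range(steps):
        f = field(verts)
        # Finite-difference gradient of the scalar field at each vertex.
        g = np.stack([(field(verts + eps * np.eye(3)[k]) - f) / eps
                      for k in range(3)], axis=-1)
        gn = np.maximum(np.linalg.norm(g, axis=-1, keepdims=True), 1e-9)
        # Newton step toward the target iso-value.
        verts -= (f - iso)[:, None] * g / (gn ** 2)
    return verts
```

Because the correction only nudges vertices along the field, it preserves mesh detail and can run after any standard skinning pass, which is what lets the technique slot into existing pipelines.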
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
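Runtime LoD selection of the kind surveyed here often reduces to distance-banded switching between representations (full mesh, simplified mesh, point set, impostor). The toy sketch below illustrates that pattern; the thresholds and the number of levels are illustrative assumptions, not values from the survey.

```python
def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Pick a LoD index from camera distance: nearer agents get
    higher-detail representations (0 = full mesh, len(thresholds)
    = cheapest representation, e.g. an impostor)."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)
```

In a real crowd renderer this test runs per agent after frustum and occlusion culling, and hysteresis is usually added around each threshold to avoid popping when agents hover at a band boundary.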
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
3D shape editing is widely used in a range of applications such as movie production, computer games and computer-aided design, and it is a popular research topic in computer graphics and computer vision. In past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by the optimal transformation and weights for an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep learning based editing methods, which are naturally data-driven, have been developed in this field. We survey recent research works from the geometric viewpoint to the emerging neural deformation techniques, and categorize them into organic shape editing methods and man-made model editing methods. Both traditional methods and recent neural network based methods are reviewed.
Automatic generation of dynamic skin deformation for animated characters
Non-automatic rigging requires heavy human involvement, and existing automatic rigging algorithms are inefficient in terms of computational cost; in particular, for current curve-based skin deformation methods, identifying the iso-parametric curves and creating the animation skeleton requires tedious and time-consuming manual work. Although several automatic rigging methods have been developed, they do not target curve-based models. To tackle this issue, this paper proposes a new rigging algorithm for the automatic generation of dynamic skin deformation that identifies iso-parametric curves and creates an animation skeleton in a few milliseconds. It can be seamlessly used in curve-based skin deformation methods to make the rigging process fast enough for highly efficient computer animation applications.
Zero-shot Pose Transfer for Unrigged Stylized 3D Characters
Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth at training. We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation, but without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it does not need rigging, nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to the state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT
Comment: CVPR 202
Composing quadrilateral meshes for animation
The modeling-by-composition paradigm can be a powerful tool in modern animation pipelines. We propose two novel interactive techniques to compose 3D assets that enable the artists to freely remove, detach and combine components of organic models. The idea behind our methods is to preserve most of the original information in the input characters and blend accordingly where necessary.
The first method, QuadMixer, provides a robust tool to compose the quad layouts of watertight pure quadrilateral meshes, exploiting the boolean operations defined on triangles. Quad Layout is a crucial property for many applications since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving the shape. Our technique keeps untouched all the quads in the patches which are not involved in the blending. The resulting meshes preserve the originally designed edge flows that, by construction, are captured and incorporated into the new quads.
SkinMixer extends this approach to compose skinned models, taking into account not only the surface but also the data structures for animating the character. We propose a new operation-based technique that preserves and smoothly merges meshes, skeletons, and skinning weights. The retopology approach of QuadMixer is extended to work on quad-dominant and arbitrary complex surfaces. Instead of relying on boolean operations on triangle meshes, we manipulate signed distance fields to generate an implicit surface. The results preserve most of the information in the input assets, blending accordingly in the intersection regions. The resulting characters are ready to be used in animation pipelines.
Given the high quality of the results generated, we believe that our methods could have a huge impact on the entertainment industry. Integrated into current software for 3D modeling, they would certainly provide a powerful tool for the artists. Allowing them to automatically reuse parts of their well-designed characters could lead to a new approach for creating models, which would significantly reduce the cost of the process.
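The SkinMixer step that replaces boolean operations on triangle meshes with signed distance fields hinges on combining two SDFs so that the result blends smoothly in the intersection region. A common building block for this is the polynomial smooth minimum; the sketch below illustrates that idea and is not the authors' exact blend operator.

```python
import numpy as np

def sdf_sphere(p, center, r):
    """Signed distance from point(s) p to a sphere: negative inside."""
    return np.linalg.norm(p - center, axis=-1) - r

def smooth_union(d1, d2, k=0.25):
    """Smooth minimum of two signed distance fields. A plain min()
    gives a hard boolean union; this polynomial blend rounds the
    crease where the two surfaces meet, with blend radius k."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)
```

Far from the intersection the blend agrees with the ordinary union, so each input model is preserved untouched there; only the contact region is reshaped, which mirrors the paper's goal of keeping most of the information in the input assets.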
Matrix-based Parameterizations of Skeletal Animated Appearance
While realistic rendering gains more popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that can approximate realistic rendering results in real time.
Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters that can be used in a real-time approximation. Establishing a linear dependency between motion and appearance is the basis of our method. We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects that block ambient light. Ambient occlusion is a view-independent technique important for realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry.
We are able to approximate ground-truth ambient occlusion with low error. Our technique can also be extended to different visual effects, such as rendering human skin (subsurface scattering), view-dependent changes in color, deformation of muscles, fur, or clothing.
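The linear motion-to-appearance dependency at the heart of this thesis can be illustrated as an ordinary least-squares fit from skeletal pose features to per-vertex ambient occlusion values. All names, shapes, and the choice of plain least squares below are assumptions for exposition, not the thesis's exact parameterization.

```python
import numpy as np

def fit_linear_ao_model(pose_features, ao_samples):
    """Fit a linear map W from pose features to per-vertex AO.

    pose_features: (N, F) one pose descriptor per training frame
    ao_samples:    (N, V) ground-truth AO per vertex for each frame
    """
    N = pose_features.shape[0]
    P = np.hstack([pose_features, np.ones((N, 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(P, ao_samples, rcond=None)
    return W                                          # (F + 1, V)

def predict_ao(W, pose_feature):
    """Evaluate the fitted map for one pose; AO is clamped to [0, 1]."""
    p = np.append(pose_feature, 1.0)
    return np.clip(p @ W, 0.0, 1.0)
```

At runtime the prediction is a single matrix-vector product per frame, which is what makes a linear parameterization attractive for real-time approximation of an otherwise precomputed effect.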
- âŠ