A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
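The palette-skinning acceleration mentioned above builds on linear blend skinning: every vertex blends its rest position transformed by a small set of bone matrices, and a crowd can share one matrix palette per character. A minimal numpy sketch of that blend (function and array names are illustrative, not from the survey):

```python
import numpy as np

def skin_vertices(rest_verts, weights, bone_mats):
    """Linear blend skinning: each vertex is a weighted sum of its
    rest position transformed by each influencing bone's matrix."""
    # rest_verts: (V, 3), weights: (V, B), bone_mats: (B, 4, 4)
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])        # (V, 4)
    # Transform every vertex by every bone, then blend by weight.
    per_bone = np.einsum('bij,vj->bvi', bone_mats, homo)   # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)   # (V, 4)
    return blended[:, :3]
```

In a palette-skinning renderer the `bone_mats` array would be uploaded once per character as a matrix palette, so instanced vertices fetch their bones by index and many agents share one mesh per level of detail.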
Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5×-10×. This significant savings allows us to run the complex, film-quality rigs in real-time even when using a CPU-only implementation on a mobile device
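The linear/nonlinear split described above can be sketched in a few lines: the linear term is a direct map from flattened bone transforms to vertex positions, and a small network supplies the learned residual. This is only an illustrative numpy forward pass under assumed shapes, not the paper's implementation:

```python
import numpy as np

def approximate_deformation(pose, skin_matrix, mlp_weights):
    """Sketch of the linear/nonlinear split: the linear part maps bone
    transforms to vertex positions directly; a small learned network
    adds the nonlinear residual (bulges, wrinkles, etc.)."""
    # pose: flattened bone transforms, shape (P,)
    linear = skin_matrix @ pose          # direct linear skinning term
    W1, b1, W2, b2 = mlp_weights         # a tiny two-layer residual net
    h = np.tanh(W1 @ pose + b1)
    residual = W2 @ h + b2               # learned nonlinear correction
    return linear + residual
```

With the residual network zeroed out, the result reduces to plain linear skinning; training the network on rig evaluations is what recovers the film-quality detail.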
Real-Time Character Animation for Computer Games
The importance of real-time character animation in computer games has increased considerably over the past decade. Due to advances in computer hardware and the achievement of great increases in computational speed, the demand for more realism in computer games is continuously growing. This paper will present and discuss various methods of 3D character animation and prospects of their real-time application, ranging from the animation of simple articulated objects to real-time deformable object meshes
Output-Sensitive Rendering of Detailed Animated Characters for Crowd Simulation
High-quality, detailed animated characters are often represented as textured
polygonal meshes. The problem with this representation is the high cost
of rendering and animating each of these characters. This
problem has become a major limiting factor in crowd simulation. Since we
want to render a huge number of characters in real-time, the purpose of
this thesis is therefore to study the current existing approaches in crowd
rendering to derive a novel approach.
The main limitations we found when using impostors are (1) the
large amount of memory needed to store them, which also has to be sent
to the graphics card, (2) the lack of visual quality in close-up views, and
(3) some visibility problems. To overcome these limitations and improve
performance, these conclusions led us to present a new representation
for 3D animated characters using relief mapping, thus supporting
output-sensitive rendering.
The basic idea of our approach is to encode each character through a
small collection of textured boxes storing color and depth values. At runtime,
each box is animated according to the rigid transformation of its associated
bone in the animated skeleton. A fragment shader is used to recover
the original geometry using an adapted version of relief mapping. Unlike
competing output-sensitive approaches, our compact representation is able
to recover high-frequency surface details and reproduces view-motion parallax
effects. Furthermore, the proposed approach ensures correct visibility
among different animated parts, and it does not require us to predefine the
animation sequences nor to select a subset of discrete views. Finally, a user
study demonstrates that our approach allows for a large number of simulated
agents with negligible visual artifacts
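The fragment-shader step described above recovers geometry by marching a view ray through a stored depth texture. The linear-search core of relief mapping can be sketched on the CPU as follows (a simplification under assumed texture-space conventions; a real shader adds a binary-search refinement):

```python
import numpy as np

def relief_march(depth_map, origin, direction, steps=64):
    """Linear-search step of relief mapping: march a view ray through a
    depth texture until the ray depth exceeds the stored depth, then
    return the hit texel. origin/direction are in texture space
    (u, v, depth)."""
    p = np.array(origin, dtype=float)
    step = np.array(direction, dtype=float) / steps
    for _ in range(steps):
        u, v, d = p
        iu = min(int(u * depth_map.shape[1]), depth_map.shape[1] - 1)
        iv = min(int(v * depth_map.shape[0]), depth_map.shape[0] - 1)
        if d >= depth_map[iv, iu]:       # ray went below the surface
            return (iu, iv), p
        p += step
    return None, p                       # ray exits without hitting
```

Because the geometry is reconstructed per fragment, the cost scales with covered pixels rather than with the character's triangle count, which is what makes the representation output-sensitive.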
Easy Rigging of Face by Automatic Registration and Transfer of Skinning Parameters
Preparing a facial mesh to be animated requires a laborious manual rigging process. The rig specifies how the input animation data deforms the surface and allows artists to manipulate a character. We present a method that automatically rigs a facial mesh based on Radial Basis Functions and a linear blend skinning approach. Our approach transfers the skinning parameters (feature points and their envelopes, i.e. point-vertex weights) of a reference facial mesh (source), already rigged, to the chosen facial mesh (target) by computing an automatic registration between the two meshes. There is no need to manually mark the correspondence between the source and target mesh. As a result, inexperienced artists can automatically rig facial meshes and start animating their 3D characters right away, driven for instance by motion capture data
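The transfer idea above can be illustrated with a toy RBF interpolation: once source feature points are registered onto the target, per-bone skinning weights are blended from those points onto target vertices. This is a hedged sketch with Gaussian basis functions and invented names, not the paper's exact formulation:

```python
import numpy as np

def transfer_weights(src_pts, src_weights, tgt_verts, sigma=1.0):
    """RBF-style transfer sketch: interpolate per-bone skinning weights
    from registered source feature points onto target vertices using
    Gaussian radial basis functions, then renormalize so each vertex's
    weights sum to 1."""
    # src_pts: (S, 3), src_weights: (S, B), tgt_verts: (T, 3)
    d2 = ((tgt_verts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))    # (T, S) basis values
    w = phi @ src_weights                   # blend source weights
    w /= w.sum(axis=1, keepdims=True)       # renormalize per vertex
    return w
```

A target vertex lying on a source feature point simply inherits that point's weights; vertices in between receive smooth blends, which is what makes the transferred rig usable without manual weight painting.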
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
3D shape editing is widely used in a range of applications such as movie
production, computer games and computer aided design. It is also a popular
research topic in computer graphics and computer vision. In past decades,
researchers have developed a series of editing methods to make the editing
process faster, more robust, and more reliable. Traditionally, the deformed
shape is determined by an optimal transformation and weights minimizing an
energy term. With the increasing availability of 3D shapes on the Internet,
data-driven methods were proposed to improve the editing results. More
recently, as deep neural networks became popular, many deep-learning-based
editing methods, which are naturally data-driven, have been developed. We
survey recent research, from the geometric viewpoint to emerging neural
deformation techniques, and categorize it into organic shape editing methods
and man-made model editing methods. Both traditional methods and recent neural
network based methods are reviewed
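The traditional energy-minimizing formulation mentioned above can be made concrete with a tiny Laplacian-editing example: keep the shape's differential coordinates while satisfying handle constraints, solved as one linear least-squares system. This is a generic textbook-style sketch, not a method from the survey:

```python
import numpy as np

def laplacian_edit(verts, edges, handles):
    """Energy-based editing sketch: find positions whose umbrella
    Laplacian matches the original shape's (detail-preservation term)
    while meeting handle constraints, via linear least squares."""
    n = len(verts)
    L = np.zeros((n, n))
    for i, j in edges:                    # graph Laplacian of the mesh
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    delta = L @ verts                     # differential coordinates
    # Stack heavily weighted soft handle constraints under the Laplacian.
    C = np.zeros((len(handles), n)); d = []
    for r, (idx, pos) in enumerate(handles):
        C[r, idx] = 100.0                 # large weight -> near-hard
        d.append(100.0 * np.asarray(pos, dtype=float))
    A = np.vstack([L, C])
    b = np.vstack([delta, np.array(d)])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Dragging one handle propagates a smooth deformation to unconstrained vertices while local detail (the Laplacian) is preserved as well as possible.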
Matrix-based Parameterizations of Skeletal Animated Appearance
While realistic rendering gains more popularity in industry, photorealistic and
physically-based techniques often necessitate offline processing due to their computational
complexity. Real-time applications, such as video games and virtual reality, rely mostly
on approximation and precomputation techniques to achieve realistic results. The objective
of this thesis is to investigate different animated parameterizations in order to devise
a technique that can approximate realistic rendering results in real time.
Our investigation focuses on rendering visual effects applied to skinned,
skeleton-based characters. Combined parameterizations of motion and appearance data are used
to extract parameters that can be used in a real-time approximation. Trying to establish
a linear dependency between motion and appearance is the basis of our method.
We focus on ambient occlusion, a simulation of shadowing caused by objects that
block ambient light. Ambient occlusion is a view-independent technique important for
realism. We consider different parameterization techniques that treat the mesh space
depending on skeletal animation information and/or mesh geometry.
We are able to approximate ground-truth ambient occlusion with low error. Our
technique can also be extended to different visual effects, such as rendering human skin
(subsurface scattering), changes in color due to the view orientation, deformation of
muscles, fur, or clothing
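The linear motion-to-appearance dependency at the heart of the method can be sketched as a plain least-squares fit: learn a matrix mapping flattened pose features to per-vertex ambient occlusion from precomputed training frames, then evaluate one matrix product per frame at runtime. All names here are illustrative assumptions:

```python
import numpy as np

def fit_ao_model(pose_feats, ao_samples):
    """Fit, in the least-squares sense, a matrix M with
    ao ~= pose @ M from precomputed (pose feature, per-vertex
    ambient occlusion) training pairs."""
    # pose_feats: (N, P) flattened skeletal features per training frame
    # ao_samples: (N, V) ground-truth AO per vertex per frame
    M, *_ = np.linalg.lstsq(pose_feats, ao_samples, rcond=None)
    return M

def predict_ao(M, pose):
    """Runtime evaluation: one matrix product per frame."""
    return pose @ M
```

The offline cost is in sampling ground-truth occlusion; the runtime approximation is cheap enough for games and VR, which matches the thesis's stated goal.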
A framework for natural animation of digitized models
We present a novel, versatile, fast and simple framework to generate high-quality animations of scanned human characters from input motion data. Our method is purely mesh-based and, in contrast to skeleton-based animation, requires only a minimum of manual interaction. The only manual step required to create moving virtual people is the placement of a sparse set of correspondences between triangles of an input mesh and triangles of the mesh to be animated. The proposed algorithm implicitly generates realistic body deformations, and can easily transfer motions between humans of different shape and proportions. It handles different types of input data, e.g. other animated meshes and motion capture files, in just the same way. Finally, and most importantly, it creates animations at interactive frame rates. We feature two working prototype systems that demonstrate that our method can generate lifelike character animations from both marker-based and marker-less optical motion capture data
Improving automatic rigging for 3D humanoid characters
In the field of computer animation, the process of creating an animated character is usually a long and tedious task. An animation character is usually defined by a 3D mesh (a set of triangles in space) that gives the character its external appearance or shape. It usually also has an inner structure, the skeleton. When a skeleton is associated with a character mesh, this association is called skeleton binding, and a skeleton bound to a character mesh is an animation rig.
Rigging a character from scratch can be a very tedious process. The definition and creation of a centered skeleton, together with the 'painting', by an artist, of the influence parameters between the skeleton and the mesh (the skinning), is the most demanding part of achieving an acceptable behavior for a character.
This rigging process can be simplified and accelerated using an automatic rigging method. Automatic rigging methods take a 3D mesh as input, generate a skeleton based on the shape of the original model, bind the input mesh to the generated skeleton, and finally compute a set of parameters for a chosen skinning method. The main objective of this thesis is to devise a method for rigging an arbitrary 3D model with minimum user interaction. This can be useful to people without experience in the animation field, or to experienced people to accelerate the rigging process from days to hours or minutes, depending on the quality needed. With this in mind, we have designed our method as a set of tools that can be applied to general input models defined by an artist. The contributions made in the development of this thesis can be summarized as:
• Generation of an animation rig: Given an arbitrary closed mesh, we have implemented a thinning method to first create an unrefined geometric skeleton that captures the topology and pose of the input character. Using this geometric skeleton as a starting point, we apply a refining method that creates an adjusted logic skeleton, based on a template or defined by the user, that is compatible with current animation formats. The output logic skeleton is specific to each character, and it is bound to the input mesh to create an animation rig.
• Skinning: Having defined an animation rig for an arbitrary mesh, we have developed an improved skinning method based on the Linear Blend Skinning (LBS) algorithm. Our contributions in the skinning field can be subdivided into:
– We propose a segmentation method that works as the core element of a weight-assignment algorithm and a skinning algorithm; we have also developed an automatic algorithm to compute the skin weights of the LBS skinning of a rigged polygonal mesh.
– Our proposed skinning algorithm builds on the features of LBS skinning. The main purpose of the developed algorithm is to solve the well-known "candy wrapper" artifact, which produces a substantial loss of volume when a link of an animation skeleton is rotated about its own axis. We have compared our results with the most important methods in the skinning field, such as Dual Quaternion Skinning (DQS) and LBS, achieving better performance than DQS and an improvement in quality over LBS.
• Animation tools: We have developed a set of Autodesk Maya commands that work together as a rigging tool, using our previously proposed methods.
• Animation loader: Moreover, an animation loader tool has been implemented that allows the user to load animations from a skeleton with a different structure onto a rigged 3D model. The contributions described above have been published in three research papers: the first two were presented at international conferences, and the third was accepted for publication in a JCR-indexed journal.
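The "candy wrapper" artifact that the thesis targets is easy to reproduce in a few lines of plain linear blend skinning: twist one of two equally weighted bones 180 degrees about the shared axis and the blended vertex collapses onto that axis. This toy demonstration (all names illustrative) shows the failure mode, not the thesis's fix:

```python
import numpy as np

def lbs(vertex, weights, mats):
    """Plain linear blend skinning of one homogeneous vertex."""
    v = np.append(vertex, 1.0)
    return sum(w * (M @ v) for w, M in zip(weights, mats))[:3]

# Bone 0: identity. Bone 1: a 180-degree twist about the x axis.
twist = np.eye(4)
twist[1, 1] = twist[2, 2] = -1.0     # rotate y and z by 180 degrees

v = np.array([0.0, 1.0, 0.0])        # off-axis point on the surface
blended = lbs(v, [0.5, 0.5], [np.eye(4), twist])
# Averaging the rotated and unrotated positions maps the point onto the
# x axis: the mesh pinches to zero volume, the artifact that DQS and the
# thesis's segmentation-based skinning are designed to avoid.
```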
MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
In this paper, we aim to create generalizable and controllable neural signed
distance fields (SDFs) that represent clothed humans from monocular depth
observations. Recent advances in deep learning, especially neural implicit
representations, have enabled human shape reconstruction and controllable
avatar generation from different sensor inputs. However, to generate realistic
cloth deformations from novel input poses, watertight meshes or dense full-body
scans are usually needed as inputs. Furthermore, due to the difficulty of
effectively modeling pose-dependent cloth deformations for diverse body shapes
and cloth types, existing approaches resort to per-subject/cloth-type
optimization from scratch, which is computationally expensive. In contrast, we
propose an approach that can quickly generate realistic clothed human avatars,
represented as controllable neural SDFs, given only monocular depth images. We
achieve this by using meta-learning to learn an initialization of a
hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is
conditioned on human poses and represents a clothed neural avatar that deforms
non-rigidly according to the input poses. Meanwhile, it is meta-learned to
effectively incorporate priors of diverse body shapes and cloth types and thus
can be much faster to fine-tune, compared to models trained from scratch. We
qualitatively and quantitatively show that our approach outperforms
state-of-the-art approaches that require complete meshes as inputs while our
approach requires only depth frames as inputs and runs orders of magnitudes
faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very
robust, being the first to generate avatars with realistic dynamic cloth
deformations given as few as 8 monocular depth frames.Comment: 17 pages, 9 figures. Project page:
https://neuralbodies.github.io/metavatar
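The hypernetwork idea above — a network whose output is the parameters of another network — can be sketched with a single linear hypernetwork layer predicting the weights of a tiny SDF MLP conditioned on pose. This is a hedged illustration of the architecture pattern with assumed shapes, far smaller than MetaAvatar's actual networks:

```python
import numpy as np

def hypernet_sdf(pose, hyper_W, hidden=16):
    """Pose-conditioned hypernetwork sketch: a linear map turns the pose
    vector into the flattened parameters of a tiny SDF MLP; the returned
    closure evaluates that SDF at 3D query points."""
    params = hyper_W @ pose                       # predict SDF weights
    n1 = hidden * 3
    W1 = params[:n1].reshape(hidden, 3)           # input layer (3 -> hidden)
    b1 = params[n1:n1 + hidden]
    W2 = params[n1 + hidden:n1 + hidden * 2].reshape(1, hidden)
    b2 = params[-1:]                              # output layer (hidden -> 1)

    def sdf(x):                                   # x: (N, 3) query points
        h = np.tanh(x @ W1.T + b1)
        return (h @ W2.T + b2).ravel()            # signed distance per point
    return sdf
```

Meta-learning, in this picture, amounts to finding an initialization of `hyper_W` (and of the surrounding encoder) that fine-tunes quickly to a new subject from only a few depth frames, instead of optimizing each avatar from scratch.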