Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5× to 10×. This significant savings allows us to run complex, film-quality rigs in real time even when using a CPU-only implementation on a mobile device.
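The linear/nonlinear split described above can be sketched in a few lines. Below, linear blend skinning supplies the linear part from 2D bone transforms, while a small fixed linear map stands in for the paper's trained deep network; the `nonlinear_correction` coefficients and the use of joint angles as pose features are illustrative assumptions, not the paper's actual model.

```python
import math

def skin_vertex(rest_pos, weights, bone_transforms):
    """Linear part: blend 2D bone transforms (rotation angle, translation)."""
    x = y = 0.0
    for w, (angle, tx, ty) in zip(weights, bone_transforms):
        c, s = math.cos(angle), math.sin(angle)
        x += w * (c * rest_pos[0] - s * rest_pos[1] + tx)
        y += w * (s * rest_pos[0] + c * rest_pos[1] + ty)
    return (x, y)

def nonlinear_correction(pose_params, coeffs):
    """Placeholder for the learned residual: a fixed linear map standing in
    for the trained network's per-vertex offset."""
    dx = sum(c * p for c, p in zip(coeffs[0], pose_params))
    dy = sum(c * p for c, p in zip(coeffs[1], pose_params))
    return (dx, dy)

def deform(rest_pos, weights, bone_transforms, coeffs):
    """Full deformation = linear skinning + learned nonlinear correction."""
    lx, ly = skin_vertex(rest_pos, weights, bone_transforms)
    pose_params = [t[0] for t in bone_transforms]  # joint angles as pose features
    dx, dy = nonlinear_correction(pose_params, coeffs)
    return (lx + dx, ly + dy)
```

With zero correction coefficients this reduces to plain skinning; the paper's speedup comes from the correction network being far cheaper to evaluate than the full rig.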
A survey of real-time crowd rendering
In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
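Runtime LoD selection, one of the criteria the survey reviews, is often driven by camera distance. A minimal sketch, assuming simple distance thresholds (the threshold values below are illustrative):

```python
def select_lod(distance, thresholds):
    """Pick a LoD index from camera distance: index 0 is the most detailed
    representation, and each threshold marks where the next coarser one
    takes over."""
    for i, t in enumerate(thresholds):
        if distance < t:
            return i
    return len(thresholds)  # beyond the last threshold: coarsest LoD

# Hypothetical setup: full mesh under 10 units, reduced mesh under 50,
# impostor (image-based) under 200, single billboard beyond that.
lod_thresholds = [10.0, 50.0, 200.0]
```

Per-agent selection like this is what makes polygon-, point-, and image-based representations coexist in one crowd.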
Fast Simulation of Skin Sliding
Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle, and bone. Due to the complex interconnections between these separate layers and their differing elasticity properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter-space resampling. To evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method, which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition, it is flexible, simple to use, and can be incorporated into any animation pipeline.
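The force density method mentioned above reduces form-finding to a linear solve: given a force density q per edge, the free-node coordinates satisfy a Laplacian-like system assembled from q, with fixed nodes contributing to the right-hand side. A small pure-Python sketch, where the network topology and q values are illustrative and a dense Gaussian-elimination solver stands in for the sparse solver:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def force_density_positions(edges, q, fixed, n_free):
    """edges: (i, j) pairs over free nodes 0..n_free-1 and fixed node ids;
    q: force density per edge; fixed: {node_id: (x, y)}.
    Assembles the force-density system per coordinate and solves it."""
    coords = []
    for dim in range(2):
        A = [[0.0] * n_free for _ in range(n_free)]
        b = [0.0] * n_free
        for (i, j), qe in zip(edges, q):
            for a, other in ((i, j), (j, i)):
                if a < n_free:
                    A[a][a] += qe
                    if other < n_free:
                        A[a][other] -= qe
                    else:  # fixed neighbor moves to the right-hand side
                        b[a] += qe * fixed[other][dim]
        coords.append(solve(A, b))
    return list(zip(*coords))
```

Because the system is linear in the node positions, re-solving it as the deformation changes is cheap, which is what makes the method suitable for real-time resampling.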
TapMo: Shape-aware Motion Generation of Skeleton-free Characters
Previous motion generation methods are limited to pre-rigged 3D human models, hindering their application to the animation of various non-rigged characters. In this work, we present TapMo, a Text-driven Animation Pipeline for synthesizing Motion in a broad spectrum of skeleton-free 3D characters. The pivotal innovation in TapMo is its use of shape deformation-aware features as a condition to guide the diffusion model, thereby enabling the generation of mesh-specific motions for various characters. Specifically, TapMo comprises two main components: a Mesh Handle Predictor and a Shape-aware Diffusion Module. The Mesh Handle Predictor predicts skinning weights and clusters mesh vertices into adaptive handles for deformation control, which eliminates the need for traditional skeletal rigging. The Shape-aware Diffusion Module synthesizes motion with mesh-specific adaptations, employing text-guided motions and the mesh features extracted during the first stage, and preserving the geometric integrity of the animations by accounting for the character's shape and deformation. Trained in a weakly supervised manner, TapMo can accommodate a multitude of non-human meshes, both with and without associated text motions. We demonstrate the effectiveness and generalizability of TapMo through rigorous qualitative and quantitative experiments. Our results reveal that TapMo consistently outperforms existing auto-animation methods, delivering superior-quality animations for both seen and unseen heterogeneous 3D characters.
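The handle mechanism described above can be illustrated schematically: vertices carry weights over a small set of handles, clustering assigns each vertex to its dominant handle, and motion is applied as per-handle transforms blended by the weights. The weights and translations below are made-up stand-ins for TapMo's learned predictions, not its actual outputs.

```python
def cluster_vertices(handle_weights):
    """Assign each vertex to its dominant handle (a crude stand-in for the
    adaptive clustering done by the Mesh Handle Predictor)."""
    return [max(range(len(w)), key=lambda h: w[h]) for w in handle_weights]

def deform_with_handles(vertices, handle_weights, handle_translations):
    """Blend per-handle translations by each vertex's handle weights,
    as in linear blend skinning but with handles instead of bones."""
    out = []
    for v, weights in zip(vertices, handle_weights):
        dx = sum(w * t[0] for w, t in zip(weights, handle_translations))
        dy = sum(w * t[1] for w, t in zip(weights, handle_translations))
        out.append((v[0] + dx, v[1] + dy))
    return out
```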
Character customization: Animated hair and clothing
Bachelor's thesis in Video Game Design and Development (Treball final de Grau en Disseny i Desenvolupament de Videojocs). Code: VJ1241. Academic year: 2018/2019. This project consists of designing and implementing a 3D female character editor. It focuses on modeling and animating the female character, her hairstyle, and her clothes. The editor will be developed using the Unity 3D game engine. It will consist of an interface that allows changing skin and eye color, hair style and color, and, lastly, the clothes the character wears, chosen from a catalogue of predefined models. With each change, the character will respond with an animation in order to improve the experience of perceiving the character's final style.
Procedural Generation of 2D Creatures
The purpose of this thesis is the development of a system capable of generating a large variety of 2D creatures and their associated data, such as skeletons, meshes, and textures. A JavaScript implementation of the system was developed for this thesis. The thesis contains a description of the developed system and of each step of the generation process and its principles, with additional notes on the specifics of the implementation. The creature generation system as a whole and its implementation are analysed, and their advantages and drawbacks are brought out. The performance of the implementation is also tested. Several possible improvements are proposed at the end of the thesis, as well as possible uses.
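A skeleton-generation step like the one described can be sketched as a random bone chain whose segment lengths and angles wander within bounds. The parameter ranges below are illustrative, not the thesis's actual values, and the sketch is in Python rather than the thesis's JavaScript.

```python
import math
import random

def generate_skeleton(n_bones, seed=None):
    """Generate a 2D bone chain: each bone has a random length and an
    absolute angle that wanders from its parent's. Seeding makes the
    same creature reproducible."""
    rng = random.Random(seed)
    bones, x, y, angle = [], 0.0, 0.0, 0.0
    for _ in range(n_bones):
        length = rng.uniform(0.5, 2.0)      # illustrative bone-length range
        angle += rng.uniform(-0.6, 0.6)     # illustrative bend per joint
        nx = x + length * math.cos(angle)
        ny = y + length * math.sin(angle)
        bones.append(((x, y), (nx, ny)))    # bone as (start, end) joints
        x, y = nx, ny
    return bones
```

Geometry and texture generation would then hang off the joints this chain produces.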
Matrix-based Parameterizations of Skeletal Animated Appearance
While realistic rendering gains more popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that can approximate realistic rendering results in real time.
Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters that can be used in a real-time approximation. Establishing a linear dependency between motion and appearance is the basis of our method.
We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects that block the ambient light, which is assumed uniform. Ambient occlusion is a view-independent technique that is important for real-time realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry.
We are able to approximate ground-truth ambient occlusion with low error. Our technique can also be extended to other visual effects, such as rendering human skin (subsurface scattering), view-dependent changes in color, muscle deformation, fur, or clothing.
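The core idea, a linear dependency between motion and appearance, can be illustrated with ordinary least squares: fit a linear map from a pose parameter to a vertex's ambient occlusion offline, then evaluate it cheaply at runtime. The training pairs below are made up for illustration, and this single-parameter model is far simpler than the thesis's matrix-based parameterizations.

```python
def fit_linear(poses, ao_values):
    """Ordinary least squares: fit ao ~ slope * pose + intercept from
    offline (pose, ambient occlusion) training pairs."""
    n = len(poses)
    mp = sum(poses) / n
    ma = sum(ao_values) / n
    cov = sum((p - mp) * (a - ma) for p, a in zip(poses, ao_values))
    var = sum((p - mp) ** 2 for p in poses)
    slope = cov / var
    return slope, ma - slope * mp

def predict_ao(pose, model):
    """Runtime evaluation: one multiply-add per vertex, clamped to the
    valid occlusion range [0, 1]."""
    slope, intercept = model
    return min(1.0, max(0.0, slope * pose + intercept))
```

The precomputation cost lives entirely in the fit; per-frame evaluation is a single multiply-add per vertex, which is what makes the approximation viable in real time.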
- âŠ