A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
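The runtime LoD selection criteria surveyed above can be illustrated with a minimal distance-based scheme; the band thresholds, representation names, and function names below are invented for illustration and are not taken from the survey:

```python
import math

# Illustrative distance bands mapping to LoD representations
# (values and labels are assumptions, not from the survey).
LOD_BANDS = [(10.0, "full-mesh"), (40.0, "simplified-mesh"), (120.0, "impostor")]

def select_lod(cam_pos, agent_pos):
    """Pick a LoD representation from the camera-to-agent distance."""
    d = math.dist(cam_pos, agent_pos)
    for max_dist, lod in LOD_BANDS:
        if d <= max_dist:
            return lod
    return None  # beyond the far band: cull the agent entirely

def cull_and_classify(cam_pos, agents):
    """Return {lod: [agent indices]} for the visible part of the crowd."""
    buckets = {}
    for i, pos in enumerate(agents):
        lod = select_lod(cam_pos, pos)
        if lod is not None:
            buckets.setdefault(lod, []).append(i)
    return buckets
```

A real engine would combine such a classifier with the frustum and occlusion culling the survey also covers, so that only agents passing all visibility tests are bucketed and drawn.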
Real-time content-aware texturing for deformable surfaces
Animation of models often introduces distortions to their parameterisation, as these are typically optimised for a single frame. The net effect is that under deformation, the mapped features, i.e. UV texture maps, bump maps or displacement maps, may appear to stretch or scale in an undesirable way. Ideally, what we would like is for the appearance of such features to remain feasible given any underlying deformation. In this paper we introduce a real-time technique that reduces such distortions based on a distortion control (rigidity) map. In two versions of our proposed technique, the parameter space is warped in either an axis or a non-axis aligned manner based on the minimisation of a non-linear distortion metric. This in turn is solved using a highly optimised hybrid CPU-GPU strategy. The result is real-time dynamic content-aware texturing that reduces distortions in a controlled way. The technique can be applied to reduce distortions in a variety of scenarios, including reusing a low geometric complexity animated sequence with a multitude of detail maps, dynamic procedurally defined features mapped on deformable geometry and animation authoring previews on texture-mapped models. © 2013 ACM
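The paper's actual distortion metric is non-linear and minimised with a hybrid CPU-GPU solver; as a much simplified stand-in, the sketch below scores a rigidity-weighted squared area-change ratio per triangle, where the rigidity weight plays the role of the distortion control map (the metric itself and all names are illustrative assumptions):

```python
import math

def tri_area(p0, p1, p2):
    """Area of a 3D triangle via the cross-product magnitude."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def distortion_energy(rest, deformed, rigidity):
    """Sum of rigidity-weighted squared area-change ratios over triangles.

    rest/deformed: lists of 3-vertex triangles; rigidity: per-triangle
    weight from the control map (1 = keep rigid, 0 = free to stretch).
    """
    e = 0.0
    for r, d, w in zip(rest, deformed, rigidity):
        a0, a1 = tri_area(*r), tri_area(*d)
        e += w * (a1 / a0 - 1.0) ** 2
    return e
```

Minimising such an energy by warping the parameter space, rather than the geometry, is what lets the mapped features keep a plausible appearance under deformation.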
COLLADA + MPEG-4 or X3D + MPEG-4
The paper is an overview of 3D graphics assets and applications standards. The authors analyzed the three main open standards dealing with three-dimensional (3-D) graphics content and applications, X3D, COLLADA, and MPEG-4, to clarify the role of each with respect to the following criteria: ability to describe only the graphics assets in a synthetic 3-D scene or also its behavior as an application, compression capabilities, and appropriateness for authoring, transmission, and publishing. COLLADA could become the interchange format for authoring tools; MPEG-4 on top of it (as specified in MPEG-4 Part 25), the publishing format for graphics assets; and X3D, the standard for interactive applications, enriched by MPEG-4 compression in the case of online ones. The authors also mentioned that in order to build a mobile application, a developer has to consider different hardware configurations and performance levels, different operating systems, different screen sizes, and input controls.
Geometry videos
We present the "Geometry Video," a new data structure to encode animated meshes. Being able to encode animated meshes in a generic source-independent format allows people to share experiences. Changing the viewpoint allows more interaction than the fixed view supported by 2D video. Geometry videos are based on the "Geometry Image" mesh representation introduced by Gu et al. Our novel data structure provides a way to treat an animated mesh as a video sequence (i.e., 3D image) and is well suited for network streaming. This representation also offers the possibility of applying and adapting existing mature video processing and compression techniques (such as MPEG encoding) to animated meshes. This paper describes an algorithm to generate geometry videos from animated meshes. The main insight of this paper is that Geometry Videos re-sample and re-organize the geometry information in such a way that it becomes very compressible. They provide a unified and intuitive method for level-of-detail control, both in terms of mesh resolution (by scaling the two spatial dimensions) and of frame rate (by scaling the temporal dimension). Geometry Videos have a very uniform and regular structure. Their resource and computational requirements can be calculated exactly, hence making them also suitable for applications requiring level-of-service guarantees.
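True geometry images re-parameterise the mesh onto a regular grid through a cut of the surface; the sketch below shows only the simpler packing idea, arranging per-vertex positions into fixed-size 2D "frames" whose pixels hold (x, y, z), so that a standard 2D codec could exploit spatial and temporal coherence (function names and layout are assumptions for illustration):

```python
def pack_frame(vertices, width):
    """Pack per-vertex positions into rows of a width-fixed 2D grid,
    padding the last row so every frame has identical dimensions."""
    padded = list(vertices) + [(0.0, 0.0, 0.0)] * (-len(vertices) % width)
    return [padded[i:i + width] for i in range(0, len(padded), width)]

def geometry_video(frames, width):
    """A 'video' is just the per-frame packed position images; scaling
    the two spatial dimensions gives mesh LoD, scaling the temporal
    dimension gives frame-rate LoD, as the paper describes."""
    return [pack_frame(f, width) for f in frames]
```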
Fast and deep deformation approximations
Character rigs are procedural systems that compute the shape of an animated character for a given pose. They can be highly complex and must account for bulges, wrinkles, and other aspects of a character's appearance. When comparing film-quality character rigs with those designed for real-time applications, there is typically a substantial and readily apparent difference in the quality of the mesh deformations. Real-time rigs are limited by a computational budget and often trade realism for performance. Rigs for film do not have this same limitation, and character riggers can make the rig as complicated as necessary to achieve realistic deformations. However, increasing the rig complexity slows rig evaluation, and the animators working with it can become less efficient and may experience frustration. In this paper, we present a method to reduce the time required to compute mesh deformations for film-quality rigs, allowing better interactivity during animation authoring and use in real-time games and applications. Our approach learns the deformations from an existing rig by splitting the mesh deformation into linear and nonlinear portions. The linear deformations are computed directly from the transformations of the rig's underlying skeleton. We use deep learning methods to approximate the remaining nonlinear portion. In the examples we show from production rigs used to animate lead characters, our approach reduces the computational time spent on evaluating deformations by a factor of 5×–10×. This significant savings allows us to run the complex, film-quality rigs in real-time even when using a CPU-only implementation on a mobile device.
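The linear portion computed from the skeleton is standard linear blend skinning; a minimal sketch follows, with the learned nonlinear part stubbed as a per-vertex `residual` argument standing in for the network's predicted displacement (function names and the 3×4 row-major matrix layout are assumptions, not the paper's API):

```python
def transform(m, p):
    """Apply a 3x4 affine matrix (row-major) to a 3D point."""
    return tuple(m[r][0] * p[0] + m[r][1] * p[1] + m[r][2] * p[2] + m[r][3]
                 for r in range(3))

def skin_vertex(rest_p, bone_mats, weights, residual=(0.0, 0.0, 0.0)):
    """Linear part: blend the bone transforms of a rest-pose point by
    its skinning weights, then add the nonlinear displacement that a
    trained network would predict for this vertex and pose."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bone_mats, weights):
        q = transform(m, rest_p)
        for i in range(3):
            out[i] += w * q[i]
    return tuple(out[i] + residual[i] for i in range(3))
```

Because the blend is a fixed linear function of the skeleton, only the residual needs learning, which is what keeps evaluation cheap enough for real-time use.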
Human Performance Modeling and Rendering via Neural Animated Mesh
We have recently seen tremendous progress in the neural advances for
photo-real human modeling and rendering. However, it's still challenging to
integrate them into an existing mesh-based pipeline for downstream
applications. In this paper, we present a comprehensive neural approach for
high-quality reconstruction, compression, and rendering of human performances
from dense multi-view videos. Our core intuition is to bridge the traditional
animated mesh workflow with a new class of highly efficient neural techniques.
We first introduce a neural surface reconstructor for high-quality surface
generation in minutes. It marries the implicit volumetric rendering of the
truncated signed distance field (TSDF) with multi-resolution hash encoding. We
further propose a hybrid neural tracker to generate animated meshes, which
combines explicit non-rigid tracking with implicit dynamic deformation in a
self-supervised framework. The former provides the coarse warping back into the
canonical space, while the latter implicit one further predicts the
displacements using the 4D hash encoding as in our reconstructor. Then, we
discuss the rendering schemes using the obtained animated meshes, ranging from
dynamic texturing to lumigraph rendering under various bandwidth settings. To
strike an intricate balance between quality and bandwidth, we propose a
hierarchical solution by first rendering 6 virtual views covering the performer
and then conducting occlusion-aware neural texture blending. We demonstrate the
efficacy of our approach in a variety of mesh-based applications and
photo-realistic free-view experiences on various platforms, e.g., inserting
virtual human performances into real environments through mobile AR or
immersively watching talent shows with VR headsets.
Artimate: an articulatory animation framework for audiovisual speech synthesis
We present a modular framework for articulatory animation synthesis using
speech motion capture data obtained with electromagnetic articulography (EMA).
Adapting a skeletal animation approach, the articulatory motion data is applied
to a three-dimensional (3D) model of the vocal tract, creating a portable
resource that can be integrated in an audiovisual (AV) speech synthesis
platform to provide realistic animation of the tongue and teeth for a virtual
character. The framework also provides an interface to articulatory animation
synthesis, as well as an example application to illustrate its use with a 3D
game engine. We rely on cross-platform, open-source software and open standards
to provide a lightweight, accessible, and portable workflow. (Workshop on Innovation and Applications in Speech Technology, 2012)
Matrix-based Parameterizations of Skeletal Animated Appearance
While realistic rendering gains popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that approximates realistic rendering results in real time.
Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters for a real-time approximation. Establishing a linear dependency between motion and appearance is the basis of our method.
We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects blocking ambient light, which is assumed uniform. Ambient occlusion is a view-independent technique that has become essential for real-time realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry.
We are able to approximate ground-truth ambient occlusion with low error. Our technique can also be extended to other visual effects, such as rendering of human skin (subsurface scattering), view-dependent color changes, muscle deformation, fur, or clothing.
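The linear motion-to-appearance dependency at the heart of the method can be illustrated in one dimension: fit a vertex's ambient-occlusion value as a linear function of a single scalar pose parameter by least squares. The thesis uses richer matrix-based parameterizations over full skeletal poses; everything below is a simplified, illustrative assumption:

```python
def fit_linear_ao(poses, ao_samples):
    """Least-squares fit of ao ~ a * pose + b for one vertex and one
    scalar pose parameter (e.g. a joint angle), from sampled frames."""
    n = len(poses)
    mx = sum(poses) / n
    my = sum(ao_samples) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(poses, ao_samples))
    var = sum((x - mx) ** 2 for x in poses)
    a = cov / var
    return a, my - a * mx

def predict_ao(coeffs, pose):
    """Evaluate the fitted linear model at runtime, clamping to the
    valid occlusion range [0, 1]."""
    a, b = coeffs
    return min(1.0, max(0.0, a * pose + b))
```

The fit runs once as a precomputation over sampled animation frames; at runtime only the cheap linear evaluation per vertex remains, which is what makes the approach real-time friendly.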