
    A survey of real-time crowd rendering

    In this survey we review, classify, and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review criteria for runtime LoD selection. Beyond LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing, and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
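
    To make runtime LoD selection concrete, the sketch below buckets a crowd of characters by camera distance so each bucket can be drawn in one instanced batch. The thresholds, representation names, and the Character type are illustrative assumptions, not taken from the survey; screen-space coverage or animation-error metrics could replace the distance test.

```python
# A minimal sketch of distance-based LoD selection for crowd characters.
# All tier names and thresholds are hypothetical.
from dataclasses import dataclass

import numpy as np

@dataclass
class Character:
    position: np.ndarray  # world-space position, shape (3,)

# Hypothetical LoD tiers: full-detail skinned mesh near the camera,
# simpler representations farther away, image-based impostors at the back.
LOD_THRESHOLDS = [
    (10.0, "skinned_mesh_high"),
    (30.0, "skinned_mesh_low"),
    (80.0, "point_sample_cloud"),
]
FALLBACK = "impostor"  # image-based representation beyond the last threshold

def select_lod(character: Character, camera_pos: np.ndarray) -> str:
    """Pick a representation from camera distance (a screen-space
    coverage metric could be substituted here)."""
    distance = float(np.linalg.norm(character.position - camera_pos))
    for max_dist, representation in LOD_THRESHOLDS:
        if distance <= max_dist:
            return representation
    return FALLBACK

# Usage: bucket the crowd by representation so each bucket can be
# rendered with instancing in a single batch.
crowd = [Character(np.random.uniform(-100, 100, 3)) for _ in range(1000)]
camera = np.zeros(3)
buckets: dict[str, list[Character]] = {}
for c in crowd:
    buckets.setdefault(select_lod(c, camera), []).append(c)
```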

    Real-time content-aware texturing for deformable surfaces

    Animation of models often introduces distortions to their parameterisation, which is typically optimised for a single frame. The net effect is that, under deformation, mapped features (UV texture maps, bump maps, or displacement maps) may stretch or scale in undesirable ways. Ideally, the appearance of such features should remain plausible under any underlying deformation. In this paper we introduce a real-time technique that reduces such distortions based on a distortion control (rigidity) map. In the two versions of our technique, the parameter space is warped in either an axis-aligned or a non-axis-aligned manner, based on the minimisation of a non-linear distortion metric. The minimisation is solved using a highly optimised hybrid CPU-GPU strategy. The result is real-time, dynamic, content-aware texturing that reduces distortions in a controlled way. The technique can be applied in a variety of scenarios, including reusing a low-geometric-complexity animated sequence with a multitude of detail maps, mapping dynamic, procedurally defined features onto deformable geometry, and previewing animation authoring on texture-mapped models.
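
    As a rough illustration of the axis-aligned idea, the sketch below redistributes the u coordinates of samples along one parametric direction so texel density follows the deformed surface's arc length, blended with the original spacing by a per-sample rigidity weight. This is my own simplification, not the paper's non-linear metric minimisation or its CPU-GPU solver.

```python
# A minimal sketch (not the paper's solver) of an axis-aligned
# parameter-space warp driven by a rigidity map.
import numpy as np

def warp_u(positions: np.ndarray, rigidity: np.ndarray) -> np.ndarray:
    """positions: (N, 3) points sampled along an isoline of the surface;
    rigidity: (N,) in [0, 1], 1 = keep original spacing, 0 = fully re-space.
    Returns a warped u coordinate in [0, 1] for each sample."""
    seg = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # 3D segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    u_arc = arc / arc[-1]                            # arc-length parameterisation
    u_orig = np.linspace(0.0, 1.0, len(positions))   # original uniform u
    # Blend per sample: rigid regions keep their original u.
    return rigidity * u_orig + (1.0 - rigidity) * u_arc

# Usage: a line whose second half is stretched; zero rigidity lets the
# warp compensate so texture density stays visually uniform.
x = np.concatenate([np.linspace(0, 1, 8), 1 + 2 * np.linspace(0.25, 1, 4)])
pts = np.stack([x, np.zeros_like(x), np.zeros_like(x)], axis=1)
u = warp_u(pts, rigidity=np.zeros(len(pts)))
```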

    COLLADA + MPEG-4 or X3D + MPEG-4

    The paper is an overview of standards for 3D graphics assets and applications. The authors analyzed the three main open standards dealing with three-dimensional (3D) graphics content and applications, X3D, COLLADA, and MPEG-4, to clarify the role of each with respect to the following criteria: the ability to describe only the graphics assets in a synthetic 3D scene or also its behavior as an application, compression capabilities, and suitability for authoring, transmission, and publishing. COLLADA could become the interchange format for authoring tools; MPEG-4 on top of it (as specified in MPEG-4 Part 25), the publishing format for graphics assets; and X3D, the standard for interactive applications, enriched by MPEG-4 compression in the case of online ones. The authors also noted that, to build a mobile application, a developer has to account for different hardware configurations and performance levels, operating systems, screen sizes, and input controls.

    Human Performance Modeling and Rendering via Neural Animated Mesh

    We have recently seen tremendous progress in neural methods for photo-real human modeling and rendering, yet integrating them into existing mesh-based pipelines for downstream applications remains challenging. In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos. Our core idea is to bridge the traditional animated-mesh workflow with a new class of highly efficient neural techniques. We first introduce a neural surface reconstructor for high-quality surface generation in minutes, which marries implicit volumetric rendering of a truncated signed distance field (TSDF) with multi-resolution hash encoding. We further propose a hybrid neural tracker to generate animated meshes, combining explicit non-rigid tracking with implicit dynamic deformation in a self-supervised framework: the former provides coarse warping back into the canonical space, while the latter predicts residual displacements using the same 4D hash encoding as our reconstructor. We then discuss rendering schemes for the resulting animated meshes, ranging from dynamic texturing to lumigraph rendering under various bandwidth settings. To balance quality against bandwidth, we propose a hierarchical solution that first renders six virtual views covering the performer and then performs occlusion-aware neural texture blending. We demonstrate the efficacy of our approach in a variety of mesh-based applications and photo-realistic free-view experiences on various platforms, e.g., inserting virtual human performances into real environments through mobile AR, or immersively watching talent shows with VR headsets.
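
    The multi-resolution hash encoding that the reconstructor builds on can be sketched as below, in the spirit of Instant NGP. The table size, level count, hash constants, and the nearest-vertex lookup (in place of trilinear interpolation) are simplifications of mine, not the paper's implementation; the small MLP that maps features to TSDF values is omitted.

```python
# A minimal sketch of multi-resolution hash encoding for 3D queries.
import numpy as np

NUM_LEVELS = 4
TABLE_SIZE = 2 ** 14          # entries per level (illustrative)
FEATURES = 2                  # feature channels per entry
BASE_RES, MAX_RES = 16, 128
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

rng = np.random.default_rng(0)
tables = rng.normal(0, 1e-2, (NUM_LEVELS, TABLE_SIZE, FEATURES))

def hash_grid(coords: np.ndarray) -> np.ndarray:
    """Spatial hash of integer grid coordinates, shape (..., 3)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % TABLE_SIZE

def encode(x: np.ndarray) -> np.ndarray:
    """Encode points x in [0, 1]^3 into concatenated per-level features."""
    levels = np.geomspace(BASE_RES, MAX_RES, NUM_LEVELS)
    feats = []
    for lvl, res in enumerate(levels):
        grid = np.round(x * res).astype(np.int64)  # nearest grid vertex
        feats.append(tables[lvl][hash_grid(grid)])
    return np.concatenate(feats, axis=-1)          # (..., NUM_LEVELS*FEATURES)

# Usage: features for a batch of query points, which would be fed to a
# small MLP predicting TSDF values.
z = encode(rng.uniform(0, 1, (8, 3)))  # shape (8, 8)
```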

    Artimate: an articulatory animation framework for audiovisual speech synthesis

    We present a modular framework for articulatory animation synthesis using speech motion-capture data obtained with electromagnetic articulography (EMA). Adapting a skeletal animation approach, the articulatory motion data is applied to a three-dimensional (3D) model of the vocal tract, creating a portable resource that can be integrated into an audiovisual (AV) speech synthesis platform to provide realistic animation of the tongue and teeth for a virtual character. The framework also provides an interface to articulatory animation synthesis, as well as an example application illustrating its use with a 3D game engine. We rely on cross-platform, open-source software and open standards to provide a lightweight, accessible, and portable workflow. (Presented at the Workshop on Innovation and Applications in Speech Technology, 2012.)
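
    One plausible way to drive a bone chain from EMA trajectories is sketched below: each bone is aimed at the next coil position on every frame. Both the synthetic coil data and the aim-based rigging are my own illustrative assumptions, not the Artimate implementation.

```python
# A minimal sketch: EMA coil trajectories drive a bone chain by aiming
# each bone at the next coil position per frame. Hypothetical setup.
import numpy as np

def aim_rotation(origin: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Rotation matrix whose +y column points from origin to target,
    i.e. it aligns the bone's rest axis (+y) with the aim direction."""
    y = target - origin
    y = y / np.linalg.norm(y)
    # Build an orthonormal frame around the aim axis.
    helper = (np.array([1.0, 0.0, 0.0]) if abs(y[0]) < 0.9
              else np.array([0.0, 0.0, 1.0]))
    z = np.cross(helper, y)
    z /= np.linalg.norm(z)
    x = np.cross(y, z)
    return np.stack([x, y, z], axis=1)

# Synthetic stand-in for session data: coils spaced along +y with small
# per-frame jitter; real coils come from the articulograph recording.
frames, n_coils = 100, 4
rng = np.random.default_rng(0)
base = np.arange(n_coils)[:, None] * np.array([0.0, 1.0, 0.0])
coils = base[None] + 0.05 * rng.normal(size=(frames, n_coils, 3))

# One bone per pair of consecutive coils, keyed on every frame.
bone_rotations = np.array([
    [aim_rotation(coils[f, i], coils[f, i + 1]) for i in range(n_coils - 1)]
    for f in range(frames)
])  # (frames, n_coils - 1, 3, 3)
```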

    Matrix-based Parameterizations of Skeletal Animated Appearance

    While realistic rendering gains popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that approximates realistic rendering results in real time. Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters for the real-time approximation; establishing a linear dependency between motion and appearance is the basis of our method. We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects that block ambient light, which is a view-independent technique important for real-time realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry, and we are able to approximate ground-truth ambient occlusion with low error. Our technique could also be extended to other visual effects, such as rendering human skin (subsurface scattering), view-dependent color changes, muscle deformation, fur, or cloth.
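
    The core idea, a linear dependency between motion and appearance, can be sketched as a least-squares fit from pose features to baked per-vertex ambient occlusion, so runtime AO reduces to one matrix product. The data below is synthetic and the parameterization is illustrative, not the thesis's matrix-based ones; real use would sample poses from animations and bake ground-truth AO offline.

```python
# A minimal sketch: fit AO ≈ pose @ W offline, evaluate per frame online.
import numpy as np

rng = np.random.default_rng(0)
n_poses, pose_dim, n_verts = 200, 30, 5000

P = rng.normal(size=(n_poses, pose_dim))        # pose features (e.g. joint angles)
P = np.hstack([P, np.ones((n_poses, 1))])       # bias column
A = rng.uniform(0, 1, size=(n_poses, n_verts))  # baked per-vertex AO per pose

# Offline: least-squares fit of the linear motion-to-appearance map.
W, *_ = np.linalg.lstsq(P, A, rcond=None)       # (pose_dim + 1, n_verts)

# Runtime: one matrix-vector product per frame recovers approximate AO.
pose = np.append(rng.normal(size=pose_dim), 1.0)
ao = np.clip(pose @ W, 0.0, 1.0)                # (n_verts,)
```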