
    Fast Simulation of Skin Sliding

    Skin sliding is the phenomenon of the skin moving over underlying layers of fat, muscle and bone. Due to the complex interconnections between these separate layers and their differing elastic properties, it is difficult to model and expensive to compute. We present a novel method to simulate this phenomenon in real time by remeshing the surface based on a parameter-space resampling. To evaluate the surface parametrization, we borrow a technique from structural engineering known as the force density method, which solves for an energy-minimizing form with a sparse linear system. Our method creates a realistic approximation of skin sliding in real time, reducing texture distortions in the region of the deformation. In addition, it is flexible, simple to use, and can be incorporated into any animation pipeline.
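    The force density method mentioned above reduces form-finding to a sparse linear solve: with a force density assigned to every mesh edge, the equilibrium positions of the free vertices satisfy a weighted graph-Laplacian system with the fixed vertices on the right-hand side. Below is a minimal sketch, assuming given per-edge force densities and SciPy's sparse solvers; it is not the paper's implementation.

```python
# Sketch of the force density method: solve a weighted graph-Laplacian
# system for the free vertices, with anchor vertices fixed in place.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def force_density_solve(n_verts, edges, q, fixed_idx, fixed_pos):
    """Return (n_verts, 3) equilibrium positions; q holds per-edge force densities."""
    i, j = np.array(edges).T
    w = np.asarray(q, dtype=float)
    # Assemble the weighted graph Laplacian L = C^T Q C directly from edges.
    L = sp.coo_matrix((np.concatenate([w, w, -w, -w]),
                       (np.concatenate([i, j, i, j]),
                        np.concatenate([i, j, j, i]))),
                      shape=(n_verts, n_verts)).tocsr()
    free = np.setdiff1d(np.arange(n_verts), fixed_idx)
    x = np.zeros((n_verts, 3))
    x[fixed_idx] = fixed_pos
    # Move the fixed-vertex contribution to the right-hand side and solve per axis.
    rhs = -(L[free][:, fixed_idx] @ x[fixed_idx])
    solve = spla.factorized(L[free][:, free].tocsc())
    x[free] = np.column_stack([solve(rhs[:, k]) for k in range(3)])
    return x
```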

    Implicit Skinning: Real-Time Skin Deformation with Contact Modeling

    SIGGRAPH 2013 Conference Proceedings. Geometric skinning techniques, such as smooth blending or dual quaternions, are very popular in the industry for their high performance, but fail to mimic realistic deformations. Other methods make use of physical simulation or control volumes to better capture the skin behavior, yet they cannot deliver real-time feedback. In this paper, we present the first purely geometric method handling skin contact effects and muscular bulges in real time. The insight is to exploit the advanced composition mechanisms of volumetric, implicit representations to correct the results of geometric skinning techniques. The mesh is first approximated by a set of implicit surfaces. At each animation step, these surfaces are combined in real time and used to adjust the positions of mesh vertices, starting from their smooth skinning positions. This deformation step is done without any loss of detail and seamlessly handles contacts between skin parts. As it acts as a post-process, our method fits well into the standard animation pipeline. Moreover, it requires no intensive computation step such as collision detection, and therefore provides real-time performance.
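    To make the post-process concrete, here is a hedged sketch of the adjustment step: each skin part is approximated by an implicit field, the fields are composed every frame, and every skinned vertex is marched along the composed field's gradient until it recovers the iso-value it had in the rest pose. The Gaussian source fields, the max composition and the step sizes are illustrative assumptions, not the paper's actual operators.

```python
# Sketch: correct a geometrically skinned vertex by restoring its rest-pose
# iso-value in a composed implicit field.
import numpy as np

def field(p, sources, radius):
    """Composed scalar field: union-like max over per-part Gaussian fields."""
    d2 = ((p[None, :] - sources) ** 2).sum(axis=1)
    return np.max(np.exp(-d2 / radius ** 2))

def grad(p, sources, radius, eps=1e-4):
    """Central-difference gradient of the composed field."""
    g = np.zeros(3)
    for k in range(3):
        dp = np.zeros(3); dp[k] = eps
        g[k] = (field(p + dp, sources, radius) - field(p - dp, sources, radius)) / (2 * eps)
    return g

def adjust_vertex(p_skinned, iso_target, sources, radius, steps=10, lam=0.5):
    """March the skinned vertex along the field gradient toward its rest iso-value."""
    p = p_skinned.copy()
    for _ in range(steps):
        f = field(p, sources, radius)
        g = grad(p, sources, radius)
        gn = g / (np.dot(g, g) + 1e-12)
        p += lam * (iso_target - f) * gn   # damped Newton-like step on the iso-value
        if abs(iso_target - f) < 1e-3:
            break
    return p
```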

    A survey of real-time crowd rendering

    In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss acceleration techniques specific to crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We then address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
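    Palette skinning, one of the crowd-specific accelerations named above, evaluates linear blend skinning against a shared palette of bone matrices so that many character instances can be skinned from the same data. Below is a minimal CPU-side sketch with illustrative array shapes; the real technique stores the palette on the GPU (e.g. in a texture or buffer) and runs in the vertex shader.

```python
# Matrix-palette linear blend skinning: each vertex blends up to four
# transforms looked up from a shared palette by index.
import numpy as np

def palette_skin(rest_pos, bone_ids, bone_weights, palette):
    """
    rest_pos     : (V, 3) rest-pose vertex positions
    bone_ids     : (V, 4) indices into the matrix palette
    bone_weights : (V, 4) blend weights, rows summing to 1
    palette      : (B, 4, 4) bone transforms (bind-pose inverse premultiplied)
    returns      : (V, 3) skinned positions
    """
    hom = np.concatenate([rest_pos, np.ones((len(rest_pos), 1))], axis=1)      # (V, 4)
    mats = palette[bone_ids]                                                    # (V, 4, 4, 4)
    blended = (bone_weights[..., None, None] * mats).sum(axis=1)                # (V, 4, 4)
    skinned = np.einsum('vij,vj->vi', blended, hom)                             # apply blended matrix
    return skinned[:, :3]
```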

    A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint

    3D shape editing is widely used in a range of applications such as movie production, computer games and computer-aided design. It is also a popular research topic in computer graphics and computer vision. In past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by the transformation and weights that are optimal with respect to an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep-learning-based editing methods have been developed in this field, and these are naturally data-driven. We survey recent research, from the geometric viewpoint to emerging neural deformation techniques, and categorize the methods into organic shape editing and man-made model editing. Both traditional methods and recent neural-network-based methods are reviewed.
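    The traditional energy-minimization formulation referred to above can be illustrated with a small example: with a Laplacian (detail-preserving) energy and soft handle constraints, the deformed vertex positions are the solution of a sparse linear least-squares problem. The uniform graph Laplacian and the penalty weight below are simplifying assumptions, not any specific paper's formulation.

```python
# Laplacian-based shape editing as a sparse least-squares problem:
# preserve differential coordinates while pulling handles to targets.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_edit(verts, edges, handle_idx, handle_pos, w_handle=100.0):
    n = len(verts)
    i, j = np.array(edges).T
    ones = np.ones(len(i))
    A = sp.coo_matrix((np.r_[ones, ones], (np.r_[i, j], np.r_[j, i])), shape=(n, n)).tocsr()
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A                      # uniform graph Laplacian
    delta = L @ verts                          # rest-pose differential coordinates
    # Soft positional constraints on the handle vertices.
    C = sp.coo_matrix((np.full(len(handle_idx), w_handle),
                       (np.arange(len(handle_idx)), handle_idx)),
                      shape=(len(handle_idx), n))
    M = sp.vstack([L, C]).tocsc()
    rhs = np.vstack([delta, w_handle * np.asarray(handle_pos)])
    return np.column_stack([spla.lsqr(M, rhs[:, k])[0] for k in range(3)])
```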

    Automatic generation of dynamic skin deformation for animated characters

    Non-automatic rigging requires heavy human involvement, while existing automatic rigging algorithms are computationally inefficient; in particular, for current curve-based skin deformation methods, identifying the iso-parametric curves and creating the animation skeleton requires tedious and time-consuming manual work. Although several automatic rigging methods have been developed, they do not target curve-based models. To tackle this issue, this paper proposes a new rigging algorithm for the automatic generation of dynamic skin deformation that identifies iso-parametric curves and creates an animation skeleton in a few milliseconds; it can be seamlessly used in curve-based skin deformation methods to make the rigging process fast enough for highly efficient computer animation applications.
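    As a hedged illustration of the kind of rigging step described (not the paper's algorithm), iso-parametric curves of a tensor-product parametric surface can be sampled at fixed parameter values and their centroids chained into skeleton joints; the surface evaluator, the uniform parameter sampling and the chain topology below are all assumptions.

```python
# Illustrative skeleton construction from iso-parametric curves of a
# parametric surface: one joint per curve, joints chained in order.
import numpy as np

def build_skeleton(surface, n_curves=8, samples_per_curve=32):
    """surface(u, v) -> 3D point; returns joint positions and parent indices."""
    joints, parents = [], []
    u = np.linspace(0.0, 1.0, samples_per_curve)
    for k, v in enumerate(np.linspace(0.0, 1.0, n_curves)):
        curve = np.array([surface(ui, v) for ui in u])   # iso-parametric curve at v
        joints.append(curve.mean(axis=0))                # joint at curve centroid
        parents.append(k - 1)                            # simple chain topology, root has parent -1
    return np.array(joints), parents
```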

    Zero-shot Pose Transfer for Unrigged Stylized 3D Characters

    Transferring the pose of a reference avatar to stylized 3D characters of various shapes is a fundamental task in computer graphics. Existing methods either require the stylized characters to be rigged, or they use the stylized character in the desired pose as ground truth during training. We present a zero-shot approach that requires only widely available deformed non-stylized avatars for training, and deforms stylized characters of significantly different shapes at inference. Classical methods achieve strong generalization by deforming the mesh at the triangle level, but this requires labelled correspondences. We leverage the power of local deformation without requiring explicit correspondence labels. We introduce a semi-supervised shape-understanding module to bypass the need for explicit correspondences at test time, and an implicit pose deformation module that deforms individual surface points to match the target pose. Furthermore, to encourage realistic and accurate deformation of stylized characters, we introduce an efficient volume-based test-time training procedure. Because it needs neither rigging nor the deformed stylized character at training time, our model generalizes to categories with scarce annotation, such as stylized quadrupeds. Extensive experiments demonstrate the effectiveness of the proposed method compared to state-of-the-art approaches trained with comparable or more supervision. Our project page is available at https://jiashunwang.github.io/ZPT (CVPR 2023).

    Composing quadrilateral meshes for animation

    The modeling-by-composition paradigm can be a powerful tool in modern animation pipelines. We propose two novel interactive techniques to compose 3D assets that enable artists to freely remove, detach and combine components of organic models. The idea behind our methods is to preserve most of the original information in the input characters and blend it where necessary. The first method, QuadMixer, provides a robust tool to compose the quad layouts of watertight pure-quadrilateral meshes, exploiting the boolean operations defined on triangles. The quad layout is a crucial property for many applications, since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving the shape. Our technique keeps untouched all the quads in the patches that are not involved in the blending, and the resulting meshes preserve the originally designed edge flows which, by construction, are captured and incorporated into the new quads. SkinMixer extends this approach to compose skinned models, taking into account not only the surface but also the data structures used to animate the character. We propose a new operation-based technique that preserves and smoothly merges meshes, skeletons, and skinning weights. The retopology approach of QuadMixer is extended to work on quad-dominant and arbitrarily complex surfaces; instead of relying on boolean operations on triangle meshes, we manipulate signed distance fields to generate an implicit surface. The results preserve most of the information in the input assets, blending them in the intersection regions, and the resulting characters are ready to be used in animation pipelines. Given the high quality of the results, we believe that our methods could have a significant impact on the entertainment industry. Integrated into current software for 3D modeling, they would provide a powerful tool for artists: allowing them to automatically reuse parts of their well-designed characters could lead to a new approach to creating models and significantly reduce the cost of the process.
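    A minimal sketch of the signed-distance-field composition that the SkinMixer part of the work builds on: two SDFs sampled on a grid are merged with a plain union away from the seam and a smooth minimum near it, and the zero level set of the result is the blended implicit surface. The sphere SDFs and the blend width are illustrative assumptions, not the method's actual fields.

```python
# Compose two signed distance fields with a polynomial smooth minimum;
# the zero level set of the result is a blended implicit surface.
import numpy as np

def sphere_sdf(p, center, radius):
    return np.linalg.norm(p - center, axis=-1) - radius

def smooth_union(d1, d2, k=0.1):
    """Polynomial smooth minimum of two signed distances (blend width k)."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

# Sample the composed field on a regular grid; the zero level set could then
# be extracted with marching cubes to obtain the blended mesh.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 64)] * 3, indexing='ij'), axis=-1)
d = smooth_union(sphere_sdf(grid, np.array([-0.3, 0.0, 0.0]), 0.5),
                 sphere_sdf(grid, np.array([0.3, 0.0, 0.0]), 0.5))
```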

    Matrix-based Parameterizations of Skeletal Animated Appearance

    While realistic rendering gains popularity in industry, photorealistic and physically-based techniques often necessitate offline processing due to their computational complexity. Real-time applications, such as video games and virtual reality, rely mostly on approximation and precomputation techniques to achieve realistic results. The objective of this thesis is to investigate different animated parameterizations in order to devise a technique that can approximate realistic rendering results in real time. Our investigation focuses on rendering visual effects applied to skinned, skeleton-based characters. Combined parameterizations of motion and appearance data are used to extract parameters for the real-time approximation; establishing a linear dependency between motion and appearance is the basis of our method. We focus on ambient occlusion, a simulation of the shadowing caused by nearby objects that block ambient light, which is assumed to be uniform. Ambient occlusion is a view-independent technique that is now essential for real-time realism. We consider different parameterization techniques that treat the mesh space depending on skeletal animation information and/or mesh geometry, and we are able to approximate ground-truth ambient occlusion with low error. Our technique could also be extended to other visual effects, such as rendering human skin (subsurface scattering), view-dependent color changes, muscle deformation, fur, or clothing.
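    A hedged sketch of the linear motion-to-appearance dependency at the core of this thesis: fit a linear map from skeletal-pose features to per-vertex ambient occlusion on frames with precomputed ground-truth AO, then evaluate that map at runtime with a single matrix product. The flat pose feature vector and the plain least-squares fit are assumptions about how such a parameterization could be set up, not the thesis' exact construction.

```python
# Linear pose-to-appearance approximation: least-squares fit offline,
# one matrix-vector product per frame at runtime.
import numpy as np

def fit_pose_to_ao(pose_feats, ao_values):
    """
    pose_feats : (F, P) per-frame pose features (e.g. flattened joint transforms)
    ao_values  : (F, V) per-frame, per-vertex ambient occlusion (ground truth)
    returns    : (P + 1, V) linear map including a constant term
    """
    X = np.hstack([pose_feats, np.ones((len(pose_feats), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, ao_values, rcond=None)
    return W

def predict_ao(pose_feat, W):
    """Runtime evaluation: approximate per-vertex AO from the current pose."""
    return np.append(pose_feat, 1.0) @ W
```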