Theoretical and numerical comparison of hyperelastic and hypoelastic formulations for Eulerian non-linear elastoplasticity
The aim of this paper is to compare a hyperelastic with a hypoelastic model
describing the Eulerian dynamics of solids in the context of non-linear
elastoplastic deformations. Specifically, we consider the well-known
hypoelastic Wilkins model, which is compared against a hyperelastic model based
on the work of Godunov and Romenski. First, we discuss some general conceptual
differences between the two approaches. Second, a detailed study of both models
is presented, in which the differences are made evident with the aid of a
hypoelastic-type model derived from the hyperelastic model and the particular
equation of state used in this paper. Third, using the same high-order ADER
Finite Volume and Discontinuous Galerkin methods on fixed and moving
unstructured meshes for both models, a wide range of numerical benchmark test
problems has been solved. The numerical solutions obtained for the two models
are compared directly with each other. For small elastic deformations, the two
models produce very similar solutions. However, when large elastic or
elastoplastic deformations occur, the solutions exhibit larger differences.
A survey of real-time crowd rendering
In this survey we review, classify and compare existing approaches for real-time crowd rendering. We first give an overview of character animation techniques, as they are tightly coupled to crowd rendering performance, and then analyze the state of the art in crowd rendering. We discuss different representations for level-of-detail (LoD) rendering of animated characters, including polygon-based, point-based, and image-based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo-instancing, palette skinning, and dynamic key-pose caching, which benefit from current graphics hardware. We also address other factors affecting the performance and realism of crowds, such as lighting, shadowing, clothing and variability. Finally, we provide an exhaustive comparison of the most relevant approaches in the field.
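One of the classic acceleration schemes named above, frustum culling, can be sketched for crowds by testing each character's bounding sphere against the view-frustum planes. The following is a minimal illustration, not code from the survey; all names and the plane convention (inward-pointing normals) are assumptions.

```python
# Minimal frustum-culling sketch for a crowd of characters (hypothetical names).
# Each character is approximated by a bounding sphere (center, radius); each
# frustum plane is (a, b, c, d) with an inward-pointing normal, so a point p is
# inside the half-space when a*px + b*py + c*pz + d >= 0.

def sphere_outside_plane(center, radius, plane):
    """True if the sphere lies entirely on the outside of the plane."""
    a, b, c, d = plane
    signed_distance = a * center[0] + b * center[1] + c * center[2] + d
    return signed_distance < -radius

def cull(characters, frustum_planes):
    """Keep only characters whose bounding sphere touches the frustum."""
    visible = []
    for center, radius in characters:
        if not any(sphere_outside_plane(center, radius, p) for p in frustum_planes):
            visible.append((center, radius))
    return visible

# Example: a single clipping plane x >= 0; the character at x = -2 is culled.
crowd = [((1.0, 0.0, 0.0), 0.5), ((-2.0, 0.0, 0.0), 0.5)]
print(cull(crowd, [(1.0, 0.0, 0.0, 0.0)]))
```

In a real crowd renderer this per-character test would typically feed the LoD selection stage, so that only visible characters are assigned a representation and submitted to the GPU.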
Multi-scale space-variant FRep cellular structures
Existing mesh- and voxel-based modeling methods encounter difficulties when dealing with objects containing cellular structures
at several scale levels whose parameters vary in space. We describe an alternative approach based on real functions evaluated procedurally at any given point. This allows for modeling fully parameterized, nested and multi-scale cellular
structures with dynamic variations in geometric and cellular properties. The geometry of a base unit cell is defined using Function Representation (FRep) based primitives and operations. The unit cell is then replicated in space using periodic
space mappings such as sawtooth and triangle waves. While being replicated, the unit cell can vary its geometry and topology due
to the use of dynamic parameterization. We illustrate this approach by several examples of microstructure generation within a given volume or
along a given surface. We also outline methods for direct rendering and fabrication that do not involve auxiliary mesh or voxel
representations.
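The core idea described above (an FRep unit cell replicated by a periodic space mapping, with parameters varying in space) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the unit cell, the radius law, and all names are assumptions, using the FRep convention that f >= 0 denotes points inside the solid.

```python
import math

def sawtooth(x, period):
    """Periodic space mapping: fold x into one period [-period/2, period/2)."""
    return x - period * math.floor(x / period + 0.5)

def unit_cell(x, y, z, r):
    """FRep primitive for the base unit cell: a sphere of radius r.
    f >= 0 inside the solid, f < 0 outside."""
    return r * r - (x * x + y * y + z * z)

def cellular_structure(x, y, z, period=1.0):
    """Evaluate the structure procedurally at any point (x, y, z)."""
    # Space-variant parameterization: cell radius varies with x
    # (a hypothetical law chosen for illustration).
    r = 0.2 + 0.1 * (1.0 + math.sin(x))
    # Replicate the unit cell across space by folding coordinates
    # with the sawtooth mapping before evaluating the cell function.
    return unit_cell(sawtooth(x, period),
                     sawtooth(y, period),
                     sawtooth(z, period), r)

# Point-membership query: the center of each cell is inside material.
print(cellular_structure(0.0, 0.0, 0.0) > 0)
```

Because the defining function is evaluated on demand at any point, no mesh or voxel grid is ever stored, which is what allows nested, multi-scale structures with spatially varying parameters.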
2D-to-3D facial expression transfer
Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in the 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape --obtained from standard factorization approaches over the input video-- using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.