Automatic facial expression tracking for 4D range scans
This paper presents a fully automatic approach to spatio-temporal facial expression tracking for 4D range scans, with no manual intervention (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan while accounting for differences in scale. A deformable model, physically based on thin shells, is proposed to faithfully reconstruct the facial surface and texture from the range data. The reconstructed facial model is then used to track the facial expressions present in a sequence of range scans via the same deformable model.
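As an illustration of the registration step, here is a minimal sketch of one scaled rigid-alignment update in the spirit of SICP, assuming point correspondences between template and scan are already known (an Umeyama-style closed form; a full SICP loop would re-estimate correspondences each iteration via nearest-neighbour queries). The function name and inputs are illustrative, not the paper's actual code.

```python
import numpy as np

def scaled_rigid_align(src, dst):
    """One scaled rigid-alignment step (Umeyama-style closed form).

    src, dst: (N, 3) arrays of corresponding points. Returns scale s,
    rotation R, translation t minimising sum_i ||s*R@src[i] + t - dst[i]||^2.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)             # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                           # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)      # source point-set variance
    s = np.trace(np.diag(D) @ S) / var_src       # optimal uniform scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```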
Scalable partitioning for parallel position based dynamics
We introduce a practical partitioning technique designed for parallelizing Position Based Dynamics, and exploiting
the ubiquitous multi-core processors present in current commodity GPUs. The input is a set of particles whose
dynamics is influenced by spatial constraints. In the initialization phase, we build a graph in which each node
corresponds to a constraint and two constraints are connected by an edge if they influence at least one common
particle. We introduce a novel greedy algorithm for inserting additional constraints (phantoms) in the graph
such that the resulting topology is q-colourable, where q ≥ 2 is an arbitrary number. We colour the graph, and
the constraints with the same colour are assigned to the same partition. The set of constraints belonging to
each partition is then solved in parallel during the animation phase. We demonstrate that, by using our
partitioning technique, the performance hit caused by the GPU kernel calls is significantly decreased, leaving
the visual quality, robustness and speed of serial position based dynamics unaffected.
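As a rough illustration of the initialisation phase, here is a minimal sketch of the constraint-graph construction plus a greedy colouring pass, assuming each constraint is given as a tuple of particle indices. The phantom-insertion step that bounds the number of colours is omitted, and all names are illustrative.

```python
from collections import defaultdict
from itertools import count

def build_constraint_graph(constraints):
    """constraints: list of particle-index tuples. Two constraints are
    adjacent iff they influence at least one common particle."""
    by_particle = defaultdict(list)
    for ci, parts in enumerate(constraints):
        for p in parts:
            by_particle[p].append(ci)
    adj = defaultdict(set)
    for cs in by_particle.values():
        for a in cs:
            for b in cs:
                if a != b:
                    adj[a].add(b)
    return adj

def greedy_colour(adj, n_constraints):
    """Give each constraint the smallest colour unused by its neighbours.
    Constraints sharing a colour touch disjoint particles, so each colour
    class (partition) can be solved in one parallel pass."""
    colour = {}
    for v in range(n_constraints):
        used = {colour[u] for u in adj[v] if u in colour}
        colour[v] = next(c for c in count() if c not in used)
    return colour
```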
An enhanced framework for hair modeling and real-time animation
Master's thesis (Master of Science).
Shape: A 3D Modeling Tool for Astrophysics
We present a flexible interactive 3D morpho-kinematical modeling application
for astrophysics. Compared to other systems, our application reduces the
restrictions on the physical assumptions, data type and amount that are required
for a reconstruction of an object's morphology. It is one of the first publicly
available tools to apply interactive graphics to astrophysical modeling. The
tool allows astrophysicists to provide a-priori knowledge about the object by
interactively defining 3D structural elements. By direct comparison of model
prediction with observational data, model parameters can then be automatically
optimized to fit the observation. The tool has already been successfully used
in a number of astrophysical research projects.
Comment: 13 pages, 11 figures, accepted for publication in IEEE Transactions on Visualization and Computer Graphics.
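As a sketch of the automatic optimisation the abstract mentions, the following assumes a hypothetical render_model function that projects the interactively defined 3D structural elements into a synthetic observation for comparison; scipy's least_squares is one plausible fitting backend, not necessarily the tool's.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_model(params0, observed_image, render_model):
    """Optimise model parameters so the rendered model matches the
    observation. render_model(params) -> 2D array is a hypothetical
    stand-in for the tool's projection of the 3D model."""
    def residuals(params):
        # Pixel-wise difference between model prediction and data.
        return (render_model(params) - observed_image).ravel()
    result = least_squares(residuals, params0)
    return result.x
```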
Interactive Procedural Modelling of Coherent Waterfall Scenes
Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of the waterfalls. We present the first solution for the interactive procedural design of coherent waterfall scenes. Our system combines vectorial editing, where the user assembles elements to create a waterfall network over an existing terrain, with a procedural model that parametrizes these elements from hydraulic exchanges; enforces consistency between the terrain and the flow; and generates detailed geometry, animated textures and shaders for the waterfalls and their surroundings. The tool is interactive, yielding visual feedback after each edit.
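One plausible reading of the "consistency between the terrain and the flow" constraint is simple discharge conservation over the user-assembled network. The sketch below propagates flow through a waterfall network given as a DAG; the conservation rule and all names are illustrative assumptions, not the paper's actual hydraulic model.

```python
from collections import defaultdict

def propagate_discharge(edges, sources):
    """Propagate water discharge through a waterfall network (a DAG).
    edges: list of (upstream, downstream) element ids.
    sources: {element_id: inflow} for user-set springs.
    Each element's discharge is its own inflow plus everything
    received from upstream (simple conservation)."""
    children = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    discharge = {n: float(sources.get(n, 0.0)) for n in nodes}
    frontier = [n for n in nodes if indeg[n] == 0]  # topological sweep
    while frontier:
        u = frontier.pop()
        for v in children[u]:
            discharge[v] += discharge[u]
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    return discharge
```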
Shape Animation with Combined Captured and Simulated Dynamics
We present a novel volumetric animation generation framework that creates new types of animations from raw 3D surface or point-cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape, and is thus particularly suitable for the output of performance-capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, in which the original captured actor can be reshaped, disassembled or reassembled under user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
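For context, a centroidal Voronoi tessellation of a point cloud can be approximated with a Lloyd-style iteration, sketched below; scipy's cKDTree handles the nearest-site queries. This is a generic CVT sketch under those assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(points, n_sites, iters=20, seed=0):
    """Approximate a centroidal Voronoi tessellation of a point cloud:
    each site repeatedly moves to the centroid of the points nearest
    to it. Returns the settled sites and each point's cell index."""
    rng = np.random.default_rng(seed)
    sites = points[rng.choice(len(points), n_sites, replace=False)].astype(float)
    owner = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        owner = cKDTree(sites).query(points)[1]  # nearest site per point
        for s in range(n_sites):
            cell = points[owner == s]
            if len(cell):                        # skip empty cells
                sites[s] = cell.mean(axis=0)
    return sites, owner
```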