
    Geometry Modeling for Unstructured Mesh Adaptation

    The quantification and control of discretization error is critical to obtaining reliable simulation results. Adaptive mesh techniques have the potential to automate discretization error control, but they have made limited impact on production analysis workflows. Recent progress has matured a number of independent implementations of flow solvers, error estimation methods, and anisotropic mesh adaptation mechanics. However, the poor integration of initial mesh generation and adaptive mesh mechanics with typical sources of geometry has hindered the adoption of adaptive mesh techniques; these geometries are often created in Mechanical Computer-Aided Design (MCAD) systems. The difficulty of this coupling is compounded by two factors: the inherent complexity of the model (e.g., large range of scales, bodies in proximity, details not required for analysis) and unintended geometry construction artifacts (e.g., translation, uneven parameterization, degeneracy, self-intersection, sliver faces, gaps, large tolerances between topological elements, local high curvature to enforce continuity). Manual preparation of geometry is commonly employed to enable fixed-grid and adaptive-grid workflows by reducing the severity and negative impacts of these construction artifacts, but manual process interaction inhibits workflow automation. Techniques to permit the use of complex geometry models and reduce the impact of geometry construction artifacts on unstructured grid workflows are presented; models from the AIAA Sonic Boom and High Lift Prediction workshops are used to demonstrate the utility of the current approach.
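    The anisotropic mesh adaptation mentioned above is commonly driven by a Riemannian metric field that prescribes the desired element size and orientation at each point. As a minimal sketch of that general idea (not the paper's implementation; the function name metric_edge_length and the endpoint-averaging quadrature are illustrative assumptions), the length of a mesh edge can be measured in the metric rather than in Euclidean space:

    import numpy as np

    def metric_edge_length(p0, p1, M0, M1):
        """Approximate edge length under a vertex-based anisotropic metric.

        p0, p1 : endpoint coordinates (length-3 arrays)
        M0, M1 : symmetric positive-definite 3x3 metric tensors at the endpoints
        Returns the edge length measured in metric space, using a simple
        average of the two endpoint metrics as the quadrature rule.
        """
        e = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
        return 0.5 * (np.sqrt(e @ M0 @ e) + np.sqrt(e @ M1 @ e))

    # Example: a metric requesting spacing 0.1 along x and 1.0 along y and z,
    # so an edge of Euclidean length 0.1 aligned with x has unit metric length.
    M = np.diag([1.0 / 0.1**2, 1.0, 1.0])
    print(metric_edge_length([0.0, 0.0, 0.0], [0.1, 0.0, 0.0], M, M))  # ~1.0

    Edges whose metric length is near 1 are considered well sized; a typical adaptation pass splits edges that are much longer than 1 in the metric and collapses edges that are much shorter.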

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape and is thus particularly suitable for the output of performance capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, such that the original captured actor can be reshaped, disassembled, or reassembled through user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
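    Centroidal Voronoi tessellation (CVT) decompositions such as the one referenced above are commonly approximated with Lloyd's algorithm: sites are repeatedly assigned the samples nearest to them and then moved to the centroids of their cells. The sketch below is a generic discrete-CVT illustration under that assumption, not the paper's pipeline; the function name lloyd_cvt and the parameters n_sites and n_iters are hypothetical choices.

    import numpy as np

    def lloyd_cvt(points, n_sites=8, n_iters=20, seed=0):
        """Discrete Centroidal Voronoi tessellation of a sampled volume.

        points  : (N, 3) array of samples covering the captured shape
        n_sites : number of Voronoi cells in the decomposition
        Returns the converged site positions and a per-sample cell label.
        """
        rng = np.random.default_rng(seed)
        sites = points[rng.choice(len(points), n_sites, replace=False)].astype(float)
        labels = np.zeros(len(points), dtype=int)
        for _ in range(n_iters):
            # Assign every sample to its nearest site (discrete Voronoi cells).
            dists = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each site to the centroid of its cell; leave empty cells in place.
            for k in range(n_sites):
                cell = points[labels == k]
                if len(cell):
                    sites[k] = cell.mean(axis=0)
        return sites, labels

    # Usage: decompose a random 3D point cloud into 8 roughly uniform cells.
    pts = np.random.default_rng(1).normal(size=(2000, 3))
    sites, labels = lloyd_cvt(pts)

    Each resulting cell can then serve as a volumetric element to be tracked and simulated, which is the role the abstract attributes to its CVT representation.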