
    Parallel Mesh Processing

    Current research in computer graphics strives to meet the growing demands of users and produces ever more realistic images. Accordingly, the scenes and the methods used to render them are becoming increasingly complex. Such a development is inevitably tied to a rise in the required computing power, since the models that make up a scene can consist of billions of polygons and must be rendered in real time. Realistic image synthesis rests on three pillars: models, materials, and lighting. Today there are several methods for the efficient and realistic approximation of global illumination, and likewise algorithms for creating realistic materials. Methods for real-time rendering of models also exist, but they usually work only for scenes of moderate complexity and fail on very complex scenes. Models form the foundation of a scene; optimizing them has a direct impact on the efficiency of the material and lighting techniques, so that only an optimized model representation makes real-time rendering possible. Many of the models used in computer graphics are represented by triangle meshes. The data volume they contain is enormous, as it must ultimately capture the richness of detail of the respective objects and keep up with the growing demand for realism. Rendering complex models consisting of millions of triangles poses a major challenge even for modern graphics hardware. It is therefore necessary, particularly for real-time simulations, to develop efficient algorithms. On the one hand, such algorithms should support visibility culling, level of detail (LOD), out-of-core memory management, and compression.
On the other hand, this optimization should itself run very efficiently so as not to further impede the rendering. This calls for the development of parallel methods capable of processing the enormous flood of data efficiently. The core contribution of this work is a set of novel algorithms and data structures designed specifically for efficient parallel data processing, capable of rendering and modeling very complex models and scenes in real time. These algorithms operate in two phases: first, an offline phase builds the data structure and optimizes it for parallel processing; the optimized data structure is then used in the second phase for real-time rendering. A further contribution of this work is an algorithm that procedurally generates a very realistic-looking planet and renders it in real time.
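The two-phase approach described in the abstract can be sketched in miniature. The code below is an illustrative assumption, not the dissertation's actual algorithm: an offline phase reorders triangles along a Morton (Z-order) curve so that spatially close triangles end up close in memory (a common layout for parallel- and cache-friendly processing), and an online phase picks a discrete level of detail per frame from the camera distance. All names and thresholds are hypothetical.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of two integer coordinates (Z-order curve)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def build_offline(triangles):
    """Offline phase: sort triangles by the Morton code of their
    centroid so spatially close triangles are contiguous in memory."""
    def key(tri):
        cx = int(sum(p[0] for p in tri) / 3)
        cy = int(sum(p[1] for p in tri) / 3)
        return morton2d(cx, cy)
    return sorted(triangles, key=key)

def select_lod(distance, thresholds=(10.0, 50.0)):
    """Online phase: pick a discrete LOD level from camera distance."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)
```

In a real renderer the offline output would also carry per-cluster bounding volumes for visibility culling and compressed vertex data for out-of-core streaming; the sketch shows only the reordering and LOD-selection skeleton.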

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users that will feel much more like face-to-face meetings than the experience offered by conventional teleconferencing systems.

    External Memory View-Dependent Simplification


    Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

    Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computations and simulations on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data using polygonal or polyhedral meshes is always the first step of the procedure. The quality of this tessellation often dictates the subsequent computation accuracy, efficiency, and numerical stability. Compared with unstructured meshes, structured meshes are favored in many scientific/engineering tasks due to their good properties. However, generating a high-quality structured mesh remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing required to create a desirable structured mesh of a complex model is the most costly step. This step is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm to solve this problem via model partitioning and divide-and-conquer. To apply effective divide-and-conquer, we study two key technical components: the shape decomposition in the divide stage, and the structured meshing in the conquer stage. We test our algorithm on various data sets; the results demonstrate the efficiency and effectiveness of our framework. The comparisons also show our algorithm outperforms existing partitioning methods in final meshing quality. We also show our pipeline scales up efficiently in HPC environments.
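The divide-and-conquer paradigm in this abstract can be illustrated on the simplest possible case: decompose a rectangular domain into sub-blocks (the divide stage) and generate a uniform structured quad grid in each block (the conquer stage). Real shape decomposition of complex CAD geometry is far more involved; the functions below are a hypothetical sketch, not the dissertation's pipeline.

```python
def decompose(width, height, nx, ny):
    """Divide stage: split a [0,width] x [0,height] rectangle
    into nx * ny sub-blocks, each given as (x0, y0, w, h)."""
    dx, dy = width / nx, height / ny
    return [(i * dx, j * dy, dx, dy) for j in range(ny) for i in range(nx)]

def structured_grid(x0, y0, w, h, cells):
    """Conquer stage: a uniform structured quad mesh on one block,
    returned as a list of (min corner, max corner) pairs per cell."""
    dx, dy = w / cells, h / cells
    quads = []
    for j in range(cells):
        for i in range(cells):
            quads.append(((x0 + i * dx, y0 + j * dy),
                          (x0 + (i + 1) * dx, y0 + (j + 1) * dy)))
    return quads

blocks = decompose(2.0, 1.0, 2, 1)                                # divide
mesh = [q for b in blocks for q in structured_grid(*b, cells=4)]  # conquer
```

Because each block is meshed independently, the conquer stage parallelizes trivially, which is what makes the paradigm attractive for HPC-scale inputs.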

    High-order implicit palindromic discontinuous Galerkin method for kinetic-relaxation approximation

    We construct a high-order discontinuous Galerkin method for solving general hyperbolic systems of conservation laws. The method is CFL-less, matrix-free, has the complexity of an explicit scheme, and can be of arbitrary order in space and time. The construction is based on: (a) the representation of the system of conservation laws by a kinetic vectorial representation with a stiff relaxation term; (b) a matrix-free, CFL-less implicit discontinuous Galerkin transport solver; and (c) a stiffly accurate composition method for time integration. The method is validated on several one-dimensional test cases. It is then applied to two-dimensional and three-dimensional test cases: flow past a cylinder, magnetohydrodynamics, and multifluid sedimentation.
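The palindromic (symmetric) composition idea behind the time integration can be shown on a minimal example that is not the paper's DG solver: composing a forward-Euler half step with its adjoint (a backward-Euler half step) yields a symmetric, second-order step. For the linear test problem y' = λy this composition reproduces the (1,1) Padé approximation of exp(λh), so halving the step size should cut the error by roughly a factor of four.

```python
import math

def palindromic_step(y, h, lam=-1.0):
    """Forward Euler over h/2 composed with backward Euler over h/2
    for y' = lam * y; the composition is symmetric and second order."""
    y_half = y * (1.0 + lam * h / 2.0)      # explicit half step
    return y_half / (1.0 - lam * h / 2.0)   # implicit (adjoint) half step

def integrate(y0, t_end, steps):
    """March y' = -y from y0 over [0, t_end] with the composed step."""
    h = t_end / steps
    y = y0
    for _ in range(steps):
        y = palindromic_step(y, h)
    return y

# Second-order convergence: halving h reduces the error by about 4x.
err_coarse = abs(integrate(1.0, 1.0, 20) - math.exp(-1.0))
err_fine = abs(integrate(1.0, 1.0, 40) - math.exp(-1.0))
```

Higher orders are reached by longer palindromic compositions of such symmetric steps with carefully chosen sub-step sizes; the two-stage example above only demonstrates the basic symmetry argument.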

    A Thermodynamically-Based Mesh Objective Work Potential Theory for Predicting Intralaminar Progressive Damage and Failure in Fiber-Reinforced Laminates

    A thermodynamically-based work potential theory for modeling progressive damage and failure in fiber-reinforced laminates is presented. The current, multiple-internal state variable (ISV) formulation, enhanced Schapery theory (EST), utilizes separate ISVs for modeling the effects of damage and failure. Damage is considered to be the effect of any structural changes in a material that manifest as pre-peak non-linearity in the stress versus strain response. Conversely, failure is taken to be the effect of the evolution of any mechanisms that result in post-peak strain softening. It is assumed that matrix microdamage is the dominant damage mechanism in continuous fiber-reinforced polymer matrix laminates, and its evolution is controlled with a single ISV. Three additional ISVs are introduced to account for failure due to mode I transverse cracking, mode II transverse cracking, and mode I axial failure. Typically, failure evolution (i.e., post-peak strain softening) results in pathologically mesh-dependent solutions within a finite element method (FEM) setting. Therefore, consistent characteristic element lengths are introduced into the formulation of the evolution of the three failure ISVs. Using the stationarity of the total work potential with respect to each ISV, a set of thermodynamically consistent evolution equations for the ISVs is derived. The theory is implemented into commercial FEM software. Objectivity of the total energy dissipated during the failure process, with respect to refinements in the FEM mesh, is demonstrated. The model is also verified against experimental results from two laminated T800/3900-2 panels containing a central notch and different fiber-orientation stacking sequences. Global load versus displacement, global load versus local strain gage data, and macroscopic failure paths obtained from the models are compared to the experiments.
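The role of the characteristic element length in keeping dissipated energy mesh-objective can be illustrated with a generic crack-band-style argument (an assumption for illustration, not the EST formulation itself): if a linear softening law is scaled so that the energy density dissipated in an element band times the band width always equals the fracture toughness G_f, the total energy dissipated per unit crack area is independent of element size.

```python
def ultimate_strain(sigma_peak, G_f, l_e):
    """Ultimate strain of a linear softening law, regularized by the
    characteristic element length l_e so that the softening-triangle
    energy density g = 0.5 * sigma_peak * eps_u equals G_f / l_e."""
    return 2.0 * G_f / (sigma_peak * l_e)

def dissipated_per_area(sigma_peak, G_f, l_e):
    """Energy dissipated per unit crack area in one element band:
    (energy per unit volume) * (band width l_e)."""
    g = 0.5 * sigma_peak * ultimate_strain(sigma_peak, G_f, l_e)
    return g * l_e

# Coarse and fine elements dissipate the same energy per crack area,
# because the softening slope steepens as l_e grows (hypothetical units).
coarse = dissipated_per_area(60.0, 0.1, 5.0)
fine = dissipated_per_area(60.0, 0.1, 0.5)
```

Without the l_e scaling, the dissipated energy would shrink with the element size on mesh refinement, which is exactly the pathological mesh dependence the abstract refers to.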
