68 research outputs found

    Parallel extraction and simplification of large isosurfaces using an extended tandem algorithm

    In order to deal with the common trend in size increase of volumetric datasets, in the past few years research in isosurface extraction has focused on related aspects such as surface simplification and load-balanced parallel algorithms. We present a parallel, block-wise extension of the tandem algorithm by Attali et al., which simplifies on the fly an isosurface being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy along with the introduction of a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as detected, surface components are migrated to the disk along with a meta-data index (oriented bounding box, volume, etc.) that permits further improved exploration scenarios (small component removal or particularly oriented component selection, for instance). For ease of implementation, we carefully describe a master and worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied on a geophysical dataset of size 7000 × 1600 × 2000.
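The component-dumping mechanism the abstract describes can be sketched in a few lines: as soon as a connected surface component is complete, its geometry is written to disk together with a small metadata record that later supports filtered exploration such as small-component removal. This is an illustrative sketch, not the paper's implementation; it uses an axis-aligned rather than oriented bounding box, and all file names and record fields are made up.

```python
import json
import os
import tempfile

def dump_component(vertices, out_dir, index):
    """Migrate a finished surface component to disk and record its metadata.

    vertices: list of (x, y, z) tuples belonging to one connected component.
    The geometry leaves main memory; only the small record stays resident.
    """
    xs, ys, zs = zip(*vertices)
    bbox = (min(xs), min(ys), min(zs), max(xs), max(ys), max(zs))
    path = os.path.join(out_dir, f"component_{len(index):05d}.json")
    with open(path, "w") as f:
        json.dump(vertices, f)  # geometry dumped to disk
    record = {"path": path, "bbox": bbox, "n_vertices": len(vertices)}
    index.append(record)
    return record

def select_components(index, min_vertices):
    """Example exploration query: drop small components by vertex count."""
    return [r for r in index if r["n_vertices"] >= min_vertices]

out_dir = tempfile.mkdtemp()
index = []
dump_component([(0, 0, 0), (1, 0, 0), (0, 1, 0)], out_dir, index)
dump_component([(5, 5, 5), (6, 5, 5), (5, 6, 5), (5, 5, 6)], out_dir, index)
kept = select_components(index, min_vertices=4)
```

Only the metadata index is kept in memory, so peak memory depends on the number of components, not the total surface size.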

    Isosurface extraction and interpretation on very large datasets in geophysics

    In order to deal with the heavy trend in size increase of volumetric datasets, research in isosurface extraction has focused in the past few years on related aspects such as surface simplification and load-balanced parallel algorithms. We present in this paper a parallel, block-wise extension of the tandem algorithm [Attali et al. 2005], which simplifies on the fly an isosurface being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy and the introduction of a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as detected, surface components are migrated to the disk along with a meta-data index (oriented bounding box, volume, etc.) that will allow further improved exploration scenarios (small component removal or particularly oriented component selection, for instance). For ease of implementation, we carefully describe a master and worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied on a 7000 × 1600 × 2000 geophysics dataset.

    Visualizing the inner structure of N-body data

    N-body simulations produce a large amount of data that must be visualized in order to extract interesting features. This data consists of thousands of particles, each with a number of properties such as velocity, mass, and acceleration, that are persisted across hundreds of time steps. Rendering these points using traditional means quickly leads to an image that is saturated, because the number of particles is greater than the number of pixels available. Volume rendering, and particularly splatting, seeks to remedy this and allows one to see inside the volume and discern the inner structure. The definition of inner structure itself can vary and is usually determined using one of the observable properties of the particles.
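The splatting idea can be illustrated in a few lines: instead of drawing each projected particle as a single pixel, its contribution is spread over a small Gaussian footprint and accumulated, so dense regions build up intensity rather than saturating a binary image. The footprint radius, sigma, and image size below are arbitrary illustrative choices, not the paper's parameters.

```python
import numpy as np

def splat(points, weights, size=64, radius=3, sigma=1.5):
    """Accumulate 2D Gaussian footprints of projected particles.

    points: (N, 2) array of pixel coordinates; weights: per-particle scalar
    (e.g. mass). Returns a float image where overlapping splats add up.
    """
    img = np.zeros((size, size))
    # Precompute the Gaussian kernel once; every particle reuses it.
    ax = np.arange(-radius, radius + 1)
    kernel = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    for (x, y), w in zip(points.astype(int), weights):
        # Clip the footprint against the image borders.
        x0, x1 = max(x - radius, 0), min(x + radius + 1, size)
        y0, y1 = max(y - radius, 0), min(y + radius + 1, size)
        kx0, ky0 = x0 - (x - radius), y0 - (y - radius)
        img[y0:y1, x0:x1] += w * kernel[ky0:ky0 + (y1 - y0),
                                        kx0:kx0 + (x1 - x0)]
    return img

rng = np.random.default_rng(0)
pts = rng.uniform(10, 54, size=(200, 2))  # synthetic projected particles
image = splat(pts, np.ones(200))
```

Weighting each splat by a chosen particle property (mass, velocity magnitude) is one way the "inner structure" definition can be varied.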

    Developing & tailoring multi-functional carbon foams for multi-field response

    As technological advances occur, many conventional materials are incapable of providing the unique multi-functional characteristics demanded, driving an accelerated focus on creating new material systems such as carbon and graphite foams. The improvement of their mechanical stiffness and strength, and the tailoring of their thermal and electrical conductivities, are two areas of multi-functionality with active interest and investment by researchers. The present research focuses on developing models to facilitate and assess multi-functional carbon foams in an effort to expand knowledge. The foundation of the models relies on a unique approach to finite element meshing which captures the morphology of carbon foams. The developed models also include ligament anisotropy and coatings to provide comprehensive information to guide processing researchers in their pursuit of tailorable performance. Several illustrations are undertaken at multiple scales to explore the response of multi-functional carbon foams under coupled field environments, providing valuable insight for design engineers in emerging technologies. The illustrations highlight the importance of individual moduli in the anisotropic stiffness matrix, as well as the impact of common processing defects, when tailoring the bulk stiffness. Furthermore, complete coating coverage and quality interface conditions are critical when using copper to improve the thermal and electrical conductivity of carbon foams.

    Parallel Mesh Processing

    Current research in computer graphics strives to meet the growing expectations of users and produces ever more realistic-looking images. Accordingly, the scenes and techniques used to render these images are becoming increasingly complex. Such a development inevitably entails a rise in the required computing power, since the models that make up a scene can consist of billions of polygons and must be rendered in real time. Realistic image synthesis rests on three pillars: models, materials, and lighting. Today there are several techniques for efficient and realistic approximation of global illumination, and likewise algorithms for creating realistic materials. Techniques for real-time rendering of models also exist, but they generally work only for scenes of moderate complexity and fail on very complex scenes. Models form the foundation of a scene; optimizing them has a direct impact on the efficiency of the material and lighting techniques, so only an optimized model representation makes real-time rendering possible. Many of the models used in computer graphics are represented as triangle meshes, whose data volume is enormous because it must capture the richness of detail of the objects and meet the growing demand for realism. Rendering complex models consisting of millions of triangles is a major challenge even for modern graphics hardware. Efficient algorithms are therefore essential, particularly for real-time simulations. Such algorithms should, on the one hand, support visibility culling, level of detail (LOD), out-of-core memory management, and compression; on the other hand, this optimization should itself be very efficient so as not to further impede rendering. This calls for the development of parallel methods that can process the enormous flood of data efficiently. The core contribution of this thesis is a set of novel algorithms and data structures designed specifically for efficient parallel data processing, capable of rendering and modelling very complex models and scenes in real time. These algorithms work in two phases: first, in an offline phase, the data structure is built and optimized for parallel processing; the optimized data structure is then used in the second phase for real-time rendering. A further contribution of this thesis is an algorithm that procedurally generates a very realistic-looking planet and renders it in real time.

    Isoflächenrekonstruktion aus Serienschnitten (Isosurface Reconstruction from Serial Sections)


    Bimolecular Recombination in Non-Langevin ZZ115 Based Blend Systems for Organic Photovoltaics

    Organic solar cells are currently positioned as a renewable energy source, potentially solving issues with current solar cells in application, cost, and production. However, organic solar cells suffer from low efficiencies compared to their traditional counterparts. Bimolecular recombination of charge carriers before collection is a major loss process in organic solar cells and a significant contributor to these low efficiencies. In this work, potential factors influencing the bimolecular recombination process in a conjugated polymer (ZZ115) blended with fullerenes were studied; these ZZ115/fullerene blends show a suppressed, non-Langevin behaviour. The study combined an experimental part, using UV-Vis spectroscopy, photoluminescence spectroscopy, and transient absorption spectroscopy, with a computational part, using density functional theory and the VOTCA software for large-system modelling. Overall, one polaron feature was found in pristine ZZ115 and two polaron features in the ZZ115 blends. These features were identified as a ZZ115 polaron in a crystalline ZZ115 medium, present in both the pristine material and the blend, and a ZZ115 polaron in a mixed amorphous region unique to the blend. Varying the blend ratio shows no influence of fullerene concentration on the mixed amorphous polaron feature past 10% concentration, while a gradual slowing of the recombination kinetics is observed for the crystalline polaron feature. The latter is attributed to the added fullerene disrupting the crystalline ordered phase, leading to deeper trap states that slow down charge recombination. Computational calculations have highlighted the lack of unusual molecular properties for ZZ115, but large-scale computational studies are still a work in progress to corroborate the experimental findings. Nevertheless, this study provides important insights into the morphological and electronic factors affecting bimolecular recombination in ZZ115, offering both device optimisation guidelines and a better understanding of the bimolecular recombination process.
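For context, bimolecular recombination follows the rate equation dn/dt = -k n², whose closed-form solution makes the benefit of a suppressed (non-Langevin) rate constant easy to quantify. The densities and rate constants below are illustrative placeholders, not measured ZZ115 values.

```python
def carrier_density(n0, k, t):
    """Analytic solution of dn/dt = -k * n**2: n(t) = n0 / (1 + k * n0 * t)."""
    return n0 / (1.0 + k * n0 * t)

n0 = 1e17           # initial carrier density, cm^-3 (illustrative)
k_langevin = 1e-10  # Langevin-like rate constant, cm^3 s^-1 (illustrative)
k_reduced = 1e-12   # suppressed, non-Langevin rate: 100x smaller

t = 1e-6  # carrier density remaining after 1 microsecond
n_lang = carrier_density(n0, k_langevin, t)
n_red = carrier_density(n0, k_reduced, t)

# A suppressed rate constant leaves far more carriers available
# for extraction at the electrodes.
retention_ratio = n_red / n_lang
```

With these numbers the suppressed system retains roughly ten times the carrier density at 1 µs, which is why non-Langevin behaviour is attractive for device efficiency.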

    On Three-Dimensional Reconstruction


    A Data Driven Modeling Approach for Store Distributed Load and Trajectory Prediction

    The task of achieving successful store separation from aircraft and spacecraft has historically been, and continues to be, a critical issue for the aerospace industry. Whether it be from store-on-store wake interactions, store-parent-body interactions, or free-stream turbulence, a failed case of store separation poses a serious risk to aircraft operators. Cases of failed store separation do not simply imply missing an intended target, but also bring the risk of collision with, and destruction of, the parent-body vehicle. Given this risk, numerous well-tested procedures have been developed to help analyze store separation within the safe confines of wind tunnels. However, due to the increased complexity of store separation configurations, such as rotorcraft and cavity-based separation, there is a growing desire to incorporate computational fluid dynamics (CFD) into the early stages of store separation analysis. A viable method for achieving this objective is data-driven surrogate modeling of store distributed loads. This dissertation investigates the practicality of applying various data-driven modeling techniques to the field of store separation. These modeling methods are applied to four demonstration scenarios: reduced-order modeling of a moving store, design optimization, supersonic store separation, and rotorcraft store separation. For the first demonstration scenario, results are presented for three sub-tasks. In the first sub-task, proper orthogonal decomposition (POD), dynamic mode decomposition (DMD), and convolutional neural networks (CNN) were compared for their capability to replicate the distributed pressure loads of a pitching prolate spheroid. Results indicated that POD was the most efficient approach for surrogate model generation. For the second sub-task, a POD-based surrogate model was derived from CFD simulations of an oscillating prolate spheroid subject to varying reduced frequency and amplitude of oscillation.
The obtained surrogate model was shown to provide high-fidelity predictions for new combinations of reduced frequency and amplitude, with a maximum percent error of integrated loads of less than 3%. It was therefore demonstrated that the surrogate model was capable of predicting accurately at intermediate states. Further analysis showed that a similar surrogate model could be generated to provide accurate store trajectory modeling under subsonic, transonic, and supersonic conditions. In the second demonstration scenario, a POD-based surrogate model is derived from a series of CFD simulations of isolated rotors in hover and forward flight. The derived surrogate models for hover and forward flight were shown to provide integrated load predictions within 1% of direct CFD simulation. Additionally, results indicated that computational expense could be reduced from 20 hours on 440 CPUs to less than a second on a single CPU. Given the reduced cost and high fidelity of the surrogate model, the derived model was leveraged to optimize the twist and taper ratio of the rotor such that its efficiency was maximized. For the third demonstration scenario, POD and CNN surrogate models were derived for fixed-wing supersonic store separation. Results demonstrated that both models were capable of providing high-fidelity predictions of the store's distributed loads and subsequent trajectory. For the final demonstration scenario, a POD-based surrogate model was derived for the case of a store launching from a rotorcraft. The surrogate model was derived from three CFD simulations with varying ejection force. This surrogate model was then validated against a CFD simulation with a new store ejection force. Results indicated that while the surrogate model struggled to provide detailed predictions of the store's distributed loads, mean load variations could be modeled well at a massively reduced computational cost.
For each rotorcraft store separation CFD simulation, the computation required 10 days of simulation time across 880 CPUs, whereas the surrogate model produced comparable predictions in under a minute on a single core. Overall findings from this study indicate that massive CFD-generated datasets can be efficiently leveraged to create meaningful surrogate models capable of being deployed to highly iterative design tasks relevant to store separation. Through further improvements, similar surrogate models can be combined with a control strategy to achieve trajectory optimization and control.
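The POD step at the heart of these surrogate models can be sketched with plain linear algebra: snapshots of the distributed load are stacked into a matrix, an SVD yields an orthonormal spatial basis, and any new state is approximated by a handful of modal coefficients. This toy example uses synthetic snapshots; the dissertation's CFD data and interpolation over flight parameters are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "snapshots": 200 spatial points, 30 samples built from 3 modes.
x = np.linspace(0.0, 1.0, 200)
true_modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), x ** 2])
coeffs = rng.normal(size=(30, 3))
snapshots = coeffs @ true_modes  # (30, 200) snapshot matrix

# POD: SVD of the mean-centred snapshot matrix gives the spatial basis.
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
r = 3                # retain the first r modes (energy-based in practice)
basis = Vt[:r]       # (r, 200) orthonormal POD modes

# Project a held-out state onto the basis and reconstruct it from
# just r modal coefficients instead of 200 nodal values.
new = 0.5 * true_modes[0] - 1.2 * true_modes[2]
a = (new - mean) @ basis.T
reconstruction = mean + a @ basis
error = np.linalg.norm(new - reconstruction) / np.linalg.norm(new)
```

Because the synthetic data is exactly rank three, three modes reconstruct the held-out state to machine precision; real CFD loads need a mode count chosen from the singular-value decay.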

    Regular Grids: An Irregular Approach to the 3D Modelling Pipeline

    The 3D modelling pipeline covers the process by which a physical object is scanned to create a set of points that lie on its surface. These data are then cleaned to remove outliers and noise, and the points are reconstructed into a digital representation of the original object. The aim of this thesis is to present novel grid-based methods and provide several case studies of areas in the 3D modelling pipeline in which they may be effectively put to use. The first is a demonstration of how using a grid can allow a significant reduction in the memory required to perform the reconstruction. The second is the detection of surface features (ridges, peaks, troughs, etc.) during the surface reconstruction process. The third contribution is the alignment of two meshes with zero prior knowledge, which is particularly suited to aligning two related, but not identical, models. The final contribution is the comparison of two similar meshes with support for both qualitative and quantitative outputs.
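The memory argument for a regular grid can be made concrete with voxel binning: instead of keeping every scanned point, each occupied cell stores one representative (the centroid here), so memory is bounded by the number of occupied cells rather than the number of raw samples. This is a generic sketch, not the thesis's reconstruction method.

```python
import random
from collections import defaultdict

def voxel_downsample(points, cell):
    """Replace all points falling in one grid cell by their centroid."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)  # integer cell coordinates
        bins[key].append(p)
    centroids = []
    for pts in bins.values():
        n = len(pts)
        centroids.append(tuple(sum(q[i] for q in pts) / n for i in range(3)))
    return centroids

# 1000 noisy samples of a unit cube collapse onto at most a few dozen cells.
random.seed(0)
cloud = [(random.uniform(0, 1), random.uniform(0, 1), random.uniform(0, 1))
         for _ in range(1000)]
reduced = voxel_downsample(cloud, cell=0.25)
```

The same cell keys can index other per-cell data (normals, feature flags), which is how a grid also supports the feature-detection and comparison stages the abstract mentions.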