
    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points and an optimized insertion algorithm allow us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to insert vertices concurrently. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors, an Intel Core i7, an Intel Xeon Phi and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.
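
    As a rough illustration of the spatial-sorting idea above (a sketch under assumptions, not the paper's kernel), the snippet below orders points along a Z-order (Morton) space-filling curve before triangulating them; the paper itself uses Moore curve coordinates and a dedicated incremental kernel, and the helper names morton_key and spatially_sorted are invented for illustration.

    # Minimal sketch (not the paper's code): sort points along a Z-order
    # (Morton) space-filling curve so that consecutive insertions into an
    # incremental Delaunay kernel stay geometrically close.
    import numpy as np
    from scipy.spatial import Delaunay  # Qhull-based check; a true incremental
                                        # kernel is what benefits from the ordering

    def morton_key(ix, iy, iz, bits=10):
        """Interleave the bits of three integer grid coordinates."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (3 * b)
            key |= ((iy >> b) & 1) << (3 * b + 1)
            key |= ((iz >> b) & 1) << (3 * b + 2)
        return key

    def spatially_sorted(points, bits=10):
        """Reorder points along a Z-order space-filling curve."""
        lo, hi = points.min(axis=0), points.max(axis=0)
        grid = ((points - lo) / (hi - lo + 1e-30) * ((1 << bits) - 1)).astype(np.int64)
        keys = [morton_key(x, y, z, bits) for x, y, z in grid]
        return points[np.argsort(keys)]

    if __name__ == "__main__":
        pts = np.random.rand(10000, 3)
        dt = Delaunay(spatially_sorted(pts))
        print(dt.simplices.shape[0], "tetrahedra")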

    Load-Balancing for Parallel Delaunay Triangulations

    Computing the Delaunay triangulation (DT) of a given point set in $\mathbb{R}^D$ is one of the fundamental operations in computational geometry. Recently, Funke and Sanders (2017) presented a divide-and-conquer DT algorithm that merges two partial triangulations by re-triangulating a small subset of their vertices - the border vertices - and combining the three triangulations efficiently via parallel hash table lookups. The input point division should therefore yield roughly equal-sized partitions for good load-balancing and also result in a small number of border vertices for fast merging. In this paper, we present a novel divide step based on partitioning the triangulation of a small sample of the input points. In experiments on synthetic and real-world data sets, we achieve nearly perfectly balanced partitions and small border triangulations. This almost cuts running time in half compared to non-data-sensitive division schemes on inputs exhibiting an exploitable underlying structure. Comment: Short version submitted to EuroPar 201
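
    The snippet below sketches a data-sensitive divide step in the spirit described above, under simplifying assumptions: triangulate a small random sample, locate every input point in that sample triangulation, and cut the sample cells into k chunks of roughly equal point weight. The centroid-sweep heuristic and the name sample_partition are illustrative; they are not the paper's actual partitioner.

    # Hedged sketch of a data-sensitive divide step (a simplification, not the
    # paper's partitioner): triangulate a small sample, locate every input
    # point in that sample triangulation, and group sample cells into k
    # partitions of roughly equal point weight.
    import numpy as np
    from scipy.spatial import Delaunay

    def sample_partition(points, k, sample_size=256, seed=0):
        rng = np.random.default_rng(seed)
        sample = points[rng.choice(len(points), sample_size, replace=False)]
        sdt = Delaunay(sample)

        cell = sdt.find_simplex(points)       # containing sample cell per point
        cell[cell < 0] = 0                    # points outside the sample hull
        weights = np.bincount(cell, minlength=len(sdt.simplices))

        # Sweep cells by centroid x-coordinate, cutting into k chunks of
        # similar total weight (a crude stand-in for a real graph partitioner).
        centroids = sample[sdt.simplices].mean(axis=1)
        order = np.argsort(centroids[:, 0])
        target = weights.sum() / k
        part_of_cell = np.empty(len(order), dtype=int)
        part, acc = 0, 0
        for c in order:
            part_of_cell[c] = part
            acc += weights[c]
            if acc >= target * (part + 1) and part < k - 1:
                part += 1
        return part_of_cell[cell]             # partition id per input point

    if __name__ == "__main__":
        pts = np.random.default_rng(1).random((100000, 2))
        print(np.bincount(sample_partition(pts, k=8)))  # roughly balanced sizes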

    Meshing Deforming Spacetime for Visualization and Analysis

    We introduce a novel paradigm that simplifies the visualization and analysis of data that have a spatially/temporally varying frame of reference. The primary application driver is tokamak fusion plasma, where science variables (e.g., density and temperature) are interpolated in a complex magnetic field-line-following coordinate system. We also see a similar challenge in rotational fluid mechanics, cosmology, and Lagrangian ocean analysis; such physics implies a deforming spacetime and induces high complexity in volume rendering, isosurfacing, and feature tracking, among various visualization tasks. Without loss of generality, this paper proposes an algorithm to build a simplicial complex -- a tetrahedral mesh -- for the deforming 3D spacetime derived from two 2D triangular meshes representing consecutive timesteps. Without introducing new nodes, the resulting mesh fills the gap between the two 2D meshes with tetrahedral cells while satisfying given constraints on how nodes connect between the two input meshes. In the algorithm, we first divide the spacetime into smaller partitions to reduce complexity, based on the input geometries and constraints. We then independently search for a feasible tessellation of each partition, taking nonconvexity into consideration. We demonstrate multiple use cases for a simplified visualization and analysis scheme with a synthetic case and fusion plasma applications.
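
    For intuition only, the sketch below treats the much simpler special case in which the two consecutive timesteps share one and the same 2D triangulation: each triangle extrudes into a prism between the two time levels, and each prism splits into three tetrahedra without new nodes, with quad-face diagonals fixed by the smallest global vertex index so that neighbouring prisms tessellate conformingly. The paper's algorithm handles the far harder general setting with differing meshes and connectivity constraints; the helper names here are hypothetical.

    # Minimal sketch for the special case of two timesteps sharing one 2D
    # triangulation: extrude each triangle into a prism and split it into
    # three tetrahedra, choosing quad-face diagonals so adjacent prisms match.
    import numpy as np

    def prism_to_tets(tri, offset):
        """Split the prism between triangle tri at t0 and its copy at t1."""
        # Sort global indices: with a < b < c, every vertical quad face is cut
        # by the diagonal from its smallest (bottom) vertex to its largest
        # (top) vertex, so neighbouring prisms choose matching diagonals.
        a, b, c = sorted(int(v) for v in tri)
        A, B, C = a + offset, b + offset, c + offset  # node copies at timestep t1
        return [(a, b, c, C), (a, b, C, B), (a, B, C, A)]

    def extrude_slab(triangles, n_nodes):
        """Tetrahedralize the spacetime slab between two identical 2D meshes."""
        tets = []
        for tri in triangles:
            tets.extend(prism_to_tets(tri, n_nodes))
        return np.array(tets)

    if __name__ == "__main__":
        tris = [(0, 1, 2), (1, 3, 2)]  # two triangles sharing edge (1, 2)
        print(extrude_slab(tris, n_nodes=4))  # 6 tetrahedra, no new nodes

    Sorting discards the triangle orientation, so in a real pipeline tetrahedra with negative signed volume would need a vertex swap afterwards.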

    Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

    Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computations and simulations on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data using polygonal or polyhedral meshes is always the first step of the procedure. The quality of this tessellation often dictates the accuracy, efficiency, and numerical stability of the subsequent computation. Compared with unstructured meshes, structured meshes are favored in many scientific/engineering tasks due to their good properties. However, generating high-quality structured meshes remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing needed to create a desirable structured mesh of a complex model is the most costly step. This step is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm to solve this problem via model partitioning and divide-and-conquer. To apply effective divide-and-conquer, we study two key technical components: the shape decomposition in the divide stage, and the structured meshing in the conquer stage. We test our algorithm on various data sets; the results demonstrate the efficiency and effectiveness of our framework. The comparisons also show that our algorithm outperforms existing partitioning methods in final meshing quality. We also show that our pipeline scales up efficiently in an HPC environment.
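
    As a small generic illustration of the conquer stage, the sketch below lays a structured grid over one four-sided sub-region using transfinite (Coons) interpolation, a standard building block for structured meshing of decomposed blocks. It is a stand-in under assumptions, not the dissertation's pipeline; the function coons_patch and the quarter-annulus example are invented for illustration.

    # Hedged sketch (not the dissertation's code): transfinite (Coons)
    # interpolation produces a structured grid on a four-sided patch whose
    # boundary curves come from a shape decomposition.
    import numpy as np

    def coons_patch(bottom, top, left, right, nu, nv):
        """Structured (nu x nv) grid of 2D points from four boundary curves.

        Each argument is a callable t -> (x, y) on [0, 1]; the curves must meet
        at the corners: bottom(0) == left(0), bottom(1) == right(0),
        top(0) == left(1), top(1) == right(1).
        """
        u = np.linspace(0.0, 1.0, nu)[:, None, None]
        v = np.linspace(0.0, 1.0, nv)[None, :, None]
        B = np.array([bottom(t) for t in u.ravel()])[:, None, :]
        T = np.array([top(t) for t in u.ravel()])[:, None, :]
        L = np.array([left(t) for t in v.ravel()])[None, :, :]
        R = np.array([right(t) for t in v.ravel()])[None, :, :]
        P00, P10, P01, P11 = map(np.asarray, (bottom(0.0), bottom(1.0), top(0.0), top(1.0)))
        # Coons formula: blend the four boundary curves, then subtract the
        # bilinear interpolant of the corners to avoid counting them twice.
        return ((1 - v) * B + v * T + (1 - u) * L + u * R
                - ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                   + (1 - u) * v * P01 + u * v * P11))

    if __name__ == "__main__":
        # Quarter annulus between radii 1 and 2: two straight sides, two arcs.
        bottom = lambda t: (1.0 + t, 0.0)
        top = lambda t: (0.0, 1.0 + t)
        left = lambda t: (np.cos(t * np.pi / 2), np.sin(t * np.pi / 2))
        right = lambda t: (2 * np.cos(t * np.pi / 2), 2 * np.sin(t * np.pi / 2))
        print(coons_patch(bottom, top, left, right, nu=9, nv=5).shape)  # (9, 5, 2)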

    Numerically improved computational scheme for the optical conductivity tensor in layered systems

    The contour integration technique applied to calculate the optical conductivity tensor at finite temperatures for layered systems, within the framework of the spin-polarized relativistic screened Korringa-Kohn-Rostoker band structure method, is improved from the computational point of view by applying Gauss-Kronrod quadrature for the integrals along the different parts of the contour and by designing a cumulative special-points scheme for two-dimensional Brillouin zone integrals corresponding to cubic systems. Comment: 17 pages, LaTeX + 4 figures (Encapsulated PostScript), submitted to J. Phys.: Condensed Matter (19 Sept. 2000)
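
    The quadrature ingredient can be sketched with a toy example: scipy.integrate.quad wraps QUADPACK, whose adaptive rules are Gauss-Kronrod pairs, so applying it to a parametrized contour segment mirrors the kind of quadrature described above. The integrand below is a placeholder with a single pole, not the Green's-function quantities of the paper, and contour_integral is an illustrative helper.

    # Hedged sketch: adaptive Gauss-Kronrod quadrature (scipy wraps QUADPACK)
    # along a parametrized contour segment, with a toy integrand standing in
    # for the physical one.
    import numpy as np
    from scipy.integrate import quad

    def contour_integral(f, z_of_t, dz_dt, t0=0.0, t1=1.0):
        """Integrate f(z) dz over the contour piece z(t), t in [t0, t1]."""
        def integrand(t, part):
            val = f(z_of_t(t)) * dz_dt(t)
            return val.real if part == "re" else val.imag

        re, _ = quad(integrand, t0, t1, args=("re",))
        im, _ = quad(integrand, t0, t1, args=("im",))
        return re + 1j * im

    if __name__ == "__main__":
        # Closed circle of radius 0.5 around a simple pole at z = i; by the
        # residue theorem the result should be close to 2*pi*i.
        z_of_t = lambda t: 1j + 0.5 * np.exp(2j * np.pi * t)
        dz_dt = lambda t: 1j * np.pi * np.exp(2j * np.pi * t)
        f = lambda z: 1.0 / (z - 1j)
        print(contour_integral(f, z_of_t, dz_dt))  # approx 0 + 6.2832j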