
    Well-Centered Triangulation

    Meshes composed of well-centered simplices have nice orthogonal dual meshes (the dual Voronoi diagram). This is useful for certain numerical algorithms that prefer such primal-dual mesh pairs. We prove that well-centered meshes also have optimality properties and relationships to Delaunay and minmax-angle triangulations. We present an iterative algorithm that seeks to transform a given triangulation in two or three dimensions into a well-centered one by minimizing a cost function and moving the interior vertices while keeping the mesh connectivity and boundary vertices fixed. The cost function is a direct result of a new characterization of well-centeredness in arbitrary dimensions that we present. Ours is the first optimization-based heuristic for well-centeredness and the first that applies in both two and three dimensions. We show the results of applying our algorithm to small and large two-dimensional meshes, some with a complex boundary, and obtain a well-centered tetrahedralization of the cube. We also show numerical evidence that our algorithm preserves gradation and that it improves the maximum and minimum angles of acute triangulations created by the best previously known method.
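
    The abstract describes an outer optimization loop: hold the mesh connectivity and boundary vertices fixed and move only the interior vertices so as to minimize a cost function. A minimal 2D sketch of that loop follows; the per-triangle penalty used here (squared excess of the largest angle over 90°, since a 2D triangle is well-centered exactly when it is acute) is an illustrative stand-in, not the cost function characterized in the paper.

```python
# Minimal sketch of optimization-based smoothing toward well-centeredness (2D).
# ASSUMPTION: the per-triangle penalty below (squared excess of the largest
# angle over 90 degrees) is an illustrative stand-in, not the paper's cost.
import numpy as np
from scipy.optimize import minimize

def max_angle(p0, p1, p2):
    """Largest interior angle of the triangle (p0, p1, p2), in radians."""
    angles = []
    for a, b, c in ((p0, p1, p2), (p1, p2, p0), (p2, p0, p1)):
        u, v = b - a, c - a
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return max(angles)

def cost(free_flat, verts, tris, free_idx):
    """Sum over triangles of the squared violation of the 90-degree bound."""
    pts = verts.copy()
    pts[free_idx] = free_flat.reshape(-1, 2)
    return sum(max(0.0, max_angle(pts[i], pts[j], pts[k]) - np.pi / 2) ** 2
               for i, j, k in tris)

def smooth(verts, tris, boundary):
    """Move interior vertices only; connectivity and boundary stay fixed."""
    free_idx = np.array([i for i in range(len(verts)) if i not in set(boundary)])
    res = minimize(cost, verts[free_idx].ravel(),
                   args=(verts, tris, free_idx), method="Nelder-Mead")
    out = verts.copy()
    out[free_idx] = res.x.reshape(-1, 2)
    return out
```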

    Three-dimensional unstructured grid generation via incremental insertion and local optimization

    Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.
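
    The local optimization the abstract refers to replaces the empty-circumsphere criterion with a direct quality measure when deciding local reconnections. The 2D sketch below shows the idea with an edge flip driven by the minimum-angle measure; the paper itself works with 3D face/edge swaps, and the quality measure and flip-validity assumptions here are only for illustration.

```python
# Sketch of quality-driven local reconnection (2D analogue of 3D face/edge
# swaps): flip the shared edge of two triangles when a local quality measure
# (here the minimum angle) improves.
# ASSUMPTION: validity of the flip (the surrounding quadrilateral is convex)
# is checked elsewhere; this snippet only scores the two configurations.
import numpy as np

def min_angle(pts, tri):
    """Smallest interior angle of the triangle given by three point indices."""
    a, b, c = (pts[i] for i in tri)
    angles = []
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min(angles)

def flip_if_better(pts, t1, t2, shared):
    """Return the flipped triangle pair if the worst min-angle improves."""
    i, j = shared
    k = next(v for v in t1 if v not in shared)
    l = next(v for v in t2 if v not in shared)
    old_quality = min(min_angle(pts, t1), min_angle(pts, t2))
    cand = ((k, l, i), (k, l, j))   # triangles after flipping to edge k-l
    new_quality = min(min_angle(pts, c) for c in cand)
    return cand if new_quality > old_quality else (t1, t2)
```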

    One machine, one minute, three billion tetrahedra

    This paper presents a new scalable parallelization scheme to generate the 3D Delaunay triangulation of a given set of points. Our first contribution is an efficient serial implementation of the incremental Delaunay insertion algorithm. A simple dedicated data structure, an efficient sorting of the points, and an optimized insertion algorithm have allowed us to accelerate reference implementations by a factor of three. Our second contribution is a multi-threaded version of the Delaunay kernel that is able to insert vertices concurrently. Moore curve coordinates are used to partition the point set, avoiding heavy synchronization overheads. Conflicts are managed by modifying the partitions with a simple rescaling of the space-filling curve. The performance of our implementation has been measured on three different processors, an Intel Core i7, an Intel Xeon Phi, and an AMD EPYC, on which we have been able to compute 3 billion tetrahedra in 53 seconds. This corresponds to a generation rate of over 55 million tetrahedra per second. We finally show how this very efficient parallel Delaunay triangulation can be integrated in a Delaunay refinement mesh generator which takes as input the triangulated surface boundary of the volume to mesh.
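
    A sketch of the sort-then-partition step is below. The paper partitions points by Moore curve coordinates; the simpler Morton (Z-order) key used here is a stand-in chosen only to show how a space-filling-curve ordering yields contiguous per-thread chunks of points.

```python
# Sketch of space-filling-curve partitioning of the point set for concurrent
# insertion.  ASSUMPTION: the paper uses Moore curve coordinates; the simpler
# Morton (Z-order) key below is a stand-in that only illustrates the
# sort-then-partition pattern.  Conflict handling at chunk borders is omitted.
import numpy as np

def morton_key_3d(ix, iy, iz, bits=10):
    """Interleave the bits of three integer grid coordinates into one key."""
    key = 0
    for b in range(bits):
        key |= ((int(ix) >> b) & 1) << (3 * b)
        key |= ((int(iy) >> b) & 1) << (3 * b + 1)
        key |= ((int(iz) >> b) & 1) << (3 * b + 2)
    return key

def partition_points(points, n_parts, bits=10):
    """Sort points along the curve and cut them into n_parts contiguous
    chunks of point indices, one chunk per worker thread."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    scale = ((1 << bits) - 1) / np.maximum(hi - lo, 1e-30)
    grid = ((points - lo) * scale).astype(np.int64)
    keys = np.array([morton_key_3d(x, y, z, bits) for x, y, z in grid])
    order = np.argsort(keys)
    return np.array_split(order, n_parts)
```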

    Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT

    We introduce a new geometric approach for the homogenization and inverse homogenization of the divergence-form elliptic operator with rough conductivity coefficients σ(x) in dimension two. We show that conductivity coefficients are in one-to-one correspondence with divergence-free matrices and convex functions s(x) over the domain Ω. Although homogenization is a non-linear and non-injective operator when applied directly to conductivity coefficients, homogenization becomes a linear interpolation operator over triangulations of Ω when re-expressed using convex functions, and a volume averaging operator when re-expressed with divergence-free matrices. Using optimal weighted Delaunay triangulations for linearly interpolating convex functions, we obtain an optimally robust homogenization algorithm for arbitrary rough coefficients. Next, we consider inverse homogenization and show how to decompose it into a linear ill-posed problem and a well-posed non-linear problem. We apply this new geometric approach to Electrical Impedance Tomography (EIT). It is known that the EIT problem admits at most one isotropic solution. If an isotropic solution exists, we show how to compute it from any conductivity having the same boundary Dirichlet-to-Neumann map. It is known that the EIT problem admits a unique (stable with respect to G-convergence) solution in the space of divergence-free matrices. As such, we suggest that the space of convex functions is the natural space in which to parameterize solutions of the EIT problem.
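
    The interpolation step can be illustrated with ordinary tools: given samples of a convex function s(x), build a triangulation of the sample points and interpolate linearly over it. The sketch below uses scipy's plain Delaunay triangulation rather than the optimal weighted Delaunay triangulations of the paper, so it shows only the shape of the operation, not the paper's robustness guarantees.

```python
# Sketch of the interpolation step: piecewise-linear interpolation of a
# convex function s(x) over a triangulation of its sample points.
# ASSUMPTION: plain scipy Delaunay is used here instead of the optimal
# *weighted* Delaunay triangulations of the paper.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def piecewise_linear_interpolant(sample_pts, s_values):
    """Linear interpolant of s over the Delaunay triangulation of the samples."""
    tri = Delaunay(sample_pts)
    return LinearNDInterpolator(tri, s_values)

# Example: interpolate the convex function s(x) = |x|^2 on the unit square.
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
s_h = piecewise_linear_interpolant(pts, np.sum(pts**2, axis=1))
print(s_h(0.5, 0.5))  # close to (slightly above) the exact value 0.5
```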

    Adaptive, Anisotropic and Hierarchical cones of Discrete Convex functions

    We address the discretization of optimization problems posed on the cone of convex functions, motivated in particular by the principal agent problem in economics, which models the impact of monopoly on product quality. Consider a two-dimensional domain, sampled on a grid of N points. We show that the cone of restrictions to the grid of convex functions is in general characterized by N^2 linear inequalities; a direct computational use of this description therefore has prohibitive complexity. We thus introduce a hierarchy of sub-cones of discrete convex functions, associated to stencils which can be adaptively, locally, and anisotropically refined. Numerical experiments optimize the accuracy/complexity tradeoff through the use of a posteriori stencil refinement strategies.
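
    The coarsest members of such a hierarchy can be described by cheap linear inequalities: for each stencil direction e and each grid point x where both neighbors exist, require the second difference u(x+e) + u(x-e) - 2u(x) to be non-negative. The sketch below tests membership in this stencil-based sub-cone; the specific stencil is an assumption for illustration, and the full cone of grid restrictions of convex functions requires on the order of N^2 inequalities, as the abstract notes.

```python
# Sketch: membership test for a coarse stencil-based sub-cone of discrete
# convex functions on a grid.  Each stencil direction e contributes the
# linear inequality u(x+e) + u(x-e) - 2 u(x) >= 0 at every admissible point.
# ASSUMPTION: the stencil below is illustrative; refining it enlarges the
# sub-cone, which is the hierarchy described in the abstract.
import numpy as np

STENCIL = [(1, 0), (0, 1), (1, 1), (1, -1)]  # small illustrative stencil

def in_stencil_subcone(u, stencil=STENCIL, tol=1e-12):
    """Check all second differences of the grid function u along the stencil."""
    n, m = u.shape
    for dx, dy in stencil:
        for x in range(abs(dx), n - abs(dx)):
            for y in range(abs(dy), m - abs(dy)):
                second_diff = u[x + dx, y + dy] + u[x - dx, y - dy] - 2 * u[x, y]
                if second_diff < -tol:
                    return False
    return True

# Example: u(x, y) = x^2 + y^2 sampled on a grid passes the test.
xs = np.linspace(-1, 1, 21)
U = xs[:, None] ** 2 + xs[None, :] ** 2
print(in_stencil_subcone(U))  # True
```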