
    On the minimum number of simplex shapes in longest edge bisection refinement of a regular n-simplex

    In several areas, such as Global Optimization using branch-and-bound methods, the unit n-simplex is refined by bisecting the longest edge, so that a binary search tree appears. This process generates simplices belonging to different shape classes. Having fewer simplex shapes facilitates predicting the further workload from a node in the binary tree, because the same shape leads to the same sub-tree. Irregular sub-simplices generated in the refinement process may have more than one longest edge when n \geqslant 3. The question is how to choose the longest edge to be bisected such that the number of shape classes is as small as possible. We develop a Branch-and-Bound (B&B) algorithm to find the minimum number of classes in the refinement process. The developed B&B algorithm finds a minimum of eight shape classes for a regular 3-simplex. Due to the high computational cost of solving this combinatorial problem, future research focuses on using high performance computing to derive the minimum number of shapes in higher dimensions.
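
    As an illustration of the refinement step discussed above, the following minimal Python sketch bisects the longest edge of a simplex stored as a list of NumPy vertex arrays; it is not the paper's B&B algorithm, and its first-found tie-breaking rule is an assumption made for the example, precisely the choice the paper seeks to optimize.

        # Minimal sketch of longest-edge bisection of an n-simplex (illustration only,
        # not the paper's branch-and-bound algorithm).
        import itertools
        import numpy as np

        def longest_edge_bisection(simplex):
            """Split a simplex (list of n+1 vertices) into two children by
            bisecting one of its longest edges at the midpoint."""
            # Find a longest edge; ties are broken by taking the first pair found,
            # which is exactly the choice the paper seeks to optimize.
            i, j = max(itertools.combinations(range(len(simplex)), 2),
                       key=lambda e: np.linalg.norm(simplex[e[0]] - simplex[e[1]]))
            mid = 0.5 * (simplex[i] + simplex[j])
            child_a = [mid if k == i else v for k, v in enumerate(simplex)]
            child_b = [mid if k == j else v for k, v in enumerate(simplex)]
            return child_a, child_b

        # Example: one bisection step on a regular 3-simplex embedded in R^4.
        regular_3_simplex = [np.eye(4)[k] for k in range(4)]
        left, right = longest_edge_bisection(regular_3_simplex)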

    ColDICE: a parallel Vlasov-Poisson solver using moving adaptive simplicial tessellation

    Numerically resolving the Vlasov-Poisson equations for initially cold systems can be reduced to following the evolution of a three-dimensional sheet evolving in six-dimensional phase-space. We describe a public parallel numerical algorithm that represents the phase-space sheet with a conforming, self-adaptive simplicial tessellation whose vertices follow the Lagrangian equations of motion. The algorithm is implemented in both six- and four-dimensional phase-space. Refinement of the tessellation mesh is performed using the bisection method and a local second-order representation of the phase-space sheet relying on additional tracers created when needed at runtime. In order to best preserve the Hamiltonian nature of the system, refinement is anisotropic and constrained by measurements of local Poincaré invariants. Resolution of the Poisson equation is performed using the fast Fourier method on a regular rectangular grid, similarly to particle-in-cell codes. To compute the density projected onto this grid, the intersection of the tessellation and the grid is calculated using the method of Franklin and Kankanhalli (1993) generalised to linear order. As preliminary tests of the code, we study in four-dimensional phase-space the evolution of an initially small patch in a chaotic potential and the cosmological collapse of a fluctuation composed of two sinusoidal waves. We also perform a "warm" dark matter simulation in six-dimensional phase-space that we use to check the parallel scaling of the code.
    Comment: Code and illustration movies available at: http://www.vlasix.org/index.php?n=Main.ColDICE - Article submitted to Journal of Computational Physics
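
    The Poisson step described above is the standard FFT solve on a periodic rectangular grid used by particle-in-cell style codes; the sketch below shows that generic technique in Python/NumPy, not ColDICE itself, and the grid size, box size, and normalisation of the source term are assumptions made for the example.

        # Minimal sketch of an FFT-based Poisson solve on a periodic cubic grid
        # (generic particle-in-cell style step, not the ColDICE implementation).
        import numpy as np

        def poisson_fft(density, box_size=1.0, G=1.0):
            """Solve laplacian(phi) = 4*pi*G*(density - mean) with periodic boundaries."""
            n = density.shape[0]
            delta_k = np.fft.fftn(density - density.mean())
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0                    # avoid division by zero for the mean mode
            phi_k = -4.0 * np.pi * G * delta_k / k2
            phi_k[0, 0, 0] = 0.0                 # fix the potential to zero mean
            return np.real(np.fft.ifftn(phi_k))

        rho = np.random.rand(64, 64, 64)         # toy density field on a 64^3 grid
        phi = poisson_fft(rho)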

    Average Interpolation Under the Maximum Angle Condition

    Interpolation error estimates needed in common finite element applications using simplicial meshes typically impose restrictions on both the smoothness of the interpolated functions and the shape of the simplices. While the simplest theory can be generalized to admit less smooth functions (e.g., functions in H^1(\Omega) rather than H^2(\Omega)) and more general shapes (e.g., the maximum angle condition rather than the minimum angle condition), existing theory does not allow these extensions to be performed simultaneously. By localizing over a well-shaped auxiliary spatial partition, error estimates are established under minimal function smoothness and mesh regularity. This construction is especially important in two cases: L^p(\Omega) estimates for data in W^{1,p}(\Omega) hold for meshes without any restrictions on simplex shape, and W^{1,p}(\Omega) estimates for data in W^{2,p}(\Omega) hold under a generalization of the maximum angle condition which previously required p>2 for standard Lagrange interpolation.
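
    For orientation, the two cases mentioned above have the generic form below, where I_h is the interpolation operator, h_T the diameter of simplex T, and C a constant whose dependence on simplex shape is exactly what is at stake (an illustrative sketch of the type of estimate, not a statement quoted from the paper):

        \| u - I_h u \|_{L^p(T)} \le C \, h_T \, | u |_{W^{1,p}(T)},
        \qquad
        | u - I_h u |_{W^{1,p}(T)} \le C \, h_T \, | u |_{W^{2,p}(T)}.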

    Semantic 3D Reconstruction with Finite Element Bases

    We propose a novel framework for the discretisation of multi-label problems on arbitrary, continuous domains. Our work bridges the gap between general FEM discretisations and labeling problems that arise in a variety of computer vision tasks, including, for instance, those derived from the generalised Potts model. Starting from the popular formulation of labeling as a convex relaxation by functional lifting, we show that FEM discretisation is valid for the most general case, where the regulariser is anisotropic and non-metric. While our findings are generic and applicable to different vision problems, we demonstrate their practical implementation in the context of semantic 3D reconstruction, where such regularisers have proved particularly beneficial. The proposed FEM approach leads to a smaller memory footprint as well as faster computation, and it constitutes a very simple way to enable variable, adaptive resolution within the same model.
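
    As a generic illustration of the finite element ingredient only (not the paper's discretisation of the lifted labeling problem), the sketch below evaluates the piecewise-linear (P1) basis functions of a simplex at a point via barycentric coordinates; the function name and the 2D triangle are assumptions made for the example.

        # Minimal sketch: barycentric coordinates equal the values of the P1 (hat)
        # finite element basis functions of a simplex at a point (illustration only).
        import numpy as np

        def p1_basis_values(vertices, point):
            """Barycentric coordinates of `point` in the d-simplex whose d+1 vertices
            are the rows of `vertices` (shape (d+1, d))."""
            d = vertices.shape[1]
            # Solve  sum_i lambda_i * v_i = point  together with  sum_i lambda_i = 1.
            A = np.vstack([vertices.T, np.ones(d + 1)])
            b = np.append(point, 1.0)
            return np.linalg.solve(A, b)

        tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # reference triangle
        print(p1_basis_values(tri, np.array([0.25, 0.25])))    # -> [0.5, 0.25, 0.25]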

    Local Covering Optimality of Lattices: Leech Lattice versus Root Lattice E8

    We show that the Leech lattice gives a sphere covering which is locally least dense among lattice coverings. We show that a similar result is false for the root lattice E8. For this we construct a less dense covering lattice whose Delone subdivision has a common refinement with the Delone subdivision of E8. The new lattice yields a sphere covering which is more than 12% less dense than the formerly best known one, given by the lattice A8*. Currently, the Leech lattice is the first and only known example of a locally optimal lattice covering having a non-simplicial Delone subdivision. We hereby in particular answer a question of Dickson posed in 1968. By showing that the Leech lattice is rigid, our answer is the strongest possible in a sense.
    Comment: 13 pages; (v2) major revision: proof of rigidity corrected, full discussion of the E8 case included, source of (v3) contains a MAGMA program, (v4) some corrections
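
    For reference, the quantity being compared is the covering density of a lattice L in R^n, given generically (this is the standard definition, not a formula quoted from the paper) in terms of the covering radius \mu(L), the covolume \det(L), and the volume \kappa_n of the unit n-ball:

        \Theta(L) = \frac{\kappa_n \, \mu(L)^n}{\det(L)} \ge 1,

    so one lattice covering is "less dense" than another precisely when its \Theta is smaller.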

    A tetrahedral space-filling curve for non-conforming adaptive meshes

    We introduce a space-filling curve for triangular and tetrahedral red-refinement that can be computed using bitwise interleaving operations similar to the well-known Z-order or Morton curve for cubical meshes. To store sufficient information for random access, we define a low-memory encoding using 10 bytes per triangle and 14 bytes per tetrahedron. We present algorithms that compute the parent, children, and face-neighbors of a mesh element in constant time, as well as the next and previous element in the space-filling curve and whether a given element is on the boundary of the root simplex or not. Our presentation concludes with a scalability demonstration that creates and adapts selected meshes on a large distributed-memory system.
    Comment: 33 pages, 12 figures, 8 tables
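
    For comparison, the cubical Z-order (Morton) index mentioned above is obtained by interleaving the bits of the integer coordinates; the Python sketch below shows that classic construction, not the paper's 10-byte/14-byte simplex encoding.

        # Minimal sketch of the cubical Z-order / Morton index: interleave the bits
        # of the integer coordinates (the classic cubical curve, not the paper's
        # triangle/tetrahedron encoding).
        def morton_index(coords, bits):
            """Interleave `bits` bits of each integer coordinate in `coords`."""
            index = 0
            for b in range(bits):                  # bit position, least significant first
                for d, c in enumerate(coords):     # take one bit from each dimension
                    index |= ((c >> b) & 1) << (b * len(coords) + d)
            return index

        # The eight children of a cube appear consecutively along the curve:
        print([morton_index((x, y, z), bits=1)
               for z in (0, 1) for y in (0, 1) for x in (0, 1)])   # -> [0, 1, ..., 7]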