
    Low-order continuous finite element spaces on hybrid non-conforming hexahedral-tetrahedral meshes

    This article deals with solving partial differential equations with the finite element method on hybrid non-conforming hexahedral-tetrahedral meshes. By non-conforming, we mean that a quadrangular face of a hexahedron can be connected to two triangular faces of tetrahedra. We introduce a set of low-order continuous (C0) finite element spaces defined on these meshes. They are built from standard tri-linear and quadratic Lagrange finite elements, with an extra set of constraints at non-conforming hexahedron-tetrahedron junctions to recover continuity. We consider both the continuity of the geometry and the continuity of the function basis, as follows: the continuity of the geometry is achieved by using quadratic mappings for tetrahedra connected to tri-affine hexahedra, and the continuity of the interpolating functions is enforced in a similar manner by using quadratic Lagrange bases on tetrahedra, with constraints at non-conforming junctions to match tri-linear hexahedra. The so-defined function spaces are validated numerically on simple Poisson and linear elasticity problems for which an analytical solution is known. We observe that using a hybrid mesh with the proposed function spaces yields an accuracy significantly better than when using linear tetrahedra, and only slightly worse than when solely using tri-linear hexahedra. As a consequence, the proposed function spaces may be a promising alternative for complex geometries that are out of reach of existing full-hexahedral meshing methods.
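    As an illustration of the kind of junction constraint involved (notation introduced here for illustration, not necessarily the paper's exact formulation): when a bilinear quadrilateral face of a hexahedron is split into two triangles along a diagonal, the quadratic mid-edge degree of freedom of that diagonal can be constrained to the value of the bilinear interpolant at the face center,

        u_{\mathrm{mid}} = \tfrac{1}{4}\,(u_1 + u_2 + u_3 + u_4),

    where u_1, ..., u_4 are the values at the four corners of the quadrangular face; such a constraint restores C0 continuity across the non-conforming junction.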

    Robustness and Efficiency of Geometric Programs: The Predicate Construction Kit (PCK)

    In this article, I focus on the robustness of geometric programs (e.g., Delaunay triangulation, intersection between surface or volume meshes, Voronoi-based meshing...) with respect to numerical degeneracies. Some of these geometric programs require "exotic" predicates that are not available in standard libraries (e.g., J.-R. Shewchuk's implementation and CGAL). I propose a complete methodology and a sample open-source implementation of a toolset (PCK: Predicate Construction Kit) that makes it reasonably easy to design geometric programs free of numerical errors. The C++ code of each predicate is automatically generated from its formula, written in a simple specification language. Robustness is obtained through a combination of arithmetic filters, expansion arithmetic and symbolic perturbation. As an example of my approach, I give the formulas and PCK source code for the four predicates used to compute the intersection between a 3D Voronoi diagram and a tetrahedral mesh, as well as symbolic perturbations that provably escape the corner cases. This makes it possible to robustly compute the intersection between a Voronoi diagram and a triangle mesh, or between a Voronoi diagram and a tetrahedral mesh. Such an algorithm has several applications, including surface and volume meshing based on Lloyd relaxation.
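    A hedged illustration (hand-written here, not PCK-generated code) of the general shape of a filtered predicate; the filter constant is purely illustrative, not a proven error bound:

        #include <cmath>

        // Sign of the 2x2 determinant | bx-ax  by-ay |
        //                             | cx-ax  cy-ay |
        // i.e. on which side of the oriented line (a,b) the point c lies.
        int orient2d_sign(double ax, double ay, double bx, double by,
                          double cx, double cy) {
            double l = (bx - ax) * (cy - ay);
            double r = (by - ay) * (cx - ax);
            double det = l - r;
            // Static-filter idea: if |det| is large relative to the rounding
            // error, the floating-point sign is certain; otherwise one would
            // escalate to exact (expansion) arithmetic and, if the result is
            // still zero, apply a symbolic perturbation.
            double bound = 1e-12 * (std::fabs(l) + std::fabs(r));
            if (det >  bound) return  1;
            if (det < -bound) return -1;
            return 0; // in a robust implementation: exact fallback here
        }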

    Numerical Methods for Digital Geometry Processing

    Digital Geometry Processing appeared relatively recently (in the mid-1990s) as a promising avenue to solve the geometric modeling problems encountered when manipulating surfaces represented by discrete elements (i.e., meshes). Since a mesh may be considered to be a sampling of a surface - in other words a signal - the DSP (digital signal processing) formalism was a natural theoretical background for this discipline (see e.g. Taubin 95). In this discipline, discrete fairing (Kobbelt 97) and mesh parameterization (Floater 98) have been two active research topics over the last few years. In parallel with the evolution of this discipline, acquisition techniques have made huge advances, and today's meshes acquired from real objects by laser range scanners are larger and larger (30 million triangles is now common). This causes difficulties when trying to apply DGP tools to these meshes. The kernel of a DGP algorithm is a numerical method, used either to solve a linear system or to minimize a multivariate function. The Gauss-Seidel iteration and gradient descent methods used in the early days of DGP do not scale up when applied to huge meshes. In this presentation, our goal is to give a survey of classic and more recent numerical methods, and to show how they can be applied to DGP problems, from the theoretical point of view down to the implementation. We will focus on two different classes of DGP problems (mesh fairing and mesh parameterization), and show solutions for linear problems, quadratic problems, and general non-linear problems, with and without constraints. In particular, we give a general formulation of quadratic problems with reduced degrees of freedom that can be used as a general framework to solve a wide class of DGP problems. Our method is implemented in the OpenNL library, freely available on the web. The presentation will be illustrated with live demos of the methods.
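    To sketch the "reduced degrees of freedom" formulation (notation introduced here; the presentation may state it differently): for a quadratic objective F(x) = x^T A x - 2 b^T x with some variables locked to prescribed values, partition x into free and locked parts, x = (x_f, x_l). Minimizing over the free part only leads to the reduced linear system

        A_{ff} \, x_f = b_f - A_{fl} \, x_l,

    so constrained vertices simply move their contribution to the right-hand side; the same mechanism covers boundary conditions in fairing and pinned vertices in parameterization.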

    What you seam is what you get

    3D paint systems opened the door to new texturing tools that operate directly on 3D objects. However, although much time and effort has been devoted to mesh parameterization, UV unwrapping is still known to be a tedious and time-consuming process in Computer Graphics production. We think that this is mainly due to the lack of a well-adapted segmentation method. To make UV unwrapping easier, we propose a new system based on three components:
    * a novel spectral segmentation method that proposes reasonable initial seams to the user;
    * several tools to edit and constrain the seams; during editing, a parameterization is interactively updated, allowing for direct feedback (our interactive constrained parameterization method is based on simple, yet original, modifications of the ABF++ method that make it behave as an interactive constraint solver);
    * a method to map the two halves of symmetric objects to the same texels in UV space, thus halving texture memory requirements for symmetric objects.
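    A minimal sketch of the shared-texel idea, under assumptions made here for illustration (a mirror symmetry expressed as a reflection in UV space, which is not necessarily how the paper implements it): points on the mirrored half are folded onto the UV region of the primary half so that symmetric counterparts sample the same texels.

        // Illustrative helper (not from the paper): fold a UV coordinate of the
        // mirrored half across a vertical symmetry axis u = u_sym, so that both
        // halves of the object address the same texels.
        struct UV { double u, v; };
        UV fold_uv(UV t, double u_sym) {
            return UV{ 2.0 * u_sym - t.u, t.v };
        }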

    Simple and Scalable Surface Reconstruction

    We present a practical reconstruction algorithm that generates a surface triangulation from an input pointset. In the result, the input points appear as vertices of the generated triangulation. The algorithm has several interesting properties: it is very simple to implement, it is time and memory efficient, and it is trivially parallelized. On standard hardware (Core i7, 16 GB), it takes less than 10 seconds to reconstruct a surface from 1 million points, and it scales up to 36 million points (then it takes 350 seconds). On a phone (ARMv7 NEON, quad core), it takes 55 seconds to reconstruct a surface from 900K points. The algorithm computes the Delaunay triangulation of the input pointset restricted to a "thickening" of the pointset (similarly to several existing methods, such as alpha-shapes, Crust and Cocone). By considering the problem from the Voronoi point of view (rather than Delaunay), we use a simple observation (the radius of security) that makes the problem simpler. The Delaunay triangulation data structure and associated algorithms are replaced by simpler ones (a KD-tree and convex clipping), while the same set of triangles is provably obtained. The restricted Delaunay triangulation can thus be computed by an algorithm that is no longer than 200 lines of code, memory efficient and parallel. The so-computed restricted Delaunay triangulation is finally post-processed to remove the non-manifold triangles that appear in regions where the sampling was not regular or dense enough. Sensitivity to outliers and noise is not addressed here: noisy inputs need to be pre-processed with a pointset filtering method. In the presented experimental results, we use two iterations of projection onto the best approximating plane of the 30 nearest neighbours (more sophisticated filters may be used if the input pointset has many outliers).
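    A hedged C++ sketch (a simplification written here, not the paper's code) of the radius-of-security test: while clipping the Voronoi cell of a seed by bisector planes of its KD-tree neighbours in order of increasing distance, clipping can stop as soon as the next neighbour is farther from the seed than twice the distance to the farthest vertex of the cell clipped so far, since no remaining bisector can then cut the cell.

        #include <algorithm>
        #include <cmath>
        #include <vector>

        // Illustrative types; a real implementation would use its own geometry kernel.
        struct Vec3 { double x, y, z; };

        static double dist(const Vec3& a, const Vec3& b) {
            double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // Radius-of-security test for one Voronoi cell.
        bool cell_is_secure(const Vec3& seed,
                            const std::vector<Vec3>& clipped_cell_vertices,
                            double dist_to_next_neighbour) {
            double R = 0.0;
            for (const Vec3& v : clipped_cell_vertices) {
                R = std::max(R, dist(seed, v));
            }
            return dist_to_next_neighbour > 2.0 * R;
        }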

    Lp Centroidal Voronoi Tessellation and its applications

    This paper introduces Lp-Centroidal Voronoi Tessellation (Lp-CVT), a generalization of CVT that minimizes a higher-order moment of the coordinates on the Voronoi cells. This generalization allows for aligning the axes of the Voronoi cells with a predefined background tensor field (anisotropy). Lp-CVT is computed by a quasi-Newton optimization framework, based on closed-form derivations of the objective function and its gradient. The derivations are given for both surface meshing (Ω is a triangulated mesh with per-facet anisotropy) and volume meshing (Ω is the interior of a closed triangulated mesh with a 3D anisotropy field). Applications to anisotropic, quad-dominant surface remeshing and to hex-dominant volume meshing are presented. Unlike previous work, Lp-CVT captures sharp features and intersections without requiring any pre-tagging.
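    Roughly, and in notation introduced here (see the paper for the exact definition), the minimized energy is of the form

        F_{L_p}(X) = \sum_i \int_{\Omega_i \cap \Omega} \big\| M(y)\,(y - x_i) \big\|_p^p \; dy,

    where Ω_i is the Voronoi cell of site x_i and M(y) encodes the background anisotropy; with p = 2 and M = I one recovers the classical CVT energy.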

    ARDECO: Automatic Region DEtection and COnversion

    We present Ardeco, a new algorithm for image abstraction and conversion from bitmap images into vector graphics. Given a bitmap image, our algorithm automatically computes the set of vector primitives and gradients that best approximates the image. In addition, more details can be generated in user-selected important regions, defined from eye-tracking data or from an importance map painted by the user. Our algorithm is based on a new two-level variational parametric segmentation algorithm that minimizes Mumford and Shah's energy and operates on an intermediate triangulation, well adapted to the features of the image.
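    In simplified notation introduced here, a Mumford-Shah-style segmentation energy of this kind balances per-region fitting error against boundary complexity:

        E(\{R_k\}, \{P_k\}) = \sum_k \int_{R_k} \| I(x) - P_k(x) \|^2 \, dx \;+\; \lambda \, \mathrm{length}(\partial R),

    where I is the input image, P_k is the vector primitive (e.g., a constant or gradient fill) fitted on region R_k, and λ controls the degree of abstraction.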

    Dynamic Mesh Optimization for Free Surfaces in Fluid Simulation

    We propose a new free-surface fluid front-tracking algorithm. Based on Centroidal Voronoi Diagram optimizations, our method creates a series of either isotropic or anisotropic meshes that conform to an evolving surface.
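    For context, a centroidal Voronoi optimization moves each site toward the centroid of its Voronoi cell (Lloyd's update); in notation introduced here, with density ρ,

        x_i \leftarrow \frac{\int_{\Omega_i} y \, \rho(y) \, dy}{\int_{\Omega_i} \rho(y) \, dy}.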

    Optimal Transport Reconstruction of Biased Tracers in Redshift Space

    Recent research has emphasized the benefits of accurately reconstructing the initial Lagrangian positions of biased tracers from their positions at a later time, to gain cosmological information. A weighted semi-discrete optimal transport algorithm can achieve the required accuracy, provided the late-time positions are known, with minimal information about the background cosmology. The algorithm's performance relies on knowing the masses of the biased tracers, and depends on how one models the distribution of the remaining mass that is not associated with these tracers. We demonstrate that simple models of the remaining mass result in accurate retrieval of the initial Lagrangian positions, which we quantify using pair statistics and the void probability function. This is true even if the input positions are affected by redshift-space distortions. The most sophisticated models assume that the masses of the tracers and the amount and clustering of the missing mass are known; we show that the method is robust to realistic errors in the masses of the tracers, and remains so as the model for the missing mass becomes increasingly crude.
    Comment: 15 pages, 18 figures
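    Schematically, and in notation introduced here, weighted semi-discrete optimal transport with quadratic cost assigns each initial (Lagrangian) volume element q to the tracer i that minimizes a power distance,

        i(q) = \arg\min_i \left( \| q - x_i \|^2 - w_i \right),

    where x_i is the observed late-time position of tracer i and the weights w_i are adjusted so that the total mass assigned to each tracer matches its estimated mass m_i.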

    A one point integration rule over star convex polytopes

    In this paper, the recently proposed linearly consistent one point integration rule for meshfree methods is extended to arbitrary polytopes. The salient feature of the proposed technique is that it requires only one integration point within each n-sided polytope, as opposed to 3n in Francis et al. (2017) and 13n integration points in the conventional approach for numerically integrating the weak form in two dimensions. The essence of the proposed technique is to approximate the compatible strain by a linear smoothing function and to evaluate the smoothed nodal derivatives by the discrete form of the divergence theorem at the geometric center. This is done via a Taylor expansion of the weak form, which facilitates the use of the smoothed nodal derivatives as a stabilization term. This translates to a 50% and 30% reduction in the overall computational time in two and three dimensions, respectively, whilst preserving the accuracy and the convergence rates.
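    In notation introduced here, the strain-smoothing step replaces the compatible strain over a smoothing cell Ω_c by a boundary integral obtained from the divergence theorem,

        \tilde{\varepsilon}_{ij} = \frac{1}{|\Omega_c|} \int_{\partial \Omega_c} \tfrac{1}{2}\left( u_i\, n_j + u_j\, n_i \right) d\Gamma,

    where n is the outward unit normal of the cell boundary; the one-point rule evaluates this smoothed strain at the geometric center and adds a Taylor-expansion-based stabilization term.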