Approximation and geometric modeling with simplex B-splines associated with irregular triangles
Bivariate quadratic simplicial B-splines, defined by their corresponding sets of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain, are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the domain. We emphasize that the vertices of the triangles initially define the knots of the B-splines and generally do not coincide with the abscissae of the data. Thus, this approach is well suited to processing scattered data.

With each vertex of a given triangle we associate two additional points, which give rise to six configurations of five knots defining six linearly independent bivariate quadratic B-splines supported on the convex hulls of the corresponding five knots.

If we consider the vertices of the triangulation as threefold knots, the bivariate quadratic B-splines reduce to the well-known bivariate quadratic Bernstein-Bézier polynomials on triangles. Thus we may think of the B-splines as smoothed versions of Bernstein-Bézier polynomials over the entire domain. From this degenerate Bernstein-Bézier situation we deduce rules for locating the additional points associated with each vertex so as to establish knot configurations that allow the modeling of discontinuities in the function itself or in any of its directional derivatives. We find that four collinear knots out of the five defining an individual quadratic B-spline generate a discontinuity in the surface along the line they constitute, and that, analogously, three collinear knots generate a discontinuity in a first derivative.
Finally, the coefficients of the linear combinations of normalized simplicial B-splines are visualized as geometric control points satisfying the convex hull property.
Thus, bivariate quadratic B-splines associated with irregular triangles provide great flexibility for approximating and modeling rapidly changing functions, or even functions with prescribed discontinuities, from scattered data.
An example of least squares approximation with simplex splines is presented.
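The least squares setup described above can be sketched in a simplified form. This is not the paper's simplex B-spline basis; as a hedged stand-in it fits scattered data in the six-dimensional space of bivariate quadratics (the space the degenerate Bernstein-Bézier case lives in), with all function names chosen for illustration only.

```python
import numpy as np

def quadratic_basis(x, y):
    """Bivariate quadratic monomial basis [1, x, y, x^2, xy, y^2]."""
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_scattered(points, values):
    """Least squares fit of a bivariate quadratic to scattered data."""
    A = quadratic_basis(points[:, 0], points[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def evaluate(coeffs, x, y):
    return quadratic_basis(np.atleast_1d(x), np.atleast_1d(y)) @ coeffs

# Scattered sample sites (not on any grid); values come from a known quadratic,
# so the least squares fit should reproduce it exactly.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
                [0.5, 0.2], [0.3, 0.7], [0.8, 0.5], [0.2, 0.4]])
f = lambda x, y: 1.0 + 2.0 * x - 3.0 * y + x**2
c = fit_scattered(pts, f(pts[:, 0], pts[:, 1]))
```

Since the target lies in the span of the basis, the residual of the normal equations is zero and the fit interpolates the underlying function everywhere, not just at the sample sites.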
Two triangulation methods based on edge refinement
In this paper two curvature-adaptive methods of surface triangulation are presented. Both methods are based on edge refinement to obtain a triangulation compatible with the curvature requirements. The first method applies an incremental, constrained Delaunay triangulation and uses curvature bounds to determine whether an edge of the triangulation is admissible. The second method also uses these curvature bounds in the edge refinement process, i.e. in computing the location of a refining point and in the re-triangulation needed after the insertion of this refining point. Results comparing both approaches are presented.
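The refinement loop above can be sketched minimally. In this hedged version a plain edge-length bound stands in for the paper's curvature-based admissibility test, the refining point is simply the edge midpoint, and `scipy.spatial.Delaunay` performs the re-triangulation after each insertion round; all names are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

def edges_of(tri):
    """Unique undirected edges of a Delaunay triangulation."""
    e = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            e.add((a, b))
    return e

def refine(points, max_len, max_rounds=20):
    """Insert midpoints of inadmissible (here: too long) edges, retriangulate."""
    pts = np.asarray(points, float)
    for _ in range(max_rounds):
        tri = Delaunay(pts)
        long_edges = [(a, b) for a, b in edges_of(tri)
                      if np.linalg.norm(pts[a] - pts[b]) > max_len]
        if not long_edges:
            break
        mids = [(pts[a] + pts[b]) / 2.0 for a, b in long_edges]
        pts = np.vstack([pts, mids])
    return pts, Delaunay(pts)

# Start from the four corners of the unit square; refine until no edge
# exceeds the admissibility bound.
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
pts, tri = refine(corners, max_len=0.5)
```

A curvature-adaptive version would replace the fixed `max_len` with a bound that varies over the surface, which is exactly where the two methods in the paper differ.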
Three-dimensional unstructured grid generation via incremental insertion and local optimization
Algorithms for the generation of 3D unstructured surface and volume grids are discussed. These algorithms are based on incremental insertion and local optimization. The present algorithms are very general and permit local grid optimization based on various measures of grid quality. This is very important; unlike the 2D Delaunay triangulation, the 3D Delaunay triangulation appears not to have a lexicographic characterization of angularity. (The Delaunay triangulation is known to minimize the maximum containment sphere, but unfortunately this is not true lexicographically.) Consequently, Delaunay triangulations in three-space can result in poorly shaped tetrahedral elements. Using the present algorithms, 3D meshes can be constructed which optimize a certain angle measure, albeit locally. We also discuss the combinatorial aspects of the algorithm as well as implementation details.
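One concrete example of such a quality measure can be sketched as follows. This is not the paper's specific angle measure; it is an assumed, commonly used shape measure (normalized volume over longest edge cubed) that equals 1 for the regular tetrahedron and tends to 0 for the poorly shaped slivers the abstract warns about.

```python
import numpy as np

def tet_quality(p0, p1, p2, p3):
    """Shape quality 6*sqrt(2)*V / L_max^3 of a tetrahedron.

    Equals 1 for a regular tetrahedron and approaches 0 as the
    element degenerates (e.g. a flat "sliver")."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    vol = abs(np.linalg.det(np.array([p1 - p0, p2 - p0, p3 - p0]))) / 6.0
    verts = [p0, p1, p2, p3]
    lmax = max(np.linalg.norm(a - b)
               for i, a in enumerate(verts) for b in verts[i + 1:])
    return 6.0 * np.sqrt(2.0) * vol / lmax**3

# Regular tetrahedron: quality exactly 1.
regular = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
# A nearly flat "sliver" tetrahedron: quality close to 0.
sliver = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.01)]
```

A local optimization pass would evaluate such a measure on the tetrahedra affected by a candidate face/edge swap and keep whichever configuration scores higher.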
Adaptive, Anisotropic and Hierarchical cones of Discrete Convex functions
We address the discretization of optimization problems posed on the cone of convex functions, motivated in particular by the principal-agent problem in economics, which models the impact of monopoly on product quality. Consider a two-dimensional domain, sampled on a grid of N points. We show that the cone of restrictions to the grid of convex functions is in general characterized by N^2 linear inequalities; a direct computational use of this description therefore has prohibitive complexity. We thus introduce a hierarchy of sub-cones of discrete convex functions, associated to stencils which can be adaptively, locally, and anisotropically refined. Numerical experiments optimize the accuracy/complexity tradeoff through the use of a posteriori stencil refinement strategies.
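The smallest sub-cone in such a hierarchy can be sketched directly. The check below tests nonnegative second differences along a minimal four-direction stencil (the two axes and the two diagonals); these are necessary conditions for discrete convexity only, since the full characterization needs up to N^2 inequalities, and the stencil choice here is an illustrative assumption.

```python
import numpy as np

def in_stencil_cone(F):
    """Test membership of grid values F in a small sub-cone of discrete
    convex functions: second differences along (1,0), (0,1), (1,1), (1,-1)
    must be nonnegative at every interior point of the stencil."""
    dirs = [(1, 0), (0, 1), (1, 1), (1, -1)]
    n, m = F.shape
    for di, dj in dirs:
        for i in range(n):
            for j in range(m):
                i0, j0, i1, j1 = i - di, j - dj, i + di, j + dj
                if 0 <= i0 < n and 0 <= j0 < m and 0 <= i1 < n and 0 <= j1 < m:
                    # F[x+d] + F[x-d] - 2 F[x] >= 0 along direction d.
                    if F[i1, j1] + F[i0, j0] - 2.0 * F[i, j] < -1e-12:
                        return False
    return True

X, Y = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9), indexing="ij")
```

The saddle x*y shows why the diagonal directions matter: its second differences along the axes and along (1,1) are all nonnegative, and only the (1,-1) direction detects the non-convexity. Refining the stencil with more directions shrinks the sub-cone toward the true discrete convex cone.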
There are Plane Spanners of Maximum Degree 4
Let E be the complete Euclidean graph on a set of points embedded in the
plane. Given a constant t >= 1, a spanning subgraph G of E is said to be a
t-spanner, or simply a spanner, if for any pair of vertices u,v in E the
distance between u and v in G is at most t times their distance in E. A spanner
is plane if its edges do not cross.
This paper considers the question: "What is the smallest maximum degree that can always be achieved for a plane spanner of E?" Without the planarity constraint, it is known that the answer is 3, which is thus the best known lower bound on the degree of any plane spanner. With the planarity requirement, the best known upper bound on the maximum degree is 6, the last in a long sequence of results improving the upper bound. In this paper we show that the complete Euclidean graph always contains a plane spanner of maximum degree at most 4, making a significant step toward closing the question. Our construction leads to an efficient algorithm for obtaining the spanner from Chew's L1-Delaunay triangulation.
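The defining quantity t can be computed directly for any candidate subgraph. The sketch below does not construct the paper's degree-4 spanner; it only measures the stretch factor of a given geometric graph by comparing Dijkstra shortest-path distances against Euclidean distances, with all names illustrative.

```python
import heapq
import math

def stretch_factor(points, edges):
    """Max over vertex pairs of (graph distance / Euclidean distance)."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        w = math.dist(points[a], points[b])
        adj[a].append((b, w))
        adj[b].append((a, w))

    def dijkstra(src):
        dist = [math.inf] * n
        dist[src] = 0.0
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale heap entry
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

    t = 1.0
    for u in range(n):
        du = dijkstra(u)
        for v in range(u + 1, n):
            t = max(t, du[v] / math.dist(points[u], points[v]))
    return t

# Unit square with only its boundary edges (maximum degree 2, and plane):
# each diagonal pair must detour around the boundary, giving t = sqrt(2).
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
t = stretch_factor(square, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Verifying that a proposed plane subgraph is a t-spanner for a given constant t reduces to exactly this computation.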
Discrete Geometric Structures in Homogenization and Inverse Homogenization with application to EIT
We introduce a new geometric approach for the homogenization and inverse homogenization of the divergence-form elliptic operator with rough conductivity coefficients in dimension two. We show that conductivity coefficients are in one-to-one correspondence with divergence-free matrices and convex functions over the domain. Although homogenization is a non-linear and non-injective operator when applied directly to conductivity coefficients, homogenization becomes a linear interpolation operator over triangulations of the domain when re-expressed using convex functions, and a volume averaging operator when re-expressed with divergence-free matrices. Using optimal weighted Delaunay triangulations for linearly interpolating convex functions, we obtain an optimally robust homogenization algorithm for arbitrary rough coefficients. Next, we consider inverse homogenization and show how to decompose it into a linear ill-posed problem and a well-posed non-linear problem. We apply this new geometric approach to Electrical Impedance Tomography (EIT). It is known that the EIT problem admits at most one isotropic solution. If an isotropic solution exists, we show how to compute it from any conductivity having the same boundary Dirichlet-to-Neumann map. It is known that the EIT problem admits a unique (stable with respect to G-convergence) solution in the space of divergence-free matrices. As such, we suggest that the space of convex functions is the natural space in which to parameterize solutions of the EIT problem.
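The building block of the interpolation operator mentioned above is linear interpolation of a convex function over a single triangle. The hedged sketch below shows the barycentric form of that interpolant and the Jensen-type inequality it satisfies (the interpolant dominates the convex function inside the triangle); it does not implement the paper's weighted Delaunay machinery.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    T = np.column_stack([b - a, c - a])
    lam = np.linalg.solve(T, p - a)
    return np.array([1.0 - lam.sum(), lam[0], lam[1]])

def linear_interp(f, p, a, b, c):
    """Piecewise-linear interpolant of f at p from its triangle vertex values."""
    lam = barycentric(p, a, b, c)
    return lam @ np.array([f(a), f(b), f(c)])

f = lambda p: p @ p  # convex: |p|^2
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
centroid = (a + b + c) / 3.0
v = linear_interp(f, centroid, a, b, c)
```

Because the interpolant is linear in the vertex values, interpolating over a fixed triangulation is itself a linear operator on the convex function, which is the re-expression the abstract exploits.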
Anisotropic Mesh Adaptation for Image Representation
Triangular meshes have gained much interest in image representation and have been widely used in image processing. This paper introduces a framework of anisotropic mesh adaptation (AMA) methods for image representation and proposes a GPRAMA method based on AMA and a greedy-point-removal (GPR) scheme. Unlike many other methods that triangulate sample points to form the mesh, the AMA methods start directly with a triangular mesh and then adapt the mesh based on a user-defined metric tensor to represent the image. The AMA methods have a clear mathematical framework and provide flexibility for both image representation and image reconstruction. A mesh patching technique is developed for the implementation of the GPRAMA method, which leads to an improved version of the popular GPRFS-ED method. The GPRAMA method can achieve better quality than the GPRFS-ED method but with lower computational cost.
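The "user-defined metric tensor" is the knob that makes the adaptation anisotropic. As an assumed, common construction (not necessarily the one used in the paper), the sketch below builds a per-pixel metric from the discrete Hessian of the image intensity, so the mesh can be stretched along directions of low curvature.

```python
import numpy as np

def hessian_metric(img, h=1.0, eps=1e-3):
    """Per-pixel 2x2 metric tensor M = |H| + eps*I from the image Hessian.

    |H| is the Hessian made positive semi-definite by taking absolute
    values of its eigenvalues; eps keeps M positive definite. This is an
    assumed, commonly used adaptation metric, offered for illustration."""
    gx = np.gradient(img, h, axis=1)
    gy = np.gradient(img, h, axis=0)
    hxx = np.gradient(gx, h, axis=1)
    hxy = np.gradient(gx, h, axis=0)
    hyy = np.gradient(gy, h, axis=0)
    H = np.stack([np.stack([hxx, hxy], -1), np.stack([hxy, hyy], -1)], -2)
    w, v = np.linalg.eigh(H)                       # batched eigendecomposition
    absH = (v * np.abs(w)[..., None, :]) @ np.swapaxes(v, -1, -2)
    return absH + eps * np.eye(2)

# Synthetic image varying quadratically along one axis: the metric should
# detect curvature 2 in that direction and (near) zero in the other.
n, h = 16, 0.1
x = np.arange(n) * h
img = np.tile(x, (n, 1))**2   # intensity = x^2, x varies along axis 1
M = hessian_metric(img, h=h, eps=0.0)
```

Edge-aligned anisotropy falls out of the same construction: along an image edge the intensity curves sharply across the edge and barely along it, so the metric demands short triangles across the edge and allows long ones along it.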
Triangulating the Square and Squaring the Triangle: Quadtrees and Delaunay Triangulations are Equivalent
We show that Delaunay triangulations and compressed quadtrees are equivalent
structures. More precisely, we give two algorithms: the first computes a
compressed quadtree for a planar point set, given the Delaunay triangulation;
the second finds the Delaunay triangulation, given a compressed quadtree. Both
algorithms run in deterministic linear time on a pointer machine. Our work
builds on and extends previous results by Krznaric and Levcopolous and Buchin
and Mulzer. Our main tool for the second algorithm is the well-separated pair
decomposition (WSPD), a structure that has been used previously to find
Euclidean minimum spanning trees in higher dimensions (Eppstein). We show that
knowing the WSPD (and a quadtree) suffices to compute a planar Euclidean
minimum spanning tree (EMST) in linear time. With the EMST at hand, we can find
the Delaunay triangulation in linear time.
As a corollary, we obtain deterministic versions of many previous algorithms
related to Delaunay triangulations, such as splitting planar Delaunay
triangulations, preprocessing imprecise points for faster Delaunay computation,
and transdichotomous Delaunay triangulations.
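The recursive subdivision that quadtrees are built on can be sketched compactly. This is a plain (uncompressed) point quadtree, not the compressed variant the equivalence result actually requires, and the class and parameter names are illustrative.

```python
class QuadtreeNode:
    """Node of a simple point quadtree over an axis-aligned square box.

    A node holding more than one point splits its box into four half-size
    quadrants and recurses; children are created only for nonempty quadrants.
    (A compressed quadtree would additionally contract long chains of
    single-child nodes.)"""

    def __init__(self, x, y, size, points, depth=0, max_depth=16):
        self.box = (x, y, size)
        self.points = points
        self.children = []
        if len(points) > 1 and depth < max_depth:
            half = size / 2.0
            for qx, qy in ((x, y), (x + half, y),
                           (x, y + half), (x + half, y + half)):
                # Half-open ranges assign each point to exactly one quadrant.
                sub = [p for p in points
                       if qx <= p[0] < qx + half and qy <= p[1] < qy + half]
                if sub:
                    self.children.append(
                        QuadtreeNode(qx, qy, half, sub, depth + 1, max_depth))

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Three points in the unit square: one subdivision separates them,
# giving three single-point leaves.
root = QuadtreeNode(0.0, 0.0, 1.0, [(0.1, 0.1), (0.9, 0.9), (0.6, 0.1)])
```

The equivalence result says that this hierarchy of boxes and a Delaunay triangulation of the same points encode each other up to linear-time transformations.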