19 research outputs found

    Economical Delone Sets for Approximating Convex Bodies

    Get PDF
    Convex bodies are ubiquitous in computational geometry and optimization theory. The high combinatorial complexity of multidimensional convex polytopes has motivated the development of algorithms and data structures for approximate representations. This paper demonstrates an intriguing connection between convex approximation and the classical concept of Delone sets from the theory of metric spaces. It shows that with the help of a classical structure from convexity theory, called a Macbeath region, it is possible to construct an epsilon-approximation of any convex body as the union of O(1/epsilon^{(d-1)/2}) ellipsoids, where the center points of these ellipsoids form a Delone set in the Hilbert metric associated with the convex body. Furthermore, a hierarchy of such approximations yields a data structure that answers epsilon-approximate polytope membership queries in O(log (1/epsilon)) time. This matches the best known asymptotic results for this problem, with a data structure that is both simpler and arguably more elegant.
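
    As a concrete illustration of the Macbeath regions underlying this construction, the sketch below tests membership in a (scaled) Macbeath region of a polytope given in halfspace form A y <= b. The function name, the tolerance, and the unit-square example are illustrative assumptions, not taken from the paper; the paper's structure builds ellipsoids from these regions rather than using them directly.

        import numpy as np

        def in_macbeath_region(A, b, x, p, scale=1.0):
            """Test whether p lies in the scaled Macbeath region M^scale(x) of the
            polytope K = {y : A y <= b}, centered at an interior point x.
            Since M(x) = x + ((K - x) ∩ (x - K)), membership reduces componentwise to
            |A (p - x)| <= scale * (b - A x)."""
            slack = b - A @ x              # slack of x against each constraint (>= 0 inside K)
            if np.any(slack < 0):
                raise ValueError("x must lie inside the polytope")
            return bool(np.all(np.abs(A @ (p - x)) <= scale * slack + 1e-12))

        # Example: the square [-1, 1]^2 and a center near its right facet.
        A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
        b = np.ones(4)
        x = np.array([0.5, 0.0])
        print(in_macbeath_region(A, b, x, np.array([0.9, 0.0])))   # True: within the 0.5 slack
        print(in_macbeath_region(A, b, x, np.array([1.1, 0.0])))   # False: outside K itself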

    Smooth Distance Approximation

    Get PDF
    Traditional problems in computational geometry involve aspects that are both discrete and continuous. One such example is nearest-neighbor searching, where the input is discrete, but the result depends on distances, which vary continuously. In many real-world applications of geometric data structures, it is assumed that query results are continuous, free of jump discontinuities. This is at odds with many modern data structures in computational geometry, which employ approximations to achieve efficiency, but these approximations often suffer from discontinuities. In this paper, we present a general method for transforming an approximate but discontinuous data structure into one that produces a smooth approximation, while matching the asymptotic space efficiencies of the original. We achieve this by adapting an approach called the partition-of-unity method, which smoothly blends multiple local approximations into a single smooth global approximation. We illustrate the use of this technique in a specific application of approximating the distance to the boundary of a convex polytope in ℝ^d from any point in its interior. We begin by developing a novel data structure that efficiently computes an absolute ε-approximation to this query in time O(log(1/ε)) using O(1/ε^{d/2}) storage space. Then, we proceed to apply the proposed partition-of-unity blending to guarantee the smoothness of the approximate distance field, establishing optimal asymptotic bounds on the norms of its gradient and Hessian.

    Smooth Distance Approximation

    Full text link
    Traditional problems in computational geometry involve aspects that are both discrete and continuous. One such example is nearest-neighbor searching, where the input is discrete, but the result depends on distances, which vary continuously. In many real-world applications of geometric data structures, it is assumed that query results are continuous, free of jump discontinuities. This is at odds with many modern data structures in computational geometry, which employ approximations to achieve efficiency, but these approximations often suffer from discontinuities. In this paper, we present a general method for transforming an approximate but discontinuous data structure into one that produces a smooth approximation, while matching the asymptotic space efficiencies of the original. We achieve this by adapting an approach called the partition-of-unity method, which smoothly blends multiple local approximations into a single smooth global approximation. We illustrate the use of this technique in a specific application of approximating the distance to the boundary of a convex polytope in ℝ^d from any point in its interior. We begin by developing a novel data structure that efficiently computes an absolute ε-approximation to this query in time O(log(1/ε)) using O(1/ε^{d/2}) storage space. Then, we proceed to apply the proposed partition-of-unity blending to guarantee the smoothness of the approximate distance field, establishing optimal asymptotic bounds on the norms of its gradient and Hessian. Comment: To appear in the European Symposium on Algorithms (ESA) 2023.
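
    To make the partition-of-unity idea concrete, the one-dimensional sketch below blends local tangent-line approximations of sin(x) through smooth, compactly supported bump weights. The bump choice, patch radius, and toy local approximations are assumptions for illustration only; the paper applies this kind of blending to local distance approximations over a convex polytope.

        import numpy as np

        def bump(t):
            """C-infinity bump supported on (-1, 1): exp(-1/(1 - t^2)) inside, 0 outside."""
            t = np.asarray(t, float)
            out = np.zeros_like(t)
            inside = np.abs(t) < 1.0
            out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
            return out

        def blend(x, centers, radius, local_approx):
            """Partition-of-unity blend F(x) = sum_i w_i(x) f_i(x) / sum_i w_i(x),
            where w_i is a bump supported on the patch of the given radius around centers[i]."""
            x = np.atleast_1d(np.asarray(x, float))
            weights = np.array([bump((x - c) / radius) for c in centers])   # (num_patches, len(x))
            values = np.array([local_approx(c, x) for c in centers])        # local f_i evaluated at x
            return (weights * values).sum(axis=0) / weights.sum(axis=0)

        # Toy example: blend local tangent-line approximations of sin into one smooth curve.
        centers = np.linspace(0.0, np.pi, 9)     # patch centers; radius > spacing so patches overlap
        radius = 0.8
        tangent = lambda c, x: np.sin(c) + np.cos(c) * (x - c)
        xs = np.linspace(0.1, np.pi - 0.1, 5)
        print(blend(xs, centers, radius, tangent))
        print(np.sin(xs))                        # the blended values track sin closely and smoothly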

    Adaptive Sampling for Geometric Approximation

    Get PDF
    Geometric approximation of multi-dimensional data sets is an essential algorithmic component for applications in machine learning, computer graphics, and scientific computing. This dissertation promotes an algorithmic sampling methodology for a number of fundamental approximation problems in computational geometry. For each problem, the proposed sampling technique is carefully adapted to the geometry of the input data and the functions to be approximated. In particular, we study proximity queries in spaces of constant dimension and mesh generation in 3D.

    We start with polytope membership queries, where query points are tested for inclusion in a convex polytope. Trading off accuracy for efficiency, we tolerate one-sided errors for points within an epsilon-expansion of the polytope. We propose a sampling strategy for the placement of covering ellipsoids sensitive to the local shape of the polytope. The key insight is to realize the samples as Delone sets in the intrinsic Hilbert metric. Using this intrinsic formulation, we considerably simplify state-of-the-art techniques, yielding an intuitive and optimal data structure.

    Next, we study nearest-neighbor queries, which retrieve the most similar data point to a given query point. To accommodate more general measures of similarity, we consider non-Euclidean distances including convex distance functions and Bregman divergences. Again, we tolerate multiplicative errors, retrieving any point no farther than (1+epsilon) times the distance to the nearest neighbor. We propose a sampling strategy sensitive to the local distribution of points and the gradients of the distance functions. Combined with a careful regularization of the distance minimizers, we obtain a generalized data structure that essentially matches state-of-the-art results specific to the Euclidean distance.

    Finally, we investigate the generation of Voronoi meshes, where a given domain is decomposed into Voronoi cells as desired for a number of important solvers in computational fluid dynamics. The challenge is to arrange the cells near the boundary to yield an accurate surface approximation without sacrificing quality. We propose a sampling algorithm for the placement of seeds to induce a boundary-conforming Voronoi mesh of the correct topology, with a careful treatment of sharp and non-manifold features. The proposed algorithm achieves significant quality improvements over state-of-the-art polyhedral meshing based on clipped Voronoi cells.
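
    The Delone sets mentioned above are characterized by two conditions, a packing bound and a covering bound, which the short verifier below checks for an arbitrary metric supplied as a callable. The Euclidean grid example and the radii are illustrative assumptions; in the dissertation the relevant metric is the Hilbert metric of the polytope.

        import numpy as np

        def is_delone(samples, domain_points, dist, packing_r, covering_r):
            """Check the two Delone conditions for `samples` under an arbitrary metric `dist`:
            packing  -- every pair of samples is at least packing_r apart;
            covering -- every domain point is within covering_r of some sample."""
            packing = all(dist(p, q) >= packing_r
                          for i, p in enumerate(samples) for q in samples[i + 1:])
            covering = all(min(dist(x, p) for p in samples) <= covering_r
                           for x in domain_points)
            return packing, covering

        # Toy check in the Euclidean plane: a 0.25-spaced grid covering the unit square.
        euclid = lambda p, q: float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))
        samples = [(i / 4, j / 4) for i in range(5) for j in range(5)]
        probes = [tuple(np.random.rand(2)) for _ in range(200)]
        print(is_delone(samples, probes, euclid, packing_r=0.25, covering_r=0.25))   # (True, True)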

    Voronoi Diagrams in the Hilbert Metric

    Get PDF
    The Hilbert metric is a distance function defined for points lying within a convex body. It generalizes the Cayley-Klein model of hyperbolic geometry to any convex set, and it has numerous applications in the analysis and processing of convex bodies. In this paper, we study the geometric and combinatorial properties of the Voronoi diagram of a set of point sites under the Hilbert metric. Given any m-sided convex polygon Ω in the plane, we present two randomized incremental algorithms and one deterministic algorithm. The first randomized algorithm and the deterministic algorithm compute the Voronoi diagram of a set of n point sites. The second randomized algorithm extends this to compute the Voronoi diagram of a set of n sites, each of which may be a point or a line segment. Our algorithms all run in expected time O(m n log n). The algorithms use O(m n) storage, which matches the worst-case combinatorial complexity of the Voronoi diagram in the Hilbert metric.
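
    A minimal sketch of the Hilbert metric itself, assuming the polygon is given in halfspace form A x <= b with p and q strictly interior: the distance is half the logarithm of the cross-ratio of p, q and the two points where their line meets the boundary. The closing brute-force nearest-site loop only illustrates the assignment a Hilbert Voronoi diagram encodes; it is not the paper's O(m n log n) algorithm, and the names and example polygon are assumptions.

        import numpy as np

        def hilbert_distance(A, b, p, q, eps=1e-12):
            """Hilbert distance between interior points p, q of the convex polygon
            K = {x : A x <= b}, via the cross-ratio of p, q and the two boundary points
            where the line through them exits K."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            d = q - p
            if np.linalg.norm(d) < eps:
                return 0.0
            t_min, t_max = -np.inf, np.inf      # p + t*d stays in K for t in [t_min, t_max]
            for ai, bi in zip(A, b):
                slope, offset = ai @ d, bi - ai @ p
                if slope > eps:
                    t_max = min(t_max, offset / slope)
                elif slope < -eps:
                    t_min = max(t_min, offset / slope)
            # Cross-ratio in boundary parameters (t_min < 0 < 1 < t_max; t=0 at p, t=1 at q).
            cross = ((1.0 - t_min) * t_max) / ((-t_min) * (t_max - 1.0))
            return 0.5 * np.log(cross)

        # Brute-force nearest site under the Hilbert metric inside the square [-1, 1]^2.
        A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
        b = np.ones(4)
        sites = [np.array([-0.5, 0.0]), np.array([0.6, 0.3])]
        query = np.array([0.0, 0.2])
        print(min(range(len(sites)), key=lambda i: hilbert_distance(A, b, query, sites[i])))   # 0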

    Approximate Nearest-Neighbor Search for Line Segments

    Get PDF
    Approximate nearest-neighbor search is a fundamental algorithmic problem that continues to inspire study due to its essential role in numerous contexts. In contrast to most prior work, which has focused on point sets, we consider nearest-neighbor queries against a set of line segments in ℝ^d, for constant dimension d. Given a set S of n disjoint line segments in ℝ^d and an error parameter ε > 0, the objective is to build a data structure such that for any query point q, it is possible to return a line segment whose Euclidean distance from q is at most (1+ε) times the distance from q to its nearest line segment. We present a data structure for this problem with storage O((n^2/ε^d) log(Δ/ε)) and query time O(log(max(n,Δ)/ε)), where Δ is the spread of the set of segments S. Our approach is based on a covering of space by anisotropic elements, which align themselves according to the orientations of nearby segments. Comment: 20 pages (including appendix), 5 figures.
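
    For reference, the query semantics can be stated in a few lines of code: the sketch below computes exact point-to-segment distances and finds the nearest segment by a linear scan, which is the baseline the paper's data structure replaces with logarithmic-time approximate queries. Function names and the toy segments are illustrative assumptions.

        import numpy as np

        def point_segment_distance(q, a, b):
            """Euclidean distance from point q to the segment with endpoints a, b (any dimension)."""
            q, a, b = (np.asarray(v, float) for v in (q, a, b))
            ab = b - a
            denom = ab @ ab
            t = 0.0 if denom == 0.0 else np.clip((q - a) @ ab / denom, 0.0, 1.0)
            return float(np.linalg.norm(q - (a + t * ab)))

        def nearest_segment_brute_force(q, segments):
            """Exact nearest segment by a linear scan over all segments."""
            return min(range(len(segments)),
                       key=lambda i: point_segment_distance(q, *segments[i]))

        segments = [((0, 0), (1, 0)), ((2, 2), (2, 3)), ((-1, 1), (0, 2))]
        print(nearest_segment_brute_force((0.5, 0.4), segments))   # 0: closest to the first segment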

    16th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2018, June 18-20, 2018, Malmö University, Malmö, Sweden

    Get PDF

    Covering convex bodies and the closest vector problem

    Get PDF
    We are concerned with the computational problem of determining the covering radius of a rational polytope. This parameter is defined as the minimal dilation factor that is needed for the lattice translates of the correspondingly dilated polytope to cover the whole space. As our main result, we describe a new algorithm for this problem, which is simpler, more efficient, and easier to implement than the only prior algorithm of Kannan (1992). Motivated by a variant of the famous Lonely Runner Conjecture, we use its geometric interpretation in terms of covering radii of zonotopes, and apply our algorithm to prove the first open case of three runners with individual starting points.
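
    A crude numerical reading of the definition, assuming the polytope is given in halfspace form with the origin in its interior (harmless, since the covering radius is translation invariant): the covering radius is the largest gauge distance from any point of the fundamental cell to the integer lattice. The grid resolution, translate range, and unit-square example are assumptions for illustration; the paper's algorithm is exact and far more efficient than this sampling stand-in.

        import numpy as np
        from itertools import product

        def gauge(A, b, y):
            """Minkowski gauge ||y||_P of P = {x : A x <= b}, assuming 0 is interior to P."""
            return max(float(np.max(A @ y / b)), 0.0)

        def covering_radius_bruteforce(A, b, grid=50, z_range=2):
            """Approximate mu(P, Z^d) = sup_x min_{z in Z^d} ||x - z||_P by sampling x over a
            grid of the fundamental cell [0, 1)^d and scanning nearby lattice translates z."""
            d = A.shape[1]
            translates = [np.array(z) for z in product(range(-z_range, z_range + 1), repeat=d)]
            samples = (np.array(x) / grid for x in product(range(grid), repeat=d))
            return max(min(gauge(A, b, x - z) for z in translates) for x in samples)

        # The square [-1, 1]^2 has covering radius 1/2 with respect to the integer lattice Z^2.
        A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
        b = np.ones(4)
        print(covering_radius_bruteforce(A, b))   # prints 0.5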

    Approximate Nearest Neighbor Searching with Non-Euclidean and Weighted Distances

    Full text link
    We present a new approach to approximate nearest-neighbor queries in fixed dimension under a variety of non-Euclidean distances. We are given a set S of n points in ℝ^d, an approximation parameter ε > 0, and a distance function that satisfies certain smoothness and growth-rate assumptions. The objective is to preprocess S into a data structure so that for any query point q in ℝ^d, it is possible to efficiently report any point of S whose distance from q is within a factor of 1+ε of the distance to the actual closest point. Prior to this work, the most efficient data structures for approximate nearest-neighbor searching in spaces of constant dimensionality applied only to the Euclidean metric. This paper overcomes this limitation through a method called convexification. For admissible distance functions, the proposed data structures answer queries in logarithmic time using O(n log(1/ε) / ε^{d/2}) space, nearly matching the best known bounds for the Euclidean metric. These results apply to both convex scaling distance functions (including the Mahalanobis distance and weighted Minkowski metrics) and Bregman divergences (including the Kullback-Leibler divergence and the Itakura-Saito distance).
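
    The Bregman divergences named in the abstract all arise from a strictly convex generator F via D_F(x, y) = F(x) - F(y) - <∇F(y), x - y>, as in the sketch below. The generator names and sample vectors are illustrative assumptions; negative entropy yields the (generalized) Kullback-Leibler divergence and the Burg entropy yields the Itakura-Saito distance.

        import numpy as np

        def bregman(F, gradF, x, y):
            """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>
            induced by a strictly convex generator F."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            return float(F(x) - F(y) - gradF(y) @ (x - y))

        # Negative entropy generates the (generalized) Kullback-Leibler divergence.
        neg_entropy = lambda v: float(np.sum(v * np.log(v)))
        neg_entropy_grad = lambda v: np.log(v) + 1.0

        # The Burg entropy -sum(log v) generates the Itakura-Saito distance.
        burg = lambda v: float(-np.sum(np.log(v)))
        burg_grad = lambda v: -1.0 / v

        x = np.array([0.2, 0.3, 0.5])
        y = np.array([0.3, 0.3, 0.4])
        print(bregman(neg_entropy, neg_entropy_grad, x, y))   # KL(x || y); asymmetric in general
        print(bregman(burg, burg_grad, x, y))                 # Itakura-Saito distance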

    LIPIcs, Volume 258, SoCG 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 258, SoCG 2023, Complete Volume