
    A Look at the Generalized Heron Problem through the Lens of Majorization-Minimization

    In a recent issue of this journal, Mordukhovich et al. pose and solve an interesting non-differentiable generalization of the Heron problem in the framework of modern convex analysis. In the generalized Heron problem one is given k+1 closed convex sets in R^d, equipped with its Euclidean norm, and asked to find the point in the last set such that the sum of the distances to the first k sets is minimal. In later work the authors generalize the Heron problem even further, relax its convexity assumptions, study its theoretical properties, and pursue subgradient algorithms for solving the convex case. Here, we revisit the original problem solely from the numerical perspective. By exploiting the majorization-minimization (MM) principle of computational statistics and rudimentary techniques from differential calculus, we are able to construct a very fast algorithm for solving the Euclidean version of the generalized Heron problem. Comment: 21 pages, 3 figures.
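
    As a rough illustration of the MM idea the abstract refers to, the sketch below majorizes each set distance at the current iterate and reduces the update to projecting a weighted average of nearest points back onto the target set. It is a simplified Weiszfeld-style sketch, not the authors' implementation; the ball-shaped sets, the smoothing constant eps, and all names are assumptions made for the example.

```python
import numpy as np

def project_ball(center, radius):
    """Euclidean projection onto a closed ball (an illustrative choice of convex set)."""
    center = np.asarray(center, dtype=float)
    def proj(x):
        d = x - center
        n = np.linalg.norm(d)
        return x.copy() if n <= radius else center + radius * d / n
    return proj

def heron_mm(proj_target, proj_sets, x0, eps=1e-10, iters=500):
    """MM-style iteration for min over x in C_0 of sum_k dist(x, C_k).

    Each distance is majorized at the current iterate by an isotropic quadratic;
    the surrogate is minimized by projecting a weighted average of the nearest
    points in C_1..C_k back onto C_0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        nearest = [p(x) for p in proj_sets]                       # nearest points in C_1..C_k
        w = [1.0 / max(np.linalg.norm(x - p), eps) for p in nearest]
        xbar = sum(wi * p for wi, p in zip(w, nearest)) / sum(w)  # weighted average
        x_new = proj_target(xbar)                                 # stay feasible in C_0
        if np.linalg.norm(x_new - x) < 1e-12:
            break
        x = x_new
    return x

# Toy example: C_0 is the unit ball at the origin, C_1 and C_2 are two distant balls.
x_star = heron_mm(project_ball([0.0, 0.0], 1.0),
                  [project_ball([5.0, 0.0], 1.0), project_ball([0.0, 5.0], 1.0)],
                  x0=np.zeros(2))
```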

    Dense point sets have sparse Delaunay triangulations

    The spread of a finite set of points is the ratio between the longest and shortest pairwise distances. We prove that the Delaunay triangulation of any set of n points in R^3 with spread D has complexity O(D^3). This bound is tight in the worst case for all D = O(sqrt(n)). In particular, the Delaunay triangulation of any dense point set has linear complexity. We also generalize this upper bound to regular triangulations of k-ply systems of balls, unions of several dense point sets, and uniform samples of smooth surfaces. On the other hand, for any n and D = O(n), we construct a regular triangulation of complexity Omega(nD) whose n vertices have spread D. Comment: 31 pages, 11 figures. Full version of SODA 2002 paper. Also available at http://www.cs.uiuc.edu/~jeffe/pubs/screw.htm
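
    The quantities in this abstract are easy to probe empirically with off-the-shelf tools. The sketch below, assuming SciPy is available, measures the spread of a random 3D point set and counts the tetrahedra of its Delaunay triangulation; the uniform cube sample is only a stand-in for a "dense" set, whose spread grows like n^(1/3), so the O(D^3) bound specializes to linear complexity.

```python
import numpy as np
from scipy.spatial import Delaunay, distance

rng = np.random.default_rng(0)
pts = rng.random((500, 3))            # rough stand-in for a dense sample of the unit cube

pairwise = distance.pdist(pts)        # all pairwise Euclidean distances
spread = pairwise.max() / pairwise.min()

tri = Delaunay(pts)
print(f"spread ~ {spread:.1f}, tetrahedra: {len(tri.simplices)}")
```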

    Analysis of parametric biological models with non-linear dynamics

    In this paper we present recent results on parametric analysis of biological models. The underlying method is based on the algorithms for computing trajectory sets of hybrid systems with polynomial dynamics. The method is then applied to two case studies of biological systems: one is a cardiac cell model for studying the conditions for cardiac abnormalities, and the second is a model of insect nest-site choice. Comment: In Proceedings HSB 2012, arXiv:1208.315

    Bregman Voronoi Diagrams: Properties, Algorithms and Applications

    The Voronoi diagram of a finite set of objects is a fundamental geometric structure that subdivides the embedding space into regions, each region consisting of the points that are closer to a given object than to the others. We may define many variants of Voronoi diagrams depending on the class of objects, the distance functions and the embedding space. In this paper, we investigate a framework for defining and building Voronoi diagrams for a broad class of distance functions called Bregman divergences. Bregman divergences include not only the traditional (squared) Euclidean distance but also various divergence measures based on entropic functions. Accordingly, Bregman Voronoi diagrams allow one to define information-theoretic Voronoi diagrams in statistical parametric spaces based on the relative entropy of distributions. We define several types of Bregman diagrams, establish correspondences between those diagrams (using the Legendre transformation), and show how to compute them efficiently. We also introduce extensions of these diagrams, e.g. k-order and k-bag Bregman Voronoi diagrams, and introduce Bregman triangulations of a set of points and their connection with Bregman Voronoi diagrams. We show that these triangulations capture many of the properties of the celebrated Delaunay triangulation. Finally, we give some applications of Bregman Voronoi diagrams which are of interest in the context of computational geometry and machine learning. Comment: Extends the proceedings abstract of SODA 2007 (46 pages, 15 figures).
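
    For readers unfamiliar with the definitions, a minimal sketch follows: a Bregman divergence is D_F(x, y) = F(x) - F(y) - <grad F(y), x - y> for a strictly convex generator F, and a (first-type) Bregman Voronoi cell assigns a query point to the site minimizing this divergence. The two generators shown, half the squared norm and the negative entropy (recovering half the squared Euclidean distance and a generalized relative entropy on positive vectors), are textbook examples and not code from the paper.

```python
import numpy as np

def bregman_divergence(F, gradF, x, y):
    """D_F(x, y) = F(x) - F(y) - <grad F(y), x - y> for a strictly convex generator F."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return F(x) - F(y) - np.dot(gradF(y), x - y)

# Two classical generators, given as (F, grad F) pairs.
sq  = (lambda p: 0.5 * np.dot(p, p),    lambda p: p)                # half squared Euclidean distance
ent = (lambda p: np.sum(p * np.log(p)), lambda p: np.log(p) + 1.0)  # generalized relative entropy

def bregman_voronoi_label(x, sites, F, gradF):
    """Index of the site whose (first-type) Bregman Voronoi cell contains x,
    i.e. argmin_i D_F(x, site_i)."""
    return int(np.argmin([bregman_divergence(F, gradF, x, s) for s in sites]))

sites = [np.array([0.2, 0.8]), np.array([0.7, 0.3])]
query = np.array([0.5, 0.5])
print(bregman_voronoi_label(query, sites, *sq),    # label under squared Euclidean distance
      bregman_voronoi_label(query, sites, *ent))   # label under the entropic divergence
```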

    Responsible Scoring Mechanisms Through Function Sampling

    Human decision-makers often receive assistance from data-driven algorithmic systems that provide a score for evaluating objects, including individuals. The scores are generated by a function (mechanism) that takes a set of features as input and generates a score. The scoring functions are either machine-learned or human-designed and can be used for different decision purposes such as ranking or classification. Given the potential impact of these scoring mechanisms on individuals' lives and on society, it is important to make sure these scores are computed responsibly. Hence we need tools for responsible scoring mechanism design. In this paper, focusing on linear scoring functions, we highlight the importance of unbiased function sampling and perturbation in the function space for devising such tools. We provide unbiased samplers for the entire function space, as well as for a θ-vicinity around a given function. We then illustrate the value of these samplers for designing effective algorithms in three diverse problem scenarios in the context of ranking. Finally, as a fundamental method for designing responsible scoring mechanisms, we propose a novel approach for approximating the construction of the arrangement of hyperplanes. Despite the exponential complexity of an arrangement in the number of dimensions, using function sampling, our algorithm is linear in the number of samples and hyperplanes, and independent of the number of dimensions.
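
    A hedged sketch of the kind of sampler the abstract refers to: identifying a linear scoring function with a unit weight vector, unbiased sampling of the function space reduces to drawing directions uniformly from the unit sphere (normalized Gaussians), and a θ-vicinity sample can be drawn, simply if inefficiently, by rejection. The names and the rejection scheme are illustrative assumptions, not the paper's exact constructions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scoring_function(d):
    """Uniform direction on the unit sphere in R^d (normalize a standard Gaussian).
    Each unit vector w defines the linear scoring function f(x) = <w, x>."""
    w = rng.standard_normal(d)
    return w / np.linalg.norm(w)

def sample_theta_vicinity(w, theta):
    """Rejection sampler for a direction within angle theta of w
    (simple but slow for small theta; illustration only)."""
    w = w / np.linalg.norm(w)
    while True:
        v = sample_scoring_function(w.size)
        if np.arccos(np.clip(np.dot(v, w), -1.0, 1.0)) <= theta:
            return v

X = rng.random((5, 3))                  # five toy items described by three features
w = sample_scoring_function(3)          # a random linear scoring function
w_near = sample_theta_vicinity(w, 0.5)  # a nearby perturbed function
ranking = np.argsort(-(X @ w))          # rank items by the sampled scores
```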

    Orthogonal weighted linear L1 and L∞ approximation and applications

    Let S = {s_1, s_2, ..., s_n} be a set of sites in E^d, where every site s_i has a positive real weight ω_i. This paper gives algorithms to find weighted orthogonal L∞ and L1 approximating hyperplanes for S. The algorithm for the weighted orthogonal L1 approximation is shown to require O(n^d) worst-case time and O(n) space for d ≥ 2. The algorithm for the weighted orthogonal L∞ approximation is shown to require O(n log n) worst-case time and O(n) space for d = 2, and O(n^(⌊d/2⌋+1)) worst-case time and O(n^⌊(d+1)/2⌋) space for d > 2. In the latter case, the expected time complexity may be reduced to O(n^⌊(d+1)/2⌋). The L∞ approximation algorithm can be modified to solve the problem of finding the width of a set of n points in E^d, and the problem of finding a stabbing hyperplane for a set of n hyperspheres in E^d with varying radii. The time and space complexities of the width and stabbing algorithms are seen to be the same as those of the L∞ approximation algorithm.
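
    To make the objective concrete, the sketch below brute-forces a weighted orthogonal L1 or L∞ line fit in the plane: it scans candidate unit normals and solves the one-dimensional offset problem exactly (a weighted median for L1, a balancing pair of points for L∞). It only illustrates the problem definition and is unrelated to the paper's algorithms or their complexity bounds.

```python
import numpy as np

def fit_weighted_orth(points, weights, norm="linf", n_angles=1800):
    """Grid search over unit normals a = (cos t, sin t) in E^2: for each normal,
    pick the offset b minimizing the weighted orthogonal residuals w_i * |a.s_i - b|
    exactly, and return the best line a.x = b found.  Illustration only."""
    points = np.asarray(points, float)
    weights = np.asarray(weights, float)
    best = (np.inf, None, None)
    for t in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        a = np.array([np.cos(t), np.sin(t)])
        p = points @ a                                   # signed projections onto the normal
        if norm == "l1":
            # a weighted median of the projections minimizes sum_i w_i |p_i - b|
            order = np.argsort(p)
            c = np.cumsum(weights[order])
            b = p[order][np.searchsorted(c, 0.5 * c[-1])]
            val = float(np.sum(weights * np.abs(p - b)))
        else:
            # weighted 1-center on a line: the optimum of max_i w_i |p_i - b| balances
            # two points, b = (w_i p_i + w_j p_j) / (w_i + w_j) for the pair with the
            # largest balance value w_i w_j |p_i - p_j| / (w_i + w_j)
            val, b = 0.0, p[0]
            for i in range(len(p)):
                for j in range(len(p)):
                    v = weights[i] * weights[j] * abs(p[i] - p[j]) / (weights[i] + weights[j])
                    if v > val:
                        val = v
                        b = (weights[i] * p[i] + weights[j] * p[j]) / (weights[i] + weights[j])
        if val < best[0]:
            best = (val, a, b)
    return best                                          # (error, unit normal, offset)

pts = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.2], [3.0, 0.0]])
err, a, b = fit_weighted_orth(pts, np.ones(len(pts)), norm="linf")
```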

    On the multisource hyperplanes location problem to fitting set of points

    In this paper we study the problem of locating a given number of hyperplanes minimizing an objective function of the distances from a set of points to their closest hyperplanes. We propose a general framework for the problem in which norm-based distances between points and hyperplanes are aggregated by means of ordered median functions. A compact mixed integer linear (or nonlinear) programming formulation is presented for the problem, and an extended set partitioning formulation with an exponential number of variables is also derived. We develop a column generation procedure embedded within a branch-and-price algorithm for solving the problem, adequately performing its preprocessing, pricing and branching. We also analyze the optimal solutions of the problem geometrically, deriving properties which are exploited to generate initial solutions for the proposed algorithms. Finally, the results of an extensive computational study are reported. The issue of scalability is also addressed, showing theoretical upper bounds on the errors incurred by replacing the original datasets with aggregated versions. Comment: 30 pages, 5 tables, 3 figures.
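
    The objective can be illustrated independently of the mathematical programming machinery: each point is charged the distance to its closest hyperplane, and an ordered median function weights the sorted distances with a vector λ (all ones recovers the sum criterion, a single trailing one the maximum). The sketch below assumes Euclidean point-to-hyperplane distances and distances sorted in non-decreasing order, one common convention; it only evaluates a candidate solution rather than solving the location problem.

```python
import numpy as np

def point_to_hyperplane_dist(x, a, b):
    """Euclidean distance from point x to the hyperplane {y : a.y = b}."""
    return abs(np.dot(a, x) - b) / np.linalg.norm(a)

def ordered_median_objective(points, hyperplanes, lam):
    """Aggregate closest-hyperplane distances with an ordered median function:
    sort the distances in non-decreasing order and take the inner product with lam."""
    d = np.array([min(point_to_hyperplane_dist(x, a, b) for a, b in hyperplanes)
                  for x in points])
    return float(np.dot(np.sort(d), np.asarray(lam, float)))

pts = np.random.default_rng(1).random((6, 2))
planes = [(np.array([1.0, 0.0]), 0.2), (np.array([0.0, 1.0]), 0.8)]  # two candidate lines
print(ordered_median_objective(pts, planes, lam=[0, 0, 0, 0, 1, 1])) # penalize the two worst-fitted points
```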