
    Deconstructing Approximate Offsets

    We consider the offset-deconstruction problem: given a polygonal shape $Q$ with $n$ vertices, can it be expressed, up to a tolerance $\varepsilon$ in Hausdorff distance, as the Minkowski sum of another polygonal shape $P$ with a disk of fixed radius? If so, we also seek a preferably simple-looking solution $P$; the offset of $P$ then constitutes an accurate, vertex-reduced, and smoothed approximation of $Q$. We give an $O(n \log n)$-time exact decision algorithm that handles any polygonal shape, assuming the real-RAM model of computation. A variant of the algorithm, which we have implemented using CGAL, is based on rational arithmetic and answers the same deconstruction problem up to an uncertainty parameter $\delta$; its running time additionally depends on $\delta$. If the input shape is found to be approximable, this algorithm also computes an approximate solution to the problem. It also allows us to solve parameter-optimization problems induced by the offset-deconstruction problem. For convex shapes, the complexity of the exact decision algorithm drops to $O(n)$, which is also the time required to compute a solution $P$ with at most one more vertex than a vertex-minimal one.
    Comment: 18 pages, 11 figures, previous version accepted at SoCG 2011, submitted to DC
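
    A rough illustration of the deconstruction idea (not the paper's exact decision algorithm): a natural candidate for $P$ is the erosion of $Q$ by a disk of radius $r$; if the $r$-offset of that candidate lies within Hausdorff distance $\varepsilon$ of $Q$, the shape is certainly approximable. The Python sketch below assumes the shapely library; all names and tolerances are illustrative.

        # Hedged sketch, assuming shapely; not the paper's CGAL-based algorithm.
        from shapely.geometry import Polygon

        def deconstruct_offset(Q, r, eps):
            """Try to write Q ~ P (+) D_r up to eps; return P or None."""
            P = Q.buffer(-r)               # erosion: candidate inner shape
            if P.is_empty:
                return None
            Q_approx = P.buffer(r)         # re-offset the candidate by r
            # Accept if the re-offset shape is eps-close in Hausdorff distance.
            if Q.hausdorff_distance(Q_approx) <= eps:
                return P
            return None

        # Example: the r-offset of a square deconstructs at that same radius r.
        Q = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]).buffer(1.0)
        P = deconstruct_offset(Q, r=1.0, eps=0.05)
        print("approximable" if P is not None else "not approximable")

    This opening-style test is only a sufficient check: success certifies that a valid $P$ exists, but failure does not by itself decide the problem, which is what the paper's algorithm settles.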

    Approximate Least Squares

    We present a novel iterative algorithm for approximating the linear least-squares solution with low complexity. After motivating the algorithm, we discuss its properties, including its complexity, and present theoretical results as well as simulation-based performance results. We analyze its convergence behavior and show that in the noise-free case the algorithm converges to the least-squares solution.
    Comment: Preprint of the paper submitted to the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 201
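
    The paper's specific iteration is not reproduced here; as a generic stand-in, the Python sketch below uses a Landweber (gradient-descent) iteration, a classic low-complexity scheme that converges to the least-squares solution in the noise-free case when the step size satisfies $\mu < 2/\sigma_{\max}(A)^2$. All names and constants are illustrative.

        # Hedged sketch: generic iterative least squares, not the paper's algorithm.
        import numpy as np

        def iterative_least_squares(A, b, num_iters=500):
            mu = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
            x = np.zeros(A.shape[1])
            for _ in range(num_iters):
                # Each iteration costs two matrix-vector products.
                x = x + mu * A.T @ (b - A @ x)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 10))
        b = A @ rng.standard_normal(10)            # noise-free observations
        x_hat = iterative_least_squares(A, b)
        print(np.allclose(x_hat, np.linalg.lstsq(A, b, rcond=None)[0], atol=1e-6))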

    Approximate Bayesian Computational methods

    Also known as likelihood-free methods, approximate Bayesian computational (ABC) methods have appeared in the past ten years as the most satisfactory approach to intractable likelihood problems, first in genetics and then in a broader spectrum of applications. However, these methods suffer to some degree from calibration difficulties that make them rather volatile in their implementation and thus render them suspect to users of more traditional Monte Carlo methods. In this survey, we study the various improvements and extensions made to the original ABC algorithm in recent years.
    Comment: 7 figures
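
    As a reminder of the basic scheme that the surveyed extensions build on, here is a minimal ABC rejection sampler in Python; the model, prior, summary statistic, and tolerance are illustrative choices, not taken from the survey.

        # Hedged sketch of plain ABC rejection sampling with toy choices.
        import numpy as np

        rng = np.random.default_rng(42)
        observed = rng.normal(loc=2.0, scale=1.0, size=50)   # stand-in data
        s_obs = observed.mean()                              # summary statistic

        def abc_rejection(n_samples=1000, tol=0.1):
            accepted = []
            while len(accepted) < n_samples:
                theta = rng.uniform(-10, 10)                 # draw from the prior
                simulated = rng.normal(loc=theta, scale=1.0, size=50)
                # Accept when summaries are close; no likelihood is evaluated.
                if abs(simulated.mean() - s_obs) <= tol:
                    accepted.append(theta)
            return np.array(accepted)

        posterior = abc_rejection()
        print(posterior.mean(), posterior.std())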

    Linear Approximate Groups

    This is an informal announcement of results to be described and proved in detail in a paper to appear. We give various results on the structure of approximate subgroups in linear groups such as $\mathrm{SL}_n(k)$. For example, generalising a result of Helfgott (who handled the cases $n = 2$ and $3$), we show that any approximate subgroup of $\mathrm{SL}_n(\mathbb{F}_q)$ which generates the group must be either very small or else nearly all of $\mathrm{SL}_n(\mathbb{F}_q)$. The argument is valid for all Chevalley groups $G(\mathbb{F}_q)$.
    Comment: 11 pages. Submitted, Electronic Research Announcements. Small change
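
    For reference, the notion at play is standardly defined as follows (this definition follows common usage, e.g. Tao's, and is not quoted from the announcement itself): for $K \ge 1$, a finite subset $A$ of a group $G$ is a $K$-approximate subgroup if it is symmetric, i.e. $1 \in A$ and $A = A^{-1}$, and the product set $A \cdot A$ is covered by at most $K$ left-translates $x_1 A, \ldots, x_K A$ of $A$.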

    Approximate kernel clustering

    In the kernel clustering problem we are given a large $n \times n$ positive semi-definite matrix $A = (a_{ij})$ with $\sum_{i,j=1}^n a_{ij} = 0$ and a small $k \times k$ positive semi-definite matrix $B = (b_{ij})$. The goal is to find a partition $S_1, \ldots, S_k$ of $\{1, \ldots, n\}$ which maximizes the quantity $\sum_{i,j=1}^k \big(\sum_{(p,q) \in S_i \times S_j} a_{pq}\big) b_{ij}$. We study the computational complexity of this generic clustering problem, which originates in the theory of machine learning. We design a constant-factor polynomial-time approximation algorithm for this problem, answering a question posed by Song, Smola, Gretton and Borgwardt. In some cases we manage to compute the sharp approximation threshold for this problem assuming the Unique Games Conjecture (UGC). In particular, when $B$ is the $3 \times 3$ identity matrix, the UGC hardness threshold of this problem is exactly $\frac{16\pi}{27}$. We present and study a geometric conjecture of independent interest which we show would imply that the UGC threshold when $B$ is the $k \times k$ identity matrix is $\frac{8\pi}{9}(1 - \frac{1}{k})$ for every $k \ge 3$.
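
    To make the objective concrete, the Python sketch below evaluates the clustered quantity for a given partition (evaluation only; the paper concerns approximating its maximum). The construction of $A$ and the labels are illustrative.

        # Hedged sketch: evaluating the kernel clustering objective.
        import numpy as np

        def clustering_objective(A, B, labels):
            """labels[p] = i means item p belongs to part S_i."""
            k = B.shape[0]
            C = np.zeros((k, k))
            for i in range(k):
                for j in range(k):
                    # Sum of a_{pq} over (p, q) in S_i x S_j.
                    C[i, j] = A[np.ix_(labels == i, labels == j)].sum()
            return (C * B).sum()

        rng = np.random.default_rng(1)
        n = 6
        G = rng.standard_normal((n, n))
        G = G @ G.T                           # positive semi-definite
        H = np.eye(n) - np.ones((n, n)) / n   # centering projection
        A = H @ G @ H                         # PSD, entries sum to zero
        B = np.eye(3)                         # the 3x3 identity case above
        labels = rng.integers(0, 3, size=n)
        print(clustering_objective(A, B, labels))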