17 research outputs found

    Plantinga-Vegter algorithm takes average polynomial time

    We exhibit a condition-based analysis of the adaptive subdivision algorithm due to Plantinga and Vegter. The first complexity analysis of the PV Algorithm is due to Burr, Gao and Tsigaridas, who proved a $O\big(2^{\tau d^{4}\log d}\big)$ worst-case cost bound for degree-$d$ plane curves with maximum coefficient bit-size $\tau$. As they observed, this exponential bound is in stark contrast with the good performance of the algorithm in practice. More in line with this performance, we show that, with respect to a broad family of measures, the expected time complexity of the PV Algorithm is bounded by $O(d^7)$ for real, degree-$d$ plane curves. We also exhibit a smoothed analysis of the PV Algorithm that yields similar complexity estimates. To obtain these results we combine robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope this will motivate a fruitful exchange of ideas between the different approaches to numerical computation.
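
    To make the subdivision idea concrete, here is a minimal sketch of an interval-arithmetic subdivision loop for a plane curve in the spirit of the PV Algorithm: a box is discarded when the interval value of $f$ excludes zero, and kept when the enclosure of $\|\nabla f\|^2$ excludes zero. The naive interval type, the example curve and the depth cap are illustrative assumptions, not the paper's exact predicates (the actual PV tests are more refined).

```python
# Sketch of PV-style adaptive subdivision for a plane curve f(x, y) = 0.
# The exclusion/inclusion tests below are simplified stand-ins for the
# C0/C1 predicates of Plantinga and Vegter.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def f_box(X, Y):
    # interval enclosure of the example curve f(x, y) = x^2 + y^2 - 1
    return X * X + Y * Y + Interval(-1.0, -1.0)

def grad_box(X, Y):
    # enclosure of |grad f|^2 = 4x^2 + 4y^2 for the example curve
    return Interval(4.0, 4.0) * (X * X + Y * Y)

def subdivide(X, Y, depth, kept):
    if not f_box(X, Y).contains_zero():
        return                              # exclusion: the curve misses this box
    if not grad_box(X, Y).contains_zero() or depth == 0:
        kept.append((X, Y))                 # gradient bounded away from 0 (or depth cap)
        return
    xm, ym = 0.5 * (X.lo + X.hi), 0.5 * (Y.lo + Y.hi)
    for XS in (Interval(X.lo, xm), Interval(xm, X.hi)):
        for YS in (Interval(Y.lo, ym), Interval(ym, Y.hi)):
            subdivide(XS, YS, depth - 1, kept)

kept = []
subdivide(Interval(-2.0, 2.0), Interval(-2.0, 2.0), 6, kept)
print(len(kept), "boxes retained around the unit circle")
```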

    Condition Numbers for the Cube. I: Univariate Polynomials and Hypersurfaces

    The condition-based complexity analysis framework is one of the gems of modern numerical algebraic geometry and theoretical computer science. One of the challenges that it poses is to expand the currently limited range of random polynomials that we can handle. Despite important recent progress, the available tools cannot handle random sparse polynomials and Gaussian polynomials, that is, polynomials whose coefficients are i.i.d. Gaussian random variables. We initiate a condition-based complexity framework based on the norm of the cube that is a step in this direction. We present this framework for real hypersurfaces and univariate polynomials. We demonstrate its capabilities in two problems, under very mild probabilistic assumptions. On the one hand, we show that the average run-time of the Plantinga-Vegter algorithm is polynomial in the degree for random sparse (albeit with a restricted sparseness structure) polynomials and random Gaussian polynomials. On the other hand, we study the size of the subdivision tree for Descartes' solver and the run-time of the solver by Jindal and Sagraloff (arXiv:1704.06979). In both cases, we provide a bound that is polynomial in the size of the input (size of the support plus logarithm of the degree), not only on average but for all higher moments.
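
    For intuition about the subdivision tree analysed for Descartes' solver, the sketch below isolates the real roots of a polynomial in $(0,1)$ by bisection, using Descartes' rule of signs on a transformed polynomial as the counting test. It works in plain floating-point arithmetic and ignores multiple roots and roots hit exactly by bisection points, so it only illustrates how the subdivision tree grows; it is not a certified solver.

```python
# Descartes-based root isolation by bisection (illustrative, floating point).
# Polynomials are lists of coefficients, lowest degree first.

def poly_mul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def compose_affine(p, a, b):
    """Coefficients of p(a + b*x)."""
    result, power = [0.0], [1.0]
    for c in p:
        result = poly_add(result, [c * x for x in power])
        power = poly_mul(power, [a, b])
    return result

def sign_variations(coeffs):
    signs = [c for c in coeffs if c != 0.0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def descartes_bound_01(p):
    """Descartes bound on the number of roots of p in (0, 1): count the sign
    variations of (1+x)^n * p(1/(1+x)), i.e. of the reversed, shifted polynomial."""
    reversed_p = list(reversed(p))                 # x^n * p(1/x)
    return sign_variations(compose_affine(reversed_p, 1.0, 1.0))

def isolate(p, a, b, out, depth=40):
    """Bisection with the Descartes test; collects isolating intervals in `out`."""
    q = compose_affine(p, a, b - a)                # restrict p to (a, b)
    v = descartes_bound_01(q)
    if v == 0:
        return                                     # no root: prune this branch
    if v == 1 or depth == 0:
        out.append((a, b))                         # exactly one root in (a, b)
        return
    m = 0.5 * (a + b)
    isolate(p, a, m, out, depth - 1)
    isolate(p, m, b, out, depth - 1)

# p(x) = (3x - 1)(3x - 2) = 2 - 9x + 9x^2, roots 1/3 and 2/3
intervals = []
isolate([2.0, -9.0, 9.0], 0.0, 1.0, intervals)
print(intervals)    # two disjoint isolating intervals inside (0, 1)
```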

    Certified Approximation Algorithms for the Fermat Point and n-Ellipses

    Given a set $A$ of $n$ points in $\mathbb{R}^d$ with weight function $w: A \to \mathbb{R}_{>0}$, the Fermat distance function is $\Phi(x) := \sum_{a \in A} w(a)\,\|x-a\|$. A classic problem in facility location, dating back to 1643, is to find the Fermat point $x^*$, the point that minimizes the function $\Phi$. We consider the problem of computing a point $\tilde{x}^*$ that is an $\varepsilon$-approximation of $x^*$ in the sense that $\|\tilde{x}^* - x^*\| < \varepsilon$. We also consider the harder problem of approximating $n$-ellipses, that is, the level sets $\Phi^{-1}(r)$ for $r > \Phi(x^*)$ and $d = 2$. Finally, all our planar ($d = 2$) algorithms are implemented in order to evaluate them experimentally, using both synthetic as well as real-world datasets. These experiments show the practicality of our techniques.
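
    For background, the classical way to approximate the weighted Fermat point is Weiszfeld's fixed-point iteration, sketched below. This plain iteration carries no certificate and is not the certified algorithm of the paper; the starting point, iteration budget and tolerance are illustrative choices.

```python
import math

def weighted_fermat_point(points, weights, iters=200, tol=1e-12):
    """Weiszfeld-style fixed-point iteration for the weighted Fermat point.

    points: sequence of d-dimensional tuples; weights: positive floats.
    A heuristic approximation without an error certificate; the iteration is
    classically undefined when an iterate hits a data point, so we stop there.
    """
    d = len(points[0])
    total_w = sum(weights)
    # start at the weighted centroid
    x = [sum(w * p[i] for p, w in zip(points, weights)) / total_w for i in range(d)]
    for _ in range(iters):
        num, den = [0.0] * d, 0.0
        for p, w in zip(points, weights):
            dist = math.dist(x, p)
            if dist < tol:                  # iterate coincides with a data point
                return list(p)
            num = [ni + w * pi / dist for ni, pi in zip(num, p)]
            den += w / dist
        new_x = [ni / den for ni in num]
        if math.dist(new_x, x) < tol:       # numerical fixed point reached
            return new_x
        x = new_x
    return x

# unit-weight example: the Fermat point of a right triangle in the plane
print(weighted_fermat_point([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [1.0, 1.0, 1.0]))
```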

    Bivariate systems and topology of plane curves: algebraic and numerical methods

    The work presented in this thesis belongs to the domain of non-linear computational geometry in low dimension. More precisely, it focuses on solving bivariate systems and computing the topology of curves in the plane. When the input is given by polynomials, the natural tools come from computer algebra. Our contributions are algorithms proven efficient in a deterministic or Las Vegas setting, together with practical, efficient software for certified topology drawing of a plane algebraic curve. When the input is not restricted to polynomials but is given by interval functions, we design algorithms based on certified numerical approaches using subdivision and interval arithmetic. The input is then required to fulfill some generic assumptions, and our algorithms are certified in the sense that they terminate if and only if the assumptions are satisfied.
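
    As a small illustration of the certified-numerics side, the sketch below applies the basic interval exclusion test used by subdivision solvers to a bivariate system f = g = 0: a box is discarded as soon as the interval enclosure of f or g over it does not contain zero. The tuple-based intervals, the example system and the depth cap are illustrative assumptions and do not reproduce the algorithms of the thesis.

```python
# Interval exclusion test for a bivariate system f = g = 0 on boxes X x Y.
# Intervals are represented as (lo, hi) tuples; only + and * are needed here.

def iadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def imul(u, v):
    p = (u[0] * v[0], u[0] * v[1], u[1] * v[0], u[1] * v[1])
    return (min(p), max(p))

def contains_zero(u):
    return u[0] <= 0.0 <= u[1]

def f_enclosure(X, Y):          # f(x, y) = x^2 + y^2 - 1 (example)
    return iadd(iadd(imul(X, X), imul(Y, Y)), (-1.0, -1.0))

def g_enclosure(X, Y):          # g(x, y) = x - y (example)
    return iadd(X, (-Y[1], -Y[0]))

def candidate_boxes(X, Y, depth):
    """Subdivide, keeping only boxes that may contain a solution of f = g = 0."""
    if not (contains_zero(f_enclosure(X, Y)) and contains_zero(g_enclosure(X, Y))):
        return []               # exclusion test: no solution can lie in this box
    if depth == 0:
        return [(X, Y)]
    xm, ym = 0.5 * (X[0] + X[1]), 0.5 * (Y[0] + Y[1])
    boxes = []
    for XS in ((X[0], xm), (xm, X[1])):
        for YS in ((Y[0], ym), (ym, Y[1])):
            boxes += candidate_boxes(XS, YS, depth - 1)
    return boxes

# the circle x^2 + y^2 = 1 meets the line y = x in two points
print(len(candidate_boxes((-2.0, 2.0), (-2.0, 2.0), depth=6)), "candidate boxes kept")
```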

    On the number of iterations of the DBA algorithm

    The DTW Barycenter Averaging (DBA) algorithm is a widely used algorithm for estimating the mean of a given set of point sequences. In this context, the mean is defined as a point sequence that minimises the sum of dynamic time warping (DTW) distances. The algorithm is similar to the $k$-means algorithm in the sense that it alternately repeats two steps: (1) computing an optimal assignment to the points of the current mean, and (2) computing an optimal mean under the current assignment. The popularity of DBA can be attributed to the fact that it works well in practice, despite the lack of known theoretical guarantees. In our paper, we aim to initiate a theoretical study of the number of iterations that DBA performs until convergence. We assume the algorithm is given $n$ sequences of $m$ points in $\mathbb{R}^d$ and a parameter $k$ that specifies the length of the mean sequence to be computed. We show that, in contrast to its fast running time in practice, the number of iterations can be exponential in $k$ in the worst case, even if the number of input sequences is $n=2$. We complement these findings with experiments on real-world data that suggest this worst-case behaviour is likely degenerate. To better understand the performance of the algorithm on non-degenerate input, we study DBA in the model of smoothed analysis, upper-bounding the expected number of iterations in the worst case under random perturbations of the input. Our smoothed upper bound is polynomial in $k$, $n$ and $d$, and for constant $n$, it is also polynomial in $m$. For our analysis, we adapt the set of techniques that were developed for analysing $k$-means and observe that this set of techniques is not sufficient to obtain tight bounds for general $n$.
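
    For concreteness, one DBA iteration can be sketched as follows: compute a DTW warping path from the current mean to every input sequence (step 1), then move each mean point to the centroid of the input points matched to it (step 2). The squared Euclidean point distance, the random walks used as toy input and the fixed iteration budget are illustrative choices, not the authors' experimental setup.

```python
import numpy as np

def dtw_path(mean, seq):
    """DTW by dynamic programming between the candidate mean and one sequence;
    returns the optimal warping path as a list of index pairs (i, j)."""
    k, m = len(mean), len(seq)
    cost = np.full((k + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, k + 1):
        for j in range(1, m + 1):
            d = np.sum((mean[i - 1] - seq[j - 1]) ** 2)
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, (i, j) = [], (k, m)
    while (i, j) != (0, 0):                  # backtrack along cheapest predecessors
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: cost[s])
    return path[::-1]

def dba_iteration(mean, sequences):
    """One DBA step: assign points of every sequence to mean points via DTW,
    then replace each mean point by the centroid of its assigned points."""
    assigned = [[] for _ in range(len(mean))]
    for seq in sequences:
        for i, j in dtw_path(mean, seq):
            assigned[i].append(seq[j])
    return np.array([np.mean(pts, axis=0) if pts else old
                     for pts, old in zip(assigned, mean)])

# toy input: n = 5 random walks of m = 20 points in the plane, mean of length k = 10
rng = np.random.default_rng(0)
seqs = [np.cumsum(rng.standard_normal((20, 2)), axis=0) for _ in range(5)]
mean = seqs[0][:10].copy()
for _ in range(10):                          # a fixed iteration budget for the sketch
    mean = dba_iteration(mean, seqs)
print(mean.shape)
```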