
    Convergence analysis of a Lasserre hierarchy of upper bounds for polynomial minimization on the sphere

    We study the convergence rate of a hierarchy of upper bounds for polynomial minimization problems, proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], for the special case when the feasible set is the unit (hyper)sphere. The upper bound at level r of the hierarchy is defined as the minimal expected value of the polynomial over all probability distributions on the sphere, when the probability density function is a sum-of-squares polynomial of degree at most 2r with respect to the surface measure. We show that the exact rate of convergence is Theta(1/r^2), and explore the implications for the related rate of convergence for the generalized problem of moments on the sphere.
    Comment: 14 pages, 2 figures
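
    In the abstract's notation (with f the polynomial, S^{n-1} the unit sphere, and mu its surface measure), the level-r upper bound can be written as follows; the symbol f^{(r)} and the notation Sigma[x]_{2r} for sum-of-squares polynomials of degree at most 2r are notational assumptions made here for illustration:

        \[
          f^{(r)} \;=\; \min_{\sigma \in \Sigma[x]_{2r}} \int_{S^{n-1}} f(x)\,\sigma(x)\,d\mu(x)
          \quad \text{s.t.} \quad \int_{S^{n-1}} \sigma(x)\,d\mu(x) = 1.
        \]

    Each feasible sigma is a valid probability density on the sphere, so f^{(r)} is an upper bound on the minimum of f, and the stated result is that f^{(r)} - f_min = Theta(1/r^2).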

    Worst-case examples for Lasserre's measure-based hierarchy for polynomial optimization on the hypercube

    We study the convergence rate of a hierarchy of upper bounds for polynomial optimization problems, proposed by Lasserre [SIAM J. Optim. 21(3) (2011), pp. 864-885], and a related hierarchy by De Klerk, Hess and Laurent [SIAM J. Optim. 27(1), (2017) pp. 347-367]. For polynomial optimization over the hypercube, we show a refined convergence analysis for the first hierarchy. We also show lower bounds on the convergence rate for both hierarchies on a class of examples. These lower bounds match the upper bounds and thus establish the true rate of convergence on these examples. Interestingly, these convergence rates are determined by the distribution of extremal zeroes of certain families of orthogonal polynomials.
    Comment: 17 pages, no figures
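
    As a hedged illustration of how extremal roots of orthogonal polynomials can produce such rates (background intuition only, not the paper's specific argument): the largest root of the degree-r Chebyshev polynomial of the first kind on [-1,1] is cos(pi/(2r)), so its distance to the endpoint 1 satisfies

        \[
          1 - \cos\!\left(\frac{\pi}{2r}\right) \;=\; \frac{\pi^2}{8r^2} + O\!\left(\frac{1}{r^4}\right) \;=\; \Theta\!\left(\frac{1}{r^2}\right),
        \]

    the same 1/r^2 scale that appears in the convergence rates above.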

    Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation

    We provide new tools for worst-case performance analysis of the gradient (or steepest descent) method of Cauchy for smooth strongly convex functions, and Newton's method for self-concordant functions, including the case of inexact search directions. The analysis uses semidefinite programming performance estimation, as pioneered by Drori and Teboulle [Mathematical Programming, 145(1-2):451-482, 2014], and extends recent performance estimation results for the method of Cauchy by the authors [Optimization Letters, 11(7), 1185-1199, 2017]. To illustrate the applicability of the tools, we demonstrate a novel complexity analysis of short step interior point methods using inexact search directions. As an example in this framework, we sketch how to give a rigorous worst-case complexity analysis of a recent interior point method by Abernethy and Hazan [PMLR, 48:2520-2528, 2016].
    Comment: 22 pages, 1 figure. Title of earlier version was "Worst-case convergence analysis of gradient and Newton methods through semidefinite programming performance estimation".
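
    For readers unfamiliar with the technique, a generic performance estimation problem has the following shape (a sketch in assumed notation: F_{mu,L} is a function class such as the L-smooth, mu-strongly convex functions, M is the method being analyzed, and x_* is a minimizer of f):

        \[
          \max_{f \in \mathcal{F}_{\mu,L},\; x_0,\dots,x_N} \; f(x_N) - f(x_*)
          \quad \text{s.t.} \quad x_{k+1} = M\big(x_0,\dots,x_k;\, \nabla f(x_0),\dots,\nabla f(x_k)\big), \quad f(x_0) - f(x_*) \le 1.
        \]

    Using interpolation conditions for the function class, this infinite-dimensional problem can be reformulated, or at least relaxed, as a finite semidefinite program; that is the mechanism the paper builds on and extends to inexact search directions.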

    An error analysis for polynomial optimization over the simplex based on the multivariate hypergeometric distribution

    We study the minimization of fixed-degree polynomials over the simplex. This problem is well known to be NP-hard, as it contains the maximum stable set problem in graph theory as a special case. In this paper, we consider a rational approximation by taking the minimum over the regular grid, which consists of rational points with denominator r (for given r). We show that the associated convergence rate is O(1/r^2) for quadratic polynomials. For general polynomials, if there exists a rational global minimizer over the simplex, we show that the convergence rate is also of the order O(1/r^2). Our results answer a question posed by De Klerk et al. (2013) and improve on previously known O(1/r) bounds in the quadratic case.
    Comment: 17 pages
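
    As a minimal sketch of the grid approximation described above (illustrative only: the helper compositions, the brute-force enumeration, and the test polynomial are assumptions made here, and the grid size grows combinatorially in n and r):

        from fractions import Fraction

        def compositions(r, n):
            """Yield all n-tuples of nonnegative integers summing to r."""
            if n == 1:
                yield (r,)
                return
            for first in range(r + 1):
                for rest in compositions(r - first, n - 1):
                    yield (first,) + rest

        def grid_minimum(f, n, r):
            """Minimum of f over the regular grid: points of the standard simplex
            whose coordinates are rationals with denominator r."""
            return min(f([Fraction(a, r) for a in alpha]) for alpha in compositions(r, n))

        # Example: x1^2 + x2^2 + x3^2 has a rational minimizer (the barycenter) with
        # value 1/3; the grid value approaches it at the O(1/r^2) rate discussed above.
        f = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2
        print(grid_minimum(f, 3, 10))  # 17/50, i.e. within 1/150 of 1/3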

    Polynomial Norms

    In this paper, we study polynomial norms, i.e. norms that are the d-th root of a degree-d homogeneous polynomial f. We first show that a necessary and sufficient condition for f^{1/d} to be a norm is for f to be strictly convex, or equivalently, convex and positive definite. Though not all norms come from d-th roots of polynomials, we prove that any norm can be approximated arbitrarily well by a polynomial norm. We then investigate the computational problem of testing whether a form gives a polynomial norm. We show that this problem is strongly NP-hard already when the degree of the form is 4, but can always be answered by testing feasibility of a semidefinite program (of possibly large size). We further study the problem of optimizing over the set of polynomial norms using semidefinite programming. To do this, we introduce the notion of r-sos-convexity and extend a result of Reznick on sum of squares representation of positive definite forms to positive definite biforms. We conclude with some applications of polynomial norms to statistics and dynamical systems.
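
    A simple concrete instance of the objects above (chosen here for illustration): for even d, the power-sum form f(x) = x_1^d + ... + x_n^d is strictly convex and positive definite, and

        \[
          f(x)^{1/d} \;=\; \Big( \sum_{i=1}^{n} x_i^{d} \Big)^{1/d} \;=\; \|x\|_d,
        \]

    so its d-th root is exactly the usual d-norm, hence a polynomial norm.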

    Convergence rates of RLT and Lasserre-type hierarchies for the generalized moment problem over the simplex and the sphere

    We consider the generalized moment problem (GMP) over the simplex and the sphere. This is a rich setting, and it contains NP-hard problems as special cases, such as constructing optimal cubature schemes and rational optimization. Using the reformulation-linearization technique (RLT) and Lasserre-type hierarchies, relaxations of the problem are introduced and analyzed. For our analysis we assume throughout the existence of a dual optimal solution as well as strong duality. For the GMP over the simplex we prove a convergence rate of O(1/r) for a linear programming, RLT-type hierarchy, where r is the level of the hierarchy, using a quantitative version of Pólya's Positivstellensatz. As an extension of a recent result by Fang and Fawzi (Math Program, 2020. https://doi.org/10.1007/s10107-020-01537-7), we prove that the Lasserre hierarchy of the GMP (Lasserre in Math Program 112(1):65–92, 2008. https://doi.org/10.1007/s10107-006-0085-1) over the sphere has a convergence rate of O(1/r^2). Moreover, we show that the introduced linear RLT relaxation is a generalization of a hierarchy for minimizing forms of degree d over the simplex, introduced by De Klerk et al. (J Theor Comput Sci 361(2–3):210–225, 2006).
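
    For reference, the generalized moment problem over a set K (here the simplex or the sphere) can be stated, in assumed notation, as

        \[
          \mathrm{val} \;=\; \inf_{\nu \in \mathcal{M}_+(K)} \int_K f_0 \, d\nu
          \quad \text{s.t.} \quad \int_K f_i \, d\nu = b_i, \quad i = 1,\dots,m,
        \]

    where M_+(K) denotes the positive Borel measures on K; minimizing a polynomial f_0 over K is the special case with the single constraint that nu be a probability measure.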

    Jordan symmetry reduction for conic optimization over the doubly nonnegative cone: theory and software

    A common computational approach for polynomial optimization problems (POPs) is to use (hierarchies of) semidefinite programming (SDP) relaxations. When the variables in the POP are required to be nonnegative, these SDP problems typically involve nonnegative matrices, i.e. they are conic optimization problems over the doubly nonnegative cone. The Jordan reduction, a symmetry reduction method for conic optimization, was recently introduced for symmetric cones by Parrilo and Permenter [Mathematical Programming 181(1), 2020]. We extend this method to the doubly nonnegative cone, and investigate its application to known relaxations of the quadratic assignment and maximum stable set problems. We also introduce new Julia software in which the symmetry reduction is implemented.
    Comment: 19 pages, title changed from earlier version. arXiv admin note: text overlap with arXiv:1908.0087
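
    For context (background facts, not the paper's contribution): the doubly nonnegative cone is

        \[
          \mathcal{DNN}^n \;=\; \{ X \in \mathcal{S}^n : X \succeq 0,\; X_{ij} \ge 0 \text{ for all } i,j \},
        \]

    i.e. the positive semidefinite matrices that are also entrywise nonnegative. Loosely speaking, a symmetry reduction of the kind discussed here replaces a conic program over this cone by an equivalent one restricted to a lower-dimensional subspace of symmetric matrices, which can drastically shrink the problem that has to be solved.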

    An Analytic Center Cutting Plane Method to Determine Complete Positivity of a Matrix

    We propose an analytic center cutting plane method to determine if a matrix is completely positive, and return a cut that separates it from the completely positive cone if not. This was stated as an open (computational) problem by Berman, Dür, and Shaked-Monderer [Electronic Journal of Linear Algebra, 2015]. Our method optimizes over the intersection of a ball and the copositive cone, where membership is determined by solving a mixed-integer linear program suggested by Xia, Vera, and Zuluaga [INFORMS Journal on Computing, 2018]. Thus, our algorithm can, more generally, be used to solve any copositive optimization problem, provided one knows the radius of a ball containing an optimal solution. Numerical experiments show that the number of oracle calls (matrix copositivity checks) for our implementation scales well with the matrix size, growing roughly like O(d^2) for d×d matrices. The method is implemented in Julia, and available at https://github.com/rileybadenbroek/CopositiveAnalyticCenter.jl.
    Comment: 16 pages, 1 figure
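
    The separation step rests on the duality between the completely positive and copositive cones (stated here as background; the oracle details are in the cited references):

        \[
          \mathcal{CP}^d = \Big\{ \sum_k b_k b_k^{\mathsf T} : b_k \in \mathbb{R}^d_{\ge 0} \Big\},
          \qquad
          \mathcal{COP}^d = \{ M \in \mathcal{S}^d : x^{\mathsf T} M x \ge 0 \ \text{for all } x \ge 0 \},
        \]

    and a matrix A is completely positive if and only if <A, M> >= 0 for every copositive M; a copositive M with <A, M> < 0 therefore certifies non-membership and supplies the separating cut.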
