40 research outputs found

    Exploiting symmetries in SDP-relaxations for polynomial optimization

    In this paper we study various approaches for exploiting symmetries in polynomial optimization problems within the framework of semidefinite programming relaxations. Our special focus is on constrained problems, especially when the symmetric group is acting on the variables. In particular, we investigate the concept of block decomposition within the framework of constrained polynomial optimization problems, show how the degree principle for the symmetric group can be exploited computationally, and propose methods to compute efficiently in the geometric quotient. Comment: (v3) Minor revision. To appear in Math. of Operations Research.
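    A minimal illustration of the degree-principle idea mentioned above, not the paper's block-decomposition machinery: for a symmetric polynomial of low degree, one may restrict the search for a minimizer to points whose coordinates take only a few distinct values, which collapses an n-variable problem into a handful of small ones. The specific polynomial, the number of distinct values k, and the use of scipy are assumptions made for this sketch.

```python
# Hedged sketch: restrict the minimization of a symmetric polynomial to points
# with at most k distinct coordinate values (orbit representatives), instead of
# searching all of R^n. The specific f, n and k below are illustrative choices.
import itertools
import numpy as np
from scipy.optimize import minimize

n, k = 6, 2  # number of variables, number of distinct coordinate values allowed

def f(x):
    # a symmetric polynomial built from power sums
    return np.sum(x**4) - 2.0 * np.sum(x**2) + 0.5 * np.sum(x)**2

def expand(values, multiplicities):
    # lift k distinct values with given multiplicities to a full point in R^n
    return np.repeat(values, multiplicities)

best = np.inf
for mult in itertools.product(range(1, n - k + 2), repeat=k):
    if sum(mult) != n:          # keep only compositions of n into k parts
        continue
    res = minimize(lambda v: f(expand(v, mult)), x0=np.zeros(k))
    best = min(best, res.fun)

print("candidate minimum over points with at most", k, "distinct values:", best)
```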

    Finding largest small polygons with GloptiPoly

    A small polygon is a convex polygon of unit diameter. We are interested in small polygons which have the largest area for a given number of vertices n. Many instances are already solved in the literature, namely for all odd n, and for n = 4, 6 and 8. Thus, for even n ≄ 10, instances of this problem remain open. Finding those largest small polygons can be formulated as nonconvex quadratic programming problems which can challenge state-of-the-art global optimization algorithms. We show that a recently developed technique for global polynomial optimization, based on a semidefinite programming approach to the generalized problem of moments and implemented in the public-domain Matlab package GloptiPoly, can successfully find the largest small polygons for n = 10 and n = 12. This significantly improves on existing results in the area. When coupled with accurate convex conic solvers, GloptiPoly can provide numerical guarantees of global optimality, as well as rigorous guarantees relying on interval arithmetic.
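    To make the nonconvex formulation concrete, here is a small local-optimization sketch for n = 6 that is not the paper's GloptiPoly/moment approach: maximize the shoelace area of the polygon subject to all pairwise squared vertex distances being at most 1. The starting point and the use of scipy's SLSQP are assumptions; a local solver like this offers no global guarantee, which is exactly the gap the semidefinite relaxations in the paper address.

```python
# Hedged sketch: local solve of the largest small polygon problem for n = 6.
# Objective (shoelace area) and constraints (unit diameter) are quadratic,
# so this is the nonconvex QP family mentioned in the abstract.
import numpy as np
from scipy.optimize import minimize

n = 6

def area(z):
    x, y = z[:n], z[n:]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def diameter_slack(z):
    # 1 - squared distance for every pair of vertices; feasible iff all >= 0
    x, y = z[:n], z[n:]
    i, j = np.triu_indices(n, k=1)
    return 1.0 - ((x[i] - x[j])**2 + (y[i] - y[j])**2)

# start from a slightly perturbed regular hexagon of diameter 1
theta = 2 * np.pi * np.arange(n) / n
z0 = np.concatenate([0.5 * np.cos(theta), 0.5 * np.sin(theta)])
z0 += 0.01 * np.random.default_rng(0).standard_normal(2 * n)

res = minimize(lambda z: -area(z), z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": diameter_slack}])
print("locally optimal area:", -res.fun)  # the regular hexagon gives ~0.6495
```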

    Certification of inequalities involving transcendental functions: combining SDP and max-plus approximation

    We consider the problem of certifying an inequality of the form f(x) ≄ 0 for all x ∈ K, where f is a multivariate transcendental function and K is a compact semialgebraic set. We introduce a certification method combining semialgebraic optimization and max-plus approximation. We assume that f is given by a syntax tree whose constituents involve semialgebraic operations as well as some transcendental functions such as cos, sin, exp, etc. We bound some of these constituents by suprema or infima of quadratic forms (the max-plus approximation method, initially introduced in optimal control), leading to semialgebraic optimization problems which we solve by semidefinite relaxations. The max-plus approximation is iteratively refined and combined with branch-and-bound techniques to reduce the relaxation gap. Illustrative examples of the application of this algorithm are provided, explaining how we solved tight inequalities arising from the Flyspeck project (one of the main purposes of which is to certify numerical inequalities used in the proof of the Kepler conjecture by Thomas Hales). Comment: 7 pages, 3 figures, 3 tables. Appears in the Proceedings of the European Control Conference ECC'13, July 17-19, 2013, Zurich, pp. 2244-2250, copyright EUCA 2013.
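    As a toy version of the max-plus step described above (and only that step; the semialgebraic subproblem is replaced by a dense numerical check rather than an SDP relaxation), one can minorize sin on [0, 1] by a few "bent tangent" quadratics, since sin'' = -sin ≄ -1 there, and use their maximum to support an inequality such as sin(x) ≄ 0.8x. The anchor points, the target inequality, and the use of numpy are assumptions of this sketch.

```python
# Hedged illustration of max-plus quadratic minorants for sin on [0, 1].
# Taylor's theorem with remainder gives, for every anchor a and every x in [0, 1]:
#   sin(x) >= q_a(x) := sin(a) + cos(a)*(x - a) - 0.5*(x - a)**2
# because sin'' = -sin >= -1 there. The pointwise max of a few q_a is a
# piecewise-quadratic lower bound; checking q_a(x) >= 0.8*x is a small
# semialgebraic problem (here only sampled numerically as a stand-in).
import numpy as np

def q(a, x):
    return np.sin(a) + np.cos(a) * (x - a) - 0.5 * (x - a) ** 2

anchors = np.array([0.0, 0.5, 1.0])              # max-plus basis of tangent points
x = np.linspace(0.0, 1.0, 100001)
minorant = np.max(np.stack([q(a, x) for a in anchors]), axis=0)

assert np.all(minorant <= np.sin(x) + 1e-12)     # sanity: it really is a lower bound
print("min of minorant(x) - 0.8*x on [0, 1]:", np.min(minorant - 0.8 * x))
```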

    Certification of Bounds of Non-linear Functions: the Templates Method

    The aim of this work is to certify lower bounds for real-valued multivariate functions, defined by semialgebraic or transcendental expressions. The certificate must eventually be formally provable in a proof system such as Coq. The application range for such a tool is widespread; for instance, Hales' proof of Kepler's conjecture yields thousands of inequalities. We introduce an approximation algorithm which combines ideas from the max-plus basis method (in optimal control) and the linear templates method developed by Manna et al. (in static analysis). This algorithm consists of bounding some of the constituents of the function by suprema of quadratic forms with a well-chosen curvature. This leads to semialgebraic optimization problems, solved by sum-of-squares relaxations. Templates limit the blow-up of these relaxations at the price of coarsening the approximation. We illustrate the efficiency of our framework with various examples from the literature and discuss the interfacing with Coq. Comment: 16 pages, 3 figures, 2 tables.
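    The sum-of-squares step mentioned above can be shown on a one-variable toy problem, independent of the templates and of Coq: certify a lower bound lam for p(x) = x^4 - 3x^2 + 1 by finding a positive semidefinite Gram matrix Q with p - lam = m(x)^T Q m(x) for the monomial basis m(x) = (1, x, x^2). The choice of p and the use of cvxpy are assumptions of this sketch, not part of the paper.

```python
# Hedged sketch: a tiny sum-of-squares certificate via semidefinite programming.
# We maximize lam such that x^4 - 3x^2 + 1 - lam equals m(x)^T Q m(x) with Q PSD,
# where m(x) = (1, x, x^2); matching coefficients gives linear constraints on Q.
import cvxpy as cp

Q = cp.Variable((3, 3), PSD=True)   # Gram matrix
lam = cp.Variable()

constraints = [
    Q[0, 0] == 1 - lam,             # constant term
    2 * Q[0, 1] == 0,               # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -3,    # coefficient of x^2
    2 * Q[1, 2] == 0,               # coefficient of x^3
    Q[2, 2] == 1,                   # coefficient of x^4
]
cp.Problem(cp.Maximize(lam), constraints).solve()
print("certified lower bound:", lam.value)   # close to the true minimum -1.25
```

    Because nonnegative univariate polynomials are sums of squares, the bound here is tight; in several variables the relaxations grow quickly, which is the blow-up that templates are meant to limit.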

    A Polynomial Optimization Approach to Constant Rebalanced Portfolio Selection

    We address the multi-period portfolio optimization problem with a constant rebalancing strategy. This problem is formulated as a polynomial optimization problem (POP) using a mean-variance criterion. In order to solve POPs of high degree, we develop a cutting-plane algorithm based on semidefinite programming. Our algorithm can solve problems that cannot be handled by any of the known polynomial optimization solvers. Keywords: multi-period portfolio optimization; polynomial optimization problem; constant rebalancing; semidefinite programming; mean-variance criterion.
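    The following sketch only sets up the polynomial structure described above, using synthetic scenario returns and a local solver; it is not the paper's cutting-plane algorithm. Under a constant rebalancing strategy the terminal wealth W(w) = prod_t (1 + r_t · w) is a polynomial of degree T in the weights w, so a mean-variance objective in W is a polynomial optimization problem over the simplex. The data-generating choices, the risk-aversion parameter, and the use of scipy are assumptions.

```python
# Hedged sketch: constant-rebalancing mean-variance objective as a polynomial in w,
# optimized locally over the simplex (no global guarantee).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, n_assets, n_scen = 4, 3, 200
r = rng.normal(0.01, 0.05, size=(n_scen, T, n_assets))   # synthetic scenario returns

def terminal_wealth(w):
    # degree-T polynomial in w: product over periods of (1 + r_t . w), per scenario
    return np.prod(1.0 + r @ w, axis=1)

def objective(w, risk_aversion=5.0):
    wealth = terminal_wealth(w)
    return -(wealth.mean() - risk_aversion * wealth.var())  # maximize mean minus variance

cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]   # fully invested
res = minimize(objective, x0=np.full(n_assets, 1.0 / n_assets),
               bounds=[(0.0, 1.0)] * n_assets, constraints=cons, method="SLSQP")
print("local weights:", np.round(res.x, 3), "objective value:", -res.fun)
```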

    Real root finding for equivariant semi-algebraic systems

    Let R be a real closed field. We consider basic semi-algebraic sets defined by n-variate equations/inequalities of s symmetric polynomials and an equivariant family of polynomials, all of them of degree bounded by 2d < n. Such a semi-algebraic set is invariant under the action of the symmetric group. We show that such a set is either empty or contains a point with at most 2d-1 distinct coordinates. Combining this geometric result with efficient algorithms for real root finding (based on the critical point method), one can decide the emptiness of basic semi-algebraic sets defined by s polynomials of degree d in time (sn)^{O(d)}. This improves the state of the art, which is exponential in n. When the variables x_1, 
, x_n are quantified and the coefficients of the input system depend on parameters y_1, 
, y_t, we also show that the corresponding one-block quantifier elimination problem can be solved in time (sn)^{O(dt)}.
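    A hedged sketch of the invariance idea only, not the critical-point algorithm of the paper: to test non-emptiness of a symmetric system one can look for real solutions whose coordinates take a small number k of distinct values, solving the reduced systems exactly. The example system, the choice of k, and the use of sympy are illustrative assumptions; the paper supplies the actual bound on how small k can be taken.

```python
# Hedged sketch: real root finding for a symmetric system by restricting to
# points with k distinct coordinate values, each with a chosen multiplicity.
import itertools
import sympy as sp

n, k = 5, 2
t = sp.symbols(f"t0:{k}", real=True)

def reduced_system(mult):
    # point whose value t[i] is repeated mult[i] times, i = 0..k-1
    x = [t[i] for i in range(k) for _ in range(mult[i])]
    # an S_5-invariant example system: e1(x) = 5 and p2(x) = 9
    return [sum(x) - 5, sum(xi**2 for xi in x) - 9]

for mult in itertools.product(range(1, n), repeat=k):
    if sum(mult) != n:
        continue
    sols = sp.solve(reduced_system(mult), list(t), dict=True)
    real_sols = [s for s in sols if all(v.is_real for v in s.values())]
    if real_sols:
        print("non-empty; multiplicities", mult, "values", real_sols[0])
        break
```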