
    Dualities in Convex Algebraic Geometry

    Convex algebraic geometry concerns the interplay between optimization theory and real algebraic geometry. Its objects of study include convex semialgebraic sets that arise in semidefinite programming and from sums of squares. This article compares three notions of duality that are relevant in these contexts: duality of convex bodies, duality of projective varieties, and the Karush-Kuhn-Tucker conditions derived from Lagrange duality. We show that the optimal value of a polynomial program is an algebraic function whose minimal polynomial is expressed by the hypersurface projectively dual to the constraint set. We give an exposition of recent results on the boundary structure of the convex hull of a compact variety, we contrast this to Lasserre's representation as a spectrahedral shadow, and we explore the geometric underpinnings of semidefinite programming duality. Comment: 48 pages, 11 figures
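
    As a pointer for the third notion of duality above, here is a generic primal-dual pair in semidefinite programming together with its complementary-slackness (KKT) condition; the notation is standard and is not taken from the article itself.

    \[
    \text{(P)}\quad \min_{x\in\mathbb{R}^n} \; c^{\mathsf T}x \quad \text{s.t.}\quad A(x) = A_0 + x_1 A_1 + \cdots + x_n A_n \succeq 0,
    \]
    \[
    \text{(D)}\quad \max_{Y\succeq 0} \; -\langle A_0, Y\rangle \quad \text{s.t.}\quad \langle A_i, Y\rangle = c_i, \quad i=1,\dots,n,
    \]
    and, when strong duality holds, an optimal pair satisfies the complementarity condition $A(x^\ast)\,Y^\ast = 0$.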

    On the Number of Zeros of Abelian Integrals: A Constructive Solution of the Infinitesimal Hilbert Sixteenth Problem

    We prove that the number of limit cycles generated by a small non-conservative perturbation of a Hamiltonian polynomial vector field on the plane is bounded by a double exponential of the degree of the fields. This solves the long-standing tangential Hilbert 16th problem. The proof uses only the fact that Abelian integrals of a given degree are horizontal sections of a regular flat meromorphic connection (Gauss-Manin connection) with a quasiunipotent monodromy group. Comment: Final revision
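
    For orientation, the Abelian integrals in question arise as follows (standard notation, not quoted from the paper). A small perturbation of a Hamiltonian system can be written as

    \[
    dH + \varepsilon\,\omega = 0, \qquad \omega = P(x,y)\,dx + Q(x,y)\,dy,
    \]
    and, to first order in $\varepsilon$, limit cycles are born from the ovals $\delta(t)\subset\{H=t\}$ at which the Abelian integral
    \[
    I(t) = \oint_{\delta(t)} \omega
    \]
    vanishes; the tangential problem asks for an upper bound on the number of such zeros in terms of the degrees of $H$, $P$, and $Q$.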

    Approximation of high-dimensional parametric PDEs

    Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed at (i) identifying and understanding which properties of parametric equations allow one to avoid this curse and (ii) developing and analyzing effective numerical methods which fully exploit these properties and, in turn, are immune to the growth in dimensionality. The first part of this article studies the smoothness and approximability of the solution map, that is, the map $a \mapsto u(a)$ where $a$ is the parameter value and $u(a)$ is the corresponding solution to the PDE. It is shown that for many relevant parametric PDEs, this map is typically holomorphic in the parameters and also highly anisotropic, in that the relevant parameters are of widely varying importance in describing the solution. These two properties are then exploited to establish convergence rates of $n$-term approximations to the solution map for which each term is separable in the parametric and physical variables. These results reveal that, at least on a theoretical level, the solution map can be well approximated by discretizations of moderate complexity, thereby showing how the curse of dimensionality is broken. This theoretical analysis is carried out through concepts of approximation theory such as best $n$-term approximation, sparsity, and $n$-widths. These notions determine a priori the best possible performance of numerical methods and thus serve as a benchmark for concrete algorithms. The second part of this article turns to the development of numerical algorithms based on the theoretically established sparse separable approximations. The numerical methods studied fall into two general categories. The first uses polynomial expansions in terms of the parameters to approximate the solution map. The second searches for suitable low-dimensional spaces for simultaneously approximating all members of the parametric family. The numerical implementation of these approaches is carried out through adaptive and greedy algorithms. An a priori analysis of the performance of these algorithms establishes how well they meet the theoretical benchmarks.
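
    As a concrete instance of the separable approximations described above (stated generically here under the common affine-parameter assumption, not quoted from the article), the coefficient and solution maps take the form

    \[
    a(y) = \bar a + \sum_{j\ge 1} y_j\,\psi_j, \qquad y=(y_j)_{j\ge 1}\in[-1,1]^{\mathbb{N}},
    \]
    \[
    u(y) \approx \sum_{\nu\in\Lambda_n} u_\nu\, y^\nu, \qquad y^\nu := \prod_{j} y_j^{\nu_j}, \qquad \#\Lambda_n = n,
    \]
    where each term is a product of a fixed spatial function $u_\nu$ and a polynomial in the parameters, and the index set $\Lambda_n$ is chosen to realize a (near-)best $n$-term approximation.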

    Optimal Point Placement for Mesh Smoothing

    We study the problem of moving a vertex in an unstructured mesh of triangular, quadrilateral, or tetrahedral elements to optimize the shapes of adjacent elements. We show that many such problems can be solved in linear time using generalized linear programming. We also give efficient algorithms for some mesh smoothing problems that do not fit into the generalized linear programming paradigm. Comment: 12 pages, 3 figures. A preliminary version of this paper was presented at the 8th ACM/SIAM Symp. on Discrete Algorithms (SODA '97). This is the final version, and will appear in a special issue of J. Algorithms for papers from SODA '97.
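
    As a toy illustration of the optimal-placement problem (a numerical sketch with made-up coordinates, using a generic nonlinear solver rather than the paper's linear-time generalized-LP algorithm), one can move a free vertex to maximize the minimum angle over its incident triangles:

# Illustrative sketch (not the paper's generalized-LP algorithm): move a free
# vertex to maximize the minimum interior angle over its incident triangles.
# The fixed neighbor coordinates below are made-up example data.
import numpy as np
from scipy.optimize import minimize

neighbors = np.array([[0.0, 0.0], [2.0, 0.0], [2.5, 1.5], [1.0, 2.5], [-0.5, 1.0]])
# Incident triangles: (free vertex, neighbors[i], neighbors[i+1]) around the ring.
ring = [(i, (i + 1) % len(neighbors)) for i in range(len(neighbors))]

def min_angle(p):
    """Smallest interior angle (radians) over all triangles incident to p."""
    angles = []
    for i, j in ring:
        tri = [np.asarray(p), neighbors[i], neighbors[j]]
        for k in range(3):
            a, b, c = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u, v = b - a, c - a
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return min(angles)

# Maximize the minimum angle = minimize its negative, starting from the centroid.
res = minimize(lambda p: -min_angle(p), neighbors.mean(axis=0), method="Nelder-Mead")
print("smoothed vertex:", res.x, "min angle (deg):", np.degrees(min_angle(res.x)))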

    Convex inner approximations of nonconvex semialgebraic sets applied to fixed-order controller design

    We describe an elementary algorithm to build convex inner approximations of nonconvex sets. Both input and output sets are basic semialgebraic sets given as lists of defining multivariate polynomials. Even though no optimality guarantees can be given (e.g. in terms of volume maximization for bounded sets), the algorithm is designed to preserve convex boundaries as much as possible, while removing regions with concave boundaries. In particular, the algorithm leaves invariant a given convex set. The algorithm is based on GloptiPoly 3, a public-domain Matlab package solving nonconvex polynomial optimization problems with the help of convex semidefinite programming (optimization over linear matrix inequalities, or LMIs). We illustrate how the algorithm can be used to design fixed-order controllers for linear systems, following a polynomial approach.
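
    A toy companion to the setting above (made-up polynomials and a sampling check, not the GloptiPoly-based SDP machinery of the paper): one can at least test numerically whether a candidate convex set sits inside a given basic semialgebraic set.

# Toy illustration (not the paper's algorithm): sample-check whether a candidate
# convex inner set -- here a disk of radius r around a center -- lies inside a
# nonconvex basic semialgebraic set {x : g1(x) >= 0, g2(x) >= 0}.
# The polynomials g1, g2 and the disk are made-up example data.
import numpy as np

def g1(x, y):  # example nonconvex defining polynomial
    return 1.0 - x**4 - y**4 + 0.5 * x * y

def g2(x, y):  # example defining polynomial
    return y + 1.0

def disk_inside(center, r, n=10000, seed=0):
    """Heuristic containment check by random sampling of the disk (no certificate)."""
    rng = np.random.default_rng(seed)
    ang = rng.uniform(0, 2 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0, 1, n))
    xs = center[0] + rad * np.cos(ang)
    ys = center[1] + rad * np.sin(ang)
    return bool(np.all(g1(xs, ys) >= 0) and np.all(g2(xs, ys) >= 0))

print(disk_inside(center=(0.0, 0.0), r=0.7))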

    Geometric combinatorics and computational molecular biology: branching polytopes for RNA sequences

    Questions in computational molecular biology generate various discrete optimization problems, such as DNA sequence alignment and RNA secondary structure prediction. However, the optimal solutions are fundamentally dependent on the parameters used in the objective functions. The goal of a parametric analysis is to elucidate such dependencies, especially as they pertain to the accuracy and robustness of the optimal solutions. Techniques from geometric combinatorics, including polytopes and their normal fans, have been used previously to give parametric analyses of simple models for DNA sequence alignment and RNA branching configurations. Here, we present a new computational framework, and proof-of-principle results, which give the first complete parametric analysis of the branching portion of the nearest neighbor thermodynamic model for secondary structure prediction for real RNA sequences. Comment: 17 pages, 8 figures
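
    A minimal sketch of the polytope side of this approach (hypothetical feature counts, not data or code from the paper): a branching polytope is the convex hull of per-structure feature vectors, and its normal fan partitions parameter space into regions that share an optimal structure.

# Illustrative sketch with made-up data: build the convex hull of integer feature
# vectors, one per candidate secondary structure, e.g. counts of (multiloops,
# branching helices, unpaired bases in multiloops). The hull vertices are the
# signatures that can be optimal for some choice of branching parameters.
import numpy as np
from scipy.spatial import ConvexHull

signatures = np.array([  # hypothetical per-structure feature counts
    [0, 0, 0],
    [1, 3, 4],
    [2, 5, 2],
    [1, 4, 7],
    [3, 8, 1],
])
hull = ConvexHull(signatures)
print("vertices of the branching polytope:")
print(signatures[hull.vertices])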

    Connections Between Adaptive Control and Optimization in Machine Learning

    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output error formulations, similarities in update law modifications are examined. Concepts in stability, performance, and learning, common to both fields, are then discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis are provided. In particular, a specific problem related to higher order learning is solved through insights obtained from these intersections. Comment: 18 pages
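
    A minimal sketch of one such connection (a generic example, not an algorithm from the paper): for an output-error model $y = \phi^{\mathsf T}\theta^\ast$, a machine-learning-style stochastic gradient update and an adaptive-control-style normalized-gradient update differ only in how the step is scaled.

# Minimal sketch of the shared update-law structure (made-up data stream and gains).
# Both iterates are driven by the same output error; the adaptive-control version
# normalizes the step by the regressor norm.
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])   # unknown "true" parameters
theta_sgd = np.zeros(3)
theta_adapt = np.zeros(3)
eta, gamma = 0.05, 1.0                    # learning rate / adaptation gain

for _ in range(2000):
    phi = rng.normal(size=3)                       # regressor / feature vector
    e_sgd = phi @ theta_sgd - phi @ theta_star     # output error, SGD iterate
    e_ad = phi @ theta_adapt - phi @ theta_star    # output error, adaptive iterate
    theta_sgd -= eta * e_sgd * phi                           # plain gradient step
    theta_adapt -= gamma * e_ad * phi / (1.0 + phi @ phi)    # normalized (adaptive) step

print("SGD estimate:     ", np.round(theta_sgd, 3))
print("Adaptive estimate:", np.round(theta_adapt, 3))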