11 research outputs found

    Waring-like decompositions of polynomials - 1

    Get PDF
    Let F be a homogeneous form of degree d in n variables. A Waring decomposition of F is a way to express F as a sum of d-th powers of linear forms. In this paper we consider the decompositions of a form as a sum of expressions, each of which is a fixed monomial evaluated at linear forms. Comment: 12 pages; Section 5 added in this version.
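
    A classical small instance (d = 2, n = 2, over the complex numbers) is the identity xy = ((x+y)/2)^2 + (i(x-y)/2)^2, expressing the monomial xy as a sum of two squares of linear forms. A quick numeric sanity check in plain Python:

    ```python
    # Numeric check of a classical Waring decomposition over C:
    # x*y = ((x + y)/2)**2 + (1j*(x - y)/2)**2, a sum of two squares of linear forms.
    def waring_residual(x, y):
        lhs = x * y
        rhs = ((x + y) / 2) ** 2 + (1j * (x - y) / 2) ** 2
        return abs(lhs - rhs)

    # Sample a few points; the residual vanishes up to floating-point roundoff.
    print(max(waring_residual(x, y) for x in (1.0, -2.0, 3.5) for y in (0.5, 4.0, -1.0)))
    ```

    The imaginary coefficient is essential: over the reals, xy is a difference (not a sum) of two squares.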

    Region of Attraction Estimation Using Invariant Sets and Rational Lyapunov Functions

    Full text link
    This work addresses the problem of estimating the region of attraction (RA) of equilibrium points of nonlinear dynamical systems. The estimates we provide are given by positively invariant sets which are not necessarily defined by level sets of a Lyapunov function. Moreover, we present conditions for the existence of Lyapunov functions linked to the positively invariant set formulation we propose. Connections to fundamental results on estimates of the RA are presented and support the search for Lyapunov functions of a rational nature. We then restrict our attention to systems governed by polynomial vector fields and provide an algorithm that is guaranteed to enlarge the estimate of the RA at each iteration.
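
    For intuition only (this is not the paper's algorithm), the simplest Lyapunov-based RA estimate is a sublevel set on which the derivative of a candidate function is negative. A minimal one-dimensional sketch, with the system and candidate chosen for illustration:

    ```python
    # Illustrative sketch: for xdot = -x + x**3, the candidate V(x) = x**2 satisfies
    # dV/dt = 2*x*(-x + x**3) = -2*x**2*(1 - x**2) < 0 on {0 < V < 1},
    # so the sublevel set {V < 1} = (-1, 1) is an inner estimate of the RA of the origin.
    def f(x):
        return -x + x ** 3

    def vdot(x):
        return 2 * x * f(x)  # derivative of V along trajectories

    # Sample the sublevel set away from the equilibrium: V strictly decreases everywhere.
    samples = [i / 100 for i in range(-99, 100) if i != 0]
    print(all(vdot(x) < 0 for x in samples))
    ```

    Here the estimate is actually tight (the true RA is exactly (-1, 1)); in general a level-set estimate can be much smaller than the RA, which motivates the invariant-set formulations of the paper.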

    Expressing a General Form as a Sum of Determinants

    Full text link
    Let A = (a_{ij}) be a k x k matrix of non-negative integers. A is a homogeneous matrix if a_{ij} + a_{kl} = a_{il} + a_{kj} for any choice of the four indices. We ask: if A is a homogeneous matrix and F is a form in C[x_1, ..., x_n] with deg(F) = trace(A), what is the least integer s(A) such that F = det M_1 + ... + det M_{s(A)}, where the M_i are k x k matrices of forms with degree matrix A? We consider this problem for n > 3 and prove that s(A) is at most k^{n-3}, with s(A) < k^{n-3} in infinitely many cases. However, s(A) = k^{n-3} when the entries of A are large with respect to k.
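
    A toy instance (outside the paper's range n > 3, chosen only to make the definitions concrete): with k = 2 and the homogeneous degree matrix A = [[1, 1], [1, 1]] (so trace(A) = 2), the binary quadric F = x^2 + y^2 is a single determinant of linear forms:

    ```python
    # F = x**2 + y**2 = det([[x, -y], [y, x]]): one determinant suffices (s = 1 here).
    # Every entry of M is a linear form, matching the degree matrix A = [[1, 1], [1, 1]].
    def det2(m):
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    def F(x, y):
        return x ** 2 + y ** 2

    def M(x, y):
        return [[x, -y], [y, x]]

    print(all(F(x, y) == det2(M(x, y)) for x in range(-3, 4) for y in range(-3, 4)))
    ```

    The paper's question is how many such determinants are needed for a *general* form once n > 3, where a single determinant no longer suffices.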

    Convex computation of the region of attraction of polynomial control systems

    Get PDF
    We address the long-standing problem of computing the region of attraction (ROA) of a target set (e.g., a neighborhood of an equilibrium point) of a controlled nonlinear system with polynomial dynamics and semialgebraic state and input constraints. We show that the ROA can be computed by solving an infinite-dimensional convex linear programming (LP) problem over the space of measures. In turn, this problem can be solved approximately via a classical converging hierarchy of convex finite-dimensional linear matrix inequalities (LMIs). Our approach is genuinely primal in the sense that convexity of the problem of computing the ROA is an outcome of optimizing directly over system trajectories. The dual infinite-dimensional LP on nonnegative continuous functions (approximated by polynomial sums of squares) allows us to generate a hierarchy of semialgebraic outer approximations of the ROA at the price of solving a sequence of LMI problems with asymptotically vanishing conservatism. This sharply contrasts with the existing literature, which follows an exclusively dual Lyapunov approach yielding either nonconvex bilinear matrix inequalities or conservative LMI conditions. The approach is simple and readily applicable, as the outer approximations are the outcome of a single semidefinite program with no additional data required besides the problem description.
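
    The object being approximated can be stated concretely. A brute-force baseline (emphatically not the paper's LP/LMI hierarchy, which needs a semidefinite solver): the finite-horizon ROA of a target set is the set of initial conditions whose trajectories reach the target by time T, which one can probe by direct integration. The dynamics, target set, and horizon below are invented for illustration:

    ```python
    # Brute-force ROA membership test for the target set X_T = [-0.1, 0.1],
    # dynamics xdot = -x + x**3, horizon T, via explicit Euler integration.
    def in_roa(x0, T=10.0, dt=1e-3):
        x = x0
        for _ in range(int(T / dt)):
            x += dt * (-x + x ** 3)
            if abs(x) > 10:  # trajectory has diverged
                return False
        return abs(x) <= 0.1  # inside the target set at time T

    # The true ROA of the origin here is (-1, 1): inside converges, outside blows up.
    print(in_roa(0.9), in_roa(1.1))
    ```

    Such sampling gives point evidence only; the paper's contribution is a convergent hierarchy of *outer* approximations with certified set inclusions, obtained from a single semidefinite program per relaxation order.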

    Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

    Full text link
    The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that under appropriate conditions the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations which are allowed in each iterative step. Unlike previous works on solving the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition, which we call "zero-convexity". This class, which is introduced and discussed here, holds a promise to solve optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained. Comment: Mathematical Programming Series A, accepted for publication.
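
    A minimal sketch of the sequential subgradient projection step, in the special case of affine constraint functions g_i(x) = a_i . x - b_i on R^2 (where the subgradient is the constant vector a_i and each step reduces to an orthogonal projection onto a halfspace); the constraint data is invented for illustration:

    ```python
    # One subgradient projection step for the set {x : a.x <= b}:
    # if g(x) = a.x - b > 0, move along -a by g(x)/||a||^2 (here a is the subgradient of g).
    def step(x, a, b):
        g = a[0] * x[0] + a[1] * x[1] - b
        if g <= 0:
            return x  # already inside this set, no move
        n2 = a[0] ** 2 + a[1] ** 2  # squared norm of the subgradient
        return (x[0] - g * a[0] / n2, x[1] - g * a[1] / n2)

    # Three halfspaces: x <= 1, y <= 1, and x + y >= 0 (written as -x - y <= 0).
    halfspaces = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0), ((-1.0, -1.0), 0.0)]
    x = (5.0, 5.0)
    for _ in range(100):  # sweep cyclically over the sets
        for a, b in halfspaces:
            x = step(x, a, b)
    print(x)  # a point in the intersection of all three halfspaces
    ```

    The paper's setting is far more general: the g_i need only be zero-convex rather than convex, and the method tolerates summable perturbations of each step, which is what connects it to the superiorization methodology.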

    Inner approximations for polynomial matrix inequalities and robust stability regions

    No full text
    A mistake is fixed in the proof of Lemma 1. It does not affect the remainder of the paper. Following a polynomial approach, many robust fixed-order controller design problems can be formulated as optimization problems whose set of feasible solutions is modelled by parametrized polynomial matrix inequalities (PMI). These feasibility sets are typically nonconvex. Given a parametrized PMI set, we provide a hierarchy of linear matrix inequality (LMI) problems whose optimal solutions generate inner approximations modelled by a single polynomial sublevel set. Those inner approximations converge in a strong analytic sense to the nonconvex original feasible set, with asymptotically vanishing conservatism. One may also impose the hierarchy of inner approximations to be nested or convex. In the latter case they no longer converge to the feasible set, but they can be used in a convex optimization framework at the price of some conservatism. Finally, we show that the specific geometry of nonconvex polynomial stability regions can be exploited to improve convergence of the hierarchy of inner approximations.
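
    The inner-approximation property can be illustrated on a tiny hand-picked instance (not produced by the paper's LMI hierarchy): the PMI set {x : P(x) >= 0} with P(x) = [[1, x], [x, 1]] is exactly [-1, 1], and the polynomial sublevel set {x : x^2 - 0.81 <= 0} = [-0.9, 0.9] is a slightly conservative inner approximation of it. A sampling check:

    ```python
    # A symmetric 2x2 matrix is positive semidefinite iff its trace and determinant
    # are both nonnegative; use this to test membership in the PMI set {x : P(x) >= 0}.
    def psd2(m):
        return m[0][0] + m[1][1] >= 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] >= 0

    def in_sublevel(x):
        return x ** 2 - 0.81 <= 0  # the candidate inner approximation [-0.9, 0.9]

    # Every sampled point of the sublevel set also satisfies the PMI: inner containment.
    xs = [i / 1000 for i in range(-2000, 2001)]
    print(all(psd2([[1.0, x], [x, 1.0]]) for x in xs if in_sublevel(x)))
    ```

    The paper produces such single-sublevel-set inner approximations automatically, with containment certified by the LMI hierarchy rather than by sampling.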