
    Solving Fractional Polynomial Problems by Polynomial Optimization Theory

    This work introduces the framework of polynomial optimization theory to solve fractional polynomial problems (FPPs). Unlike other widely used optimization frameworks, the proposed one applies to a larger class of FPPs, whose objective and constraint functions need not be concave or convex. An iterative algorithm that is provably convergent and enjoys asymptotic optimality properties is proposed. Numerical results validate its accuracy in the non-asymptotic regime when applied to energy efficiency maximization in multiuser multiple-input multiple-output (MIMO) communication systems.
    Comment: 5 pages, 2 figures, 1 table; submitted to Signal Processing Letters
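
    As an illustration only (this is a classical Dinkelbach-style scheme for fractional programs, not the relaxation-based algorithm proposed in the paper, and the toy objective, denominator, and box constraint below are invented for the example), maximizing a ratio of polynomials can be reduced to a sequence of parametric polynomial subproblems:

        # Illustrative Dinkelbach-style iteration for a toy 1-D fractional problem
        #   maximize f(x) / g(x) over the box [0, 2],
        # with f, g polynomials and g > 0 on the box (both chosen here for the example).
        import numpy as np

        f = lambda x: 1.0 + 2.0 * x - x**3      # numerator polynomial
        g = lambda x: 1.0 + x**2                # denominator polynomial, positive on [0, 2]

        xs = np.linspace(0.0, 2.0, 10001)       # crude grid solver for the 1-D inner subproblem
        lam = 0.0                               # current estimate of the optimal ratio
        for _ in range(50):
            x_star = xs[np.argmax(f(xs) - lam * g(xs))]   # inner subproblem: max_x f(x) - lam * g(x)
            new_lam = f(x_star) / g(x_star)               # update the ratio estimate
            if abs(new_lam - lam) < 1e-10:
                break
            lam = new_lam

        print(f"approximate maximizer x* = {x_star:.4f}, ratio = {lam:.6f}")

    In higher dimensions the inner subproblem is itself a (generally nonconvex) polynomial optimization problem, which is where a polynomial optimization framework of the kind described above becomes relevant.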

    Sum of squares generalizations for conic sets

    In polynomial optimization problems, nonnegativity constraints are typically handled using the sum of squares condition. This can be enforced efficiently through semidefinite programming formulations or, as more recently proposed by Papp and Yildiz [18], by using the sum of squares cone directly in a nonsymmetric interior point algorithm. Beyond nonnegativity, more complicated polynomial constraints (in particular, generalizations of the positive semidefinite, second order and $\ell_1$-norm cones) can also be modeled through structured sum of squares programs. We take a different approach and propose using more specialized polynomial cones instead. This can result in lower dimensional formulations, more efficient oracles for interior point methods, or self-concordant barriers with smaller parameters. In most cases, these algorithmic advantages also translate to faster solving times in practice.
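
    As background (standard material, not specific to this paper), the sum of squares condition mentioned above is what makes semidefinite programming applicable: a polynomial p of degree 2d is a sum of squares exactly when

        p(x) = m_d(x)^\top Q \, m_d(x), \qquad Q \succeq 0,

    where m_d(x) is the vector of all monomials of degree at most d; matching the coefficients of p against the entries of Q gives linear equality constraints, so searching for such a Q is a semidefinite feasibility problem.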