
    A sequential semidefinite programming method and an application in passive reduced-order modeling

    We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. In particular, a suitable symmetrization procedure needs to be chosen for the linearization of the complementarity condition. The choice of the symmetrization procedure can be shifted in a very natural way to certain linear semidefinite subproblems, and can thus be reduced to a well-studied problem. The resulting sequential semidefinite programming (SSP) method is a generalization of the well-known SQP method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and, based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.
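
As a concrete, hedged illustration of the iteration described above, the sketch below applies one plausible reading of an SSP loop to a made-up two-variable nonlinear SDP: at each iterate, a quadratic model of the objective is minimized subject to a linearized positive-semidefiniteness constraint, which is itself a linear SDP. The toy constraint matrix, the choice of model Hessian, the full steps, and the use of cvxpy are all assumptions of this sketch, not the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Toy nonlinear SDP (made up for illustration):
#   minimize  f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to B(x) = [[x1, x1*x2 - 1], [x1*x2 - 1, x2]] being positive semidefinite.
def f_grad(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def B(x):
    return np.array([[x[0], x[0] * x[1] - 1.0],
                     [x[0] * x[1] - 1.0, x[1]]])

def dB(x):  # partial derivatives of B with respect to x1 and x2
    return [np.array([[1.0, x[1]], [x[1], 0.0]]),
            np.array([[0.0, x[0]], [x[0], 1.0]])]

x = np.array([1.0, 1.0])                       # strictly feasible starting point
for k in range(20):
    g, Bk, Dk = f_grad(x), B(x), dB(x)
    d = cp.Variable(2)
    S = cp.Variable((2, 2), PSD=True)          # slack enforcing the linearized PSD constraint
    model = g @ d + cp.sum_squares(d)          # quadratic model of f; constraint curvature ignored here
    cons = [S == Bk + d[0] * Dk[0] + d[1] * Dk[1]]
    cp.Problem(cp.Minimize(model), cons).solve()
    x = x + d.value                            # full step; a real implementation would add a line search
    if np.linalg.norm(d.value) < 1e-8:
        break
print("approximate KKT point:", x)
```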

    On the Burer-Monteiro method for general semidefinite programs

    Consider a semidefinite program (SDP) involving an $n \times n$ positive semidefinite matrix $X$. The Burer-Monteiro method uses the substitution $X = YY^T$ to obtain a nonconvex optimization problem in terms of an $n \times p$ matrix $Y$. Boumal et al. showed that this nonconvex method provably solves equality-constrained SDPs with a generic cost matrix when $p \gtrsim \sqrt{2m}$, where $m$ is the number of constraints. In this note we extend their result to arbitrary SDPs, possibly involving inequalities or multiple semidefinite constraints. We derive similar guarantees for a fixed cost matrix and generic constraints. We illustrate applications to matrix sensing and integer quadratic minimization.
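
To make the substitution concrete, the following sketch applies it to the simplest SDP with a single equality constraint, $\min \langle C, X \rangle$ subject to $\mathrm{trace}(X) = 1$ and $X \succeq 0$, whose optimal value is the smallest eigenvalue of $C$. The quadratic penalty, the penalty weight, and scipy's L-BFGS are simplifications chosen for brevity (the original Burer-Monteiro method uses an augmented Lagrangian); they are not taken from this note.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 30, 3                     # with m = 1 constraint, p ~ sqrt(2m) already allows a tiny rank
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                # generic symmetric cost matrix
mu = 100.0                       # quadratic-penalty weight for the constraint trace(X) = 1

def penalized(y):
    Y = y.reshape(n, p)
    gap = np.trace(Y @ Y.T) - 1.0
    value = np.sum(C * (Y @ Y.T)) + mu * gap ** 2       # <C, YY^T> plus penalty
    grad = 2.0 * C @ Y + 4.0 * mu * gap * Y             # gradient with respect to Y
    return value, grad.ravel()

res = minimize(penalized, rng.standard_normal(n * p), jac=True, method="L-BFGS-B")
Y = res.x.reshape(n, p)
X = Y @ Y.T
print("Burer-Monteiro value:", np.sum(C * X) / np.trace(X))   # normalize the small constraint residual
print("lambda_min(C)       :", np.linalg.eigvalsh(C)[0])      # the two values should roughly agree
```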

    Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations

    It is well known that any sum of squares (SOS) program can be cast as a semidefinite program (SDP) of a particular structure, and that therein lies the computational bottleneck for SOS programs: the SDPs generated by this procedure are large and costly to solve when the polynomials involved in the SOS programs have a large number of variables and high degree. In this paper, we review SOS optimization techniques and present two new methods for improving their computational efficiency. The first method leverages the sparsity of the underlying SDP to obtain computational speed-ups. Further improvements can be obtained if the coefficients of the polynomials that describe the problem have a particular sparsity pattern, called chordal sparsity. The second method bypasses semidefinite programming altogether and relies instead on solving a sequence of more tractable convex programs, namely linear and second-order cone programs. This raises the question of how well one can approximate the cone of SOS polynomials by second-order representable cones. In the last part of the paper, we present some recent negative results related to this question.
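
To see the reduction from an SOS program to a structured SDP on the smallest possible example, the sketch below certifies that a fixed univariate quartic is a sum of squares by matching its coefficients against a positive semidefinite Gram matrix. The polynomial, the monomial basis, and the use of cvxpy are choices made only for this illustration; the sparsity-exploiting and second-order-cone techniques discussed in the paper are not shown.

```python
import numpy as np
import cvxpy as cp

# Certify that p(x) = x^4 + 2x^3 + 3x^2 + 2x + 1 is a sum of squares by finding
# a PSD Gram matrix Q with p(x) = z(x)^T Q z(x) for the monomial basis z(x) = [1, x, x^2].
Q = cp.Variable((3, 3), PSD=True)
coefficient_matching = [
    Q[0, 0] == 1,                 # constant term
    2 * Q[0, 1] == 2,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 3,   # coefficient of x^2
    2 * Q[1, 2] == 2,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), coefficient_matching)
prob.solve()
print(prob.status)                # 'optimal' means a PSD Gram matrix exists, so p is SOS

# Recover an explicit decomposition p = sum_i (c_i^T z)^2 from Q = V diag(w) V^T.
w, V = np.linalg.eigh(Q.value)
w = np.clip(w, 0.0, None)         # clip tiny negative eigenvalues left by the numerical solver
print(np.round(V * np.sqrt(w), 3))  # column i holds c_i in the basis [1, x, x^2]
```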

    Certified Roundoff Error Bounds Using Semidefinite Programming

    Roundoff errors cannot be avoided when implementing numerical programs with finite precision. The ability to reason about rounding is especially important if one wants to explore a range of potential representations, for instance for FPGAs or custom hardware implementations. This problem becomes challenging when the program does not employ solely linear operations, as non-linearities are inherent to many interesting computational problems in real-world applications. Existing approaches to reasoning about roundoff are limited in the presence of nonlinear correlations between variables, leading to either imprecise bounds or high analysis time. Furthermore, while it is easy to implement a straightforward method such as interval arithmetic, sophisticated techniques are less straightforward to implement in a formal setting. Thus there is a need for methods which output certificates that can be formally validated inside a proof assistant. We present a framework to provide upper bounds on absolute roundoff errors. This framework is based on optimization techniques employing semidefinite programming and sums-of-squares certificates, which can be formally checked inside the Coq theorem prover. Our tool covers a wide range of nonlinear programs, including polynomials and transcendental operations as well as conditional statements. We illustrate the efficiency and precision of this tool on non-trivial programs coming from biology, optimization and space control. Our tool produces more precise error bounds for 37 percent of all programs and yields better performance in 73 percent of all programs.
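
The imprecision of naive interval arithmetic in the presence of correlated nonlinear terms, which motivates the SDP-based bounds above, can be seen on a one-line example. The snippet below is purely illustrative and is unrelated to the certified, Coq-checked tool described in the abstract.

```python
# Naive interval arithmetic over-approximates x - x*x on x in [0, 1] because it
# treats the two occurrences of x as independent quantities.
def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def i_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (0.0, 1.0)
print("interval bound:", i_sub(x, i_mul(x, x)))   # (-1.0, 1.0)
print("true range    :", (0.0, 0.25))             # x - x^2 actually lies in [0, 1/4] on [0, 1]
```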

    A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints

    We propose a new algorithm to solve optimization problems of the form $\min f(X)$ for a smooth function $f$ under the constraints that $X$ is positive semidefinite and the diagonal blocks of $X$ are small identity matrices. Such problems often arise as the result of relaxing a rank constraint (lifting). In particular, many estimation tasks involving phases, rotations, orthonormal bases or permutations fit in this framework, and so do certain relaxations of combinatorial problems such as Max-Cut. The proposed algorithm exploits the facts that (1) such formulations admit low-rank solutions, and (2) their rank-restricted versions are smooth optimization problems on a Riemannian manifold. Combining insights from both the Riemannian and the convex geometries of the problem, we characterize when second-order critical points of the smooth problem reveal KKT points of the semidefinite problem. We compare against state-of-the-art, mature software and find that, on certain interesting problem instances, what we call the staircase method is orders of magnitude faster, more accurate and more scalable. Code is available.
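
For a minimal sketch of the setting, the code below runs plain Riemannian gradient descent on the simplest instance of the constraint set above: diagonal blocks of size one, i.e. $\mathrm{diag}(X) = 1$, so that with $X = YY^T$ the rows of $Y$ are unit vectors and the search space is a product of spheres. The random cost, fixed rank, and fixed step size are assumptions; the paper's staircase method instead uses second-order (trust-region) steps and increases the rank when a second-order critical point of the restricted problem does not certify a KKT point of the SDP.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 6                                   # problem size and fixed rank of the factor Y
A = rng.standard_normal((n, n))
C = (A + A.T) / 2                              # symmetric cost matrix (e.g. a Max-Cut-type cost)

def riemannian_grad(Y):
    G = 2.0 * C @ Y                            # Euclidean gradient of <C, Y Y^T>
    return G - np.sum(G * Y, axis=1, keepdims=True) * Y   # project each row onto its sphere's tangent space

def retract(Y):
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)   # re-normalize rows to stay feasible

Y = retract(rng.standard_normal((n, p)))
for _ in range(2000):
    Y = retract(Y - 1e-2 * riemannian_grad(Y))

X = Y @ Y.T
print("objective <C, X>   :", np.sum(C * X))
print("diag(X) all equal 1:", np.allclose(np.diag(X), 1.0))
```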