
    Robust Stability Analysis of Sparsely Interconnected Uncertain Systems

    In this paper, we consider robust stability analysis of large-scale, sparsely interconnected uncertain systems. By modeling the interconnections among the subsystems with integral quadratic constraints, we show that robust stability analysis of such systems can be performed by solving a set of sparse linear matrix inequalities. We also show that this sparse formulation of the analysis problem is equivalent to the classical formulation of the robustness analysis problem and hence does not introduce any additional conservativeness. The sparse formulation allows us to apply methods that rely on efficient sparse factorization techniques, and our numerical results illustrate the effectiveness of this approach compared to methods based on the standard formulation of the analysis problem.
    Comment: Provisionally accepted to appear in IEEE Transactions on Automatic Control
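    As a minimal sketch of the kind of LMI feasibility problem such an analysis reduces to, the snippet below certifies robust stability of a small polytopic uncertain system with a common Lyapunov function. This is a much simpler certificate than the paper's IQC-based formulation; the vertex matrices A1 and A2 are assumed examples, and CVXPY is used as the modeling layer.

```python
# Common-Lyapunov-function LMI for dx/dt = A(t) x with A(t) in conv{A1, A2}.
# A feasible P certifies stability for the whole polytope of systems.
import cvxpy as cp
import numpy as np

A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-1.0, 0.0], [0.8, -2.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]                 # P positive definite
for A in (A1, A2):                                   # Lyapunov LMI at each vertex
    constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("status:", prob.status)                        # 'optimal' means feasible
```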

    Linear optimization over homogeneous matrix cones

    A convex cone is homogeneous if its automorphism group acts transitively on the interior of the cone, i.e., for every pair of points in the interior of the cone, there exists a cone automorphism that maps one point to the other. Cones that are homogeneous and self-dual are called symmetric. The symmetric cones include the positive semidefinite matrix cone and the second-order cone as important practical examples. In this paper, we consider the less well-studied conic optimization problems over cones that are homogeneous but not necessarily self-dual. We start with cones of positive semidefinite symmetric matrices with a given sparsity pattern. Homogeneous cones in this class are characterized by nested block-arrow sparsity patterns, a subset of the chordal sparsity patterns. We describe transitive subsets of the automorphism groups of the cones and their duals, and important properties of the composition of log-det barrier functions with the automorphisms in this set. Next, we consider extensions to linear slices of the positive semidefinite cone, i.e., intersections of the positive semidefinite cone with a linear subspace, and review conditions that make the cone homogeneous. In the third part of the paper we give a high-level overview of the classical algebraic theory of homogeneous cones due to Vinberg and Rothaus. A fundamental consequence of this theory is that every homogeneous cone admits a spectrahedral (linear matrix inequality) representation. We conclude by discussing the role of homogeneous cone structure in primal-dual symmetric interior-point methods.
    Comment: 59 pages, 10 figures, to appear in Acta Numerica
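    As a small numerical illustration of the barrier composition property mentioned above, the sketch below checks that the log-det barrier F(X) = -log det X satisfies F(T X T^T) = F(X) - 2 log det T when T is a lower-triangular congruence that preserves an arrow sparsity pattern (the kind of transitive automorphism action the paper describes). The 3x3 pattern and the matrices X and T are assumed examples.

```python
# Log-det barrier evaluated via Cholesky, and the composition rule under a
# pattern-preserving triangular congruence X -> T X T^T.
import numpy as np

def logdet_barrier(X):
    L = np.linalg.cholesky(X)            # raises LinAlgError if X is not PD
    return -2.0 * np.sum(np.log(np.diag(L)))

# Arrow pattern: diagonal plus a dense last row/column.
X = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 5.0]])

# Lower-triangular T with the same arrow structure; T X T^T keeps the pattern.
T = np.array([[1.5,  0.0, 0.0],
              [0.0,  2.0, 0.0],
              [0.3, -0.4, 1.0]])

lhs = logdet_barrier(T @ X @ T.T)
rhs = logdet_barrier(X) - 2.0 * np.log(np.linalg.det(T))
print(np.isclose(lhs, rhs))              # True
```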

    Optimal Rates of Statistical Seriation

    Given a matrix, the seriation problem consists of permuting its rows in such a way that all of its columns have the same shape, for example, so that they are monotone increasing. We propose a statistical approach to this problem in which the matrix of interest is observed with noise, and we study the corresponding minimax rate of estimation. Specifically, when the columns are either unimodal or monotone, we show that the least squares estimator is optimal up to logarithmic factors and adapts to matrices with a certain natural structure. Finally, we propose a computationally efficient estimator in the monotonic case and study its performance both theoretically and experimentally. Our work is at the intersection of shape-constrained estimation and recent work that involves permutation learning, such as graph denoising and ranking.
    Comment: v2 corrects an error in Lemma A.1; v3 corrects Appendix F on unimodal regression, where the bounds now hold with polynomial probability rather than exponential
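    A minimal sketch of the shape-constrained projection behind the least squares estimator in the monotone case: once the row permutation is held fixed (the actual estimator also optimizes over permutations, omitted here), the projection decouples into independent column-wise isotonic regressions. The data below are synthetic.

```python
# Column-wise isotonic regression: least-squares projection of a noisy matrix
# onto the set of matrices with nondecreasing columns (row order fixed).
import numpy as np
from sklearn.isotonic import isotonic_regression

rng = np.random.default_rng(0)
n, m = 50, 8
signal = np.cumsum(rng.uniform(0.0, 1.0, size=(n, m)), axis=0)  # monotone columns
Y = signal + rng.normal(scale=1.0, size=(n, m))                 # noisy observations

est = np.column_stack([isotonic_regression(Y[:, j]) for j in range(m)])
print("noise MSE:   ", np.mean((Y - signal) ** 2))
print("estimate MSE:", np.mean((est - signal) ** 2))            # should be smaller
```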

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems.
    Comment: 18 pages
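    As a concrete instance of the convex relaxations surveyed in the paper, the sketch below forms the classical semidefinite relaxation of the (nonconvex) MAXCUT problem, obtained by replacing the rank-one constraint X = xx^T with the weaker constraint that X be positive semidefinite. The small weighted graph is an assumed example, and CVXPY is used as the modeling layer.

```python
# SDP relaxation of MAXCUT: maximize (1/4) <L, X> s.t. diag(X) = 1, X PSD.
import cvxpy as cp
import numpy as np

# Adjacency matrix of a small graph and its Laplacian L.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W

X = cp.Variable((4, 4), symmetric=True)
prob = cp.Problem(cp.Maximize(0.25 * cp.trace(L @ X)),
                  [cp.diag(X) == 1, X >> 0])
prob.solve(solver=cp.SCS)
print("SDP upper bound on the maximum cut:", prob.value)
```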

    Decomposition in conic optimization with partially separable structure

    Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general non-polyhedral convex cones, because the conic inequalities introduce an additional nonlinear coupling between the variables. However, in many applications the convex cones have a partially separable structure that allows them to be characterized in terms of simpler lower-dimensional cones. The most important example is sparse semidefinite programming with a chordal sparsity pattern. Here, partial separability derives from the clique decomposition theorems that characterize positive semidefinite and positive-semidefinite-completable matrices with chordal sparsity patterns. The paper describes a decomposition method that exploits partial separability in conic linear optimization. The method is based on Spingarn's method for equality-constrained convex optimization, combined with a fast interior-point method for evaluating proximal operators.
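    A small numerical illustration of the clique decomposition theorems the method builds on: for a chordal "arrow" pattern with cliques {0, 2} and {1, 2}, a partially specified matrix is completable to a positive semidefinite matrix exactly when its clique submatrices are positive semidefinite (Grone et al.). The matrix below is an assumed example, and the completion shown is the maximum-determinant choice for the single free entry.

```python
# Grone's theorem on a 3x3 chordal "arrow" pattern: PSD-completability is
# equivalent to all clique submatrices being PSD.
import numpy as np

def is_psd(M, tol=1e-9):
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Partially specified matrix; entry (0, 1) is free (np.nan marks it).
A = np.array([[2.0, np.nan, 1.0],
              [np.nan, 3.0, 1.0],
              [1.0, 1.0, 2.0]])

cliques = [[0, 2], [1, 2]]
print("clique blocks PSD:", all(is_psd(A[np.ix_(c, c)]) for c in cliques))

# Maximum-determinant completion of the free entry: A[0,2] * A[2,1] / A[2,2].
A[0, 1] = A[1, 0] = A[0, 2] * A[2, 1] / A[2, 2]
print("completed matrix PSD:", is_psd(A))
```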

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined to a Krylov subspace, and the General Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT)
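    The acceleration idea lends itself to a short sketch: for quadratic objectives the ADMM iteration is an affine map z -> Mz + b, so its fixed point solves the linear system (I - M)z = b, which GMRES can attack using only black-box evaluations of the iteration. The operator below is a random stand-in for an ADMM update, not the paper's actual construction.

```python
# Accelerating an affine fixed-point iteration z_{k+1} = T(z_k) = M z_k + b by
# solving (I - M) z = b with GMRES, using only evaluations of T.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n = 200
Q = rng.standard_normal((n, n)) / np.sqrt(n)
M = 0.9 * Q / np.linalg.norm(Q, 2)       # operator norm 0.9, so T is a contraction
b = rng.standard_normal(n)
T = lambda z: M @ z + b                  # stand-in "ADMM iteration"

# Matrix-vector product with I - M, via one evaluation of T (note T(0) = b).
A = LinearOperator((n, n), matvec=lambda z: z - (T(z) - b))
z_gmres, info = gmres(A, b)

z = np.zeros(n)                          # plain fixed-point iteration, 50 steps
for _ in range(50):
    z = T(z)

print("GMRES fixed-point residual:", np.linalg.norm(z_gmres - T(z_gmres)))
print("plain residual, 50 iters:  ", np.linalg.norm(z - T(z)))
```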