
    Least-Squares Covariance Matrix Adjustment

    We consider the problem of finding the smallest adjustment to a given symmetric n × n matrix, as measured by the Euclidean or Frobenius norm, so that it satisfies some given linear equalities and inequalities, and in addition is positive semidefinite. This least-squares covariance adjustment problem is a convex optimization problem, and can be efficiently solved using standard methods when the number of variables (i.e., entries in the matrix) is modest, say, under 1000. Since the number of variables is n(n+1)/2, this corresponds to a limit around n = 45. Malick [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 272-284] studies a closely related problem and calls it the semidefinite least-squares problem. In this paper we formulate a dual problem that has no matrix inequality or matrix variables, and a number of (scalar) variables equal to the number of equality and inequality constraints in the original least-squares covariance adjustment problem. This dual problem allows us to solve far larger least-squares covariance adjustment problems than would be possible using standard methods. Assuming a modest number of constraints, problems with n = 1000 are readily solved by the dual method. The dual method coincides with the dual method proposed by Malick when there are no inequality constraints and can be obtained as an extension of his dual method when there are inequality constraints. Using the dual problem, we show that in many cases the optimal solution is a low rank update of the original matrix. When the original matrix has structure, such as sparsity, this observation allows us to solve very large least-squares covariance adjustment problems.
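
    A toy-sized illustration of the primal problem described above, solved directly with a generic conic modeling tool; the particular constraints (unit diagonal, one sign constraint) and the use of CVXPY are our own illustrative choices, not the paper's dual method, which is what makes n = 1000 tractable.

        import cvxpy as cp
        import numpy as np

        n = 5                                            # tiny: the primal form is only practical for small n
        rng = np.random.default_rng(0)
        G = rng.standard_normal((n, n))
        A = (G + G.T) / 2                                # given symmetric matrix to be adjusted

        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0,                           # positive semidefinite
                       cp.diag(X) == 1,                  # example linear equalities
                       X[0, 1] >= 0]                     # example linear inequality
        prob = cp.Problem(cp.Minimize(cp.norm(X - A, "fro")), constraints)
        prob.solve()
        print(prob.value, np.linalg.eigvalsh(X.value)[0])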

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. The recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. Comment: 18 pages.
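
    As one concrete instance of the relaxation hierarchies surveyed in the paper, the sketch below forms the first-level SDP relaxation of max-cut on a small random graph; the example and the use of CVXPY are our own illustration, not material taken from the tutorial.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        W = np.triu(rng.integers(0, 2, (n, n)), 1)       # random 0/1 edge weights
        W = W + W.T

        # Relax x_i in {-1, 1}: replace X = x x^T by X >> 0 with unit diagonal.
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0, cp.diag(X) == 1]
        prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4), constraints)
        prob.solve()
        print(prob.value)                                # upper bound on the max-cut value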

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the General Minimum RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a κ-conditioned problem in O(√κ) iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in O(κ^{1/4}) iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of O(√κ) iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like O(1/k^2), instead of O(1/k), where k is the iteration index. Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
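
    The structural point is that for these problems the ADMM update is affine, x_{k+1} = M x_k + c, so the fixed point satisfies (I - M) x = c and a Krylov method can be aimed at that system directly. The sketch below substitutes a random contraction for the actual ADMM operator and uses SciPy's GMRES; it only illustrates the acceleration idea, not the paper's embedding in SeDuMi.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        rng = np.random.default_rng(0)
        n = 200
        M = rng.standard_normal((n, n))
        M *= 0.9 / np.linalg.norm(M, 2)                  # stand-in contraction for the ADMM map
        c = rng.standard_normal(n)

        # GMRES on (I - M) x = c, applying the operator matrix-free.
        A = LinearOperator((n, n), matvec=lambda v: v - M @ v)
        x_acc, info = gmres(A, c)

        # Plain (unaccelerated) fixed-point iteration for comparison.
        x_fp = np.zeros(n)
        for _ in range(500):
            x_fp = M @ x_fp + c
        print(info, np.linalg.norm(x_acc - x_fp))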

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems which are of particular importance in the context of large scale computation. In particular we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by the National Science Foundation grant DMS-0511336.
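
    A minimal sketch of one technique from this family: the block-diagonal preconditioner diag(A, B A^{-1} B^T) applied with MINRES to the saddle point matrix [[A, B^T], [B, 0]]. The dense matrices and exact Schur complement are simplifying assumptions for illustration; in practice both blocks are replaced by cheap approximations.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, minres

        rng = np.random.default_rng(0)
        n, m = 40, 10
        A = rng.standard_normal((n, n))
        A = A @ A.T + n * np.eye(n)                      # SPD (1,1) block
        B = rng.standard_normal((m, n))                  # full row rank constraint block
        K = np.block([[A, B.T], [B, np.zeros((m, m))]])  # symmetric indefinite saddle point matrix
        rhs = rng.standard_normal(n + m)

        S = B @ np.linalg.solve(A, B.T)                  # Schur complement (SPD)
        def apply_inv_prec(v):                           # action of diag(A, S)^{-1}
            return np.concatenate([np.linalg.solve(A, v[:n]), np.linalg.solve(S, v[n:])])
        M = LinearOperator((n + m, n + m), matvec=apply_inv_prec)

        # With the exact Schur complement the preconditioned matrix has only three
        # distinct eigenvalues, so MINRES converges in a handful of iterations.
        x, info = minres(K, rhs, M=M)
        print(info, np.linalg.norm(K @ x - rhs))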

    MINRES-QLP: a Krylov subspace method for indefinite or singular symmetric systems

    CG, SYMMLQ, and MINRES are Krylov subspace methods for solving symmetric systems of linear equations. When these methods are applied to an incompatible system (that is, a singular symmetric least-squares problem), CG could break down and SYMMLQ's solution could explode, while MINRES would give a least-squares solution but not necessarily the minimum-length (pseudoinverse) solution. This understanding motivates us to design a MINRES-like algorithm to compute minimum-length solutions to singular symmetric systems. MINRES uses QR factors of the tridiagonal matrix from the Lanczos process (where R is upper-tridiagonal). MINRES-QLP uses a QLP decomposition (where rotations on the right reduce R to lower-tridiagonal form). On ill-conditioned systems (singular or not), MINRES-QLP can give more accurate solutions than MINRES. We derive preconditioned MINRES-QLP, new stopping rules, and better estimates of the solution and residual norms, the matrix norm, and the condition number. Comment: 26 pages, 6 figures.
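
    The QLP idea can be seen on a small dense matrix: a pivoted QR followed by a second QR applied to R from the right gives a lower-triangular middle factor whose diagonal loosely tracks the singular values. This generic sketch (in the spirit of Stewart's QLP) is our own illustration; MINRES-QLP applies the right rotations to the tridiagonal R produced by the Lanczos process.

        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 8))

        Q1, R, piv = qr(A, pivoting=True)                # A[:, piv] = Q1 @ R
        Q2, Lt = qr(R.T)                                 # R = Lt.T @ Q2.T
        L = Lt.T                                         # lower-triangular: A[:, piv] = Q1 @ L @ Q2.T

        print(np.sort(np.abs(np.diag(L)))[::-1])         # diagonal of L ...
        print(np.linalg.svd(A, compute_uv=False))        # ... loosely tracks the singular values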

    Probabilistic Interpretation of Linear Solvers

    This manuscript proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems Bx = b with positive definite B for x. The goal is to replace the point estimates returned by existing methods with a Gaussian posterior belief over the elements of the inverse of B, which can be used to estimate errors. Recent probabilistic interpretations of the secant family of quasi-Newton optimization algorithms are extended. Combined with properties of the conjugate gradient algorithm, this leads to uncertainty-calibrated methods with very limited cost overhead over conjugate gradients, a self-contained novel interpretation of the quasi-Newton and conjugate gradient algorithms, and a foundation for new nonlinear optimization methods. Comment: final version, in press at SIAM Journal on Optimization.
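
    A classical identity related to the paper's theme: the conjugate gradient search directions p_i are B-conjugate, so the rank-one terms p_i p_i^T / (p_i^T B p_i) sum to B^{-1} after n steps. The plain-CG sketch below checks this numerically; it is our own illustration of the CG/inverse-estimation connection, not the paper's posterior construction.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 20
        B = rng.standard_normal((n, n))
        B = B @ B.T + n * np.eye(n)                      # symmetric positive definite
        b = rng.standard_normal(n)

        # Plain conjugate gradient, accumulating H = sum_i p_i p_i^T / (p_i^T B p_i).
        # Assumes CG runs the full n steps, which holds for generic random data.
        x = np.zeros(n); r = b.copy(); p = r.copy(); H = np.zeros((n, n))
        for _ in range(n):
            Bp = B @ p
            pBp = p @ Bp
            H += np.outer(p, p) / pBp
            alpha = (r @ r) / pBp
            x += alpha * p
            r_new = r - alpha * Bp
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new

        print(np.linalg.norm(H @ B - np.eye(n)))         # ~0: H reconstructs the inverse of B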