
    BFGS-like updates of constraint preconditioners for sequences of KKT linear systems in quadratic programming

    We focus on efficient preconditioning techniques for sequences of KKT linear systems arising from the interior point solution of large convex quadratic programming problems. Constraint Preconditioners (CPs), though very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time-consuming task at each interior point iteration. We overcome this problem by computing the CP from scratch only at selected interior point iterations and by updating the last computed CP at the remaining iterations, via suitable low-rank modifications based on a BFGS-like formula. This work extends the limited-memory preconditioners for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in [SIAM J. Optim. 2011; 21(3):912--935], by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared to commonly used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost of factorizing the CP is high.
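
    The low-rank update described above has the shape of a limited-memory BFGS recursion applied to a previously factorized preconditioner. The sketch below illustrates that general idea in operator form, using the standard inverse-BFGS two-loop recursion; it is not the paper's KKT-specific constraint-preconditioner update, and the function names and the (s, y) update pairs are assumptions made for illustration.

```python
import numpy as np

def apply_updated_preconditioner(apply_P0, pairs, r):
    """Apply a BFGS-style low-rank update of a base preconditioner to r.

    apply_P0 : callable returning P0^{-1} v for the last factorized
               preconditioner P0 (the expensive factorization is reused).
    pairs    : list of (s, y) update vectors, oldest first, with y^T s > 0.
    r        : vector to precondition.

    Implements the inverse-BFGS recursion
        H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T,
    rho = 1 / (y^T s), via the usual two-loop recursion, so P0 is never
    re-factorized between "from scratch" recomputations.
    """
    q = r.copy()
    alphas, rhos = [], []
    for s, y in reversed(pairs):             # newest pair first
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    z = apply_P0(q)                          # one solve with the stored factors
    for (s, y), rho, alpha in zip(pairs, reversed(rhos), reversed(alphas)):
        beta = rho * np.dot(y, z)
        z += (alpha - beta) * s
    return z
```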

    Implementing a smooth exact penalty function for equality-constrained nonlinear optimization

    We develop a general equality-constrained nonlinear optimization algorithm based on a smooth penalty function proposed by Fletcher (1970). Although it was historically considered to be computationally prohibitive in practice, we demonstrate that the computational kernels required are no more expensive than those of other widely accepted methods for nonlinear optimization. The main kernel required to evaluate the penalty function and its derivatives is the solution of a structured linear system. We show how to solve this system efficiently by storing a single factorization per iteration when the matrices are available explicitly. We further show how to adapt the penalty function to the class of factorization-free algorithms by solving the linear system iteratively. The penalty function therefore has promise when the linear system can be solved efficiently, e.g., for PDE-constrained optimization problems where efficient preconditioners exist. We discuss extensions, including handling simple constraints explicitly, regularizing the penalty function, and inexact evaluation of the penalty function and its gradients. We demonstrate the merits of the approach and its various features on some nonlinear programs from a standard test set and on some PDE-constrained optimization problems.
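
    As a concrete picture of the main kernel, the sketch below evaluates a Fletcher-style penalty phi(x) = f(x) - c(x)^T y(x), where the multiplier estimate y(x) comes from a single structured augmented linear solve. This is a small dense illustration under the assumption that f, c, and their first derivatives are available as callables with a full-row-rank constraint Jacobian; the paper's factorization-reuse and factorization-free variants, and its exact penalty definition, are not reproduced here.

```python
import numpy as np

def fletcher_penalty(x, f, c, grad_f, jac_c, sigma=0.0):
    """Evaluate a Fletcher-style smooth penalty at x (illustrative sketch).

    f, c    : objective and equality-constraint callables.
    grad_f  : gradient of f, returns an n-vector.
    jac_c   : Jacobian of c, returns an (m x n) array A(x).
    sigma   : optional penalty shift on the multiplier estimate.

    The multiplier estimate y(x) is taken from the structured augmented system
        [ I      A(x)^T ] [ r ]   [ grad_f(x)   ]
        [ A(x)   0      ] [ y ] = [ sigma * c(x) ],
    i.e. a (shifted) least-squares multiplier estimate, and the penalty is
        phi(x) = f(x) - c(x)^T y(x).
    """
    g, A, cx = grad_f(x), jac_c(x), c(x)
    n, m = len(x), len(cx)
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([g, sigma * cx])
    y = np.linalg.solve(K, rhs)[n:]           # multiplier estimate
    return f(x) - cx @ y
```

    In practice the augmented matrix would be factorized once per iteration (or solved iteratively with a suitable preconditioner) rather than formed densely as above.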

    The antitriangular factorisation of saddle point matrices

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173--196] recently introduced the block antitriangular (``Batman'') decomposition for symmetric indefinite matrices. Here we show how this factorization simplifies for saddle point matrices and demonstrate how it represents the common nullspace method. We show that rank-1 updates to the saddle point matrix can be easily incorporated into the factorization, and we give bounds on the eigenvalues of matrices important in saddle point theory. We show the relation of this factorization to constraint preconditioning and how it transforms but preserves the structure of block diagonal and block triangular preconditioners.
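
    Since the abstract notes that the antitriangular factorization represents the common nullspace method, the sketch below shows that nullspace method itself on a small dense saddle point system. It is an illustration only, not the Batman factorization; the use of SciPy's null_space and lstsq, and the full-rank assumption on the constraint block, are choices made for compactness.

```python
import numpy as np
from scipy.linalg import null_space, lstsq

def nullspace_solve(A, B, f, g):
    """Solve the saddle point system
        [ A  B^T ] [x]   [f]
        [ B  0   ] [y] = [g]
    by the nullspace method (small dense sketch, B assumed full row rank).
    """
    # Particular solution of B x_p = g (minimum-norm least squares).
    x_p, *_ = lstsq(B, g)
    # Z spans null(B); the reduced problem lives in that subspace.
    Z = null_space(B)
    # Reduced (projected) system: Z^T A Z v = Z^T (f - A x_p).
    v = np.linalg.solve(Z.T @ A @ Z, Z.T @ (f - A @ x_p))
    x = x_p + Z @ v
    # Recover y from the first block row: B^T y = f - A x (least squares).
    y, *_ = lstsq(B.T, f - A @ x)
    return x, y
```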

    Micro- and macro-block factorizations for regularized saddle point systems

    We present unique and existing micro-block and induced macro-block Crout-based factorizations for matrices from regularized saddle-point problems with a semi-positive definite regularization block. For the classical case of saddle-point problems, we show that the induced macro-block factorizations mostly reduce to the factorization presented in [24]. The presented factorizations can be used as a direct solution algorithm for regularized saddle-point problems, as well as a basis for the construction of preconditioners.
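
    For orientation, the sketch below performs a generic block-level LDL^T factorization of a regularized saddle point matrix [[A, B^T], [B, -C]] via its Schur complement. It illustrates the kind of block ("macro-block"-level) factorization such matrices admit and how it can drive direct solves or preconditioners; it is not the paper's micro-block Crout construction, and the dense solves are assumptions for a small example.

```python
import numpy as np

def block_ldlt_regularized_saddle(A, B, C):
    """Block LDL^T factorization of the regularized saddle point matrix
        K = [ A   B^T ]
            [ B  -C   ],
    with A symmetric positive definite and C symmetric positive
    semidefinite (the regularization block). Returns L, D with K = L D L^T.
    """
    n, m = A.shape[0], C.shape[0]
    Ainv_Bt = np.linalg.solve(A, B.T)           # A^{-1} B^T
    S = -(C + B @ Ainv_Bt)                      # negative Schur complement
    L = np.block([[np.eye(n), np.zeros((n, m))],
                  [Ainv_Bt.T, np.eye(m)]])      # unit lower block triangular
    D = np.block([[A, np.zeros((n, m))],
                  [np.zeros((m, n)), S]])       # block diagonal
    return L, D
```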

    An interior-point trust-region-based method for large-scale non-negative regularization

    We present a new method for solving large-scale quadratic problems with quadratic and nonnegativity constraints. Such problems arise, for example, in the regularization of ill-posed problems in image restoration, where, in addition, some of the matrices involved are very ill-conditioned. The new method uses recently developed techniques for the large-scale trust-region subproblem.
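
    To make the problem class concrete, the sketch below states a small instance (a quadratic objective with a quadratic norm constraint and nonnegativity bounds) and hands it to a general-purpose solver. The solver choice is purely illustrative; the paper's specialized interior-point trust-region algorithm for the large-scale case is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, Bounds

def solve_nonneg_regularized(H, c, delta):
    """Illustrative small-scale solve of
        min  0.5 x^T H x + c^T x
        s.t. ||x||_2 <= delta,  x >= 0,
    using SciPy's general trust-constr solver (not the paper's method).
    """
    n = len(c)
    x0 = np.full(n, 0.1 * delta / np.sqrt(n))   # strictly feasible start
    obj = lambda x: 0.5 * x @ H @ x + c @ x
    grad = lambda x: H @ x + c
    ball = NonlinearConstraint(lambda x: x @ x, -np.inf, delta**2)
    res = minimize(obj, x0, jac=grad, method="trust-constr",
                   constraints=[ball], bounds=Bounds(0.0, np.inf))
    return res.x
```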