    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an effective preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and it may be convenient to resort to approximations of them. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
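
    For orientation, here is a hedged sketch (our own, not the authors' code; all names are hypothetical) of the objects involved: for a KKT matrix K = [[H, A^T], [A, 0]], a constraint preconditioner has the form P = [[G, A^T], [A, 0]] with G a cheap approximation of H, and applying P^{-1} reduces to solves with G and with the Schur complement S = A G^{-1} A^T, which is precisely the factorization the low-rank updating strategy aims to reuse across interior point iterations.

    import numpy as np

    def apply_constraint_preconditioner(G_diag, A, S_solve, r):
        """Solve P z = r with P = [[diag(G_diag), A^T], [A, 0]].

        S_solve: callable solving S y = c, where S = A diag(G_diag)^{-1} A^T
        (e.g. a factorization built once for a seed preconditioner and, in the
        spirit of the paper, corrected by low-rank terms at later iterations).
        """
        n = G_diag.size
        r1, r2 = r[:n], r[n:]
        # Block elimination: S y = A G^{-1} r1 - r2, then x = G^{-1} (r1 - A^T y)
        y = S_solve(A @ (r1 / G_diag) - r2)
        x = (r1 - A.T @ y) / G_diag
        return np.concatenate([x, y])

    # Toy usage with an exact (seed) Schur complement solve
    rng = np.random.default_rng(0)
    n, m = 8, 3
    G_diag = rng.uniform(1.0, 2.0, n)          # diagonal approximation of the (1,1) block
    A = rng.standard_normal((m, n))
    S = A @ np.diag(1.0 / G_diag) @ A.T
    z = apply_constraint_preconditioner(G_diag, A, lambda c: np.linalg.solve(S, c),
                                        rng.standard_normal(n + m))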

    BFGS-like updates of constraint preconditioners for sequences of KKT linear systems in quadratic programming

    We focus on efficient preconditioning techniques for sequences of KKT linear systems arising from the interior point solution of large convex quadratic programming problems. Constraint Preconditioners (CPs), though very effective in accelerating Krylov methods in the solution of KKT systems, have a very high computational cost in some instances, because their factorization may be the most time-consuming task at each interior point iteration. We overcome this problem by computing the CP from scratch only at selected interior point iterations and by updating the last computed CP at the remaining iterations, via suitable low-rank modifications based on a BFGS-like formula. This work extends the limited-memory preconditioners for symmetric positive definite matrices proposed by Gratton, Sartenaer and Tshimanga in [SIAM J. Optim., 2011; 21(3):912-935] by exploiting specific features of KKT systems and CPs. We prove that the updated preconditioners still belong to the class of exact CPs, thus allowing the use of the conjugate gradient method. Furthermore, they have the property of increasing the number of unit eigenvalues of the preconditioned matrix as compared to generally used CPs. Numerical experiments are reported, which show the effectiveness of our updating technique when the cost for the factorization of the CP is high.
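
    As background, the following is a minimal sketch (ours, with hypothetical names) of the generic limited-memory BFGS update of Gratton, Sartenaer and Tshimanga that the paper builds on, written for a symmetric positive definite matrix A: given a seed preconditioner solve H0 ~ A^{-1} and pairs (s_i, y_i = A s_i), the recursively updated preconditioner can be applied with the standard L-BFGS two-loop recursion, using only inner products and one seed solve. The paper's extension to constraint preconditioners involves more structure than shown here.

    import numpy as np

    def apply_lm_preconditioner(H0_solve, S, Y, r):
        """Apply the BFGS-updated preconditioner to a vector r.

        H0_solve: callable applying the seed preconditioner (approximate A^{-1}).
        S, Y: lists of update pairs with Y[i] = A @ S[i].
        """
        stack = []
        for s, y in zip(reversed(S), reversed(Y)):   # first loop of the recursion
            rho = 1.0 / (y @ s)
            alpha = rho * (s @ r)
            r = r - alpha * y
            stack.append((alpha, rho, s, y))
        q = H0_solve(r)                              # seed preconditioner solve
        for alpha, rho, s, y in reversed(stack):     # second loop of the recursion
            beta = rho * (y @ q)
            q = q + (alpha - beta) * s
        return q

    # Toy usage: the seed preconditioner is the exact inverse of a nearby matrix
    rng = np.random.default_rng(1)
    n = 50
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)                      # SPD test matrix
    A0 = A + 0.5 * np.eye(n)                         # "seed" matrix of the sequence
    S_list = [rng.standard_normal(n) for _ in range(5)]
    Y_list = [A @ s for s in S_list]
    z = apply_lm_preconditioner(lambda v: np.linalg.solve(A0, v),
                                S_list, Y_list, rng.standard_normal(n))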

    Preconditioning issues in the numerical solution of nonlinear equations and nonlinear least squares

    Second order methods for optimization call for the solution of sequences of linear systems. In this survey we discuss several issues related to the preconditioning of such sequences. Covered topics include both techniques for building updates of factorized preconditioners and quasi-Newton approaches. Sequences of unsymmetric linear systems arising in Newton-Krylov methods are considered, as well as symmetric positive definite sequences arising in the solution of nonlinear least-squares problems by Truncated Gauss-Newton methods.

    On an integrated Krylov-ADI solver for large-scale Lyapunov equations

    One of the most computationally expensive steps of the low-rank ADI method for large-scale Lyapunov equations is the solution of a shifted linear system at each iteration. We propose the use of the extended Krylov subspace method for this task. In particular, we illustrate how a single approximation space can be constructed to solve all the shifted linear systems needed to achieve a prescribed accuracy in terms of the Lyapunov residual norm. Moreover, we show how to fully merge the two iterative procedures in order to obtain a novel, efficient implementation of the low-rank ADI method for an important class of equations. Many state-of-the-art algorithms for the shift computation can easily be incorporated into our new scheme as well. Several numerical results illustrate the potential of our novel procedure when compared to an implementation of the low-rank ADI method based on sparse direct solvers for the shifted linear systems.
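
    A hedged toy illustration of the key mechanism (ours, not the paper's implementation): once an orthonormal basis V of the extended Krylov subspace EK_m(A, b) is available, with projected matrix T = V^T A V, the Galerkin approximation of (A + pI) x = b is x_p ~ V (T + pI)^{-1} V^T b for any shift p, so every shifted system required by the low-rank ADI iteration can be served by the same space at the cost of a small dense solve. The basis is built here with plain QR for clarity, and the test matrix and shift values are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 200, 10
    A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
         + np.diag(-np.ones(n - 1), -1))               # 1D Laplacian stencil (SPD)
    b = rng.standard_normal(n)

    # Basis of EK_m(A, b) = span{b, Ab, ..., A^(m-1)b, A^(-1)b, ..., A^(-m)b};
    # practical codes use a short recurrence instead of stacking and QR.
    vecs = []
    v_plus, v_minus = b.copy(), b.copy()
    for _ in range(m):
        vecs.append(v_plus)
        v_plus = A @ v_plus
        v_plus = v_plus / np.linalg.norm(v_plus)
        v_minus = np.linalg.solve(A, v_minus)
        v_minus = v_minus / np.linalg.norm(v_minus)
        vecs.append(v_minus)
    V, _ = np.linalg.qr(np.column_stack(vecs))

    # One projected matrix serves every shift: x_p ~ V (T + p I)^(-1) V^T b
    T = V.T @ A @ V
    VTb = V.T @ b
    I_small = np.eye(V.shape[1])
    for p in [0.1, 1.0, 10.0]:                         # illustrative shift values
        x_p = V @ np.linalg.solve(T + p * I_small, VTb)
        res = np.linalg.norm((A + p * np.eye(n)) @ x_p - b)
        print(f"shift {p:5.2f}: residual {res:.2e}")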

    New updates of incomplete LU factorizations and applications to large nonlinear systems

    In this paper, we address the problem of preconditioning sequences of large sparse nonsymmetric systems of linear equations and present two new strategies to construct approximate updates of factorized preconditioners. Both updates are based on the availability of an incomplete LU (ILU) factorization for one matrix of the sequence and differ in the approximation of the so-called ideal updates. The first strategy is an approximate diagonal update of the ILU factorization; the second strategy relies on banded approximations of the factors in the ideal update. The efficiency and reliability of the proposed preconditioners are shown in the solution of nonlinear systems of equations by preconditioned inexact Newton-Krylov methods. Matrix-free implementations of the updating strategy are provided, and numerical experiments are carried out on application problems.
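
    To fix ideas, here is a hedged sketch of one simple flavor of diagonal updating (our simplification; the two strategies proposed in the paper are more refined than this): given an ILU factorization L U of a seed matrix A and a later matrix B = A + Delta in the sequence, reuse L and add diag(Delta) to U, so that the preconditioner L (U + diag(Delta)) tracks the diagonal change in B without any refactorization. The SciPy-based setup and all names are ours.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 400
    # Diagonally dominant nonsymmetric test matrix (so no pivoting is needed)
    A = sp.diags([4.0 * np.ones(n), -1.0 * np.ones(n - 1), -0.5 * np.ones(n - 1)],
                 [0, -1, 1], format="csc")
    ilu = spla.spilu(A, drop_tol=1e-4, permc_spec="NATURAL", diag_pivot_thresh=0.0)
    L, U = ilu.L.tocsr(), ilu.U.tocsr()          # seed ILU factors of A

    # A later matrix of the sequence: here, a purely diagonal perturbation of A
    Delta = sp.diags(np.linspace(0.0, 2.0, n), 0, format="csr")
    B = (A + Delta).tocsr()
    U_upd = (U + Delta).tocsr()                  # diagonal update of the U factor

    def apply_updated_prec(r):
        y = spla.spsolve_triangular(L, r, lower=True, unit_diagonal=True)
        return spla.spsolve_triangular(U_upd, y, lower=False)

    x, info = spla.gmres(B, np.ones(n),
                         M=spla.LinearOperator((n, n), matvec=apply_updated_prec))
    print("GMRES info:", info, " residual:", np.linalg.norm(B @ x - np.ones(n)))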

    Efficient preconditioner updates for shifted linear systems

    We present a technique for building effective and low-cost preconditioners for sequences of shifted linear systems (A + αI) x_α = b, where A is symmetric positive definite and α > 0. This technique updates a preconditioner for A, available in the form of an LDL^T factorization, by modifying only the nonzero entries of the L factor in such a way that the resulting preconditioner mimics the diagonal of the shifted matrix and reproduces its overall behavior. This approach is supported by a theoretical analysis as well as by numerical experiments, showing that it works efficiently for a broad range of values of α.
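
    As a point of reference, here is a hedged sketch of the simplest baseline in this spirit (not the paper's update, which additionally modifies the nonzero entries of L): given A = L D L^T, reuse L unchanged and shift only the diagonal, i.e. precondition A + αI by L (D + αI) L^T. The test problem and names below are ours.

    import numpy as np
    from scipy.linalg import solve_triangular
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(3)
    n = 300
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)              # SPD test matrix

    C = np.linalg.cholesky(A)                # A = C C^T
    d = np.diag(C) ** 2                      # so A = L D L^T with D = diag(d)
    L = C / np.diag(C)                       # unit lower triangular factor

    def shifted_prec(alpha):
        # Preconditioner solve with L (D + alpha I) L^T, reusing L for every shift
        def solve(r):
            y = solve_triangular(L, r, lower=True)
            y = y / (d + alpha)
            return solve_triangular(L.T, y, lower=False)
        return LinearOperator((n, n), matvec=solve)

    b = rng.standard_normal(n)
    for alpha in [0.1, 1.0, 10.0]:
        A_alpha = A + alpha * np.eye(n)
        x, info = cg(A_alpha, b, M=shifted_prec(alpha))
        print(f"alpha={alpha:5.1f}  CG info={info}  "
              f"residual={np.linalg.norm(A_alpha @ x - b):.2e}")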

    Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications

    In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and constancy along the diagonals, respectively. Therefore, it is often important to take into account the properties of the originating continuous model to obtain better performance and to provide an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type. For the latter case, an introduction to both the theory and the numerical treatment is given. All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor-co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator. In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of both time-dependent Partial Differential Equations (PDEs), in Chapter 3, and Fractional Differential Equations (FDEs) containing both classical and fractional terms, in Chapter 5. More precisely, we propose a new technique for updating preconditioners for sequences of linear systems arising from PDEs and FDEs, which can also be used to compute matrix functions of large matrices via quadrature formulas, in Chapter 4, and for the optimal control of FDEs, in Chapter 6. Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems. The focus is on the numerical treatment of discretized convection-diffusion equations, in Chapter 7, and on the solution of FDEs by linear multistep formulas in boundary value form, in Chapter 8.
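
    As a self-contained illustration of the structured-preconditioning idea for Toeplitz-like systems (a classical example chosen by us, not the specific proposal of the thesis): a circulant matrix is diagonalized by the FFT, so a Strang-type circulant preconditioner can be applied in O(n log n) operations. The generating sequence and parameters below are illustrative only.

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 512
    k = np.arange(n)
    col = 0.3 ** k                           # first column of a Toeplitz test matrix
    row = 0.2 ** k                           # first row (row[0] == col[0])
    T = toeplitz(col, row)

    # Strang circulant preconditioner: copy the central diagonals of T
    c = np.zeros(n)
    c[: n // 2] = col[: n // 2]              # main and lower diagonals t_0, t_1, ...
    c[n // 2 + 1:] = row[1: n // 2][::-1]    # upper diagonals, wrapped around
    lam = np.fft.fft(c)                      # eigenvalues of the circulant

    def apply_Cinv(r):
        # Circulant matrices are diagonalized by the DFT, so C^{-1} r costs two FFTs
        return np.real(np.fft.ifft(np.fft.fft(r) / lam))

    M = LinearOperator((n, n), matvec=apply_Cinv)
    x, info = gmres(T, np.ones(n), M=M)
    print("GMRES info:", info, " residual:", np.linalg.norm(T @ x - np.ones(n)))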