
    Spectral estimates and preconditioning for saddle point systems arising from optimization problems

    In this thesis, we consider the problem of solving large, sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods, and new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.
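
The kind of system the thesis studies can be made concrete with a small example. The quadratic program data below are illustrative assumptions, not taken from the thesis: an equality-constrained QP whose optimality conditions form a symmetric saddle point system, solved here by dense Gaussian elimination (the thesis itself targets large, sparse systems and iterative methods).

```python
# Sketch: assembling and solving a small saddle point (KKT) system.
# The data H, B, c, d are illustrative assumptions.

def solve(a, b):
    """Gaussian elimination with partial pivoting (dense, pure Python)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

# Equality-constrained QP: minimize 1/2 x^T H x - c^T x  subject to  B x = d.
H = [[2.0, 0.0], [0.0, 2.0]]
B = [[1.0, 1.0]]
c = [2.0, 4.0]
d = [1.0]

# Saddle point matrix [[H, B^T], [B, 0]] and right-hand side [c; d].
K = [H[0] + [B[0][0]], H[1] + [B[0][1]], B[0] + [0.0]]
rhs = c + d

x1, x2, lam = solve(K, rhs)
print(x1, x2, lam)  # 0.0 1.0 2.0: primal solution (x1, x2), multiplier lam
```

The same 2-by-2 block structure, with a large sparse H and B, is what the iterative methods and preconditioners in the thesis are designed for.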

    Constraint-Preconditioned Krylov Solvers for Regularized Saddle-Point Systems

    We consider the iterative solution of regularized saddle-point systems. When the leading block is symmetric and positive semi-definite on an appropriate subspace, Dollar, Gould, Schilders, and Wathen (2006) describe how to apply the conjugate gradient (CG) method coupled with a constraint preconditioner, a choice that has proved to be effective in optimization applications. We investigate the design of constraint-preconditioned variants of other Krylov methods for regularized systems by focusing on the underlying basis-generation process. We build upon principles laid out by Gould, Orban, and Rees (2014) to provide general guidelines that allow us to specialize any Krylov method to regularized saddle-point systems. In particular, we obtain constraint-preconditioned variants of Lanczos- and Arnoldi-based methods, including the Lanczos version of CG, MINRES, SYMMLQ, GMRES(m) and DQGMRES. We also provide MATLAB implementations in the hope that they are useful as a basis for the development of more sophisticated software. Finally, we illustrate the numerical behavior of constraint-preconditioned Krylov solvers using symmetric and nonsymmetric systems arising from constrained optimization. (Accepted for publication in the SIAM Journal on Scientific Computing.)
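
As a minimal sketch of the preconditioned Krylov loop that the paper specializes, here is a generic preconditioned CG in pure Python. A simple Jacobi (diagonal) preconditioner stands in for the constraint preconditioner, and the 3-by-3 SPD system is an illustrative assumption.

```python
# Generic preconditioned conjugate gradient (PCG). The preconditioner is
# applied once per iteration inside the basis-generation loop, which is
# exactly the step the paper replaces with a constraint preconditioner.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg(A, b, M_inv, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                    # residual b - A x (x = 0)
    z = [mi * ri for mi, ri in zip(M_inv, r)]   # apply preconditioner
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # small SPD system
b = [1.0, 2.0, 3.0]
M_inv = [1.0 / A[i][i] for i in range(3)]                # Jacobi preconditioner
x = pcg(A, b, M_inv)
r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(max(abs(ri) for ri in r) < 1e-8)  # True: residual is small
```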

    Spectral estimates for saddle point matrices arising in weak constraint four-dimensional variational data assimilation

    We consider the large, sparse symmetric linear systems of equations that arise in the solution of weak constraint four-dimensional variational data assimilation, a method of high interest for numerical weather prediction. These systems can be written as saddle point systems with a 3 × 3 block structure, but block eliminations can be performed to reduce them to saddle point systems with a 2 × 2 block structure, or further to symmetric positive definite systems. In this paper, we analyse how sensitive the spectra of these matrices are to the number of observations of the underlying dynamical system. We also obtain bounds on the eigenvalues of the matrices. Numerical experiments are used to confirm the theoretical analysis and bounds.
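
The paper's eigenvalue bounds are specific to the data assimilation matrices; as a generic illustration of cheap spectral bounds for a symmetric saddle point matrix, the Gershgorin circle theorem gives an enclosing interval from row sums alone. The small example matrix below is an assumption for illustration.

```python
# Gershgorin bounds: every eigenvalue of A lies in the union of discs
# centered at the diagonal entries with radii equal to off-diagonal row sums.

def gershgorin_bounds(A):
    """Return (lo, hi) such that every eigenvalue of A lies in [lo, hi]."""
    lo = min(A[i][i] - sum(abs(a) for j, a in enumerate(A[i]) if j != i)
             for i in range(len(A)))
    hi = max(A[i][i] + sum(abs(a) for j, a in enumerate(A[i]) if j != i)
             for i in range(len(A)))
    return lo, hi

# Symmetric saddle point matrix [[A, B^T], [B, 0]], A = diag(3, 3), B = [1, 1].
K = [[3.0, 0.0, 1.0],
     [0.0, 3.0, 1.0],
     [1.0, 1.0, 0.0]]
lo, hi = gershgorin_bounds(K)
print(lo, hi)  # -2.0 4.0: all eigenvalues lie in [-2, 4]
```

Gershgorin bounds are usually far cruder than the structure-aware bounds derived in the paper, but they show the flavor of bounding a spectrum without computing it.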

    Primal-dual interior-point algorithms for linear programs with many inequality constraints

    Linear programs (LPs) are one of the most basic and important classes of constrained optimization problems, involving the optimization of linear objective functions over sets defined by linear equality and inequality constraints. LPs have applications to a broad range of problems in engineering and operations research, and often arise as subproblems for algorithms that solve more complex optimization problems. "Unbalanced" inequality-constrained LPs, with many more inequality constraints than variables, are an important subclass of LPs. Under a basic non-degeneracy assumption, only a small number of the constraints can be active at the solution; it is only this active set that is critical to the problem description. On the other hand, the additional constraints make the problem harder to solve. While modern "interior-point" algorithms have become recognized as some of the best methods for solving large-scale LPs, they may not be recommended for unbalanced problems, because their per-iteration work does not scale well with the number of constraints. In this dissertation, we investigate "constraint-reduced" interior-point algorithms designed to solve unbalanced LPs efficiently. At each iteration, these methods construct search directions based only on a small working set of constraints, while ignoring the rest. In this way, they significantly reduce their per-iteration work and, hopefully, their overall running time. In particular, we focus on constraint-reduction methods for the highly efficient primal-dual interior-point (PDIP) algorithms. We propose and analyze a convergent constraint-reduced variant of Mehrotra's predictor-corrector PDIP algorithm, the algorithm implemented in virtually every interior-point software package for linear (and convex-conic) programming. We prove global and local quadratic convergence of this algorithm under a very general class of constraint selection rules and under minimal assumptions.
We also propose and analyze two regularized constraint-reduced PDIP algorithms (with similar convergence properties) designed to deal directly with a type of degeneracy to which constraint-reduced interior-point algorithms are often subject. Prior schemes for dealing with this degeneracy could end up negating the benefit of constraint reduction. Finally, we investigate the performance of our algorithms by applying them to several test and application problems, and show that they often outperform alternative approaches.
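
A minimal sketch of the kind of constraint selection rule such methods use, assuming a simple "smallest slack" rule (the dissertation analyzes a much more general class of rules): at the current interior iterate, keep only the q inequality constraints that are closest to being active.

```python
# Working-set selection for a constraint-reduced method: given constraints
# a_i^T x <= b_i, keep the q constraints with the smallest slacks b_i - a_i^T x
# (the most nearly active ones). The data below are illustrative assumptions.

def working_set(A, b, x, q):
    """Indices of the q constraints with smallest slack b_i - a_i^T x."""
    slacks = [bi - sum(ai * xi for ai, xi in zip(row, x))
              for row, bi in zip(A, b)]
    return sorted(range(len(b)), key=lambda i: slacks[i])[:q]

# Five constraints in 2 variables; x is the current interior iterate.
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]]
b = [1.0, 1.0, 0.0, 0.0, 1.5]
x = [0.8, 0.6]
print(working_set(A, b, x, 2))  # [4, 0]: the two constraints closest to active
```

The search direction is then computed from these q rows only, which is where the per-iteration savings for unbalanced LPs come from.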

    A structured modified Newton approach for solving systems of nonlinear equations arising in interior-point methods for quadratic programming

    The focus in this work is on interior-point methods for inequality-constrained quadratic programs, and particularly on the system of nonlinear equations to be solved for each value of the barrier parameter. Newton iterations give high-quality solutions, but we are interested in modified Newton systems that are computationally less expensive, at the cost of lower-quality solutions. We propose a structured modified Newton approach in which each modified Jacobian is composed of a previous Jacobian plus one low-rank update matrix per succeeding iteration. Each update matrix is, for a given rank, chosen such that the distance to the Jacobian at the current iterate is minimized, in both the 2-norm and the Frobenius norm. The approach is structured in the sense that it preserves the nonzero pattern of the Jacobian. The choice of update matrix is supported by results in an ideal theoretical setting. We also produce numerical results with a basic interior-point implementation to investigate the practical performance within and beyond the theoretical framework. In order to improve performance beyond the theoretical framework, we also motivate and construct two heuristics to be added to the method.
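
The trade-off described above, cheaper modified Newton iterations at the cost of lower-quality steps, can be sketched with the simplest member of the family: a frozen-Jacobian (chord) method on a scalar equation. The example equation is an illustrative assumption, and the paper's method uses low-rank Jacobian updates rather than a fully frozen Jacobian.

```python
# Exact Newton recomputes the derivative every step (quadratic convergence);
# the chord method reuses the initial derivative (cheaper steps, linear
# convergence). The scalar test equation is an illustrative assumption.

def newton(f, fprime, x0, freeze=False, tol=1e-12, maxit=50):
    x, d, n = x0, fprime(x0), 0
    while abs(f(x)) > tol and n < maxit:
        if not freeze:
            d = fprime(x)          # exact Newton: fresh derivative
        x -= f(x) / d              # chord method keeps the initial one
        n += 1
    return x, n

f = lambda x: x * x - 2.0          # root is sqrt(2)
fp = lambda x: 2.0 * x

x_exact, n_exact = newton(f, fp, 1.5)
x_chord, n_chord = newton(f, fp, 1.5, freeze=True)
print(n_exact < n_chord)           # True: the frozen Jacobian needs more steps
print(abs(x_exact - x_chord) < 1e-9)  # True: both reach the root
```

The paper's low-rank updates sit between these two extremes: each modified Jacobian costs less to factorize than a fresh one, while staying closer to it than a frozen Jacobian would.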

    A new stopping criterion for Krylov solvers applied in Interior Point Methods

    A surprising result is presented in this paper, with possible far-reaching consequences for any optimization technique that relies on Krylov subspace methods to solve the underlying linear equation systems. The advantages of the new technique are illustrated in the context of Interior Point Methods (IPMs). When an iterative method is applied to solve the linear equation systems in IPMs, attention is usually focused on accelerating convergence by designing appropriate preconditioners, while the linear solver itself is applied as a black box with a standard termination criterion that asks for a sufficient reduction of the residual in the linear system. Such an approach often leads to unnecessary "oversolving" of the linear equations. In this paper, a new specialized termination criterion for Krylov methods used in IPMs is designed. It is derived from a deep understanding of IPM needs and is demonstrated to preserve the polynomial worst-case complexity of these methods. The new criterion has been adapted to the Conjugate Gradient (CG) and Minimum Residual (MINRES) methods applied in the IPM context, and has been tested on a set of linear and quadratic optimization problems, including compressed sensing, image processing and instances with partial differential equation constraints. Evidence gathered from these computational experiments shows that the new technique delivers significant improvements in terms of inner (linear) iterations, and that these translate into significant savings in IPM solution time.
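
The "oversolving" effect can be sketched by solving the same system to a loose versus a tight relative-residual tolerance. A stationary Richardson iteration stands in for the Krylov solver here, and the 2-by-2 system and tolerances are illustrative assumptions, not the paper's criterion.

```python
# Sketch of oversolving: the same SPD system solved to two different
# relative-residual tolerances. When the outer (IPM) iteration only needs
# the loose accuracy, insisting on the tight one wastes inner iterations.

def richardson_iters(A, b, omega, rtol, maxit=1000):
    """Iterate x <- x + omega*(b - A x); return iterations to reach rtol."""
    n = len(b)
    x = [0.0] * n
    b_norm = sum(bi * bi for bi in b) ** 0.5
    for k in range(maxit):
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        if sum(ri * ri for ri in r) ** 0.5 <= rtol * b_norm:
            return k
        x = [xi + omega * ri for xi, ri in zip(x, r)]
    return maxit

A = [[1.0, 0.0], [0.0, 2.0]]     # SPD test matrix
b = [1.0, 1.0]
omega = 2.0 / 3.0                # damping chosen so the error contracts by 1/3
loose = richardson_iters(A, b, omega, 1e-2)
tight = richardson_iters(A, b, omega, 1e-10)
print(loose, tight)  # 5 21: the tight tolerance costs roughly four times the work
```

The paper's contribution is a principled way to pick the (loose) inner tolerance from the state of the IPM iteration instead of a fixed tight value.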

    Custom optimization algorithms for efficient hardware implementation

    The focus is on real-time optimal decision making with applications in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than are currently available if their application is to be extended to highly dynamical systems and to setups with resource-constrained embedded computing platforms. A range of techniques is proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and for a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations, we develop a custom storage scheme for the KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance of our implementations. To take advantage of the trend towards parallel computing architectures, and to exploit the special characteristics of our custom architectures, we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation is devised for reducing the computational effort of solving certain problems, independent of the computing platform used. In order to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating point, tailored linear algebra algorithms are developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees of reliable operation.
We also provide a finite-precision error analysis for fixed-point implementations of first-order methods, which can be used to minimize resource use while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
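
The fixed-point arithmetic mentioned above can be sketched in a few lines: values are stored as integers scaled by 2^-f (Q-format), so multiplications need an explicit rescaling and introduce quantization error. The word length and values below are illustrative assumptions, not the thesis's hardware formats.

```python
# Minimal Q-format fixed-point sketch: a real value x is represented by the
# integer round(x * 2^F). Products carry 2F fractional bits and must be
# shifted back, which is where rounding error enters.

F = 8                                # fractional bits: resolution 2^-8

def to_fix(x):
    return round(x * (1 << F))       # quantize to nearest representable value

def fix_mul(a, b):
    return (a * b) >> F              # rescale the double-precision product

def to_float(a):
    return a / (1 << F)

a, b = to_fix(1.5), to_fix(0.3)
prod = to_float(fix_mul(a, b))
print(abs(prod - 0.45) < 2 ** -F)    # True: error below one quantization step
```

A finite-precision error analysis of the kind the thesis provides bounds how such per-operation errors accumulate over a whole algorithm, so that F can be chosen as small as the accuracy specification allows.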

    Superior properties of the PRESB preconditioner for operators on two-by-two block form with square blocks

    Matrices or operators in two-by-two block form with square blocks arise in numerous important applications, such as optimal control problems for PDEs. These problems are normally of very large scale, so iterative solution methods must be used, and the choice of an efficient and robust preconditioner is then of crucial importance. For some time, a very efficient preconditioner, the preconditioned square block (PRESB) method, has been used by the authors and coauthors in various applications, in particular for optimal control problems for PDEs. It has been shown to have excellent properties, such as a very fast and robust rate of convergence that outperforms other methods. In this paper, the fundamental and most important properties of the method are stressed and presented with new and extended proofs. Under certain conditions, the condition number of the preconditioned matrix is bounded by 2 or even smaller; under certain further assumptions, the rate of convergence is superlinear.