    Preconditioners for state constrained optimal control problems with Moreau-Yosida penalty function

    Optimal control problems with partial differential equations play an important role in many applications. The inclusion of bound constraints for the state poses a significant challenge for optimization methods. Our focus here is on the incorporation of the constraints via the Moreau-Yosida regularization technique. This method has been studied recently and has proven to be advantageous compared to other approaches. In this paper we develop preconditioners for the fast solution of the Newton systems arising from the Moreau-Yosida regularized problem. Numerical results illustrate the competitiveness of this approach.
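
    To make the technique concrete, here is a generic sketch of Moreau-Yosida penalization for a pointwise state constraint; the notation is chosen for illustration and is not taken from the paper. A tracking-type problem with state constraint $y \le \bar y$,

    $$\min_{y,u}\ \tfrac{1}{2}\|y - y_d\|_{L^2}^2 + \tfrac{\beta}{2}\|u\|_{L^2}^2 \quad \text{subject to } \mathcal{L}y = u,\ \ y \le \bar y,$$

    is replaced by the penalized problem

    $$\min_{y,u}\ \tfrac{1}{2}\|y - y_d\|_{L^2}^2 + \tfrac{\beta}{2}\|u\|_{L^2}^2 + \tfrac{1}{2\varepsilon}\|\max(0,\, y - \bar y)\|_{L^2}^2 \quad \text{subject to } \mathcal{L}y = u,$$

    with penalty parameter $\varepsilon > 0$ and the $\max$ taken pointwise. As $\varepsilon \to 0$ the constraint is enforced increasingly strongly, and each semismooth Newton step for the penalized problem yields the structured linear systems that the preconditioners target.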

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and a good preconditioning strategy is crucial for the efficiency of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results obtained show that our updating procedure, coupled with an adaptive strategy for determining whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
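
    As a rough illustration of the objects involved (notation assumed, not the authors'): at interior point iteration $k$ the KKT matrix and a constraint preconditioner take the form

    $$\mathcal{K}_k = \begin{bmatrix} Q + \Theta_k & A^T \\ A & 0 \end{bmatrix}, \qquad \mathcal{P}_k = \begin{bmatrix} G_k & A^T \\ A & 0 \end{bmatrix},$$

    where $\Theta_k$ is the diagonal barrier term and $G_k \approx Q + \Theta_k$ is cheap to invert. Applying $\mathcal{P}_k$ requires a factorization of the Schur complement $S_k = A G_k^{-1} A^T$; the updating idea is then to keep the factorization of a seed $S_s$ from an earlier iteration and correct it with a low-rank term accounting for the entries of $\Theta_k$ that have changed most since iteration $s$.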

    General-purpose preconditioning for regularized interior point methods

    In this paper we present general-purpose preconditioners for regularized augmented systems, and their corresponding normal equations, arising from optimization problems. We discuss positive definite preconditioners, suitable for CG and MINRES. We consider "sparsifications" which avoid situations in which eigenvalues of the preconditioned matrix may become complex. Special attention is given to systems arising from the application of regularized interior point methods to linear or nonlinear convex programming problems.
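
    For orientation, a typical regularized augmented system from an interior point iteration looks like (illustrative notation, not necessarily the paper's):

    $$\begin{bmatrix} -(Q + \Theta + \rho I) & A^T \\ A & \delta I \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix} = \begin{bmatrix} \xi_d \\ \xi_p \end{bmatrix},$$

    with primal and dual regularization parameters $\rho, \delta > 0$. Eliminating $\Delta x$ yields the regularized normal equations, whose matrix $A(Q + \Theta + \rho I)^{-1} A^T + \delta I$ is symmetric positive definite; preconditioners can target either formulation.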

    Constraint-Preconditioned Krylov Solvers for Regularized Saddle-Point Systems

    We consider the iterative solution of regularized saddle-point systems. When the leading block is symmetric and positive semi-definite on an appropriate subspace, Dollar, Gould, Schilders, and Wathen (2006) describe how to apply the conjugate gradient (CG) method coupled with a constraint preconditioner, a choice that has proved to be effective in optimization applications. We investigate the design of constraint-preconditioned variants of other Krylov methods for regularized systems by focusing on the underlying basis-generation process. We build upon principles laid out by Gould, Orban, and Rees (2014) to provide general guidelines that allow us to specialize any Krylov method to regularized saddle-point systems. In particular, we obtain constraint-preconditioned variants of Lanczos and Arnoldi-based methods, including the Lanczos version of CG, MINRES, SYMMLQ, GMRES(m) and DQGMRES. We also provide MATLAB implementations in hopes that they are useful as a basis for the development of more sophisticated software. Finally, we illustrate the numerical behavior of constraint-preconditioned Krylov solvers using symmetric and nonsymmetric systems arising from constrained optimization. (Accepted for publication in the SIAM Journal on Scientific Computing.)
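
    The following is a minimal self-contained sketch, in Python rather than the authors' MATLAB, of the basic idea of constraint preconditioning for a regularized saddle-point system; all matrices, sizes, and parameters below are invented for illustration.

    # Sketch of constraint preconditioning for a regularized saddle-point
    # system  [H  A'; A  -C],  solved with GMRES from SciPy.
    # All matrices here are small random stand-ins, for illustration only.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    n, m = 50, 20
    B = sp.random(n, n, density=0.1, random_state=0)
    H = (B @ B.T + 0.1 * sp.identity(n)).tocsr()   # leading block
    A = sp.random(m, n, density=0.2, random_state=1).tocsr()
    C = 1e-2 * sp.identity(m)                      # dual regularization
    K = sp.bmat([[H, A.T], [A, -C]]).tocsc()       # saddle-point matrix

    # Constraint preconditioner: approximate H by its diagonal G,
    # but reproduce the constraint blocks A and -C exactly.
    G = sp.diags(H.diagonal())
    P = sp.bmat([[G, A.T], [A, -C]]).tocsc()
    Plu = spla.splu(P)                             # factorize P once
    M = spla.LinearOperator(K.shape, Plu.solve)    # M approximates K^{-1}

    b = rng.standard_normal(n + m)
    x, info = spla.gmres(K, b, M=M)
    print("converged:", info == 0,
          "residual:", np.linalg.norm(K @ x - b))

    The design point this illustrates: the preconditioner reproduces the constraint blocks exactly and only approximates the leading block, so preconditioned iterates remain (in exact arithmetic) consistent with the constraints.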

    All-at-Once Solution of Time-Dependent PDE-Constrained Optimisation Problems

    Time-dependent partial differential equations (PDEs) play an important role in applied mathematics and many other areas of science. One-shot methods try to compute the solution to these problems in a single iteration that solves for all time-steps at the same time. In this paper, we look at one-shot approaches for the optimal control of time-dependent PDEs and focus on the fast solution of these problems. The use of Krylov subspace solvers together with an efficient preconditioner allows for minimal storage requirements. We solve only approximate time-evolutions for both the forward and the adjoint problem, and compute accurate solutions of a given control problem only at convergence of the overall Krylov subspace iteration. We show that our approach can give competitive results for a variety of problem formulations.
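
    To illustrate what "all-at-once" means (generic notation, not necessarily the paper's discretization): stacking states $\mathbf{y}$, controls $\mathbf{u}$, and adjoints $\mathbf{p}$ over all time steps, the first-order optimality conditions of the discretized problem form one large saddle-point system

    $$\begin{bmatrix} \mathcal{M} & 0 & \mathcal{K}^T \\ 0 & \beta \mathcal{M}_u & -\mathcal{N}^T \\ \mathcal{K} & -\mathcal{N} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{y} \\ \mathbf{u} \\ \mathbf{p} \end{bmatrix} = \begin{bmatrix} \mathcal{M}\mathbf{y}_d \\ 0 \\ \mathbf{d} \end{bmatrix},$$

    where $\mathcal{K}$ is block bidiagonal (each block row encodes one implicit time step) and $\mathcal{M}$, $\mathcal{M}_u$ are block-diagonal mass matrices. Rather than time-stepping forward and backward repeatedly, a preconditioned Krylov method is applied to this single system.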

    Spectral estimates and preconditioning for saddle point systems arising from optimization problems

    In this thesis, we consider the problem of solving large, sparse linear systems of saddle point type stemming from optimization problems. The focus of the thesis is on iterative methods, and new preconditioning strategies are proposed, along with novel spectral estimates for the matrices involved.

    Implementing a smooth exact penalty function for equality-constrained nonlinear optimization

    We develop a general equality-constrained nonlinear optimization algorithm based on a smooth penalty function proposed by Fletcher (1970). Although it was historically considered to be computationally prohibitive in practice, we demonstrate that the computational kernels required are no more expensive than those of other widely accepted methods for nonlinear optimization. The main kernel required to evaluate the penalty function and its derivatives is the solution of a structured linear system. We show how to solve this system efficiently by storing a single factorization at each iteration when the matrices are available explicitly. We further show how to adapt the penalty function to the class of factorization-free algorithms by solving the linear system iteratively. The penalty function therefore has promise when the linear system can be solved efficiently, e.g., for PDE-constrained optimization problems where efficient preconditioners exist. We discuss extensions including handling simple constraints explicitly, regularizing the penalty function, and inexact evaluation of the penalty function and its gradients. We demonstrate the merits of the approach and its various features on some nonlinear programs from a standard test set, and on some PDE-constrained optimization problems.
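
    For context, a sketch of the penalty function in question (sign and scaling conventions vary slightly across presentations): for $\min_x f(x)$ subject to $c(x) = 0$, Fletcher's penalty is

    $$\phi_\sigma(x) = f(x) - c(x)^T y_\sigma(x), \qquad y_\sigma(x) \in \arg\min_y\ \tfrac{1}{2}\|A(x)^T y - g(x)\|_2^2 + \sigma\, c(x)^T y,$$

    where $g(x) = \nabla f(x)$ and $A(x)$ is the Jacobian of $c$. Evaluating $y_\sigma(x)$, and hence $\phi_\sigma$ and its derivatives, reduces to the structured saddle-point system

    $$\begin{bmatrix} I & A(x)^T \\ A(x) & 0 \end{bmatrix} \begin{bmatrix} q \\ y_\sigma(x) \end{bmatrix} = \begin{bmatrix} g(x) \\ \sigma c(x) \end{bmatrix},$$

    which is the kernel the abstract refers to: it can be handled by one factorization per iteration, or iteratively with a preconditioner in the factorization-free setting.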

    Regularized interior point methods for convex programming

    Interior point methods (IPMs) constitute one of the most important classes of optimization methods, due to their unparalleled robustness as well as their generality. It is well known that a very large class of convex optimization problems can be solved by means of IPMs in a polynomial number of iterations. As a result, IPMs are used to solve problems arising in a plethora of fields, ranging from physics, engineering, and mathematics to the social sciences, to name just a few. Nevertheless, certain numerical issues remain unaddressed. More specifically, the main drawback of IPMs is that the linear algebra task involved is inherently ill-conditioned. At every iteration of the method, one has to solve a (possibly large-scale) linear system of equations (also known as the Newton system), the conditioning of which deteriorates as the IPM converges to an optimal solution. If these linear systems are of very large dimension, prohibiting the use of direct factorization, then iterative schemes may have to be employed, and such schemes are significantly affected by the inherent ill-conditioning within IPMs. One common approach for mitigating these numerical issues is to employ regularized IPM variants, which tend to be more robust and numerically stable in practice. Over the last two decades, the theory behind regularization has been significantly advanced. In particular, it is well known that regularized IPM variants can be interpreted as hybrid approaches combining IPMs with the proximal point method. However, it remained unknown whether regularized IPMs retain the polynomial complexity of their non-regularized counterparts. Furthermore, the very important issue of tuning the regularization parameters appropriately, which is also crucial in augmented Lagrangian methods, had not been addressed.

    In this thesis, we focus on addressing these open questions, as well as on creating robust implementations that solve various convex optimization problems. We discuss in detail the effect of regularization and derive two different regularization strategies: one based on the proximal method of multipliers, and another based on a Bregman proximal point method. The latter tends to be more efficient, while the former is more robust and has better convergence guarantees. In addition, we discuss the use of iterative linear algebra within the presented algorithms, proposing some general-purpose preconditioning strategies (used to accelerate the iterative schemes) that take advantage of the regularized nature of the systems being solved.

    In Chapter 2 we present a dynamic non-diagonal regularization for IPMs. The non-diagonal aspect of this regularization is implicit, since all the off-diagonal elements of the regularization matrices are cancelled out by those elements present in the Newton system which do not contribute important information to the computation of the Newton direction. Such a regularization, which can be interpreted as the application of a Bregman proximal point method, has multiple goals. The obvious one is to improve the spectral properties of the Newton system solved at each IPM iteration. On the other hand, the regularization matrices introduce sparsity into the aforementioned linear system, allowing for more efficient factorizations. We propose a rule for tuning the regularization dynamically based on the properties of the problem, such that sufficiently large eigenvalues of the non-regularized system are perturbed insignificantly. This alleviates the need to find specific regularization values through experimentation, which is the most common approach in the literature. We provide perturbation bounds for the eigenvalues of the non-regularized system matrix and then discuss the spectral properties of the regularized matrix. Finally, we demonstrate the efficiency of the method applied to standard small- and medium-scale linear and convex quadratic programming test problems.

    In Chapter 3 we combine an IPM with the proximal method of multipliers (PMM). The resulting algorithm (IP-PMM) is interpreted as a primal-dual regularized IPM, suitable for solving linearly constrained convex quadratic programming problems. We apply a few iterations of the interior point method to each sub-problem of the proximal method of multipliers. Once a satisfactory solution of the PMM sub-problem is found, we update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm under standard assumptions. To our knowledge, this is the first polynomial complexity result for a primal-dual regularized IPM. The algorithm is guided by a single penalty parameter: that of the logarithmic barrier. In other words, we show that IP-PMM inherits the polynomial complexity of IPMs, as well as the strong convexity of the PMM sub-problems. The updates of the penalty parameter are controlled by the IPM, and hence are well tuned and do not depend on the problem being solved. Furthermore, we study the behavior of the method when it is applied to an infeasible problem and identify a necessary condition for infeasibility, which is used to construct an infeasibility detection mechanism. Subsequently, we provide a robust implementation of the presented algorithm and test it on a set of small- to large-scale linear and convex quadratic programming problems, demonstrating the benefits of using regularization in IPMs as well as the reliability of the approach.

    In Chapter 4 we extend IP-PMM to the case of linear semi-definite programming (SDP) problems. In particular, we prove polynomial complexity of the algorithm under mild assumptions and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.

    In Chapter 5 we present general-purpose preconditioners for regularized Newton systems arising within regularized interior point methods. We discuss positive definite preconditioners suitable for iterative schemes like the conjugate gradient (CG) or minimal residual (MINRES) method. We study the spectral properties of the preconditioned systems and discuss the use of each presented approach, depending on the properties of the problem under consideration. All preconditioning strategies are numerically tested on various medium- to large-scale problems coming from standard test sets, as well as problems arising from partial differential equation (PDE) optimization.

    In Chapter 6 we apply specialized regularized IPM variants to problems arising from portfolio optimization, machine learning, image processing, and statistics. Such problems are usually solved by specialized first-order approaches. The efficiency of the proposed regularized IPM variants is confirmed by comparing them against problem-specific state-of-the-art first-order alternatives from the literature.

    Finally, in Chapter 7 we present some conclusions, open questions, and possible future research directions.
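
    As a brief illustration of the proximal method of multipliers underlying IP-PMM (generic form, parameter names assumed): for $\min_x f(x)$ subject to $Ax = b$, outer iteration $k$ computes

    $$x_{k+1} \approx \arg\min_x\ \Big\{ f(x) - y_k^T (Ax - b) + \tfrac{\beta_k}{2}\|Ax - b\|_2^2 + \tfrac{\rho_k}{2}\|x - x_k\|_2^2 \Big\}, \qquad y_{k+1} = y_k - \beta_k (Ax_{k+1} - b),$$

    and IP-PMM performs only a few interior point iterations on each such sub-problem before updating $(x_k, y_k)$ and the parameters. The two proximal terms are what give rise to the primal and dual regularization of the Newton systems.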

    Preconditioning for Sparse Linear Systems at the Dawn of the 21st Century: History, Current Developments, and Future Perspectives

    Iterative methods are currently the solvers of choice for large sparse linear systems of equations. However, it is well known that the key factor for accelerating, or even allowing for, convergence is the preconditioner. Research on preconditioning techniques has characterized the last two decades. Nowadays, there are a number of different options to be considered when choosing the most appropriate preconditioner for the specific problem at hand. The present work provides an overview of the most popular algorithms available today, emphasizing their respective merits and limitations. The overview is restricted to algebraic preconditioners, that is, general-purpose algorithms requiring knowledge of the system matrix only, independently of the specific problem it arises from. Along with the traditional distinction between incomplete factorizations and approximate inverses, the most recent developments are considered, including the scalable multigrid and parallel approaches which represent the current frontier of research. A separate section devoted to saddle-point problems, which arise in many different applications, closes the paper.
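
    As a small concrete example of the "algebraic preconditioner" idea the survey covers, here is an incomplete LU factorization accelerating GMRES, sketched in Python; the tridiagonal test matrix is invented for illustration and nothing below is taken from the survey itself.

    # Sketch of an algebraic preconditioner: incomplete LU (ILU) used to
    # accelerate GMRES on a sparse system, via SciPy.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 400
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)  # incomplete factors
    M = spla.LinearOperator((n, n), ilu.solve)          # M approximates A^{-1}

    x, info = spla.gmres(A, b, M=M)
    print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))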