7 research outputs found

    Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections

    Get PDF
    This work focuses on the iterative solution of sequences of KKT linear systems arising in interior point methods applied to large convex quadratic programming problems. This task is the computational core of the interior point procedure, and an efficient preconditioning strategy is crucial to the performance of the overall method. Constraint preconditioners are very effective in this context; nevertheless, their computation may be very expensive for large-scale problems, and resorting to approximations of them may be convenient. Here we propose a procedure for building inexact constraint preconditioners by updating a "seed" constraint preconditioner computed for a KKT matrix at a previous interior point iteration. These updates are obtained through low-rank corrections of the Schur complement of the (1,1) block of the seed preconditioner. The updated preconditioners are analyzed both theoretically and computationally. The results show that our updating procedure, coupled with an adaptive strategy for deciding whether to reinitialize or update the preconditioner, can enhance the performance of interior point methods on large problems.
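    The structure described above is easy to sketch: a constraint preconditioner keeps the constraint blocks of the KKT matrix exact and replaces the (1,1) block with something cheap to factor. The SciPy sketch below uses made-up data and shows only the plain constraint preconditioner applied inside a Krylov solve, not the paper's low-rank updating procedure.

```python
# Illustrative constraint preconditioner for a KKT matrix
#   K = [[H, A^T], [A, 0]]:
# keep the constraint blocks exact and replace H by a cheap
# approximation G (here diag(H)). Sizes and data are made up.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 40, 10
B = sp.random(n, n, density=0.1, random_state=0)
H = (B @ B.T + 10 * sp.eye(n)).tocsc()           # SPD (1,1) block
A = sp.csc_matrix(rng.standard_normal((m, n)))   # full-row-rank constraints

K = sp.bmat([[H, A.T], [A, None]]).tocsc()       # KKT matrix
G = sp.diags(H.diagonal())                       # cheap approximation of H
P = sp.bmat([[G, A.T], [A, None]]).tocsc()       # constraint preconditioner
P_lu = spla.splu(P)                              # factor P once, reuse it

M = spla.LinearOperator(K.shape, matvec=P_lu.solve)
b = rng.standard_normal(n + m)
x, info = spla.gmres(K, b, M=M)                  # P^{-1} applied each step
```

    In the paper's setting, the expensive part is refactoring P at every interior point iteration; the proposed updates avoid that by correcting the Schur complement of the seed preconditioner with low-rank terms instead.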

    Indefinitely preconditioned conjugate gradient method for large sparse equality and inequality constrained quadratic problems

    No full text
    This paper is concerned with the numerical solution of a symmetric indefinite system which is a generalization of the Karush-Kuhn-Tucker system. Following the recent approach of Lukšan and Vlček, we propose to solve this system by a preconditioned conjugate gradient (PCG) algorithm, and we devise two indefinite preconditioners with good theoretical properties. In particular, for one of these preconditioners, the finite termination property of the PCG method is established. The PCG method combined with a parallel version of these preconditioners is used as the inner solver within an inexact interior-point (IP) method for the solution of large sparse quadratic programs. The numerical results obtained by a parallel code implementing the IP method on distributed-memory multiprocessor systems confirm the effectiveness of the proposed approach for problems with special structure in the constraint matrix and in the objective function.
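    As a point of reference for the inner solver mentioned above, a textbook preconditioned conjugate gradient loop looks as follows. This generic sketch runs on a simple SPD test matrix with a Jacobi preconditioner; it is not the paper's indefinite preconditioner.

```python
# Textbook preconditioned conjugate gradient (PCG) iteration.
# apply_Minv applies the preconditioner's inverse to a residual vector.
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-10, maxit=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update conjugate direction
        rz = rz_new
    return x

rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # SPD test matrix
b = rng.standard_normal(n)
Minv = 1.0 / np.diag(A)                # Jacobi preconditioner
x = pcg(A, b, lambda r: Minv * r)
```

    The paper's contribution is to make such an iteration work with carefully designed *indefinite* preconditioners, for which the standard SPD convergence theory does not directly apply.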

    Constraint-Preconditioned Krylov Solvers for Regularized Saddle-Point Systems

    Full text link
    We consider the iterative solution of regularized saddle-point systems. When the leading block is symmetric and positive semi-definite on an appropriate subspace, Dollar, Gould, Schilders, and Wathen (2006) describe how to apply the conjugate gradient (CG) method coupled with a constraint preconditioner, a choice that has proved to be effective in optimization applications. We investigate the design of constraint-preconditioned variants of other Krylov methods for regularized systems by focusing on the underlying basis-generation process. We build upon principles laid out by Gould, Orban, and Rees (2014) to provide general guidelines that allow us to specialize any Krylov method to regularized saddle-point systems. In particular, we obtain constraint-preconditioned variants of Lanczos and Arnoldi-based methods, including the Lanczos version of CG, MINRES, SYMMLQ, GMRES(m), and DQGMRES. We also provide MATLAB implementations in the hope that they are useful as a basis for the development of more sophisticated software. Finally, we illustrate the numerical behavior of constraint-preconditioned Krylov solvers using symmetric and nonsymmetric systems arising from constrained optimization. (Accepted for publication in the SIAM Journal on Scientific Computing.)

    Improvement of the hybrid preconditioning approach for interior point methods

    Get PDF
    Advisors: Carla Taviane Lucke da Silva Ghidini, Aurelio Ribeiro Leite de Oliveira. PhD thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica.
    Abstract: The most important step of the predictor-corrector interior point method for large linear programming problems is solving the linear systems that determine the search directions. This is the most computationally expensive step of the method, so it is essential to perform it as efficiently as possible. One approach is to reduce the linear system to an equivalent system of normal equations, whose matrix is symmetric and positive definite, and to apply the preconditioned conjugate gradient method to solve it. Finding a single preconditioner that works well in all iterations of the interior point method is not simple, as the systems become increasingly ill conditioned. A hybrid preconditioning approach proposed in the literature, which presented good results, combines two preconditioners specially adapted to these systems: the Controlled Cholesky Factorization preconditioner is used in the first iterations of the predictor-corrector method, and after a certain number of iterations the Separator preconditioner is adopted. Some simple heuristics to determine the ideal moment to change preconditioners have already been developed. Motivated by the importance of this step, we develop and test new heuristics that use estimates of the condition number of the system matrix, since computing the exact value for large matrices is computationally expensive and prohibitive in practice, and that analyze the dispersion of the matrix's eigenvalues. In addition, in order to further improve the hybrid preconditioning approach, and consequently obtain an even more efficient and robust interior point method, another criterion for updating the fill-in parameter used in the Controlled Cholesky Factorization was proposed, based on a characteristic of the problem: the density of the matrix. Several computational experiments were carried out on problems from freely available collections in order to test all the developed heuristics and compare them with existing ones. The results indicate that the objectives of this work were achieved, since some of the proposed heuristics helped determine a better moment to change preconditioners and made the Controlled Cholesky Factorization preconditioner more suitable.
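    A minimal SciPy sketch of the normal-equations reduction described above, with made-up data: the augmented system collapses to S = A D Aᵀ, which is SPD and solved by PCG. SciPy does not ship the Controlled Cholesky Factorization, so an incomplete LU stands in as the cheap early-stage preconditioner, and a crude condition-number estimate illustrates the kind of quantity the switching heuristics monitor.

```python
# Normal-equations step of a predictor-corrector interior point method:
#   S = A D A^T  (SPD), solved with preconditioned CG.
# spilu is a stand-in for the thesis's Controlled Cholesky Factorization;
# sizes and data are illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
m, n = 30, 80
A = sp.csc_matrix(rng.standard_normal((m, n)))
d = rng.uniform(0.1, 10.0, size=n)           # D = diag(x/s) at an IP iterate
S = (A @ sp.diags(d) @ A.T).tocsc()          # normal-equations matrix, SPD

ilu = spla.spilu(S, drop_tol=1e-4)           # incomplete factorization
M = spla.LinearOperator(S.shape, matvec=ilu.solve)
b = rng.standard_normal(m)
x, info = spla.cg(S, b, M=M)                 # preconditioned CG solve

# The switching heuristics monitor quantities like a condition-number
# estimate; a crude 1-norm estimate (affordable only for this toy size):
cond_est = spla.onenormest(S) * spla.onenormest(spla.inv(S))
```

    In a real interior point code, S would be refactored (or its preconditioner updated) at every iteration as D changes, and the exact condition number would never be computed; only cheap estimates are viable at scale.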

    Hybrid Filter Methods for Nonlinear Optimization

    Get PDF
    Globalization strategies used by algorithms to solve nonlinear constrained optimization problems must balance the oftentimes conflicting goals of reducing the objective function and satisfying the constraints. The use of merit functions and filters are two such popular strategies, both of which have their strengths and weaknesses. In particular, traditional filter methods require the use of a restoration phase that is designed to reduce infeasibility while ignoring the objective function. For this reason, there is often a significant decrease in performance when restoration is triggered. In Chapter 3, we present a new filter method that addresses this main weakness of traditional filter methods. Specifically, we present a hybrid filter method that avoids a traditional restoration phase and instead employs a penalty mode built upon the ℓ1 penalty function; the penalty mode is entered when an iterate decreases both the penalty function and the constraint violation. Moreover, the algorithm uses the same search direction computation procedure during every iteration and uses local feasibility estimates that emerge during this procedure to define a new, improved, and adaptive margin (envelope) of the filter. Since we use the penalty function (a combination of the objective function and constraint violation) to define the search direction, our algorithm never ignores the objective function, a property that is not shared by traditional filter methods. Our algorithm thus draws upon the strengths of both filter and penalty methods to form a novel hybrid approach that is robust and efficient. In particular, under common assumptions, we prove global convergence of our algorithm. In Chapter 4, we present a nonmonotonic variant of the algorithm in Chapter 3. For this version of our method, we prove that it generates iterates that converge to a first-order solution from an arbitrary starting point, with a superlinear rate of convergence. We also present numerical results that validate the efficiency of our method. Finally, in Chapter 5, we present a numerical study on the application of a recently developed bound-constrained quadratic optimization algorithm on the dual formulation of sparse large-scale strictly convex quadratic problems. Such problems are of particular interest since they arise as subproblems during every iteration of our new filter methods.
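    For reference, the basic filter mechanism that the chapters above build on can be sketched as follows. This is a generic Fletcher-Leyffer-style acceptance test with a margin, not the hybrid penalty-mode algorithm of Chapter 3, and all parameter values are illustrative.

```python
# Generic filter for constrained optimization: each entry is a pair
# (f, h) of objective value and constraint violation. A trial point is
# acceptable if, against every filter entry, it sufficiently reduces
# either infeasibility or the objective (the margin gives the envelope).
def acceptable(f, h, filter_entries, beta=0.99, gamma=1e-4):
    return all(h <= beta * h_j or f <= f_j - gamma * h
               for (f_j, h_j) in filter_entries)

def add_to_filter(f, h, filter_entries):
    """Add (f, h) and drop entries it dominates (worse in both measures)."""
    kept = [(f_j, h_j) for (f_j, h_j) in filter_entries
            if f_j < f or h_j < h]
    kept.append((f, h))
    return kept

flt = [(10.0, 1.0), (5.0, 2.0)]
assert acceptable(4.0, 0.5, flt)        # improves on both entries
assert not acceptable(12.0, 1.5, flt)   # dominated by (10.0, 1.0)
```

    A traditional filter method falls back to a restoration phase when no acceptable step exists; the hybrid method above instead switches to a penalty mode, so the objective function is never ignored.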

    Numerical solution of saddle point problems

    Full text link