On inexact Newton directions in interior point methods for linear optimization
In each iteration of an interior point method (IPM) at least one linear system
has to be solved, and solving these systems accounts for the bulk of the
computational effort of IPMs. Solving them with a direct method becomes
very expensive for large-scale problems.
In this thesis, we have been concerned with using an iterative method for
solving the reduced KKT systems arising in IPMs for linear programming.
The augmented system form of this linear system has a number of advantages,
notably a higher degree of sparsity than the normal equations form.
We design a block triangular preconditioner for this system which is constructed
by using a nonsingular basis matrix identified from an estimate of
the optimal partition in the linear program. We use the preconditioned conjugate
gradients (PCG) method to solve the augmented system. Although
the augmented system is indefinite, short-recurrence iterative methods such
as PCG can be applied to indefinite systems in certain situations. This approach
has been implemented within the HOPDM interior point solver.
Since the KKT system is solved only approximately, it becomes necessary
to study the convergence of the IPM in this inexact case. We present the
convergence analysis of the inexact infeasible path-following algorithm,
prove the global convergence of this method, and provide a complexity analysis.
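The two formulations of the Newton system mentioned above can be made concrete with a small sketch (illustrative only, not code from the thesis; the matrix sizes, residuals, and the scaling matrix Θ are made-up toy data). It builds the indefinite augmented system and verifies that eliminating Δx reproduces the positive definite normal-equations form A Θ Aᵀ Δy = h + A Θ r:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))        # constraint matrix (toy data)
theta = rng.uniform(0.5, 2.0, n)       # diagonal scaling Θ > 0 (e.g. X S^{-1})
r = rng.standard_normal(n)             # dual-side residual (illustrative)
h = rng.standard_normal(m)             # primal residual (illustrative)

# Augmented (indefinite) system:
#   [ -Θ^{-1}  Aᵀ ] [Δx]   [ r ]
#   [    A     0  ] [Δy] = [ h ]
K = np.block([[-np.diag(1.0 / theta), A.T],
              [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([r, h]))
dx, dy = sol[:n], sol[n:]

# Normal equations: eliminate Δx = Θ(AᵀΔy - r) to get (A Θ Aᵀ) Δy = h + A Θ r
M = A @ np.diag(theta) @ A.T
dy2 = np.linalg.solve(M, h + A @ (theta * r))
dx2 = theta * (A.T @ dy2 - r)
assert np.allclose(dy, dy2) and np.allclose(dx, dx2)
```

The sketch uses dense direct solves for clarity; the point of the thesis is precisely that for large problems the augmented form is sparser and worth attacking with a preconditioned iterative method instead.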
Interior Point Methods 25 Years Later
Interior point methods for optimization have been around for more than 25 years now. Their presence has shaken up the field of optimization. Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. Among the most impressive of them are their low-degree polynomial worst-case complexity and an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Interior point methods are competitive when dealing with small problems of dimensions below one million constraints and variables and are beyond competition when applied to large problems of dimensions going into millions of constraints and variables. In this survey we will discuss several issues related to interior point methods including the proof of the worst-case complexity result, the reasons for their amazingly fast practical convergence and the features responsible for their ability to solve very large problems. The ever-growing sizes of optimization problems impose new requirements on optimization.
Preconditioning Indefinite Systems in Interior Point Methods for Large Scale Linear Optimization
Abstract. We discuss the use of the preconditioned conjugate gradients method for solving the reduced KKT systems arising in interior point algorithms for linear programming. The (indefinite) augmented system form of this linear system has a number of advantages, notably a higher degree of sparsity than the (positive definite) normal equations form. Therefore we use the conjugate gradients method to solve the augmented system and look for a suitable preconditioner. An explicit null space representation of linear constraints is constructed by using a nonsingular basis matrix identified from an estimate of the optimal partition in the linear program. This is achieved by means of recently developed efficient basis matrix factorisation techniques which exploit hyper-sparsity and are used in implementations of the revised simplex method. The approach has been implemented within the HOPDM interior point solver and applied to medium and large-scale problems from public domain test collections. Computational experience is encouraging.
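The preconditioned conjugate gradients iteration at the heart of this abstract can be sketched in textbook form. This is a minimal illustration, not the paper's implementation: the `pcg` routine below is the standard PCG loop, and instead of the paper's basis-derived block triangular preconditioner for the indefinite augmented system it uses a simple Jacobi (diagonal) preconditioner on the positive definite normal-equations matrix, with made-up toy data:

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-10, maxit=200):
    """Textbook preconditioned conjugate gradients for an SPD system."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # residual
    z = precond(r)             # preconditioned residual
    p = z.copy()               # search direction
    rz = r @ z
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD example: normal-equations matrix A Θ Aᵀ, Jacobi preconditioner
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))
theta = rng.uniform(0.5, 2.0, 8)
M = A @ np.diag(theta) @ A.T
b = rng.standard_normal(4)
d = np.diag(M)
x = pcg(lambda v: M @ v, b, lambda v: v / d)
assert np.allclose(M @ x, b)
```

The design point the paper makes is that the choice of preconditioner, here a trivial diagonal, is where all the difficulty lies: a basis matrix estimating the optimal partition yields a preconditioner that makes the indefinite augmented system tractable for short-recurrence methods.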