Advances in Interior Point Methods for Large-Scale Linear Programming
This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of the symmetric neighbourhood as the driving tool for analysing the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity pairs, in addition to the lower bound already present in the common N_{-∞} neighbourhood. This allows the algorithm to keep the spread among the complementarity pairs under control and to reduce it together with the barrier parameter μ. We show that a long-step feasible algorithm based on this neighbourhood is globally convergent and converges in O(nL) iterations.
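The membership test implied by this neighbourhood can be sketched directly from the definition above (a minimal NumPy illustration; the function name and the choice γ = 0.1 are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def in_symmetric_neighbourhood(x, s, gamma=0.1):
    """Check whether a strictly positive primal-dual point (x, s) lies in
    the symmetric neighbourhood
        N_s(gamma) = { (x, s) : gamma * mu <= x_i * s_i <= mu / gamma },
    where mu = x's / n is the barrier parameter.
    Illustrative sketch only; gamma = 0.1 is an arbitrary choice.
    """
    products = x * s                   # complementarity pairs x_i * s_i
    mu = products.mean()               # barrier parameter mu = x's / n
    lower = products >= gamma * mu     # lower bound shared with N_-inf
    upper = products <= mu / gamma     # the extra symmetric upper bound
    return bool(np.all(lower) and np.all(upper))
```

A perfectly centred point (all pairs equal to μ) passes the test, while a point with a widely spread pair fails it, which is exactly the spread the algorithm keeps under control.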
The use of the symmetric neighbourhood and the recent theoretical understanding of the behaviour of Mehrotra's corrector direction motivate the introduction of a weighting mechanism that can be applied to any corrector direction, whether originating from Mehrotra's predictor–corrector algorithm or from the multiple centrality correctors technique. This modification of the way a correction is applied aims to ensure that any computed search direction contributes positively to a successful iteration by increasing the overall stepsize, thus avoiding the rejection of a corrector. The usefulness of the weighting strategy is documented through extensive numerical experiments on several sets of publicly available test problems. The implementation within the HOPDM interior point code shows remarkable time savings for large-scale linear programming problems.
The second technique develops an efficient way of constructing a starting point for structured large-scale stochastic linear programs. We generate a computationally viable warm-start point by solving a stochastic problem of much smaller dimension to low accuracy. The reduced problem is the deterministic equivalent program corresponding to an event tree composed of a restricted number of scenarios. The solution to the reduced problem is then expanded to the size of the full problem instance and used to initialise the interior point algorithm. We present theoretical conditions that the warm-start iterate has to satisfy in order to be successful. We implemented this technique in both the HOPDM and OOPS frameworks, and its performance is verified through a series of tests on problem instances coming from various stochastic programming sources.
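The expansion step described above can be sketched as follows (an illustrative NumPy fragment under the assumption that each full-tree scenario simply copies the decision block of the reduced scenario it maps to; the thesis specifies the expansion and the conditions on the warm-start iterate, not this exact code):

```python
import numpy as np

def expand_warm_start(x_reduced, assign):
    """Expand the solution of a reduced stochastic LP to the full problem:
    every scenario of the full event tree copies the decision block of the
    reduced scenario it is assigned to.

    x_reduced : (k, d) array, one d-dimensional block per reduced scenario
    assign    : length-N sequence mapping each full scenario to a reduced one
    Returns an (N, d) array used to initialise the interior point method.
    """
    return np.vstack([x_reduced[j] for j in assign])
```

In practice the expanded point would still be pushed into the interior (strictly positive) before the interior point algorithm is started, as the theoretical conditions in the thesis require a well-centred iterate.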
Updating constraint preconditioners for KKT systems in quadratic programming via low-rank corrections
This work focuses on the iterative solution of sequences of KKT linear
systems arising in interior point methods applied to large convex quadratic
programming problems. This task is the computational core of the interior point
procedure and an efficient preconditioning strategy is crucial for the
efficiency of the overall method. Constraint preconditioners are very effective
in this context; nevertheless, their computation may be very expensive for
large-scale problems, and resorting to approximations of them may be
convenient. Here we propose a procedure for building inexact constraint
preconditioners by updating a "seed" constraint preconditioner computed for a
KKT matrix at a previous interior point iteration. These updates are obtained
through low-rank corrections of the Schur complement of the (1,1) block of the
seed preconditioner. The updated preconditioners are analyzed both
theoretically and computationally. The results obtained show that our updating
procedure, coupled with an adaptive strategy for determining whether to
reinitialize or update the preconditioner, can enhance the performance of
interior point methods on large problems.
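The flavour of such a low-rank update can be sketched in dense NumPy form (illustrative only: the paper works with factorized sparse constraint preconditioners, whereas this fragment forms the Schur complement explicitly and corrects the r most-changed diagonal entries of the (1,1) block):

```python
import numpy as np

def low_rank_updated_schur(B, d_seed, d_new, r):
    """Approximate the new Schur complement S = B diag(d_new)^{-1} B^T by
    correcting the seed one, S0 = B diag(d_seed)^{-1} B^T, only on the r
    diagonal entries of the (1,1) block that changed the most:
        S ~ S0 + B_J (diag(d_new)^{-1} - diag(d_seed)^{-1})_JJ B_J^T,
    a rank-at-most-r update.  With r equal to the full dimension the
    update reproduces S exactly.
    """
    change = np.abs(1.0 / d_new - 1.0 / d_seed)
    J = np.argsort(change)[-r:]                  # indices that changed most
    S0 = B @ np.diag(1.0 / d_seed) @ B.T         # seed Schur complement
    delta = 1.0 / d_new[J] - 1.0 / d_seed[J]
    return S0 + (B[:, J] * delta) @ B[:, J].T    # low-rank correction
```

The point of the paper's adaptive strategy is precisely to decide when such a cheap correction is still accurate enough and when the seed preconditioner should be recomputed from scratch.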
An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for Linear Programming
Powerful commercial solvers based on interior-point methods (IPMs), such as Gurobi and Mosek, have been hugely successful in solving large-scale linear programming (LP) problems. The high efficiency of these solvers depends critically on the sparsity of the problem data and on advanced matrix factorization techniques. For a large-scale LP problem whose data matrix is dense (possibly structured), or whose corresponding normal matrix has a dense Cholesky factor (even with re-ordering), these solvers may require excessive computational cost and/or extremely heavy memory usage in each interior-point iteration. Unfortunately, the natural remedy, i.e., IPM solvers based on iterative linear-system methods, although able to avoid the explicit computation of the coefficient matrix and its factorization, is not practically viable due to the inherent extreme ill-conditioning of the large-scale normal equation arising in each interior-point iteration. To provide a better alternative for solving large-scale LPs with dense data or requiring an expensive factorization of the normal equation, we propose a semismooth Newton based inexact proximal augmented Lagrangian (Snipal) method. Different from classical IPMs, in each iteration of Snipal, iterative methods can be used efficiently to solve simpler yet better conditioned semismooth Newton linear systems. Moreover, Snipal not only enjoys fast asymptotic superlinear convergence but is also proven to possess a finite termination property. Numerical comparisons with Gurobi have demonstrated the encouraging potential of Snipal for handling large-scale LP problems where the constraint matrix has a dense representation or a dense factorization even with an appropriate re-ordering.
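The structure of such a method can be sketched generically (this is a plain augmented Lagrangian scheme with a semismooth Newton inner solver for the dual of min c'x s.t. Ax = b, x ≥ 0, not the authors' Snipal implementation; all parameter values are illustrative):

```python
import numpy as np

def alm_semismooth_newton(A, b, c, sigma=10.0, outer=50, inner=20, tol=1e-9):
    """ALM on the dual of  min { c'x : Ax = b, x >= 0 }  with a semismooth
    Newton inner solver.  The generalized Hessian at each inner step is
        sigma * A_J A_J' + delta * I,
    built from only the columns J active in the projection, which is the
    kind of smaller, better conditioned linear system the abstract refers
    to (here solved directly; iterative solvers would be used at scale).
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    delta = 1e-8                                   # small regularization
    for _ in range(outer):
        for _ in range(inner):                     # semismooth Newton on y
            u = x + sigma * (A.T @ y - c)          # trial multiplier
            pos = u > 0                            # active index set J
            grad = A @ np.maximum(u, 0.0) - b      # gradient of inner objective
            if np.linalg.norm(grad) < tol:
                break
            AJ = A[:, pos]
            H = sigma * (AJ @ AJ.T) + delta * np.eye(m)
            y = y - np.linalg.solve(H, grad)       # semismooth Newton step
        x = np.maximum(x + sigma * (A.T @ y - c), 0.0)  # multiplier update
    return x, y
```

Because the active set J is typically small near a solution, the Newton system involves far fewer columns of A than the IPM normal equations, and the piecewise-linear structure is what yields the finite termination behaviour mentioned above.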
Inexact Interior-Point Methods for Large Scale Linear and Convex Quadratic Semidefinite Programming
Ph.D. thesis.
A New Preconditioning Approach for an Interior Point–Proximal Method of Multipliers for Linear and Convex Quadratic Programming
In this paper, we address the efficient numerical solution of linear and
quadratic programming problems, often of large scale. With this aim, we devise
an infeasible interior point method, blended with the proximal method of
multipliers, which in turn results in a primal-dual regularized interior point
method. Application of this method gives rise to a sequence of increasingly
ill-conditioned linear systems which cannot always be solved by factorization
methods, due to memory and CPU time restrictions. We propose a novel
preconditioning strategy which is based on a suitable sparsification of the
normal equations matrix in the linear case, and also constitutes the foundation
of a block-diagonal preconditioner to accelerate MINRES for linear systems
arising from the solution of general quadratic programming problems. Numerical
results for a range of test problems demonstrate the robustness of the proposed
preconditioning strategy, together with its ability to solve linear systems of
very large dimension.
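The sparsification idea for the linear case can be sketched as follows (a dense NumPy illustration under the assumption that small off-diagonal entries of the normal equations matrix are dropped before factorization; the paper's preconditioner exploits the actual sparse structure and, for QPs, feeds a block-diagonal preconditioner for MINRES):

```python
import numpy as np

def sparsified_normal_preconditioner(A, theta, tol):
    """Build a preconditioner for the normal equations matrix
        M = A diag(theta) A^T
    by zeroing off-diagonal entries smaller than a drop tolerance and
    factorizing the sparsified matrix.  Returns a function applying the
    inverse of the sparsified matrix, as needed by a Krylov solver.
    """
    M = (A * theta) @ A.T
    M_sp = np.where(np.abs(M) >= tol, M, 0.0)   # drop small off-diagonals
    np.fill_diagonal(M_sp, np.diag(M))          # always keep the diagonal
    L = np.linalg.cholesky(M_sp)                # factorize sparsified matrix
    def apply(r):                               # r -> M_sp^{-1} r
        return np.linalg.solve(L.T, np.linalg.solve(L, r))
    return apply
```

Because the iterates of a regularized interior point method change the diagonal theta gradually, such a sparsified factorization can remain an effective preconditioner over many of the increasingly ill-conditioned systems mentioned above.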
Solving Saddle Point Formulations of Linear Programs with Frank-Wolfe
The problem of solving a linear program (LP) is ubiquitous in industry, yet in recent years the size of linear programming problems has grown and continues to do so. State-of-the-art LP solvers make use of the Simplex method and primal-dual interior-point methods, which are able to provide accurate solutions in a reasonable amount of time for most problems. However, both the Simplex method and interior-point methods require solving a system of linear equations at each iteration, an operation that does not scale well with the size of the problem.
In response to the growing size of linear programs and poor scalability of existing algorithms, researchers have started to consider
first-order methods for solving large scale linear programs. The best known first-order method for general linear programming problems is PDLP. First-order methods for linear programming are characterized by having a matrix-vector product as their primary computational cost.
We present a first-order primal-dual algorithm for solving saddle point formulations of linear programs, named FWLP (Frank-Wolfe Linear Programming). We provide some theoretical results regarding the behavior of our algorithm; however, no convergence guarantees are provided. Numerical investigations suggest that our algorithm has error O(1/√k) after k iterations, worse than that of PDLP; however, we show that our algorithm has advantages for solving very large LPs in practice, such as only needing part of the matrix A at each iteration.
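The class of methods discussed above can be illustrated with a plain PDHG iteration on the saddle-point form of an LP (PDHG is the scheme underlying PDLP; this is not FWLP itself, and the step sizes are untuned illustrative choices):

```python
import numpy as np

def pdhg_lp(A, b, c, iters=5000, tau=0.1, sigma=0.1):
    """PDHG on the saddle-point form of  min { c'x : Ax = b, x >= 0 },
        min_{x>=0} max_y  c'x + y'(b - Ax).
    Each iteration costs two matrix-vector products with A, which is the
    hallmark of first-order LP methods: no linear system is ever solved.
    Requires tau * sigma * ||A||^2 < 1 for convergence.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)  # primal step + projection
        y = y + sigma * (b - A @ (2 * x_new - x))         # dual step (extrapolated)
        x = x_new
    return x, y
```

FWLP replaces such projected-gradient steps with Frank-Wolfe-style linear minimization steps, which is what allows it to touch only part of the matrix A per iteration, as stated above.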