53 research outputs found
A specialized interior-point algorithm for huge minimum convex cost flows in bipartite networks
Research Report UPC-DEIO DR 2018-01, November 2018. The computation of the Newton direction is the most time-consuming step of interior-point methods. This direction was efficiently computed by a combination of Cholesky factorizations and conjugate gradients in a specialized interior-point method for block-angular structured problems. In this work we apply this algorithmic approach to solve very large instances of minimum cost flow problems in bipartite networks, for convex objective functions with diagonal Hessians (i.e., either linear, quadratic or separable nonlinear objectives). After analyzing the theoretical properties of the interior-point method for this kind of problem, we provide extensive computational experiments with linear and quadratic instances of up to one billion arcs (corresponding to 200 nodes and five million nodes in each subset of the node partition, respectively). For linear and quadratic instances our approach is compared with the barrier algorithms of CPLEX (both standard path-following and homogeneous self-dual); for linear instances it is also compared with the different algorithms of the state-of-the-art network flow solver LEMON (namely: network simplex, capacity scaling, cost scaling and cycle canceling). The specialized interior-point approach significantly outperformed the other approaches in most of the linear and quadratic transportation instances tested. In particular, it always provided a solution within the time limit and it never exhausted the 192 gigabytes of memory of the server used for the runs. For assignment problems the network algorithms in LEMON were the most efficient option. Peer Reviewed. Preprint.
A specialized interior-point algorithm for huge minimum convex cost flows in bipartite networks
© 2020 Elsevier. The computation of the Newton direction is the most time-consuming step of interior-point methods. This direction was efficiently computed by a combination of Cholesky factorizations and conjugate gradients in a specialized interior-point method for block-angular structured problems. In this work we refine this algorithmic approach to solve very large instances of minimum cost flow problems in bipartite networks, for convex objective functions with diagonal Hessians (i.e., either linear, quadratic or separable nonlinear objectives). For this class of problems the specialized algorithm only required the solution of a system by conjugate gradients at each interior-point iteration, avoiding Cholesky factorizations. After analyzing the theoretical properties of the interior-point method for this kind of problem, we provide extensive computational experiments with linear and quadratic instances of up to one billion arcs (corresponding to 200 nodes and five million nodes in each subset of the node partition, respectively). For linear and quadratic instances our approach is compared with the barrier algorithms of CPLEX (both standard path-following and homogeneous self-dual); for linear instances it is also compared with the different algorithms of the state-of-the-art network flow solver LEMON (namely: network simplex, capacity scaling, cost scaling and cycle canceling). The specialized interior-point approach significantly outperformed the other approaches in most of the linear and quadratic transportation instances tested. In particular, it always provided a solution within the time limit; and (like LEMON, and unlike CPLEX) it never exhausted the memory of the server used for the runs. For assignment problems the network algorithms in LEMON were the most efficient option. This work has been supported by the grants MINECO/FEDER MTM2015-65362-R and MCIU/AEI/FEDER RTI2018-097580-B-I00. Peer Reviewed. Postprint (author's final draft).
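To make the computation described in this abstract concrete, the following is a minimal sketch (not the authors' implementation) of one interior-point step for a bipartite min-cost flow problem: the normal equations A&Theta;A&#x1D40;&Delta;y = r are solved matrix-free by conjugate gradients, with a diagonal &Theta; and no Cholesky factorization. The network size, &Theta; values, and right-hand side are all illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy transportation network: m supply and n demand nodes, all m*n arcs present.
m, n = 3, 4
arcs = [(i, j) for i in range(m) for j in range(n)]
rows, cols, vals = [], [], []
for k, (i, j) in enumerate(arcs):
    rows += [i, m + j]
    cols += [k, k]
    vals += [1.0, -1.0]
# Drop the last node's row so A has full row rank (one balance equation is redundant).
A = csr_matrix((vals, (rows, cols)), shape=(m + n, len(arcs)))[:-1]

rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 2.0, size=len(arcs))  # diagonal Theta at a (simulated) IPM iterate

def normal_matvec(y):
    """Matrix-free product (A Theta A^T) y -- the normal-equations matrix is never formed."""
    return A @ (theta * (A.T @ y))

def conjugate_gradient(matvec, b, tol=1e-12, maxiter=500):
    """Plain CG for a symmetric positive definite operator."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = rng.standard_normal(A.shape[0])
dy = conjugate_gradient(normal_matvec, b)
residual = float(np.linalg.norm(normal_matvec(dy) - b))
```

Because the Hessian is diagonal for this problem class, every operation above is a sparse matrix-vector product, which is what makes billion-arc instances tractable in memory.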
Interior-point solver for convex separable block-angular problems
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One of these approaches solved the normal equations using sparse Cholesky factorizations for the block constraints, and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work we present an efficient solver based on this algorithm. Some of its features are: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor-corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to SML, a structure-conveying modeling language based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using l1 and l2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to l1 which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is from two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers. Preprint.
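The power series expansion this abstract refers to can be illustrated in miniature. If the system matrix is written as S = I - M with spectral radius &rho;(M) < 1, then S&#8315;&#185; = I + M + M&sup2; + ..., and truncating the series gives a preconditioner applied via matrix-vector products only. The matrix below is a random symmetric stand-in, not the linking-constraints matrix of the paper; the sketch only shows why the spectral-radius estimate governs preconditioner quality.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
G = rng.standard_normal((n, n))
M = (G + G.T) / 2
M *= 0.5 / max(abs(np.linalg.eigvalsh(M)))  # scale so that rho(M) = 0.5

S = np.eye(n) - M  # Neumann series S^{-1} = I + M + M^2 + ... converges since rho(M) < 1

def apply_truncated_inverse(v, terms):
    """Apply (I + M + ... + M^{terms-1}) to v, i.e. the truncated power series,
    using only matrix-vector products (as a preconditioner would)."""
    out = v.copy()
    acc = v.copy()
    for _ in range(terms - 1):
        acc = M @ acc
        out = out + acc
    return out

v = rng.standard_normal(n)
exact = np.linalg.solve(S, v)
errors = [float(np.linalg.norm(apply_truncated_inverse(v, k) - exact))
          for k in (1, 2, 4, 8, 16)]
# The truncation error equals M^k S^{-1} v, so it decays like rho(M)^k: adding
# terms helps exactly as much as the spectral radius predicts.
```

This is why the solver's runtime spectral-radius estimate matters: it tells the user how many series terms buy how much accuracy.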
Adaptive Branch and Bound for Efficient Solution of Mixed-Integer Programs Formulated with Big-M
This thesis describes three specialized branch-and-bound (B&B) algorithms for solving a mixed-integer program (MIP) that incorporates standard big-M constructs. The goal is to identify valid values for M that also lead to short solution times. One algorithm initializes large instances of M (giving a weak relaxation of the MIP), and decreases these as required to increase the efficiency of standard B&B. Two algorithms initialize small and possibly invalid instances of M, and subsequently increase those values in an attempt to ensure solution validity. Each algorithm requires a model-specific test condition to detect weak or invalid Ms. We test all algorithms on an uncapacitated k-median problem (a variant of the uncapacitated facility location problem), and one algorithm on a shortest-path interdiction problem (SPIP). We observe substantial reductions in run times in almost all cases tested. When solving for exact solutions, computational results show that the proposed algorithms may reduce solution times by up to 75 per cent for the uncapacitated k-median problem and 99 per cent for the SPIP. When the algorithms yield marginally suboptimal solutions, substantial solution-time improvements are also recorded. While testing is limited, this thesis serves as a proof of concept that the proposed adaptive algorithms can be effective in reducing solution times and producing optimal or nearly optimal solutions. http://archive.org/details/adaptivebranchnd1094517370. Major, Singapore Army. Approved for public release; distribution is unlimited.
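The "initialize small and possibly invalid M, then grow it" strategy can be sketched on a toy facility-location instance. Everything below is illustrative and not from the thesis: the costs, the validity test (no open facility's load sits at its M ceiling), and the brute-force enumeration standing in for a B&B solve.

```python
from itertools import product

# Hypothetical toy instance: 2 facilities, 3 clients.
open_cost = [4.0, 3.0]
serve_cost = [[1.0, 2.0, 3.0], [2.0, 1.0, 1.0]]  # serve_cost[facility][client]

def solve_with_M(M):
    """Brute-force stand-in for a B&B solve of the big-M model:
    min sum open_cost[i]*y_i + serving costs, s.t. load_i <= M*y_i.
    Returns (cost, y, load) for the best assignment surviving the M cap,
    or None if M cuts off every assignment."""
    best = None
    for y in product((0, 1), repeat=2):
        if not any(y):
            continue
        cost = sum(o * yi for o, yi in zip(open_cost, y))
        load = [0, 0]
        for j in range(3):  # each client to its cheapest open facility
            c, i = min((serve_cost[i][j], i) for i in range(2) if y[i])
            load[i] += 1
            cost += c
        if any(load[i] > M * y[i] for i in range(2)):
            continue  # the big-M constraint cut off this assignment
        if best is None or cost < best[0]:
            best = (cost, y, tuple(load))
    return best

# Adaptive loop: start with a small, possibly invalid M, and double it until the
# validity test passes (no open facility's load is clamped at exactly M).
M = 1
while True:
    result = solve_with_M(M)
    if result is not None:
        cost, y, load = result
        if all(load[i] < M or y[i] == 0 for i in range(2)):
            break  # M was not binding, so the solution is provably valid
    M *= 2  # M was binding (or infeasible): it may have cut off the true optimum
```

The model-specific part, as the thesis emphasizes, is the test condition: here "load strictly below M" certifies validity, but each MIP family needs its own check.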
Improving an interior-point approach for large block-angular problems by hybrid preconditioners
The computational time required by interior-point methods is often dominated by the solution of linear systems of equations. An efficient specialized interior-point algorithm for primal block-angular problems has been used to solve these systems by combining Cholesky factorizations for the block constraints and a conjugate gradient based on a power series preconditioner for the linking constraints. In some problems this power series preconditioner turned out to be inefficient in the last interior-point iterations, when the systems became ill-conditioned. In this work this approach is combined with a splitting preconditioner based on LU factorization, which is mainly appropriate for the last interior-point iterations. Computational results are provided for three classes of problems: multicommodity flows (oriented and nonoriented), minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that, in most cases, the hybrid preconditioner improves the performance and robustness of the interior-point solver. In particular, for some block-angular problems the solution time is reduced by a factor of 10. Peer Reviewed. Preprint.
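The late-iteration degradation motivating the hybrid scheme is easy to reproduce in miniature: as an IPM approaches optimality, the diagonal scaling &Theta; develops entries spread over many orders of magnitude, the normal-equations matrix A&Theta;A&#x1D40; becomes ill-conditioned, and CG iteration counts blow up. This is the regime where switching to a splitting (LU-based) preconditioner pays off. The sizes and &Theta; distributions below are illustrative, not from the paper, and the sketch uses unpreconditioned CG to isolate the conditioning effect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
A = rng.standard_normal((n, 2 * n))  # stand-in constraint matrix

def cg_iterations(K, b, tol=1e-8, maxiter=2000):
    """Unpreconditioned CG; returns the iteration count needed to reach
    a relative residual of tol (or maxiter if it never gets there)."""
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for it in range(1, maxiter + 1):
        Kp = K @ p
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

b = rng.standard_normal(n)

# Early IPM iterate: Theta entries of comparable size -> well-conditioned system.
theta_early = rng.uniform(0.5, 2.0, size=2 * n)
iters_early = cg_iterations((A * theta_early) @ A.T, b)

# Late IPM iterate: Theta entries spread over twelve orders of magnitude.
theta_late = 10.0 ** rng.uniform(-6.0, 6.0, size=2 * n)
iters_late = cg_iterations((A * theta_late) @ A.T, b)
```

A practical hybrid strategy can monitor exactly this iteration count and switch preconditioners once it crosses a threshold.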
Efficient interior point algorithms for large scale convex optimization problems
Interior point methods (IPMs) are among the most widely used algorithms for
convex optimization problems. They are applicable to a wide range of problems, including
linear, quadratic, nonlinear, conic and semidefinite programming problems,
requiring a polynomial number of iterations to find an accurate approximation of
the primal-dual solution. The formidable convergence properties of IPMs come
with a fundamental drawback: the numerical linear algebra involved becomes
progressively more and more challenging as the IPM converges towards optimality.
In particular, solving the linear systems to find the Newton directions requires
most of the computational effort of an IPM. Proposed remedies to alleviate
this phenomenon include regularization techniques, predictor-corrector schemes,
purposely developed preconditioners, low-rank update strategies, to mention a
few.
For problems of very large scale, this unpleasant characteristic of IPMs becomes
increasingly problematic, since any technique used must be efficient
and scalable in order to maintain acceptable computational requirements. In this
Thesis, we deal with convex linear and quadratic problems of large “dimension”:
we use this term in a broader sense than just a synonym for “size” of the problem.
The instances considered can be either problems with a large number of variables
and/or constraints but with a sparse structure, or problems with a moderate
number of variables and/or constraints but with a dense structure. Both these
types of problems require very efficient strategies throughout the algorithm,
even though the corresponding difficulties arise for different reasons.
The first application that we consider deals with a moderate size quadratic
problem where the quadratic term is 100% dense; this problem arises from X-ray
tomographic imaging reconstruction, in particular with the goal of separating the
distribution of two materials present in the observed sample. A novel non-convex
regularizer is introduced for this purpose; convexity of the overall problem is
maintained by careful choice of the parameters. We derive a specialized interior
point method for this problem and an appropriate preconditioner for the normal
equations linear system, to be used without ever forming the fully dense matrices
involved.
The next major contribution is related to the issue of efficiently computing
the Newton direction during IPMs. When an iterative method is applied to
solve the linear equation system in IPMs, attention is usually placed on
accelerating its convergence by designing appropriate preconditioners, but the
linear solver is applied as a black box with a standard termination criterion
which asks for a sufficient reduction of the residual in the linear system. Such an
approach often leads to an unnecessary “over-solving” of linear equations. We
propose new indicators for the early termination of the inner iterations and test
them on a set of large scale quadratic optimization problems. Evidence gathered
from these computational experiments shows that the new technique delivers
significant improvements in terms of inner (linear) iterations and those translate
into significant savings of the IPM solution time.
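The over-solving phenomenon can be shown on a toy model. The thesis's actual early-termination indicators are more sophisticated; the sketch below only contrasts a black-box policy (always solve the inner system to a tight fixed tolerance) with an adaptive policy whose inner tolerance is tied to a simulated duality measure &mu;, on an illustrative SPD system held fixed across outer iterations.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
G = rng.standard_normal((n, n))
K = G @ G.T + np.eye(n)  # model SPD Newton system (illustrative)
b = rng.standard_normal(n)

def cg_iterations(tol, maxiter=10_000):
    """CG iteration count to reach a relative residual of tol."""
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for it in range(1, maxiter + 1):
        Kp = K @ p
        alpha = rs / (p @ Kp)
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

mus = [10.0 ** (-k) for k in range(1, 9)]  # duality measure along 8 outer iterations

# Black-box policy: every inner solve goes to a tight fixed tolerance.
inner_fixed = sum(cg_iterations(1e-10) for _ in mus)
# Adaptive policy: inner accuracy only as tight as the current mu warrants.
inner_adaptive = sum(cg_iterations(max(0.1 * mu, 1e-10)) for mu in mus)
```

Early outer iterations simply do not need ten digits of accuracy in the Newton direction, so the adaptive total is strictly smaller; that gap is the "over-solving" the thesis removes.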
The last application considered is discrete optimal transport (OT); these
problems give rise to very large linear programs with highly structured
matrices. Solutions of such problems are expected to be sparse, that is, only a
small subset of entries in the optimal solution is expected to be nonzero. We derive
an IPM for the standard OT formulation, which exploits a column-generation-like
technique to force all intermediate iterates to be as sparse as possible. We prove
theoretical results about the sparsity pattern of the optimal solution and we
propose to mix iterative and direct linear solvers in an efficient way, to keep
computational time and memory requirements as low as possible. We compare the
proposed method with two state-of-the-art solvers and show that it can compete
with the best network optimization tools in terms of computational time and
memory usage. We perform experiments with problems reaching more than four
billion variables and demonstrate the robustness of the proposed method.
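The sparsity structure the paragraph refers to can be seen on a tiny instance. This is not the thesis's solver: the sketch just poses a small OT problem as an LP, hands it to SciPy's HiGHS backend, and checks that the vertex solution has at most m + n - 1 of the m·n entries nonzero, which is the structure the column-generation-like IPM exploits. The marginals and cost matrix are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny discrete OT instance: move mass from 3 sources to 4 sinks.
m, n = 3, 4
rng = np.random.default_rng(4)
C = rng.uniform(0.0, 1.0, size=(m, n))  # ground cost matrix
a = np.full(m, 1.0 / m)                 # source marginal
b = np.full(n, 1.0 / n)                 # sink marginal

# Flatten the transport plan P (m x n, row-major) into a vector; the two
# marginal conditions become an (m + n) x (m*n) incidence-like equality system.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0  # row sums of P equal a
for j in range(n):
    A_eq[m + j, j::n] = 1.0           # column sums of P equal b
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
P = res.x.reshape(m, n)
nonzeros = int((P > 1e-9).sum())
# A vertex of the transportation polytope has at most m + n - 1 nonzeros out of
# m*n = 12 entries: the plan is sparse even though the LP matrix is dense here.
```

At the billion-variable scale discussed above, this gap between m·n candidate entries and m + n - 1 active ones is precisely what makes forcing sparse intermediate iterates worthwhile.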
We also consider the optimal transport problem on sparse graphs and present
a primal-dual regularized IPM to solve it. We prove that the introduction of the
regularization allows us to use sparsified versions of the normal equations system
to inexpensively generate inexact IPM directions. The proposed method is shown
to have polynomial complexity and to outperform a very efficient network simplex
implementation, for problems with up to 50 million variables.