Improving an interior-point algorithm for multicommodity flows by quadratic regularizations
One of the best approaches for some classes of multicommodity flow problems is a specialized interior-point method that solves the normal equations by a combination of Cholesky factorizations and preconditioned conjugate gradient. Its efficiency depends on the spectral radius, which lies in [0,1), of a certain matrix in the definition of the preconditioner. In a recent work the authors improved this algorithm (i.e., reduced the spectral radius) for general block-angular problems by adding a quadratic regularization to the logarithmic barrier. This barrier was shown to be self-concordant, which guarantees the convergence and polynomial complexity of the algorithm. In this work we focus on linear multicommodity problems, a particular case of primal block-angular ones. The general results are tailored to multicommodity flows, allowing a local sensitivity analysis of the effect of the regularization. Extensive computational results on standard and difficult instances, testing several regularization strategies, are also provided. These results show that the regularized interior-point algorithm is more efficient than the nonregularized one. From this work it can be concluded that, if interior-point methods based on conjugate gradients are used, linear multicommodity flow problems are most efficiently solved as a sequence of quadratic ones.
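As a loose numerical illustration of why the quadratic regularization helps (our own sketch, not the paper's algorithm): adding a term (q/2)||x||^2 to the logarithmic barrier shifts the eigenvalues of its Hessian away from zero as iterates approach the boundary, which tends to reduce the condition number of the systems the conjugate gradient must solve.

```python
import numpy as np

def barrier_hessian(x, q=0.0):
    # Hessian of the regularized log barrier B(x) = -sum(log x_i) + (q/2)||x||^2:
    # diag(1/x_i^2) + q*I. A positive q lifts the smallest eigenvalues,
    # improving the conditioning of the Newton systems.
    return np.diag(1.0 / x**2) + q * np.eye(len(x))

x = np.array([1e-3, 0.5, 2.0])           # iterate approaching the boundary
H0 = barrier_hessian(x)                  # plain logarithmic barrier
H1 = barrier_hessian(x, q=1.0)           # quadratically regularized barrier
print(np.linalg.cond(H0) > np.linalg.cond(H1))  # True: regularization helps
```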
Interior-point solver for convex separable block-angular problems
Constraint matrices with block-angular structure are pervasive in optimization. Interior-point methods have been shown to be competitive for these structured problems by exploiting the linear algebra. One such approach solves the normal equations using sparse Cholesky factorizations for the block constraints and a preconditioned conjugate gradient (PCG) for the linking constraints. The preconditioner is based on a power series expansion which approximates the inverse of the matrix of the linking constraints system. In this work we present an efficient solver based on this algorithm. Some of its features are: it solves linearly constrained convex separable problems (linear, quadratic or nonlinear); both Newton and second-order predictor-corrector directions can be used, either with the Cholesky+PCG scheme or with a Cholesky factorization of the normal equations; the preconditioner may include any number of terms of the power series; for any number of these terms, it estimates the spectral radius of the matrix in the power series (which is instrumental for the quality of the preconditioner). The solver has been hooked to SML, a structure-conveying modelling language based on the popular AMPL modeling language. Computational results are reported for some large and/or difficult instances in the literature: (1) multicommodity flow problems; (2) minimum congestion problems; (3) statistical data protection problems using L1 and L2 distances (which are linear and quadratic problems, respectively), and the pseudo-Huber function, a nonlinear approximation to L1 which improves the preconditioner. In the largest instances, of up to 25 million variables and 300,000 constraints, this approach is from two to three orders of magnitude faster than state-of-the-art linear and quadratic optimization solvers.
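The power-series idea behind the preconditioner can be sketched in a few lines (illustrative only; the actual solver applies the truncated series implicitly to the linking-constraints system rather than forming any inverse explicitly). Writing the system matrix as D - C with D its dominant part, the truncated Neumann series approximates (D - C)^{-1}, and adding terms tightens the approximation whenever the spectral radius of D^{-1}C is below 1.

```python
import numpy as np

def power_series_inverse(D, C, terms=2):
    # Truncated Neumann series: (D - C)^{-1} = (I + M + M^2 + ...) D^{-1},
    # with M = D^{-1} C, valid when the spectral radius of M is below 1.
    Dinv = np.linalg.inv(D)
    M = Dinv @ C
    S = np.eye(D.shape[0])
    P = np.eye(D.shape[0])
    for _ in range(terms):
        P = P @ M
        S = S + P
    return S @ Dinv

rng = np.random.default_rng(0)
D = np.diag(rng.uniform(2.0, 3.0, 5))      # dominant diagonal part
C = 0.1 * rng.standard_normal((5, 5))      # small correction
exact = np.linalg.inv(D - C)
errs = [np.linalg.norm(power_series_inverse(D, C, h) - exact) for h in (0, 1, 3)]
print(errs)  # approximation error shrinks as more terms are added
```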
Improving an interior-point approach for large block-angular problems by hybrid preconditioners
The computational time required by interior-point methods is often dominated by the solution of linear systems of equations. An efficient specialized interior-point algorithm for primal block-angular problems has been used to solve these systems by combining Cholesky factorizations for the block constraints and a conjugate gradient based on a power series preconditioner for the linking constraints. In some problems this power series preconditioner proved inefficient in the last interior-point iterations, when the systems became ill-conditioned. In this work this approach is combined with a splitting preconditioner based on LU factorization, which is mainly appropriate for the last interior-point iterations. Computational results are provided for three classes of problems: multicommodity flows (oriented and nonoriented), minimum-distance controlled tabular adjustment for statistical data protection, and the minimum congestion problem. The results show that, in most cases, the hybrid preconditioner improves the performance and robustness of the interior-point solver. In particular, for some block-angular problems the solution time is reduced by a factor of 10.
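A rough back-of-the-envelope sketch (our own illustration, not from the paper) of why the power-series preconditioner degrades as the spectral radius approaches 1, which is what motivates switching to the LU-based splitting preconditioner in the last, ill-conditioned iterations: the geometric-series bound on the truncation error forces the number of series terms to blow up.

```python
def terms_needed(rho, tol=1e-6):
    # Number of power-series terms needed so the Neumann truncation error
    # bound rho**(k+1) / (1 - rho) drops below tol (geometric-series bound).
    k = 0
    while rho ** (k + 1) / (1.0 - rho) > tol:
        k += 1
    return k

for rho in (0.5, 0.9, 0.99):
    print(rho, terms_needed(rho))  # required terms grow sharply as rho -> 1
```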
Solving L1-CTA in 3D tables by an interior-point method for primal block-angular problems
The purpose of the field of statistical disclosure control is to ensure that no confidential information can be derived from statistical data released by, mainly, national statistical agencies. Controlled tabular adjustment (CTA) is an emerging technique for the protection of statistical tabular data. Given a table to be protected, CTA looks for the closest safe table. In this work we focus on CTA for three-dimensional tables using the L1 norm for the distance between the original and protected tables. Three L1-CTA models are presented, giving rise to six different primal block-angular structures of the constraint matrices. The resulting linear programming problems are solved by a specialized interior-point algorithm for this constraint structure, which solves the normal equations by a combination of Cholesky factorization and preconditioned conjugate gradients (PCG). In the past this algorithm was shown to be one of the most efficient approaches for some classes of block-angular problems. The effect of quadratic regularizations is also analyzed, showing that for three of the six primal block-angular structures the performance of PCG is guaranteed to improve. Computational results are reported for a set of large instances, which provide linear optimization problems of up to 50 million variables and 25 million constraints. The specialized interior-point algorithm is compared with the state-of-the-art barrier solver of the CPLEX 12.1 package, and shown to be a more efficient choice for very large L1-CTA instances.
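The L1 modelling trick underlying such formulations can be shown on a toy example (ours, far simpler than the paper's three-dimensional models): each cell deviation z is split as z = p - m with p, m >= 0, so |z| = p + m and the L1 distance becomes a linear objective. Here a one-row table must keep its total while the sensitive first cell moves up by at least one unit.

```python
import numpy as np
from scipy.optimize import linprog

# Toy L1-CTA flavour: protect table a = [5, 3, 2] (row total 10) by forcing
# the first cell up by >= 1 while preserving additivity, minimizing L1 change.
a = np.array([5.0, 3.0, 2.0])
n = len(a)
c = np.ones(2 * n)                                      # minimize sum(p) + sum(m)
A_eq = np.hstack([np.ones((1, n)), -np.ones((1, n))])   # sum(p - m) = 0 (total kept)
b_eq = np.array([0.0])
A_ub = np.zeros((1, 2 * n))
A_ub[0, 0], A_ub[0, n] = -1.0, 1.0                      # -(p1 - m1) <= -1, i.e. z1 >= 1
b_ub = np.array([-1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
z = res.x[:n] - res.x[n:]
print(res.fun)          # optimal L1 distance: 2.0 (cell 1 up by 1, another down by 1)
print((a + z).sum())    # protected table still sums to 10.0
```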
A New Preconditioning Approach for an Interior Point–Proximal Method of Multipliers for Linear and Convex Quadratic Programming
In this paper, we address the efficient numerical solution of linear and
quadratic programming problems, often of large scale. With this aim, we devise
an infeasible interior point method, blended with the proximal method of
multipliers, which in turn results in a primal-dual regularized interior point
method. Application of this method gives rise to a sequence of increasingly
ill-conditioned linear systems which cannot always be solved by factorization
methods, due to memory and CPU time restrictions. We propose a novel
preconditioning strategy which is based on a suitable sparsification of the
normal equations matrix in the linear case, and also constitutes the foundation
of a block-diagonal preconditioner to accelerate MINRES for linear systems
arising from the solution of general quadratic programming problems. Numerical
results for a range of test problems demonstrate the robustness of the proposed
preconditioning strategy, together with its ability to solve linear systems of
very large dimension.
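The general flavour of a sparsification-based preconditioner can be sketched as follows (our illustration of the idea, not the paper's exact dropping rule): keep the diagonal and only the large off-diagonal entries of the normal-equations matrix, factorize the resulting sparser matrix, and use that factorization to precondition the iterative solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
B = 0.05 * rng.standard_normal((50, 50))
N = B @ B.T + np.diag(rng.uniform(1.0, 100.0, 50))  # SPD normal-equations-like matrix
b = rng.standard_normal(50)

S = np.where(np.abs(N) >= 1e-2, N, 0.0)   # drop small off-diagonal entries
np.fill_diagonal(S, np.diag(N))           # always keep the diagonal
L = np.linalg.cholesky(S)                 # factorization of the sparsified matrix
M = LinearOperator(N.shape, matvec=lambda r: np.linalg.solve(L.T, np.linalg.solve(L, r)))

def cg_iterations(A, rhs, M=None):
    # Count CG iterations via the callback, with and without preconditioning.
    count = [0]
    cg(A, rhs, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

print(cg_iterations(N, b), cg_iterations(N, b, M))  # preconditioning cuts iterations
```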
Large-scale optimization with the primal-dual column generation method
The primal-dual column generation method (PDCGM) is a general-purpose column
generation technique that relies on the primal-dual interior point method to
solve the restricted master problems. The use of this interior point method
variant yields suboptimal and well-centered dual solutions, which
naturally stabilize the column generation. As recently presented in the
literature, reductions in the number of calls to the oracle and in the CPU
times are typically observed when compared to the standard column generation,
which relies on extreme optimal dual solutions. However, these results are
based on relatively small problems obtained from linear relaxations of
combinatorial applications. In this paper, we investigate the behaviour of the
PDCGM in a broader context, namely when solving large-scale convex optimization
problems. We have selected applications that arise in important real-life
contexts such as data analysis (multiple kernel learning problem),
decision-making under uncertainty (two-stage stochastic programming problems)
and telecommunication and transportation networks (multicommodity network flow
problem). In the numerical experiments, we use publicly available benchmark
instances to compare the performance of the PDCGM against recent results for
different methods presented in the literature, which were the best available
results to date. The analysis of these results suggests that the PDCGM offers
an attractive alternative over specialized methods since it remains competitive
in terms of number of iterations and CPU times even for large-scale
optimization problems.
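The column generation loop itself can be sketched on a textbook cutting-stock instance (illustrative only: here the restricted master is solved to optimality with HiGHS via scipy, whereas the PDCGM would stop at suboptimal, well-centered interior points of the master).

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

widths = np.array([3, 5, 7])     # piece widths
demand = np.array([25, 20, 18])  # pieces required
roll = 16                        # raw roll width

def price(y):
    # Oracle: best cutting pattern for dual prices y (brute-force knapsack,
    # fine at this tiny size; real codes solve the pricing problem properly).
    best_v, best_a = 0.0, None
    for a in product(*(range(roll // w + 1) for w in widths)):
        a = np.array(a)
        if a @ widths <= roll and a @ y > best_v:
            best_v, best_a = a @ y, a
    return best_v, best_a

cols = [np.eye(3)[i] * (roll // w) for i, w in enumerate(widths)]  # initial patterns
while True:
    A = np.column_stack(cols)
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    y = -res.ineqlin.marginals      # duals of the demand-covering constraints
    v, a = price(y)
    if v <= 1.0 + 1e-9:             # no column with negative reduced cost: stop
        break
    cols.append(a.astype(float))
print(round(res.fun, 3))            # LP bound on the number of rolls needed
```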
Standard Bundle Methods: Untrusted Models and Duality
We review the basic ideas underlying the vast family of algorithms for nonsmooth convex optimization known as "bundle methods". In a nutshell, these approaches are based on constructing models of the function, but the lack of continuity of first-order information implies that these models cannot be trusted, not even close to an optimum. Therefore, many different forms of stabilization have been proposed to try to avoid being led to areas where the model is so inaccurate as to result in almost useless steps. In the development of these methods, duality arguments are useful, if not outright necessary, to better analyze the behaviour of the algorithms. Also, in many relevant applications the function at hand is itself a dual one, so that duality allows one to map algorithmic concepts and results back into a "primal space" where they can be exploited; in turn, structure in that space can be exploited to improve the algorithms' behaviour, e.g. by developing better models. We present an updated picture of the many developments around the basic idea along at least three different axes: the form of the stabilization, the form of the model, and the approximate evaluation of the function.
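The stabilization idea can be condensed into a minimal one-dimensional proximal bundle iteration (our toy sketch of the general mechanism, not any specific method from the survey): the cutting-plane model of f(x) = |x| cannot be trusted far from the bundle points, so the next candidate minimizes model plus a proximal term (1/2t)(y - xc)^2 around the stability centre xc; serious steps move xc, null steps only enrich the model.

```python
from scipy.optimize import minimize_scalar

f = abs
subgrad = lambda x: 1.0 if x >= 0 else -1.0   # a subgradient of |x|

bundle = []                     # cuts: (x_i, f(x_i), g_i)
xc, t = 4.0, 1.0                # stability centre and proximal parameter
x = xc
for _ in range(25):
    bundle.append((x, f(x), subgrad(x)))
    model = lambda y: max(fi + gi * (y - xi) for xi, fi, gi in bundle)
    x = minimize_scalar(lambda y: model(y) + (y - xc) ** 2 / (2 * t),
                        bounds=(-10.0, 10.0), method="bounded").x
    if f(x) <= f(xc) - 0.1 * (f(xc) - model(x)):  # sufficient-descent test
        xc = x                                    # serious step
print(abs(xc) < 1e-2)  # True: xc approaches the minimizer of |x| at 0
```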