Algorithms for the continuous nonlinear resource allocation problem---new implementations and numerical studies
Patriksson (2008) provided a then up-to-date survey on the
continuous, separable, differentiable and convex resource allocation problem
with a single resource constraint. Since the publication of that paper the
interest in the problem has grown: several new applications have arisen where
the problem at hand constitutes a subproblem, and several new algorithms have
been developed for its efficient solution. This paper therefore serves three
purposes. First, it provides an up-to-date extension of the survey of the
literature of the field, complementing the survey in Patriksson (2008) with
more than 20 books and articles. Second, it contributes improvements to some of
these algorithms, in particular with an improvement of the pegging (that is,
variable fixing) process in the relaxation algorithm, and an improved means to
evaluate subsolutions. Third, it numerically evaluates several relaxation
(primal) and breakpoint (dual) algorithms, incorporating a variety of pegging
strategies, as well as a quasi-Newton method. We conclude that our
modification of the relaxation algorithm performs best. At least for
problem sizes of up to 30 million variables, the practical time complexity of
the breakpoint and relaxation algorithms is linear.
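The dual structure these algorithms exploit can be illustrated on the simplest quadratic instance. The sketch below is not any of the surveyed algorithms themselves, but a minimal bisection on the single dual multiplier for min ½‖x − y‖² subject to one resource constraint and bounds; the function name and tolerance are illustrative.

```python
def allocate(y, lo, hi, b, tol=1e-10):
    """Solve  min 0.5*sum((x_i - y_i)^2)  s.t.  sum(x_i) = b,  lo_i <= x_i <= hi_i
    by bisection on the dual multiplier lam, where x_i(lam) = clip(y_i + lam).
    Assumes feasibility: sum(lo) <= b <= sum(hi)."""
    def x_of(lam):
        return [min(max(yi + lam, l), u) for yi, l, u in zip(y, lo, hi)]
    # Bracket: at lam = a every x_i sits at lo_i, at lam = c every x_i at hi_i.
    a = min(l - yi for yi, l in zip(y, lo))
    c = max(u - yi for yi, u in zip(y, hi))
    while c - a > tol:
        m = 0.5 * (a + c)
        if sum(x_of(m)) < b:   # sum(x_of) is nondecreasing in lam
            a = m
        else:
            c = m
    return x_of(0.5 * (a + c))
```

A breakpoint method would sort the kink points l_i − y_i and u_i − y_i instead of bisecting, and a pegging method would fix (peg) variables proven to sit at a bound between iterations; both recover the same multiplier.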
Parallel decomposition methods for linearly constrained problems subject to simple bounds with application to SVMs training
We consider the convex quadratic linearly constrained problem
with bounded variables and a huge, dense Hessian matrix that arises
in many applications, such as the training problem of bias support vector machines.
We propose a decomposition algorithmic scheme suitable for parallel implementation,
and we prove global convergence under suitable conditions. Focusing
on support vector machine training, we outline how these assumptions
can be satisfied in practice and suggest various specific implementations.
Extensions of the theoretical results to general linearly constrained problems
are provided. We include numerical results on support vector machines with
the aim of showing the viability and the effectiveness of the proposed scheme.
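As an illustration of the decomposition principle (not of the paper's parallel scheme), here is a minimal serial sketch for the SVM-style dual QP min ½αᵀQα − Σᵢαᵢ subject to yᵀα = 0 and 0 ≤ α ≤ C. Each step optimizes one pair of variables exactly along a direction that preserves the linear constraint; the cyclic pair selection and tolerances are simplifications chosen for brevity.

```python
import numpy as np

def pair_decomposition(Q, y, C, sweeps=100):
    """Sketch: min 0.5*a@Q@a - sum(a)  s.t.  y@a = 0,  0 <= a <= C,
    updating one pair (i, j) at a time along d with d_i = y_i, d_j = -y_j,
    which keeps y@a constant."""
    n = len(y)
    a = np.zeros(n)
    g = -np.ones(n)                        # gradient Q@a - 1 at a = 0
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                d = np.zeros(n)
                d[i], d[j] = y[i], -y[j]
                curv = d @ Q @ d
                if curv <= 1e-12:
                    continue
                t = -(g @ d) / curv        # exact unconstrained step
                # clip t so that 0 <= a + t*d <= C on both coordinates
                for k in (i, j):
                    if d[k] > 0:
                        t = min(t, (C - a[k]) / d[k])
                        t = max(t, -a[k] / d[k])
                    else:
                        t = min(t, -a[k] / d[k])
                        t = max(t, (C - a[k]) / d[k])
                a += t * d
                g += t * (Q @ d)
    return a
```

A parallel variant, as in the paper, would instead distribute blocks of variables across processors and synchronize the gradient; the pairwise feasible direction is the common building block.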
A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables
We propose a gradient-based method for quadratic programming problems with a
single linear constraint and bounds on the variables. Inspired by the GPCG
algorithm for bound-constrained convex quadratic programming [J.J. Moré and
G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases
until convergence: an identification phase, which performs gradient projection
iterations until either a candidate active set is identified or no reasonable
progress is made, and an unconstrained minimization phase, which reduces the
objective function in a suitable space defined by the identification phase, by
applying either the conjugate gradient method or a recently proposed spectral
gradient method. However, the algorithm differs from GPCG not only because it
deals with a more general class of problems, but mainly in the way it stops
the minimization phase. This is based on a comparison between a measure of
optimality in the reduced space and a measure of bindingness of the variables
that are on the bounds, defined by extending the concept of proportioning,
which was proposed by some authors for box-constrained problems. If the
objective function is bounded, the algorithm converges to a stationary point
thanks to a suitable application of the gradient projection method in the
identification phase. For strictly convex problems, the algorithm converges to
the optimal solution in a finite number of steps even in case of degeneracy.
Extensive numerical experiments show the effectiveness of the proposed
approach.
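The two-phase idea can be sketched on the simpler box-constrained case that inspired GPCG. The code below is a hedged illustration, not the paper's method: it omits the extra linear constraint and the proportioning-based stopping test, uses a direct solve as a stand-in for conjugate gradient in the reduced space, and all names and tolerances are assumptions.

```python
import numpy as np

def two_phase_box_qp(A, b, lo, hi, outer=50):
    """Sketch: min 0.5*x@A@x - b@x  s.t.  lo <= x <= hi  (A symmetric PSD).
    Phase 1 (identification): projected gradient steps guess the active set.
    Phase 2 (minimization): solve in the space of free variables only."""
    x = np.clip(np.zeros_like(b), lo, hi)
    for _ in range(outer):
        g = A @ x - b
        # --- identification phase: a few projected gradient steps
        for _ in range(3):
            step = (g @ g) / (g @ A @ g + 1e-16)     # exact line search
            x = np.clip(x - step * g, lo, hi)
            g = A @ x - b
        # --- minimization phase: reduced problem on the free variables
        free = (x > lo + 1e-12) & (x < hi - 1e-12)
        if free.any():
            Af = A[np.ix_(free, free)]
            bf = b[free] - A[np.ix_(free, ~free)] @ x[~free]
            xf = np.linalg.solve(Af, bf)             # stand-in for CG
            x[free] = np.clip(xf, lo[free], hi[free])
        g = A @ x - b
        # projected-gradient optimality measure
        if np.linalg.norm(np.clip(x - g, lo, hi) - x) < 1e-10:
            break
    return x
```

The paper's contribution lies precisely in what this sketch elides: when to leave the minimization phase, decided by comparing reduced-space optimality against the bindingness of the variables at their bounds.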
A convergent decomposition method for box-constrained optimization problems
In this work we consider the problem of minimizing a continuously differentiable function over a feasible set defined by box constraints. We present a decomposition method based on the solution of a sequence of subproblems. In particular, we state conditions on the rule for selecting the subproblem variables that are sufficient to ensure global convergence of the generated sequence without convexity assumptions. The conditions require selecting suitable variables (related to the violation of the optimality conditions) to guarantee the theoretical convergence properties, while leaving the freedom to select any other group of variables to accelerate convergence.
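The selection rule described above can be sketched as follows. This is a simplified illustration under assumed details, not the paper's method: the working set always contains the index with the largest projected-gradient violation (the ingredient tied to convergence), padded with the runner-up indices as the "free" choices, and each subproblem is handled with a single projected Armijo step.

```python
import numpy as np

def decomposition_box(f, grad, x0, lo, hi, block=2, iters=500):
    """Sketch: min f(x)  s.t.  lo <= x <= hi, f smooth (not necessarily convex).
    lo, hi are numpy arrays; `block` is the working-set size."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    for _ in range(iters):
        g = grad(x)
        # per-variable violation of the first-order optimality conditions
        viol = np.abs(np.clip(x - g, lo, hi) - x)
        if viol.max() < 1e-10:
            break
        W = np.argsort(viol)[-block:]      # most-violating index + extras
        t, fx = 1.0, f(x)
        while True:                        # projected Armijo backtracking
            xt = x.copy()
            xt[W] = np.clip(x[W] - t * g[W], lo[W], hi[W])
            if f(xt) <= fx - 1e-4 * np.dot(g[W], x[W] - xt[W]) or t < 1e-12:
                break
            t *= 0.5
        x = xt
    return x
```

Only the most-violating index is needed for the convergence guarantee; the remaining working-set slots are the degree of freedom the abstract mentions, available to speed up convergence in practice.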
Active-set identification with complexity guarantees of an almost cyclic 2-coordinate descent method with Armijo line search
In this paper, finite active-set identification is established for an
almost cyclic 2-coordinate descent method for problems with one linear coupling
constraint and simple bounds. First, general active-set identification results
are stated for non-convex objective functions. Then, under convexity and a
quadratic growth condition (satisfied by any strongly convex function),
complexity results on the number of iterations required to identify the active
set are given. In our analysis, a simple Armijo line search is used to compute
the stepsize, thus not requiring exact minimizations or additional information
- …