Hessian barrier algorithms for linearly constrained optimization problems
In this paper, we propose an interior-point method for linearly constrained
optimization problems (possibly nonconvex). The method - which we call the
Hessian barrier algorithm (HBA) - combines a forward Euler discretization of
Hessian-Riemannian gradient flows with an Armijo backtracking step-size policy.
In this way, HBA can be seen as an alternative to mirror descent (MD), and
contains as special cases the affine scaling algorithm, regularized Newton
processes, and several other iterative solution methods. Our main result is
that, modulo a non-degeneracy condition, the algorithm converges to the
problem's set of critical points; hence, in the convex case, the algorithm
converges globally to the problem's minimum set. In the case of linearly
constrained quadratic programs (not necessarily convex), we also show that the
method's convergence rate is O(1/k^ρ) for some ρ ∈ (0, 1] that depends only
on the choice of kernel function (i.e., not on the problem's
primitives). These theoretical results are validated by numerical experiments
in standard non-convex test functions and large-scale traffic assignment
problems.

Comment: 27 pages, 6 figures
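To make the iteration concrete, here is a minimal Python sketch of one HBA step for the special case of positivity constraints x > 0 with the entropic kernel h(x) = Σ x_i log x_i, whose Hessian induces the metric diag(1/x_i). The function names and the toy quadratic are illustrative assumptions, not the authors' code, and the paper's setting covers general linear constraints.

```python
import numpy as np

def hba_step(f, grad_f, x, alpha0=1.0, c=1e-4, shrink=0.5, max_backtracks=50):
    """One HBA iteration for min f(x) subject to x > 0, using the entropic
    kernel h(x) = sum(x_i log x_i). Its Hessian is diag(1/x), so the forward
    Euler step along the Hessian-Riemannian gradient flow moves in the
    direction v = -diag(x) @ grad f(x); the step size is then chosen by
    Armijo backtracking while keeping the iterate strictly interior."""
    g = grad_f(x)
    v = -x * g                    # Hessian-Riemannian gradient direction
    decrease = c * (g @ v)        # Armijo sufficient-decrease slope (<= 0)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_new = x + alpha * v
        if np.all(x_new > 0) and f(x_new) <= f(x) + alpha * decrease:
            return x_new
        alpha *= shrink
    return x                      # no acceptable interior step found

# Toy example: a convex quadratic over the positive orthant, so the method
# should converge to the problem's (unique, interior) minimum.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad_f = lambda x: Q @ x - b

x = np.array([0.5, 0.5])          # strictly interior starting point
for _ in range(200):
    x = hba_step(f, grad_f, x)
print(x, f(x))                    # approaches Q^{-1} b ≈ [0.286, 0.857]
```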
Generalized self-concordant Hessian-barrier algorithms
Many problems in statistical learning, imaging, and computer vision involve the optimization of a non-convex objective function with singularities at the boundary of the feasible set. For such challenging instances, we develop a new interior-point technique building on the Hessian-barrier algorithm recently introduced in Bomze, Mertikopoulos, Schachinger and Staudigl [SIAM J. Optim. 2019, 29(3), pp. 2100-2127], where the Riemannian metric is induced by a generalized self-concordant function. This class of functions is sufficiently general to include most of the commonly used barrier functions in the literature of interior-point methods. We prove global convergence to an approximate stationary point of the method, and in cases where the feasible set admits an easily computable self-concordant barrier, we verify worst-case optimal iteration complexity of the method. Applications in non-convex statistical estimation and Lp-minimization are discussed to demonstrate the efficiency of the method.
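As a rough illustration of how the choice of generalized self-concordant kernel enters, the sketch below swaps the entropic kernel used in hba_step above for the log-barrier of the positive orthant; only the search direction changes. This is again a hypothetical sketch for the x > 0 special case, not the paper's general method, which handles arbitrary feasible sets admitting generalized self-concordant barriers.

```python
import numpy as np

def hba_step_log_barrier(f, grad_f, x, alpha0=1.0, c=1e-4, shrink=0.5,
                         max_backtracks=50):
    """Same Armijo loop as hba_step above, but with the Riemannian metric
    induced by the log-barrier h(x) = -sum(log x_i), a standard
    self-concordant function. Its Hessian is diag(1/x**2), so the search
    direction becomes the affine-scaling-type step v = -x**2 * grad f(x)."""
    g = grad_f(x)
    v = -x**2 * g                 # inverse metric applied to -grad f
    decrease = c * (g @ v)        # Armijo sufficient-decrease slope (<= 0)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_new = x + alpha * v
        if np.all(x_new > 0) and f(x_new) <= f(x) + alpha * decrease:
            return x_new
        alpha *= shrink
    return x
```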