Self-scaled barriers for irreducible symmetric cones
Self-scaled barrier functions are fundamental objects in the theory of
interior-point methods for linear optimization over symmetric cones, of which
linear and semidefinite programming are special cases. We classify all
self-scaled barriers over irreducible symmetric cones and show that these
functions are merely homothetic transformations of the universal barrier
function. Together with a decomposition theorem for self-scaled barriers, this
concludes the algebraic classification theory of these functions. After
introducing the reader to the concepts relevant to the problem and tracing the
history of the subject, we start by deriving our result from first principles
in the important special case of semidefinite programming. We then generalise
these arguments to irreducible symmetric cones by invoking results from the
theory of Euclidean Jordan algebras.
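For the semidefinite cone, the prototypical self-scaled barrier is F(X) = -log det X. A minimal numerical sketch (our own illustration using NumPy, not from the paper) of its logarithmic homogeneity, F(tX) = F(X) - ν log t with barrier parameter ν = n:

```python
import numpy as np

def logdet_barrier(X):
    # F(X) = -log det X, the standard self-scaled barrier for the PSD cone
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return -logdet

# Logarithmic homogeneity: F(tX) = F(X) - n*log(t) on the n x n PSD cone
n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)   # positive definite by construction
t = 2.5
lhs = logdet_barrier(t * X)
rhs = logdet_barrier(X) - n * np.log(t)
print(lhs, rhs)
```

Here det(tX) = t^n det X, so the identity holds exactly; the barrier parameter ν equals the matrix dimension n.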
Generalized Self-concordant Hessian-barrier algorithms
Many problems in statistical learning, imaging, and computer vision involve
the optimization of a non-convex objective function with singularities at the
boundary of the feasible set. For such challenging instances, we develop a new
interior-point technique building on the Hessian-barrier algorithm recently
introduced in Bomze, Mertikopoulos, Schachinger and Staudigl, [SIAM J. Opt.
2019 29(3), pp. 2100-2127], where the Riemannian metric is induced by a
generalized self-concordant function. This class of functions is sufficiently
general to include most of the commonly used barrier functions in the
literature of interior point methods. We prove global convergence to an
approximate stationary point of the method, and in cases where the feasible set
admits an easily computable self-concordant barrier, we verify worst-case
optimal iteration complexity of the method. Applications in non-convex
statistical estimation and Lp-minimization are discussed to demonstrate the
efficiency of the method.
Primal-Dual Algorithms for Semidefinite Optimization Problems Based on a Generalized Trigonometric Barrier Function
Recently, M. Bouafoa et al. (Journal of Optimization Theory and Applications, August 2016) investigated a new kernel function which differs from the self-regular kernel functions. The kernel function has a trigonometric barrier term. In this paper we generalize the analysis presented in the above paper to semidefinite optimization problems (SDO). It is shown that for interior-point methods based on this function, the iteration bound for large-update methods is improved significantly, while for small-update methods the iteration bound is the best currently known for primal-dual interior-point methods. The analysis for SDO deviates significantly from the analysis for linear optimization. Several new tools and techniques are derived in this paper.
Kernel-Based Interior-Point Methods for Cartesian P*(κ)-Linear Complementarity Problems over Symmetric Cones
We present an interior point method for Cartesian P*(κ)-linear complementarity problems over symmetric cones (SCLCPs). The Cartesian P*(κ)-SCLCPs have recently been introduced as a generalization of the more commonly known and more widely used monotone SCLCPs. The IPM is based on barrier functions defined by a large class of univariate functions called eligible kernel functions, which have recently been used successfully to design new IPMs for various optimization problems. Eligible barrier (kernel) functions are used in calculating the Nesterov-Todd search directions and the default step size, which leads to very good complexity results for the method. For some specific eligible kernel functions we match the best known iteration bound for long-step methods, while for short-step methods the best iteration bound is matched in all cases.
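As an illustration of the kernel-function machinery mentioned above, here is a minimal sketch (our own notation, not taken from the paper) of the classical logarithmic kernel, the simplest member of the eligible class, and the separable barrier it induces:

```python
import numpy as np

def psi(t):
    # Classical logarithmic kernel psi(t) = (t^2 - 1)/2 - log t,
    # the prototype from which eligible kernel functions generalize
    return 0.5 * (t * t - 1.0) - np.log(t)

def barrier(v):
    # Separable proximity/barrier measure Phi(v) = sum_i psi(v_i),
    # applied to the scaled variable v in kernel-based IPMs
    return np.sum(psi(v))

# Kernel functions vanish with zero slope at t = 1 (i.e., on the central
# path, where Phi = 0) and blow up both at the boundary t -> 0+ and as
# t -> infinity, which is what drives iterates back toward the path.
print(psi(1.0), psi(0.5), psi(2.0), barrier(np.ones(4)))
```

The search direction and step size in such methods are then derived from Phi; different eligible kernels trade off the growth at 0 and at infinity, which is what controls the large-update iteration bound.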
Local quadratic convergence of polynomial-time interior-point methods for conic optimization problems
In this paper, we establish local quadratic convergence of polynomial-time interior-point methods for general conic optimization problems. The main structural property used in our analysis is the logarithmic homogeneity of self-concordant barrier functions. We propose new path-following predictor-corrector schemes which work only in the dual space. They are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the local quadratic one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary). It appears that in order to obtain local superlinear convergence, we need to tighten the neighborhood of the central path proportionally to the current duality gap.
Keywords: conic optimization problem, worst-case complexity analysis, self-concordant barriers, polynomial-time methods, predictor-corrector methods, local quadratic convergence
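To make the central-path idea above concrete, a toy sketch (our own illustration, not the authors' scheme): Newton's method computes points x(t) on the central path of min x over [0,1] with the self-concordant barrier -log x - log(1-x), whose barrier parameter is ν = 2, and the objective gap decays like ν/t:

```python
def central_point(c, t, x0=0.5, iters=50):
    # Newton's method on g(x) = t*c - 1/x + 1/(1-x) = 0, the first-order
    # optimality condition for  min_x  t*c*x - log(x) - log(1-x).
    x = x0
    for _ in range(iters):
        g = t * c - 1.0 / x + 1.0 / (1.0 - x)
        h = 1.0 / x**2 + 1.0 / (1.0 - x)**2   # barrier Hessian, always > 0
        x -= g / h
        x = min(max(x, 1e-12), 1.0 - 1e-12)   # clip back into the open interval
    return x

# As t grows, x(t) tracks the true minimizer x* = 0 of min x on [0,1];
# the standard bound gives c*x(t) - c*x* <= nu/t with nu = 2 here.
for t in [1.0, 10.0, 100.0, 1000.0]:
    print(t, central_point(c=1.0, t=t))
```

Path-following methods increase t geometrically and re-center with a few Newton steps; the predictor-corrector schemes in the paper organize this in the dual space with an adaptive step toward the boundary.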
Interior-point methods for P*(κ)-linear complementarity problems based on a generalized trigonometric barrier function
Recently, M. Bouafoa et al. investigated a new kernel function which differs from the self-regular kernel functions. The kernel function has a trigonometric barrier term. In this paper we generalize the analysis presented in the above paper to linear complementarity problems (LCPs). It is shown that the iteration bounds for primal-dual large-update and small-update interior-point methods based on this function are as good as the currently best known iteration bounds for these types of methods. The analysis for LCPs deviates significantly from the analysis for linear optimization. Several new tools and techniques are derived in this paper.
Interior Point Methods with a Gradient Oracle
We provide an interior point method based on quasi-Newton iterations, which
only requires first-order access to a strongly self-concordant barrier
function. To achieve this, we extend the techniques of Dunagan-Harvey [STOC
'07] to maintain a preconditioner, while using only first-order information. We
measure the quality of this preconditioner in terms of its relative
eccentricity to the unknown Hessian matrix, and we generalize these techniques
to convex functions with a slowly-changing Hessian. We combine this with an
interior point method to show that, given first-order access to an appropriate
barrier function for a convex set, we can solve well-conditioned linear
optimization problems over that set to high precision, in a running time
governed by the self-concordance parameter of the barrier function and the
time required to make a gradient query. As a consequence we show that:
Linear optimization over convex sets can be solved in a running time that
parallels the one achieved by state-of-the-art algorithms for cutting plane
methods, when replacing separation oracles with first-order oracles for an
appropriate barrier function.
We can solve semidefinite programs in a running time that improves over the
state-of-the-art algorithms in certain parameter regimes.
Along the way we develop a host of tools allowing us to control the evolution
of our potential functions, using techniques from matrix analysis and Schur
convexity.
Hessian barrier algorithms for linearly constrained optimization problems
In this paper, we propose an interior-point method for linearly constrained
optimization problems (possibly nonconvex). The method, which we call the
Hessian barrier algorithm (HBA), combines a forward Euler discretization of
Hessian Riemannian gradient flows with an Armijo backtracking step-size policy.
In this way, HBA can be seen as an alternative to mirror descent (MD), and
contains as special cases the affine scaling algorithm, regularized Newton
processes, and several other iterative solution methods. Our main result is
that, modulo a non-degeneracy condition, the algorithm converges to the
problem's set of critical points; hence, in the convex case, the algorithm
converges globally to the problem's minimum set. In the case of linearly
constrained quadratic programs (not necessarily convex), we also show that the
method converges at a sublinear rate whose exponent depends only on the choice
of kernel function (i.e., not on the problem's primitives). These theoretical
results are validated by numerical experiments on standard non-convex test
functions and large-scale traffic assignment problems.
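A minimal sketch of the HBA idea on the positive orthant (a hypothetical example of ours, not code from the paper): with the log-barrier Hessian diag(1/x_i^2), the forward Euler step of the Hessian Riemannian gradient flow is the classical affine-scaling direction, safeguarded here by Armijo backtracking:

```python
import numpy as np

def hba_step(x, grad_f, f, alpha0=1.0, beta=0.5, sigma=1e-4):
    # One Hessian-barrier step on the positive orthant: inverting the
    # log-barrier Hessian diag(1/x_i^2) scales the gradient by x_i^2
    # elementwise -- the affine-scaling direction, a special case of HBA.
    g = grad_f(x)
    d = -(x ** 2) * g
    fx = f(x)
    alpha = alpha0
    # Armijo backtracking: shrink alpha until the iterate stays strictly
    # feasible and achieves sufficient decrease.
    while True:
        x_new = x + alpha * d
        if np.all(x_new > 0) and f(x_new) <= fx + sigma * alpha * g.dot(d):
            return x_new
        alpha *= beta

# Minimize a convex quadratic over the positive orthant (minimizer at x = 1).
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
grad = lambda x: x - 1.0
x = np.full(3, 0.1)
for _ in range(200):
    x = hba_step(x, grad, f)
print(x)
```

Because the direction vanishes quadratically at the boundary, iterates remain strictly feasible without any explicit projection, which is the mechanism the abstract describes.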