On the relationship between bilevel decomposition algorithms and direct interior-point methods
Engineers have been using bilevel decomposition algorithms to solve certain nonconvex large-scale optimization problems arising in engineering design projects. These algorithms transform the large-scale problem into a bilevel program with one upper-level problem (the master problem) and several lower-level problems (the subproblems). Unfortunately, there is analytical and numerical evidence that some of these commonly used bilevel decomposition algorithms may fail to converge even when the starting point is very close to the minimizer. In this paper, we establish a relationship between a particular bilevel decomposition algorithm, which performs only one iteration of an interior-point method when solving the subproblems, and a direct interior-point method, which solves the problem in its original (integrated) form. Using this relationship, we formally prove that the bilevel decomposition algorithm converges locally at a superlinear rate. The relevance of our analysis is that it bridges the gap between the incipient local convergence theory of bilevel decomposition algorithms and the mature theory of direct interior-point methods.
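The master/subproblem structure described in this abstract can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's actual problem or algorithm: we minimize a separable quadratic in a shared variable x and local variables y_i, where each subproblem takes a single Newton step on its own variable (mimicking the one-iteration scheme) and the master problem takes a gradient step on x.

```python
# Toy bilevel decomposition (illustrative, not the paper's algorithm):
# minimize over (x, y1, y2):  (x - 1)^2 + sum_i [(y_i - x)^2 + y_i^2].
# Lower level: each subproblem takes ONE Newton step on its local y_i.
# Upper level: the master problem takes a gradient step on the shared x.

def subproblem_newton_step(x, y):
    """One Newton step on g(y) = (y - x)^2 + y^2 for fixed x."""
    grad = 2.0 * (y - x) + 2.0 * y   # g'(y)
    hess = 4.0                        # g''(y), constant for this quadratic
    return y - grad / hess

def master_gradient(x, ys):
    """Gradient in x of the overall objective at the current y values."""
    g = 2.0 * (x - 1.0)
    for y in ys:
        g += 2.0 * (x - y)            # d/dx of (y - x)^2
    return g

def bilevel_decomposition(steps=200, lr=0.1):
    x, ys = 0.0, [0.0, 0.0]
    for _ in range(steps):
        ys = [subproblem_newton_step(x, y) for y in ys]  # lower level
        x -= lr * master_gradient(x, ys)                 # upper level
    return x, ys
```

On this toy instance the iteration's fixed point coincides with the minimizer of the integrated problem (x = 1/2, y_i = 1/4), which is the kind of agreement between the decomposed and integrated formulations that the paper's convergence analysis makes rigorous.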
Interior Point Methods with a Gradient Oracle
We provide an interior point method based on quasi-Newton iterations, which only requires first-order access to a strongly self-concordant barrier function. To achieve this, we extend the techniques of Dunagan-Harvey [STOC '07] to maintain a preconditioner while using only first-order information. We measure the quality of this preconditioner in terms of its relative eccentricity to the unknown Hessian matrix, and we generalize these techniques to convex functions with a slowly-changing Hessian. We combine this with an interior point method to show that, given first-order access to an appropriate barrier function for a convex set, we can solve well-conditioned linear optimization problems over that set to the desired precision, in a running time governed by the self-concordance parameter of the barrier function and the time required to make a gradient query. As a consequence we show that:

Linear optimization over convex sets can be solved in a running time that parallels the one achieved by state-of-the-art cutting plane methods, when separation oracles are replaced with first-order oracles for an appropriate barrier function.

We can solve semidefinite programs in a running time improving over state-of-the-art algorithms in a certain parameter regime.

Along the way we develop a host of tools allowing us to control the evolution of our potential functions, using techniques from matrix analysis and Schur convexity.

Comment: STOC 202
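To make the notion of "first-order access to a barrier function" in the abstract above concrete, here is a minimal sketch under assumed simplifications: linear optimization over the box [0,1]^n with the standard log-barrier, where each centering problem along the central path is solved by plain gradient descent on the barrier. The paper's method additionally maintains a quasi-Newton preconditioner; none of that machinery appears here, and all names and parameters below are illustrative.

```python
# Minimal path-following sketch (not the paper's method): minimize c.x
# over the box [0,1]^n using only GRADIENT access to the log-barrier
#   phi(x) = -sum_i log(x_i) - sum_i log(1 - x_i).
# Each centering problem min_x  t * c.x + phi(x)  is solved with plain
# gradient steps, i.e. purely first-order queries to the barrier.

def barrier_grad(x):
    """Gradient of the log-barrier for the box [0,1]^n."""
    return [-1.0 / xi + 1.0 / (1.0 - xi) for xi in x]

def center(c, x, t, steps=2000, lr=1e-4):
    """Approximately minimize t * c.x + phi(x) by gradient descent."""
    for _ in range(steps):
        bg = barrier_grad(x)
        x = [xi - lr * (t * ci + gi) for xi, ci, gi in zip(x, c, bg)]
        x = [min(max(xi, 1e-6), 1.0 - 1e-6) for xi in x]  # stay strictly inside
    return x

def barrier_method(c, t0=1.0, mu=2.0, outer=8):
    x = [0.5] * len(c)        # strictly feasible starting point
    t = t0
    for _ in range(outer):
        x = center(c, x, t)   # follow the central path at parameter t
        t *= mu               # tighten the barrier's influence
    return x
```

For c = (1, -1) the iterates drift toward the vertex (0, 1) as t grows, which is the behavior a path-following scheme should exhibit; the paper's contribution can be read as replacing the naive gradient steps above with preconditioned quasi-Newton steps whose cost is controlled in terms of the barrier's self-concordance parameter and the gradient-query time.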