
    On the relationship between bilevel decomposition algorithms and direct interior-point methods

    Engineers have been using bilevel decomposition algorithms to solve certain nonconvex large-scale optimization problems arising in engineering design projects. These algorithms transform the large-scale problem into a bilevel program with one upper-level problem (the master problem) and several lower-level problems (the subproblems). Unfortunately, there is analytical and numerical evidence that some of these commonly used bilevel decomposition algorithms may fail to converge even when the starting point is very close to the minimizer. In this paper, we establish a relationship between a particular bilevel decomposition algorithm, which performs only one iteration of an interior-point method when solving the subproblems, and a direct interior-point method, which solves the problem in its original (integrated) form. Using this relationship, we formally prove that the bilevel decomposition algorithm converges locally at a superlinear rate. The relevance of our analysis is that it bridges the gap between the incipient local convergence theory of bilevel decomposition algorithms and the mature theory of direct interior-point methods.
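    To make the decomposition pattern concrete, here is a minimal, hypothetical Python sketch: each outer iteration performs exactly one damped Newton step on every subproblem's log-barrier objective before re-solving the master problem. The toy objective, the variable names, and all parameters are illustrative assumptions; the paper's actual algorithm and its superlinear convergence analysis are considerably more involved.

```python
import numpy as np

# Hypothetical toy instance of the integrated problem (illustrative only):
#   minimize (x - 1)^2 + sum_i (y_i - a_i * x)^2   subject to y_i > 0,
# where x is the shared (master) variable and each y_i is local to
# subproblem i. The log-barrier for y_i > 0 gives subproblem objectives
#   phi_i(y_i; x) = (y_i - a_i * x)^2 - mu * log(y_i).
a = np.array([0.7, 1.3])
mu = 1e-2                 # fixed barrier parameter, for simplicity
x = 0.5                   # master variable
y = np.ones_like(a)       # subproblem variables, kept strictly positive

for it in range(50):
    # Subproblems: ONE damped Newton step each on the barrier objective,
    # loosely mirroring the single interior-point iteration per subproblem
    # analyzed in the paper.
    grad = 2.0 * (y - a * x) - mu / y
    hess = 2.0 + mu / y**2
    y_trial = y - grad / hess
    y = np.maximum(y_trial, 0.5 * y)   # fraction-to-boundary safeguard

    # Master problem: exact minimization over x given y (quadratic in x):
    # d/dx [(x - 1)^2 + sum_i (y_i - a_i x)^2] = 0.
    x = (1.0 + a @ y) / (1.0 + a @ a)

print(f"x = {x:.4f}, y = {np.round(y, 4)}")   # converges near x = 1, y = a
```

    The point of the sketch is the coordination structure: the subproblems never get fully solved between master updates, which is exactly the regime whose local convergence the paper analyzes.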

    Interior Point Methods with a Gradient Oracle

    We provide an interior point method based on quasi-Newton iterations, which only requires first-order access to a strongly self-concordant barrier function. To achieve this, we extend the techniques of Dunagan-Harvey [STOC '07] to maintain a preconditioner while using only first-order information. We measure the quality of this preconditioner in terms of its relative eccentricity to the unknown Hessian matrix, and we generalize these techniques to convex functions with a slowly changing Hessian. We combine this with an interior point method to show that, given first-order access to an appropriate barrier function for a convex set $K$, we can solve well-conditioned linear optimization problems over $K$ to $\varepsilon$ precision in time $\widetilde{O}\left(\left(\mathcal{T}+n^{2}\right)\sqrt{n\nu}\log\left(1/\varepsilon\right)\right)$, where $\nu$ is the self-concordance parameter of the barrier function and $\mathcal{T}$ is the time required to make a gradient query. As a consequence we show that:
    • Linear optimization over $n$-dimensional convex sets can be solved in time $\widetilde{O}\left(\left(\mathcal{T}n+n^{3}\right)\log\left(1/\varepsilon\right)\right)$. This parallels the running time achieved by state-of-the-art cutting plane methods, when replacing separation oracles with first-order oracles for an appropriate barrier function.
    • We can solve semidefinite programs involving $m \geq n$ matrices in $\mathbb{R}^{n\times n}$ in time $\widetilde{O}\left(mn^{4}+m^{1.25}n^{3.5}\log\left(1/\varepsilon\right)\right)$, improving over state-of-the-art algorithms in the case where $m=\Omega\left(n^{\frac{3.5}{\omega-1.25}}\right)$.
    Along the way we develop a host of tools allowing us to control the evolution of our potential functions, using techniques from matrix analysis and Schur convexity.
    Comment: STOC 202
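    The paper's contribution is a specific Dunagan-Harvey-style preconditioner maintained from gradient information alone; the sketch below is not that algorithm. It is only a conceptual Python illustration of path-following over a convex set using nothing but first-order queries to a barrier, substituting a plain BFGS inverse-Hessian surrogate for the paper's preconditioner. The box-constrained LP, the function names (f_t, bfgs_center, solve_lp_over_box), and all tolerances are assumptions made for the example.

```python
import numpy as np

def f_t(x, t, c, lo, hi):
    # Barrier-penalized objective f_t(x) = t * c^T x + F(x), where
    # F(x) = -sum log(x - lo) - sum log(hi - x) is the box log-barrier
    # (self-concordance parameter nu = 2n).
    return t * (c @ x) - np.sum(np.log(x - lo)) - np.sum(np.log(hi - x))

def grad_f_t(x, t, c, lo, hi):
    # First-order (gradient) oracle for f_t; no Hessian is ever formed.
    return t * c - 1.0 / (x - lo) + 1.0 / (hi - x)

def bfgs_center(x, t, c, lo, hi, inner=200):
    """Approximate centering using only gradient queries (plain BFGS)."""
    n = x.size
    H = np.eye(n)                      # inverse-Hessian surrogate
    g = grad_f_t(x, t, c, lo, hi)
    tol = 1e-8 * (1.0 + t)             # looser tolerance as t grows
    for _ in range(inner):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g
        # Backtracking (Armijo) line search preserving strict feasibility.
        alpha, fx = 1.0, f_t(x, t, c, lo, hi)
        while alpha > 1e-14:
            xn = x + alpha * d
            if (np.all(xn > lo) and np.all(xn < hi)
                    and f_t(xn, t, c, lo, hi) <= fx + 1e-4 * alpha * (g @ d)):
                break
            alpha *= 0.5
        else:
            break                      # line search failed; stop centering
        gn = grad_f_t(xn, t, c, lo, hi)
        s, yv = xn - x, gn - g
        if s @ yv > 1e-12:             # curvature guard for the BFGS update
            rho = 1.0 / (s @ yv)
            V = np.eye(n) - rho * np.outer(s, yv)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = xn, gn
    return x

def solve_lp_over_box(c, lo, hi, eps=1e-6):
    """Follow the central path until the duality-gap bound nu/t < eps."""
    x = 0.5 * (lo + hi)                # strictly interior starting point
    t, nu = 1.0, 2 * c.size
    while nu / t >= eps:
        x = bfgs_center(x, t, c, lo, hi)
        t *= 2.0                       # geometric barrier-parameter schedule
    return x

c = np.array([1.0, -2.0, 0.5])
lo, hi = np.zeros(3), np.ones(3)
print(np.round(solve_lp_over_box(c, lo, hi), 4))   # approx. [0, 1, 0]
```

    Here the gradient oracle grad_f_t happens to be cheap, but the same outer loop applies whenever only first-order access to the barrier is available, which is the regime the paper targets with its eccentricity-controlled preconditioner.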