50 research outputs found

    Riemannian Adaptive Regularized Newton Methods with Hölder Continuous Hessians

    This paper presents strong worst-case iteration and operation complexity guarantees for Riemannian adaptive regularized Newton methods, a unified framework encompassing both Riemannian adaptive regularization (RAR) methods and Riemannian trust region (RTR) methods. We comprehensively characterize the sources of approximation in second-order manifold optimization methods: the objective function's smoothness, the retraction's smoothness, and the subproblem solver's inexactness. Specifically, for a function with a $\mu$-Hölder continuous Hessian, when equipped with a retraction featuring a $\nu$-Hölder continuous differential and a $\theta$-inexact subproblem solver, both RTR and RAR with $2+\alpha$ regularization (where $\alpha=\min\{\mu,\nu,\theta\}$) locate an $(\epsilon,\epsilon^{\alpha/(1+\alpha)})$-approximate second-order stationary point within at most $O(\epsilon^{-(2+\alpha)/(1+\alpha)})$ iterations and at most $\tilde{O}(\epsilon^{-(4+3\alpha)/(2(1+\alpha))})$ Hessian-vector products. These complexity results are novel and sharp, and they reduce to an iteration complexity of $O(\epsilon^{-3/2})$ and an operation complexity of $\tilde{O}(\epsilon^{-7/4})$ when $\alpha=1$.
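The $(2+\alpha)$-regularization mechanism described in the abstract can be illustrated in the flat (Euclidean) setting. The sketch below is a hedged simplification, not the paper's Riemannian framework: there is no manifold or retraction, the subproblem is solved by crude gradient descent standing in for the paper's inexact solver, and the function name `adaptive_reg_newton` and all parameters are illustrative choices.

```python
import numpy as np

def adaptive_reg_newton(f, grad, hess, x0, alpha=1.0, sigma=1.0,
                        tol=1e-6, max_iter=100):
    """Euclidean sketch of a (2+alpha)-regularized adaptive Newton method.

    At each iterate x with gradient g and Hessian H, approximately minimize
        m(s) = f(x) + g^T s + 0.5 s^T H s + sigma/(2+alpha) * ||s||^(2+alpha)
    and adapt sigma based on achieved versus predicted decrease.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Inexact subproblem solve: plain gradient descent on the model.
        s = np.zeros_like(x)
        for _ in range(200):
            ns = np.linalg.norm(s)
            s -= 1e-2 * (g + H @ s + sigma * ns**alpha * s)
        pred = -(g @ s + 0.5 * s @ H @ s)      # predicted decrease (quadratic part)
        actual = f(x) - f(x + s)
        if pred > 0 and actual >= 0.1 * pred:  # successful step: accept, relax sigma
            x = x + s
            sigma = max(sigma / 2.0, 1e-8)
        else:                                  # unsuccessful: strengthen regularization
            sigma *= 2.0
    return x
```

On a manifold, the update `x + s` would be replaced by a retraction $R_x(s)$, and the gradient and Hessian by their Riemannian counterparts.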

    An accelerated first-order method with complexity analysis for solving cubic regularization subproblems

    We propose a first-order method to solve the cubic regularization subproblem (CRS) based on a novel reformulation. The reformulation is a constrained convex optimization problem whose feasible region admits an easily computable projection. Our reformulation requires computing the minimum eigenvalue of the Hessian. To avoid the expensive computation of the exact minimum eigenvalue, we develop a surrogate problem in which the exact minimum eigenvalue is replaced with an approximate one. We then apply first-order methods, such as Nesterov's accelerated projected gradient method (APG) and the projected Barzilai-Borwein method, to solve the surrogate problem. As our main theoretical contribution, we show that when an $\epsilon$-approximate minimum eigenvalue is computed by the Lanczos method and the surrogate problem is approximately solved by APG, our approach returns an $\epsilon$-approximate solution to CRS in $\tilde O(\epsilon^{-1/2})$ matrix-vector multiplications (where $\tilde O(\cdot)$ hides logarithmic factors). Numerical experiments show that our methods are comparable to and outperform the Krylov subspace method in the easy and hard cases, respectively. We further implement our methods as subproblem solvers within adaptive cubic regularization methods, and numerical results show that our algorithms are comparable to the state-of-the-art algorithms.
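For concreteness, the cubic regularization subproblem itself can be attacked directly with a plain first-order method. The sketch below is an illustrative baseline only, not the paper's approach: it uses fixed-step gradient descent on the CRS objective rather than the paper's convex reformulation, approximate minimum eigenvalue, or APG/Barzilai-Borwein solvers, and `crs_gradient_method` is an assumed name.

```python
import numpy as np

def crs_gradient_method(g, H, sigma, iters=1000, lr=0.05):
    # Baseline gradient descent on the cubic regularization subproblem
    #     min_s  g^T s + 0.5 s^T H s + (sigma/3) ||s||^3,
    # whose gradient is g + H s + sigma ||s|| s.
    s = np.zeros_like(g)
    for _ in range(iters):
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s
```

When $H$ is positive semidefinite the CRS objective is convex (the cubic term $\|s\|^3$ is convex), so a point with vanishing gradient is a global minimizer of the model; the hard case arises when $H$ is indefinite.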

    DC Algorithm for Sample Average Approximation of Chance Constrained Programming: Convergence and Numerical Results

    Chance constrained programming refers to an optimization problem with uncertain constraints that must be satisfied with at least a prescribed probability level. In this work, we study a class of structured chance constrained programs in the data-driven setting, where the objective function is a difference-of-convex (DC) function and the functions in the chance constraint are all convex. By exploiting this structure, we reformulate the problem as a DC constrained DC program. Then, we propose a proximal DC algorithm (pDCA) for solving the reformulation. Moreover, we prove the convergence of the proposed algorithm based on the Kurdyka-Łojasiewicz property and derive the iteration complexity for finding an approximate KKT point. We point out that the proposed pDCA and its associated analysis apply to general DC constrained DC programs, which may be of independent interest. To support and complement our theoretical development, we show via numerical experiments that our proposed approach is competitive with a host of existing approaches. (Comment: 31 pages, 3 tables)
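A proximal DC iteration of the kind described can be sketched on a toy DC model. Flagged assumptions: the objective below (least squares plus the $\ell_1 - \ell_2$ sparsity penalty, a standard DC example) is an illustrative stand-in for, not an instance of, the paper's chance-constrained reformulation, and the unconstrained iteration below handles no DC constraints.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdca_l1_minus_l2(A, b, lam=0.1, iters=300):
    # Proximal DC algorithm sketch for the toy DC program
    #     min_x 0.5||Ax - b||^2 + lam*(||x||_1 - ||x||_2),
    # with smooth convex part f(x) = 0.5||Ax - b||^2, nonsmooth convex part
    # lam*||x||_1, and concave part -lam*||x||_2 linearized at each iterate.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        nx = np.linalg.norm(x)
        xi = x / nx if nx > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        step = x - (A.T @ (A @ x - b) - lam * xi) / L
        x = soft_threshold(step, lam / L)
    return x
```

Each iteration majorizes the concave part $-\lambda\|x\|_2$ by its linearization at $x_k$, so the proximal-gradient surrogate upper-bounds the objective and the iterates decrease it monotonically.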

    Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds

    This paper investigates simple bilevel optimization problems in which the upper-level objective minimizes a composite convex function over the set of optimal solutions of a composite convex lower-level problem. Existing methods for such problems either only guarantee asymptotic convergence, have slow sublinear rates, or require strong assumptions. To address these challenges, we develop a novel penalty-based approach that employs the accelerated proximal gradient (APG) method. Under an $\alpha$-Hölderian error bound condition on the lower-level objective, our algorithm attains an $(\epsilon, l_F^{-\beta}\epsilon^{\beta})$-optimal solution for any $\beta>0$ within $\mathcal{O}\left(\sqrt{L_{f_1}/\epsilon}\right)+\mathcal{O}\left(\sqrt{l_F^{\max\{\alpha,\beta\}}L_{g_1}/\epsilon^{\max\{\alpha,\beta\}}}\right)$ iterations, where $l_F$, $L_{f_1}$, and $L_{g_1}$ denote the Lipschitz constants of the upper-level objective and of the gradients of the smooth parts of the upper- and lower-level objectives, respectively. If the smooth part of the upper-level objective is strongly convex, the result improves further. We also establish complexity results when both the upper- and lower-level objectives are general convex nonsmooth functions. Numerical experiments demonstrate the effectiveness of our algorithms.
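The penalty idea can be illustrated on a tiny simple-bilevel instance. In the sketch below, plain gradient descent stands in for the paper's APG solver, and the doubling penalty schedule, step sizes, and the name `penalty_bilevel` are all illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def penalty_bilevel(gradF, gradg, x0, lam0=1.0, rounds=8, inner=500, lr=0.05):
    # Penalty sketch for the simple bilevel problem
    #     min F(x)  subject to  x in argmin_x g(x).
    # For an increasing penalty lam, approximately minimize F(x) + lam * g(x)
    # by gradient descent; the constant optimal value g* of the lower level
    # drops out of the gradient, so it never needs to be known here.
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(rounds):
        for _ in range(inner):
            x -= lr / (1.0 + lam) * (gradF(x) + lam * gradg(x))
        lam *= 2.0                 # tighten the penalty each round
    return x

# Toy instance: the lower level g(x) = 0.5*(x1 + x2 - 2)^2 has the line
# x1 + x2 = 2 as its solution set; the upper level F(x) = 0.5*||x||^2
# selects the minimum-norm point (1, 1) on that line.
x_hat = penalty_bilevel(lambda x: x,
                        lambda x: (x[0] + x[1] - 2.0) * np.ones(2),
                        np.zeros(2))
```

For any fixed penalty the minimizer is biased toward the upper-level objective; the increasing schedule drives the iterates to the bilevel solution $(1,1)$.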