
    Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models

    The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an $\epsilon$-approximate first-order critical point of the problem can be computed in order $O(\epsilon^{1-2(p+1)/p})$ evaluations of the problem's functions and their first $p$ derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, and Toint [SIAM J. Optim., 21 (2011), pp. 1721-1739; SIAM J. Optim., 23 (2013), pp. 1553-1574]. It is also shown that strong guarantees (in terms of handling degeneracies) on the possible limit points of the sequence of iterates generated by this algorithm can be obtained at the cost of increased complexity. At variance with previous results, the $\epsilon$-approximate first-order criticality is defined by satisfying a version of the KKT conditions with an accuracy that does not depend on the size of the Lagrange multipliers.
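    To make the "unscaled" notion concrete, here is a minimal sketch (for an equality-constrained problem) of the kind of approximate KKT test the abstract refers to; the callables grad_f, jac_c, and c are hypothetical placeholders for the objective gradient, constraint Jacobian, and constraint values, not the paper's implementation. The point is that the dual-residual tolerance is absolute: it is not relaxed by a factor like $1 + \|y\|$, so the test cannot be passed simply because the multipliers $y$ are huge.

        import numpy as np

        def is_unscaled_kkt(grad_f, jac_c, c, x, y, eps):
            """Epsilon-approximate KKT test for  min f(x)  s.t.  c(x) = 0.

            The tolerance eps is absolute (unscaled): the dual residual is
            NOT measured relative to 1 + ||y||, so large Lagrange
            multipliers y do not loosen the test."""
            dual_residual = grad_f(x) + jac_c(x).T @ y  # gradient of the Lagrangian
            primal_residual = c(x)                      # constraint violation
            return (np.linalg.norm(dual_residual) <= eps
                    and np.linalg.norm(primal_residual) <= eps)

    A scaled test would instead require $\|\nabla f(x) + J(x)^T y\| \le \epsilon\,(1 + \|y\|)$, which becomes progressively weaker as the multipliers grow; the abstract's claim is that the stated complexity bound holds for the stronger, unscaled test.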

    Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity

    We consider nonconvex constrained optimization problems and propose a new approach to the convergence analysis based on penalty functions. We make use of classical penalty functions in an unconventional way, in that penalty functions enter only the theoretical analysis of convergence while the algorithm itself is penalty-free. Based on this idea, we establish several new results, including the first general analysis of diminishing-stepsize methods in nonconvex constrained optimization, showing convergence to generalized stationary points, and a complexity study for SQP-type algorithms. Comment: To appear in Mathematics of Operations Research.
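    As a rough illustration of the penalty-free idea, the following sketch shows a diminishing-stepsize loop in which no penalty or merit function is ever evaluated; solve_direction is a hypothetical stand-in for whatever SQP-type direction-finding subproblem the method uses, and in the paper's approach the penalty function would appear only in the convergence analysis, never in the algorithm itself.

        import numpy as np

        def diminishing_stepsize_method(x0, solve_direction, num_iters=1000):
            """Penalty-free loop with diminishing stepsizes: alpha_k -> 0 and
            sum_k alpha_k = infinity, so no line search (and hence no
            merit/penalty function) is needed inside the algorithm."""
            x = np.asarray(x0, dtype=float)
            for k in range(num_iters):
                d = solve_direction(x)        # e.g., an SQP-type direction
                alpha = 1.0 / (k + 1)         # diminishing stepsize rule
                x = x + alpha * d
            return x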

    Worst-case iteration bounds for log barrier methods for problems with nonconvex constraints

    Interior point methods (IPMs) that handle nonconvex constraints, such as IPOPT, KNITRO, and LOQO, have had enormous practical success. We consider IPMs in the setting where the objective and constraints have Lipschitz first and second derivatives. Unfortunately, previous analyses of log barrier methods in this setting implicitly prove guarantees with exponential dependencies on $1/\mu$, where $\mu$ is the barrier penalty parameter. We provide an IPM that finds a $\mu$-approximate Fritz John point by solving $\mathcal{O}(\mu^{-7/4})$ trust-region subproblems. For this setup, the results represent both the first iteration bound with a polynomial dependence on $1/\mu$ for a log barrier method and the best-known guarantee for finding Fritz John points. We also show that, given convexity and regularity conditions, our algorithm finds an $\epsilon$-optimal solution in at most $\mathcal{O}(\epsilon^{-2/3})$ trust-region steps.
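    For context, a log barrier method replaces the inequality-constrained problem $\min f(x)$ s.t. $c(x) \ge 0$ with the unconstrained minimization of $\phi_\mu(x) = f(x) - \mu \sum_i \log c_i(x)$, which the abstract's algorithm attacks with trust-region subproblems. Below is a minimal sketch of the barrier function and its gradient; grad_f and jac_c are hypothetical callables, and this is the standard object being analyzed, not the paper's algorithm.

        import numpy as np

        def log_barrier(f, grad_f, c, jac_c, x, mu):
            """Value and gradient of phi_mu(x) = f(x) - mu * sum_i log(c_i(x))
            for  min f(x)  s.t.  c(x) >= 0  componentwise.  Requires x to be
            strictly feasible, i.e., c(x) > 0."""
            cx = c(x)
            assert np.all(cx > 0), "barrier is defined only for c(x) > 0"
            value = f(x) - mu * np.sum(np.log(cx))
            gradient = grad_f(x) - jac_c(x).T @ (mu / cx)   # chain rule on the log terms
            return value, gradient

    Roughly, an approximate stationary point of $\phi_\mu$ corresponds to a $\mu$-approximate Fritz John point (with implicit multipliers $\mu / c_i(x)$); the contribution highlighted in the abstract is reaching one in polynomially many ($\mathcal{O}(\mu^{-7/4})$) trust-region steps.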

    First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

    Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that a simple first-order method finds a feasible, $\epsilon$-stationary point at a convergence rate of $O(\epsilon^{-4})$ without relying on compactness or a Constraint Qualification (CQ). When a CQ holds, this convergence is measured by approximately satisfying the Karush-Kuhn-Tucker (KKT) conditions. When a CQ fails, we guarantee the attainment of the weaker Fritz John conditions. As an illustrative example, we show that our method still converges stably on piecewise quadratic SCAD-regularized problems despite frequent violations of constraint qualification. The considered algorithm is similar to those of "Quadratically regularized subgradient methods for weakly convex optimization with weakly convex constraints" by Ma et al. and "Stochastic first-order methods for convex and nonconvex functional constrained optimization" by Boob et al. (whose guarantees both assume compactness and a CQ), iteratively taking inexact proximal steps computed via an inner loop that applies a switching subgradient method to a strongly convex constrained subproblem. Our non-Lipschitz analysis of the switching subgradient method appears to be new and may be of independent interest.
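    The inner loop the abstract mentions builds on the classical (Polyak-style) switching subgradient scheme, sketched below for $\min f(x)$ s.t. $g(x) \le 0$; sub_f and sub_g are hypothetical subgradient oracles, and the paper applies such a loop to a strongly convex proximal subproblem rather than to the original problem directly.

        import numpy as np

        def switching_subgradient(sub_f, sub_g, g, x0, tol, num_iters):
            """Switching subgradient method for  min f(x)  s.t.  g(x) <= 0.

            At nearly feasible ("productive") iterates, step along a
            subgradient of f; otherwise step along a subgradient of g to
            restore feasibility.  Returns the average of productive iterates."""
            x = np.asarray(x0, dtype=float)
            productive = []
            for k in range(1, num_iters + 1):
                if g(x) <= tol:                  # productive step: reduce f
                    v = sub_f(x)
                    productive.append(x.copy())
                else:                            # infeasibility step: reduce g
                    v = sub_g(x)
                alpha = 1.0 / np.sqrt(k)         # diminishing stepsize
                x = x - alpha * v / max(np.linalg.norm(v), 1e-12)
            return sum(productive) / len(productive) if productive else x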
