
    Highly Smoothness Zero-Order Methods for Solving Optimization Problems under PL Condition

    Full text link
    In this paper, we study the black-box optimization problem under the Polyak-Łojasiewicz (PL) condition, assuming that the objective function is not merely smooth but has higher-order smoothness. By using a "kernel-based" approximation instead of the exact gradient in the Stochastic Gradient Descent method, we improve the best known convergence results in the class of gradient-free algorithms for problems under the PL condition. We generalize these results to the case where a zero-order oracle returns the function value at a point corrupted by adversarial noise. We verify our theoretical results on the example of solving a system of nonlinear equations.
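    To make the construction concrete, below is a minimal sketch of a kernel-based two-point gradient estimator plugged into plain SGD and applied to a small nonlinear system. The kernel K(r) = 3r/2 (a weighted Legendre kernel for smoothness order 2), the smoothing radius h, the step size, and the test system are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_grad_estimate(f, x, h):
    """Two-point kernel-smoothed gradient estimate.

    K(r) = 3r/2 satisfies int_{-1}^{1} r K(r) dr = 1 and int K(r) dr = 0,
    which makes the estimate unbiased up to higher-order terms in h."""
    d = x.size
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)        # direction uniform on the unit sphere
    r = rng.uniform(-1.0, 1.0)    # kernel variable, uniform on [-1, 1]
    return (d / (2 * h)) * (f(x + h * r * u) - f(x - h * r * u)) * (1.5 * r) * u

# Solve the nonlinear system F(x) = 0 by minimizing f(x) = ||F(x)||^2,
# which satisfies the PL condition where the Jacobian of F is nonsingular.
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
f = lambda x: float(np.dot(F(x), F(x)))

x = np.array([2.0, -1.0])
for _ in range(20000):
    x -= 5e-3 * kernel_grad_estimate(f, x, h=1e-4)  # SGD with surrogate gradient
print(x, f(x))                                      # approaches a root of F
```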

    New Version of Mirror Prox for Variational Inequalities with Adaptation to Inexactness

    Get PDF
    18 pages, 5 figures. X International Conference Optimization and Applications (OPTIMA-2019), dedicated to the 80th anniversary of Academician Yury G. Evtushenko; Petrovac, Montenegro, September 30 - October 4, 2019.

    An adaptive analogue of the Mirror Prox method for variational inequalities is proposed. In this work we consider adaptation not only to the value of the Lipschitz constant, but also to the magnitude of the oracle error. This approach, in particular, allows us to prove a complexity bound of $O\left(\frac{1}{\varepsilon}\log_2\frac{1}{\varepsilon}\right)$ for variational inequalities with a special class of monotone bounded operators. This estimate is optimal for variational inequalities with monotone Lipschitz-continuous operators, although some error remains, which may be insignificant. We present the results of experiments comparing the proposed approach with some known analogues, and we also discuss experiments with matrix games in the case of a non-Euclidean proximal setup.
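    A rough Euclidean sketch of an adaptive extragradient (Mirror Prox) iteration of this kind is given below. The acceptance inequality with the slack term delta (standing in for the oracle error), the doubling/halving rule for the local Lipschitz estimate, and the test operator are reconstructions from the description above, not the paper's exact scheme.

```python
import numpy as np

def project(x, radius=1.0):
    """Euclidean projection onto the feasible set Q = {x : ||x|| <= radius}."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def adaptive_mirror_prox(g, x0, L0=1.0, delta=1e-6, iters=300):
    """Each iteration repeats the two projection steps until an inexact
    acceptance condition holds, doubling the local Lipschitz estimate L on
    failure and halving it after every accepted step."""
    x, L = x0.copy(), L0
    for _ in range(iters):
        while True:
            y = project(x - g(x) / L)            # extrapolation step
            z = project(x - g(y) / L)            # corrected step
            lhs = float(np.dot(g(y) - g(x), y - z))
            rhs = 0.5 * L * (np.linalg.norm(y - x) ** 2
                             + np.linalg.norm(z - y) ** 2) + 0.5 * delta
            if lhs <= rhs:
                break
            L *= 2.0                             # estimate was too optimistic
        x, L = z, L / 2.0                        # accept; relax L for next step
    return x

# Usage: a strongly monotone linear operator; the VI solution on Q is x* = 0.
B = np.array([[0.2, 1.0], [-1.0, 0.2]])
print(adaptive_mirror_prox(lambda v: B @ v, x0=np.array([0.9, -0.4])))
```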

    Gradient-Type Methods For Decentralized Optimization Problems With Polyak-Łojasiewicz Condition Over Time-Varying Networks

    Full text link
    This paper focuses on decentralized optimization (minimization and saddle-point) problems with objective functions that satisfy the Polyak-Łojasiewicz condition (PL-condition). The first part of the paper is devoted to the minimization of sum-type cost functions. To solve this class of problems, we propose a gradient descent type method with a consensus projection procedure and inexact gradients of the objectives. In the second part, we study the saddle-point problem (SPP) with a sum structure, with objectives satisfying the two-sided PL-condition. To solve such an SPP, we propose a generalization of the Multi-step Gradient Descent Ascent method with a consensus procedure and inexact gradients of the objective function with respect to both variables. Finally, we present numerical experiments showing the efficiency of the proposed algorithms on the robust least squares problem.
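    As a rough sketch of the first part (minimizing a sum over a time-varying network), the snippet below interleaves local gradient steps with a consensus averaging step. The mixing matrices, the local least-squares objectives, and the constant step size are illustrative assumptions; the paper itself uses a consensus projection procedure and allows inexact gradients.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 5, 3

# Local least-squares objectives; their sum is strongly convex, hence PL.
A = [rng.normal(size=(4, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=4) for _ in range(n_nodes)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])

def mixing_matrix(k):
    """Doubly stochastic mixing over a time-varying graph: the active edge
    set alternates between steps, but their union connects all nodes."""
    W = np.eye(n_nodes)
    pairs = [(0, 1), (2, 3)] if k % 2 == 0 else [(1, 2), (3, 4)]
    for i, j in pairs:
        W[i, i] = W[j, j] = 0.5
        W[i, j] = W[j, i] = 0.5
    return W

X = np.zeros((n_nodes, dim))            # row i = node i's local iterate
gamma = 0.02
for k in range(3000):
    X = mixing_matrix(k) @ X            # consensus (averaging) step
    X -= gamma * np.stack([grad(i, X[i]) for i in range(n_nodes)])

x_bar = X.mean(axis=0)                  # nodes agree up to a consensus error
print(np.linalg.norm(X - x_bar), x_bar)
```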

    Adaptive Algorithms for Relatively Lipschitz Continuous Convex Optimization Problems

    Full text link
    Recently, some innovative convex optimization concepts have been proposed, namely relative smoothness [1] and relative strong convexity [2,3]. These approaches have significantly expanded the class of problems to which gradient-type methods with optimal convergence-rate estimates are applicable, the estimates being invariant with respect to the dimensionality of the problem. Later, Yu. Nesterov and H. Lu introduced modifications of the Mirror Descent method for convex minimization problems with the corresponding analogue of the Lipschitz condition (so-called relative Lipschitz continuity). By introducing an artificial inaccuracy into the optimization model, we propose adaptive methods for minimizing a relatively Lipschitz continuous convex function, as well as for the corresponding class of variational inequalities. We also consider an adaptive "universal" method, applicable to convex minimization problems both on the class of relatively smooth and on the class of relatively Lipschitz continuous functionals, with optimal convergence-rate estimates. The universality of the method makes it possible to justify the applicability of the obtained theoretical results to a wider class of convex optimization problems. We also present the results of numerical experiments.
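    A minimal sketch of the underlying Mirror Descent step in the Euclidean proximal setup (prox function d(x) = ||x||^2/2, so the Bregman divergence is V(x, y) = ||x - y||^2/2) is shown below. The fixed step size h = eps / M^2 and the test function are assumptions; the adaptive methods of the paper tune the analogue of M on the fly rather than requiring it in advance.

```python
import numpy as np

def mirror_descent(subgrad, x0, M, eps, iters, project=lambda x: x):
    """In the Euclidean setup the Mirror Descent step
        x_{k+1} = argmin_x { h <g_k, x> + V(x, x_k) }
    reduces to a projected subgradient step with h = eps / M^2."""
    x, h = x0.copy(), eps / M ** 2
    xs = [x0.copy()]
    for _ in range(iters):
        x = project(x - h * subgrad(x))
        xs.append(x.copy())
    return np.mean(xs, axis=0)      # the averaged point carries the guarantee

# Usage: f(x) = ||x - c||_1 is Lipschitz with M = sqrt(3) in the l2 norm,
# hence relatively Lipschitz for this choice of prox function.
c = np.array([0.3, -0.7, 0.5])
sg = lambda x: np.sign(x - c)       # a subgradient of f at x
print(mirror_descent(sg, np.zeros(3), M=np.sqrt(3), eps=1e-2, iters=5000))
```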