57 research outputs found

    Upper bounds on maximum admissible noise in zeroth-order optimisation

    In this paper, starting from an information-theoretic upper bound on the admissible noise in convex Lipschitz-continuous zeroth-order optimisation, we derive the corresponding upper bounds for the strongly convex and the smooth problem classes via non-constructive proofs based on optimal reductions. We also show that, starting from a one-dimensional grid-search optimisation algorithm, one can construct an algorithm for simplex-constrained optimisation whose upper bound on admissible noise is better than the one for the ball-constrained case in the regime asymptotic in the dimension. Comment: 15 pages, 2 figures
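    As an illustration of the grid-search building block mentioned above, here is a minimal sketch of one-dimensional grid search under bounded adversarial noise; the test function, noise level and grid size are hypothetical, and this is not the paper's construction for the simplex-constrained case.

```python
import numpy as np

def noisy_grid_search(f_noisy, a, b, n_points):
    """Pick the grid point with the smallest noisy value.

    For a convex L-Lipschitz objective observed with adversarial noise of
    magnitude at most Delta, the true suboptimality of the returned point
    is bounded by roughly L * (b - a) / (n_points - 1) + 2 * Delta.
    """
    xs = np.linspace(a, b, n_points)
    noisy_vals = np.array([f_noisy(x) for x in xs])
    return xs[int(np.argmin(noisy_vals))]

# Hypothetical example: f(x) = |x - 0.3| on [0, 1] with noise level 1e-3.
rng = np.random.default_rng(0)
f_noisy = lambda x: abs(x - 0.3) + 1e-3 * rng.uniform(-1.0, 1.0)
print(noisy_grid_search(f_noisy, 0.0, 1.0, 101))
```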

    Distributed optimization with quantization for computing Wasserstein barycenters

    We study the problem of the decentralized computation of entropy-regularized semi-discrete Wasserstein barycenters over a network. Building upon recent primal-dual approaches, we propose a sampling gradient quantization scheme that allows efficient communication and computation of approximate barycenters when the factor distributions are stored in a distributed manner over an arbitrary network. We establish the communication and algorithmic complexity of the proposed algorithm, with explicit dependence on the size of the support, the number of distributions, and the desired accuracy. Numerical results validate our algorithmic analysis.
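    For intuition about the quantization component, the following is a minimal sketch of an unbiased stochastic quantizer in the spirit of QSGD-style schemes; it is not the paper's exact sampling gradient quantization, and the vector, number of levels and seed are hypothetical.

```python
import numpy as np

def quantize(g, num_levels=16, rng=None):
    """Unbiased stochastic quantization of a vector g: each coordinate is
    randomly rounded to one of num_levels uniform levels of |g_i| / ||g||,
    so that E[quantize(g)] = g while far fewer bits per coordinate are sent.
    """
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return g
    scaled = np.abs(g) / norm * num_levels        # values in [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                      # probability of rounding up
    levels = lower + (rng.uniform(size=g.shape) < prob_up)
    return np.sign(g) * levels * norm / num_levels

g = np.array([0.5, -0.2, 0.1])
print(quantize(g, num_levels=4, rng=np.random.default_rng(1)))
```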

    Some Adaptive First-order Methods for Variational Inequalities with Relatively Strongly Monotone Operators and Generalized Smoothness

    In this paper, we introduce some adaptive methods for solving variational inequalities with relatively strongly monotone operators. Firstly, we focus on a modification of the adaptive numerical method recently proposed for the smooth case [1], extending it to generalized smooth (Hölder-continuous) saddle point problems; the method has convergence rate estimates similar to those of accelerated methods. We provide the motivation for such an approach and establish theoretical results for the proposed method. Our second focus is the adaptation of widely used, recently proposed methods for solving variational inequalities with relatively strongly monotone operators. The key idea of our approach is to dispense with the well-known restart technique, which in some cases complicates the implementation of such algorithms for applied problems. Nevertheless, our algorithms show a convergence rate comparable to that of algorithms based on the above-mentioned restart technique. We also present numerical experiments that demonstrate the effectiveness of the proposed methods. [1] Jin, Y., Sidford, A., & Tian, K. (2022). Sharper rates for separable minimax and finite sum optimization via primal-dual extragradient methods. arXiv preprint arXiv:2202.04640.
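    To illustrate the adaptive, restart-free idea in the simplest setting, here is a sketch of an extragradient method for a monotone operator with a backtracked local Lipschitz estimate; the operator, stepsize rule and constants are hypothetical, and this is not the paper's algorithm for relatively strongly monotone operators.

```python
import numpy as np

def adaptive_extragradient(F, x0, n_iters=200, L0=1.0, eps=1e-9):
    """Extragradient iterations with a backtracked local Lipschitz estimate L:
    the stepsize 1 / (2 L) is accepted only if ||F(y) - F(x)|| <= L ||y - x||,
    and L is halved after each accepted step to keep the stepsize adaptive.
    """
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(n_iters):
        while True:
            step = 1.0 / (2.0 * L)
            y = x - step * F(x)                  # extrapolation step
            x_next = x - step * F(y)             # correction step
            if np.linalg.norm(F(y) - F(x)) <= L * np.linalg.norm(y - x) + eps:
                break                            # local Lipschitz test passed
            L *= 2.0                             # otherwise shrink the stepsize
        x, L = x_next, max(L / 2.0, 1e-12)       # be optimistic for the next step
    return x

# Hypothetical example: strongly monotone affine operator F(x) = A x - b,
# whose root F(x) = 0 is x* = A^{-1} b.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 0.0])
x_hat = adaptive_extragradient(lambda x: A @ x - b, np.zeros(2))
print(x_hat, np.linalg.solve(A, b))
```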

    Near-optimal tensor methods for minimizing gradient norm

    Motivated by convex problems with linear constraints and, in particular, by entropy-regularized optimal transport, we consider the problem of finding approximate stationary points, i.e. points at which the norm of the objective gradient is below a small target accuracy, of convex functions with Lipschitz p-th order derivatives. Lower complexity bounds for this problem were recently proposed in [Grapiglia and Nesterov, arXiv:1907.07053]. However, the methods presented in the same paper do not have optimal complexity bounds. We propose two methods, optimal up to logarithmic factors, with complexity bounds with respect to the initial objective residual and the distance between the starting point and the solution, respectively.

    Near-optimal tensor methods for minimizing the gradient norm of convex function

    Motivated by convex problems with linear constraints and, in particular, by entropy-regularized optimal transport, we consider the problem of finding ε-approximate stationary points, i.e. points with the norm of the objective gradient less than ε, of convex functions with Lipschitz p-th order derivatives. Lower complexity bounds for this problem were recently proposed in [Grapiglia and Nesterov, arXiv:1907.07053]. However, the methods presented in the same paper do not have optimal complexity bounds. We propose two methods, optimal up to logarithmic factors, with complexity bounds Õ(ε^{-2(p+1)/(3p+1)}) and Õ(ε^{-2/(3p+1)}) with respect to the initial objective residual and the distance between the starting point and the solution, respectively.
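    As a concrete building block from the same family of higher-order methods, here is a sketch of one cubic-regularized Newton step (the p = 2 tensor step), applied repeatedly until the gradient norm is small; the test function and the regularization constant M are hypothetical, and this is not the near-optimal scheme proposed in the paper.

```python
import numpy as np

def cubic_newton_step(grad, hess, M):
    """One step of cubic-regularized Newton (the p = 2 tensor step):
    minimize g^T h + 0.5 h^T H h + (M / 6) ||h||^3 over h.  The minimizer has
    the form h(r) = -(H + (M / 2) r I)^{-1} g with r = ||h(r)||, which we find
    by bisection on r (assumes H is positive semidefinite, i.e. f is convex).
    """
    n = len(grad)
    h_of = lambda r: -np.linalg.solve(hess + 0.5 * M * r * np.eye(n), grad)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(h_of(hi)) > hi:       # bracket the fixed point ||h(r)|| = r
        hi *= 2.0
    for _ in range(60):                        # bisection on the 1-D equation
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(h_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return h_of(hi)

# Hypothetical example: f(x) = sum(0.5 * x_i^2 + 0.25 * x_i^4), a convex function
# whose Hessian is Lipschitz on bounded sets; M is chosen to dominate that constant.
f_grad = lambda x: x + x ** 3
f_hess = lambda x: np.diag(1.0 + 3.0 * x ** 2)
x = np.array([2.0, -1.5])
for _ in range(10):
    x = x + cubic_newton_step(f_grad(x), f_hess(x), M=12.0)
print(np.linalg.norm(f_grad(x)))               # the gradient norm shrinks toward 0
```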
    • …