
    Stochastic Intermediate Gradient Method for Convex Problems with Inexact Stochastic Oracle

    In this paper we introduce new methods for convex optimization problems with an inexact stochastic oracle. The first method is an extension of the intermediate gradient method proposed by Devolder, Glineur and Nesterov for problems with an inexact oracle. Our new method can be applied to problems with composite structure and a stochastic inexact oracle, and allows a non-Euclidean setup. We prove estimates for the mean rate of convergence and for the probabilities of large deviations from this rate. We also introduce two modifications of this method for strongly convex problems. For the first modification we prove mean rate of convergence estimates, and for the second we prove estimates for large deviations from the mean rate of convergence. All the rates yield complexity estimates for the proposed methods which, up to a multiplicative constant, coincide with the lower complexity bound for the considered class of convex composite optimization problems with a stochastic inexact oracle.
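
    To make the inexact-oracle setting concrete, the sketch below (an illustration, not the paper's method) runs a plain stochastic gradient scheme with iterate averaging against an oracle that returns a noisy and slightly biased gradient of a least-squares objective; the problem, the noise and bias levels, and the step-size schedule are all assumptions chosen for the example.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's algorithm): a stochastic
# gradient scheme driven by an inexact stochastic oracle. The oracle returns
# the true gradient plus zero-mean noise and a small deterministic bias,
# mimicking a "stochastic inexact oracle". Problem: min_x ||Ax - b||^2 / (2n).

rng = np.random.default_rng(0)
n, d = 200, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def inexact_oracle(x, noise=0.1, bias=0.01):
    """Return a noisy, slightly biased gradient of f(x) = ||Ax - b||^2 / (2n)."""
    grad = A.T @ (A @ x - b) / n
    return grad + noise * rng.standard_normal(d) + bias

def sgd_with_averaging(x0, steps=2000, step_size=0.05):
    """Plain SGD with iterate averaging; averaging damps the oracle noise."""
    x, x_avg = x0.copy(), np.zeros_like(x0)
    for k in range(1, steps + 1):
        x = x - step_size / np.sqrt(k) * inexact_oracle(x)
        x_avg += (x - x_avg) / k          # running average of iterates
    return x_avg

x_hat = sgd_with_averaging(np.zeros(d))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```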

    Primal-dual accelerated gradient methods with small-dimensional relaxation oracle

    In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses exact line search for practical acceleration of convergence, converges according to the well-known lower bounds for both convex and non-convex objective functions, possesses primal-dual properties, and can be applied in the non-Euclidean setup. As far as we know, this is the first such method possessing all of the above properties at the same time. We also present a universal version of the method which is applicable to non-smooth problems. We demonstrate how in practice one can efficiently use the combination of line search and primal-duality by considering a convex optimization problem with a simple structure (for example, linearly constrained).
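
    As an illustration of the exact line-search ingredient alone (not the accelerated primal-dual method described in the abstract), the sketch below applies steepest descent with a closed-form exact line search to a convex quadratic; the objective and all parameters are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch of exact line search only (not the paper's accelerated
# method): steepest descent on a convex quadratic f(x) = 0.5 * x^T Q x - c^T x,
# where the one-dimensional minimization along the search direction has a
# closed form.

rng = np.random.default_rng(1)
d = 30
M = rng.standard_normal((d, d))
Q = M @ M.T + np.eye(d)          # symmetric positive definite
c = rng.standard_normal(d)

def exact_line_search_step(x):
    """One steepest-descent step with the exact minimizing step size."""
    g = Q @ x - c                 # gradient of f at x
    t = (g @ g) / (g @ (Q @ g))   # argmin_t f(x - t g), closed form for quadratics
    return x - t * g

x = np.zeros(d)
for _ in range(500):
    x = exact_line_search_step(x)

print("gradient norm at final iterate:", np.linalg.norm(Q @ x - c))
```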

    Analysis of Kernel Mirror Prox for Measure Optimization

    By choosing a suitable function space as the dual to the non-negative measure cone, we study in a unified framework a class of functional saddle-point optimization problems, which we term the Mixed Functional Nash Equilibrium (MFNE), that underlies several existing machine learning algorithms, such as implicit generative models, distributionally robust optimization (DRO), and Wasserstein barycenters. We model the saddle-point optimization dynamics as an interacting Fisher-Rao-RKHS gradient flow when the function space is chosen as a reproducing kernel Hilbert space (RKHS). As a discrete time counterpart, we propose a primal-dual kernel mirror prox (KMP) algorithm, which uses a dual step in the RKHS, and a primal entropic mirror prox step. We then provide a unified convergence analysis of KMP in an infinite-dimensional setting for this class of MFNE problems, which establishes a convergence rate of O(1/N) in the deterministic case and O(1/\sqrt{N}) in the stochastic case, where N is the iteration counter. As a case study, we apply our analysis to DRO, providing algorithmic guarantees for DRO robustness and convergence. Comment: Accepted to AISTATS 202
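
    To illustrate the mirror-prox structure (a predictor step followed by a corrector step under entropic geometry), here is a finite-dimensional sketch for a bilinear saddle-point problem over probability simplices; it omits the RKHS dual step of KMP, and the payoff matrix, step size, and iteration count are assumptions chosen for the example.

```python
import numpy as np

# Finite-dimensional sketch of entropic mirror prox (extragradient) for a
# bilinear saddle point min_x max_y x^T A y over probability simplices.
# This illustrates only the mirror-prox structure (predictor step + corrector
# step in entropic/KL geometry), not the kernelized RKHS dual step of KMP.

rng = np.random.default_rng(2)
m, n = 15, 20
A = rng.standard_normal((m, n))

def entropic_step(p, grad, eta):
    """Entropic (KL) mirror step: multiplicative update, then renormalize."""
    q = p * np.exp(-eta * grad)
    return q / q.sum()

def mirror_prox(steps=2000, eta=0.05):
    x = np.full(m, 1.0 / m)
    y = np.full(n, 1.0 / n)
    x_avg, y_avg = np.zeros(m), np.zeros(n)
    for k in range(1, steps + 1):
        # predictor (extrapolation) step from the current iterate
        x_half = entropic_step(x, A @ y, eta)
        y_half = entropic_step(y, -(A.T @ x), eta)
        # corrector step, using the gradients evaluated at the predicted point
        x = entropic_step(x, A @ y_half, eta)
        y = entropic_step(y, -(A.T @ x_half), eta)
        # averaging the predicted iterates gives the usual O(1/N) guarantee
        x_avg += (x_half - x_avg) / k
        y_avg += (y_half - y_avg) / k
    return x_avg, y_avg

x_bar, y_bar = mirror_prox()
# duality gap of the averaged iterates: max_y x_bar^T A y - min_x x^T A y_bar
gap = (x_bar @ A).max() - (A @ y_bar).min()
print("duality gap:", gap)
```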