
    On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Optimization

    Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and it has recently attracted attention for non-convex optimization. Several recent works have empirically demonstrated its success on machine learning tasks. However, it has not been analyzed for non-convex minimization, so a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without it.
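    A minimal sketch of the extrapolation idea, assuming an extragradient-style update in which the gradient is evaluated at an extrapolated point; the step size, extrapolation coefficient, stopping rule, and test function below are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def gd_with_extrapolation(grad, x0, step=0.05, beta=0.5, iters=10_000, tol=1e-6):
    # Evaluate the gradient at an extrapolated point
    # y_t = x_t + beta * (x_t - x_{t-1}) rather than at x_t itself.
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)      # extrapolation step
        g = grad(y)
        if np.linalg.norm(g) < tol:      # approximate first-order stationarity
            break
        x_prev, x = x, x - step * g      # descent step using the extrapolated gradient
    return x

# Example on a smooth non-convex function f(x) = x1^2 + 3*sin^2(x2) (assumed).
f_grad = lambda x: np.array([2 * x[0], 3 * np.sin(2 * x[1])])
x_stationary = gd_with_extrapolation(f_grad, np.array([1.0, 1.0]))
```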

    Power Allocation Games on Interference Channels with Complete and Partial Information

    We consider a wireless channel shared by multiple transmitter-receiver pairs whose transmissions interfere with each other. Each pair aims to maximize its long-term average transmission rate subject to an average power constraint. This scenario is modeled as a stochastic game under different information assumptions. We first assume that each transmitter and receiver knows all direct and cross link channel gains. We then relax this assumption to knowledge of the incident channel gains only, and further to knowledge of the direct link channel gains only. In all cases, we formulate the problem of finding a Nash equilibrium as a variational inequality (VI) problem and present an algorithm to solve the VI.
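    As a rough illustration of the VI formulation, here is a basic projection method for VI(F, K) applied to a hypothetical two-user Gaussian interference channel; the channel gains, noise power, and per-user power caps are made-up example values, and the paper's actual algorithm and constraint set (average rather than instantaneous power) differ:

```python
import numpy as np

def solve_vi_projection(F, project, p0, tau=0.05, iters=5000, tol=1e-8):
    # Projection method for VI(F, K): find p* in K with <F(p*), p - p*> >= 0
    # for all p in K, via the fixed-point iteration p <- Proj_K(p - tau * F(p)).
    p = p0.copy()
    for _ in range(iters):
        p_new = project(p - tau * F(p))
        if np.linalg.norm(p_new - p) < tol:
            break
        p = p_new
    return p

# Hypothetical two-user interference channel: user i's rate is
# log2(1 + H[i,i]*p[i] / (sigma + H[i,j]*p[j])); F stacks the negated
# own-rate derivatives so each user ascends its own rate.
H = np.array([[1.0, 0.3], [0.2, 1.0]])   # assumed direct/cross channel gains
sigma = 0.1                               # assumed noise power
P_max = np.array([1.0, 1.0])              # assumed per-user power budgets

def F(p):
    interference = sigma + np.array([H[0, 1] * p[1], H[1, 0] * p[0]])
    direct = np.diag(H)
    return -direct / (np.log(2) * (interference + direct * p))

project = lambda p: np.clip(p, 0.0, P_max)  # box projection onto power limits
p_equilibrium = solve_vi_projection(F, project, np.zeros(2))
```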

    Solving Variational Inequalities with Monotone Operators on Domains Given by Linear Minimization Oracles

    The standard algorithms for solving large-scale convex-concave saddle point problems, or, more generally, variational inequalities with monotone operators, are proximal-type algorithms, which at every iteration must compute a prox-mapping, that is, minimize over the problem's domain X the sum of a linear form and the specific convex distance-generating function underlying the algorithm in question. Relative computational simplicity of prox-mappings, the standard requirement when implementing proximal algorithms, clearly implies the possibility of equipping X with a relatively cheap Linear Minimization Oracle (LMO) able to minimize linear forms over X. There are, however, important situations where a cheap LMO is available but no proximal setup with easy-to-compute prox-mappings is known. This fact motivates our goal in this paper: to develop techniques for solving variational inequalities with monotone operators on domains given by Linear Minimization Oracles. The techniques we develop can be viewed as a substantial extension of the method of nonsmooth convex minimization over an LMO-represented domain proposed in [5].
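    For intuition about what an LMO buys you, here is a minimal conditional-gradient (Frank-Wolfe) sketch for smooth convex minimization over an LMO-represented domain, the simpler setting of [5] that the paper extends to monotone VIs; the simplex LMO and least-squares objective are illustrative assumptions, not the paper's method:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    # Conditional gradient: each iteration calls only the LMO,
    # s = argmin_{s in X} <grad(x), s>, and never a prox-mapping.
    x = x0.copy()
    for t in range(iters):
        s = lmo(grad(x))
        gamma = 2.0 / (t + 2)            # standard step-size schedule
        x = (1 - gamma) * x + gamma * s  # convex combination stays inside X
    return x

# Illustrative domain: the probability simplex, whose LMO returns a vertex.
def simplex_lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0                # cheapest vertex under <g, .>
    return s

# Illustrative objective: min ||Ax - b||^2 over the simplex.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([1.0, 0.5])
grad = lambda x: 2 * A.T @ (A @ x - b)
x_opt = frank_wolfe(grad, simplex_lmo, np.array([0.5, 0.5]))
```

    The design point worth noting is that the inner step minimizes a linear form over X, which for many combinatorial or spectral domains is far cheaper than the projection or prox computation a proximal method would require.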