
    Distributionally Robust Games with Risk-averse Players

    We present a new model of incomplete information games without private information in which the players use a distributionally robust optimization approach to cope with payoff uncertainty. Under specific restrictions, we show that our "Distributionally Robust Game" constitutes a true generalization of three popular classes of finite games: Complete Information Games, Bayesian Games, and Robust Games. Subsequently, we prove that the set of equilibria of an arbitrary distributionally robust game with a specified ambiguity set can be computed as the component-wise projection of the solution set of a multi-linear system of equations and inequalities. For special cases of such games, we show equivalence to complete information finite games (Nash Games) with the same number of players and the same action spaces. Thus, when our game falls within these special cases, one can simply solve the corresponding Nash Game. Finally, we demonstrate the applicability of our new model of games and highlight its importance.

    Comment: 11 pages, 3 figures, Proceedings of the 5th International Conference on Operations Research and Enterprise Systems (ICORES 2016), Rome, Italy, February 23-25, 2016
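    As an illustration of the distributionally robust approach to payoff uncertainty described above, the sketch below evaluates a player's worst-case expected payoff over a finite ambiguity set of distributions. This is a minimal sketch for concreteness, not the paper's construction: the function name, the scenario encoding, and the finite ambiguity set are assumptions made here (the paper's ambiguity sets may be far more general).

```python
import numpy as np

def dr_payoff(payoffs, ambiguity_set):
    """Worst-case expected payoff over a finite ambiguity set.

    payoffs:       array of shape (n_scenarios,), the player's payoff
                   in each scenario for a fixed strategy profile.
    ambiguity_set: iterable of probability vectors over the scenarios.
    Returns the distributionally robust (worst-case) expected payoff.
    """
    return min(float(np.dot(p, payoffs)) for p in ambiguity_set)

# Two candidate distributions over three payoff scenarios.
payoffs = np.array([4.0, 1.0, -2.0])
ambiguity_set = [np.array([0.5, 0.3, 0.2]),
                 np.array([0.2, 0.3, 0.5])]
print(dr_payoff(payoffs, ambiguity_set))  # -> 0.1, under the pessimistic distribution
```

    A risk-averse player ranks strategies by this worst-case value rather than by the expectation under a single known distribution, which is what distinguishes the model from a Bayesian game.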

    A New Perspective on Randomized Gossip Algorithms

    In this short note we propose a new approach for the design and analysis of randomized gossip algorithms, which can be used to solve the average consensus problem. We show that the Randomized Block Kaczmarz (RBK) method - a method for solving linear systems - works as a gossip algorithm when applied to a special system encoding the underlying network. The famous pairwise gossip algorithm arises as a special case. Subsequently, we reveal a hidden duality of randomized gossip algorithms, with the dual iterative process maintaining a set of numbers attached to the edges, as opposed to the nodes, of the network. We prove that RBK obtains a superlinear speedup in the size of the block, and demonstrate this effect through experiments.
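    The pairwise gossip special case mentioned above is easy to state concretely: at each step a random edge is sampled and its two endpoints replace their values with their average. Below is a minimal sketch of that baseline algorithm (the function name and example graph are assumptions made here, not the paper's); the paper's contribution is the view of this method and its block generalizations as RBK applied to a linear system encoding the network.

```python
import numpy as np

def pairwise_gossip(x, edges, iterations, seed=0):
    """Randomized pairwise gossip for average consensus.

    At each step a random edge (i, j) is sampled and both endpoints
    replace their values with the average (x[i] + x[j]) / 2.
    """
    rng = np.random.default_rng(seed)
    x = x.astype(float).copy()
    for _ in range(iterations):
        i, j = edges[rng.integers(len(edges))]  # sample a random edge
        x[i] = x[j] = (x[i] + x[j]) / 2.0       # local averaging step
    return x

# Cycle graph on 4 nodes; all values converge to the mean, 2.5.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
x0 = np.array([1.0, 2.0, 3.0, 4.0])
print(pairwise_gossip(x0, edges, iterations=200))
```

    Each averaging step preserves the sum of the node values, so the iterates can only converge to the network average, which is the invariant the consensus analysis exploits.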

    Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., the stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.

    Comment: 47 pages, 7 figures, 7 tables
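    For concreteness, the stochastic heavy ball method mentioned above (stochastic gradient descent with momentum) takes the form x_{k+1} = x_k - γ∇f_i(x_k) + β(x_k - x_{k-1}). Below is a minimal sketch on a consistent least-squares problem; the step size, momentum value, and problem data are illustrative assumptions made here, not the paper's analyzed parameters.

```python
import numpy as np

def stochastic_heavy_ball(A, b, step, momentum, iterations, seed=0):
    """SGD with heavy ball momentum on f(x) = (1/2n) * ||Ax - b||^2,
    sampling one row (one equation) per step:

        x_{k+1} = x_k - step * grad_i(x_k) + momentum * (x_k - x_{k-1})
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x, x_prev = np.zeros(d), np.zeros(d)
    for _ in range(iterations):
        i = rng.integers(n)                   # sample one equation
        grad = (A[i] @ x - b[i]) * A[i]       # stochastic gradient of row i
        # Heavy ball update: gradient step plus momentum term.
        x, x_prev = x - step * grad + momentum * (x - x_prev), x
    return x

# Consistent random linear system; iterates approach the solution x_star.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_star = rng.standard_normal(5)
b = A @ x_star
x = stochastic_heavy_ball(A, b, step=0.05, momentum=0.5, iterations=5000)
print(np.linalg.norm(x - x_star))  # distance to the solution shrinks toward 0
```

    The stochastic momentum variant proposed in the paper replaces the full momentum term with a cheaper randomized one; the sketch above shows only the deterministic-momentum baseline.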