    On the robustness of learning in games with stochastically perturbed payoff observations

    Motivated by the scarcity of accurate payoff feedback in practical applications of game theory, we examine a class of learning dynamics where players adjust their choices based on past payoff observations that are subject to noise and random disturbances. First, in the single-player case (corresponding to an agent trying to adapt to an arbitrarily changing environment), we show that the stochastic dynamics under study lead to no regret almost surely, irrespective of the noise level in the player's observations. In the multi-player case, we find that dominated strategies become extinct and we show that strict Nash equilibria are stochastically stable and attracting; conversely, if a state is stable or attracting with positive probability, then it is a Nash equilibrium. Finally, we provide an averaging principle for 2-player games, and we show that in zero-sum games with an interior equilibrium, time averages converge to Nash equilibrium for any noise level. Comment: 36 pages, 4 figures
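    The zero-sum result in this abstract (time-averaged play converging to Nash despite noisy payoff feedback) can be illustrated with a toy discrete-time sketch. This is not the paper's continuous-time dynamics: the exponential-weights update, the additive Gaussian payoff noise, and the matching-pennies game below are assumptions made purely for illustration.

```python
# Toy illustration (not the paper's exact dynamics): two players run
# exponential-weights updates on noisy payoff observations in matching
# pennies, a zero-sum game whose unique equilibrium is (1/2, 1/2).
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])    # row player's payoff matrix
x = np.ones(2) / 2                          # row player's mixed strategy
y = np.ones(2) / 2                          # column player's mixed strategy
x_sum, y_sum = np.zeros(2), np.zeros(2)
eta, sigma, T = 0.05, 0.5, 20000            # step size, noise level, horizon

for t in range(T):
    # payoff vectors observed with additive zero-mean Gaussian noise
    u_x = A @ y + sigma * rng.standard_normal(2)
    u_y = -A.T @ x + sigma * rng.standard_normal(2)
    x = x * np.exp(eta * u_x); x /= x.sum()
    y = y * np.exp(eta * u_y); y /= y.sum()
    x_sum += x; y_sum += y

print("time-averaged strategies:", x_sum / T, y_sum / T)  # both near [0.5, 0.5]
```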

    Deep Adversarial Reinforcement Learning With Noise Compensation by Autoencoder

    We present a new adversarial learning method for deep reinforcement learning (DRL). In this method, a robust internal representation is induced in a deep Q-network (DQN) by applying adversarial noise that disturbs the DQN policy; the disturbance is then compensated for by an autoencoder network. In particular, we propose a new type of adversarial noise: it encourages the policy to choose, at each state, the worst action leading to the worst outcome. When the proposed method, called deep Q-W-network regularized with an autoencoder (DQWAE), was applied to seven Atari 2600 games, it exhibited greater robustness against random/adversarial noise added to the input and learned faster than the baseline DQN. When applied to a realistic automatic driving simulation, the proposed DRL method was likewise effective at rendering the acquired policy robust against random/adversarial noise.
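    The abstract does not give implementation details, so the following is only a hedged sketch of the two ingredients it names: adversarial noise that nudges the policy toward its worst-valued action, and an autoencoder that compensates for the perturbation before the Q-network acts. The network sizes, the FGSM-style noise construction, and the helper worst_action_noise are assumptions for illustration, not the authors' DQWAE architecture.

```python
# Minimal sketch (assumed details, not the authors' exact DQWAE design):
# craft adversarial noise that pushes the DQN toward its worst-valued action,
# then pass the perturbed state through an autoencoder before acting.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))   # toy DQN
autoenc = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 4))   # toy AE

def worst_action_noise(state, eps=0.01):
    """FGSM-style perturbation toward the lowest-Q action (an assumption)."""
    state = state.clone().requires_grad_(True)
    q = q_net(state)
    worst = q.argmin(dim=-1)
    # raising the worst action's Q-value makes the greedy policy pick it
    loss = q.gather(-1, worst.unsqueeze(-1)).sum()
    loss.backward()
    return eps * state.grad.sign()

state = torch.randn(1, 4)
noisy_state = state + worst_action_noise(state)
denoised = autoenc(noisy_state)           # compensation step
action = q_net(denoised).argmax(dim=-1)   # act on the compensated input
```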

    Abstracting Imperfect Information Away from Two-Player Zero-Sum Games

    In their seminal work, Nayyar et al. (2013) showed that imperfect information can be abstracted away from common-payoff games by having players publicly announce their policies as they play. This insight underpins sound solvers and decision-time planning algorithms for common-payoff games. Unfortunately, a naive application of the same insight to two-player zero-sum games fails because Nash equilibria of the game with public policy announcements may not correspond to Nash equilibria of the original game. As a consequence, existing sound decision-time planning algorithms require complicated additional mechanisms with unappealing properties. The main contribution of this work is showing that certain regularized equilibria do not suffer from the aforementioned non-correspondence problem -- thus, computing them can be treated as a perfect-information problem. Because these regularized equilibria can be made arbitrarily close to Nash equilibria, our result opens the door to a new perspective on solving two-player zero-sum games and, in particular, yields a simplified framework for decision-time planning in two-player zero-sum games, free of the unappealing properties that plague existing decision-time planning approaches.
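    As one concrete (and assumed) instance of such regularized equilibria, an entropy-regularized zero-sum matrix game has a unique equilibrium that approaches a Nash equilibrium as the regularization vanishes. The damped smoothed-best-response iteration, the temperature tau, and the matching-pennies payoff matrix below are illustrative choices, not the paper's construction.

```python
# Hedged sketch: compute an entropy-regularized equilibrium of a zero-sum
# matrix game; for small tau the solution is close to a Nash equilibrium.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies, row maximizes
tau = 0.1                                   # regularization strength

def softmax(v):
    v = v - v.max()                         # numerical stability
    e = np.exp(v)
    return e / e.sum()

x = np.ones(2) / 2
y = np.ones(2) / 2
for _ in range(5000):
    # damped iteration of entropy-regularized (softmax) best responses
    x = 0.99 * x + 0.01 * softmax((A @ y) / tau)
    y = 0.99 * y + 0.01 * softmax((-A.T @ x) / tau)

print(x, y)   # close to the Nash equilibrium [0.5, 0.5] for small tau
```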