
    No-regret Dynamics and Fictitious Play

    Potential-based no-regret dynamics are shown to be related to fictitious play. Roughly, these are epsilon-best-reply dynamics in which epsilon is the maximal regret, which vanishes with time. This allows for alternative, and sometimes much shorter, proofs of known results on the convergence of no-regret dynamics to the set of Nash equilibria.
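
    To make the epsilon-best-reply reading concrete, here is a minimal sketch of one canonical no-regret dynamic, regret matching, in a hypothetical rock-paper-scissors game; the quantity eps below is the maximal average regret, which vanishes with time, so play at each stage is an eps-best reply in the sense of the abstract. The game and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoff matrix for rock-paper-scissors (zero-sum); illustrative only.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
n = A.shape[0]

cum_regret = np.zeros(n)   # cumulative regret against each fixed action
T = 5000

for t in range(T):
    # Regret matching: play actions in proportion to positive cumulative regret.
    pos = np.maximum(cum_regret, 0.0)
    sigma = pos / pos.sum() if pos.sum() > 0 else np.ones(n) / n
    i = rng.choice(n, p=sigma)
    j = rng.integers(n)    # opponent plays uniformly here, purely for brevity
    cum_regret += A[:, j] - A[i, j]

eps = max(cum_regret.max() / T, 0.0)   # maximal average regret -> 0 as T grows
print(f"maximal average regret (epsilon) after {T} rounds: {eps:.4f}")
```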

    Multilateral Non-Cooperative Bargaining in a General Utility Space

    We consider an n-player bargaining problem where the utility possibility set is compact, convex, and strictly comprehensive. We show that a stationary subgame perfect Nash equilibrium exists and that, if the Pareto surface is differentiable, all such equilibria converge to the Nash bargaining solution as the length of a time period between offers goes to zero. Without the differentiability assumption, convergence need not hold.
    Keywords: multilateral bargaining, general utility set
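
    The Nash bargaining solution that the equilibria converge to is the point of the utility possibility set maximizing the product of the players' utilities (the Nash product) relative to the disagreement point. A minimal numerical sketch, using a hypothetical two-player set {u >= 0 : u1^2 + u2^2 <= 1}, which is compact, convex, and strictly comprehensive with a differentiable Pareto surface:

```python
import numpy as np

# Parameterize the Pareto frontier of {u >= 0 : u1^2 + u2^2 <= 1} by an angle.
theta = np.linspace(0.0, np.pi / 2, 10001)
u1, u2 = np.cos(theta), np.sin(theta)

# Nash bargaining solution: maximize the Nash product u1 * u2
# (disagreement point taken to be the origin).
best = np.argmax(u1 * u2)
print(f"Nash bargaining solution ~ ({u1[best]:.4f}, {u2[best]:.4f})")
# ~ (0.7071, 0.7071): by symmetry of the set, u1 = u2 = 1/sqrt(2).
```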

    Score-Based Equilibrium Learning in Multi-Player Finite Games with Imperfect Information

    Real-world games, which involve imperfect information, multiple players, and simultaneous moves, are discussed less frequently in the existing game-theory literature. While reinforcement learning (RL) provides a general framework for extending game-theoretic algorithms, the assumptions that guarantee their convergence towards Nash equilibria may no longer hold in real-world games. Starting from the definition of the Nash distribution, we construct a continuous-time dynamic named imperfect-information exponential-decay score-based learning (IESL) to find approximate Nash equilibria in games with the above features. Theoretical analysis demonstrates that IESL yields equilibrium-approaching policies in imperfect-information simultaneous games under a basic concavity assumption. Experimental results show that IESL finds approximate Nash equilibria in four canonical poker scenarios and significantly outperforms three other representative algorithms in 3-player Leduc poker, demonstrating its equilibrium-finding ability even in practical sequential games. Furthermore, in connection with the concept of game hypomonotonicity, a trade-off between the convergence of the IESL dynamic and the ultimate NashConv of the convergent policies is observed both theoretically and experimentally.
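
    NashConv, the metric the abstract refers to, is a standard equilibrium gap: the sum over players of the gain each could obtain by deviating to a best response while the others hold their policies fixed; it is zero exactly at a Nash equilibrium. A minimal sketch for a hypothetical bimatrix game (the IESL dynamic itself is not reproduced here):

```python
import numpy as np

def nash_conv(A, B, x, y):
    """Sum of best-response gains in the bimatrix game (A, B) at profile (x, y)."""
    u1, u2 = x @ A @ y, x @ B @ y      # current expected payoffs
    br1 = (A @ y).max()                # player 1's best-response value against y
    br2 = (x @ B).max()                # player 2's best-response value against x
    return (br1 - u1) + (br2 - u2)

# Matching pennies: the uniform profile is the unique Nash equilibrium.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
B = -A
uniform = np.array([0.5, 0.5])
print(nash_conv(A, B, uniform, uniform))               # 0.0 at equilibrium
print(nash_conv(A, B, np.array([0.9, 0.1]), uniform))  # 0.8: player 2 can exploit the skew
```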

    Approximately Solving Mean Field Games via Entropy-Regularized Deep Reinforcement Learning

    The recent mean field game (MFG) formalism facilitates otherwise intractable computation of approximate Nash equilibria in many-agent settings. In this paper, we consider discrete-time finite MFGs subject to finite-horizon objectives. We show that all discrete-time finite MFGs with non-constant fixed-point operators fail to be contractive, as is typically assumed in the existing MFG literature, barring convergence via fixed-point iteration. Instead, we incorporate entropy regularization and Boltzmann policies into the fixed-point iteration. As a result, we obtain provable convergence to approximate fixed points where existing methods fail, and reach the original goal of approximate Nash equilibria. All proposed methods are evaluated with respect to their exploitability, both on instructive examples with tractable exact solutions and on high-dimensional problems where exact methods become intractable. In high-dimensional scenarios, we apply established deep reinforcement learning methods and empirically combine fictitious play with our approximations.
    Comment: Accepted to the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021).
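
    A minimal sketch of the kind of entropy-regularized fixed-point iteration the abstract describes, on a hypothetical one-state mean field game (not one of the paper's benchmarks): the exact best response is replaced by a Boltzmann (softmax) policy at temperature tau, and for tau large enough relative to the crowd penalty the fixed-point map becomes a contraction, mirroring how entropy regularization recovers convergence.

```python
import numpy as np

def softmax(q, tau):
    z = np.exp((q - q.max()) / tau)
    return z / z.sum()

base_reward = np.array([1.0, 0.8, 0.5])  # intrinsic reward of each action (illustrative)
crowd_cost = 1.0                         # penalty for choosing a crowded action
tau = 1.0                                # entropy-regularization temperature

mu = np.ones(3) / 3                      # initial mean field (population action usage)
for _ in range(1000):
    q = base_reward - crowd_cost * mu    # payoffs given the current crowd
    pi = softmax(q, tau)                 # Boltzmann policy: softened best reply
    if np.abs(pi - mu).max() < 1e-12:    # fixed point: agent behaviour = population
        break
    mu = pi

print("approximate regularized-equilibrium mean field:", np.round(mu, 4))
```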