
    Learning in Perturbed Asymmetric Games

    We investigate the stability of mixed strategy equilibria in two-person (bimatrix) games under perturbed best response dynamics. A mixed equilibrium is asymptotically stable under all such dynamics if and only if the game is linearly equivalent to a zero-sum game; in this case, the mixed equilibrium is also globally asymptotically stable. Global convergence to the set of perturbed equilibria is also shown for (rescaled) partnership games (also known as games of identical interests). Some applications of these results to stochastic learning models are given.
    Keywords: Games, Learning, Best Response Dynamics, Stochastic Fictitious Play, Mixed Strategy Equilibria, Zero Sum Games
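
    To make the setting concrete, below is a minimal sketch (not taken from the paper) of perturbed best response dynamics in their common logit form, simulated on matching pennies, a 2x2 zero-sum game with a unique mixed equilibrium at (1/2, 1/2). The payoff matrices, smoothing parameter beta, step size, and starting strategies are all illustrative assumptions.

```python
# Illustrative sketch: logit (perturbed) best response dynamics on a
# 2x2 zero-sum game. Parameters below are assumptions, not the paper's.
import numpy as np

def logit_br(payoffs, beta=10.0):
    """Smoothed (logit) best response to a vector of expected payoffs."""
    z = beta * payoffs
    z -= z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Matching pennies: zero-sum, unique mixed equilibrium at (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs
B = -A                                      # column player's payoffs (zero sum)

x = np.array([0.9, 0.1])   # row player's mixed strategy
y = np.array([0.2, 0.8])   # column player's mixed strategy
dt = 0.05                  # Euler step for the continuous-time dynamics

for _ in range(2000):
    # Each strategy moves toward the logit best response to the opponent.
    x += dt * (logit_br(A @ y) - x)
    y += dt * (logit_br(B.T @ x) - y)

print("row strategy:", x)      # approaches roughly (0.5, 0.5)
print("column strategy:", y)   # approaches roughly (0.5, 0.5)
```

    In this zero-sum example the trajectories settle at the perturbed (logit) equilibrium near the mixed equilibrium, which is the global stability behaviour the abstract describes.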

    Q-CP: Learning Action Values for Cooperative Planning

    Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g., hyper-redundant robots and groups of robots). To alleviate this problem, we present Q-CP, a cooperative model-based reinforcement learning algorithm that exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate Q-CP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among three robots, (2) a cooperation scenario between a pair of KUKA YouBots performing hand-overs, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of Q-CP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance.
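
    The core idea, learned action values steering the search so fewer joint actions need to be expanded, can be sketched in a few lines. The snippet below is an illustrative toy, not the authors' implementation: the stub learned_q function, the Node class, the action names, and the exploration constant are all assumptions used only to show how a Q-value prior can seed a UCB-style selection rule inside a tree search.

```python
# Illustrative sketch of Q-value-guided tree-search selection (toy example;
# not the Q-CP implementation). learned_q and the action set are made up.
import math
import random

def learned_q(state, action):
    """Hypothetical learned action-value estimate (stub for illustration)."""
    return 1.0 if action == "cooperate" else 0.2

class Node:
    def __init__(self, state, actions):
        self.state = state
        self.actions = actions
        self.visits = {a: 0 for a in actions}
        # Node values are initialized from the learned Q-function,
        # so promising actions are preferred before any simulation.
        self.value = {a: learned_q(state, a) for a in actions}
        self.total = 0

    def select(self, c=1.4):
        # UCB1-style selection whose empirical mean is seeded by the Q prior.
        def ucb(a):
            if self.visits[a] == 0:
                return self.value[a] + c   # unvisited: rely on the prior
            bonus = c * math.sqrt(math.log(self.total + 1) / self.visits[a])
            return self.value[a] + bonus
        return max(self.actions, key=ucb)

    def update(self, action, reward):
        # Incremental mean, blending simulated returns into the Q prior.
        self.visits[action] += 1
        self.total += 1
        self.value[action] += (reward - self.value[action]) / self.visits[action]

# Toy usage: noisy simulated returns; selection concentrates on "cooperate".
node = Node(state="at_door", actions=["cooperate", "defect"])
for _ in range(200):
    a = node.select()
    reward = random.gauss(0.8 if a == "cooperate" else 0.3, 0.1)
    node.update(a, reward)
print(node.visits)
```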