
    Cooperative learning in multi-agent systems from intermittent measurements

    Motivated by the problem of tracking a direction in a decentralized way, we consider the general problem of cooperative learning in multi-agent systems with time-varying connectivity and intermittent measurements. We propose a distributed learning protocol capable of learning an unknown vector μ from noisy measurements made independently by autonomous nodes. Our protocol is completely distributed and able to cope with the time-varying, unpredictable, and noisy nature of inter-agent communication, and with intermittent noisy measurements of μ. Our main result bounds the learning speed of our protocol in terms of the size and combinatorial features of the (time-varying) networks connecting the nodes.
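    To make the setup concrete, here is a minimal sketch of a consensus-style protocol of this general kind: each agent repeatedly averages its local estimate with whichever neighbors happen to be connected at that step, and mixes in a noisy measurement of μ whenever one intermittently arrives. The random-link model, step-size schedule, and update rule are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def cooperative_estimate(mu, n_agents=10, n_steps=500, p_meas=0.2,
                         p_link=0.3, noise_std=0.5, seed=0):
    """Illustrative consensus-style estimator (assumed, not the paper's
    protocol): agents average estimates over a random time-varying graph
    and occasionally blend in a noisy measurement of mu."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    x = rng.normal(size=(n_agents, d))        # initial local estimates
    for t in range(n_steps):
        # Time-varying, unpredictable connectivity: random symmetric links.
        adj = rng.random((n_agents, n_agents)) < p_link
        adj = np.triu(adj, 1)
        adj = adj | adj.T
        x_new = x.copy()
        step = 1.0 / (t + 2)                  # decaying step size (assumed)
        for i in range(n_agents):
            nbrs = np.flatnonzero(adj[i])
            if nbrs.size:                     # average with current neighbors
                x_new[i] = (x[i] + x[nbrs].sum(axis=0)) / (1 + nbrs.size)
            if rng.random() < p_meas:         # intermittent noisy measurement
                y = mu + rng.normal(scale=noise_std, size=d)
                x_new[i] += step * (y - x_new[i])
        x = x_new
    return x

mu = np.array([3.0, -1.0])
estimates = cooperative_estimate(mu)
print(estimates.mean(axis=0))                 # estimates cluster near mu
```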

    Multiagent systems: games and learning from structures

    Multiagent systems are increasingly used in many fields, both as physical robots and as software agents: search-and-rescue robots, automated driving, auction and electronic-commerce agents, and so on. In multiagent domains, agents interact and co-adapt with other agents. Each agent's choice of policy depends on the others' joint policy to achieve the best available performance. During this process the environment evolves and is no longer stationary, as each agent adapts to pursue its target. Each micro-level step in time may present a different learning problem that needs to be addressed. However, in this non-stationary environment, a holistic phenomenon forms along with the rational strategies of all players; we define this phenomenon as structural properties. In our research, we present the importance of analyzing structural properties and how to extract them in multiagent environments. According to the agents' objectives, a multiagent environment can be classified as self-interested, cooperative, or competitive. We examine structure in these three general multiagent environments: self-interested random graphical game playing, distributed cooperative team playing, and competitive group survival. In each scenario, we analyze the structure of the environmental setting and demonstrate the learned structure as a comprehensive representation: the structure of players' action influence, the structure of constraints in teamwork communication, and the structure of inter-connections among strategies. This structure represents macro-level knowledge arising in a multiagent system and provides critical, holistic information for each problem domain. Lastly, we present some open issues and point toward future research.
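    As a concrete illustration of the first of these representations, the structure of players' action influence, a graphical game encodes exactly this kind of structure: each player's payoff depends only on the actions of its neighbors in a graph. The toy encoding below is purely illustrative; the players and dependencies are made up.

```python
# Toy action-influence structure as a graphical game (illustrative only):
# each player's payoff depends solely on its neighbors' actions.
payoff_neighbors = {
    "A": ["B"],        # A's payoff depends only on B's action
    "B": ["A", "C"],   # B's payoff depends on both A's and C's actions
    "C": ["B"],
}

def influences(player):
    """Players whose payoffs the given player's action can affect."""
    return [p for p, nbrs in payoff_neighbors.items() if player in nbrs]

print(influences("B"))  # ['A', 'C'] -> B's action matters to A and C
```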

    Leveraging repeated games for solving complex multiagent decision problems

    Making good decisions in multiagent environments is a hard problem in the sense that the presence of several decision makers implies conflicts of interests, a lack of coordination, and a multiplicity of possible decisions. If, moreover, the same decision makers interact repeatedly through time, they must decide not only what to do in the present, but also how their present decisions may affect the behavior of the others in the future. Game theory is a mathematical tool that aims to model such interactions as strategic games of multiple players. Therefore, multiagent decision problems are often studied using game theory. In this context, restricting attention to dynamic games, complex multiagent decision problems can be approached algorithmically. The contribution of this thesis is threefold. First, it contributes an algorithmic framework for distributed planning in non-cooperative dynamic games. The multiplicity of possible plans causes serious complications for any planning approach. We propose a novel approach based on the concept of learning in repeated games, which overcomes these complications by means of communication between players. We then propose a learning algorithm for repeated-game self-play. Our algorithm allows players to converge, in an initially unknown repeated game, to a joint behavior that is optimal in a certain well-defined sense, without any communication between players. Finally, we propose a family of algorithms for approximately solving dynamic games and extracting the players' equilibrium strategies. In this context, we first propose a method to compute a nonempty subset of approximate subgame-perfect equilibria in repeated games. We then show how this method can be extended to approximate all subgame-perfect equilibria in repeated games, and to solve more complex dynamic games.
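    For intuition about learning in repeated self-play, the sketch below runs fictitious play, a classic learning rule rather than the thesis's algorithm, in a repeated two-action coordination game: each player best-responds to the empirical mixture of the other's past actions, and play settles on an equilibrium with no communication between players. The payoff matrix and horizon are illustrative assumptions.

```python
import numpy as np

# Self-play via fictitious play in a repeated 2x2 coordination game
# (a classic learning rule, NOT the thesis's algorithm).
payoff = np.array([[2.0, 0.0],
                   [0.0, 1.0]])       # both players receive payoff[a0, a1]

counts = [np.ones(2), np.ones(2)]     # each player's counts of the other's actions
for t in range(200):
    actions = []
    for i in (0, 1):
        belief = counts[i] / counts[i].sum()          # empirical opponent mix
        expected = payoff @ belief if i == 0 else payoff.T @ belief
        actions.append(int(np.argmax(expected)))      # best response to belief
    counts[0][actions[1]] += 1        # player 0 observes player 1's action
    counts[1][actions[0]] += 1        # player 1 observes player 0's action

print(actions)  # play settles on the (0, 0) coordination outcome
```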

    Efficient No-Regret Multiagent Learning

    We present new results on the efficiency of no-regret algorithms in the context of multiagent learning. We use a known approach to augment a large class of no-regret algorithms so that actions are sampled stochastically and only the scalar reward of the action actually played is observed. We show that the average actual payoff of the resulting learner gets (1) close to the best response against (eventually) stationary opponents, (2) close to the asymptotic optimal payoff against opponents that play a converging sequence of policies, and (3) close to at least a dynamic variant of the minimax payoff against arbitrary opponents, with high probability in polynomial time. In addition, the polynomial bounds are shown to be significantly better than previously known bounds. Furthermore, we do not need to assume that the learner knows the game matrices or can observe the opponents' actions, unlike previous work.
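    The standard way to let a no-regret learner cope with observing only the scalar reward of the action actually played is importance weighting, as in the classic Exp3 algorithm sketched below. This is that generic scheme, not necessarily the paper's exact augmentation, and the toy reward function is an assumption.

```python
import math, random

def exp3(n_actions, reward_fn, T, gamma=0.1, seed=0):
    """Classic Exp3: sample an action, observe only its reward in [0, 1],
    and feed an importance-weighted reward estimate into exponential
    weights, preserving a no-regret guarantee under bandit feedback."""
    rng = random.Random(seed)
    w = [1.0] * n_actions
    for t in range(T):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / n_actions for wi in w]
        a = rng.choices(range(n_actions), weights=probs)[0]
        r = reward_fn(a, t)                 # only the played action's reward
        est = r / probs[a]                  # importance-weighted estimate
        w[a] *= math.exp(gamma * est / n_actions)
        m = max(w)
        w = [wi / m for wi in w]            # renormalize to avoid overflow
    total = sum(w)
    return [(1 - gamma) * wi / total + gamma / n_actions for wi in w]

# Toy usage: action 2 has the highest reward; play shifts toward it.
print(exp3(3, lambda a, t: [0.2, 0.5, 0.8][a], T=5000))
```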