
    Policy with Dispersed Information

    Information regarding economic fundamentals is widely dispersed in society, is only imperfectly aggregated through prices or other indicators of aggregate activity, and cannot be centralized by the government or any other institution. In this paper we seek to identify policies that can improve the decentralized use of such dispersed information without requiring the government to observe this information. We show that this can be achieved by appropriately designing the contingency of taxation on ex-post public information regarding the realized fundamentals and aggregate activity. When information is common (as in the Ramsey literature) or when agents have private information only about idiosyncratic shocks (as in the Mirrlees literature), the contingency on fundamentals alone suffices for efficiency. When instead agents have private information about aggregate shocks, the contingency on aggregate activity is crucial. An appropriate combination of the two contingencies permits the government to: (i) dampen the impact of noise and hence reduce non-fundamental volatility, without also dampening the impact of fundamentals; (ii) induce agents to internalize informational externalities, and hence improve the speed of social learning; (iii) restore a certain form of constrained efficiency in the decentralized use of information; (iv) guarantee that welfare increases with the provision of any additional information.
    Keywords: optimal policy, private information, complementarities, information externalities, social learning, efficiency
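
    As a stylized illustration (not the paper's model) of why the contingency on aggregate activity matters, suppose each agent i chooses activity k_i under the quadratic payoff u_i = -(k_i - \theta)^2 - T after observing noisy private and public signals of the fundamental \theta, with aggregate activity K = \int_0^1 k_j \, dj, and suppose the government levies the hypothetical tax

        T(k_i, K, \theta) = \phi \, k_i \, (K - \theta), \qquad \phi > 0.

    The first-order condition then gives

        k_i = \mathbb{E}_i[\theta] - \frac{\phi}{2} \, \mathbb{E}_i[K - \theta].

    In a linear equilibrium the gap K - \theta loads only on the correlated noise in public information, so raising \phi shrinks the equilibrium weight on that noise while leaving the aggregate response to \theta itself unchanged: the tax dampens non-fundamental volatility without dampening the impact of fundamentals, in the spirit of property (i) above.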

    Cooperation in Multi-Agent Reinforcement Learning

    As progress in reinforcement learning (RL) gives rise to increasingly general and powerful artificial intelligence, society needs to anticipate a possible future in which multiple RL agents must learn and interact in a shared multi-agent environment. When a single principal has oversight of the multi-agent system, how should agents learn to cooperate via centralized training to achieve individual and global objectives? When agents belong to self-interested principals with imperfectly aligned objectives, how can cooperation emerge from fully decentralized learning? This dissertation addresses both questions by proposing novel methods for multi-agent reinforcement learning (MARL) and demonstrating the empirical effectiveness of these methods in high-dimensional simulated environments. To address the first case, we propose new algorithms for fully cooperative MARL in the paradigm of centralized training with decentralized execution. Firstly, we propose a method based on multi-agent curriculum learning and multi-agent credit assignment to address the setting where global optimality is defined as the attainment of all individual goals. Secondly, we propose a hierarchical MARL algorithm to discover and learn interpretable and useful skills for a multi-agent team to optimize a single team objective. Extensive experiments with ablations show the strengths of our approaches over state-of-the-art baselines. To address the second case, we propose learning algorithms to attain cooperation within a population of self-interested RL agents. We design a new type of agent that is equipped with the ability to incentivize other RL agents and to account explicitly for those agents' learning processes. This agent overcomes a key limitation of fully decentralized training and generates emergent cooperation in difficult social dilemmas. Then, we extend and apply this technique to the problem of incentive design, where a central incentive designer explicitly optimizes a global objective only by intervening on the rewards of a population of independent RL agents. Experiments on the problem of optimal taxation in a simulated market economy demonstrate the effectiveness of this approach.
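
    The dissertation's algorithms are not reproduced here, but the incentive-design setting it describes can be sketched minimally: independent Q-learners play a one-shot prisoner's dilemma, and a central designer tunes a cooperation bonus added to their rewards so as to maximize social welfare net of the incentive spend. The grid search below is a crude stand-in for gradient-based optimization through the agents' learning process; all names and numbers are illustrative.

        import numpy as np

        # Payoffs for a one-shot prisoner's dilemma; action 0 = cooperate, 1 = defect.
        # PAYOFF[a1, a2] = (reward to agent 1, reward to agent 2)
        PAYOFF = np.array([[[3, 3], [0, 4]],
                           [[4, 0], [1, 1]]])

        def designer_objective(bonus, episodes=3000, lr=0.1, eps=0.1, seed=0):
            """Train two independent Q-learners whose rewards the designer shapes
            with a per-agent bonus for cooperating; return welfare net of spend."""
            rng = np.random.default_rng(seed)
            q = np.zeros((2, 2))  # q[agent, action]; a single-state game
            for _ in range(episodes):
                acts = [int(rng.integers(2)) if rng.random() < eps
                        else int(np.argmax(q[i])) for i in range(2)]
                for i in range(2):
                    shaped = PAYOFF[acts[0], acts[1], i] + (bonus if acts[i] == 0 else 0.0)
                    q[i, acts[i]] += lr * (shaped - q[i, acts[i]])
            a = [int(np.argmax(q[i])) for i in range(2)]             # greedy policies
            welfare = PAYOFF[a[0], a[1], 0] + PAYOFF[a[0], a[1], 1]  # unshaped payoffs
            spend = bonus * sum(1 for i in range(2) if a[i] == 0)
            return welfare - spend

        # Designer's outer loop: pick the intervention maximizing its global
        # objective (grid search standing in for optimizing through learning).
        best = max(np.linspace(0.0, 3.0, 13), key=designer_objective)
        print("best cooperation bonus:", best)

    With these illustrative payoffs a bonus just above 1 makes cooperation dominant for both learners, so the search settles near the cheapest bonus that flips the learned equilibrium.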

    Partner Selection for the Emergence of Cooperation in Multi-Agent Systems Using Reinforcement Learning

    Social dilemmas have been widely studied to explain how humans are able to cooperate in society. Considerable effort has been invested in designing artificial agents for social dilemmas that incorporate explicit agent motivations chosen to favor coordinated or cooperative responses. The prevalence of this general approach points towards the importance of understanding both an agent's internal design and the external environment dynamics that facilitate cooperative behavior. In this paper, we investigate how partner selection can promote cooperative behavior between agents who are trained to maximize a purely selfish objective function. Our experiments reveal that agents trained with this dynamic learn a strategy that retaliates against defectors while promoting cooperation with other agents, resulting in a prosocial society.
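
    A deliberately minimal sketch of the mechanism (not the paper's experimental setup): selfish tabular Q-learners play an iterated prisoner's dilemma in which only agents who cooperated last round are selected as partners, so recent defectors sit out a round. Exploiting a cooperator pays once, but the discounted cost of losing access to partners makes cooperation the better learned policy. All parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        N, GAMMA, LR, EPS = 16, 0.9, 0.1, 0.1
        R = {(0, 0): (3, 3), (0, 1): (0, 4),
             (1, 0): (4, 0), (1, 1): (1, 1)}   # action 0 = cooperate, 1 = defect

        q = np.zeros((N, 2, 2))        # q[agent, reputation, action]
        rep = rng.integers(2, size=N)  # reputation = own last (observed) action

        for t in range(60000):
            greedy = q[np.arange(N), rep].argmax(axis=1)
            acts = np.where(rng.random(N) < EPS, rng.integers(2, size=N), greedy)
            new_rep = acts.copy()
            # Partner selection: only agents who cooperated last round are
            # chosen as partners; recent defectors earn nothing this round.
            pool = np.flatnonzero(rep == 0)
            rng.shuffle(pool)
            rewards = np.zeros(N)
            for a, b in zip(pool[0::2], pool[1::2]):
                rewards[a], rewards[b] = R[(int(acts[a]), int(acts[b]))]
            for i in range(N):
                target = rewards[i] + GAMMA * q[i, new_rep[i]].max()
                q[i, rep[i], acts[i]] += LR * (target - q[i, rep[i], acts[i]])
            rep = new_rep

        print("agents preferring cooperation when in good standing:",
              (q[:, 0, 0] > q[:, 0, 1]).mean())

    Under random matching the same learners converge to mutual defection, since defection strictly dominates per interaction; in this sketch it is the partner-selection dynamic alone that changes the learned equilibrium.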

    Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence

    Building learning agents that are capable not only of taking tests but also of innovating is becoming a hot topic in AI. One of the most promising paths towards this vision is multi-agent learning, where agents act as the environment for each other, and improving each agent means proposing new problems for the others. However, existing evaluation platforms are either incompatible with multi-agent settings or limited to a specific game; there is not yet a general evaluation platform for research on multi-agent intelligence. To this end, we introduce Arena, a general evaluation platform for multi-agent intelligence with 35 games of diverse logics and representations. Furthermore, multi-agent intelligence is still at a stage where many problems remain unexplored. Therefore, we provide a building toolkit for researchers to easily invent and build novel multi-agent problems from the provided game set, based on a GUI-configurable social tree and five basic multi-agent reward schemes. Finally, we provide Python implementations of five state-of-the-art deep multi-agent reinforcement learning baselines. Along with the baseline implementations, we release a set of 100 best agents/teams, trained with different training schemes for each game, as a basis for evaluating agents by population performance. As such, the research community can perform comparisons under a stable and uniform standard. All the implementations and accompanying tutorials have been open-sourced for the community at https://sites.google.com/view/arena-unity/
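
    Arena's own interfaces are not reproduced here, but the idea of deriving agent rewards from a social tree can be illustrated generically: given raw per-agent rewards and a team partition (a two-level tree), a scheme decides how much of each agent's reward is shared within its team or set against rival teams. The scheme names and numbers below are illustrative, not Arena's.

        from typing import Dict, List

        def shaped_rewards(raw: Dict[str, float], teams: List[List[str]],
                           scheme: str) -> Dict[str, float]:
            """Recompute per-agent rewards from a two-level social tree."""
            out = {}
            for team in teams:
                team_mean = sum(raw[a] for a in team) / len(team)
                rivals = [m for t in teams if t is not team for m in t]
                rival_mean = sum(raw[a] for a in rivals) / max(len(rivals), 1)
                for a in team:
                    if scheme == "isolated":         # every agent for itself
                        out[a] = raw[a]
                    elif scheme == "collaborative":  # teammates share one reward
                        out[a] = team_mean
                    elif scheme == "competitive":    # measured against rivals
                        out[a] = raw[a] - rival_mean
                    elif scheme == "mixed":          # cooperate within, compete across
                        out[a] = team_mean - rival_mean
            return out

        raw = {"a1": 1.0, "a2": 0.0, "b1": 0.25, "b2": 0.25}
        teams = [["a1", "a2"], ["b1", "b2"]]
        print(shaped_rewards(raw, teams, "mixed"))
        # -> {'a1': 0.25, 'a2': 0.25, 'b1': -0.25, 'b2': -0.25}

    Deeper social trees would apply the same sharing and competition rules recursively at each level of the hierarchy.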