578 research outputs found

    Computer Science and Game Theory: A Brief Survey

    There has been a remarkable increase in work at the interface of computer science and game theory in the past decade. In this article I survey some of the main themes of work in the area, with a focus on the work in computer science. Given the length constraints, I make no attempt at being comprehensive, especially since other surveys are also available, and a comprehensive survey book will appear shortly. Comment: To appear; Palgrave Dictionary of Economics.

    Game theory for cooperation in multi-access edge computing

    Cooperative strategies amongst network players can improve network performance and spectrum utilization in future networking environments. Game Theory is well suited to these emerging scenarios, since it models highly complex interactions among distributed decision makers and identifies the most convenient management policies for the diverse players (e.g., content providers, cloud providers, edge providers, brokers, network providers, or users). These management policies optimize the performance of the overall network infrastructure while using its resources fairly. This chapter discusses relevant theoretical models that enable cooperation amongst the players in distinct ways, namely through pricing or reputation. In addition, the authors highlight open problems, such as the lack of proper models for scenarios with dynamic and incomplete information. These upcoming scenarios are associated with computing and storage at the network edge, as well as the deployment of large-scale IoT systems. The chapter concludes by discussing a business model for future networks.

    Game theory for collaboration in future networks

    Cooperative strategies have great potential to improve network performance and spectrum utilization in future networking environments. This new paradigm in network management, however, requires a novel design and analysis framework targeting a highly flexible networking solution with a distributed architecture. Game Theory is well suited to this task, since it is a comprehensive mathematical tool for modeling the highly complex interactions among distributed and intelligent decision makers. In this way, the most convenient management policies for the diverse players (e.g., content providers, cloud providers, home providers, brokers, network providers, or users) should be found to optimize the performance of the overall network infrastructure. In this chapter the authors discuss several Game Theory models and concepts that are highly relevant for enabling collaboration among the diverse players, using different ways to incentivize it, namely through pricing or reputation. In addition, the authors highlight several related open problems, such as the lack of proper models for dynamic and incomplete information games in this area.
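    The two chapters above present pricing and reputation as mechanisms that make cooperation rational for otherwise self-interested network players. As a purely illustrative aid (not taken from either chapter), the following sketch shows one such mechanism in miniature: two hypothetical providers repeatedly decide whether to share resources, and each cooperates only with peers whose publicly observed reputation (cooperation rate) clears a threshold. The payoff values, the threshold, and all names are assumptions made for this example.

```python
# Illustrative payoffs for a two-provider resource-sharing game
# (cooperate = share spectrum/capacity, defect = keep it for yourself).
# The numbers are arbitrary and only chosen to form a Prisoner's Dilemma.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Provider:
    """A hypothetical network player that cooperates only with reputable peers."""

    def __init__(self, name, threshold=0.5):
        self.name = name
        self.threshold = threshold
        self.reputation = 1.0   # publicly observed cooperation rate, starts optimistic
        self._coop = 0
        self._total = 0

    def act(self, peer):
        # Conditional cooperation: share resources only if the peer's
        # reputation clears the threshold.
        return "C" if peer.reputation >= self.threshold else "D"

    def record(self, action):
        self._total += 1
        self._coop += (action == "C")
        self.reputation = self._coop / self._total

def play(rounds=50):
    a, b = Provider("edge_provider"), Provider("cloud_provider")
    totals = [0, 0]
    for _ in range(rounds):
        act_a, act_b = a.act(b), b.act(a)
        pay_a, pay_b = PAYOFF[(act_a, act_b)]
        totals[0] += pay_a
        totals[1] += pay_b
        a.record(act_a)
        b.record(act_b)
    return totals

if __name__ == "__main__":
    print(play())  # [150, 150]: mutual cooperation is sustained in every round
```

    With both reputations starting high, conditional cooperation is self-sustaining; seeding a low initial reputation instead drives the sketch into mutual defection, which is the kind of outcome the pricing and reputation mechanisms discussed in these chapters aim to avoid.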

    Cooperation in Multi-Agent Reinforcement Learning

    As progress in reinforcement learning (RL) gives rise to increasingly general and powerful artificial intelligence, society needs to anticipate a possible future in which multiple RL agents must learn and interact in a shared multi-agent environment. When a single principal has oversight of the multi-agent system, how should agents learn to cooperate via centralized training to achieve individual and global objectives? When agents belong to self-interested principals with imperfectly aligned objectives, how can cooperation emerge from fully decentralized learning? This dissertation addresses both questions by proposing novel methods for multi-agent reinforcement learning (MARL) and demonstrating their empirical effectiveness in high-dimensional simulated environments. To address the first case, we propose new algorithms for fully cooperative MARL in the paradigm of centralized training with decentralized execution. First, we propose a method based on multi-agent curriculum learning and multi-agent credit assignment for the setting where global optimality is defined as the attainment of all individual goals. Second, we propose a hierarchical MARL algorithm that discovers and learns interpretable, useful skills for a multi-agent team to optimize a single team objective. Extensive experiments with ablations show the strengths of our approaches over state-of-the-art baselines. To address the second case, we propose learning algorithms that attain cooperation within a population of self-interested RL agents. We propose a new agent equipped with the ability to incentivize other RL agents and to explicitly account for those agents' learning processes. This agent overcomes the key limitation of fully decentralized training and generates emergent cooperation in difficult social dilemmas. We then extend and apply this technique to the problem of incentive design, where a central incentive designer optimizes a global objective solely by intervening on the rewards of a population of independent RL agents. Experiments on the problem of optimal taxation in a simulated market economy demonstrate the effectiveness of this approach.
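    The last part of this abstract describes a central designer that steers independent, self-interested learners toward a global objective purely by intervening on their rewards. The dissertation's actual algorithms are not reproduced here; the sketch below is only a minimal illustration of the underlying idea, using two independent epsilon-greedy Q-learners in a one-shot Prisoner's Dilemma and a fixed cooperation bonus as a crude stand-in for a learned intervention. The payoffs, the bonus size, and all names are assumptions.

```python
import numpy as np

# Illustrative one-shot Prisoner's Dilemma payoffs (row agent, column agent);
# action 0 = cooperate, 1 = defect. The values are assumptions for this sketch.
R = np.array([[(3, 3), (0, 5)],
              [(5, 0), (1, 1)]], dtype=float)

def train(incentive=0.0, episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Two independent epsilon-greedy Q-learners; a central designer may add
    `incentive` to the reward of the cooperate action, a crude stand-in for
    the reward interventions described in the abstract."""
    rng = np.random.default_rng(seed)
    q = np.zeros((2, 2))  # q[agent, action]
    for _ in range(episodes):
        greedy = q.argmax(axis=1)
        acts = [int(a) if rng.random() > eps else int(rng.integers(2)) for a in greedy]
        rewards = R[acts[0], acts[1]].copy()
        rewards += incentive * (np.array(acts) == 0)  # designer's bonus for cooperating
        for i in range(2):
            q[i, acts[i]] += lr * (rewards[i] - q[i, acts[i]])
    return q.argmax(axis=1)  # each learner's greedy action after training

if __name__ == "__main__":
    print(train(incentive=0.0))  # expected: [1 1] -- both learners defect
    print(train(incentive=3.0))  # expected: [0 0] -- the bonus makes cooperation dominant
```

    Without the bonus, defection is dominant and the independent learners settle on mutual defection; a sufficiently large bonus makes cooperation the dominant action, so both learners converge to it. The dissertation studies learned, cost-aware interventions and far richer environments than this toy case.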

    Cooperative artificial intelligence

    In the future, artificial learning agents are likely to become increasingly widespread in our society. They will interact with both other learning agents and humans in a variety of complex settings, including social dilemmas. We argue that there is a need for research at the intersection of game theory and artificial intelligence, with the goal of achieving cooperative artificial intelligence that can navigate social dilemmas well. We consider the problem of how an external agent can promote cooperation between artificial learners by distributing additional rewards and punishments based on observing the learners' actions. We propose a rule for automatically learning how to create the right incentives by considering the players' anticipated parameter updates. Using this learning rule leads to cooperation with high social welfare in matrix games in which the agents would otherwise learn to defect with high probability. We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off after a given number of episodes, while other games require ongoing intervention to maintain mutual cooperation. Finally, we reflect on what the goals of multi-agent reinforcement learning should be in the first place, and discuss the necessary building blocks towards the goal of building cooperative AI.
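    The key idea in this abstract is a planning agent that chooses extra rewards by looking ahead at the learners' anticipated parameter updates. The sketch below is a simplified rendering of that idea, not the paper's actual rule: two naive policy-gradient learners with sigmoid cooperation probabilities play a Prisoner's Dilemma, and a planner scores candidate per-action bonuses by the social welfare reached after the learners' anticipated updates, minus the cost of paying the bonus out (a grid search stands in for the paper's gradient-based learning rule). All payoff values, learning rates, and names are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative Prisoner's Dilemma payoffs (not from the paper):
# mutual cooperation 3, sucker 0, temptation 5, mutual defection 1.
def payoff_grad(p_self, p_other, bonus):
    """Derivative of an agent's expected payoff (including the planner's
    cooperation bonus) with respect to its own cooperation probability."""
    return 3 * p_other - 5 * p_other - (1 - p_other) + bonus

def learner_steps(theta, bonus, steps, lr=0.5):
    """Anticipated naive policy-gradient updates for both learners."""
    theta = theta.copy()
    for _ in range(steps):
        p = sigmoid(theta)
        grads = np.array([payoff_grad(p[0], p[1], bonus),
                          payoff_grad(p[1], p[0], bonus)]) * p * (1 - p)
        theta += lr * grads
    return theta

def social_welfare(p):
    return (6 * p[0] * p[1] + 5 * (p[0] * (1 - p[1]) + p[1] * (1 - p[0]))
            + 2 * (1 - p[0]) * (1 - p[1]))

def plan_bonus(theta, horizon=200, cost=0.2, candidates=np.linspace(0, 3, 31)):
    """The planner scores each candidate bonus by the social welfare reached
    after the learners' anticipated updates, minus the cost of paying it."""
    def score(b):
        p = sigmoid(learner_steps(theta, b, horizon))
        return social_welfare(p) - cost * b * (p[0] + p[1])
    return max(candidates, key=score)

theta = np.zeros(2)  # learners start undecided (cooperation probability 0.5)
print(sigmoid(learner_steps(theta, 0.0, 200)))         # no planner: drifts to defection
best = plan_bonus(theta)
print(best, sigmoid(learner_steps(theta, best, 200)))  # planned bonus: near-full cooperation
```

    The cost term keeps the planner from simply handing out the largest possible bonus, echoing the paper's concern with how much intervention is needed and whether it can eventually be withdrawn.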