HCI Model with Learning Mechanism for Cooperative Design in Pervasive Computing Environment
This paper presents a human-computer interaction model with a three-layer learning mechanism in a pervasive environment. We begin with a discussion of a number of important issues related to human-computer interaction, followed by a description of the architecture of a multi-agent cooperative design system for a pervasive computing environment. We present our proposed three-layer HCI model and introduce the group formation algorithm, which is predicated on a dynamic sharing niche technology. Finally, we explore the cooperative reinforcement learning and fusion algorithms; the paper closes with concluding observations and a summary of the principal work and contributions.
Communication-Efficient Cooperative Multi-Agent PPO via Regulated Segment Mixture in Internet of Vehicles
Multi-Agent Reinforcement Learning (MARL) has become a classic paradigm to
solve diverse, intelligent control tasks like autonomous driving in Internet of
Vehicles (IoV). However, the widely assumed existence of a central node to
implement centralized federated learning-assisted MARL might be impractical in
highly dynamic scenarios, and the excessive communication overheads possibly
overwhelm the IoV system. Therefore, in this paper, we design a
communication-efficient cooperative MARL algorithm, named RSM-MAPPO, to reduce the
communication overheads in a fully distributed architecture. In particular,
RSM-MAPPO enhances the multi-agent Proximal Policy Optimization (PPO) by
incorporating the idea of segment mixture and augmenting multiple model
replicas from received neighboring policy segments. Afterwards, RSM-MAPPO
adopts a theory-guided metric to regulate the selection of contributive
replicas to guarantee the policy improvement. Finally, extensive simulations in
a mixed-autonomy traffic control scenario verify the effectiveness of the
RSM-MAPPO algorithm.
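The segment-mixture idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the segment count, the averaging step, and `score_fn` (which stands in for RSM-MAPPO's theory-guided selection metric) are all illustrative assumptions.

```python
import numpy as np

def segment_mixture(own_params, neighbor_params, n_segments, score_fn):
    """Build candidate replicas by grafting one neighbor policy segment at a
    time into the local parameters, then keep only replicas whose score
    improves on the current policy (regulated selection)."""
    segments = np.array_split(np.arange(own_params.size), n_segments)
    base_score = score_fn(own_params)
    accepted = []
    for neigh in neighbor_params:          # one parameter vector per neighbor
        for seg in segments:
            replica = own_params.copy()
            replica[seg] = neigh[seg]      # graft the neighbor's segment
            if score_fn(replica) > base_score:   # contributive replicas only
                accepted.append(replica)
    # Merge the accepted replicas into the new policy (simple average here).
    return np.mean(accepted, axis=0) if accepted else own_params
```

In a fully distributed setting, each vehicle would run this locally on segments received from its neighbors, so no central node is needed.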
Spatial-Temporal-Aware Safe Multi-Agent Reinforcement Learning of Connected Autonomous Vehicles in Challenging Scenarios
Communication technologies enable coordination among connected and autonomous
vehicles (CAVs). However, it remains unclear how to utilize shared information
to improve the safety and efficiency of the CAV system. In this work, we
propose a framework of constrained multi-agent reinforcement learning (MARL)
with a parallel safety shield for CAVs in challenging driving scenarios. The
coordination mechanisms of the proposed MARL include information sharing and
cooperative policy learning, with Graph Convolutional Network (GCN)-Transformer
as a spatial-temporal encoder that enhances the agent's environment awareness.
The safety shield module with Control Barrier Functions (CBF)-based safety
checking protects the agents from taking unsafe actions. We design a
constrained multi-agent advantage actor-critic (CMAA2C) algorithm to train safe
and cooperative policies for CAVs. With the experiment deployed in the CARLA
simulator, we verify the effectiveness of the safety checking, spatial-temporal
encoder, and coordination mechanisms designed in our method by comparative
experiments in several challenging scenarios with the defined hazard vehicles
(HAZV). Results show that our proposed methodology significantly increases
system safety and efficiency in challenging scenarios.
Comment: This paper has been accepted by the 2023 IEEE International
Conference on Robotics and Automation (ICRA 2023). 6 pages, 5 figures.
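A CBF-based safety check of the kind the abstract describes can be sketched as a simple action filter. This is a generic discrete-time illustration under assumed one-step dynamics `f`, not the paper's CMAA2C shield.

```python
def cbf_safe_action(x, u_rl, h, f, alpha=0.5, u_fallback=0.0):
    """Discrete-time control-barrier-function filter: accept the RL action
    only if the barrier value at the predicted next state satisfies
    h(x') >= (1 - alpha) * h(x); otherwise override with a conservative
    fallback action. `f` is an assumed one-step dynamics model x' = f(x, u)."""
    x_next = f(x, u_rl)
    if h(x_next) >= (1.0 - alpha) * h(x):
        return u_rl          # RL action certified safe by the shield
    return u_fallback        # shield overrides the unsafe action
```

For example, with `x` a headway gap, `f(x, u) = x - u`, and `h(x) = x - d_min`, the filter blocks any closing speed that would shrink the barrier value too fast.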
Cooperation in Multi-Agent Reinforcement Learning
As progress in reinforcement learning (RL) gives rise to increasingly general and powerful artificial intelligence, society needs to anticipate a possible future in which multiple RL agents must learn and interact in a shared multi-agent environment. When a single principal has oversight of the multi-agent system, how should agents learn to cooperate via centralized training to achieve individual and global objectives? When agents belong to self-interested principals with imperfectly-aligned objectives, how can cooperation emerge from fully-decentralized learning? This dissertation addresses both questions by proposing novel methods for multi-agent reinforcement learning (MARL) and demonstrating the empirical effectiveness of these methods in high-dimensional simulated environments.
To address the first case, we propose new algorithms for fully-cooperative MARL in the paradigm of centralized training with decentralized execution. Firstly, we propose a method based on multi-agent curriculum learning and multi-agent credit assignment to address the setting where global optimality is defined as the attainment of all individual goals. Secondly, we propose a hierarchical MARL algorithm to discover and learn interpretable and useful skills for a multi-agent team to optimize a single team objective. Extensive experiments with ablations show the strengths of our approaches over state-of-the-art baselines.
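The centralized-training-with-decentralized-execution split named above can be summarized in a toy sketch. The linear actors and critic here are placeholders, not the dissertation's architectures.

```python
import numpy as np

def decentralized_actions(actor_weights, local_obs):
    """Execution: each agent selects an action from its OWN observation only,
    using its own (here, linear) actor parameters."""
    return [int(np.argmax(w @ o)) for w, o in zip(actor_weights, local_obs)]

def centralized_value(critic_w, joint_obs):
    """Training: a single critic scores the concatenated joint observation,
    enabling credit assignment across the whole team."""
    return float(critic_w @ np.concatenate(joint_obs))
```

The key property is the asymmetry: the critic sees everything during training, while each deployed actor needs only local input.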
To address the second case, we propose learning algorithms to attain cooperation within a population of self-interested RL agents. We propose a new agent equipped with the ability to incentivize other RL agents and to account explicitly for the other agents' learning processes. This agent overcomes the challenging limitations of fully-decentralized training and generates emergent cooperation in difficult social dilemmas. Then, we extend and apply this technique to the problem of incentive design, in which a central incentive designer optimizes a global objective solely by intervening on the rewards of a population of independent RL agents. Experiments on the problem of optimal taxation in a simulated market economy demonstrate the effectiveness of this approach.
Ph.D. dissertation
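The idea of intervening only on rewards can be illustrated with a prisoner's-dilemma toy: a designer-added cooperation bonus can flip the dominant strategy. The payoff numbers and bonus are illustrative assumptions, not the dissertation's taxation model.

```python
import numpy as np

# Row player's payoffs; row/column 0 = cooperate, 1 = defect.
pd = np.array([[3, 0],
               [5, 1]])   # classic prisoner's dilemma: defect dominates

def best_response(payoff, opp_action):
    """Action maximizing the row player's payoff against a fixed opponent."""
    return int(np.argmax(payoff[:, opp_action]))

def shape_rewards(payoff, bonus):
    """Incentive-designer intervention: add `bonus` to cooperation payoffs
    only, leaving the environment itself untouched."""
    shaped = payoff.astype(float).copy()
    shaped[0, :] += bonus
    return shaped
```

With a large enough bonus, cooperation becomes the best response to either opponent action, so independent learners converge to it without any change to the underlying game dynamics.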