Analysing the behaviour of robot teams through relational sequential pattern mining
This report outlines the use of a relational representation in a Multi-Agent
domain to model the behaviour of the whole system. A desired property in these
systems is the ability of the team members to work together cooperatively to
achieve a common goal. The aim is to define a systematic method to verify
effective collaboration among the members of a team and to compare different
multi-agent behaviours. Using external observations of a Multi-Agent System to
analyse, model, and recognize agent behaviour can be very useful for directing
team actions. In particular, this report focuses on the challenge of
autonomous, unsupervised sequential learning of the team's behaviour from
observations. Our approach learns a symbolic, relational representation that
translates raw multi-agent, multivariate observations of a dynamic, complex
environment into a set of sequential behaviours characteristic of the team in
question, represented as sequences of first-order logic atoms. We propose a
relational learning algorithm to mine meaningful frequent patterns among the
relational sequences in order to characterise team behaviours. We compared the
performance of two teams in the RoboCup four-legged league that take very
different approaches to the game: one uses Case-Based Reasoning, the other a
purely reactive behaviour.

Comment: 25 pages
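The core idea of the first abstract, mining frequent patterns from sequences of first-order atoms, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the atom names, traces, and the simple contiguous-subsequence counting are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical behaviour traces: each trace is a sequence of first-order
# atoms observed for the team. All atoms here are illustrative.
def frequent_patterns(sequences, min_support=2, max_len=3):
    """Count contiguous subsequences (up to max_len atoms) and keep
    those occurring in at least min_support traces."""
    counts = Counter()
    for seq in sequences:
        seen = set()
        for n in range(1, max_len + 1):
            for i in range(len(seq) - n + 1):
                seen.add(tuple(seq[i:i + n]))
        counts.update(seen)  # count each pattern once per trace (support)
    return {p: c for p, c in counts.items() if c >= min_support}

traces = [
    ["pass(a1,a2)", "move(a2,goal)", "shoot(a2)"],
    ["pass(a1,a2)", "move(a2,goal)", "pass(a2,a3)"],
    ["dribble(a1)", "pass(a1,a2)", "move(a2,goal)"],
]
patterns = frequent_patterns(traces)
```

A real relational miner would additionally generalise over arguments (e.g. unifying `a1`, `a2` into variables), which this sketch omits for brevity.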
Survey of Recent Multi-Agent Reinforcement Learning Algorithms Utilizing Centralized Training
Much work has been dedicated to the exploration of Multi-Agent Reinforcement
Learning (MARL) paradigms implementing a centralized learning with
decentralized execution (CLDE) approach to achieve human-like collaboration in
cooperative tasks. Here, we discuss variations of centralized training and
describe a recent survey of algorithmic approaches. The goal is to explore how
different implementations of information-sharing mechanisms in centralized
learning may give rise to distinct group-coordinated behaviors in multi-agent
systems performing cooperative tasks.

Comment: This article appeared in the news at:
https://www.army.mil/article/247261/army_researchers_develop_innovative_framework_for_training_a
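The CLDE split described above can be illustrated with a toy sketch: decentralized actors that see only their local observation, plus a critic that sees the joint observation during training. The class names, tabular storage, and update rule are illustrative assumptions, not any specific surveyed algorithm.

```python
import random

class Agent:
    """Decentralized actor: at execution time it acts on its own
    local observation only (no access to other agents' views)."""
    def __init__(self, actions):
        self.actions = actions
        self.policy = {}  # local_obs -> preferred action

    def act(self, local_obs):
        return self.policy.get(local_obs, random.choice(self.actions))

class CentralCritic:
    """Centralized critic: during training it may condition on the
    joint observation and joint action of the whole team."""
    def __init__(self):
        self.q = {}  # (joint_obs, joint_action) -> value estimate

    def update(self, joint_obs, joint_action, reward, lr=0.5):
        key = (joint_obs, joint_action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + lr * (reward - old)  # running estimate
        return self.q[key]
```

The asymmetry is the point: the critic's extra information shapes the actors' policies during training, but is discarded at execution time.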
Inverse Factorized Q-Learning for Cooperative Multi-agent Imitation Learning
This paper concerns imitation learning (IL), i.e., the problem of learning to
mimic expert behaviors from demonstrations, in cooperative multi-agent systems.
The learning problem under consideration poses several challenges,
characterized by high-dimensional state and action spaces and intricate
inter-agent dependencies. In a single-agent setting, IL can be performed
efficiently through an inverse soft-Q learning process given expert
demonstrations. Extending this framework to a multi-agent context, however,
requires simultaneously learning both local value functions, which capture
local observations and individual actions, and a joint value function that
exploits centralized learning. In this work, we introduce a novel
multi-agent IL algorithm designed to address these challenges. Our approach
enables the centralized learning by leveraging mixing networks to aggregate
decentralized Q functions. A main advantage of this approach is that the
weights of the mixing networks can be trained using information derived from
global states. We further establish conditions on the mixing networks under
which the multi-agent objective function is convex within the Q-function
space. We present extensive experiments on challenging competitive and
cooperative multi-agent game environments, including an advanced version of
the StarCraft Multi-Agent Challenge (SMACv2), which demonstrate the
effectiveness of our proposed algorithm compared to existing state-of-the-art
multi-agent IL algorithms.
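The mixing-network idea, aggregating per-agent Q values with weights derived from the global state, can be sketched in a few lines. This is a schematic illustration in the spirit of QMIX-style architectures, not the paper's network: the weight function and values are assumptions for the example.

```python
def mixing_weights(global_state):
    """Derive non-negative mixing weights from the global state.
    Non-negativity (here via abs) keeps the joint value monotone in
    each per-agent Q, so per-agent greedy actions remain consistent
    with the joint greedy action."""
    return [abs(w) for w in global_state]

def joint_q(per_agent_q, global_state):
    """Aggregate decentralized Q values into a joint value."""
    w = mixing_weights(global_state)
    return sum(wi * qi for wi, qi in zip(w, per_agent_q))
```

In a learned version, `mixing_weights` would be a hypernetwork conditioned on the global state; here it is a fixed toy function so the monotonicity property is easy to check.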
Herd's Eye View: Improving Game AI Agent Learning with Collaborative Perception
We present a novel perception model named Herd's Eye View (HEV) that adopts a
global perspective derived from multiple agents to boost the decision-making
capabilities of reinforcement learning (RL) agents in multi-agent environments,
specifically in the context of game AI. The HEV approach utilizes cooperative
perception to empower RL agents with a global reasoning ability, enhancing
their decision-making. We demonstrate the effectiveness of the HEV within
simulated game environments and highlight its superior performance compared to
traditional ego-centric perception models. This work contributes to cooperative
perception and multi-agent reinforcement learning by offering a more realistic
and efficient perspective for global coordination and decision-making within
game environments. Moreover, our approach promotes broader AI applications
beyond gaming by addressing constraints faced by AI in other fields such as
robotics. The code is available at https://github.com/andrewnash/Herds-Eye-View

Comment: AIIDE 2023 Poster
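The cooperative-perception idea behind HEV, fusing several agents' ego-centric observations into one shared world-frame view, can be sketched minimally. The function names, the translation-only frame transform, and the point detections are illustrative assumptions, not the HEV model.

```python
def ego_to_world(agent_pos, detection):
    """Translate an ego-frame detection into world coordinates
    (toy transform: translation only, no rotation)."""
    ax, ay = agent_pos
    dx, dy = detection
    return (ax + dx, ay + dy)

def fuse_views(agents):
    """agents: list of (agent_position, [ego-frame detections]).
    Returns the deduplicated set of world-frame detections, i.e.
    a shared global view available to every agent."""
    world = set()
    for pos, dets in agents:
        for d in dets:
            world.add(ego_to_world(pos, d))
    return world
```

Two agents observing the same object from different positions contribute one fused world-frame point, which is the global view an RL policy would reason over.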