An optimal rewiring strategy for cooperative multiagent social learning
Multiagent coordination is a key problem in cooperative multiagent systems (MASs). It has been widely studied in both the fixed-agent repeated-interaction setting and the static social learning framework. However, two dynamic aspects of real-world MASs are currently neglected. First, network topologies can change dynamically during the course of interaction. Second, interaction utilities can differ between each pair of agents and are usually unknown before interaction. Both issues increase the difficulty of coordination. In this paper, we consider multiagent social learning in a dynamic environment in which agents can alter their connections and interact with randomly chosen neighbors whose utilities are unknown beforehand. We propose an optimal rewiring strategy that selects the most beneficial peers to maximize accumulated payoffs over long-run interactions. We empirically demonstrate the effectiveness of our approach in a variety of large-scale MASs.
Impact of Relational Networks in Multi-Agent Learning: A Value-Based Factorization View
Effective coordination and cooperation among agents are crucial for
accomplishing individual or shared objectives in multi-agent systems. In many
real-world multi-agent systems, agents possess varying abilities and
constraints, making it necessary to prioritize agents based on their specific
properties to ensure successful coordination and cooperation within the team.
However, most existing cooperative multi-agent algorithms do not take into
account these individual differences, and lack an effective mechanism to guide
coordination strategies. We propose a novel multi-agent learning approach that
incorporates relationship awareness into value-based factorization methods.
Given a relational network, our approach utilizes inter-agent relationships to
discover new team behaviors by prioritizing certain agents over others,
accounting for differences between them in cooperative tasks. We evaluated the
effectiveness of our proposed approach by conducting fifteen experiments in two
different environments. The results demonstrate that our proposed algorithm can
influence and shape team behavior, guide cooperation strategies, and expedite
agent learning. Therefore, our approach shows promise for use in multi-agent
systems, especially when agents have diverse properties.

Comment: Accepted to the International Conference on Decision and Control (IEEE CDC 2023).