Combining coordination mechanisms to improve performance in multi-robot teams
Coordination is essential to achieving good performance in cooperative multiagent systems. To date, most work has focused on either implicit or explicit coordination mechanisms, while relatively little work has examined the benefits of combining the two approaches. In this work we demonstrate that combining explicit and implicit mechanisms can significantly improve coordination and system performance over either approach alone. First, we use difference evaluations (which aim to compute an agent's contribution to the team) and stigmergy to promote implicit coordination. Second, we introduce an explicit coordination mechanism dubbed Intended Destination Enhanced Artificial State (IDEAS), in which an agent incorporates other agents' intended destinations directly into its own state. The IDEAS approach requires no formal negotiation between agents and is based on passive information sharing. Finally, we combine these two approaches on a variant of a team-based multi-robot exploration domain, and show that agents using both explicit and implicit coordination outperform other learning agents by up to 25%.
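The two mechanisms the abstract names can be sketched minimally as follows. The reward function, POI representation, and function names here are illustrative assumptions, not the paper's actual domain or implementation:

```python
def global_reward(observed_pois):
    """Team-level reward G(z): count of distinct POIs observed.
    (Hypothetical stand-in for the paper's exploration-domain reward.)"""
    return len(set(observed_pois))

def difference_evaluation(agent_idx, per_agent_pois):
    """Difference evaluation D_i = G(z) - G(z_-i): agent i's marginal
    contribution, computed by removing agent i's observations."""
    all_pois = [p for pois in per_agent_pois for p in pois]
    without_i = [p for j, pois in enumerate(per_agent_pois)
                 if j != agent_idx for p in pois]
    return global_reward(all_pois) - global_reward(without_i)

def ideas_state(own_state, others_destinations):
    """IDEAS-style state augmentation: append other agents' intended
    destinations to the agent's own state (passive information sharing,
    no negotiation)."""
    return tuple(own_state) + tuple(others_destinations)
```

An agent that observes only POIs already covered by teammates receives a difference evaluation of zero, which is what steers learners toward genuinely new contributions.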
Distortion of Agent States for Improved Coordination
Many real-world problems have partial solutions or intermediary steps that can lead toward solving the problem. When assembling robotic teams to solve these problems, we have intuition about which intermediary steps are more useful than others. We examine methods to identify and apply this designer intuition to tightly coupled multiagent problems. In a method analogous to potential-based reward shaping, we shape the perceived value of points of interest (POIs) in a ground-rover observation problem based on the potential for further team coordination if the observing agent goes to observe that POI. These state distortion methods use information from the current world state, and as such are constructed independently of the learning method used. Methods for direct state shaping from Nasroullahi [1], which are based on predictions about other agents' actions, are extended further into the future. From this initial work, we were inspired to create new methods that readily scale to POI problems of arbitrary coupling dimension. These new methods show no performance degradation in less coupled domains, and sustain operational capacity in more difficult, tightly coupled problems where traditional methods break down. This field of direct state distortion for increased cooperation and performance is relatively unexplored, and we conclude by laying out future directions for this area of work.
Key Words: multiagent learning, state-shaping methods, tightly coupled problems
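The value-shaping idea above can be sketched as inflating a POI's perceived value by how close it is to meeting its coupling requirement (the number of simultaneous observers it needs). The scaling rule and names here are assumptions for illustration, not the paper's exact formulation:

```python
def shaped_poi_value(base_value, observers_committed, coupling_requirement):
    """Distort the perceived value of a POI by the potential for further
    coordination: a POI that is closer to having enough simultaneous
    observers appears more attractive to the next agent choosing where
    to go. (Illustrative linear scaling, not the paper's exact rule.)"""
    progress = min(observers_committed, coupling_requirement) / coupling_requirement
    return base_value * (1.0 + progress)
```

Because the shaping reads only the current world state (how many observers are committed), it can wrap any underlying learner, which is the learning-method independence the abstract claims.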