4 research outputs found

    Algorithms for distributed exploration

    No full text
    In this paper we propose algorithms for a set of problems in which a distributed team of agents tries to compile a global map of the environment from local observations. We focus on two approaches: one based on behavioural agent technology, where agents are pulled (or repelled) by various forces, and another in which agents follow an approximate planning approach based on dynamic programming. We study these approaches under different conditions, such as different types of environments, varying sensor and communication ranges, and the availability of prior knowledge of the map. The results show that in most cases the simpler behavioural agent teams perform at least as well as, if not better than, the teams based on approximate planning and dynamic programming. The research has practical implications not only for distributed exploration tasks but also for analogous distributed search and optimisation problems.
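    The abstract does not give the authors' actual algorithms, but a minimal sketch of the force-based behavioural idea it describes might look like the following: each agent is attracted toward nearby unexplored cells and repelled by its teammates, and moves one grid step along the resulting net force. The grid size, sensor radius, force weights, and update loop are illustrative assumptions, not details from the paper.

    ```python
    # Sketch of a force-based behavioural exploration team (assumed details, not the paper's code).
    import numpy as np

    GRID = 20                    # assumed square grid size
    SENSE = 3                    # assumed sensor radius (cells)
    ATTRACT, REPEL = 1.0, 2.0    # assumed force weights

    def step(agents, explored):
        """Move every agent one cell along its net attraction/repulsion force."""
        new_positions = []
        for i, pos in enumerate(agents):
            force = np.zeros(2)
            # Attraction toward unexplored cells within sensor range.
            for x in range(GRID):
                for y in range(GRID):
                    if not explored[x, y]:
                        d = np.array([x, y]) - pos
                        dist = np.linalg.norm(d)
                        if 0 < dist <= SENSE:
                            force += ATTRACT * d / dist**2
            # Repulsion from the other agents, which spreads the team out.
            for j, other in enumerate(agents):
                if j != i:
                    d = pos - other
                    dist = np.linalg.norm(d)
                    if dist > 0:
                        force += REPEL * d / dist**2
            move = np.sign(force).astype(int)        # at most one step per axis
            nxt = np.clip(pos + move, 0, GRID - 1)
            new_positions.append(nxt)
            explored[tuple(nxt)] = True              # local observation updates the shared map
        return np.array(new_positions), explored

    # Usage: three agents starting in a corner of an unexplored grid.
    agents = np.array([[0, 0], [0, 1], [1, 0]])
    explored = np.zeros((GRID, GRID), dtype=bool)
    for _ in range(50):
        agents, explored = step(agents, explored)
    print("cells explored:", explored.sum())
    ```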

    Solving Optimization Problem Using Multi-agent Model Based on Belief Interaction

    No full text

    Combining Policy Search with Planning in Multi-agent Cooperation

    No full text
    It is cooperation that essentially differentiates multi-agent systems (MASs) from single-agent intelligence. In realistic MAS applications such as RoboCup, repeated work has shown that traditional machine learning (ML) approaches have difficulty mapping directly from cooperative behaviours to actuator outputs. To overcome this problem, vertical layered architectures are commonly used to break cooperation down into behavioural layers; ML is then used to generate the different low-level skills, and a planning mechanism is added to create high-level cooperation. We propose a novel method called Policy Search Planning (PSP), in which policy search is used to find an optimal policy for selecting plans from a plan pool. PSP extends an existing gradient-based policy-search method (GPOMDP) to a MAS domain. We demonstrate how PSP can be used in RoboCup Simulation, and our experimental results show that it is robust, adaptive, and outperforms other methods. © 2009 Springer Berlin Heidelberg.
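    The paper's own implementation is not reproduced in the abstract, but the core idea it names, a parameterised policy that selects plans from a plan pool and is improved by a likelihood-ratio policy gradient, can be sketched as below. The plan names, state features, reward signal, and learning rate are illustrative assumptions; a faithful GPOMDP implementation would additionally use eligibility traces over multi-step episodes.

    ```python
    # Sketch of policy-gradient plan selection over a plan pool (assumed details, not the paper's code).
    import numpy as np

    PLANS = ["wing_attack", "through_pass", "hold_possession"]   # hypothetical plan pool
    N_FEATURES = 4

    rng = np.random.default_rng(0)
    theta = np.zeros((len(PLANS), N_FEATURES))   # policy parameters, one row per plan

    def softmax_policy(features):
        """Return plan-selection probabilities for the current state features."""
        scores = theta @ features
        scores -= scores.max()                    # numerical stability
        probs = np.exp(scores)
        return probs / probs.sum()

    def episode(features):
        """Select one plan, observe a (simulated) reward, and return the policy-gradient term."""
        probs = softmax_policy(features)
        a = rng.choice(len(PLANS), p=probs)
        reward = rng.normal(loc=1.0 if a == 0 else 0.0)   # stand-in for the match outcome signal
        # Likelihood-ratio gradient of log pi(a|s) for a softmax policy.
        grad = -np.outer(probs, features)
        grad[a] += features
        return reward * grad

    alpha = 0.05                                  # assumed learning rate
    for _ in range(2000):
        features = rng.random(N_FEATURES)         # stand-in for game-state features
        theta += alpha * episode(features)

    print("learned plan preferences:", softmax_policy(np.ones(N_FEATURES)))
    ```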