
    Towards formal models and languages for verifiable Multi-Robot Systems

    Incorrect operation of a Multi-Robot System (MRS) may not only lead to unsatisfactory results, but can also cause economic losses and threats to safety. These threats may not always be apparent, since they may arise as unforeseen consequences of the interactions between elements of the system. This calls for tools and techniques that can help provide guarantees about MRS behaviour. We think that, whenever possible, these guarantees should be backed up by formal proofs to complement traditional approaches based on testing and simulation. We believe that tailored linguistic support for specifying MRSs is a major step towards this goal. In particular, reducing the gap between the typical features of an MRS and the level of abstraction of the linguistic primitives would simplify both the specification of these systems and the verification of their properties. In this work, we review different agent-oriented languages and their features; we then consider a selection of case studies of interest and implement them using the surveyed languages. We also evaluate and compare the effectiveness of the proposed solutions, considering, in particular, the ease of expressing non-trivial behaviour.
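
    As an illustrative aside (not drawn from the paper or any of the surveyed languages), the sketch below shows in miniature the kind of guarantee such tools aim for: it exhaustively explores the joint state space of a toy two-robot grid system and either certifies a no-collision property or returns a reachable counterexample. The grid size, move set, and property are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a toy exhaustive state-space check for a
# two-robot grid system. Not the paper's method; all names and the
# checked property are assumptions made for illustration.
from itertools import product

GRID = 3                                   # 3x3 grid
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def neighbours(pos):
    """Cells a robot may occupy after one step (including staying put)."""
    x, y = pos
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID and 0 <= ny < GRID:
            yield (nx, ny)

def successors(state):
    """Joint successor states of the two robots (synchronous moves)."""
    a, b = state
    yield from product(neighbours(a), neighbours(b))

def check_no_collision(initial):
    """Explore all reachable joint states; report a collision if one exists."""
    seen, frontier = {initial}, [initial]
    while frontier:
        state = frontier.pop()
        if state[0] == state[1]:           # safety property violated
            return False, state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

if __name__ == "__main__":
    ok, witness = check_no_collision(((0, 0), (2, 2)))
    print("property holds" if ok else f"collision reachable at {witness}")
```

    With unconstrained moves the checker finds a reachable collision and returns it as a witness, which is exactly the kind of unforeseen interaction between system elements that the abstract argues formal verification should surface.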

    Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach

    Recent years have seen an increasing need for on-line monitoring of teams of cooperating agents, e.g., for visualization or performance tracking. However, in monitoring deployed teams, we often cannot rely on the agents to always communicate their state to the monitoring system. This paper presents a non-intrusive approach to monitoring by 'overhearing', where the monitored team's state is inferred (via plan recognition) from team members' routine communications, exchanged as part of their coordinated task execution, and observed (overheard) by the monitoring system. Key challenges in this approach include the demanding run-time requirements of monitoring, the scarcity of observations (which increases monitoring uncertainty), and the need to scale up monitoring to potentially large teams. To address these, we present a set of complementary novel techniques that exploit knowledge of the social structures and procedures in the monitored team: (i) an efficient probabilistic plan-recognition algorithm, well suited to processing communications as observations; (ii) an approach that exploits knowledge of the team's social behavior to predict future observations during execution (reducing monitoring uncertainty); and (iii) monitoring algorithms that trade expressivity for scalability, representing only certain useful monitoring hypotheses while allowing any number of agents and their different activities to be represented in a single coherent entity. We present an empirical evaluation of these techniques, in combination and separately, in monitoring a deployed team of agents running on machines physically distributed across the country and engaged in complex, dynamic task execution. We also compare the performance of these techniques to human expert and novice monitors, and show that the techniques presented are capable of monitoring at human-expert levels, despite the difficulty of the task.
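
    As an illustrative aside (not reproduced from the paper), the sketch below shows one simple way plan recognition over overheard messages can be framed: a Bayesian belief over candidate team plans, rescaled by each overheard message's likelihood under every plan. The plan names, message model, and priors are invented for illustration and do not represent the algorithm or team procedures described above.

```python
# Illustrative sketch only: a minimal Bayesian update over candidate team
# plans given overheard routine messages. All plans, likelihoods, and
# priors below are assumptions made for illustration.

# P(message | plan): how likely each routine message is under each candidate plan.
MESSAGE_MODEL = {
    "escort-convoy":    {"waypoint-reached": 0.5, "threat-spotted": 0.1, "refuel-request": 0.4},
    "patrol-perimeter": {"waypoint-reached": 0.7, "threat-spotted": 0.2, "refuel-request": 0.1},
    "pursue-target":    {"waypoint-reached": 0.2, "threat-spotted": 0.7, "refuel-request": 0.1},
}

def update_beliefs(beliefs, message):
    """One Bayes step: rescale each plan's belief by the message likelihood, then normalize."""
    posterior = {plan: p * MESSAGE_MODEL[plan].get(message, 1e-6)
                 for plan, p in beliefs.items()}
    total = sum(posterior.values())
    return {plan: p / total for plan, p in posterior.items()}

if __name__ == "__main__":
    # Uniform prior over candidate plans; overheard messages arrive in order.
    beliefs = {plan: 1 / len(MESSAGE_MODEL) for plan in MESSAGE_MODEL}
    for overheard in ["waypoint-reached", "threat-spotted", "threat-spotted"]:
        beliefs = update_beliefs(beliefs, overheard)
        print(overheard, {plan: round(b, 3) for plan, b in beliefs.items()})
```

    In this toy run the belief shifts towards the plan whose message model best explains the overheard traffic, which conveys the basic idea of inferring a team's state from routine communications without direct reports to the monitor.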