
    MetTeL: A Generic Tableau Prover.


    Formal verification of an autonomous personal robotic assistant

    Human–robot teams are likely to be used in a variety of situations wherever humans require the assistance of robotic systems. Obvious examples include healthcare and manufacturing, in which people need the assistance of machines to perform key tasks. It is essential for robots working in close proximity to people to be both safe and trustworthy. In this paper we examine formal verification of a high-level planner/scheduler for autonomous personal robotic assistants such as Care-O-bot™. We describe how a model of Care-O-bot and its environment was developed using Brahms, a multi-agent workflow language. Formal verification was then carried out by translating this to the input language of an existing model checker. Finally we present some formal verification results and describe how these could be complemented by simulation-based testing and real-world end-user validation in order to increase the practical and perceived safety and trustworthiness of robotic assistants.

    "The fridge door is open": temporal verification of a robotic assistant's behaviours

    Robotic assistants are being designed to help, or work with, humans in a variety of situations, from assistance within domestic settings, through medical care, to industrial settings. Whilst robots have been used in industry for some time, they are often limited in terms of their range of movement or range of tasks. A new generation of robotic assistants have more freedom to move, and are able to autonomously make decisions and decide between alternatives. For people to adopt such robots, they will have to be shown to be both safe and trustworthy. In this paper we focus on formal verification of a set of rules that have been developed to control the Care-O-bot, a robotic assistant located in a typical domestic environment. In particular, we apply model-checking, an automated and exhaustive algorithmic technique, to check whether formal temporal properties are satisfied on all the possible behaviours of the system. We prove a number of properties relating to robot behaviours, their priority and interruptibility, helping to support both safety and trustworthiness of robot behaviours.
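    The "automated and exhaustive" character of model checking described above can be illustrated in miniature. The sketch below is a hypothetical toy, not the actual Care-O-bot rule set: a two-variable transition system (is the fridge door open? has a reminder been issued?) is explored exhaustively by breadth-first search, and a safety property is checked in every reachable state.

    ```python
    from collections import deque

    # Illustrative sketch only (invented states, not the real Care-O-bot model):
    # we enumerate ALL reachable states of a tiny door/reminder system and
    # verify the safety property "the robot never reminds the user about a
    # door that is closed".

    def successors(state):
        door_open, reminded = state
        yield (True, reminded)            # environment: door is opened
        yield (False, False)              # environment: door is closed (reminder reset)
        if door_open and not reminded:
            yield (door_open, True)       # robot rule fires: issue the reminder

    def safe(state):
        door_open, reminded = state
        return door_open or not reminded  # a reminder exists only while the door is open

    def check(initial=(False, False)):
        """Breadth-first exploration of every reachable state; returns a
        counterexample state if the safety property is ever violated."""
        seen, frontier = {initial}, deque([initial])
        while frontier:
            state = frontier.popleft()
            if not safe(state):
                return False, state       # counterexample found
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True, None

    print(check())  # (True, None): the property holds in every reachable state
    ```

    Real model checkers such as SPIN do essentially this over vastly larger state spaces, with temporal-logic properties rather than a single state predicate.
    
    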

    06261 Abstracts Collection -- Foundations and Practice of Programming Multi-Agent Systems

    From 25.06.06 to 30.06.06, the Dagstuhl Seminar 06261 "Foundations and Practice of Programming Multi-Agent Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    A Roadmap to Pervasive Systems Verification

    The complexity of pervasive systems arises from the many different aspects that such systems possess. A typical pervasive system may be autonomous, distributed, concurrent and context-based, and may involve humans and robotic devices working together. If we wish to formally verify the behaviour of such systems, the formal methods for pervasive systems will surely also be complex. In this paper, we move towards being able to formally verify pervasive systems and outline our approach wherein we distinguish four distinct dimensions within pervasive system behaviour and utilise different, but appropriate, formal techniques for verifying each one.

    Formal Verification of Astronaut-Rover Teams for Planetary Surface Operations

    This paper describes an approach to assuring the reliability of autonomous systems for Astronaut-Rover (ASRO) teams using the formal verification of models in the Brahms multi-agent modelling language. Planetary surface rovers have proven essential to several manned and unmanned missions to the moon and Mars. The first rovers were tele-operated or manually operated, but autonomous systems are increasingly being used to increase the effectiveness and range of rover operations on missions such as the NASA Mars Science Laboratory. It is anticipated that future manned missions to the moon and Mars will use autonomous rovers to assist astronauts during extravehicular activity (EVA), including science, technical and construction operations. These ASRO teams have the potential to significantly increase the safety and efficiency of surface operations. We describe a new Brahms model in which an autonomous rover may perform several different activities, including assisting an astronaut during EVA. These activities compete for the autonomous rover's "attention" and therefore the rover must decide which activity is currently the most important and engage in that activity. The Brahms model also includes an astronaut agent, which models an astronaut's predicted behaviour during an EVA. The rover must also respond to the astronaut's activities. We show how this Brahms model can be simulated using the Brahms integrated development environment. The model can then also be formally verified with respect to system requirements using the SPIN model checker, through automatic translation from Brahms to PROMELA (the input language for SPIN). We show that such formal verification can be used to determine that mission- and safety-critical operations are conducted correctly, and therefore increase the reliability of autonomous systems for planetary rovers in ASRO teams.

    A synergistic and extensible framework for multi-agent system verification

    Recently there has been a proliferation of tools and languages for modeling multi-agent systems (MAS). Verification tools, correspondingly, have been developed to check properties of these systems. Most MAS verification tools, however, have their own input language and often specialize in one verification technology, or only support checking a specific type of property. In this work we present an extensible framework that leverages mainstream verification tools to successfully reason about various types of properties. We describe the verification of models specified in the Brahms agent modeling language to demonstrate the feasibility of our approach. We chose Brahms because it is used to model real instances of interactions between pilots, air-traffic controllers, and automated systems at NASA. Our framework takes as input a Brahms model along with a Java implementation of its semantics. We then use Java PathFinder to explore all possible behaviors of the model and, also, produce a generalized intermediate representation that encodes these behaviors. The intermediate representation is automatically transformed to the input language of mainstream model checkers, including PRISM, SPIN, and NuSMV, allowing us to check different types of properties. We validate our approach on a model that contains key elements from the Air France Flight 447 accident.
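    The pipeline step this abstract describes, rendering an intermediate representation of agent behaviours into a model checker's input language, can be sketched very simply. Everything below is a hypothetical illustration: the IR layout, agent names, and behaviour names are invented here and are not the framework's actual representation; the output is a fragment of PROMELA of the kind SPIN accepts.

    ```python
    # Hypothetical sketch of an IR-to-PROMELA translation: a toy intermediate
    # representation mapping agents to ordered behaviours (names invented for
    # illustration) is mechanically rendered as PROMELA proctypes so a model
    # checker such as SPIN could interleave and explore them.

    IR = {
        "pilot":     ["read_altitude", "adjust_pitch"],
        "autopilot": ["monitor_speed", "disengage"],
    }

    def to_promela(ir):
        """Render each agent as a PROMELA proctype with one atomic step per
        behaviour, plus an init block that launches every agent process."""
        chunks = []
        for agent, behaviours in ir.items():
            steps = "\n".join(
                f'    atomic {{ printf("{agent}:{b}\\n") }};'
                for b in behaviours
            )
            chunks.append(f"proctype {agent}() {{\n{steps}\n}}")
        runs = "\n".join(f"  run {agent}();" for agent in ir)
        chunks.append(f"init {{\n{runs}\n}}")
        return "\n\n".join(chunks)

    print(to_promela(IR))
    ```

    The value of such a generic IR is exactly what the abstract claims: the same structure can be rendered once per backend (PROMELA for SPIN, the PRISM language, NuSMV modules), so each checker's strengths can be applied to one source model.
    
    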

    Cognitive modeling of social behaviors

    To understand both individual cognition and collective activity, perhaps the greatest opportunity today is to integrate the cognitive modeling approach (which stresses how beliefs are formed and drive behavior) with social studies (which stress how relationships and informal practices drive behavior). The crucial insight is that norms are conceptualized in the individual mind as ways of carrying out activities. This requires for the psychologist a shift from only modeling goals and tasks (why people do what they do) to modeling behavioral patterns (what people do) as they are engaged in purposeful activities. Instead of a model that exclusively deduces actions from goals, behaviors are also, if not primarily, driven by broader patterns of chronological and located activities (akin to scripts). To illustrate these ideas, this article presents an extract from a Brahms simulation of the Flashline Mars Arctic Research Station (FMARS), in which a crew of six people are living and working for a week, physically simulating a Mars surface mission. The example focuses on the simulation of a planning meeting, showing how physiological constraints (e.g., hunger, fatigue), facilities (e.g., the habitat's layout) and group decision making interact. Methods are described for constructing such a model of practice, from video and first-hand observation, and how this modeling approach changes how one relates goals, knowledge, and cognitive architecture. The resulting simulation model is a powerful complement to task analysis and knowledge-based simulations of reasoning, with many practical applications for work system design, operations management, and training.

    Towards the Verification of Human-Robot Teams

    Human-agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.