    Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification

    Formal Methods are mathematically-based techniques for software design and engineering, which enable the unambiguous description of, and reasoning about, a system's behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: Why use Formal Methods for Autonomous Systems? To answer this question, this position paper describes five recipes, collected from the literature, for formally verifying aspects of an autonomous system. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system's state space. This paper argues that Formal Methods are a powerful addition to the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques. Comment: Accepted at the Journal of Risk and Reliability.
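
    As a flavour of the first recipe, here is a minimal sketch of an unambiguous, machine-checkable requirement, written in Linear Temporal Logic (LTL); the property and its atomic propositions are illustrative assumptions on our part, not an example taken from the paper:

        % A hedged example requirement: "whenever an obstacle is detected,
        % the system eventually stops". A model checker can verify such a
        % property exhaustively over the system's state space.
        \[
          \Box\,\big(\mathit{obstacle\_detected} \rightarrow \Diamond\,\mathit{stopped}\big)
        \]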

    Formal Verification of a Map Merging Protocol in the Multi-Agent Programming Contest

    Communication is a critical part of enabling multi-agent systems to cooperate, so applying formal methods to the protocols that govern communication within multi-agent systems provides useful confidence in their reliability. In this paper, we describe the formal verification of a complex communication protocol that coordinates agents merging maps of their environment. The protocol was used by the LFC team in the 2019 edition of the Multi-Agent Programming Contest (MAPC). Our specification of the protocol is written in Communicating Sequential Processes (CSP), which is well suited to specifying agent communication protocols because of its focus on concurrent communicating systems. We validate the specification's behaviour using scenarios where the correct behaviour is known, and verify that eventually all of the maps are merged.
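
    To give a flavour of such a specification, here is a minimal CSP-style sketch, written in LaTeX notation; the process structure and channel names are our own illustrative assumptions, not the LFC team's actual protocol:

        % Hedged sketch: each agent shares its own map, receives another
        % agent's map, and then performs a merge event; the system is the
        % parallel composition of all agents. A liveness check in the spirit
        % of the paper's property would assert that every agent eventually
        % performs its merge event.
        \begin{align*}
          \mathit{Agent}(i) &= \mathit{share}!m_i \rightarrow \mathit{share}?m \rightarrow \mathit{merge}.i \rightarrow \mathit{Agent}(i)\\
          \mathit{System}   &= \textstyle\big\|_{i \in \mathit{Agents}} \mathit{Agent}(i)
        \end{align*}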

    A Compositional Approach to Verifying Modular Robotic Systems

    Robotic systems used in safety-critical industrial situations often rely on modular software architectures, and increasingly include autonomous components. Verifying that these modular robotic systems behave as expected requires approaches that can cope with, and preferably take advantage of, this inherent modularity. This paper describes a compositional approach to specifying the nodes in robotic systems built using the Robot Operating System (ROS), where each node is specified using First-Order Logic (FOL) assume-guarantee contracts that link the specification to the ROS implementation. We introduce inference rules that facilitate the composition of these node-level contracts to derive system-level properties. We also present a novel Domain-Specific Language, the ROS Contract Language (RCL), which captures a node's FOL specification and links this contract to its implementation. RCL contracts can be automatically translated, by our tool Vanda, into executable monitors, which we use to verify the contracts at runtime. We illustrate our approach through the specification and verification of an autonomous rover engaged in the remote inspection of a nuclear site, and finish with smaller examples that illustrate other useful features of our framework. Comment: Version submitted to RA
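
    To make the idea of an executable contract monitor concrete, here is a minimal Python sketch of an assume-guarantee check; the contract, its predicates, and the node are hypothetical, and this is not Vanda's actual output format:

        # Minimal sketch of an executable assume-guarantee monitor; the
        # contract and its predicates are illustrative assumptions, not
        # generated by Vanda.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Contract:
            # If the assumption holds on a node's input, the guarantee
            # must hold on its output (A => G).
            name: str
            assume: Callable[[float], bool]
            guarantee: Callable[[float], bool]

        def check(contract: Contract, node_input: float, node_output: float) -> None:
            # A contract is violated only when the assumption holds but the
            # guarantee does not; a false assumption satisfies A => G vacuously.
            if contract.assume(node_input) and not contract.guarantee(node_output):
                raise AssertionError(f"contract '{contract.name}' violated")

        # Hypothetical contract for a rover speed-controller node: assume a
        # non-negative range reading, guarantee a bounded commanded speed.
        speed_contract = Contract(
            name="speed_controller",
            assume=lambda reading: reading >= 0.0,
            guarantee=lambda speed: speed <= 2.0,
        )

        check(speed_contract, node_input=0.4, node_output=1.5)  # passes silently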

    Verifiable self-certifying autonomous systems

    Autonomous systems are increasingly being used in safety- and mission-critical domains, including aviation, manufacturing, healthcare and the automotive industry. Systems for such domains are often verified with respect to essential requirements set by a regulator, as part of a process called certification. In principle, autonomous systems can be deployed if they can be certified for use. However, certification is especially challenging because the condition of both the system and its environment can change after deployment, limiting the effective use of the system. In this paper we discuss the technological and regulatory background for such systems, and introduce an architectural framework that supports verifiably correct dynamic self-certification by the system, potentially allowing deployed systems to operate more safely and effectively.

    Modular Verification of Autonomous Space Robotics

    Ensuring that autonomous space robot control software behaves as it should is crucial, particularly as software failure in space often equates to mission failure and could potentially endanger nearby astronauts and costly equipment. To minimise mission failure caused by software errors, we can use a variety of tools and techniques to verify that the software behaves as intended. In particular, distinct nodes in a robotic system often require different verification techniques to ensure that they behave as expected. This paper introduces a method for integrating the various verification techniques that are applied to robotic software, via a First-Order Logic (FOL) specification that captures each node's assumptions and guarantees. These FOL specifications are then used to guide the verification of the individual nodes, whether by testing or by the use of a formal method. We also outline a way of measuring our confidence in the verification of the entire system in terms of the verification techniques used.
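
    To illustrate the shape of these node-level specifications, here is a minimal assume-guarantee pair in FOL; the node and its predicates are a hypothetical example of ours, not one taken from the paper:

        % Hedged example: a hypothetical navigation node assumes a fresh pose
        % estimate at every time t and, under that assumption, guarantees a
        % bounded commanded velocity.
        \[
          \forall t.\; \mathit{fresh}(\mathit{pose}(t)) \;\Rightarrow\; \lVert \mathit{cmd\_vel}(t) \rVert \le v_{\max}
        \]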

    Proceedings Second Workshop on Formal Methods for Autonomous Systems

    Autonomous systems are highly complex and present unique challenges for the application of formal methods. Autonomous systems act without human intervention, and are often embedded in a robotic system, so that they can interact with the real world. As such, they exhibit the properties of safety-critical, cyber-physical, hybrid, and real-time systems. The goal of the Formal Methods for Autonomous Systems (FMAS) workshop is to bring together leading researchers who are tackling the unique challenges of autonomous systems using formal methods, to present recent and ongoing work. We are interested in the use of formal methods to specify, model, or verify autonomous or robotic systems, in whole or in part. We are also interested in successful industrial applications and potential future directions for this emerging application of formal methods.