6 research outputs found

    Verification of Decision Making Software in an Autonomous Vehicle: An Industrial Case Study

    Correctness of autonomous driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Many different hardware and software components (e.g. sensing, decision making, actuation, and control) interact to solve the autonomous driving task, leading to a level of complexity that brings new challenges for the formal verification community. Though formal verification has been used to prove the correctness of software, there are significant challenges in transferring such techniques to an agile software development process and in ensuring widespread industrial adoption. In the light of these challenges, the identification of appropriate formalisms, and consequently the right verification tools, has a significant impact on addressing them. In this paper, we evaluate the application of different formal techniques from supervisory control theory, model checking, and deductive verification to verify existing decision and control software (in development) for an autonomous vehicle. We discuss how the verification objective differs with respect to the choice of formalism and the level of formality that can be applied. Insights from the case study show the need for multiple formal methods to prove correctness, and the difficulty of capturing the right level of abstraction to model and specify the formal properties for the verification objectives.
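    The abstract above contrasts supervisory control theory, model checking, and deductive verification as candidate techniques for decision and control software. As a rough, hedged illustration of the deductive style only, the sketch below annotates a toy speed-selection function with the pre- and postconditions one would write as requires/ensures clauses for a deductive verification tool; the function, its names, and its thresholds are invented for this example and are not taken from the paper.

    # Illustrative only: a toy decision function with the kind of
    # pre-/postconditions a deductive verification tool would discharge.
    # All names and thresholds are assumptions made up for this sketch.
    def select_speed(current_speed: float, gap_to_lead: float, max_speed: float) -> float:
        """Choose a target speed that never exceeds max_speed (toy requirement)."""
        # Precondition (a 'requires' clause in a deductive tool)
        assert current_speed >= 0.0 and max_speed > 0.0 and gap_to_lead >= 0.0

        safety_margin = 10.0  # metres, assumed constant for the example
        if gap_to_lead < safety_margin:
            target = max(0.0, current_speed - 5.0)   # brake
        else:
            target = current_speed + 2.0             # speed up gently
        target = min(target, max_speed)              # never exceed the limit

        # Postcondition (an 'ensures' clause in a deductive tool)
        assert 0.0 <= target <= max_speed
        return target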

    Automatically Learning Formal Models from Autonomous Driving Software

    The correctness of autonomous driving software is of utmost importance, as incorrect behavior may have catastrophic consequences. Formal model-based engineering techniques can help guarantee correctness and thereby allow the safe deployment of autonomous vehicles. However, challenges exist for widespread industrial adoption of formal methods. One of these challenges is the model construction problem. Manual construction of formal models is time-consuming, error-prone, and intractable for large systems. Automating model construction would be a big step towards widespread industrial adoption of formal methods for system development, re-engineering, and reverse engineering. This article applies active learning techniques to obtain formal models of an existing (under development) autonomous driving software module implemented in MATLAB. This demonstrates the feasibility of automated learning for automotive industrial use. Additionally, practical challenges in applying automata learning, and possible directions for integrating automata learning into the automotive software development workflow, are discussed.
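    The abstract describes applying active automata learning to an existing MATLAB module. As a minimal sketch of how such a learning loop is typically wired up, the snippet below puts a toy stand-in for the system under learning behind a reset/step adapter and runs an L*-style learner over it. It assumes the open-source AALpy library and its SUL, RandomWalkEqOracle, and run_Lstar entry points; the toy turn-indicator logic, its events, and its states are invented for the example, and the paper's actual tooling, alphabet, and setup may differ.

    # Minimal active automata learning sketch (assumes AALpy: pip install aalpy).
    # The 'system under learning' is a toy turn-indicator controller invented
    # for this example, standing in for the real MATLAB module.
    from aalpy.base import SUL
    from aalpy.oracles import RandomWalkEqOracle
    from aalpy.learning_algs import run_Lstar


    class IndicatorSUL(SUL):
        """Reset/step adapter around the toy controller."""

        def __init__(self):
            super().__init__()
            self.state = 'off'

        def pre(self):
            # Called before every membership query: reset to the initial state.
            self.state = 'off'

        def post(self):
            pass  # nothing to tear down for the toy system

        def step(self, letter):
            # Apply one input event and return the observable output.
            if letter == 'stalk_up':
                self.state = 'right' if self.state != 'right' else 'off'
            elif letter == 'stalk_down':
                self.state = 'left' if self.state != 'left' else 'off'
            elif letter == 'cancel':
                self.state = 'off'
            return self.state


    if __name__ == '__main__':
        alphabet = ['stalk_up', 'stalk_down', 'cancel']
        sul = IndicatorSUL()
        # Approximate equivalence queries by random walks on the system.
        eq_oracle = RandomWalkEqOracle(alphabet, sul, num_steps=2000)
        learned_model = run_Lstar(alphabet, sul, eq_oracle, automaton_type='mealy')
        print(learned_model)  # learned Mealy machine of the toy controller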

    Correct-by-Construction Tactical Planners for Automated Cars

    One goal of developing automated cars is to completely free people from driving tasks. Automated cars that require no human driver need to handle all traffic situations that a human driver is expected to handle, and possibly more. Although human drivers cause a lot of traffic accidents, they still have a very low accident and failure rate that automated systems must match. Tactical planners are responsible for making discrete decisions during the coming seconds or minutes. As with all subsystems in an automated car, these planners need to be supported by a credible and convincing argument for their correctness. The planners' decisions affect the environment, and the planners need to interact with other road users in a feedback loop, so the correctness of the planners depends on their behavior in relation to other drivers and the environment over time. One possibility to ascertain their correctness is to deploy the planners in real traffic. To be sufficiently certain that a tactical planner is safe by that method, it would need to be tested for 255 million miles without having an accident. Formal methods can, in contrast to testing, mathematically prove that the requirements are fulfilled. Hence, they are a promising alternative for making credible arguments for tactical planners' correctness. The topic of this thesis is how formal methods can be used in the automotive industry to design safe tactical planners. Of interest is both how automotive systems should be modeled in formal frameworks, and how formal methods can be used practically within the automotive development process. The main findings of this thesis are that it is natural to express desired properties of tactical planners in formal languages and to use formal methods to prove their correctness. Model Checking, Reactive Synthesis, and Supervisory Control Theory have been used in the design and development process of tactical planners, and all three methods have their benefits, depending on the application. Formal synthesis is an especially interesting class of formal methods because it can automatically generate a planner from requirements and models. Formal synthesis removes the need to manually develop and implement the planner, so development efforts can be directed to formalizing good requirements on the planner and good assumptions on the environment. However, formal synthesis has two limitations: the resulting planner is a black box that is difficult to inspect, and it is difficult to find a level of abstraction that allows both detailed requirements and generic planners.
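    A mileage figure of this magnitude is the kind of number produced by a standard zero-failure reliability argument: if a planner is driven for N miles with no accidents, the one-sided upper confidence bound at confidence C on its accident rate lambda satisfies exp(-lambda * N) = 1 - C, so N = -ln(1 - C) / lambda. The sketch below evaluates that formula; the rate and confidence values are example inputs, not the thesis's exact assumptions, so the result differs somewhat from 255 million.

    import math

    def required_failure_free_miles(target_rate_per_mile: float, confidence: float) -> float:
        """Miles that must be driven with zero failures to claim, with the given
        one-sided confidence, that the failure rate is below target_rate_per_mile."""
        return -math.log(1.0 - confidence) / target_rate_per_mile

    if __name__ == '__main__':
        # Example inputs (assumptions for illustration): roughly one fatality per
        # 100 million miles, demonstrated with 95 % confidence.
        rate = 1.0 / 100_000_000
        miles = required_failure_free_miles(rate, 0.95)
        print(f'{miles / 1e6:.0f} million failure-free miles')  # ~300 million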

    Application of formal verification to the lane change module of an autonomous vehicle

    For autonomous vehicles, correct behavior is of the utmost importance, as unexpected incorrect behavior can have catastrophic outcomes. However, as with any large-scale software development, it is not easy to get the system correct. Since the system is made up of multiple sub-modules that interact with each other, unexpected behavior can arise from incorrect interactions when one module has unfulfilled expectations on another. This paper describes how formal verification was applied to the lane change module of the decision and control software (under development) for an autonomous vehicle. The module was manually modelled as an extended finite-state machine, as were some of the requirements. When the Supremica software was applied to perform the formal verification, some bugs were discovered in the model. Setting up additional unit tests triggering the incorrect behavior showed that this behavior was also present in the actual code. For some of the errors, the applied corrections removed the particular error, thus demonstrating the power of formal verification.
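    As an illustration of the modelling style the abstract describes (the actual model was written for Supremica and is not reproduced here), the sketch below encodes a toy lane-change decision logic as an extended finite-state machine with a guard variable and exhaustively explores its reachable configurations to check a simple safety requirement. All location, event, and variable names are invented for the example; the toy model deliberately allows the gap to be lost mid-manoeuvre, so the search reports a violation, illustrating the kind of modelling bug such a check can uncover (the paper's real findings were of course different).

    from collections import deque

    # Toy extended finite-state machine for a lane-change decision.
    # A configuration is (location, gap_ok); 'gap_ok' is the extended variable.
    EVENTS = {'request', 'gap_found', 'gap_lost', 'commit', 'done', 'abort'}

    def step(location, gap_ok, event):
        """Return the successor configuration, or None if the event is disabled."""
        if event == 'gap_found':
            return (location, True)
        if event == 'gap_lost':
            return (location, False)
        if location == 'lane_keeping' and event == 'request':
            return ('preparing', gap_ok)
        if location == 'preparing':
            if event == 'commit' and gap_ok:   # guard: only commit when the gap is ok
                return ('changing', gap_ok)
            if event == 'abort':
                return ('lane_keeping', gap_ok)
        if location == 'changing' and event == 'done':
            return ('lane_keeping', gap_ok)
        return None

    def check_safety():
        """Breadth-first search over all reachable configurations, flagging any
        that violate the toy requirement: never be in 'changing' while the gap
        is not ok."""
        initial = ('lane_keeping', False)
        seen, frontier, violations = {initial}, deque([initial]), []
        while frontier:
            location, gap_ok = frontier.popleft()
            if location == 'changing' and not gap_ok:
                violations.append((location, gap_ok))
            for event in EVENTS:
                succ = step(location, gap_ok, event)
                if succ is not None and succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
        return violations

    if __name__ == '__main__':
        bad = check_safety()
        print('safety violations:', bad or 'none')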

    Feasible, Robust and Reliable Automation and Control for Autonomous Systems

    The Special Issue book focuses on highlighting current research and developments in the automation and control field for autonomous systems, as well as showcasing state-of-the-art control strategy approaches for autonomous platforms. The book is co-edited by distinguished international control system experts currently based in Sweden, the United States of America, and the United Kingdom, with contributions from reputable researchers from China, Austria, France, the United States of America, Poland, and Hungary, among many others. The editors believe the ten articles published within this Special Issue will be highly appealing to control-systems researchers working on applications in the fields of ground, aerial, and maritime vehicles and robotics, as well as to industrial audiences.