3 research outputs found

    Automated Learning Setups in Automata Learning

    International audience

    On Provably Correct Decision-Making for Automated Driving

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings several other benefits, such as the possibility to provide independent mobility for people who cannot and/or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at different levels. The aim of the decision-making components in these systems is to make safe decisions at all times. Safety verification of these systems is therefore crucial for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, which are techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system. The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development. A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three different formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving, and identifies the challenges associated with each method. It identifies the need for more rigour in the requirement refinement process and presents one possible solution using a formal model-based safety analysis approach. To address challenges in the manual modelling process, a possible solution is proposed in which formal models are automatically learned directly from code.
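    The idea of automatically learning a formal model from observed behaviour, mentioned above, can be illustrated very loosely as follows. This is not the thesis's actual method, only a minimal sketch: execution traces of a (hypothetical) driving component are folded into a finite transition system, over which a simple safety property is then checked by inspecting reachable states. All state and event names are invented for illustration.

    ```python
    def learn_transition_system(traces):
        """Build a transition map from traces of (event, next_state) pairs,
        all starting in an assumed initial state 'init'."""
        transitions = {}
        for trace in traces:
            state = "init"
            for event, next_state in trace:
                transitions[(state, event)] = next_state
                state = next_state
        return transitions

    def check_safety(transitions, bad_states):
        """A trivial safety check: no learned transition enters a bad state."""
        return all(dst not in bad_states for dst in transitions.values())

    # Hypothetical observed traces of a decision-making component.
    traces = [
        [("request", "planning"), ("plan_ok", "driving")],
        [("request", "planning"), ("plan_fail", "safe_stop")],
    ]
    model = learn_transition_system(traces)
    print(check_safety(model, bad_states={"collision"}))  # True
    ```

    Real automata-learning algorithms (such as L*) additionally pose membership and equivalence queries to converge on a minimal automaton; the sketch above only captures the passive, trace-driven extreme of the idea.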

    Iterative refinement of specification for component based embedded systems

    The current practice of component-based engineering raises concerns in industry when the specification of proprietary components suffers from inaccuracy and incompleteness. Engineers face difficulties in producing quality systems since they lack knowledge of the interoperability of components. To address this issue, we present a novel framework for iterative refinement of specifications for component-based systems. The novelty is the use of a preliminary behavioural model as a source for triggering refinement iterations. Moreover, the framework exploits rigorous formal techniques to achieve high-level system validation as an integral part of the refinement procedure. The framework has been evaluated on an automotive system in which the embedded software control units were developed by third-party vendors. The final results produced an improved formal system specification that identified several previously unknown behaviours.