
    Experimental Evaluation of Verification and Validation Tools on Martian Rover Software

    We report on a study to determine the maturity of different verification and validation (V&V) technologies applied to a representative example of NASA flight software. The study consisted of a controlled experiment in which three technologies (static analysis, runtime analysis, and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover controller. What makes this study unique is that it is, to the best of our knowledge, the first controlled experiment to compare formal-methods-based tools to testing on a realistic, industrial-size example, with an emphasis on collecting as much data as possible on the performance of the tools and the participants. The paper includes a description of the Rover code that was analyzed, the tools used, and a detailed account of the experimental setup and the results. Due to the complexity of setting up the experiment, our results cannot be generalized, but we believe the study can still serve as a valuable point of reference for future work of this kind. It confirmed our belief that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore, the results of the experiment inspired a novel framework for testing the next generation of the Rover.
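
    As an illustration only, the sketch below shows the kind of seeded concurrency error, a lost-update race on shared state, that a study like this asks tools and testers to locate; nothing in it is taken from the Rover controller, and the thread and iteration counts are arbitrary.

    # Illustrative only: a lost-update race on a shared counter. The race may or
    # may not manifest on any given run, which is why such seeded concurrency
    # errors are hard to expose by plain testing, while model checking or
    # runtime analysis of interleavings can find them more systematically.
    import threading

    counter = 0                      # shared state, updated without a lock

    def increment(times):
        global counter
        for _ in range(times):
            tmp = counter            # read
            tmp += 1                 # modify
            counter = tmp            # write: a thread switch here loses updates

    threads = [threading.Thread(target=increment, args=(500_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(f"expected 1000000, got {counter}")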

    Automated Learning of Probabilistic Assumptions for Compositional Reasoning

    Probabilistic verification techniques have been applied to the formal modelling and analysis of a wide range of systems, from communication protocols such as Bluetooth, to nanoscale computing devices, to biological cellular processes. In order to tackle the inherent challenge of scalability, compositional approaches to verification are sorely needed. An example is assume-guarantee reasoning, where each component of a system is analysed independently, using assumptions about the other components that it interacts with. We discuss recent developments in the area of automated compositional verification techniques for probabilistic systems. In particular, we describe techniques to automatically generate probabilistic assumptions that can be used as the basis for compositional reasoning. We do so using algorithmic learning techniques, which have already proved to be successful for the generation of assumptions for compositional verification of non-probabilistic systems. We also present recent improvements and extensions to this work and survey some of the promising potential directions for further research in this area.
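
    To make the setting concrete, the probabilistic guarantees and assumptions used in such assume-guarantee rules typically reduce to quantitative reachability queries on a Markov model ("with what probability does the component reach a goal state?"). The sketch below computes such a reachability probability for a small, made-up discrete-time Markov chain by value iteration; the states, probabilities, and threshold are illustrative assumptions, not taken from the paper.

    # Value iteration for reachability probabilities in a small DTMC.
    # States and transition probabilities are invented for illustration.

    DTMC = {                       # state -> list of (successor, probability)
        "init":  [("try", 1.0)],
        "try":   [("ok", 0.9), ("retry", 0.08), ("fail", 0.02)],
        "retry": [("try", 1.0)],
        "ok":    [("ok", 1.0)],    # absorbing goal state
        "fail":  [("fail", 1.0)],  # absorbing failure state
    }

    def reach_probability(chain, goal, iterations=1000):
        """Probability of eventually reaching `goal` from each state."""
        prob = {s: (1.0 if s == goal else 0.0) for s in chain}
        for _ in range(iterations):
            for s in chain:
                if s != goal:
                    prob[s] = sum(p * prob[t] for t, p in chain[s])
        return prob

    if __name__ == "__main__":
        p = reach_probability(DTMC, "ok")
        # A guarantee such as "reach 'ok' with probability at least 0.95" holds
        # from "init" iff the computed value meets the threshold.
        print(f'P(eventually "ok" | start in "init") = {p["init"]:.4f}')

    In a compositional setting, such a query would be posed against a single component composed with a (learned) probabilistic assumption about its environment, rather than against the full system.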

    Test Case Generation by Symbolic Execution: Basic Concepts, a CLP-based Instance, and Actor-based Concurrency

    The focus of this tutorial is white-box test case generation (TCG) based on symbolic execution. Symbolic execution consists in executing a program with the contents of its input arguments being symbolic variables rather than concrete values. A symbolic execution tree characterizes the set of execution paths explored during the symbolic execution of a program. Test cases can then be obtained from the successful branches of the tree. The tutorial is split into three parts: (1) The first part overviews the basic techniques used in TCG to ensure termination, handle heap-manipulating programs, achieve compositionality in the process, and guide TCG towards interesting test cases. (2) In the second part, we focus on a particular implementation of the TCG framework in constraint logic programming (CLP). In essence, the imperative object-oriented program under test is automatically transformed into an equivalent executable CLP-translated program. The main advantage of CLP-based TCG is that the standard mechanism of CLP performs symbolic execution for free. The PET system is open-source software that implements this approach. (3) Finally, in the last part, we study the extension of TCG to actor-based concurrent programs.
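
    A minimal sketch of the core idea follows, assuming a hypothetical toy program classify and brute-force search over a small input domain in place of a real constraint (or CLP) solver: each feasible path condition of the symbolic execution tree yields one concrete test case.

    # A toy "program under test"; each if-branch adds a node to the symbolic
    # execution tree. The program, the path labels, and the input domain are
    # all invented for this sketch.
    from itertools import product

    def classify(x, y):
        if x > y:
            if x > 10:
                return "big-first"
            return "first"
        return "second"

    # Path conditions of the symbolic execution tree: one predicate over the
    # symbolic inputs per successful leaf.
    PATH_CONDITIONS = {
        "big-first": lambda x, y: x > y and x > 10,
        "first":     lambda x, y: x > y and not x > 10,
        "second":    lambda x, y: not x > y,
    }

    def generate_tests(domain=range(-5, 16)):
        """One concrete test case per feasible path (brute force stands in
        for the constraint solver a real TCG tool would use)."""
        tests = {}
        for label, pc in PATH_CONDITIONS.items():
            for x, y in product(domain, repeat=2):
                if pc(x, y):                 # path condition is satisfiable
                    tests[label] = (x, y)
                    break
        return tests

    if __name__ == "__main__":
        for label, (x, y) in generate_tests().items():
            assert classify(x, y) == label   # the test exercises exactly that path
            print(f"path {label!r}: x={x}, y={y}")

    In the CLP-based instance described in the tutorial, these path conditions would instead be constraints handled natively by the CLP engine rather than by enumeration.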

    Active automata learning in practice: An annotated bibliography of the years 2011 to 2016

    Active automata learning is slowly becoming a standard tool in the toolbox of the software engineer. As systems become ever more complex and development becomes more distributed, inferred models of system behavior become an increasingly valuable asset for understanding and analyzing a system’s behavior. Five years ago (in 2011) we surveyed the then-current state of active automata learning research and applications of active automata learning in practice. We predicted four major topics to be addressed in the near future: efficiency, expressivity of models, bridging the semantic gap between formal languages and analyzed components, and solutions to the inherent problem of incompleteness of active learning in black-box scenarios. In this paper we review the progress that has been made over the past five years, assess the status of active automata learning techniques with respect to applications in the field of software engineering, and present an updated agenda for future research.
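
    As a concrete illustration of the query model behind active automata learning, the following is a minimal L*-style learner (Maler-Pnueli variant) written against a made-up black box: the target accepts words over {a, b} with an even number of 'a's, membership queries call the black box directly, and equivalence queries are approximated by bounded exhaustive testing. None of this is taken from the surveyed papers; it only sketches the membership/equivalence interface that practical tools such as LearnLib provide.

    # Minimal L*-style active learner (Maler-Pnueli variant: counterexample
    # suffixes are added as experiments, so only closedness must be restored).
    # The "system under learning" is a made-up black box; a real application
    # would wrap a software component behind the same two query interfaces.
    from itertools import product

    ALPHABET = "ab"

    def membership(word):
        """Membership query: does the black box accept this word?"""
        return word.count("a") % 2 == 0    # even number of 'a's

    def equivalence(hypothesis, max_len=6):
        """Equivalence query, approximated by bounded exhaustive testing.
        Returns a counterexample word, or None if no disagreement is found."""
        for n in range(max_len + 1):
            for letters in product(ALPHABET, repeat=n):
                word = "".join(letters)
                if hypothesis(word) != membership(word):
                    return word
        return None

    def learn():
        S, E = [""], [""]                  # access prefixes / distinguishing suffixes

        def row(prefix):                   # one observation-table row
            return tuple(membership(prefix + e) for e in E)

        while True:
            # Restore closedness: every one-letter extension of a prefix in S
            # must have the same row as some prefix already in S.
            changed = True
            while changed:
                changed = False
                rows = {row(s) for s in S}
                for s in list(S):
                    for a in ALPHABET:
                        r = row(s + a)
                        if r not in rows:
                            S.append(s + a)
                            rows.add(r)
                            changed = True
            reps = {}                      # row -> representative prefix (a state)
            for s in S:
                reps.setdefault(row(s), s)

            def hypothesis(word):          # run the hypothesis DFA on a word
                state = ""
                for a in word:
                    state = reps[row(state + a)]
                return membership(state)   # acceptance = table entry for the empty suffix

            cex = equivalence(hypothesis)
            if cex is None:
                return reps
            for i in range(len(cex) + 1):  # add all suffixes of the counterexample
                if cex[i:] not in E:
                    E.append(cex[i:])

    if __name__ == "__main__":
        print(f"learned a DFA with {len(learn())} states")

    The two query interfaces, membership and equivalence, are all the learner needs from the system, which is what makes the approach attractive for black-box software components.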