    Inferring behavioral specifications from large-scale repositories by leveraging collective intelligence

    Despite their proven benefits, useful, comprehensible, and efficiently checkable specifications are not widely available. This is primarily because writing useful, non-trivial specifications from scratch is hard, time-consuming, and requires expertise that is not broadly available. Furthermore, the lack of specifications for widely used libraries and frameworks, caused by the high cost of writing them, tends to have a snowball effect: core libraries lack specifications, which makes specifying the applications that use them expensive. To contain the skyrocketing development and maintenance costs of high-assurance systems, this self-perpetuating cycle must be broken. The labor cost of specifying programs can be significantly decreased via advances in specification inference and synthesis; this has been attempted several times, but with limited success. We believe that practical specification inference and synthesis is an idea whose time has come. Fundamental breakthroughs in this area can be achieved by leveraging the collective intelligence embodied in the software artifacts of millions of open source projects. Fine-grained access to such data sets was until recently unavailable, but is now easy to obtain. We identify research directions and report our preliminary results on advances in specification inference that can be achieved by using such data sets to infer specifications.
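    As an illustration of the kind of inference such corpora enable, the following sketch mines candidate ordering specifications ("call a before call b") from API call sequences gathered across many client projects. This is a minimal illustration under assumed inputs, not the authors' technique; the corpus, the mine_ordering_specs helper, and the support threshold are all hypothetical.

        from collections import Counter
        from itertools import combinations

        def mine_ordering_specs(sequences, min_support=0.9):
            """Mine candidate 'a precedes b' specifications: for every pair
            of calls that co-occur in a sequence, keep an ordering if it
            holds in at least min_support of the co-occurrences."""
            co_occur = Counter()   # sorted pair (a, b) -> sequences containing both
            precedes = Counter()   # ordered pair (a, b) -> sequences with a before b
            for seq in sequences:
                first = {}
                for i, call in enumerate(seq):
                    first.setdefault(call, i)   # position of first occurrence
                for a, b in combinations(sorted(first), 2):
                    co_occur[(a, b)] += 1
                    if first[a] < first[b]:
                        precedes[(a, b)] += 1
                    else:
                        precedes[(b, a)] += 1
            specs = []
            for (a, b), n in co_occur.items():
                for x, y in ((a, b), (b, a)):
                    if precedes[(x, y)] / n >= min_support:
                        specs.append((x, y, precedes[(x, y)], n))
            return specs

        # Toy corpus standing in for call sequences mined from many projects.
        corpus = [
            ["open", "read", "close"],
            ["open", "write", "close"],
            ["open", "read", "read", "close"],
            ["lock", "update", "unlock"],
        ]
        for a, b, hits, total in mine_ordering_specs(corpus):
            print(f"{a} precedes {b} in {hits}/{total} co-occurrences")

    On a real corpus, the candidate specifications would then be ranked by support and vetted before being adopted as checkable specifications.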

    Model-Based Testing of Off-Nominal Behaviors

    Off-nominal behaviors (ONBs) are unexpected or unintended behaviors that may be exhibited by a system. They can be caused by implementation and documentation errors and are often triggered by unanticipated external stimuli, such as unforeseen sequences of events, out-of-range data values, or environmental issues. System specifications typically focus on nominal behaviors (NBs) and do not describe ONBs, their causes, or how the system should respond to them. In addition, untested occurrences of ONBs can compromise the safety and reliability of a system. This is especially dangerous in mission- and safety-critical systems, such as spacecraft, where software issues can lead to expensive mission failures, injuries, or even loss of life. To ensure the safety of the system, potential causes of ONBs need to be identified, and their handling in the implementation has to be verified and documented.

    This thesis describes the development and evaluation of model-based techniques for the identification and documentation of ONBs. Model-Based Testing (MBT) techniques have been used to provide automated support for thorough evaluation of software behavior. In MBT, models are used to describe the system under test (SUT) and to derive test cases for that SUT. The thesis is divided into two parts.

    The first part develops and evaluates an approach for the automated generation of MBT models and their associated test infrastructure, which is responsible for executing the test cases generated from the models. The models and the test infrastructure are generated from manual test cases for web-based systems, using a set of heuristic transformation rules and leveraging the structured nature of the SUT. This improvement to the MBT process was motivated by three case studies we conducted that evaluate MBT in terms of its effectiveness and efficiency for identifying ONBs; our experience led us to automate model and test-infrastructure creation, since these were among the most time-consuming tasks associated with MBT.

    The second part of the thesis presents a framework and associated tooling for the extraction and analysis of specifications for identifying and documenting ONBs. The framework infers behavioral specifications in the form of system invariants from automatically generated test data using data-mining techniques (e.g., association-rule mining). It follows an iterative test -> infer -> instrument -> retest paradigm, in which the initial invariants are refined with additional test data; a sketch of this loop appears below. This work shows how the scalability and accuracy of the resulting invariants can be improved with the help of static data- and control-flow analysis. Other improvements include an algorithm that leverages the iterative process to accurately infer invariants over variables with continuous values. Our evaluations of the framework have shown the utility of such automatically generated invariants as a means of updating and completing system specifications; they are also useful for understanding system behavior, including ONBs.
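    The following minimal sketch illustrates the test -> infer -> instrument -> retest loop for simple range and constancy invariants, in the spirit of the framework described above. It assumes trace records captured at an instrumented program point, represented here as Python dicts; the observe and render helpers and the data values are hypothetical, and the actual framework additionally applies association-rule mining and static data- and control-flow analysis.

        def observe(invariants, traces):
            """Fold a batch of trace records into per-variable (lo, hi)
            bounds. The first call infers initial bounds from the first
            test run; later calls refine them with new test data."""
            for record in traces:
                for var, val in record.items():
                    lo, hi = invariants.get(var, (val, val))
                    invariants[var] = (min(lo, val), max(hi, val))
            return invariants

        def render(invariants):
            """Render bounds as candidate invariants: constancy or a range."""
            return [f"{var} == {lo}" if lo == hi else f"{lo} <= {var} <= {hi}"
                    for var, (lo, hi) in invariants.items()]

        # Iteration 1: infer invariants from an initial automatically
        # generated test run (variable names and values are illustrative).
        inv = observe({}, [{"speed": 0, "mode": 1}, {"speed": 40, "mode": 1}])
        print(render(inv))   # ['0 <= speed <= 40', 'mode == 1']

        # Iteration 2: instrument the candidates, retest with new inputs,
        # and refine; the over-fitted 'mode == 1' weakens to a range.
        inv = observe(inv, [{"speed": 95, "mode": 2}])
        print(render(inv))   # ['0 <= speed <= 95', '1 <= mode <= 2']

    Because each refinement step can only weaken an invariant (widening a range or demoting constancy to a range), invariants that survive several iterations of the loop become increasingly trustworthy candidates for inclusion in the system specification.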