
    Symbolic PathFinder: Symbolic Execution of Java Bytecode

    Symbolic PathFinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
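
    As a rough illustration of the technique (not SPF's actual API, which operates on Java bytecode inside Java PathFinder), the hypothetical Java method below shows, in comments, the path constraints a symbolic executor would collect along each branch; the concrete arguments in main are the kind of solutions an off-the-shelf solver could return for those constraints.

        // Hypothetical example; x and y stand for symbolic inputs.
        public class Classify {
            static int classify(int x, int y) {
                if (x > y) {          // path constraint so far: x > y
                    if (x > 0) {      // x > y && x > 0
                        return 1;
                    }
                    return 2;         // x > y && x <= 0
                }
                return 3;             // x <= y
            }

            public static void main(String[] args) {
                // Inputs a constraint solver might return for the three paths:
                System.out.println(classify(1, 0));   // satisfies x > y && x > 0  -> 1
                System.out.println(classify(-1, -2)); // satisfies x > y && x <= 0 -> 2
                System.out.println(classify(0, 0));   // satisfies x <= y          -> 3
            }
        }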

    Automatically Generating Test Cases for Safety-Critical Software via Symbolic Execution

    Automated test generation based on symbolic execution can be beneficial for systematically testing safety-critical software, helping test engineers meet the strict testing requirements mandated by certification standards while at the same time controlling the costs of the testing process. Meanwhile, the development of safety-critical software is often constrained by programming languages or coding conventions that ban linguistic features believed to downgrade the safety of programs: e.g., they do not allow dynamic memory allocation and variable-length arrays, limit the way in which loops are used, forbid recursion, and bound the complexity of control conditions. As a matter of fact, these linguistic features are also the main efficiency blockers for state-of-the-art test generation approaches based on symbolic execution. This paper contributes new evidence of the effectiveness of generating test cases with symbolic execution for a significant class of industrial safety-critical systems. We specifically focus on Scade, a widely adopted model-based development language for safety-critical embedded software, and we report on a case study in which we exploited symbolic execution to automatically generate test cases for a set of safety-critical programs developed in Scade. To this end, we introduce a novel test generator that we developed in a recent industrial project on testing safety-critical railway software written in Scade, and we report on our experience of using this test generator for testing a set of Scade programs that belong to the development of an on-board signaling unit for high-speed rail. The results provide empirical evidence that symbolic execution is indeed a viable approach for generating high-quality test suites for the safety-critical programs considered in our case study.
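
    A minimal, hypothetical Java sketch (Scade itself is a dataflow modeling language, so this only mirrors the idea) of why such coding restrictions help symbolic execution: with a statically bounded loop, no recursion and no dynamic allocation, the path space is finite and can be enumerated exhaustively.

        // Hypothetical illustration: a statically bounded loop, no recursion,
        // no dynamic allocation, so the path space is finite and enumerable.
        public class Saturate {
            static final int N = 4;  // fixed bound, known at analysis time

            // Clamps each of N samples into [lo, hi]; at most 3^N paths overall.
            static int clampSum(int[] samples, int lo, int hi) {
                int sum = 0;
                for (int i = 0; i < N; i++) {          // exactly N iterations
                    int s = samples[i];
                    if (s < lo) s = lo;                // branch constraint: samples[i] < lo
                    else if (s > hi) s = hi;           // branch constraint: samples[i] > hi
                    sum += s;
                }
                return sum;
            }

            public static void main(String[] args) {
                System.out.println(clampSum(new int[] {5, -3, 10, 2}, 0, 8)); // prints 15
            }
        }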

    Automated Test Case Generation for an Autopilot Requirement Prototype

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and in the rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution, which allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.
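
    The following Java sketch is purely illustrative of the idea that symbolic execution of the design yields both the stimulus and the oracle of a generated test; the mode names, signature, and values are invented, not taken from ADEPT or the autopilot model.

        // Hypothetical generated test: both the input and the expected mode come
        // from symbolically executing the design model, so the test fails only
        // if the implementation diverges from the design.
        public class AutopilotModeTest {
            // Stand-in for the system under test (would be the real implementation).
            static String nextMode(String current, boolean captureArmed, int altitudeError) {
                if (current.equals("HOLD") && captureArmed && Math.abs(altitudeError) < 50) {
                    return "CAPTURE";
                }
                return current;
            }

            public static void main(String[] args) {
                // Generated input: a solver solution for the path constraint
                //   current == "HOLD" && captureArmed && |altitudeError| < 50
                String actual = nextMode("HOLD", true, 20);

                // Generated oracle: the mode the design model reaches on that path.
                String expected = "CAPTURE";

                if (!expected.equals(actual)) {
                    throw new AssertionError("expected " + expected + " but got " + actual);
                }
                System.out.println("test passed: " + actual);
            }
        }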

    SmartUnit: Empirical Evaluations for Automated Unit Testing of Embedded Software in Industry

    In this paper, we aim at automated coverage-based unit testing for embedded software. To achieve this goal, building on an analysis of the industrial requirements and our previous work on the automated unit testing tool CAUT, we built a new tool, SmartUnit, to address the engineering requirements that arise in our partner companies. SmartUnit is a dynamic symbolic execution implementation, which supports statement, branch, boundary value and MC/DC coverage. SmartUnit has been used to test more than one million lines of code in real projects. For confidentiality reasons, we select three in-house real projects for the empirical evaluations. We also carry out our evaluations on two open source database projects, SQLite and PostgreSQL, to test the scalability of our tool, since embedded software projects are mostly not large (5K-50K lines of code on average). From our experimental results, in general, more than 90% of functions in commercial embedded software achieve 100% statement, branch, and MC/DC coverage, more than 80% of functions in SQLite achieve 100% MC/DC coverage, and more than 60% of functions in PostgreSQL achieve 100% MC/DC coverage. Moreover, SmartUnit is able to find runtime exceptions at the unit testing level; we have also reported exceptions such as array index out of bounds and divide-by-zero in SQLite. Furthermore, we analyze the reasons for low coverage in automated unit testing in our setting and survey the situation of manual unit testing with respect to automated unit testing in industry. Comment: In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice Track, Gothenburg, Sweden, May 27-June 3, 2018 (ICSE-SEIP '18), 10 pages
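
    As a hypothetical illustration of the kind of defect reported above (SmartUnit itself targets embedded C, so this Java sketch only mirrors the concept), a dynamic symbolic execution engine treats the failing operation as one more constraint to solve: any feasible input with a negative index or a zero divisor reproduces the exception.

        // Hypothetical unit with two latent runtime faults that coverage-directed
        // dynamic symbolic execution would expose: an unchecked negative start
        // index and a division by a count that can be zero.
        public class Stats {
            // Average of the elements above `threshold`, starting at `firstIndex`.
            static int averageAbove(int[] values, int threshold, int firstIndex) {
                int sum = 0, count = 0;
                for (int i = firstIndex; i < values.length; i++) {  // no guard on firstIndex < 0
                    if (values[i] > threshold) {
                        sum += values[i];
                        count++;
                    }
                }
                return sum / count;   // ArithmeticException when nothing exceeds the threshold
            }

            public static void main(String[] args) {
                // Inputs a coverage-directed test generator could produce:
                try {
                    averageAbove(new int[] {1, 2, 3}, 0, -1);   // negative index -> out of bounds
                } catch (ArrayIndexOutOfBoundsException e) {
                    System.out.println("reproduced: " + e);
                }
                try {
                    averageAbove(new int[] {1, 2, 3}, 10, 0);   // count stays 0 -> divide by zero
                } catch (ArithmeticException e) {
                    System.out.println("reproduced: " + e);
                }
            }
        }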

    Integrated Module Testing and Module Verification

    In this dissertation, an integrated approach to formal module verification by model checking and module testing is described. The main focus lies on the verification of C functions. Specification-based testing and functional verification require a formalized module specification. For this purpose, an annotation language extending a pre-/post-condition syntax is developed and discussed. This annotation language allows the definition of logical conditions relating the program's pre-state to its post-state after executing the module. For requirements tracking, a test case specification is developed. The correctness conditions can be refined by the introduction of auxiliary variables. Besides the specification of the module under test, the presented annotation language allows modeling the behavior of external functions called by the module under test. By the specification of pre- and post-conditions as well as test cases, test data generation for both structural and functional testing is reduced to a reachability problem (as known from bounded model checking) within the module's control-flow graph. These reachability problems are investigated using symbolic execution. The strength of symbolic execution lies in its precision and its ability to reason about multiple program inputs simultaneously, but it also has limitations, such as aliasing and external function calls. These in turn are analyzed, and new algorithms are developed which overcome most of the detected limitations. Expansion and selection strategies for test case selection are developed and described. They make it possible to minimize the size of investigated states and the number of generated test cases while achieving maximal branch coverage. The developed algorithms and strategies are implemented in the test generator CTGEN, which generates test data for C1 structural coverage and for functional coverage. It also supports automated stub generation, where the data returned by a stub during test execution depends on the specification provided by the user. CTGEN is evaluated and compared with competing tools and produces competitive results.
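
    A minimal sketch of the reduction described above, using plain Java assert statements rather than the dissertation's own annotation language: generating a functional test amounts to asking whether a path is reachable on which the pre-condition holds at entry and the chosen exit condition holds at the end.

        // Hypothetical module with its contract written as Java assert statements
        // (run with -ea to enable them). A symbolic executor turns "reach the
        // failing assert" into a constraint system:
        //   pre-condition AND path condition AND NOT(post-condition)
        public class Clamp {
            // pre:  lo <= hi
            // post: lo <= result && result <= hi
            static int clamp(int x, int lo, int hi) {
                assert lo <= hi : "pre-condition violated";
                int result;
                if (x < lo)      result = lo;
                else if (x > hi) result = hi;
                else             result = x;
                assert lo <= result && result <= hi : "post-condition violated";
                return result;
            }

            public static void main(String[] args) {
                // Test data solving the path conditions of the three branches:
                System.out.println(clamp(-5, 0, 10));  // x < lo        -> 0
                System.out.println(clamp(50, 0, 10));  // x > hi        -> 10
                System.out.println(clamp(7, 0, 10));   // lo <= x <= hi -> 7
            }
        }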

    CTGEN - a Unit Test Generator for C

    We present a new unit test generator for C code, CTGEN. It generates test data for C1 structural coverage and functional coverage based on pre-/post-condition specifications or internal assertions. The generator supports automated stub generation, and the data to be returned by the stub to the unit under test (UUT) may be specified by means of constraints. The typical application field for CTGEN is embedded systems testing; therefore the tool can cope with the aliasing problems typical of low-level C, including pointer arithmetic, structures and unions. CTGEN creates complete test procedures which are ready to be compiled and run against the UUT. In this paper we describe the main features of CTGEN and their technical realisation, and we elaborate on its performance in comparison to a list of competing test generation tools. Since 2011, CTGEN has been used in industrial-scale test campaigns for embedded systems code in the automotive domain. Comment: In Proceedings SSV 2012, arXiv:1211.587
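
    The constraint-driven stubs mentioned above can be pictured as follows; this Java sketch is only a conceptual stand-in, since CTGEN works on C and has its own stub specification mechanism. The value the stub returns is treated as one more symbolic input, restricted by a user-given range.

        // Hypothetical constraint-driven stub: readSensor() is an external
        // dependency; the generator picks its return value subject to a
        // user-given constraint (here 0 <= value <= 100) so that the unit
        // under test can be driven down each branch.
        public class Controller {
            interface Sensor { int readSensor(); }

            // Unit under test: branches on the stubbed sensor value.
            static String decide(Sensor sensor) {
                int v = sensor.readSensor();
                if (v > 80) return "ALARM";
                if (v > 50) return "WARN";
                return "OK";
            }

            public static void main(String[] args) {
                // Three "stub solutions" a generator could pick, each within [0, 100]:
                System.out.println(decide(() -> 90));  // satisfies v > 80       -> ALARM
                System.out.println(decide(() -> 60));  // satisfies 50 < v <= 80 -> WARN
                System.out.println(decide(() -> 10));  // satisfies v <= 50      -> OK
            }
        }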

    An empirical investigation into branch coverage for C programs using CUTE and AUSTIN

    Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic-execution tool CUTE and the search-based tool AUSTIN on five non-trivial open source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied 'as is', with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied 'out of the box' in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches, which have hitherto been developed entirely independently.

    A Symbolic Execution Algorithm for Constraint-Based Testing of Database Programs

    In so-called constraint-based testing, symbolic execution is a common technique used as a part of the process to generate test data for imperative programs. Databases are ubiquitous in software and testing of programs manipulating databases is thus essential to enhance the reliability of software. This work proposes and evaluates experimentally a symbolic execution algorithm for constraint-based testing of database programs. First, we describe SimpleDB, a formal language which offers a minimal and well-defined syntax and semantics, to model common interaction scenarios between programs and databases. Secondly, we detail the proposed algorithm for symbolic execution of SimpleDB models. This algorithm considers a SimpleDB program as a sequence of operations over a set of relational variables, modeling both the database tables and the program variables. By integrating this relational model of the program with classical static symbolic execution, the algorithm can generate a set of path constraints for any finite path to test in the control-flow graph of the program. Solutions of these constraints are test inputs for the program, including an initial content for the database. When the program is executed with respect to these inputs, it is guaranteed to follow the path with respect to which the constraints were generated. Finally, the algorithm is evaluated experimentally using representative SimpleDB models. Comment: 12 pages - preliminary work
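
    A hypothetical Java sketch of the paper's central point, with an in-memory list standing in for a relation (SimpleDB is the paper's own formal language, so none of these names come from it): the path constraint ranges over the table content as well as the query parameter, so a solution includes an initial database state.

        import java.util.List;

        // Hypothetical sketch: the "table" is part of the test input. To drive
        // findDiscount down the branch returning a discount, the path constraint
        // requires the initial database to contain a row with the queried id and
        // loyaltyYears >= 5; the solver therefore also produces the table content.
        public class DiscountQuery {
            record Customer(int id, int loyaltyYears) {}

            static int findDiscount(List<Customer> customers, int id) {
                for (Customer c : customers) {
                    if (c.id() == id) {                         // constraint: a row with this id exists
                        return c.loyaltyYears() >= 5 ? 10 : 0;  // constraint: loyaltyYears >= 5
                    }
                }
                return -1;                                      // constraint: no row matches id
            }

            public static void main(String[] args) {
                // One solution: an initial table content plus the query parameter.
                List<Customer> table = List.of(new Customer(7, 6), new Customer(9, 1));
                System.out.println(findDiscount(table, 7));  // -> 10
                System.out.println(findDiscount(table, 9));  // -> 0
                System.out.println(findDiscount(table, 3));  // -> -1
            }
        }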