    Relational Symbolic Execution

    Symbolic execution is a classical program analysis technique used to show that programs satisfy or violate given specifications. In this work we generalize symbolic execution to support program analysis for relational specifications in the form of relational properties: properties about two runs of two programs on related inputs, or about two executions of a single program on related inputs. Relational properties are useful for formalizing notions in security and privacy, and for reasoning about program optimizations. We design a relational symbolic execution engine, named RelSym, which supports both interactive refutation and proving of relational properties for programs written in a language with arrays and for-like loops.
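
    To make the notion of a relational property concrete, the sketch below (our own illustration, not RelSym's input language; billing and its inputs are hypothetical) states non-interference as an assertion over two runs of the same C function that share the public input but differ in the secret one.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical program under analysis: the result should depend only on
 * the public input, not on the secret one (non-interference). */
static int billing(int public_units, int secret_account_id)
{
    (void)secret_account_id;          /* the secret must not influence the result */
    return public_units * 3;
}

int main(void)
{
    /* A relational property relates TWO runs: here the runs share the public
     * input but differ in the secret, and the outputs must agree.  A relational
     * symbolic executor explores such pairs of runs symbolically; this harness
     * only checks one concrete pair. */
    int pub = 42;
    int out1 = billing(pub, 1111);
    int out2 = billing(pub, 9999);
    assert(out1 == out2);             /* non-interference for this pair of runs */
    printf("relational check passed: %d == %d\n", out1, out2);
    return 0;
}
```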

    Exact Gap Computation for Code Coverage Metrics in ISO-C

    Test generation and test data selection are difficult tasks in model-based testing. Tests for a program can be combined into a test suite, and much research has been devoted to quantifying and improving the quality of test suites. Code coverage metrics estimate the quality of a test suite; the quality is considered good if the coverage value is high, ideally 100%. Unfortunately, 100% code coverage may be impossible to achieve, for example because of dead code. There is thus a gap between the feasible and the theoretical maximum code coverage value. Our review of the literature indicates that none of the current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages. We also describe an efficient approximation of the gap in all other cases. Thus, a tester can decide whether additional tests could achieve, or are even necessary for, better coverage.
    Comment: In Proceedings MBT 2012, arXiv:1202.582
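
    As a minimal illustration (not taken from the paper) of why such a gap can be unavoidable, the C sketch below contains a branch that is dead by construction, so feasible branch coverage stays strictly below 100% no matter which tests are added; clamp_positive is a hypothetical unit under test.

```c
#include <limits.h>
#include <stdio.h>

/* Hypothetical unit under test: the second condition is dead code, so no
 * test suite can cover its true branch and 100% branch coverage is
 * infeasible.  The exact coverage gap here is that single branch. */
static int clamp_positive(int x)
{
    if (x < 0)
        x = 0;
    if (x < -1)          /* dead: x is never below -1 after the clamp above */
        return INT_MIN;  /* unreachable */
    return x;
}

int main(void)
{
    /* Two tests cover every feasible branch; the remaining branch is the gap. */
    printf("%d %d\n", clamp_positive(-5), clamp_positive(7));
    return 0;
}
```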

    A Static Analyzer for Large Safety-Critical Software

    We show that abstract interpretation-based static program analysis can be made efficient and precise enough to formally verify a class of properties for a family of large programs with few or no false alarms. This is achieved by refinement of a general-purpose static analyzer, followed by its adaptation to particular programs of the family by the end-user through parametrization. The approach is applied to proving the soundness of data manipulation operations at the machine level for periodic synchronous safety-critical embedded software. The main novelties are the design principle of static analyzers by refinement and adaptation through parametrization; the symbolic manipulation of expressions to improve the precision of abstract transfer functions; the octagon, ellipsoid, and decision tree abstract domains, all with sound handling of rounding errors in floating-point computations; widening strategies (with thresholds, delayed); and the automatic determination of the parameters (parametrized packing).
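
    One ingredient listed above, widening with thresholds, can be sketched in a few lines of C. The code below is an assumed, simplified illustration of the general idea, not the analyzer's implementation, and the threshold values are made up: instead of jumping straight to infinity when an interval keeps growing at a loop head, the bound is pushed only to the next threshold, which often coincides with an actual loop bound.

```c
#include <limits.h>
#include <stdio.h>

/* Interval abstract values and a widening operator with thresholds (sketch). */
typedef struct { int lo, hi; } interval;

static const int thresholds[] = { 0, 255, 1023, 65535, INT_MAX };

static interval widen_with_thresholds(interval prev, interval cur)
{
    interval r = prev;
    if (cur.lo < prev.lo)
        r.lo = INT_MIN;                       /* plain widening on the low end */
    if (cur.hi > prev.hi) {                   /* jump only to the next threshold */
        for (int i = 0; i < 5; i++) {
            if (thresholds[i] >= cur.hi) { r.hi = thresholds[i]; break; }
        }
    }
    return r;
}

int main(void)
{
    interval a = { 0, 10 }, b = { 0, 300 };   /* successive abstract loop iterates */
    interval w = widen_with_thresholds(a, b);
    printf("[%d, %d]\n", w.lo, w.hi);         /* prints [0, 1023], not [0, INT_MAX] */
    return 0;
}
```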

    CTGEN - a Unit Test Generator for C

    We present CTGEN, a new unit test generator for C code. It generates test data for C1 structural coverage and for functional coverage based on pre-/post-condition specifications or internal assertions. The generator supports automated stub generation, and the data to be returned by a stub to the unit under test (UUT) may be specified by means of constraints. The typical application field for CTGEN is embedded systems testing; therefore the tool can cope with the aliasing problems typical of low-level C, including pointer arithmetic, structures and unions. CTGEN creates complete test procedures which are ready to be compiled and run against the UUT. In this paper we describe the main features of CTGEN and their technical realisation, and we elaborate on its performance in comparison with competing test generation tools. Since 2011, CTGEN has been used in industrial-scale test campaigns for embedded systems code in the automotive domain.
    Comment: In Proceedings SSV 2012, arXiv:1211.587
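
    The sketch below illustrates, under assumed names (alarm_on_overheat, read_sensor), the kind of stub-based test procedure the abstract describes. It is hand-written in the style such a generator might emit and is not actual CTGEN output: the stub's return value plays the role of data solved from a branch constraint in the UUT.

```c
#include <assert.h>

/* Hypothetical unit under test: read_sensor() is external and is replaced by
 * a stub whose return value is chosen to drive the branches of the UUT. */
int read_sensor(void);                         /* stubbed during testing */

int alarm_on_overheat(int threshold)
{
    int t = read_sensor();
    if (t > threshold)                         /* branch the generator wants to cover */
        return 1;
    return 0;
}

/* A test procedure in the style a generator might emit: the stub returns a
 * value satisfying the constraint t > threshold (here threshold = 100). */
static int stub_value;
int read_sensor(void) { return stub_value; }

int main(void)
{
    stub_value = 101;                          /* satisfies t > 100 */
    assert(alarm_on_overheat(100) == 1);

    stub_value = 99;                           /* covers the other branch */
    assert(alarm_on_overheat(100) == 0);
    return 0;
}
```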

    Non-polynomial Worst-Case Analysis of Recursive Programs

    We study the problem of developing efficient approaches for proving worst-case bounds of non-deterministic recursive programs. Ranking functions are sound and complete for proving termination and worst-case bounds of non-recursive programs. First, we apply ranking functions to recursion, resulting in measure functions. We show that measure functions provide a sound and complete approach to prove worst-case bounds of non-deterministic recursive programs. Our second contribution is the synthesis of measure functions in non-polynomial forms. We show that non-polynomial measure functions with logarithms and exponentiation can be synthesized through abstraction of logarithmic or exponentiation terms, Farkas' Lemma, and Handelman's Theorem, using linear programming. While previous methods obtain worst-case polynomial bounds, our approach can synthesize bounds of the form O(n log n) as well as O(n^r) where r is not an integer. We present experimental results demonstrating that our approach can efficiently obtain worst-case bounds for classical recursive algorithms such as (i) Merge-Sort and the divide-and-conquer algorithm for the Closest-Pair problem, where we obtain an O(n log n) worst-case bound, and (ii) Karatsuba's algorithm for polynomial multiplication and Strassen's algorithm for matrix multiplication, where we obtain O(n^r) bounds such that r is not an integer and close to the best-known bounds for the respective algorithms.
    Comment: 54 pages, full version to CAV 201
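
    As a worked illustration of the O(n log n) case (our own, with an assumed constant c, not the paper's derivation), a measure function for a Merge-Sort style recurrence can be checked by induction:

```latex
% Merge-Sort style worst-case recurrence, with n a power of two and c an
% assumed constant covering the cost of one merge step:
\[
  T(1) \le c, \qquad T(n) \le 2\,T(n/2) + c\,n .
\]
Take the candidate measure function $f(n) = c\,n\log_2 n + c\,n$. Then
\[
  2\,f(n/2) + c\,n
  = c\,n(\log_2 n - 1) + c\,n + c\,n
  = c\,n\log_2 n + c\,n
  = f(n),
\]
and $f(1) = c$, so $T(n) \le f(n)$ by induction, i.e. $T(n) = O(n \log n)$.
```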

    On Verifying Resource Contracts using Code Contracts

    In this paper we present an approach to checking resource consumption contracts using an off-the-shelf static analyzer. We propose a set of annotations to support resource usage specifications, in particular dynamic memory consumption constraints. Since dynamic memory may be recycled by a memory manager, the consumption of this resource is not monotone. The specification language can express both memory consumption and lifetime properties in a modular fashion. We develop a proof-of-concept implementation by extending the Code Contracts specification language. To verify the correctness of these annotations we rely on the Code Contracts static verifier and a points-to analysis. We also briefly discuss possible extensions of our approach to deal with non-linear expressions.
    Comment: In Proceedings LAFM 2013, arXiv:1401.056
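
    The C sketch below is our own illustration (the paper's annotations extend Code Contracts for .NET, not C) of why dynamic-memory consumption is not monotone and why a contract naturally bounds peak consumption rather than a running total of allocations.

```c
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    size_t live = 0, peak = 0;

    char *scratch = malloc(4096);      /* live rises to 4096 bytes */
    if (!scratch) return 1;
    live += 4096; if (live > peak) peak = live;

    free(scratch);                     /* live drops back to 0: non-monotone */
    live -= 4096;

    char *result = malloc(512);        /* live rises again, but only to 512 */
    if (!result) return 1;
    live += 512; if (live > peak) peak = live;

    /* A resource contract in the spirit of the paper could assert something
     * like "peak dynamic memory <= 4096 bytes" for this function. */
    printf("peak = %zu bytes, live at exit = %zu bytes\n", peak, live);
    free(result);
    return 0;
}
```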

    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons.
    Comment: 37 pages, extended version of ESOP 2011 paper
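
    A small assumed example of the precision argument: composing per-instruction interval transfer functions for the block { y = x; z = x - y; } forgets the relation between x and y, whereas a transfer function for the block as a whole recovers z = 0. The C program below only demonstrates the concrete behaviour; the interval reasoning is spelled out in the comments.

```c
#include <stdio.h>

/* Under an interval abstraction with input x in [0, 10]:
 *
 *   instruction by instruction:  y = x      -> y in [0, 10]
 *                                z = x - y  -> z in [-10, 10]  (the relation
 *                                              between x and y has been lost)
 *   block-level transfer function for { y = x; z = x - y; }:
 *                                           -> z in [0, 0]
 */
int main(void)
{
    for (int x = 0; x <= 10; x++) {
        int y = x;
        int z = x - y;               /* always 0: the block-level result */
        if (z != 0) printf("unexpected z = %d\n", z);
    }
    printf("z is 0 for every x in [0, 10]\n");
    return 0;
}
```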

    Abstracting strings for model checking of C programs

    Data type abstraction plays a crucial role in software verification. In this paper, we introduce a domain for abstracting strings in the C programming language, where strings are managed as null-terminated arrays of characters. The new domain, M-String, is parametrized on an index (bound) domain and a character domain. By means of these constituent domains, M-String captures shape information on the array structure as well as value information on the characters occurring in the string. By tuning these two parameters, M-String can easily be tailored to specific verification tasks, balancing precision against complexity. The concrete and abstract semantics of basic operations on strings are carefully formalized, and soundness proofs are fully detailed. Moreover, for a selection of functions from the standard C library, we provide the semantics of character access and update, enabling an automatic lifting of arbitrary string-manipulating code into our new domain. An implementation of the abstract operations is provided within a tool that automatically lifts existing programs into the M-String domain, along with an explicit-state model checker. The accuracy of the proposed domain is experimentally evaluated on realistic test programs, showing that M-String can efficiently detect real-world bugs as well as prove that a program no longer contains them after they are fixed.
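
    The hypothetical C fragment below (ours, not from the paper's benchmarks) shows the kind of null-termination bug such a string abstraction is meant to catch: a domain that tracks both the bound and the characters can report the possibly missing terminator before strlen is ever reached.

```c
#include <string.h>
#include <stdio.h>

static void greet(const char *name)
{
    char buf[8];
    /* BUG: strncpy does not null-terminate when the source fills the buffer;
     * an abstraction tracking both the array bound and its characters can
     * flag the possibly missing '\0' before strlen() is reached. */
    strncpy(buf, name, sizeof buf);
    printf("hello %s (%zu chars)\n", buf, strlen(buf));
}

int main(void)
{
    greet("Ada");              /* fits: buf is terminated, behaviour well defined */
    /* greet("Grace Hopper"); would leave buf unterminated: an analyzer using a
     * string domain reports the potential out-of-bounds read instead of
     * executing it. */
    return 0;
}
```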