    Symbolic crosschecking of data-parallel floating-point code

    Shadow symbolic execution for better testing of evolving software

    In this idea paper, we propose a novel way of improving the testing of program changes via symbolic execution. At a high level, our technique runs two different program versions in the same symbolic execution instance, with the old version effectively shadowing the new one. In this way, the technique can exploit precise dynamic value information to effectively drive execution toward the behaviour that has changed from one version to the next. We discuss the main challenges and opportunities of this approach in terms of pruning and prioritising path exploration, mapping elements across versions, and sharing common symbolic state between versions. Copyright © 2014 ACM.
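
    The mechanics can be sketched as a "unified" program in which a helper merges each patched expression, so a single run can observe both versions on the same path. The change() helper and the driver below are an illustrative Python toy in the spirit of the paper, not the authors' implementation:

        # Merge an old and a new version of an expression so one execution
        # evaluates both; a symbolic engine running this unified program can
        # steer exploration toward inputs on which the versions diverge.
        def change(old_value, new_value):
            if old_value != new_value:
                print("divergence: old=%r new=%r" % (old_value, new_value))
            return new_value  # the new version drives the actual execution

        def classify(x):
            # The patch tightened the bound from x > 10 to x >= 10.
            return change(x > 10, x >= 10)

        # With x symbolic, a solver would find the boundary value directly;
        # here we simply enumerate a few concrete inputs.
        for x in (5, 10, 15):
            classify(x)  # diverges only for x == 10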

    Badger: Complexity Analysis with Fuzzing and Symbolic Execution

    Hybrid testing approaches that involve fuzz testing and symbolic execution have shown promising results in achieving high code coverage, uncovering subtle errors and vulnerabilities in a variety of software applications. In this paper we describe Badger, a new hybrid approach for complexity analysis, with the goal of discovering vulnerabilities which occur when the worst-case time or space complexity of an application is significantly higher than the average case. Badger uses fuzz testing to generate a diverse set of inputs that aim to increase not only coverage but also a resource-related cost associated with each path. Since fuzzing may fail to execute deep program paths due to its limited knowledge about the conditions that influence these paths, we complement the analysis with symbolic execution, which is also customized to search for paths that increase the resource-related cost. Symbolic execution is particularly good at generating inputs that satisfy various program conditions, but by itself suffers from path explosion. Therefore, Badger uses fuzzing and symbolic execution in tandem, to leverage their benefits and overcome their weaknesses. We implemented our approach for the analysis of Java programs, based on Kelinci and Symbolic PathFinder. We evaluated Badger on Java applications, showing that our approach is significantly faster in generating worst-case executions compared to fuzzing or symbolic execution on their own.
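
    The fuzzing half of such a hybrid can be pictured as a cost-guided mutation loop that keeps inputs which make execution more expensive. The sketch below is a deliberately simplified illustration; the cost measure, mutation strategy, and toy target are placeholders, not Badger's actual implementation:

        import random

        def worst_case_fuzz(program, seed, rounds=1000):
            """Mutate inputs and keep those that raise the observed cost,
            so the corpus drifts toward worst-case executions. A real
            implementation would also keep coverage-increasing inputs."""
            corpus, best_cost = [seed], 0
            for _ in range(rounds):
                child = bytearray(random.choice(corpus))
                child[random.randrange(len(child))] = random.randrange(256)
                _cov, cost = program(bytes(child))  # (coverage, resource cost)
                if cost > best_cost:
                    best_cost = cost
                    corpus.append(bytes(child))
            return best_cost, corpus

        # Toy target whose cost grows with the number of 'a' bytes seen.
        toy = lambda d: (set(d), d.count(ord("a")) ** 2)
        print(worst_case_fuzz(toy, b"hello world")[0])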

    MintHint: Automated Synthesis of Repair Hints

    Being able to automatically repair programs is an extremely challenging task. In this paper, we present MintHint, a novel technique for program repair that is a departure from most of today's approaches. Instead of trying to fully automate program repair, which is often an unachievable goal, MintHint performs statistical correlation analysis to identify expressions that are likely to occur in the repaired code and generates, using pattern-matching-based synthesis, repair hints from these expressions. Intuitively, these hints suggest how to rectify a faulty statement and help developers find a complete, actual repair. MintHint can address a variety of common faults, including incorrect, spurious, and missing expressions. We present a user study that shows that developers' productivity can improve manyfold with the use of repair hints generated by MintHint -- compared to having only traditional fault localization information. We also apply MintHint to several faults of a widely used Unix utility program to further assess the effectiveness of the approach. Our results show that MintHint performs well even in situations where (1) the repair space searched does not contain the exact repair, and (2) the operational specification obtained from the test cases for repair is incomplete or even imprecise.
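
    The statistical step can be illustrated as correlating candidate expressions with the expected outputs across the test suite and ranking them as hints. The candidate set and the scoring below are a toy rendition under that assumption, not MintHint's actual analysis:

        # Evaluate candidate expressions over the tests for a faulty
        # statement and rank them by how often they agree with the
        # expected result; the best candidates become repair hints.
        tests = [(-3, 3), (0, 0), (4, 4), (-7, 7)]  # (input, expected |x|)

        candidates = {
            "x": lambda x: x,  # the faulty right-hand side
            "-x": lambda x: -x,
            "x if x >= 0 else -x": lambda x: x if x >= 0 else -x,
        }

        def agreement(expr):
            return sum(expr(x) == want for x, want in tests) / len(tests)

        for name, fn in sorted(candidates.items(),
                               key=lambda c: -agreement(c[1])):
            print("%.2f  %s" % (agreement(fn), name))  # top line is the hint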

    Towards deployment-time dynamic analysis of server applications

    Bug-finding tools based on dynamic analysis (DA), such as Valgrind or the compiler sanitizers provided by Clang and GCC, have become ubiquitous during software development. These analyses are precise but incur a large performance overhead (often several times slower than native execution), which makes them prohibitively expensive to use in production. In this work, we investigate the exciting possibility of deploying such dynamic analyses in production code, using a multi-version execution approach. © 2015 ACM.
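
    Multi-version execution can be pictured as a leader/follower arrangement: the fast native build serves requests while an instrumented build replays the same inputs off the critical path, so a failed sanitizer check flags a bug without slowing users down. The process roles and the queue below are illustrative assumptions, not the paper's design:

        import multiprocessing as mp

        def native_version(x):
            return x * 2  # fast build that actually serves the user

        def sanitized_version(x):
            assert x < 10**6, "bounds check"  # stand-in for a sanitizer
            return x * 2

        def follower(queue):
            # Replays the leader's inputs under the slow analysis build.
            while (item := queue.get()) is not None:
                sanitized_version(item)

        if __name__ == "__main__":
            q = mp.Queue()
            worker = mp.Process(target=follower, args=(q,))
            worker.start()
            for request in (1, 2, 3):
                print(native_version(request))  # user-visible, native speed
                q.put(request)                  # mirrored to the follower
            q.put(None)                         # shut the follower down
            worker.join()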

    Shadow of a Doubt: Testing for Divergences Between Software Versions

    While developers are aware of the importance of comprehensively testing patches, the large effort involved in coming up with relevant test cases means that such testing rarely happens in practice. Furthermore, even when test cases are written to cover the patch, they often exercise the same behaviour in the old and the new version of the code. In this paper, we present a symbolic execution-based technique that is designed to generate test inputs that cover the new program behaviours introduced by a patch. The technique works by executing both the old and the new version in the same symbolic execution instance, with the old version shadowing the new one. During this combined shadow execution, whenever a branch point is reached where the old and the new version diverge, we generate a test case exercising the divergence and comprehensively test the new behaviours of the new version. We evaluate our technique on the Coreutils patches from the CoREBench suite of regression bugs, and show that it is able to generate test inputs that exercise newly added behaviours and expose some of the regression bugs. © 2016 ACM.
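
    At each such branch point there are four combinations of the old and new conditions, and the divergent ones (where exactly one version takes the branch) are the interesting test targets. The brute-force enumeration below stands in for the constraint solving a symbolic engine would perform; the names are illustrative:

        from itertools import product

        def divergent_cases(old_cond, new_cond, inputs):
            """Bucket inputs by their (old, new) branch outcomes and keep
            the buckets where the two versions disagree."""
            cases = {k: [] for k in product((True, False), repeat=2)}
            for x in inputs:
                cases[(old_cond(x), new_cond(x))].append(x)
            return {k: v for k, v in cases.items() if k[0] != k[1] and v}

        # A patch that changed `x > 10` to `x >= 10`: only x == 10 diverges.
        print(divergent_cases(lambda x: x > 10, lambda x: x >= 10, range(20)))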

    Inductive Verification of Data Model Invariants for Web Applications

    Modern software applications store their data in remote cloud servers. Users interact with these applications using web browsers or thin clients running on mobile devices. A key issue in the dependability of these applications is the correctness of the actions that update the data store, which are triggered by user requests. In this paper, we present techniques for automatically checking if the actions of an application preserve the data model invariants. Our approach first automatically extracts an abstract data store from a given application using instrumented execution. The abstract data store identifies the sets of objects and relations (associations) used by the application, and the actions that update the data store by deleting or creating objects or by changing the relations among the objects. We show that checking invariants of an abstract data store corresponds to inductive invariant verification, and can be done using a mapping to First Order Logic (FOL) and a FOL theorem prover. We implemented this approach for the Rails framework and applied it to three open source applications. We found four previously unknown bugs and reported them to the developers, who confirmed and immediately fixed two of them.
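
    The inductive check itself amounts to a validity query: if the invariant holds before an action and the action's update relation holds, the invariant must hold afterwards. A minimal sketch using the z3 solver, with a made-up data model rather than one from the studied applications:

        from z3 import Ints, Implies, And, Not, Solver, sat

        # Toy invariant: a post's like count is never negative.
        likes, likes_next = Ints("likes likes_next")
        invariant      = likes >= 0
        action         = likes_next == likes - 1  # a "remove like" action
        invariant_next = likes_next >= 0

        # Check validity of invariant /\ action => invariant' by searching
        # for a counterexample to the implication.
        s = Solver()
        s.add(Not(Implies(And(invariant, action), invariant_next)))
        if s.check() == sat:
            print("action may violate the invariant:", s.model())  # likes = 0
        else:
            print("invariant is inductively preserved")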

    Covrig: a framework for the analysis of code, test, and coverage evolution in real software

    Software repositories provide rich information about the construction and evolution of software systems. While static data that can be mined directly from version control systems has been extensively studied, dynamic metrics concerning the execution of the software have received much less attention, due to the inherent difficulty of running and monitoring a large number of software versions. In this paper, we present Covrig, a flexible infrastructure that can be used to run each version of a system in isolation and collect static and dynamic software metrics, using a lightweight virtual machine environment that can be deployed on a cluster of local or cloud machines. We use Covrig to conduct an empirical study examining how code and tests co-evolve in six popular open-source systems. We report the main characteristics of software patches, analyse the evolution of program and patch coverage, assess the impact of nondeterminism on the execution of test suites, and investigate whether the coverage of code containing bugs and bug fixes is higher than average. Copyright 2014 ACM.
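
    Conceptually, the infrastructure walks the revision history and builds and tests each version in a fresh, isolated environment so that one revision's state cannot leak into the next. The outline below is illustrative only; the repository path, container image, and build commands are assumptions, not Covrig's actual interface:

        import subprocess

        def revisions(repo, n=100):
            out = subprocess.check_output(
                ["git", "-C", repo, "rev-list", "--max-count=%d" % n, "HEAD"])
            return out.decode().split()

        def measure(repo, rev):
            subprocess.run(["git", "-C", repo, "checkout", "-q", rev],
                           check=True)
            # Build and run the tests in a throwaway container; failures are
            # recorded as data points rather than aborting the whole study.
            subprocess.run(
                ["docker", "run", "--rm", "-v", "%s:/src" % repo,
                 "builder-image",  # hypothetical image with the toolchain
                 "sh", "-c", "cd /src && make && make coverage"],
                check=False)

        for rev in revisions("/path/to/repo"):  # placeholder path
            measure("/path/to/repo", rev)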

    Prototyping symbolic execution engines for interpreted languages

    Symbolic execution is being successfully used to automatically test statically compiled code. However, increasingly more systems and applications are written in dynamic interpreted languages like Python. Building a new symbolic execution engine is a monumental effort, and so is keeping it up-to-date as the target language evolves. Furthermore, ambiguous language specifications lead to their implementation in a symbolic execution engine potentially differing from the production interpreter in subtle ways. We address these challenges by flipping the problem and using the interpreter itself as a specification of the language semantics. We present a recipe and tool (called Chef) for turning a vanilla interpreter into a sound and complete symbolic execution engine. Chef symbolically executes the target program by symbolically executing the interpreter's binary while exploiting inferred knowledge about the program's high-level structure. Using Chef, we developed a symbolic execution engine for Python in 5 person-days and one for Lua in 3 person-days. They offer complete and faithful coverage of language features in a way that keeps up with future language versions at near-zero cost. Chef-produced engines are up to 1000 times more performant than directly executing the interpreter symbolically without Chef.
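
    The core idea, treating the interpreter as the executable specification of the language, can be shown in miniature: if the toy interpreter below were itself run by a binary-level symbolic engine with `x` marked symbolic, forking on the interpreter's branches would enumerate exactly the paths of the interpreted program, with no Python-specific engine required. The bytecode format is invented for illustration:

        # A tiny interpreter for a two-branch toy program.
        PROGRAM = [
            ("cmp_gt", 5),    # if x > 5: fall through, else skip one
            ("ret", "big"),   # taken branch
            ("ret", "small"), # fall-through branch
        ]

        def interpret(program, x):
            pc = 0
            while True:
                op, arg = program[pc]
                if op == "cmp_gt":
                    # The branch a symbolic engine would fork on.
                    pc += 1 if x > arg else 2
                elif op == "ret":
                    return arg

        print(interpret(PROGRAM, 7))  # symbolic x would explore both returns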