116 research outputs found

    Enhancing availability and security through boundless memory blocks

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (leaves 49-52).
    We present a new technique, boundless memory blocks, that automatically eliminates buffer overflow errors, enabling programs to continue to execute through memory errors without memory corruption. Buffer overflow vulnerabilities are caused by programming errors that allow an attacker to make the program write beyond the bounds of an allocated memory block and corrupt other data structures. The standard exploit involves a request that is too large for the buffer intended to hold it: the overflow causes the program to write part of the request beyond the bounds of the buffer, corrupting the address space and causing the program to execute injected code contained in the request. Our boundless memory blocks compiler inserts checks that dynamically detect all out-of-bounds accesses. When it detects an out-of-bounds write, it stores the written value in a hash table; the compiler can then return the stored value as the result of an out-of-bounds read of that address. For uninitialized addresses, it simply returns a predefined value. We have acquired several widely used open source applications (Apache, Sendmail, Pine, Mutt, and Midnight Commander). Compiled with standard compilers, all of these applications are vulnerable to buffer overflow attacks documented at security tracking web sites. Compiled with our compiler, the applications execute successfully through the same attacks and continue to correctly service user requests without the security vulnerabilities. We also found that only one application contains uninitialized reads, which means that in most cases the net effect of our compiler is to (conceptually) give each allocated memory block unbounded size and to eliminate out-of-bounds accesses as a programming error.
    by Cristian Cadar. M.Eng.
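
    The mechanism described above can be pictured as a per-address side table consulted by compiler-inserted checks. The C sketch below is only an illustration of that idea under assumed names (oob_write, oob_read, a fixed-size table, a default value of 0); it is not the thesis compiler's actual runtime, which handles collisions, block lifetimes, and instrumentation automatically.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define TABLE_SIZE    1024
#define DEFAULT_VALUE 0          /* predefined value for uninitialized OOB reads */

struct oob_entry {
    int used;
    const void *block;           /* base address of the allocated block */
    size_t offset;               /* out-of-bounds offset that was accessed */
    unsigned char value;         /* byte stored by the out-of-bounds write */
};

static struct oob_entry table[TABLE_SIZE];

static size_t slot(const void *block, size_t offset) {
    return ((uintptr_t)block ^ (offset * 2654435761u)) % TABLE_SIZE;
}

/* Called by instrumented code instead of letting an OOB write corrupt memory. */
static void oob_write(const void *block, size_t offset, unsigned char value) {
    struct oob_entry e = { 1, block, offset, value };
    table[slot(block, offset)] = e;          /* collisions ignored in this sketch */
}

/* Called by instrumented code for an OOB read: stored value or the default. */
static unsigned char oob_read(const void *block, size_t offset) {
    struct oob_entry *e = &table[slot(block, offset)];
    if (e->used && e->block == block && e->offset == offset)
        return e->value;
    return DEFAULT_VALUE;
}

int main(void) {
    char buf[8];
    oob_write(buf, 10, 'x');                 /* instrumented form of buf[10] = 'x' */
    printf("%c %d\n", oob_read(buf, 10), oob_read(buf, 99));   /* prints: x 0 */
    return 0;
}
```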

    Badger: Complexity Analysis with Fuzzing and Symbolic Execution

    Hybrid testing approaches that combine fuzz testing and symbolic execution have shown promising results in achieving high code coverage and uncovering subtle errors and vulnerabilities in a variety of software applications. In this paper we describe Badger, a new hybrid approach for complexity analysis, with the goal of discovering vulnerabilities which occur when the worst-case time or space complexity of an application is significantly higher than the average case. Badger uses fuzz testing to generate a diverse set of inputs that aim to increase not only coverage but also a resource-related cost associated with each path. Since fuzzing may fail to execute deep program paths due to its limited knowledge of the conditions that influence these paths, we complement the analysis with symbolic execution, which is also customized to search for paths that increase the resource-related cost. Symbolic execution is particularly good at generating inputs that satisfy various program conditions but by itself suffers from path explosion. Badger therefore uses fuzzing and symbolic execution in tandem, to leverage their benefits and overcome their respective weaknesses. We implemented our approach for the analysis of Java programs, based on Kelinci and Symbolic PathFinder. We evaluated Badger on Java applications, showing that our approach is significantly faster in generating worst-case executions compared to fuzzing or symbolic execution on their own.
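
    As a rough illustration of the fuzzing half of such a hybrid loop, the sketch below keeps mutated inputs that either reach new coverage or raise the highest observed resource cost; the costliest inputs would then be handed to the symbolic-execution side. run_target and mutate are invented toy stand-ins, not the Kelinci or Symbolic PathFinder interfaces.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LEN   32
#define QUEUE_CAP 256

struct result { int new_coverage; unsigned long cost; };

/* Toy stand-in for the instrumented target: "cost" grows with the number of
 * adjacent non-decreasing bytes, crudely imitating an input-dependent loop. */
static struct result run_target(const unsigned char *in, size_t len) {
    struct result r = { 0, 0 };
    for (size_t i = 1; i < len; i++)
        if (in[i - 1] <= in[i]) r.cost++;
    r.new_coverage = (r.cost % 7 == 0);      /* fake coverage signal */
    return r;
}

/* Toy AFL-style mutation: flip one random bit. */
static void mutate(unsigned char *buf, size_t len) {
    buf[(size_t)rand() % len] ^= (unsigned char)(1u << (rand() % 8));
}

int main(void) {
    static unsigned char queue[QUEUE_CAP][MAX_LEN];
    size_t n = 1;
    unsigned long max_cost = 0;
    memset(queue[0], 'A', MAX_LEN);                       /* seed input */

    for (int iter = 0; iter < 100000 && n < QUEUE_CAP; iter++) {
        unsigned char cand[MAX_LEN];
        memcpy(cand, queue[(size_t)rand() % n], MAX_LEN);
        mutate(cand, MAX_LEN);
        struct result r = run_target(cand, MAX_LEN);
        /* Keep inputs that cover new code *or* set a new cost record; the
         * costliest ones would be candidates for symbolic exploration. */
        if (r.new_coverage || r.cost > max_cost) {
            memcpy(queue[n++], cand, MAX_LEN);
            if (r.cost > max_cost) max_cost = r.cost;
        }
    }
    printf("worst observed cost: %lu\n", max_cost);
    return 0;
}
```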

    Dynamic Tainting for Automatic Test Case Generation


    Enhancing Reuse of Constraint Solutions to Improve Symbolic Execution

    Constraint solution reuse is an effective approach to saving constraint-solving time in symbolic execution. Most existing reuse approaches are based on the syntactic or semantic equivalence of constraints; the Green framework, for example, is able to reuse constraints which have different representations but are semantically equivalent, by canonizing constraints into syntactically equivalent normal forms. However, syntactic or semantic equivalence is not a necessary condition for reuse: some constraints are neither syntactically nor semantically equivalent, yet their solutions still have potential for reuse. Existing approaches are unable to recognize and reuse such constraints. In this paper, we present GreenTrie, an extension to the Green framework which supports constraint reuse based on the logical implication relations among constraints. GreenTrie provides a component, called L-Trie, which stores constraints and solutions in tries indexed by an implication partial-order graph of constraints. L-Trie is able to carry out logical reduction and logical subset and superset querying for given constraints, to check whether previously solved constraints can be reused. We report the results of an experimental comparison of GreenTrie against the original Green framework, which shows that our extension achieves better reuse of constraint-solving results and saves significant symbolic execution time.
    Comment: this paper has been submitted to conference ISSTA 201
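
    The reuse rules can be illustrated, in a heavily simplified form, by treating each query as a set of atomic constraints: a query that is a subset of a stored satisfiable set can reuse that set's model, and a query that is a superset of a stored unsatisfiable set must itself be unsatisfiable. The C sketch below uses exact string matching over flat arrays, whereas GreenTrie also reasons about implication between individual constraints and indexes entries in a trie.

```c
#include <stdio.h>
#include <string.h>

#define MAX_C 8

struct entry {
    const char *constraints[MAX_C];  /* atomic constraints, e.g. "x>5" */
    int n;                           /* number of constraints in the set */
    int sat;                         /* 1 = satisfiable, 0 = unsatisfiable */
    const char *model;               /* stored solution when sat */
};

static int contains(const struct entry *e, const char *c) {
    for (int i = 0; i < e->n; i++)
        if (strcmp(e->constraints[i], c) == 0) return 1;
    return 0;
}

/* Is every constraint of a also present in b? */
static int subset(const struct entry *a, const struct entry *b) {
    for (int i = 0; i < a->n; i++)
        if (!contains(b, a->constraints[i])) return 0;
    return 1;
}

int main(void) {
    struct entry store[] = {
        { { "x>5", "y<2", "x+y>4" }, 3, 1, "x=6, y=0" },  /* SAT, with model */
        { { "z>0", "z<0" },          2, 0, NULL },        /* known UNSAT     */
    };
    struct entry query = { { "x>5", "x+y>4" }, 2, -1, NULL };

    for (int i = 0; i < 2; i++) {
        if (store[i].sat && subset(&query, &store[i]))
            printf("reuse SAT model: %s\n", store[i].model);   /* stronger set's model works */
        if (!store[i].sat && subset(&store[i], &query))
            printf("reuse UNSAT verdict\n");                   /* query implies an UNSAT core */
    }
    return 0;
}
```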

    FairFuzz: Targeting Rare Branches to Rapidly Increase Greybox Fuzz Testing Coverage

    In recent years, fuzz testing has proven itself to be one of the most effective techniques for finding correctness bugs and security vulnerabilities in practice. One particular fuzz testing tool, American Fuzzy Lop (AFL), has become popular thanks to its ease of use and bug-finding power. However, AFL remains limited in the depth of program coverage it achieves, in particular because it does not consider which parts of program inputs should not be mutated in order to maintain deep program coverage. We propose an approach, FairFuzz, that helps alleviate this limitation in two key steps. First, FairFuzz automatically prioritizes inputs exercising rare parts of the program under test. Second, it automatically adjusts the mutation of inputs so that the mutated inputs are more likely to exercise these same rare parts of the program. We conduct an evaluation on real-world programs against state-of-the-art versions of AFL, thoroughly repeating experiments to obtain good measures of variability. We find that on certain benchmarks FairFuzz shows significant coverage increases after 24 hours compared to state-of-the-art versions of AFL, while on others it achieves high program coverage at a significantly faster rate.
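
    The second step can be sketched as learning a byte-level mutation mask: starting from an input that reaches a rare branch, check which byte positions can be changed without losing that branch, and restrict later mutations to those positions. The toy hits_branch target and the single-byte probing below are assumptions made for illustration, not AFL's or FairFuzz's real instrumentation.

```c
#include <stdio.h>
#include <string.h>

#define LEN 16

/* Toy target: the "rare branch" is taken only when the input starts with "KEY". */
static int hits_branch(const unsigned char *in) {
    return memcmp(in, "KEY", 3) == 0;
}

int main(void) {
    unsigned char rare_input[LEN] = "KEY-aaaaaaaaaaa";  /* queue entry hitting the rare branch */
    unsigned char mask[LEN];                            /* 1 = safe to mutate, 0 = keep fixed */

    for (size_t i = 0; i < LEN; i++) {
        unsigned char probe[LEN];
        memcpy(probe, rare_input, LEN);
        probe[i] ^= 0xFF;                  /* flip the byte at position i */
        mask[i] = hits_branch(probe);      /* does it still reach the rare branch? */
    }

    printf("mutable positions: ");
    for (size_t i = 0; i < LEN; i++)
        if (mask[i]) printf("%zu ", i);
    printf("\n");   /* positions 3..15: bytes 0-2 are needed to keep the branch */
    return 0;
}
```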

    Symbooglix: A Symbolic Execution Engine for Boogie Programs

    We present the design and implementation of Symbooglix, a symbolic execution engine for the Boogie intermediate verification language. Symbooglix aims to find bugs in Boogie programs efficiently, providing bug-finding capabilities for any program analysis framework that uses Boogie as a target language. We discuss the technical challenges associated with handling Boogie, and describe how we optimised Symbooglix using a small training set of benchmarks. This empirically driven optimisation approach avoids over-fitting Symbooglix to our benchmarks, enabling a fair comparison with other tools. We present an evaluation across 3749 Boogie programs generated from the SV-COMP suite of C programs using the SMACK front-end, and 579 Boogie programs originating from several OpenCL and CUDA GPU benchmark suites, translated by the GPUVerify front-end. Our results show that Symbooglix significantly outperforms Boogaloo, an existing symbolic execution tool for Boogie, and is competitive with GPUVerify on benchmarks for which GPUVerify is highly optimised. While generally less effective than the Corral and Duality tools on the SV-COMP suite, Symbooglix is complementary to them in terms of bug-finding ability.

    Interoperability-Guided Testing of QUIC Implementations using Symbolic Execution

    The main reason for standardizing network protocols like QUIC is to ensure interoperability between implementations, which is a challenging task. Manual tests are currently used to check the different existing implementations for interoperability, but given the complex nature of network protocols, it is hard to cover all possible edge cases. State-of-the-art automated software testing techniques, such as Symbolic Execution (SymEx), have proven themselves capable of analyzing complex real-world software and finding hard-to-detect bugs. We present a SymEx-based method for finding interoperability issues in QUIC implementations, and explore its merit in a case study that analyzes the interoperability of picoquic and QUANT. We find that, while SymEx is able to analyze deep interactions between different implementations and uncovers several bugs, implementations need to provide additional information about their current protocol state to enable efficient interoperability testing.
    Comment: 6 pages
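
    The core of such interoperability testing can be illustrated as differential execution: feed the same input (in practice chosen by symbolic execution to cover deep paths) to two implementations and flag any divergence in the protocol state they report. The parse_a/parse_b functions and the state enum below are invented for this sketch and do not correspond to picoquic's or QUANT's actual APIs.

```c
#include <stdio.h>

enum state { IDLE, HANDSHAKE, ESTABLISHED, ERROR };

/* Two deliberately inconsistent toy "implementations" of the same frame parser. */
static enum state parse_a(unsigned char type) {
    return type <= 0x02 ? HANDSHAKE : ERROR;
}
static enum state parse_b(unsigned char type) {
    return type <= 0x03 ? HANDSHAKE : ERROR;   /* accepts one extra frame type */
}

int main(void) {
    /* A symbolic-execution engine would enumerate the distinct paths here;
     * for a one-byte input an exhaustive loop plays that role. */
    for (unsigned type = 0; type < 256; type++) {
        enum state a = parse_a((unsigned char)type);
        enum state b = parse_b((unsigned char)type);
        if (a != b)
            printf("state divergence on frame type 0x%02x: %d vs %d\n", type, a, b);
    }
    return 0;
}
```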

    Understanding GCC Builtins to Develop Better Tools

    C programs can use compiler builtins to provide functionality that the C language lacks. On Linux, GCC provides several thousand builtins that are also supported by other mature compilers, such as Clang and ICC. Maintainers of other tools lack guidance on whether and which builtins should be implemented to support popular projects. To assist tool developers who want to support GCC builtins, we analyzed builtin use in 4,913 C projects from GitHub. We found that 37% of these projects relied on at least one builtin. Supporting an increasing proportion of projects requires support for an exponentially increasing number of builtins; however, implementing only 10 builtins already covers over 30% of the projects. Since we found that many builtins in our corpus remained unused, the effort needed to support 90% of the projects is moderate, requiring about 110 builtins to be implemented. For each project, we analyzed the evolution of builtin use over time and found that the majority of projects mostly added builtins. This suggests that builtins are not a legacy feature and must be supported in future tools. Systematic testing of builtin support in existing tools revealed that many lacked support for builtins either partially or completely; we also discovered incorrect implementations in various tools, including the formally verified CompCert compiler.
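
    For readers unfamiliar with builtins, the small example below shows the kind of use the study measures: code that relies on __builtin_expect and __builtin_popcount, together with one possible portable fallback of the sort a tool without builtin support would need. The fallback shown is illustrative, not a recommendation from the paper.

```c
#include <stdio.h>

#ifdef __GNUC__
#  define likely(x)   __builtin_expect(!!(x), 1)   /* branch-prediction hint */
#  define popcount(x) __builtin_popcount(x)        /* count set bits */
#else
/* Portable fallback a tool without builtin support would have to provide. */
#  define likely(x) (x)
static int popcount(unsigned x) {
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}
#endif

int main(void) {
    unsigned v = 0xF0F0u;
    if (likely(v != 0))
        printf("%u has %d bits set\n", v, popcount(v));
    return 0;
}
```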

    Harvey: A Greybox Fuzzer for Smart Contracts

    We present Harvey, an industrial greybox fuzzer for smart contracts, which are programs managing accounts on a blockchain. Greybox fuzzing is a lightweight test-generation approach that effectively detects bugs and security vulnerabilities. However, greybox fuzzers randomly mutate program inputs to exercise new paths; this makes it challenging to cover code that is guarded by narrow checks, which are satisfied by no more than a few input values. Moreover, most real-world smart contracts transition through many different states during their lifetime, e.g., for every bid in an auction. To explore these states and thereby detect deep vulnerabilities, a greybox fuzzer would need to generate sequences of contract transactions, e.g., by creating bids from multiple users, while at the same time keeping the search space and test suite tractable. In this experience paper, we explain how Harvey alleviates both challenges with two key fuzzing techniques and distill the main lessons learned. First, Harvey extends standard greybox fuzzing with a method for predicting new inputs that are more likely to cover new paths or reveal vulnerabilities in smart contracts. Second, it fuzzes transaction sequences in a targeted and demand-driven way. We have evaluated our approach on 27 real-world contracts. Our experiments show that the underlying techniques significantly increase Harvey's effectiveness in achieving high coverage and detecting vulnerabilities, in most cases orders-of-magnitude faster; they also reveal new insights about contract code.
    Comment: arXiv admin note: substantial text overlap with arXiv:1807.0787
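
    Harvey's first technique, input prediction, can be illustrated with a toy example: when a comparison guarding an unexplored path fails, the operand values observed at run time are used to predict an input that satisfies the check, instead of mutating blindly. The cmp_eq instrumentation and the target function below are invented for this sketch and are not Harvey's actual implementation.

```c
#include <stdio.h>

static unsigned last_lhs, last_rhs;            /* toy comparison instrumentation */

static int cmp_eq(unsigned lhs, unsigned rhs) {
    last_lhs = lhs;
    last_rhs = rhs;
    return lhs == rhs;
}

/* Toy "contract" entry point guarded by a narrow check. */
static int target(unsigned bid) {
    if (cmp_eq(bid, 0x1337))
        return 1;                              /* deep, rarely reached path */
    return 0;
}

int main(void) {
    unsigned input = 0;                        /* initial seed */
    if (!target(input)) {
        /* Predict a new input from the operand the seed did not match. */
        unsigned predicted = (input == last_lhs) ? last_rhs : last_lhs;
        printf("predicted %#x reaches new path: %d\n", predicted, target(predicted));
    }
    return 0;
}
```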