6 research outputs found

    Proving memory safety of floating-point computations by combining static and dynamic program analysis

    Whitebox fuzzing is a novel form of security testing based on runtime symbolic execution and constraint solving. Over the last couple of years, whitebox fuzzers have found dozens of new security vulnerabilities (buffer overflows) in Windows and Linux applications, including codecs, image viewers and media players. Those types of applications tend to use floating-point instructions available on modern processors, yet existing whitebox fuzzers and SMT constraint solvers do not handle floating-point arithmetic. Are there new security vulnerabilities lurking in floating-point code? A naive solution would be to extend symbolic execution to floating-point (FP) instructions (months of work), extend SMT solvers to reason about FP constraints (months of work), and then face more complex constraints and an even worse path explosion problem. Instead, we propose an alternative approach, based on the rough intuition that FP code should only perform memory-safe data-processing of the “payload” of an image or video file, while the non-FP part of the application should deal with buffer allocations and memory address computations, with only the latter being prone to buffer overflows and other security-critical bugs. Our approach combines (1) a lightweight local path-insensitive “may” static analysis of FP instructions with (2) a high-precision whole-program path-sensitive “must” dynamic analysis of non-FP instructions. The aim of this combination is to prove memory safety of the FP part and a form of non-interference between the FP part and the non-FP part with respect to memory address computations. We have implemented our approach using two existing tools for, respectively, static and dynamic x86 binary analysis. We present preliminary results of experiments with standard JPEG, GIF and ANI Windows parsers. For a given test suite of diverse input files, our mixed static/dynamic analysis is able to prove memory safety of FP code in those parsers for a small upfront static analysis cost and a marginal runtime expense compared to regular runtime symbolic execution.
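
    To make the intuition above concrete, here is a minimal C sketch (not taken from the paper; the decode function and its layout are invented for illustration) of a toy decoder in which the non-FP part computes sizes and addresses while the FP part only transforms payload values:

    /* Illustrative sketch only: the non-FP code handles sizes and addresses
     * (the overflow-prone part), while the FP code performs pure data
     * processing of the payload and never feeds back into an index or length. */
    #include <stdlib.h>

    void decode(const unsigned char *payload, int width, int height) {
        /* Non-FP part: buffer allocation and address computation.
         * A bug here (e.g., width * height overflowing int) can corrupt memory. */
        int n = width * height;
        float *pixels = malloc(n * sizeof *pixels);
        if (!pixels) return;

        for (int i = 0; i < n; i++) {
            /* FP part: memory-safe payload processing (if the indices are safe). */
            float v = payload[i] / 255.0f;
            pixels[i] = v * v;          /* e.g., a gamma-style transform */
        }
        free(pixels);
    }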

    Implementation and testing of a blackbox and a whitebox fuzzer for file compression routines

    Fuzz testing is a software testing technique that has risen to prominence over the past two decades. The unifying feature of all fuzz testers (fuzzers) is their ability to automatically produce random test cases for software. Fuzzers can generally be placed in one of two classes: blackbox or whitebox. Blackbox fuzzers do not derive information from a program's source or binary in order to restrict the domain of their generated inputs, while whitebox fuzzers do. A tradeoff involved in the choice between blackbox and whitebox fuzzing is the rate at which inputs can be produced; since blackbox fuzzers need not reason about the software under test to generate inputs, they can generate more inputs per unit time, all other factors being equal. The question of how blackbox and whitebox fuzzing should be used together for ideal economy of software testing has been posed and even speculated about; however, to my knowledge, no publicly available study intended to characterize an answer exists. The purpose of this thesis is to provide an initial exploration of the bug-finding characteristics of blackbox and whitebox fuzzers. A blackbox fuzzer is implemented and extended with a concolic execution program to make it whitebox. Both versions of the fuzzer are then used to run tests on some small programs and some parts of a file compression library.
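
    As a rough illustration of the blackbox side of this tradeoff, the following C sketch shows a minimal mutation-based fuzzing loop; it is not the thesis implementation, and compress_buffer stands in for a hypothetical compression routine under test:

    /* Minimal sketch of a blackbox mutation fuzzer: randomly flip bytes of a
     * seed buffer and feed the result to the routine under test, with no
     * knowledge of the target's source or binary. */
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical target: a compression routine under test. */
    extern int compress_buffer(const unsigned char *in, size_t len);

    void fuzz(const unsigned char *seed, size_t len, int iterations) {
        if (len == 0) return;
        unsigned char *input = malloc(len);
        if (!input) return;
        for (int i = 0; i < iterations; i++) {
            memcpy(input, seed, len);
            /* Mutate a few random bytes: no analysis of the target is needed. */
            for (int m = 0; m < 4; m++)
                input[rand() % len] = (unsigned char)(rand() % 256);
            compress_buffer(input, len);   /* a crash or sanitizer report == bug found */
        }
        free(input);
    }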

    Search-based inference of polynomial metamorphic relations

    Metamorphic testing (MT) is an effective methodology for testing so-called 'non-testable' programs (e.g., scientific programs), where it is sometimes very difficult for testers to know whether the outputs are correct. In metamorphic testing, metamorphic relations (MRs), which specify how particular changes to the input of the program under test would change the output, play an essential role. However, testers typically have to obtain MRs manually. In this paper, we propose a search-based approach to automatic inference of polynomial MRs for a program under test. In particular, we use a set of parameters to represent a particular class of MRs, which we refer to as polynomial MRs, and turn the problem of inferring MRs into a problem of searching for suitable values of the parameters. We then dynamically analyze multiple executions of the program, and use particle swarm optimization to solve the search problem. To improve the quality of inferred MRs, we further use MR filtering to remove some inferred MRs. We also conducted three empirical studies to evaluate our approach using four scientific libraries (including 189 scientific functions). Our empirical results show that our approach is able to infer many high-quality MRs in acceptable time (i.e., from 9.87 seconds to 1231.16 seconds), and these MRs are effective in detecting faults with no false detections. © 2014 ACM.
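
    To illustrate what a polynomial MR looks like once inferred, here is a small C sketch (not the paper's search procedure) that checks a degree-1 polynomial relation for sin, namely f(x + pi) = -1 * f(x) + 0; the coefficients are hard-coded here, whereas the proposed approach would search for them with particle swarm optimization:

    /* Check a candidate polynomial metamorphic relation over many executions. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        const double a = -1.0, b = 0.0;      /* candidate MR coefficients */
        for (double x = 0.0; x < 6.0; x += 0.5) {
            double source = sin(x);          /* source execution */
            double follow = sin(x + PI);     /* follow-up execution */
            if (fabs(follow - (a * source + b)) > 1e-9)
                printf("MR violated at x = %f\n", x);
        }
        return 0;
    }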

    Automated synthesis of symbolic instruction encodings from I/O samples


    Combining Model Checking and Testing

    Model checking and testing have a lot in common. Over the last two decades, significant progress has been made on how to broaden the scope of model checking from finite-state abstractions to actual software implementations. One way to do this consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking.
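
    As a rough sketch of what "model checking as systematic testing" can mean in practice (illustrative only, not taken from the chapter), the following C program reruns a small program under test once for every combination of its nondeterministic choices, enumerating a bounded state space exhaustively rather than exploring an abstract model:

    #include <stdio.h>

    #define DEPTH 3                 /* bound on nondeterministic choices per run */
    static int schedule[DEPTH], pos;

    /* The program under test calls this wherever it faces a binary choice. */
    static int choose(void) { return schedule[pos++]; }

    static void program_under_test(void) {
        int x = 0;
        if (choose()) x += 1;
        if (choose()) x *= 2;
        if (choose() && x == 2) printf("  reached rare state\n");
    }

    int main(void) {
        /* Systematically enumerate all 2^DEPTH runs, like a state-space search. */
        for (int run = 0; run < (1 << DEPTH); run++) {
            for (int i = 0; i < DEPTH; i++) schedule[i] = (run >> i) & 1;
            pos = 0;
            program_under_test();
        }
        return 0;
    }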