    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys. This is the authors' pre-print copy. If you are considering citing this survey, we would appreciate if you could use the following BibTeX entry: http://goo.gl/Hf5Fvc
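
    To make the core idea concrete, here is a minimal sketch of reasoning symbolically about a single path, assuming the Z3 solver's Python bindings (pip install z3-solver); the toy program and all names are ours for illustration, not taken from the survey:

        # Treat the input as a symbol and ask a constraint solver for a
        # concrete instance that drives execution down the guarded path.
        from z3 import Int, Solver, sat

        def check_backdoor():
            x = Int('x')  # symbolic stand-in for the concrete input
            # Toy program under test:
            #   if x * 3 + 7 == 40: unlock_backdoor()
            s = Solver()
            s.add(x * 3 + 7 == 40)  # path condition of the "backdoor" branch
            if s.check() == sat:
                # The solver constructs a concrete input reaching the branch,
                # the "actual instance" the abstract describes.
                print("backdoor reachable with x =", s.model()[x])
            else:
                print("backdoor unreachable on this path")

        check_backdoor()  # prints: backdoor reachable with x = 11

    A full symbolic executor would fork at every branch, maintaining one path condition per explored path; the sketch shows only a single path query.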

    The development of a program analysis environment for Ada

    A unit level, Ada software module testing system, called Query Utility Environment for Software Testing of Ada (QUEST/Ada), is described. The project calls for the design and development of a prototype system. QUEST/Ada design began with a definition of the overall system structure and a description of component dependencies. The project team was divided into three groups to resolve the preliminary designs of the parser/scanner, the test data generator, and the test coverage analyzer. The Phase 1 report is a working document from which the system documentation will evolve. It provides history, a guide to report sections, a literature review, the definition of the system structure and high-level interfaces, descriptions of the prototype scope, the three major components, and the plan for the remainder of the project. The appendices include specifications, statistics, two papers derived from the current research, a preliminary users' manual, and the proposal and work plan for Phase 2.

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions, and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
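
    The scheduling step can be pictured with a short, hedged sketch: assign each action of the sequential plan the earliest start time consistent with the precedence relation over interfering actions (a critical-path computation over the plan). The action format and interference test below are invented for illustration; MIPS derives precedences from the actions' actual preconditions and effects, and its algorithm achieves linear time, which this naive pairwise version does not.

        def interferes(a, b):
            # Hypothetical stand-in: here, actions conflict iff they
            # share a resource.
            return bool(a["resources"] & b["resources"])

        def schedule(plan):
            """plan: actions in the order the sequential planner emitted them."""
            starts = []
            for j, act in enumerate(plan):
                # An action may begin once every earlier interfering
                # action has ended.
                earliest = 0.0
                for i in range(j):
                    if interferes(plan[i], act):
                        earliest = max(earliest, starts[i] + plan[i]["duration"])
                starts.append(earliest)
            return [(a["name"], s) for a, s in zip(plan, starts)]

        plan = [
            {"name": "load",  "duration": 2.0, "resources": {"crane"}},
            {"name": "fuel",  "duration": 1.0, "resources": {"pump"}},
            {"name": "drive", "duration": 5.0, "resources": {"crane", "pump"}},
        ]
        print(schedule(plan))
        # [('load', 0.0), ('fuel', 0.0), ('drive', 2.0)]
        # load and fuel run in parallel; drive waits for both.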

    On Verifying Complex Properties using Symbolic Shape Analysis

    One of the main challenges in the verification of software systems is the analysis of unbounded data structures with dynamic memory allocation, such as linked data structures and arrays. We describe Bohne, a new analysis for verifying data structures. Bohne verifies data structure operations and shows that 1) the operations preserve data structure invariants and 2) the operations satisfy their specifications expressed in terms of changes to the set of objects stored in the data structure. During the analysis, Bohne infers loop invariants in the form of disjunctions of universally quantified Boolean combinations of formulas. To synthesize loop invariants of this form, Bohne uses a combination of decision procedures for Monadic Second-Order Logic over trees, SMT-LIB decision procedures (currently CVC Lite), and an automated reasoner within the Isabelle interactive theorem prover. This architecture shows that synthesized loop invariants can serve as a useful communication mechanism between different decision procedures. Using Bohne, we have verified operations on data structures such as linked lists with iterators and back pointers, trees with and without parent pointers, two-level skip lists, array data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob data structure analysis systems, enabling us to combine Bohne with analyses of data structure clients and apply it in the context of larger programs. This report describes the Bohne algorithm as well as techniques that Bohne uses to reduce the amount of annotation and the running time of the analysis.
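
    As a hedged illustration of the invariant shape (our example, not Bohne's output), consider a loop that walks a linked list from first toward curr, recording visited nodes. An invariant of the inferred form, a disjunction of a quantifier-free disjunct and a universally quantified implication, can be written as a runnable check:

        class Node:
            def __init__(self, val, nxt=None):
                self.val, self.nxt = val, nxt

        def reachable(n):
            # All nodes reachable from n by following next pointers.
            seen = []
            while n is not None and n not in seen:
                seen.append(n)
                n = n.nxt
            return seen

        def invariant_holds(first, curr, visited):
            # Disjunct 1: the loop has not started yet.
            not_started = curr is first and not visited
            # Disjunct 2: forall n. reach(first, n) and not reach(curr, n)
            #                        implies n in visited
            prefix_visited = all(
                n in visited
                for n in reachable(first) if n not in reachable(curr)
            )
            return not_started or prefix_visited

        c = Node('c'); b = Node('b', c); a = Node('a', b)
        print(invariant_holds(a, c, {a, b}))  # True: a and b were visited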

    Recursive Program Optimization Through Inductive Synthesis Proof Transformation

    The research described in this paper involved developing transformation techniques which increase the efficiency of the original program, the source, by transforming its synthesis proof into one, the target, which yields a computationally more efficient algorithm. We describe a working proof transformation system which, by exploiting the duality between mathematical induction and recursion, employs the novel strategy of optimizing recursive programs by transforming inductive proofs. We compare and contrast this approach with the more traditional approaches to program transformation, and highlight the benefits of proof transformation with regard to search, correctness, automatability and generality.
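
    As a hedged illustration of the payoff (this classic example is ours, not drawn from the paper), generalizing the induction hypothesis of a synthesis proof corresponds, on the program side, to introducing an accumulator, turning a quadratic recursion into a linear one:

        def reverse_naive(xs):
            # Source program: mirrors a proof by plain structural induction.
            if not xs:
                return []
            return reverse_naive(xs[1:]) + [xs[0]]  # O(n) append per step -> O(n^2)

        def reverse_acc(xs, acc=None):
            # Target program: the accumulator plays the role of the
            # generalized induction hypothesis in the transformed proof.
            if acc is None:
                acc = []
            if not xs:
                return acc
            return reverse_acc(xs[1:], [xs[0]] + acc)  # O(1) work per step -> O(n)

        assert reverse_naive([1, 2, 3]) == reverse_acc([1, 2, 3]) == [3, 2, 1]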

    Proving memory safety of floating-point computations by combining static and dynamic program analysis

    Whitebox fuzzing is a novel form of security testing based on runtime symbolic execution and constraint solving. Over the last couple of years, whitebox fuzzers have found dozens of new security vulnerabilities (buffer overflows) in Windows and Linux applications, including codecs, image viewers and media players. Those types of applications tend to use floating-point instructions available on modern processors, yet existing whitebox fuzzers and SMT constraint solvers do not handle floating-point arithmetic. Are there new security vulnerabilities lurking in floating-point code? A naive solution would be to extend symbolic execution to floating-point (FP) instructions (months of work), extend SMT solvers to reason about FP constraints (months of work), and then face more complex constraints and an even worse path explosion problem. Instead, we propose an alternative approach, based on the rough intuition that FP code should only perform memory-safe data-processing of the "payload" of an image or video file, while the non-FP part of the application should deal with buffer allocations and memory address computations, with only the latter being prone to buffer overflows and other security-critical bugs. Our approach combines (1) a lightweight local path-insensitive "may" static analysis of FP instructions with (2) a high-precision whole-program path-sensitive "must" dynamic analysis of non-FP instructions. The aim of this combination is to prove memory safety of the FP part and a form of non-interference between the FP part and the non-FP part with respect to memory address computations. We have implemented our approach using two existing tools for, respectively, static and dynamic x86 binary analysis. We present preliminary results of experiments with standard JPEG, GIF and ANI Windows parsers. For a given test suite of diverse input files, our mixed static/dynamic analysis is able to prove memory safety of FP code in those parsers for a small upfront static analysis cost and a marginal runtime expense compared to regular runtime symbolic execution.
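
    The division of labor can be sketched with a hedged toy model; the instruction format is invented for illustration (the actual tools operate on x86 binaries), but the two passes mirror the "may"/"must" split described above:

        FP_OPS = {"addss", "mulss", "cvtsi2ss"}

        def static_may_pass(instrs):
            # "May" analysis: report any FP instruction that might write
            # memory, i.e. the cases where the static part fails to prove
            # the FP code is pure data-processing of the payload.
            return [i for i in instrs if i["op"] in FP_OPS and i.get("mem_write")]

        def dynamic_must_pass(trace):
            # "Must" analysis: taint registers written by FP instructions
            # and reject any memory access whose address register carries
            # FP taint (the non-interference check).
            tainted = set()
            for i in trace:
                if i.get("addr_reg") in tainted:
                    return False, i              # address depends on FP data
                if i["op"] in FP_OPS:
                    tainted.add(i["dst"])        # FP result taints destination
                else:
                    tainted.discard(i.get("dst"))  # overwritten by non-FP result
            return True, None

        trace = [
            {"op": "mov",   "dst": "eax"},
            {"op": "addss", "dst": "xmm0"},                    # FP data-processing
            {"op": "mov",   "dst": "ebx", "addr_reg": "eax"},  # load via untainted eax
        ]
        print(static_may_pass(trace), dynamic_must_pass(trace))
        # [] (True, None): no FP memory writes, no FP-derived addresses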