
    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys. If you are considering citing this survey, we would appreciate if you could use the following BibTeX entry: http://goo.gl/Hf5Fvc (Comment: this is the authors' pre-print copy.)
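    To make the core idea concrete, here is a minimal sketch (not taken from the survey) of how a symbolic executor treats an input as a symbol, accumulates the branch conditions along one path, and asks an SMT solver for a concrete input that drives execution down that path. It uses the z3-solver Python bindings; the "backdoor" check is a hypothetical program under test.

```python
# Minimal illustration of symbolic execution's core step: instead of running the
# program on concrete inputs, represent the input symbolically, collect the path
# condition for an interesting path, and let a constraint solver produce a
# concrete witness input.
from z3 import Int, Solver, And, sat

x = Int("x")                      # symbolic stand-in for the user-supplied input

# Path condition for the path reaching a hidden backdoor branch, e.g.:
#   if x > 1000:
#       if x % 97 == 3:
#           grant_admin()         # property violation we want to expose
path_condition = And(x > 1000, x % 97 == 3)

s = Solver()
s.add(path_condition)
if s.check() == sat:
    print("backdoor reachable with input:", s.model()[x])
else:
    print("path infeasible")
```

    A real symbolic executor would explore both sides of every branch and fork the path condition accordingly; the sketch only shows the constraint-solving step for a single path.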

    Abstract Model Counting: A Novel Approach for Quantification of Information Leaks

    acmid: 2590328; keywords: model checking, quantitative information flow, satisfiability modulo theories, symbolic execution; location: Kyoto, Japan; numpages: 10

    We present a novel method for Quantitative Information Flow (QIF) analysis. We show how the problem of computing information leakage can be viewed as an extension of the Satisfiability Modulo Theories (SMT) problem. This view enables us to develop a framework for QIF analysis based on the DPLL(T) framework used in SMT solvers. We then show that the methodology of Symbolic Execution (SE) also fits our framework. Based on these ideas, we build two QIF analysis tools: the first employs CBMC, a bounded model checker for ANSI C, and the second is built on top of Symbolic PathFinder, a symbolic executor for Java. We use these tools to quantify leaks in industrial code such as C programs from the Linux kernel, a Java tax program from the European project HATS, and anonymity protocols.
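    As a rough illustration of the link between QIF and model counting (my own sketch, not the paper's DPLL(T)-based framework): for a deterministic program, the channel capacity is log2 of the number of distinct observable outputs, which can be bounded by enumerating satisfying output values with an SMT solver and blocking each one found.

```python
# Sketch: count distinct outputs of a toy "program" encoded as an SMT formula,
# then report max leakage as log2 of that count. Uses the z3-solver bindings;
# the program (leaking the two low bits of a 4-bit secret) is hypothetical.
import math
from z3 import BitVec, Solver, sat

secret = BitVec("secret", 4)
output = BitVec("output", 4)

s = Solver()
s.add(output == secret & 3)               # program copies the two low bits

distinct_outputs = []
while s.check() == sat:
    v = s.model()[output]
    distinct_outputs.append(v)
    s.add(output != v)                     # block this output value and repeat

print("distinct outputs:", len(distinct_outputs))           # 4
print("max leakage (bits):", math.log2(len(distinct_outputs)))  # 2.0
```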

    Symbolic Quantitative Information Flow

    acmid: 2382791; issue_date: November 2012; keywords: algorithms, security, verification; numpages: 5

    Symbolic Algorithms for Qualitative Analysis of Markov Decision Processes with Büchi Objectives

    We consider Markov decision processes (MDPs) with ω-regular specifications given as parity objectives. We consider the problem of computing the set of almost-sure winning states, from which the objective can be ensured with probability 1. The algorithms for computing the almost-sure winning set for parity objectives iteratively use the solutions for the almost-sure winning set for Büchi objectives (a special case of parity objectives). Our contributions are as follows. First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes O(n√m) symbolic steps, as compared to the previously known algorithm that takes O(n²) symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes O(n√n) symbolic steps, as compared to the previously known O(n²)-symbolic-step algorithm. Second, we present a new algorithm, namely the win-lose algorithm, with the following two properties: (a) it iteratively computes subsets of the almost-sure winning set and its complement, as compared to all previous algorithms, which discover the almost-sure winning set only upon termination; and (b) it requires O(n√K) symbolic steps, where K is the maximal number of edges of the strongly connected components (SCCs) of the MDP. The win-lose algorithm requires symbolic computation of SCCs. Third, we improve the algorithm for symbolic SCC computation; the previously known algorithm takes linear symbolic steps, and our new algorithm improves the constants associated with the linear number of steps. In the worst case the previously known algorithm takes 5n symbolic steps, whereas our new algorithm takes 4n symbolic steps.
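    For readers unfamiliar with the cost model, a "symbolic step" is one predecessor or successor operation applied to a whole set of states (in practice a BDD), rather than a per-state operation. The following small Python sketch (mine, and deliberately much simpler than the paper's subquadratic algorithm) shows the basic building block: set-based backward reachability, where each iteration performs exactly one Pre operation.

```python
# Illustrative set-based ("symbolic") backward reachability on an explicit edge
# relation. The point is the shape of the computation: each loop iteration
# applies Pre to an entire set of states, which is what a symbolic step counts.
def pre(edges, S):
    """One symbolic step: all states with some successor inside S."""
    return {u for (u, v) in edges if v in S}

def backward_reachable(edges, target):
    """Least fixpoint of S -> target ∪ Pre(S), one Pre per iteration."""
    reach = set(target)
    while True:
        new = reach | pre(edges, reach)
        if new == reach:
            return reach
        reach = new

# Toy graph: 0 -> 1 -> 2, and 3 -> 3 (state 3 cannot reach the target 2)
edges = {(0, 1), (1, 2), (3, 3)}
print(backward_reachable(edges, {2}))   # {0, 1, 2}
```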

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
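    A simplified sketch of the scheduling idea described above (hypothetical action names and durations; this is not MIPS source code): given a sequentially generated plan and precedence constraints between its actions, a single forward pass in plan order assigns each action the earliest start time compatible with its predecessors, turning the sequence into a parallel, time-stamped plan in linear time over actions and precedence pairs.

```python
# Critical-path-style scheduling of a sequential plan into a time-stamped plan.
# The sequential plan order is already a topological order of the precedence
# relation, so one forward pass suffices.
def schedule(plan, durations, precedes):
    """plan: actions in sequential order; precedes: set of (before, after) pairs."""
    start = {}
    for a in plan:
        preds = [b for (b, c) in precedes if c == a]
        start[a] = max((start[b] + durations[b] for b in preds), default=0.0)
    return start

plan = ["load", "drive", "unload"]
durations = {"load": 2.0, "drive": 5.0, "unload": 2.0}
precedes = {("load", "drive"), ("drive", "unload")}
print(schedule(plan, durations, precedes))
# {'load': 0.0, 'drive': 2.0, 'unload': 7.0}
```

    Actions without a precedence constraint between them receive overlapping time stamps, which is what makes the resulting schedule a parallel plan.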

    An empirical investigation into branch coverage for C programs using CUTE and AUSTIN

    Automated test data generation has remained a topic of considerable interest for several decades because it lies at the heart of attempts to automate the process of software testing. This paper reports the results of an empirical study using the dynamic symbolic execution tool CUTE and the search-based tool AUSTIN on five non-trivial open source applications. The aim is to provide practitioners with an assessment of what can be achieved by existing techniques with little or no specialist knowledge, and to provide researchers with baseline data against which to measure subsequent work. To achieve this, each tool is applied 'as is', with neither additional tuning nor supporting harnesses and with no adjustments applied to the subject programs under test. The mere fact that these tools can be applied 'out of the box' in this manner reflects the growing maturity of automated test data generation. However, as might be expected, the study reveals opportunities for improvement and suggests ways to hybridize these two approaches, which have hitherto been developed entirely independently.