A Survey of Symbolic Execution Techniques
Many security and software testing applications require checking whether
certain properties of a program hold for any possible usage scenario. For
instance, a tool for identifying software vulnerabilities may need to rule out
the existence of any backdoor to bypass a program's authentication. One
approach would be to test the program using different, possibly random inputs.
As the backdoor may only be hit for very specific program workloads, automated
exploration of the space of possible inputs is of the essence. Symbolic
execution provides an elegant solution to the problem, by systematically
exploring many possible execution paths at the same time without necessarily
requiring concrete inputs. Rather than taking on fully specified input values,
the technique abstractly represents them as symbols, resorting to constraint
solvers to construct actual instances that would cause property violations.
Symbolic execution has been incubated in dozens of tools developed over the
last four decades, leading to major practical breakthroughs in a number of
prominent software reliability applications. The goal of this survey is to
provide an overview of the main ideas, challenges, and solutions developed in
the area, distilling them for a broad audience.
The present survey has been accepted for publication at ACM Computing Surveys.
This is the authors' pre-print copy. If you are considering citing this
survey, we would appreciate if you could use the following BibTeX entry:
http://goo.gl/Hf5Fvc
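
To make the core idea concrete, here is a minimal sketch of the
constraint-solving step using the Z3 solver's Python API (z3-solver); the
program under test, its backdoor constant, and the variable name are all
hypothetical, invented for this illustration rather than taken from the
survey.

    # Minimal sketch of symbolic execution using the Z3 SMT solver
    # (pip install z3-solver). The program under test and its backdoor
    # constant are hypothetical.
    from z3 import Int, Solver, sat

    pin = Int('pin')              # symbolic input in place of a concrete PIN

    # Path condition for the hidden branch of a program such as:
    #   if pin * 31 + 7 == 1240007: grant_access()
    backdoor_path = pin * 31 + 7 == 1240007

    s = Solver()
    s.add(backdoor_path)
    if s.check() == sat:
        # The solver constructs a concrete input reaching the backdoor.
        print('backdoor reachable with pin =', s.model()[pin])
    else:
        print('backdoor branch is infeasible')

Running the sketch yields pin = 40000, a concrete witness input; the
systematic enumeration of many such path conditions at once is what
distinguishes symbolic execution from random testing.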
On a New Notion of Partial Refinement
Formal specification techniques allow expressing idealized specifications,
which abstract from restrictions that may arise in implementations. However,
partial implementations are universal in software development due to practical
limitations. Our goal is to contribute to a method of program refinement that
allows for partial implementations. For programs with a normal and an
exceptional exit, we propose a new notion of partial refinement which allows an
implementation to terminate exceptionally if the desired results cannot be
achieved, provided the initial state is maintained. Partial refinement leads to
a systematic method of developing programs with exception handling.
(Comment: In Proceedings Refine 2013, arXiv:1305.563)
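
As an illustration of the shape of behavior this notion permits, consider the
following minimal Python sketch (hypothetical, not from the paper): the
operation either achieves its specified result and exits normally, or raises
an exception having left the initial state untouched.

    # Hedged illustration of partial refinement: the operation either
    # achieves the specified result (normal exit) or raises without having
    # disturbed the initial state (exceptional exit). Names are invented.
    class InsufficientFunds(Exception):
        pass

    def withdraw(account: dict, amount: int) -> None:
        """Desired result: account['balance'] decreases by amount."""
        if account['balance'] < amount:
            # Exceptional exit: the desired result cannot be achieved,
            # and the initial state is maintained (nothing was modified).
            raise InsufficientFunds(amount)
        account['balance'] -= amount      # normal exit: result achieved

    acct = {'balance': 50}
    try:
        withdraw(acct, 80)
    except InsufficientFunds:
        assert acct['balance'] == 50      # initial state preserved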
Featherweight VeriFast
VeriFast is a leading research prototype tool for the sound modular
verification of safety and correctness properties of single-threaded and
multithreaded C and Java programs. It has been used as a vehicle for
exploration and validation of novel program verification techniques and for
industrial case studies; it has served well in a number of program verification
competitions; and it has been used for teaching by multiple teachers
independent of the authors. However, until now, while VeriFast's operation has
been described informally in a number of publications, and specific
verification techniques have been formalized, a clear and precise exposition of
how VeriFast works has not yet appeared. In this article we present for the
first time a formal definition and soundness proof of a core subset of the
VeriFast program verification approach. The exposition aims to be both
accessible and rigorous: the text is based on lecture notes for a graduate
course on program verification, and it is backed by an executable
machine-readable definition and machine-checked soundness proof in Coq.
Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely.
This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192, FA8750-12-2-0110).
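
To convey the flavor of the analysis, here is a hedged Python sketch of the
simplest case: for a straight-line block, the probability that the result is
correct is bounded below by the product of the per-operation reliabilities
drawn from the hardware specification. The operation names and reliability
figures below are hypothetical, and real Rely handles control flow and far
richer programs than this.

    # Hedged sketch of the simplest case of quantitative reliability:
    # a straight-line block's probability of producing the correct result
    # is bounded below by the product of its per-operation reliabilities.
    # These operation names and figures are hypothetical, not from Rely.
    OP_RELIABILITY = {
        'load': 1 - 1e-8,   # P(an unreliable memory read is correct)
        'add':  1 - 1e-7,   # P(an unreliable addition is correct)
        'mul':  1 - 1e-7,   # P(an unreliable multiplication is correct)
    }

    def straight_line_reliability(trace):
        """Lower bound on P(correct result) for a sequence of ops."""
        r = 1.0
        for op in trace:
            r *= OP_RELIABILITY[op]
        return r

    # e.g. r = x * y + z with unreliable loads and arithmetic:
    r = straight_line_reliability(['load', 'load', 'mul', 'load', 'add'])
    print(f'reliability >= {r:.9f}')
    assert r >= 0.9999    # would discharge a specification such as 0.9999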
On Verifying Complex Properties using Symbolic Shape Analysis
One of the main challenges in the verification of software systems is the
analysis of unbounded data structures with dynamic memory allocation, such as
linked data structures and arrays. We describe Bohne, a new analysis for
verifying data structures. Bohne verifies data structure operations and shows
that 1) the operations preserve data structure invariants and 2) the operations
satisfy their specifications expressed in terms of changes to the set of
objects stored in the data structure. During the analysis, Bohne infers loop
invariants in the form of disjunctions of universally quantified Boolean
combinations of formulas. To synthesize loop invariants of this form, Bohne
uses a combination of decision procedures for Monadic Second-Order Logic over
trees, SMT-LIB decision procedures (currently CVC Lite), and an automated
reasoner within the Isabelle interactive theorem prover. This architecture
shows that synthesized loop invariants can serve as a useful communication
mechanism between different decision procedures. Using Bohne, we have verified
operations on data structures such as linked lists with iterators and back
pointers, trees with and without parent pointers, two-level skip lists, array
data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob
data structure analysis systems, enabling us to combine Bohne with analyses of
data structure clients and apply it in the context of larger programs. This
report describes the Bohne algorithm as well as techniques that Bohne uses to
reduce the amount of annotations and the running time of the analysis.
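
Bohne's domain is heap shapes, which is hard to condense into a few lines; as
a deliberately simplified stand-in, the following Python sketch (using Z3,
with a toy integer loop invented for illustration) shows the three checks any
inferred loop invariant must pass: it holds on entry, is preserved by each
iteration, and together with the negated guard implies the postcondition.

    # Deliberately simplified stand-in (integers, not heap shapes) for the
    # checks behind loop invariant inference, using Z3 (pip install
    # z3-solver). Toy loop:  i = 0; while i < n: i += 1  with candidate
    # invariant 0 <= i <= n. Bohne itself infers disjunctions of quantified
    # shape formulas; only the init/preserve/exit obligations appear here.
    from z3 import Ints, Implies, And, Not, Solver, unsat

    i, i2, n = Ints('i i2 n')

    def inv(v):
        return And(0 <= v, v <= n)

    def valid(formula):
        s = Solver()
        s.add(Not(formula))               # no counterexample => valid
        return s.check() == unsat

    print(valid(Implies(And(n >= 0, i == 0), inv(i))))               # init
    print(valid(Implies(And(inv(i), i < n, i2 == i + 1), inv(i2))))  # preserve
    print(valid(Implies(And(inv(i), Not(i < n)), i == n)))           # exit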
Test Case Generation for Object-Oriented Imperative Languages in CLP
Testing is a vital part of the software development process. Test Case
Generation (TCG) is the process of automatically generating a collection of
test cases which are applied to a system under test. White-box TCG is usually
performed by means of symbolic execution, i.e., instead of executing the
program on normal values (e.g., numbers), the program is executed on symbolic
values representing arbitrary values. When dealing with an object-oriented (OO)
imperative language, symbolic execution becomes challenging as, among other
things, it must be able to backtrack, complex heap-allocated data structures
should be created during the TCG process and features like inheritance, virtual
invocations and exceptions have to be taken into account. Due to its inherent
symbolic execution mechanism, we argue in this paper that Constraint Logic
Programming (CLP) has a promising, as yet unexploited, application field in
TCG. We support our claim by developing a fully CLP-based framework for TCG of
an OO imperative language, and by assessing a corresponding implementation on a
set of challenging Java programs. A unique characteristic of our approach is
that it handles all language features using only CLP, without the need to
develop specific constraint operators (e.g., to model the heap).
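
The white-box TCG loop the abstract describes can be pictured as: enumerate
path conditions, solve each one, and keep a concrete input per feasible path.
The following Python sketch does this for a hypothetical three-path function;
the paper's engine is CLP, whereas an SMT solver is used here purely to keep
the illustration self-contained.

    # Hedged sketch of white-box TCG by symbolic execution for a
    # hypothetical function; the paper's engine is CLP, and an SMT solver
    # (pip install z3-solver) is used here only for self-containedness.
    from z3 import Int, And, Not, Solver, sat

    x = Int('x')
    # Program under test:
    #   def classify(x):
    #       return 'big' if x > 100 else ('neg' if x < 0 else 'small')
    paths = {
        'big':   x > 100,
        'neg':   And(Not(x > 100), x < 0),
        'small': And(Not(x > 100), Not(x < 0)),
    }

    tests = {}
    for label, path_condition in paths.items():
        s = Solver()
        s.add(path_condition)
        if s.check() == sat:              # feasible path: extract an input
            tests[label] = s.model()[x].as_long()

    print(tests)   # one concrete test input per path, e.g. {'big': 101, ...}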
Field-Sensitive Value Analysis by Field-Insensitive Analysis
Shared and mutable data-structures pose major problems in static analysis, and most analyzers are unable to keep track of the values of numeric variables stored in the heap. In this paper, we first identify sufficient conditions under which heap-allocated numeric variables in object-oriented programs (i.e., numeric fields) can be handled as non-heap-allocated variables. Then, we present a static analysis to infer which numeric fields satisfy these conditions at the level of (sequential) bytecode. This allows instrumenting the code with ghost variables which make such numeric fields observable to any field-insensitive value analysis. Our experimental results in termination analysis show that we greatly enlarge the class of analyzable programs with a reasonable overhead.
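
The instrumentation idea can be pictured with a small hypothetical example
(written in Python for readability; the paper works on sequential bytecode):
once the sufficient conditions guarantee that a numeric field behaves like a
local variable, a ghost local mirroring it at every write makes the field's
decrease, and hence termination, visible to a field-insensitive value
analysis.

    # Hedged illustration of the ghost-variable instrumentation. Under the
    # paper's sufficient conditions the field c.size behaves like a local,
    # so a ghost local mirroring it at every write makes its decrease
    # visible to a field-insensitive value analysis. Names are invented.
    class Counter:
        def __init__(self, size: int):
            self.size = size

    def drain(c: Counter) -> None:
        ghost_size = c.size               # ghost variable added by analysis
        while c.size > 0:                 # decrease provable via ghost_size
            c.size = c.size - 1
            ghost_size = ghost_size - 1   # kept in sync at each field write
        assert ghost_size == c.size == 0

    drain(Counter(3))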