Using Bounded Model Checking to Focus Fixpoint Iterations
Two classical sources of imprecision in static analysis by abstract
interpretation are widening and merge operations. Merge operations can be done
away with by distinguishing paths, as in trace partitioning, at the expense of
enumerating an exponential number of paths. In this article, we describe how to
avoid such systematic exploration by focusing on a single path at a time,
designated by SMT-solving. Our method combines well with acceleration
techniques, thus doing away with widenings as well in some cases. We illustrate
it over the well-known domain of convex polyhedra.
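To illustrate the idea on a simpler domain, the sketch below runs an interval analysis of an assumed toy loop and lets an SMT solver (Z3) designate a single transition that escapes the current candidate invariant; only that witness is joined back in. The paper works with convex polyhedra and combines the idea with acceleration, neither of which is shown here; the toy loop converges without widening.

from z3 import Int, Solver, And, Not, If, sat

x, xp = Int("x"), Int("xp")
# One iteration of the assumed toy loop:
#   x = 0; while x < 100: if x < 50: x += 1 else: x += 2
step = And(x < 100, xp == If(x < 50, x + 1, x + 2))

lo, hi = 0, 0                        # candidate interval for x at the loop head
while True:
    s = Solver()
    # Is there a single transition starting inside the candidate invariant
    # and ending outside it?  SMT solving designates the offending path.
    s.add(lo <= x, x <= hi, step, Not(And(lo <= xp, xp <= hi)))
    if s.check() != sat:
        break                        # no such transition: the candidate is inductive
    w = s.model()[xp].as_long()      # witness state on the designated path
    lo, hi = min(lo, w), max(hi, w)  # refine along this one witness only

print(f"inductive interval at the loop head: [{lo}, {hi}]")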
Invariant Generation through Strategy Iteration in Succinctly Represented Control Flow Graphs
We consider the problem of computing numerical invariants of programs, for
instance bounds on the values of numerical program variables. More
specifically, we study the problem of performing static analysis by abstract
interpretation using template linear constraint domains. Such invariants can be
obtained by Kleene iterations that are, in order to guarantee termination,
accelerated by widening operators. In many cases, however, applying this form
of extrapolation leads to invariants that are weaker than the strongest
inductive invariant that can be expressed within the abstract domain in use.
Another well-known source of imprecision of traditional abstract interpretation
techniques stems from their use of join operators at merge nodes in the control
flow graph. The mentioned weaknesses may prevent these methods from proving
safety properties. The technique we develop in this article addresses both of
these issues: contrary to Kleene iterations accelerated by widening operators,
it is guaranteed to yield the strongest inductive invariant that can be
expressed within the template linear constraint domain in use. It also eschews
join operators by distinguishing all paths of loop-free code segments. Formally
speaking, our technique computes the least fixpoint within a given template
linear constraint domain of a transition relation that is succinctly expressed
as an existentially quantified linear real arithmetic formula. In contrast to
previously published techniques that rely on quantifier elimination, our
algorithm is proved to have optimal complexity: we prove that the decision
problem associated with our fixpoint problem is in the second level of the
polynomial-time hierarchy.
Comment: 35 pages; conference version published at ESOP 2011; this version is a CoRR version of our submission to Logical Methods in Computer Science.
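For readers unfamiliar with template linear constraint domains, the sketch below shows the data structure the abstract refers to: a fixed set of linear templates and, as an abstract value, one bound per template, ordered and joined pointwise. The templates and points used here are assumptions made for the example; the strategy-iteration algorithm that computes the least fixpoint of such bound vectors is not shown.

import math

# Assumed illustrative templates over program variables (x, y):
#   x <= b0,  -x <= b1,  y <= b2,  -y <= b3,  x - y <= b4
TEMPLATES = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1)]

BOTTOM = None                       # unreachable
TOP = [math.inf] * len(TEMPLATES)   # no information

def concretize(bounds):
    # Human-readable view: the conjunction of the template constraints.
    if bounds is BOTTOM:
        return "false"
    parts = [f"{cx}*x + {cy}*y <= {b}"
             for (cx, cy), b in zip(TEMPLATES, bounds) if b != math.inf]
    return " and ".join(parts) or "true"

def join(a, b):
    # Least upper bound: pointwise maximum of the bounds.
    if a is BOTTOM:
        return b
    if b is BOTTOM:
        return a
    return [max(ba, bb) for ba, bb in zip(a, b)]

def leq(a, b):
    # Abstract order: a is at least as precise as b.
    if a is BOTTOM:
        return True
    if b is BOTTOM:
        return False
    return all(ba <= bb for ba, bb in zip(a, b))

# Abstraction of the point (x, y) = (0, 0) joined with that of (3, 1):
p0 = [0, 0, 0, 0, 0]
p1 = [3, -3, 1, -1, 2]
print(concretize(join(p0, p1)))     # i.e. x <= 3, -x <= 0, y <= 1, -y <= 0, x - y <= 2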
Enhanced sharing analysis techniques: a comprehensive evaluation
Sharing, an abstract domain developed by D. Jacobs and A. Langen for the analysis of logic
programs, derives useful aliasing information. It is well-known that a commonly used core
of techniques, such as the integration of Sharing with freeness and linearity information, can
significantly improve the precision of the analysis. However, a number of other proposals for
refined domain combinations have been circulating for years. One feature that is common
to these proposals is that they do not seem to have undergone a thorough experimental
evaluation even with respect to the expected precision gains.
In this paper we experimentally
evaluate: helping Sharing with the definitely ground variables found using Pos, the domain
of positive Boolean formulas; the incorporation of explicit structural information; a full
implementation of the reduced product of Sharing and Pos; the issue of reordering the
bindings in the computation of the abstract mgu; an original proposal for the addition of
a new mode recording the set of variables that are deemed to be ground or free; a refined
way of using linearity to improve the analysis; the recovery of hidden information in the
combination of Sharing with freeness information. Finally, we discuss the issue of whether
tracking compoundness allows the computation of more sharing information.
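As background for this evaluation, here is a minimal sketch of the plain Jacobs and Langen Sharing domain and its abstract unification (amgu), without any of the freeness, linearity, Pos or structural refinements discussed above; the example binding is an assumption.

def rel(vs, sh):
    # Sharing groups relevant to (intersecting) the set of variables vs.
    return {g for g in sh if g & vs}

def star(sh):
    # Star-union: close a set of sharing groups under pairwise union.
    closed = set(sh)
    changed = True
    while changed:
        changed = False
        for a in list(closed):
            for b in list(closed):
                u = a | b
                if u not in closed:
                    closed.add(u)
                    changed = True
    return closed

def binary_union(sh1, sh2):
    # All unions of one group from each argument (usually written 'bin').
    return {a | b for a in sh1 for b in sh2}

def amgu(x, t_vars, sh):
    # Abstract most general unifier for the binding x = t, with vars(t) = t_vars.
    rel_x, rel_t = rel({x}, sh), rel(t_vars, sh)
    untouched = sh - (rel_x | rel_t)
    return untouched | binary_union(star(rel_x), star(rel_t))

# Example (assumed): x, y and z start independent; then the binding x = f(y, z).
sh = {frozenset({"x"}), frozenset({"y"}), frozenset({"z"})}
print(sorted(map(sorted, amgu("x", {"y", "z"}, sh))))
# -> [['x', 'y'], ['x', 'y', 'z'], ['x', 'z']]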
Abstract Fixpoint Computations with Numerical Acceleration Methods
Static analysis by abstract interpretation aims at automatically proving
properties of computer programs. To do this, an over-approximation of program
semantics, defined as the least fixpoint of a system of semantic equations,
must be computed. To enforce the convergence of this computation, a widening
operator is used, but it may lead to coarse results. We propose a new method to
accelerate the computation of this fixpoint by using standard techniques of
numerical analysis. Our goal is to automatically and dynamically adapt the
widening operator in order to maintain precision.
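As an illustration of the general idea, the sketch below applies one standard numerical technique, Aitken's delta-squared extrapolation, to the sequence of interval upper bounds produced by Kleene iteration, and uses the extrapolated limit as an adaptive widening threshold that is then checked for stability. The toy loop, the choice of Aitken's method and the thresholding scheme are assumptions made for the example, not the paper's exact algorithm.

# Assumed toy loop:  x = 0.0; while True: x = 0.5 * x + 1.0
# The interval upper bound of x at the loop head converges geometrically to 2.
def post_upper(h):
    # Upper bound after one iteration, joined with the initial bound 0.
    return max(0.0, 0.5 * h + 1.0)

def aitken(a0, a1, a2):
    # Aitken's delta-squared extrapolation of the limit of a sequence.
    d1, d2 = a1 - a0, a2 - a1
    dd = d2 - d1
    return a2 if abs(dd) < 1e-12 else a0 - d1 * d1 / dd

h0 = 0.0
h1 = post_upper(h0)
h2 = post_upper(h1)
guess = aitken(h0, h1, h2)          # numerical guess of the limit: 2.0

# Use the guess as an adaptive widening threshold and verify it is stable.
if post_upper(guess) <= guess + 1e-9:
    print(f"invariant upper bound: x <= {guess}")
else:
    print("guess not inductive; fall back to standard widening")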
Experiments with a Convex Polyhedral Analysis Tool for Logic Programs
Convex polyhedral abstractions of logic programs have been found very useful
in deriving numeric relationships between program arguments in order to prove
program properties and in other areas such as termination and complexity
analysis. We present a tool for constructing polyhedral analyses of
(constraint) logic programs. The aim of the tool is to make available, with a
convenient interface, state-of-the-art techniques for polyhedral analysis such
as delayed widening, narrowing, "widening up-to", and enhanced automatic
selection of widening points. The tool is accessible on the web, permits user
programs to be uploaded and analysed, and is integrated with related program
transformations such as size abstractions and query-answer transformation. We
then report some experiments using the tool, showing how it can be conveniently
used to analyse transition systems arising from models of embedded systems, and
an emulator for a PIC microcontroller, which is used, for example, in wearable
computing systems. We discuss issues including scalability, tradeoffs of
precision and computation time, and other program transformations that can
enhance the results of analysis.
Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE2007).
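As a small illustration of two of the techniques listed above, delayed widening and narrowing, the sketch below applies them to the interval domain rather than to convex polyhedra; the toy loop and the delay parameter are assumptions made for the example.

import math

# Assumed toy loop:  x = 0; while x < 100: x = x + 1
# Interval abstract value for x at the loop head: (lo, hi).
INIT = (0, 0)

def post(iv):
    # Join of the initial state with one loop iteration applied to iv.
    lo, hi = iv
    glo, ghi = lo, min(hi, 99)          # intersect with the guard x < 100
    if glo > ghi:
        return INIT                     # body unreachable: only the initial state
    return (min(0, glo + 1), max(0, ghi + 1))

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    # Standard interval widening: unstable bounds jump to infinity.
    return (a[0] if b[0] >= a[0] else -math.inf,
            a[1] if b[1] <= a[1] else math.inf)

DELAY = 3                               # delayed widening: plain joins for the first iterations
iv = INIT
for k in range(1000):
    step = join(iv, post(iv))
    new = step if k < DELAY else widen(iv, step)
    if new == iv:
        break
    iv = new
print("after delayed widening:", iv)    # (0, inf)

iv = post(iv)                           # one narrowing (decreasing) iteration
print("after narrowing:", iv)           # (0, 100)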
Subsumer-First: Steering Symbolic Reachability Analysis
Symbolic reachability analysis provides a basis for the verification of software systems by offering algorithmic support for the exploration of the program state space when searching for proofs or counterexamples. The choice of exploration strategy employed by the analysis has direct impact on its success, whereas the ability to find short counterexamples quickly and, as a complementary task, to efficiently perform the exhaustive state space traversal are of utmost importance for the majority of verification efforts. Existing exploration strategies can optimize only one of these objectives, which leads to a sub-optimal reachability analysis, e.g., breadth-first search may sacrifice the exploration efficiency and chaotic iteration can miss minimal counterexamples. In this paper we present subsumer-first, a new approach for steering symbolic reachability analysis that targets both minimal counterexample discovery and efficiency of exhaustive exploration. Our approach leverages the result of fixpoint checks performed during symbolic reachability analysis to bias the exploration strategy towards its objectives, and does not require any additional computation. We demonstrate how the subsumer-first approach can be applied to improve efficiency of software verification tools based on predicate abstraction. Our experimental evaluation indicates the practical usefulness of the approach: we observe significant efficiency improvements (median value 40%) on difficult verification benchmarks from the transportation domain.
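One plausible reading of the mechanism, as a worklist algorithm, is sketched below: whenever a fixpoint (coverage) check finds that a newly generated state is subsumed by an already kept state, the subsuming state is credited, and states with more such credits are dequeued first. The state representation, the toy successor relation and the exact prioritization are assumptions made for illustration; the paper's actual heuristic may differ in its details.

import heapq
from itertools import count

def subsumes(general, specific):
    # A state covers another if its constraint set is a subset of the other's
    # (fewer constraints describe a larger, more general region).
    return general <= specific

def reach(init, successors):
    tick = count()
    kept = [init]                        # states retained by the analysis
    score = {init: 0}                    # fixpoint-check hits credited to each state
    expanded = set()
    frontier = [(0, next(tick), init)]   # min-heap: smaller key = dequeued earlier
    while frontier:
        key, _, s = heapq.heappop(frontier)
        if s in expanded or key != -score[s]:
            continue                     # stale entry (state re-queued with a better key)
        expanded.add(s)
        for t in successors(s):
            coverer = next((k for k in kept if subsumes(k, t)), None)
            if coverer is not None:
                # Fixpoint check hit: credit the subsumer and, if it is still
                # waiting in the frontier, move it towards the front.
                score[coverer] += 1
                if coverer not in expanded:
                    heapq.heappush(frontier, (-score[coverer], next(tick), coverer))
                continue
            kept.append(t)
            score[t] = 0
            heapq.heappush(frontier, (0, next(tick), t))
    return kept

# Tiny assumed demo: states are frozensets of facts; a successor may drop a fact,
# yielding a more general state that later subsumes more specific ones.
def successors(s):
    return [frozenset(s - {f}) for f in s]

for state in reach(frozenset({"x>=0", "y>=0", "z>=0"}), successors):
    print(sorted(state))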
BCFA: Bespoke Control Flow Analysis for CFA at Scale
Many data-driven software engineering tasks such as discovering programming
patterns, mining API specifications, etc., perform source code analysis over
control flow graphs (CFGs) at scale. Analyzing millions of CFGs can be
expensive and performance of the analysis heavily depends on the underlying CFG
traversal strategy. State-of-the-art analysis frameworks use a fixed traversal
strategy. We argue that a single traversal strategy does not fit all kinds of
analyses and CFGs and propose bespoke control flow analysis (BCFA). Given a
control flow analysis (CFA) and a large number of CFGs, BCFA selects the most
efficient traversal strategy for each CFG. BCFA extracts a set of properties of
the CFA by analyzing the code of the CFA and combines it with properties of the
CFG, such as branching factor and cyclicity, for selecting the optimal
traversal strategy. We have implemented BCFA in Boa, and evaluated BCFA using a
set of representative static analyses that mainly involve traversing CFGs and
two large datasets containing 287 thousand and 162 million CFGs. Our results
show that BCFA can speed up the large-scale analyses by 1%-28%. Further, BCFA
has low overhead (less than 0.2%) and a low misprediction rate (less than 0.01%).
Comment: 12 pages.
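The selection step can be pictured as a small decision procedure over cheap properties of the analysis and the CFG; the sketch below uses a flow-sensitivity flag and cyclicity to pick among three traversal strategies. The property names, the decision rules and the strategy labels are assumptions made for illustration, not BCFA's or Boa's actual decision logic.

from collections import defaultdict

def has_cycle(cfg):
    # cfg: dict mapping each node to the list of its successors.
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(lambda: WHITE)

    def visit(n):
        color[n] = GREY
        for m in cfg.get(n, []):
            if color[m] == GREY:
                return True                 # back edge: the CFG has a loop
            if color[m] == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(cfg))

def branching_factor(cfg):
    return max((len(succ) for succ in cfg.values()), default=0)

def choose_strategy(cfg, flow_sensitive):
    # Pick a traversal strategy from properties of the analysis and of the CFG.
    if not flow_sensitive:
        return "any-order single pass"               # visit order does not matter
    if not has_cycle(cfg):
        return "reverse post-order single pass"      # a DAG needs no re-iteration
    return "worklist iteration until stabilization"  # loops require a fixpoint

# Tiny assumed CFG: 0 -> 1 -> 2 -> 1 (a loop), and 2 -> 3.
cfg = {0: [1], 1: [2], 2: [1, 3], 3: []}
print(branching_factor(cfg), choose_strategy(cfg, flow_sensitive=True))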