Transfer Function Synthesis without Quantifier Elimination
Traditionally, transfer functions have been designed manually for each
operation in a program, instruction by instruction. In such a setting, a
transfer function describes the semantics of a single instruction, detailing
how a given abstract input state is mapped to an abstract output state. The net
effect of a sequence of instructions, a basic block, can then be calculated by
composing the transfer functions of the constituent instructions. However,
precision can be improved by applying a single transfer function that captures
the semantics of the block as a whole. Since blocks are program-dependent, this
approach necessitates automation. There has thus been growing interest in
computing transfer functions automatically, most notably using techniques based
on quantifier elimination. Although conceptually elegant, quantifier
elimination inevitably induces a computational bottleneck, which limits the
applicability of these methods to small blocks. This paper contributes a method
for calculating transfer functions that finesses quantifier elimination
altogether, and can thus be seen as a response to this problem. The
practicality of the method is demonstrated by generating transfer functions for
input and output states that are described by linear template constraints,
which include intervals and octagons. Comment: 37 pages, extended version of ESOP 2011 paper.
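The precision gain from a block-level transfer function can be illustrated with a toy interval example (a hand-rolled sketch, not the paper's synthesis method; variable and function names are invented):

```python
# Interval arithmetic over pairs (lo, hi).
def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def sub(a, b):
    # subtraction on intervals: lowest minus highest, highest minus lowest
    return (a[0] - b[1], a[1] - b[0])

# Block: y = x; z = x - y   (so concretely z is always 0)
x = (0, 10)

# Per-instruction composition: intervals forget that y equals x,
# so x - y is computed as the difference of two independent intervals.
y = x
z_composed = sub(x, y)   # (-10, 10)

# A transfer function for the block as a whole can exploit y == x,
# giving the exact result.
z_block = (0, 0)

print(z_composed, z_block)
```

The composed result (-10, 10) is sound but loses the correlation between x and y, which is exactly the imprecision that block-level transfer functions recover.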
Taming Numbers and Durations in the Model Checking Integrated Planning System
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear time algorithm to compute the parallel plan bypasses
known NP hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate, and parameterized optimization.
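The critical-path idea behind the parallel-plan construction can be sketched in a few lines (a hypothetical miniature with invented action names and durations, not the MIPS implementation): each action is given the earliest start time compatible with its precedence constraints, in one pass over the sequential plan.

```python
# Turn a sequential plan into a parallel schedule: every action starts as
# soon as all of its predecessors have finished. The sequential plan order
# is already a valid topological order, so a single pass suffices, making
# the algorithm linear in the number of actions plus precedence edges.
def schedule(plan, duration, preds):
    start = {}
    for a in plan:
        start[a] = max((start[p] + duration[p] for p in preds[a]),
                       default=0.0)
    return start

plan = ["load", "drive", "unload", "refuel"]
duration = {"load": 2.0, "drive": 5.0, "unload": 2.0, "refuel": 3.0}
preds = {"load": set(), "drive": {"load"},
         "unload": {"drive"}, "refuel": {"drive"}}

print(schedule(plan, duration, preds))
# "unload" and "refuel" are not ordered against each other,
# so both can start once "drive" finishes at time 7.0
```

Restricting attention to the actions actually in the plan and their imposed precedences is what sidesteps the NP-hardness of general partial-order scheduling mentioned above.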
Building a Better Racetrack
We find IIb compactifications on Calabi-Yau orientifolds in which all Kahler
moduli are stabilized, along lines suggested by Kachru, Kallosh, Linde and
Trivedi.Comment: 47 pages, 1 figure, harvmac (v2: added references, minor comments,
v3: improved discussion of metastability and explicit flux vacua
Incremental closure for systems of two variables per inequality
Subclasses of linear inequalities where each inequality has at most two variables are popular in abstract interpretation and model checking, because they strike a balance between what can be described and what can be efficiently computed. This paper focuses on the TVPI class of inequalities, for which each coefficient of each two-variable inequality is unrestricted. An implied TVPI inequality can be generated from a pair of TVPI inequalities by eliminating a given common variable (echoing resolution on clauses). This operation, called result, can be applied to derive TVPI inequalities which are entailed (implied) by a given TVPI system. The key operation on TVPI systems is calculating closure: satisfiability can be observed from a closed system, and a closed system also simplifies the calculation of other operations. A closed system can be derived by repeatedly applying the result operator. The process of adding a single TVPI inequality to an already closed input TVPI system and then finding the closure of this augmented system is called incremental closure. This too can be calculated by the repeated application of the result operator. This paper studies the calculus defined by result, the structure of result derivations, and how derivations can be combined and controlled. A series of lemmata on derivations are presented that, collectively, provide a pathway for synthesising an algorithm for incremental closure. The complexity of the incremental closure algorithm is analysed and found to be O((n^2 + m^2) lg(m)), where n is the number of variables and m the number of inequalities of the input TVPI system.
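The result operator can be sketched directly (a simplified illustration assuming a dictionary representation of inequalities; the paper's contribution is in controlling which such derivations are performed, not in the single step itself):

```python
from fractions import Fraction

# An inequality sum(coef[v] * v) <= bound is a (dict, bound) pair.
# result eliminates a shared variable from two inequalities, much like
# resolution on clauses: it only applies when the variable's coefficients
# have opposite signs, so that suitably scaled copies cancel.
def result(ineq1, bound1, ineq2, bound2, var):
    c1, c2 = Fraction(ineq1[var]), Fraction(ineq2[var])
    if c1 * c2 >= 0:
        return None                 # coefficients cannot cancel
    m1, m2 = abs(c2), abs(c1)       # scale factors that cancel var
    out = {}
    for v, c in ineq1.items():
        out[v] = out.get(v, Fraction(0)) + m1 * c
    for v, c in ineq2.items():
        out[v] = out.get(v, Fraction(0)) + m2 * c
    del out[var]                    # var's coefficient is now zero
    return out, m1 * bound1 + m2 * bound2

# x - 2y <= 3  and  4y + z <= 8  entail  4x + 2z <= 28 (i.e. 2x + z <= 14)
print(result({"x": 1, "y": -2}, 3, {"y": 4, "z": 1}, 8, "y"))
```

Note that applying result to two TVPI inequalities sharing one variable again yields a two-variable inequality, which is why the class is closed under this derivation step.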
Abstract domains for bit-level machine integer and floating-point operations
We present a few lightweight numeric abstract domains to analyze C programs that exploit the binary representation of numbers in computers, for instance to perform "compute-through-overflow" on machine integers, or to directly manipulate the exponent and mantissa of floating-point numbers. On integers, we propose an extension of intervals with a modular component, as well as a bitfield domain. On floating-point numbers, we propose a predicate domain to match, infer, and propagate selected expression patterns. These domains are simple, efficient, and extensible. We have included them into the Astrée and AstréeA static analyzers to supplement existing domains. Experimental results show that they can improve the analysis precision at a reasonable cost.
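The bitfield idea can be illustrated with a toy 8-bit sketch (my own minimal construction, not the Astrée implementation): an abstract value records which bits are definitely 1 and which are unknown, with all remaining bits definitely 0.

```python
MASK = 0xFF  # toy 8-bit machine words

def abstract(v):
    # abstraction of a single concrete value: all bits known
    return (v & MASK, 0)

def join(a, b):
    # least upper bound: a bit is unknown where the operands disagree
    # or where either is already unknown; definitely 1 only if 1 in both
    unk = (a[1] | b[1] | (a[0] ^ b[0])) & MASK
    return (a[0] & b[0] & ~unk & MASK, unk)

def bit_and(a, b):
    # abstract bitwise AND: definitely 1 needs 1 on both sides;
    # unknown where both sides may be 1 but are not both definitely 1
    may_a, may_b = a[0] | a[1], b[0] | b[1]
    ones = a[0] & b[0]
    return (ones, (may_a & may_b) & ~ones)

v = join(abstract(0x0F), abstract(0x1F))   # low nibble 1, bit 4 unknown
print(bit_and(v, abstract(0x11)))          # only bit 0 certain, bit 4 unknown
```

Even this crude domain proves, for instance, that the upper bits of a masked value are zero, which a plain interval domain cannot express after overflow.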
Inductive Program Synthesis via Iterative Forward-Backward Abstract Interpretation
A key challenge in example-based program synthesis is the gigantic search
space of programs. To address this challenge, various work proposed to use
abstract interpretation to prune the search space. However, most existing
approaches have focused only on forward abstract interpretation, and thus
cannot fully exploit the power of abstract interpretation. In this paper, we
propose a novel approach to inductive program synthesis via iterative
forward-backward abstract interpretation. The forward abstract interpretation
computes possible outputs of a program given inputs, while the backward
abstract interpretation computes possible inputs of a program given outputs. By
iteratively performing the two abstract interpretations in an alternating
fashion, we can effectively determine if any completion of each partial program
as a candidate can satisfy the input-output examples. We apply our approach to
a standard formulation, syntax-guided synthesis (SyGuS), thereby supporting a
wide range of inductive synthesis tasks. We have implemented our approach and
evaluated it on a set of benchmarks from the prior work. The experimental
results show that our approach significantly outperforms the state-of-the-art
approaches thanks to the sophisticated abstract interpretation techniques.
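The alternation can be illustrated on a toy interval domain (a hand-rolled sketch for a partial program `x + hole`, not the authors' SyGuS implementation; all names are invented):

```python
# Forward pass: possible outputs of x + hole given intervals for both.
def forward_add(x, hole):
    return (x[0] + hole[0], x[1] + hole[1])

# Backward pass: given the required output of x + hole and x's interval,
# infer the interval the hole would have to lie in.
def backward_add(out, x):
    return (out[0] - x[1], out[1] - x[0])

x = (3, 3)                      # input from an input-output example
hole = (0, 9)                   # hole ranges over the constants 0..9
out = forward_add(x, hole)      # forward: outputs lie in [3, 12]

required = (20, 20)             # output demanded by the example
need = backward_add(required, x)  # backward: hole must lie in [17, 17]

# The hole's candidates and the backward requirement do not intersect,
# so no completion of this partial program can work: prune it.
feasible = max(hole[0], need[0]) <= min(hole[1], need[1])
print(out, need, feasible)      # -> (3, 12) (17, 17) False
```

Iterating such forward and backward passes lets the abstract facts refine each other, which is what lets the combined analysis prune candidates that a forward-only analysis would keep.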
Polymers in Fractal Disorder
This work presents a numerical investigation of self-avoiding walks (SAWs) on percolation clusters, a canonical model for polymers in disordered media. A new algorithm has been developed that allows exact enumeration of walks of over ten thousand steps. This is an increase of several orders of magnitude compared to previously existing enumeration methods, which allow for barely more than forty steps. Such an increase is achieved by exploiting the fractal structure of critical percolation clusters: they are hierarchically organized into a tree of loosely connected nested regions in which the walk segments are enumerated separately. After the enumeration process, a region is "decimated" and subsequently behaves effectively as a single point. Since this method only works efficiently near the percolation threshold, a chain-growth Monte Carlo algorithm has also been used.
The main focus of the investigation was the asymptotic scaling behavior of the average end-to-end distance as a function of the number of steps on critical clusters in different dimensions. Thanks to the highly efficient new method, existing estimates of the scaling exponents could be improved substantially. Also investigated were the number of possible chain conformations and the average entropy, which were found to follow an unusual scaling behavior. For concentrations above the percolation threshold, the exponent describing the growth of the end-to-end distance turned out to differ from that on regular lattices, defying the prediction of the accepted theory. Finally, SAWs with short-range attractions on percolation clusters are discussed. Here, it emerged that there seems to be no temperature-driven collapse transition, as the asymptotic scaling behavior of the end-to-end distance even at zero temperature is the same as for athermal SAWs.
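For contrast with the hierarchical decimation scheme described above, plain exact enumeration of SAWs can be sketched in a few lines (here on the full square lattice rather than a percolation cluster), which makes clear why naive methods stall after a few dozen steps: the count grows exponentially and every walk is visited individually.

```python
# Brute-force exact enumeration of n-step self-avoiding walks on the
# square lattice Z^2, starting from the origin. Each recursive call
# extends the walk by one step to an unvisited neighbor.
def count_saws(n, pos=(0, 0), visited=None):
    if visited is None:
        visited = {(0, 0)}
    if n == 0:
        return 1
    total = 0
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt not in visited:
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)   # backtrack
    return total

# Counts c_n of n-step SAWs on Z^2: 4, 12, 36, 100, 284, ...
print([count_saws(n) for n in range(1, 6)])
```

The running time is proportional to the number of walks, which grows roughly as mu^n with connective constant mu approximately 2.64 on the square lattice; the region-by-region enumeration with decimation is what breaks this barrier on critical percolation clusters.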