
    Transfer Function Synthesis without Quantifier Elimination

    Traditionally, transfer functions have been designed manually for each operation in a program, instruction by instruction. In such a setting, a transfer function describes the semantics of a single instruction, detailing how a given abstract input state is mapped to an abstract output state. The net effect of a sequence of instructions, a basic block, can then be calculated by composing the transfer functions of the constituent instructions. However, precision can be improved by applying a single transfer function that captures the semantics of the block as a whole. Since blocks are program-dependent, this approach necessitates automation. There has thus been growing interest in computing transfer functions automatically, most notably using techniques based on quantifier elimination. Although conceptually elegant, quantifier elimination inevitably induces a computational bottleneck, which limits the applicability of these methods to small blocks. This paper contributes a method for calculating transfer functions that finesses quantifier elimination altogether, and can thus be seen as a response to this problem. The practicality of the method is demonstrated by generating transfer functions for input and output states that are described by linear template constraints, which include intervals and octagons. (Comment: 37 pages; extended version of an ESOP 2011 paper.)
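
    For intuition, here is a minimal sketch, in Python, of the instruction-by-instruction setting the abstract describes: each instruction gets its own interval transfer function, and the net effect of a basic block is their composition. The tiny instruction set and all names are illustrative assumptions, not the paper's quantifier-elimination-free synthesis method.

```python
# Minimal sketch (not the paper's method): interval transfer functions for
# single instructions, composed over a basic block. The instruction set and
# names are illustrative assumptions.

from typing import Dict, Tuple

Interval = Tuple[int, int]          # (low, high)
State = Dict[str, Interval]         # abstract state: variable -> interval

def tf_add_const(dst: str, src: str, c: int):
    """Transfer function for `dst = src + c` on intervals."""
    def tf(s: State) -> State:
        lo, hi = s[src]
        out = dict(s)
        out[dst] = (lo + c, hi + c)
        return out
    return tf

def tf_neg(dst: str, src: str):
    """Transfer function for `dst = -src` on intervals."""
    def tf(s: State) -> State:
        lo, hi = s[src]
        out = dict(s)
        out[dst] = (-hi, -lo)
        return out
    return tf

def compose(*tfs):
    """Net effect of a basic block = composition of its instructions' transfer functions."""
    def block_tf(s: State) -> State:
        for tf in tfs:
            s = tf(s)
        return s
    return block_tf

if __name__ == "__main__":
    block = compose(tf_add_const("y", "x", 1), tf_neg("z", "y"))
    print(block({"x": (0, 10), "y": (0, 0), "z": (0, 0)}))
    # {'x': (0, 10), 'y': (1, 11), 'z': (-11, -1)}
```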

    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the most recent event, the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant in the fully automated track of the competition, MIPS proved to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm that computes the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state, the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase, which grounds and simplifies parameterized predicates, functions, and operators, infers knowledge to minimize the state description length, and detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state-space planner, with admissible estimates, exploration engines, and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
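
    The critical-path step can be pictured with a small sketch: given a sequential plan, assumed action durations, and the precedence relations among its actions, one left-to-right pass computes earliest start times and hence a parallel, time-stamped schedule in linear time. The action names, durations, and constraints below are assumptions for illustration, not MIPS's actual code.

```python
# Rough sketch (assumptions, not MIPS's implementation): turn a sequential plan
# into a parallel, time-stamped schedule via a critical-path computation over
# the precedence relations among its actions.

from typing import Dict, List, Tuple

def schedule(plan: List[str],
             duration: Dict[str, float],
             precedes: List[Tuple[str, str]]) -> Dict[str, float]:
    """Earliest start time of each action; each pair (a, b) in `precedes`
    means a must finish before b starts and respects the sequential order."""
    start = {a: 0.0 for a in plan}
    deps: Dict[str, List[str]] = {a: [] for a in plan}
    for a, b in precedes:
        deps[b].append(a)
    # The sequential plan order is a topological order of the precedence DAG,
    # so one left-to-right pass suffices (linear in actions + constraints).
    for b in plan:
        for a in deps[b]:
            start[b] = max(start[b], start[a] + duration[a])
    return start

if __name__ == "__main__":
    plan = ["load", "drive", "refuel", "unload"]
    duration = {"load": 2.0, "drive": 5.0, "refuel": 5.0, "unload": 1.0}
    # 'refuel' is independent of 'load'/'drive' here, so it can run in parallel.
    precedes = [("load", "drive"), ("drive", "unload"), ("refuel", "unload")]
    print(schedule(plan, duration, precedes))
    # {'load': 0.0, 'drive': 2.0, 'refuel': 0.0, 'unload': 7.0}
```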

    Abstract domains for bit-level machine integer and floating-point operations

    We present a few lightweight numeric abstract domains to analyze C programs that exploit the binary representation of numbers in computers, for instance to perform "compute-through-overflow" on machine integers, or to directly manipulate the exponent and mantissa of floating-point numbers. On integers, we propose an extension of intervals with a modular component, as well as a bitfield domain. On floating-point numbers, we propose a predicate domain to match, infer, and propagate selected expression patterns. These domains are simple, efficient, and extensible. We have integrated them into the Astrée and AstréeA static analyzers to supplement existing domains. Experimental results show that they can improve the analysis precision at a reasonable cost.
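
    The bitfield domain mentioned above can be illustrated with a minimal sketch that tracks, for each bit of a machine word, whether it is known to be 0, known to be 1, or unknown. The representation and operations below are assumptions for illustration, not the implementation shipped in Astrée.

```python
# Minimal sketch (not Astrée's implementation): a bitfield abstract value
# tracks, for each bit, whether it is known to be 0, known to be 1, or unknown.
# Representation assumed here: (zeros, ones) masks over a fixed word width.

WIDTH = 8
MASK = (1 << WIDTH) - 1

class Bitfield:
    def __init__(self, zeros: int, ones: int):
        self.zeros = zeros & MASK   # bits known to be 0
        self.ones = ones & MASK     # bits known to be 1

    @staticmethod
    def const(c: int) -> "Bitfield":
        return Bitfield(~c, c)

    def __and__(self, other: "Bitfield") -> "Bitfield":
        # A result bit is 0 if it is 0 in either operand; 1 only if 1 in both.
        return Bitfield(self.zeros | other.zeros, self.ones & other.ones)

    def __or__(self, other: "Bitfield") -> "Bitfield":
        return Bitfield(self.zeros & other.zeros, self.ones | other.ones)

    def join(self, other: "Bitfield") -> "Bitfield":
        # Lattice join: keep only the bits on which both values agree.
        return Bitfield(self.zeros & other.zeros, self.ones & other.ones)

    def __repr__(self):
        bits = []
        for i in reversed(range(WIDTH)):
            b = 1 << i
            bits.append("0" if self.zeros & b else "1" if self.ones & b else "?")
        return "".join(bits)

if __name__ == "__main__":
    x = Bitfield(0, 0)                       # nothing known about x
    masked = x & Bitfield.const(0x0F)        # x & 0x0F: high nibble must be 0
    print(masked)                            # 0000????
    print(masked.join(Bitfield.const(0x13))) # joining with 0x13 loses bit 4
```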

    Inductive Program Synthesis via Iterative Forward-Backward Abstract Interpretation

    A key challenge in example-based program synthesis is the gigantic search space of programs. To address this challenge, various works have proposed using abstract interpretation to prune the search space. However, most existing approaches have focused only on forward abstract interpretation, and thus cannot fully exploit the power of abstract interpretation. In this paper, we propose a novel approach to inductive program synthesis via iterative forward-backward abstract interpretation. The forward abstract interpretation computes possible outputs of a program given inputs, while the backward abstract interpretation computes possible inputs of a program given outputs. By iteratively performing the two abstract interpretations in an alternating fashion, we can effectively determine whether any completion of each candidate partial program can satisfy the input-output examples. We apply our approach to a standard formulation, syntax-guided synthesis (SyGuS), thereby supporting a wide range of inductive synthesis tasks. We have implemented our approach and evaluated it on a set of benchmarks from prior work. The experimental results show that our approach significantly outperforms the state-of-the-art approaches thanks to its sophisticated abstract interpretation techniques.
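
    A hedged sketch of the forward-backward idea on intervals: the forward pass bounds the outputs a partial program can produce, the backward pass narrows the values a hole may take given the required output, and an empty intersection prunes the candidate. The toy operator, hole range, and examples below are illustrative assumptions, not the paper's system.

```python
# Hedged sketch (not the paper's system): forward/backward interval reasoning
# used to prune a partial program against an input-output example.
# The toy grammar, hole range, and examples are illustrative assumptions.

from typing import Optional, Tuple

Interval = Tuple[int, int]

def meet(a: Interval, b: Interval) -> Optional[Interval]:
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None   # None = bottom (infeasible)

def forward_add(x: Interval, hole: Interval) -> Interval:
    """Forward: possible outputs of `x + hole`."""
    return (x[0] + hole[0], x[1] + hole[1])

def backward_add(out: Interval, x: Interval, hole: Interval) -> Optional[Interval]:
    """Backward: values the hole may take so that `x + hole` hits `out`."""
    return meet(hole, (out[0] - x[1], out[1] - x[0]))

def feasible(x_val: int, out_val: int, hole: Interval) -> bool:
    """One forward and one backward pass; for this single operator a single
    round already reaches the fixpoint."""
    x, out = (x_val, x_val), (out_val, out_val)
    fwd = forward_add(x, hole)
    if meet(fwd, out) is None:
        return False
    return backward_add(out, x, hole) is not None

if __name__ == "__main__":
    # Partial program `x + ??` with the hole restricted to constants in [0, 5].
    print(feasible(x_val=3, out_val=10, hole=(0, 5)))   # False -> prune
    print(feasible(x_val=3, out_val=7,  hole=(0, 5)))   # True  -> keep exploring
```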

    Polymers in Fractal Disorder

    This work presents a numerical investigation of self-avoiding walks (SAWs) on percolation clusters, a canonical model for polymers in disordered media. A new algorithm has been developed that allows exact enumeration of walks of more than ten thousand steps. This is an increase of several orders of magnitude over previously existing enumeration methods, which allow barely more than forty steps. The increase is achieved by exploiting the fractal structure of critical percolation clusters: they are hierarchically organized into a tree of loosely connected nested regions in which the walk segments are enumerated separately. After the enumeration process, a region is "decimated" and subsequently behaves effectively as a single point. Since this method only works efficiently near the percolation threshold, a chain-growth Monte Carlo algorithm (PERM) has also been used. The main focus of the investigation was the asymptotic scaling behavior of the average end-to-end distance as a function of the number of steps on critical clusters in different dimensions. Thanks to the highly efficient new method, existing estimates of the scaling exponents could be improved substantially. Also investigated were the number of possible chain conformations and the average entropy, which were found to follow an unusual scaling behavior. For concentrations above the percolation threshold, the exponent describing the growth of the end-to-end distance turned out to differ from that on regular lattices, defying the prediction of the accepted theory. Finally, SAWs with short-range attractions on percolation clusters are discussed. Here, it emerged that there seems to be no temperature-driven collapse transition, as the asymptotic scaling behavior of the end-to-end distance even at zero temperature is the same as for athermal SAWs.
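
    For orientation only, a brute-force enumeration of self-avoiding walks on a site-diluted square lattice is sketched below; this naive recursion is precisely the kind of computation the hierarchical decimation scheme of the thesis avoids, and the lattice size, dilution probability, and seed are arbitrary assumptions.

```python
# Orientation only (assumptions, not the thesis's decimation algorithm):
# brute-force enumeration of self-avoiding walks of N steps on a site-diluted
# square lattice, starting from the origin.

import random

def dilute_lattice(size: int, p: float, seed: int = 0):
    """Occupy each site independently with probability p (site percolation)."""
    rng = random.Random(seed)
    return {(x, y) for x in range(-size, size + 1)
                   for y in range(-size, size + 1)
                   if rng.random() < p}

def count_saws(n_steps: int, occupied) -> int:
    """Count self-avoiding walks of n_steps starting at (0, 0)."""
    def walk(pos, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt in occupied and nxt not in visited:
                total += walk(nxt, visited | {nxt}, remaining - 1)
        return total

    if (0, 0) not in occupied:
        return 0
    return walk((0, 0), {(0, 0)}, n_steps)

if __name__ == "__main__":
    # p ~ 0.592746 is the site-percolation threshold of the square lattice.
    sites = dilute_lattice(size=12, p=0.5927, seed=42)
    for n in range(1, 9):
        print(n, count_saws(n, sites))
```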

    Programming self developing blob machines for spatial computing.
