
    Unfolding-based Partial Order Reduction

    Partial order reduction (POR) and net unfoldings are two alternative methods for tackling the state-space explosion caused by concurrency. In this paper, we propose combining the two approaches in an effort to exploit their complementary strengths. We first define, for an abstract execution model, an unfolding semantics parameterized over an arbitrary independence relation. Building on it, our main contribution is a novel stateless POR algorithm that explores at most one execution per Mazurkiewicz trace and, in general, can explore exponentially fewer, thus achieving a form of super-optimality. Furthermore, our unfolding-based POR copes with non-terminating executions and incorporates state caching. Over benchmarks with busy-waits, among others, our experiments show a dramatic reduction in the number of executions when compared to a state-of-the-art DPOR.
    Comment: Long version of a paper with the same title that appeared in the proceedings of CONCUR 201
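
    As a toy illustration of the reduction at stake (a sketch of ours, not the paper's algorithm; all names below are hypothetical), the following Python snippet enumerates every interleaving of two thread action sequences and groups them into Mazurkiewicz traces by closing each run under swaps of adjacent independent actions. A trace-optimal explorer visits one execution per group and skips the rest.

        # Toy actions: (thread, variable). Two actions are independent iff
        # they run on different threads and touch different variables.
        T1 = [("t1", "x"), ("t1", "z")]
        T2 = [("t2", "y"), ("t2", "z")]

        def independent(a, b):
            return a[0] != b[0] and a[1] != b[1]

        def interleavings(u, v):
            if not u:
                yield tuple(v)
                return
            if not v:
                yield tuple(u)
                return
            for rest in interleavings(u[1:], v):
                yield (u[0],) + rest
            for rest in interleavings(u, v[1:]):
                yield (v[0],) + rest

        def trace_of(run):
            # Close the run under swaps of adjacent independent actions
            # (Mazurkiewicz equivalence); the class minimum is a canonical
            # representative of the trace.
            seen, frontier = {run}, [run]
            while frontier:
                w = frontier.pop()
                for i in range(len(w) - 1):
                    if independent(w[i], w[i + 1]):
                        s = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                        if s not in seen:
                            seen.add(s)
                            frontier.append(s)
            return min(seen)

        runs = set(interleavings(T1, T2))
        traces = {trace_of(r) for r in runs}
        print(len(runs), "interleavings but only", len(traces), "traces")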

    Abstract Interpretation with Unfoldings

    We present and evaluate a technique for computing path-sensitive interference conditions during abstract interpretation of concurrent programs. In lieu of a fixed-point computation, we use prime event structures to compactly represent causal dependence and interference between sequences of transformers. Our main contribution is an unfolding algorithm that uses a new notion of independence to avoid redundant transformer application, thread-local fixed points to reduce the size of the unfolding, and a novel cutoff criterion based on subsumption to guarantee termination of the analysis. Our experiments show that the abstract unfolding produces an order of magnitude fewer false alarms than a mature abstract interpreter, while being several orders of magnitude faster than solver-based tools of the same precision.
    Comment: Extended version of the paper (with the same title and authors) to appear at CAV 201
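
    The subsumption cutoff can be sketched in a plain interval domain (a deliberately simplified stand-in for the paper's prime-event-structure construction; the transformers and bounds below are invented for the demo). Exploration prunes any abstract state already covered by a visited one, which is what forces termination here:

        # Interval abstract states map each variable to (lo, hi). A new
        # state adds nothing if every one of its intervals is contained in
        # the matching interval of some already-analysed state.
        def subsumed(s1, s2):
            return all(s2[v][0] <= s1[v][0] and s1[v][1] <= s2[v][1]
                       for v in s1)

        def explore(state, transformers, visited):
            # Cutoff by subsumption: prune states a visited one covers.
            if any(subsumed(state, old) for old in visited):
                return
            visited.append(state)
            for t in transformers:
                explore(t(state), transformers, visited)

        # Two saturating transformers over one variable x, with bounds
        # clipped to [0, 3] so the abstract state space is finite.
        inc = lambda s: {"x": (min(s["x"][0] + 1, 3), min(s["x"][1] + 1, 3))}
        dbl = lambda s: {"x": (s["x"][0], min(2 * s["x"][1], 3))}

        visited = []
        explore({"x": (0, 0)}, [inc, dbl], visited)
        print(len(visited), "abstract states explored before cutoffs fired")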

    Symbolic Partial-Order Execution for Testing Multi-Threaded Programs

    We describe a technique for systematic testing of multi-threaded programs. We combine Quasi-Optimal Partial-Order Reduction, a state-of-the-art technique that tackles path explosion due to interleaving non-determinism, with symbolic execution to handle data non-determinism. Our technique iteratively and exhaustively finds all executions of the program. It represents program executions using partial orders and finds the next execution using an underlying unfolding semantics. We avoid the exploration of redundant program traces using cutoff events. We implemented our technique as an extension of KLEE and evaluated it on a set of large multi-threaded C programs. Our experiments found several previously undiscovered bugs and undefined behaviors in memcached and GNU sort, showing that the new method is capable of finding bugs in industrial-size benchmarks.
    Comment: Extended version of a paper presented at CAV'2
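
    The "one test per execution" idea can be illustrated with a toy instrumented program (our own sketch: brute-force input search stands in for symbolic constraint solving, and thread interleavings are omitted). Each distinct branch trace keeps exactly one representative input:

        # Program under test, instrumented so every branch taken is recorded.
        def program(x, trace):
            def branch(cond, label):
                trace.append((label, cond))
                return cond
            if branch(x > 10, "x > 10"):
                if branch(x % 2 == 0, "x % 2 == 0"):
                    return "big even"
                return "big odd"
            return "small"

        # Keep one representative input per distinct branch trace: a
        # minimal suite with full path coverage of this toy program.
        suite = {}
        for x in range(-50, 51):
            trace = []
            result = program(x, trace)
            suite.setdefault(tuple(trace), (x, result))

        for path, (x, res) in suite.items():
            print(f"x = {x:>3} -> {res:<8} via {path}")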

    Transformation-Based Bottom-Up Computation of the Well-Founded Model

    We present a framework for expressing bottom-up algorithms to compute the well-founded model of non-disjunctive logic programs. Our method is based on the notion of conditional facts and the elementary program transformations studied by Brass and Dix for disjunctive programs. However, even if we restrict their framework to non-disjunctive programs, their residual program can grow to exponential size, whereas for function-free programs our program remainder is always polynomial in the size of the extensional database (EDB). We show that particular orderings of our transformations (which we call strategies) correspond to well-known computational methods such as the alternating fixpoint approach, the well-founded magic sets method, and the magic alternating fixpoint procedure. However, thanks to the confluence of our calculi, we arrive at computations of the well-founded model that are provably better than these methods. In contrast to other approaches, our transformation method treats magic-set-transformed programs correctly, i.e., it always computes a relevant part of the well-founded model of the original program.
    Comment: 43 pages, 3 figure
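
    One of the strategies mentioned above, the alternating fixpoint, fits in a few lines for propositional programs. The sketch below is ours (the rule encoding and example program are invented) and computes the three-valued well-founded model by alternating an operator F that evaluates negative literals against the previous iterate:

        # Propositional normal program, (head, positive_body, negative_body);
        # the rules are chosen to exercise all three truth values.
        RULES = [
            ("p", [], ["q"]),     # p :- not q.
            ("q", [], ["p"]),     # q :- not p.
            ("r", [], []),        # r.
            ("s", ["r"], ["t"]),  # s :- r, not t.
        ]
        ATOMS = {h for h, _, _ in RULES} | \
                {a for _, pos, neg in RULES for a in pos + neg}

        def F(I):
            # Least model of the program, reading "not a" as "a not in I".
            true, changed = set(), True
            while changed:
                changed = False
                for head, pos, neg in RULES:
                    if head not in true and set(pos) <= true \
                            and not (set(neg) & I):
                        true.add(head)
                        changed = True
            return true

        # Alternate F until the under-approximation of the true atoms is
        # stable; F(lo) then over-approximates them, and whatever it
        # misses is surely false.
        lo = set()
        while True:
            hi = F(lo)
            if F(hi) == lo:
                break
            lo = F(hi)
        print("true:     ", sorted(lo))          # ['r', 's']
        print("false:    ", sorted(ATOMS - hi))  # ['t']
        print("undefined:", sorted(hi - lo))     # ['p', 'q']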

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus to Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We resolve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e., the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
    Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
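
    The size explosion problem is easy to witness concretely. The following sketch (our own encoding of a standard exploding family, not the paper's proof apparatus) builds terms of size O(n) that a leftmost-outermost (normal order) evaluator normalizes in n+1 beta steps, producing a normal form of size 2^(n+2) - 3:

        # Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg).
        def size(t):
            # Number of constructors in a term.
            if t[0] == "var":
                return 1
            if t[0] == "lam":
                return 1 + size(t[2])
            return 1 + size(t[1]) + size(t[2])

        def subst(t, x, s):
            # Naive substitution; capture-free for this family because
            # every binder is named "x" and the argument mentions only
            # the free variables y and z.
            if t[0] == "var":
                return s if t[1] == x else t
            if t[0] == "lam":
                return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))
            return ("app", subst(t[1], x, s), subst(t[2], x, s))

        def lo_step(t):
            # One leftmost-outermost beta step, or None if t is normal.
            if t[0] == "app":
                f, a = t[1], t[2]
                if f[0] == "lam":
                    return subst(f[2], f[1], a)
                r = lo_step(f)
                if r is not None:
                    return ("app", r, a)
                r = lo_step(a)
                return None if r is None else ("app", f, r)
            if t[0] == "lam":
                r = lo_step(t[2])
                return None if r is None else ("lam", t[1], r)
            return None

        def pi(n):
            # pi(0) = \x. x,  pi(n) = \x. pi(n-1) ((y x) x); size O(n).
            # Each beta step duplicates the whole current argument.
            t = ("lam", "x", ("var", "x"))
            y, x = ("var", "y"), ("var", "x")
            for _ in range(n):
                t = ("lam", "x", ("app", t, ("app", ("app", y, x), x)))
            return t

        n, steps = 10, 0
        t = ("app", pi(n), ("var", "z"))
        while (r := lo_step(t)) is not None:
            t, steps = r, steps + 1
        print(f"n={n}: {steps} steps, normal form of size {size(t)}")
        # n=10: 11 steps, normal form of size 4093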

    A General Theory of Sharing Graphs

    Sharing graphs are the structures introduced by Lamping to implement optimal reductions of the lambda calculus. Gonthier’s reformulation of Lamping’s technique inside Geometry of Interaction, and Asperti and Laneve’s work on Interaction Systems, have shown that sharing graphs can be used to implement a wide class of calculi. Here, we give a general characterization of sharing graphs independent of the calculus to be implemented. Such a characterization rests on an algebraic semantics of sharing graphs exploiting the methods of Geometry of Interaction. Through this semantics we can define an unfolding partial order between proper sharing graphs, whose minimal elements are unshared graphs. The least-shared-instance of a sharing graph is the unique unshared graph that the unfolding partial order associates with it. The algebraic semantics allows us to prove that a semantical read-back can be associated with each unshared graph and that such a read-back can be computed via suitable read-back reductions. The result is then lifted to sharing graphs by proving that any read-back (or unfolding) reduction of them can be simulated on their least-shared-instances. The sharing graphs defined in this way make it possible to implement, in a distributed and local way, any calculus with a global reduction rule in the style of the beta rule of the lambda calculus. Here too, the proof technique is to show that sharing reductions can be simulated on least-shared-instances. The result is proved under the sole assumption that the structures of the calculus have a box-nesting property, that is, that two beta redexes cannot partially overlap. As a result, we get a sharing graph machine that seems to be the most natural low-level computational model for functional languages. The paper concludes by showing that optimality is a by-product of this technique: optimal reductions are lazy reductions of the sharing graph machine. We stress the proof strategy followed in the paper: it is based on an amazing interplay between standard rewriting-system properties (strong normalization, confluence, and unique normal form) and algebraic properties definable via the techniques of Geometry of Interaction.
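
    The gap between a sharing graph and its least-shared-instance can be made concrete with a toy DAG representation (hash-consing-style sharing; this illustrates only the sharing/unfolding size gap and none of Lamping's machinery; all names below are ours):

        import functools

        # A node table plus index-sharing plays the role of a sharing
        # graph; fully unfolding an index recovers its least-shared
        # instance, which is a tree.
        table = [("leaf",)]

        def app(left, right):
            table.append(("app", left, right))
            return len(table) - 1

        t = 0
        for _ in range(20):
            t = app(t, t)  # t_n = app(t_{n-1}, t_{n-1}), child shared twice

        @functools.lru_cache(maxsize=None)
        def unfolded_size(i):
            node = table[i]
            if node[0] == "leaf":
                return 1
            return 1 + unfolded_size(node[1]) + unfolded_size(node[2])

        print(len(table), "shared nodes unfold to", unfolded_size(t),
              "tree nodes")
        # 21 shared nodes unfold to 2097151 tree nodes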

    Test Data Generation of Bytecode by CLP Partial Evaluation

    We employ existing partial evaluation (PE) techniques developed for Constraint Logic Programming (CLP) in order to automatically generate test-case generators for glass-box testing of bytecode. Our approach consists of two independent CLP PE phases. (1) First, the bytecode is transformed into an equivalent (decompiled) CLP program. This is already a well-studied transformation, which can be done either by using an ad-hoc decompiler or by specialising a bytecode interpreter by means of existing PE techniques. (2) A second PE is performed in order to supervise the generation of test-cases by executing the decompiled CLP program. Interestingly, we employ control strategies previously defined in the context of CLP PE in order to capture coverage criteria for glass-box testing of bytecode. A unique feature of our approach is that this second PE phase allows generating not only test-cases but also test-case generators. To the best of our knowledge, this is the first time that (CLP) PE techniques have been applied to test-case generation, as well as to the generation of test-case generators.
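
    The distinction between test-cases and test-case generators can be sketched with Python generators (our own analogy: predicates stand in for CLP constraints, brute-force search stands in for constraint solving, and the program paths are invented):

        # A path-constraint view of a (decompiled) program: each path is a
        # list of predicates its inputs must satisfy.
        PATHS = [
            [lambda n: n < 0],                     # error-handling path
            [lambda n: 0 <= n, lambda n: n < 10],  # loop not entered
            [lambda n: n >= 10],                   # loop entered
        ]

        def test_case_generator(domain=range(-100, 101)):
            # Lazily yield one witness per path: the caller receives a
            # *generator* of test cases, not merely a fixed suite.
            for constraints in PATHS:
                for n in domain:
                    if all(c(n) for c in constraints):
                        yield n
                        break

        print(list(test_case_generator()))  # [-100, 0, 10]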