    The Complexity of Planning Revisited - A Parameterized Analysis

    Full text link
    The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Bäckström and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, this time using the more modern tool of parameterized complexity analysis. This provides new results that, together with the old ones, give a more detailed picture of the complexity landscape. We demonstrate separation results that are not possible with standard complexity theory, which helps explain why certain cases of planning have seemed simpler in practice than theory predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic suffices to make a well-known partial-order planner exploit this fact.
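    For reference, the parameterized notions behind such separations can be summarised as follows; these are the standard definitions, not results specific to this paper, and the choice of parameter for planning is left abstract here:

```latex
% A parameterized problem with input size n and parameter k is
% fixed-parameter tractable (FPT) if it is solvable in time
\[
  f(k)\cdot n^{O(1)}
\]
% for some computable function f, confining the super-polynomial
% blow-up to the parameter k. Parameterized analysis can separate
% such cases from problems solvable only in time n^{f(k)} (the
% class XP), or from W[1]-hard problems; these distinctions are
% invisible to classical NP- or PSPACE-completeness results.
```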

    A Framework to Synergize Partial Order Reduction with State Interpolation

    Full text link
    We address the problem of reasoning about interleavings in safety verification of concurrent programs. In the literature, there are two prominent techniques for pruning the search space. First, there are well-investigated trace-based methods, collectively known as "Partial Order Reduction (POR)", which operate by weakening the concept of a trace, abstracting the total order of its transitions into a partial order. Second, there is state-based interpolation, where a collection of formulas can be generalized by taking into account the property to be verified. Our main contribution is a framework that synergistically combines POR with state interpolation so that the whole is more than the sum of its parts.
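    As a toy illustration of the independence that POR exploits (a hypothetical example, not taken from the paper): the two threads below touch disjoint variables, so their transitions commute and both interleavings reach the same state; a POR-based search therefore needs to explore only one representative trace.

```c
/* Two threads with independent (commuting) transitions. */
#include <pthread.h>
#include <assert.h>

int x = 0, y = 0;

void *t1(void *arg) { (void)arg; x = 1; return 0; }  /* touches only x */
void *t2(void *arg) { (void)arg; y = 1; return 0; }  /* touches only y */

int main(void) {
    pthread_t a, b;
    pthread_create(&a, 0, t1, 0);
    pthread_create(&b, 0, t2, 0);
    pthread_join(a, 0);
    pthread_join(b, 0);
    /* Both orders t1;t2 and t2;t1 yield the same final state, so
     * checking this assertion on one representative suffices. */
    assert(x == 1 && y == 1);
    return 0;
}
```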

    Visibly Linear Dynamic Logic

    Get PDF
    We introduce Visibly Linear Dynamic Logic (VLDL), which extends Linear Temporal Logic (LTL) with temporal operators that are guarded by visibly pushdown languages over finite words. In VLDL one can, e.g., express that a function resets a variable to its original value after its execution, even in the presence of an unbounded number of intermediate recursive calls. We prove that VLDL describes exactly the ω-visibly pushdown languages. Thus it is strictly more expressive than LTL and able to express recursive properties of programs with unbounded call stacks. The main technical contribution of this work is a translation of VLDL into ω-visibly pushdown automata of exponential size via one-way alternating jumping automata. This translation yields exponential-time algorithms for satisfiability, validity, and model checking. We also show that visibly pushdown games with VLDL winning conditions are solvable in triply-exponential time. We prove all these problems to be complete for their respective complexity classes.
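    A schematic rendering of the reset property from the abstract (notation simplified; in VLDL the guard would be given concretely, e.g. by a visibly pushdown automaton over finite words): take L_f to be the visibly pushdown language of words in which the initial call to f is matched by its return.

```latex
\[
  \mathsf{G}\,\bigl(\mathit{call}_f \rightarrow
    \langle L_f \rangle\,(v = v_0)\bigr)
\]
% Read: globally, every call to f is followed, at the position of
% its matching return, by v holding its initial value v_0, however
% many recursive calls occur in between. The matching of calls and
% returns is exactly what a regular guard (as in plain LDL) cannot
% express and a visibly pushdown guard can.
```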

    Verifying multi-threaded software using SMT-based context-bounded model checking

    No full text
    We describe and evaluate three approaches to model checking multi-threaded software with shared variables and locks, using bounded model checking based on Satisfiability Modulo Theories (SMT) and our modelling of the synchronization primitives of the Pthread library. In the lazy approach, we generate all possible interleavings and call the SMT solver on each of them individually, until we either find a bug or have systematically explored all interleavings. In the schedule recording approach, we encode all possible interleavings into one single formula and then exploit the high speed of the SMT solvers. In the underapproximation and widening approach, we reduce the state space by abstracting the number of interleavings from the proofs of unsatisfiability generated by the SMT solvers. In all three approaches, we bound the number of context switches allowed among threads in order to reduce the number of interleavings explored. We implemented these approaches in ESBMC, our SMT-based bounded model checker for ANSI-C programs. Our experiments show that ESBMC can analyze larger problems and substantially reduce the verification time compared to state-of-the-art techniques that use iterative context-bounding algorithms or counterexample-guided abstraction refinement.
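    A minimal Pthreads program of the kind these approaches target (a hypothetical example, not from the paper): a context-bounded model checker explores the interleavings of the two threads up to the context-switch bound and discharges the assertion after both joins.

```c
/* Two threads increment a shared counter under a mutex. */
#include <pthread.h>
#include <assert.h>

int counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *inc(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    counter++;                 /* protected read-modify-write */
    pthread_mutex_unlock(&m);
    return 0;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, 0, inc, 0);
    pthread_create(&b, 0, inc, 0);
    pthread_join(a, 0);
    pthread_join(b, 0);
    /* Holds with the lock; remove the lock and a schedule with two
     * context switches interleaves the increments and violates it. */
    assert(counter == 2);
    return 0;
}
```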

    Bounded Quantifier Instantiation for Checking Inductive Invariants

    Full text link
    We consider the problem of checking whether a proposed invariant φ expressed in first-order logic with quantifier alternation is inductive, i.e., preserved by a piece of code. While the problem is undecidable, modern SMT solvers can sometimes solve it automatically. However, they employ powerful quantifier instantiation methods that may diverge, especially when φ is not preserved. A notable difficulty arises due to counterexamples of infinite size. This paper studies Bounded-Horizon instantiation, a natural method for guaranteeing the termination of SMT solvers. The method bounds the depth of terms used in the quantifier instantiation process. We show that this method is surprisingly powerful for checking quantified invariants in uninterpreted domains. Furthermore, by producing partial models it can help the user diagnose the case when φ is not inductive, especially when the underlying reason is the existence of infinite counterexamples. Our main technical result is that Bounded-Horizon is at least as powerful as instrumentation, a manual method that guarantees convergence of the solver by modifying the program so that it admits a purely universal invariant. We show that with a bound of 1 we can simulate a natural class of instrumentations, without the need to modify the code and in a fully automatic way. We also report on a prototype implementation on top of Z3, which we used to verify several examples by Bounded-Horizon of bound 1.
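    Schematically, the setup looks as follows (a simplified rendering of the standard inductiveness check, not the paper's exact definitions):

```latex
% \varphi is inductive for a transition relation \tau iff
\[
  \varphi \wedge \tau \wedge \neg\varphi' \quad \text{is unsatisfiable,}
\]
% where \varphi' is \varphi evaluated over the post-state.
% Bounded-Horizon of bound k restricts instantiation of the
% universal quantifiers to terms of depth at most k, e.g.
\[
  \forall x.\,\psi(x) \;\leadsto\;
  \bigwedge_{t \in \mathrm{Terms}_{\leq k}} \psi(t).
\]
% The instantiated formula is implied by the original, so its
% unsatisfiability still certifies inductiveness; a satisfiable
% bounded check yields the partial models used for diagnosis.
```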

    Synthesis of sup-interpretations: a survey

    Get PDF
    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space-analysis tool that does not consider time consumption. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and show that a sup-interpretation can be obtained through a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem, we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations in turn suffer from a drawback that lies in the subterm property, which drastically restricts the shape of the considered functions. Again we overcome this problem by introducing a new notion of interpretation based mainly on the dependency pair method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
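    As a small illustration in the style of such interpretations (a textbook-style example, not taken from the survey), consider the term rewriting system computing unary doubling:

```latex
\[
  \mathit{double}(0) \to 0, \qquad
  \mathit{double}(s(x)) \to s(s(\mathit{double}(x))).
\]
% The assignment \theta(0) = 1, \theta(s)(X) = X + 1, and
% \theta(\mathit{double})(X) = 2X is a sup-interpretation: it is
% weakly monotonic and bounds the size of every computed value,
% since the input s^n(0) has size n + 1 and the normal form
% s^{2n}(0) has size 2n + 1 \leq 2(n + 1) = \theta(\mathit{double})(n + 1).
```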