
    Disjunctive Answer Set Solvers via Templates

    Answer set programming is a declarative programming paradigm oriented towards difficult combinatorial search problems. A fundamental task in answer set programming is to compute stable models, i.e., solutions of logic programs. Answer set solvers are the programs that perform this task. The problem of deciding whether a disjunctive program has a stable model is $\Sigma^P_2$-complete. The high complexity of reasoning within disjunctive logic programming explains why only a few solvers are capable of dealing with such programs, namely DLV, GnT, Cmodels, CLASP and WASP. In this paper we show that the transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model and analyze satisfiability solvers can be adapted for disjunctive answer set solvers. Transition systems give a unifying perspective and bring clarity to the description and comparison of solvers. They can be effectively used for analyzing, comparing and proving correctness of search algorithms, as well as inspiring new ideas in the design of disjunctive answer set solvers. In this light, we introduce a general template, which accounts for the major techniques implemented in disjunctive solvers. We then illustrate how this general template captures the solvers DLV, GnT and Cmodels. We also show how this framework provides a convenient tool for designing new solving algorithms by means of combinations of techniques employed in different solvers. Comment: To appear in Theory and Practice of Logic Programming (TPLP).
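
    As a rough illustration of the guess-and-check structure behind the $\Sigma^P_2$ membership of this decision problem (and not of any of the solvers discussed above), the following sketch enumerates candidate interpretations of a tiny, hypothetical disjunctive program and checks stability via minimality with respect to the Gelfond-Lifschitz reduct:

        from itertools import chain, combinations

        # A rule is (head, pos_body, neg_body), each a frozenset of atom names.
        # Hypothetical program:  p | q.      r :- p, not q.
        program = [
            (frozenset({"p", "q"}), frozenset(), frozenset()),
            (frozenset({"r"}), frozenset({"p"}), frozenset({"q"})),
        ]
        atoms = set().union(*(h | b | n for h, b, n in program))

        def powerset(s):
            s = sorted(s)
            return (frozenset(c) for c in
                    chain.from_iterable(combinations(s, k) for k in range(len(s) + 1)))

        def is_model(interp, rules):
            # a positive rule is satisfied if a true body implies a true head atom
            return all(not body <= interp or head & interp for head, body, _ in rules)

        def reduct(rules, interp):
            # Gelfond-Lifschitz reduct: drop rules blocked by their negative body,
            # strip negation from the remaining ones
            return [(h, b, frozenset()) for h, b, n in rules if not (n & interp)]

        def stable_models(rules):
            for cand in powerset(atoms):
                red = reduct(rules, cand)
                if not is_model(cand, red):
                    continue
                # minimality: no proper subset of cand may satisfy the reduct
                if any(is_model(sub, red) for sub in powerset(cand) if sub < cand):
                    continue
                yield set(cand)

        print(list(stable_models(program)))   # the two stable models {q} and {p, r}

    The nested minimality check is where disjunction raises the complexity: for non-disjunctive programs it amounts to a polynomial-time least-model computation, while for disjunctive programs it is itself a co-NP test.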

    Collection analysis for Horn clause programs

    We consider approximating data structures with collections of the items that they contain. For example, lists, binary trees, tuples, etc., can be approximated by sets or multisets of the items within them. Such approximations can be used to provide partial correctness properties of logic programs. For example, one might wish to specify that whenever the atom sort(t,s) is proved then the two lists t and s contain the same multiset of items (that is, s is a permutation of t). If sorting removes duplicates, then one would like to infer that the sets of items underlying t and s are the same. Such results could be useful to have if they can be determined statically and automatically. We present a scheme by which such collection analysis can be structured and automated. Central to this scheme is the use of linear logic as a computational logic underlying the logic of Horn clauses.
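
    As a small, dynamic stand-in for the static property described above (hypothetical code, not the paper's linear-logic analysis), the multiset and set abstractions of a sorting relation can be stated directly as assertions:

        from collections import Counter

        def insertion_sort(xs):
            out = []
            for x in xs:
                i = 0
                while i < len(out) and out[i] <= x:
                    i += 1
                out.insert(i, x)
            return out

        def dedup_sort(xs):
            return sorted(set(xs))

        t = [3, 1, 2, 3]
        s = insertion_sort(t)
        assert Counter(s) == Counter(t)        # same multiset: s is a permutation of t
        assert set(dedup_sort(t)) == set(t)    # duplicate-removing sort: same underlying set

    The point of the paper is that such facts can be established once and for all from the program text, rather than tested at run time as here.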

    Constraint-based reachability

    Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. We propose an approach, called constraint-based reachability, which addresses reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructs such as conditionals, loops, and array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of the classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches. Comment: In Proceedings Infinity 2012, arXiv:1302.310
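
    A minimal sketch of the idea of asking reachability questions against a constraint model of a program, assuming the Z3 solver as a stand-in for the constraint machinery developed in the paper (the loop is summarized by hand into a closed form, which the paper's interpretation of loops as constraints avoids):

        from z3 import Int, Solver, sat

        n, x0, xf = Int("n"), Int("x0"), Int("xf")
        s = Solver()
        # program:  x := 0;  repeat n times:  x := x + 2
        s.add(n >= 0, x0 == 0, xf == x0 + 2 * n)

        s.push()
        s.add(xf == 7)                                  # is a final state with x == 7 reachable?
        print("x == 7 reachable:", s.check() == sat)    # False: 7 is odd
        s.pop()

        s.add(xf == 8)
        if s.check() == sat:
            print("x == 8 reachable with n =", s.model()[n])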

    Towards Static Analysis of Functional Programs using Tree Automata Completion

    This paper presents the first step of a wider research effort to apply tree automata completion to the static analysis of functional programs. Tree automata completion is a family of techniques for computing or approximating the set of terms reachable by a rewriting relation. The completion algorithm we focus on is parameterized by a set E of equations controlling the precision of the approximation and influencing its termination. For completion to be used as a static analysis, the first step is to guarantee its termination. In this work, we thus give a sufficient condition on E and T(F) for the completion algorithm to always terminate. In the particular setting of functional programs, this condition can be relaxed into a condition on E and T(C) (terms built on the set of constructors) that is closer to what is done in the field of static analysis, where abstractions are performed on data. Comment: Proceedings of WRLA'14. 201
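
    The object being approximated is easy to state in a drastically simplified setting (hypothetical code, using string rewriting and exhaustive search instead of term rewriting and automata): the set of all objects reachable from an initial set under a rewriting relation. Completion computes or over-approximates such sets symbolically, with the equations E trading precision for termination.

        rules = [("ab", "ba")]        # hypothetical rewrite system
        initial = {"aabb"}

        def step(word):
            # all single-step rewrites of word, at every position
            for lhs, rhs in rules:
                start = word.find(lhs)
                while start != -1:
                    yield word[:start] + rhs + word[start + len(lhs):]
                    start = word.find(lhs, start + 1)

        reachable, frontier = set(initial), set(initial)
        while frontier:
            new = {w2 for w in frontier for w2 in step(w)} - reachable
            reachable |= new
            frontier = new

        print(sorted(reachable))      # all six arrangements of two a's and two b's
                                      # are reachable from aabb by rewriting ab to ba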

    On Decidable Growth-Rate Properties of Imperative Programs

    In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple "core" programming language - an imperative language with bounded loops, and arithmetic limited to addition and multiplication - it was possible to decide precisely whether a program had certain growth-rate properties, namely polynomial (or linear) bounds on computed values, or on the running time. This work emphasized the role of the core language in mitigating the notorious undecidability of program properties, so that one deals with decidable problems. A natural and intriguing question was whether more elements can be added to the core language, improving its utility, while keeping the growth-rate properties decidable. In particular, the method presented could not handle a command that resets a variable to zero. This paper shows how to handle resets. The analysis is given in a logical style (proof rules), and its complexity is shown to be PSPACE-complete (in contrast, without resets, the problem was PTIME). The analysis algorithm evolved from the previous solution in an interesting way: the focus shifted from proving a bound to disproving it, and the algorithm works top-down rather than bottom-up.
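
    To make the decision problem concrete, here is a hypothetical rendering of two "core language" programs (bounded loops, addition and multiplication only) in Python; the question the paper keeps decidable, now with resets, is whether the computed values admit a polynomial bound in the inputs:

        def additive(n, x):
            y = 0
            for _ in range(n):    # bounded loop, runs exactly n times
                y = y + x         # y <= n * x: polynomially bounded
            return y

        def multiplicative(n, x):
            y = x
            for _ in range(n):
                y = y * y         # y = x ** (2 ** n): not polynomially bounded
            return y

        print(additive(5, 3), multiplicative(3, 2))   # 15 and 256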

    An algorithmic approach to the existence of ideal objects in commutative algebra

    The existence of ideal objects, such as maximal ideals in nonzero rings, plays a crucial role in commutative algebra. These are typically justified using Zorn's lemma, and thus pose a challenge from a computational point of view. Giving a constructive meaning to ideal objects is a problem which dates back to Hilbert's program, and today is still a central theme in the area of dynamical algebra, which focuses on the elimination of ideal objects via syntactic methods. In this paper, we take an alternative approach based on Kreisel's no-counterexample interpretation and sequential algorithms. We first give a computational interpretation to an abstract maximality principle in the countable setting via an intuitive, state-based algorithm. We then carry out a concrete case study, in which we give an algorithmic account of the result that in any commutative ring, the intersection of all prime ideals is contained in its nilradical.
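
    The case study's conclusion can at least be sanity-checked by brute force in a finite quotient of the integers (hypothetical code, entirely unrelated to the paper's state-based, constructive machinery): in Z/nZ the intersection of all prime ideals is contained in, and in fact equals, the nilradical.

        def nilradical(n):
            # elements x with x^k = 0 in Z/nZ for some k
            return {x for x in range(n) if any(pow(x, k, n) == 0 for k in range(1, n + 1))}

        def prime_ideals(n):
            # the prime ideals of Z/nZ are (p) for the primes p dividing n
            primes = [p for p in range(2, n + 1)
                      if n % p == 0 and all(p % q for q in range(2, p))]
            return [{x for x in range(n) if x % p == 0} for p in primes]

        for n in range(2, 60):
            inter = set(range(n))
            for ideal in prime_ideals(n):
                inter &= ideal
            assert inter == nilradical(n), n
        print("intersection of prime ideals == nilradical for all tested n")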

    Finitary reducibility on equivalence relations

    We introduce the notion of finitary computable reducibility on equivalence relations on the natural numbers. This is a weakening of the usual notion of computable reducibility, and we show it to be distinct in several ways. In particular, whereas no equivalence relation can be $\Pi_{n+2}$-complete under computable reducibility, we show that, for every n, there does exist a natural equivalence relation which is $\Pi_{n+2}$-complete under finitary reducibility. We also show that our hierarchy of finitary reducibilities does not collapse, and illustrate how it sharpens certain known results. Along the way, we present several new results which use computable reducibility to establish the complexity of various naturally defined equivalence relations in the arithmetical hierarchy.
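
    For contrast with the finitary notion, here is a toy instance of ordinary computable reducibility between equivalence relations on the natural numbers (a hypothetical example, not taken from the paper): congruence mod 2 reduces to congruence mod 4 via f(n) = 2n, since n and m agree mod 2 exactly when 2n and 2m agree mod 4. Finitary reducibility, as the abstract indicates, weakens this by only requiring such reductions on finitely many arguments at a time.

        E = lambda n, m: n % 2 == m % 2      # equivalence relation E: congruence mod 2
        F = lambda n, m: n % 4 == m % 4      # equivalence relation F: congruence mod 4
        f = lambda n: 2 * n                  # computable reduction witnessing E reduces to F

        # n E m  iff  f(n) F f(m), checked here on an initial segment
        assert all(E(n, m) == F(f(n), f(m)) for n in range(50) for m in range(50))
        print("f(n) = 2n reduces congruence mod 2 to congruence mod 4")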