    Symmetry-breaking Answer Set Solving

    In the context of Answer Set Programming, this paper investigates symmetry breaking to eliminate symmetric parts of the search space and, thereby, simplify the solution process. We propose a reduction of disjunctive logic programs to a coloured digraph such that permutational symmetries can be constructed from graph automorphisms. Symmetries are then broken by introducing symmetry-breaking constraints. For this purpose, we formulate a preprocessor that integrates a graph automorphism system. Experiments demonstrate its computational impact.
    Comment: Proceedings of the ICLP'10 Workshop on Answer Set Programming and Other Computing Paradigms
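    A minimal Python sketch of the reduction idea described above, under simplifying assumptions: rules and atoms of a toy ground program form a labelled structure (edge labels standing in for the paper's vertex colours), and any atom permutation that preserves it is a candidate program symmetry. The program and the brute-force search are illustrative only; the paper's preprocessor delegates this step to a graph automorphism system.

        # Toy ground disjunctive program: { a ; b.   c :- a.   c :- b. }
        from itertools import permutations

        atoms = ["a", "b", "c"]
        rules = [
            {"head": {"a", "b"}, "pos": set(), "neg": set()},   # a ; b.
            {"head": {"c"}, "pos": {"a"}, "neg": set()},        # c :- a.
            {"head": {"c"}, "pos": {"b"}, "neg": set()},        # c :- b.
        ]

        def edges(perm):
            """Labelled edges of the program graph after renaming atoms via perm."""
            e = set()
            for i, r in enumerate(rules):
                for label in ("head", "pos", "neg"):
                    for a in r[label]:
                        e.add((label, i, perm[a]))
            return e

        def canonical(e):
            """Per-rule edge signatures, invariant under relabelling of rule vertices."""
            return sorted(tuple(sorted((lbl, atom) for (lbl, j, atom) in e if j == i))
                          for i in range(len(rules)))

        identity = {a: a for a in atoms}
        base = canonical(edges(identity))

        for p in permutations(atoms):
            perm = dict(zip(atoms, p))
            if perm != identity and canonical(edges(perm)) == base:
                print("symmetry:", perm)   # expects the swap a <-> b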

    Symmetry Breaking for Answer Set Programming

    In the context of answer set programming, this work investigates symmetry detection and symmetry breaking to eliminate symmetric parts of the search space and, thereby, simplify the solution process. We contribute a reduction of symmetry detection to a graph automorphism problem, which allows us to extract the symmetries of a logic program from the symmetries of the constructed coloured graph. We also propose an encoding of symmetry-breaking constraints in terms of permutation cycles, using only generators in this process, which implicitly represent symmetries, always with exponential compression. These ideas are formulated as preprocessing and implemented in a completely automated flow that first detects symmetries in a given answer set program, then adds symmetry-breaking constraints, and can be applied to any existing answer set solver. We demonstrate the computational impact on benchmarks versus direct application of the solver. Furthermore, we explore symmetry breaking for answer set programming in two domains: first, constraint answer set programming as a novel approach to representing and solving constraint satisfaction problems, and second, distributed nonmonotonic multi-context systems. In particular, we formulate a translation-based approach to constraint answer set solving which allows for the application of our symmetry detection and symmetry-breaking methods. To compare their performance with a priori symmetry-breaking techniques, we also contribute a decomposition of the global value precedence constraint that enforces domain consistency on the original constraint via the unit propagation of an answer set solver. We evaluate both options in an empirical analysis. In the context of distributed nonmonotonic multi-context systems, we develop an algorithm for distributed symmetry detection and also carry over symmetry-breaking constraints to distributed answer set programming.
    Comment: Diploma thesis, Vienna University of Technology, August 201
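    As a hedged illustration of the cycle-based encoding mentioned above, the sketch below generates lexicographic symmetry-breaking rules (as ASP text) from a single generator given by its cycles. The eq/1 chain and the rule shapes are illustrative assumptions, not the thesis' exact encoding.

        def lex_leader_rules(cycles):
            """cycles: a generator as a list of cycles over atom names, e.g. [("a", "b")]."""
            # atoms in the generator's support, each paired with its image
            pairs = []
            for cyc in cycles:
                for i, atom in enumerate(cyc):
                    pairs.append((atom, cyc[(i + 1) % len(cyc)]))
            rules = ["eq(0)."]
            for k, (x, y) in enumerate(pairs, start=1):
                # if all earlier positions agree, x must not exceed its image y
                rules.append(f":- eq({k - 1}), {x}, not {y}.")
                # propagate agreement at position k along the chain
                rules.append(f"eq({k}) :- eq({k - 1}), {x}, {y}.")
                rules.append(f"eq({k}) :- eq({k - 1}), not {x}, not {y}.")
            return rules

        for rule in lex_leader_rules([("a", "b")]):
            print(rule)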

    Effectively Solving NP-SPEC Encodings by Translation to ASP

    NP-SPEC is a language for specifying problems in NP in a declarative way. Although the semantics of the language was given by referring to Datalog with circumscription, which is very close to ASP, so far the only existing implementations are by means of ECLiPSe Prolog and via Boolean satisfiability solvers. In this paper, we present translations from NP-SPEC into ASP and provide an experimental evaluation of the existing implementations and the proposed translations to ASP using various ASP solvers. The results show that translating to ASP clearly has an edge over the existing translation into SAT, which involves an intrinsic grounding process. We also argue that it might be useful to incorporate certain language constructs of NP-SPEC into mainstream ASP.
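    The following sketch illustrates the flavour of such a translation on two hypothetical NP-SPEC-style guesses, rendered as ASP choice rules and constraints. The construct names and the concrete rendering are assumptions for illustration, not the paper's encoding.

        def translate_subset(dom, name):
            # guess an arbitrary subset of the domain
            return [f"{{ {name}(X) : {dom}(X) }}."]

        def translate_permutation(dom, name):
            # guess a bijection from the domain to itself
            return [
                f"1 {{ {name}(X, Y) : {dom}(Y) }} 1 :- {dom}(X).",  # total function
                f":- {name}(X, Y), {name}(X2, Y), X < X2.",          # injective
            ]

        print("\n".join(translate_subset("node", "insubset")))
        print("\n".join(translate_permutation("position", "perm")))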

    A Multi-Engine Approach to Answer Set Programming

    Answer Set Programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming that has recently been employed in many applications. The development of efficient ASP systems is, thus, crucial. Having in mind the task of improving the solving methods for ASP, there are two usual ways to reach this goal: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and to apply machine learning techniques for automatically choosing the "best" available solver on a per-instance basis. In this paper we pursue this latter direction. We first define a set of cheap-to-compute syntactic features that characterize several aspects of ASP programs. Then, we apply classification methods that, given the features of the instances in a training set and the solvers' performance on these instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering solvers and different training and test sets of instances taken from the ones submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows that, by applying machine learning techniques to ASP solving, it is possible to obtain very robust performance: our approach can solve more instances compared with any solver that entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic Programming (TPLP).)
    Comment: 26 pages, 8 figures
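    A self-contained sketch of the per-instance selection idea: syntactic features of a program plus the best solver on nearby training instances drive the choice for an unseen instance, here with a hand-rolled 1-nearest-neighbour rule. The feature triples, solver names, and training data are invented for illustration; they are not the paper's feature set, classifiers, or results.

        # hypothetical training set: ((#rules, #atoms, constraint ratio), best solver)
        train = [
            ((1200, 300, 0.10), "clasp"),
            ((90000, 25000, 0.55), "cmodels"),
            ((500, 80, 0.02), "smodels"),
        ]

        def distance(f, g):
            return sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5

        def select_solver(features):
            """Pick the solver that performed best on the nearest training instance."""
            return min(train, key=lambda entry: distance(entry[0], features))[1]

        print(select_solver((1500, 350, 0.12)))   # expects "clasp"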

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can still be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck, such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched-literal scheme. We present an algorithm for lazy model expansion and integrate it in a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
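    A toy sketch of the lazy-grounding idea, under strong simplifying assumptions: a single non-ground implication q(x) -> p(x) over a finite domain is left ungrounded while its justification (q(x) not being true) survives, and an instance is grounded only when an assignment invalidates that justification. This illustrates the mechanism only; it is not the lazy model expansion algorithm or the IDP implementation.

        domain = range(1, 6)
        assignment = {}      # partial assignment: atom -> True/False
        grounded = set()     # instances of q(x) -> p(x) already handed to the solver

        def justified(x):
            # justification for instance x: q(x) is not (yet) true
            return assignment.get(("q", x)) is not True

        def assign(atom, value):
            assignment[atom] = value
            # revisit instances whose justification this assignment may invalidate
            for x in domain:
                if not justified(x) and x not in grounded:
                    grounded.add(x)
                    print(f"grounding instance: p({x}) :- q({x}).")

        assign(("q", 2), True)    # invalidates the justification for x = 2
        assign(("q", 4), False)   # justification still valid, nothing is grounded
        print("ground instances:", sorted(grounded))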

    Magic Sets for Disjunctive Datalog Programs

    In this paper, a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs with stratified negation is presented. The technique exploits the propagation of query bindings and extends the Magic Set (MS) optimization technique. An important feature of disjunctive Datalog is nonmonotonicity, which calls for nondeterministic implementations, such as backtracking search. A distinguishing characteristic of the new method is that the optimization can be exploited also during the nondeterministic phase. In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to a query under these assumptions. This allows for dynamic pruning of the search space. In contrast, the effect of the previously defined MS methods for disjunctive Datalog is limited to the deterministic portion of the process. In this way, the potential performance gain obtained by using the proposed method can be exponential, as could be observed empirically. The correctness of MS is established thanks to a strong relationship between MS and unfounded sets that has not been studied in the literature before. This knowledge allows for extending the method also to programs with stratified negation in a natural way. The proposed method has been implemented in DLV and various experiments have been conducted. Experimental results on synthetic data confirm the utility of MS for disjunctive Datalog, and they highlight the computational gain that may be obtained by the new method w.r.t. the previously proposed MS methods for disjunctive Datalog programs. Further experiments on real-world data show the benefits of MS within an application scenario that has received considerable attention in recent years: the problem of answering user queries over possibly inconsistent databases originating from the integration of autonomous sources of information.
    Comment: 67 pages, 19 figures, preprint submitted to Artificial Intelligence
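    For readers unfamiliar with magic sets, the textbook (non-disjunctive) rewriting below shows the binding-propagation idea the paper builds on: the bound query seeds a magic predicate that restricts which ancestor facts can ever be derived. This is the classic example only, not the paper's extension to disjunction and stratified negation.

        original = """
        anc(X, Y) :- par(X, Y).
        anc(X, Y) :- par(X, Z), anc(Z, Y).
        % query: anc(john, Y)?
        """

        rewritten = """
        magic_anc_bf(john).                                   % seed from the bound query argument
        magic_anc_bf(Z) :- magic_anc_bf(X), par(X, Z).        % push the binding down recursive calls
        anc_bf(X, Y) :- magic_anc_bf(X), par(X, Y).           % original rules fire only
        anc_bf(X, Y) :- magic_anc_bf(X), par(X, Z), anc_bf(Z, Y).  % for relevant bindings
        """

        print(original)
        print(rewritten)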

    The PITA System: Tabling and Answer Subsumption for Reasoning under Uncertainty

    Many real-world domains require the representation of a measure of uncertainty. The most common such representation is probability, and the combination of probability with logic programs has given rise to the field of Probabilistic Logic Programming (PLP), leading to languages such as the Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs), ProbLog, PRISM and others. These languages share a similar distribution semantics, and methods have been devised to translate programs between these languages. The complexity of computing the probability of queries to these general PLP programs is very high due to the need to combine the probabilities of explanations that may not be exclusive. As one alternative, the PRISM system reduces the complexity of query answering by restricting the form of programs it can evaluate. As an entirely different alternative, Possibilistic Logic Programs adopt a simpler metric of uncertainty than probability. Each of these approaches -- general PLP, restricted PLP, and Possibilistic Logic Programming -- can be useful in different domains depending on the form of uncertainty to be represented, on the form of programs needed to model problems, and on the scale of the problems to be solved. In this paper, we show how the PITA system, which originally supported the general PLP language of LPADs, can also efficiently support restricted PLP and Possibilistic Logic Programs. PITA relies on tabling with answer subsumption and consists of a transformation along with an API for library functions that interface with answer subsumption.
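    A small sketch of the answer-subsumption idea for the possibilistic case: each tabled atom keeps a single aggregated value, and every new derivation is merged into it with a lattice join (here, the maximum over min-combined rule weights). The weighted program and the recursive evaluation are invented for illustration; this is not PITA's transformation or its tabling machinery.

        # weighted ground rules: head <- body, each with a necessity degree
        rules = {
            "wet": [(0.8, ["rain"]), (0.3, ["sprinkler"])],
            "rain": [(0.6, [])],
            "sprinkler": [(0.9, [])],
        }

        table = {}   # atom -> best value found so far (the "subsumed" answer)

        def possibility(atom):
            if atom in table:
                return table[atom]
            table[atom] = 0.0                           # seed; assumes an acyclic program
            for weight, body in rules.get(atom, []):
                value = min([weight] + [possibility(b) for b in body])
                table[atom] = max(table[atom], value)   # join: keep only the best answer
            return table[atom]

        print(possibility("wet"))   # max(min(0.8, 0.6), min(0.3, 0.9)) = 0.6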