    Using Answer Set Programming for pattern mining

    Serial pattern mining consists of extracting the frequent sequential patterns from a single sequence of itemsets. This paper explores the ability of a declarative language, such as Answer Set Programming (ASP), to solve this task efficiently. We propose several ASP implementations of the frequent sequential pattern mining task: a non-incremental and an incremental resolution. The results show that the incremental resolution is more efficient than the non-incremental one, but both ASP programs are less efficient than dedicated algorithms. Nonetheless, this approach can be seen as a first step toward a generic framework for sequential pattern mining with constraints. Comment: Intelligence Artificielle Fondamentale (2014)
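    A minimal, self-contained sketch of the idea, using the clingo Python API as one concrete ASP system (the toy facts, predicate names, and the restriction to length-2 patterns are illustrative simplifications, not the paper's actual encoding): an embedded ASP program guesses a candidate pattern, and an integrity constraint prunes it unless it occurs often enough in the single input sequence, so each answer set corresponds to one frequent pattern.

import clingo

encoding = """
% toy input: seq(Position, Item), a single sequence of itemsets
seq(1,a). seq(1,b). seq(2,c). seq(3,a). seq(4,c). seq(5,a). seq(6,c).

item(I) :- seq(_,I).

% guess a candidate pattern <I1, I2>
1 { pat(1,I) : item(I) } 1.
1 { pat(2,I) : item(I) } 1.

% an occurrence of the pattern anchored at position P1
occ(P1) :- pat(1,I1), pat(2,I2), seq(P1,I1), seq(P2,I2), P1 < P2.

% frequency threshold: keep the pattern only if it has at least 2 occurrences
:- #count{ P : occ(P) } < 2.

#show pat/2.
"""

ctl = clingo.Control(["0"])        # "0" = enumerate all answer sets
ctl.add("base", [], encoding)
ctl.ground([("base", [])])
# each answer set printed here is one frequent length-2 pattern
ctl.solve(on_model=lambda m: print("frequent pattern:", m))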

    Anytime Computation of Cautious Consequences in Answer Set Programming

    Query answering in Answer Set Programming (ASP) is usually solved by computing (a subset of) the cautious consequences of a logic program. This task is computationally very hard, and there are programs for which computing cautious consequences is not viable in reasonable time. However, current ASP solvers produce the (whole) set of cautious consequences only at the end of their computation. This paper reports on strategies for computing cautious consequences, also introducing anytime algorithms able to produce sound answers during the computation. Comment: To appear in Theory and Practice of Logic Programming
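    A hedged sketch of the underlying query mode, again with the clingo Python API (clingo is one ASP system; the paper's anytime algorithms are not reproduced here): in cautious enumeration mode, each model reported to the callback is the intersection of the answer sets found so far, an over-approximation that shrinks toward the cautious consequences, whereas the paper's contribution is to also deliver sound answers before the computation finishes.

import clingo

# toy program with answer sets {a, c} and {b, c}; its cautious consequences are {c}
program = """
a ; b.
c :- a.
c :- b.
"""

ctl = clingo.Control()
ctl.configuration.solve.enum_mode = "cautious"   # intersect answer sets
ctl.configuration.solve.models = "0"             # run the enumeration to completion
ctl.add("base", [], program)
ctl.ground([("base", [])])

def on_model(model):
    # intermediate calls show the current intersection of answer sets;
    # the last one reported is the set of cautious consequences (here: c)
    print("current intersection:", [str(s) for s in model.symbols(shown=True)])

ctl.solve(on_model=on_model)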

    Lazy Model Expansion: Interleaving Grounding with Search

    Finding satisfying assignments for the variables involved in a set of constraints can be cast as a (bounded) model generation problem: search for (bounded) models of a theory in some logic. The state-of-the-art approach for bounded model generation for rich knowledge representation languages, like ASP, FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or propositional one and apply a search algorithm to the resulting theory. An important bottleneck is the blowup of the size of the theory caused by the reduction phase. Lazily grounding the theory during search is a way to overcome this bottleneck. We present a theoretical framework and an implementation in the context of the FO(.) knowledge representation language. Instead of grounding all parts of a theory, justifications are derived for some parts of it. Given a partial assignment for the grounded part of the theory and valid justifications for the formulas of the non-grounded part, the justifications provide a recipe to construct a complete assignment that satisfies the non-grounded part. When a justification for a particular formula becomes invalid during search, a new one is derived; if that fails, the formula is split into a part to be grounded and a part that can be justified. The theoretical framework captures existing approaches for tackling the grounding bottleneck, such as lazy clause generation and grounding-on-the-fly, and presents a generalization of the 2-watched literal scheme. We present an algorithm for lazy model expansion and integrate it into a model generator for FO(ID), a language extending first-order logic with inductive definitions. The algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base System IDP. Experimental results illustrate the power and generality of the approach.
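    The grounding blowup that motivates this work is easy to reproduce. The following small, hypothetical example (written against clingo, one of the ground-and-solve ASP systems mentioned above, rather than the FO(ID) system IDP) shows how a single non-ground rule over a domain of size n already produces on the order of n^2 ground atoms before search even starts, which is exactly the eager reduction step that lazy model expansion tries to avoid.

import clingo

n = 200
program = f"""
num(1..{n}).
edge(X,Y) :- num(X), num(Y), X < Y.
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])   # the eager reduction step of ground-and-solve
# roughly n + n*(n-1)/2 ground atoms, i.e. about 20,100 for n = 200
print("ground atoms after grounding:", len(list(ctl.symbolic_atoms)))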