
    An incremental algorithm for generating all minimal models

    The task of generating minimal models of a knowledge base is at the computational heart of diagnosis systems like truth maintenance systems, and of nonmonotonic systems like autoepistemic logic, default logic, and disjunctive logic programs. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Ψ₁, Ψ₂, …, with the following properties: first, Ψ₁ is the class of all Horn knowledge bases; second, if a knowledge base T is in Ψₖ, then T has at most k minimal models, and all of them may be found in time O(lk²), where l is the length of the knowledge base; third, for an arbitrary knowledge base T, we can find the minimum k such that T belongs to Ψₖ in time polynomial in the size of T; and, last, where K is the class of all knowledge bases, it is the case that Ψ₁ ∪ Ψ₂ ∪ ⋯ = K, that is, every knowledge base belongs to some class in the hierarchy. The algorithm is incremental, that is, it is capable of generating one model at a time.
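
    The Ψ₁ base case is concrete enough to sketch: a Horn (definite) knowledge base has exactly one minimal model, its least fixpoint, which forward chaining reaches in time linear in the length of the theory. The sketch below is our own illustration of that base case, using an assumed (body, head) clause encoding; it is not the paper's incremental algorithm.

    # A minimal sketch of the Psi_1 base case: a Horn (definite) knowledge
    # base has exactly one minimal model, its least fixpoint, reachable by
    # forward chaining. Clause encoding and names are illustrative.

    def minimal_model_horn(clauses):
        """clauses: iterable of (body, head) pairs; body is a set of atoms,
        head a single atom. Returns the least model as a set of atoms."""
        model = set()
        changed = True
        while changed:
            changed = False
            for body, head in clauses:
                if head not in model and body <= model:
                    model.add(head)
                    changed = True
        return model

    # Facts a and b, plus rules a & b -> c and c -> d.
    clauses = [(set(), "a"), (set(), "b"), ({"a", "b"}, "c"), ({"c"}, "d")]
    print(minimal_model_horn(clauses))  # {'a', 'b', 'c', 'd'}

    The naive loop above is quadratic in the worst case; a linear-time variant would instead track, per clause, a count of body atoms not yet derived (Dowling-Gallier style unit propagation).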

    Reasoning with minimal models: efficient algorithms and applications

    Reasoning with minimal models is at the heart of many knowledge-representation systems. Yet this task turns out to be formidable, even when very simple theories are considered. In this paper, we introduce the elimination algorithm, which performs, in linear time, minimal model finding and minimal model checking for a significant subclass of positive CNF theories which we call positive head-cycle-free (HCF) theories. We also prove that the task of minimal entailment is easier for positive HCF theories than it is for the class of all positive CNF theories. Finally, we show how variations of the elimination algorithm can be applied to answer queries posed on disjunctive deductive databases and disjunctive default theories efficiently.
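
    For intuition about the two tasks the elimination algorithm speeds up, note that in a positive CNF theory every superset of a model is again a model. A minimal model can therefore be found by greedily shrinking the set of all atoms, and minimality can be checked by trying single-atom removals. The sketch below illustrates this generic quadratic approach, with an assumed clauses-as-sets-of-atoms encoding; it is not the paper's linear-time elimination algorithm for positive HCF theories.

    # Positive CNF: each clause is a set of atoms, satisfied by an
    # interpretation iff they share an atom. Models are upward closed,
    # which the greedy pass below exploits. Names are our own.

    def satisfies(clauses, interp):
        return all(clause & interp for clause in clauses)

    def find_minimal_model(clauses):
        model = set().union(*clauses)  # start from the set of all atoms
        for atom in sorted(model):
            if satisfies(clauses, model - {atom}):
                model.discard(atom)  # still a model without it: drop it
        return model

    def is_minimal_model(clauses, model):
        return satisfies(clauses, model) and not any(
            satisfies(clauses, model - {atom}) for atom in model
        )

    theory = [{"a", "b"}, {"b", "c"}]
    m = find_minimal_model(theory)
    print(m, is_minimal_model(theory, m))  # {'b'} True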

    Reformulating Non-Monotonic Theories for Inference and Updating

    We aim to help build programs that do large-scale, expressive non-monotonic reasoning (NMR): especially, 'learning agents' that store, and revise, a body of conclusions while continually acquiring new, possibly defeasible, premise beliefs. Currently available procedures for forward inference and belief revision are exhaustive, and thus impractical: they compute the entire non-monotonic theory, then re-compute from scratch upon updating with new axioms, which makes them badly intractable. In most theories of interest, even backward reasoning is combinatorial (at least NP-hard). Here, we give theoretical results for prioritized circumscription that show how to reformulate default theories so as to make forward inference selective and concurrent, and to restrict belief revision to a part of the theory. We elaborate a detailed divide-and-conquer strategy. We develop concepts of structure in NM theories by showing how to reformulate them in a particular fashion: to be conjunctively decomposed into a collection of smaller 'part' theories. We identify two well-behaved special cases that are easily recognized in terms of syntactic properties: disjoint appearances of predicates, and disjoint appearances of individuals (terms). As part of this, we also definitionally reformulate the global axioms, one by one, in addition to applying decomposition. We identify a broad class of prioritized default theories, generalizing default inheritance, for which our results especially bear fruit. For this asocially monadic class, decomposition permits reasoning to be localized to individuals (ground terms) and reduced to propositional form. Our reformulation methods are implementable in polynomial time and apply to several other NM formalisms beyond circumscription.
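
    The decomposition by disjoint predicate appearances can be pictured as computing connected components of axioms linked by shared predicate symbols: axioms in different components constrain disjoint vocabularies, so the theory is a conjunction of independent 'part' theories. The union-find sketch below is our own rough illustration of that recognition step, not the paper's reformulation procedure; each axiom is abstracted to the set of predicate symbols it mentions.

    # Group axioms into independent "part" theories: two axioms belong to
    # the same part iff they share a predicate, directly or transitively.

    def decompose(axioms):
        """axioms: list of nonempty sets of predicate symbols, one set per
        axiom. Returns lists of axiom indices, one list per part theory."""
        parent = {}

        def find(x):
            while parent.setdefault(x, x) != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(x, y):
            parent[find(x)] = find(y)

        for preds in axioms:
            first, *rest = preds
            for p in rest:
                union(first, p)

        parts = {}
        for i, preds in enumerate(axioms):
            parts.setdefault(find(next(iter(preds))), []).append(i)
        return list(parts.values())

    # Axioms 0 and 2 share the predicate "flies"; axiom 1 stands alone.
    axioms = [{"bird", "flies"}, {"fish"}, {"flies", "penguin"}]
    print(decompose(axioms))  # [[0, 2], [1]]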

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, expanding greatly from the workshop's origins in the machine learning community. Participants come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.