    Complexity of Prioritized Default Logics

    In default reasoning, usually not all possible ways of resolving conflicts between default rules are acceptable. Criteria expressing acceptable ways of resolving the conflicts may be hardwired into the inference mechanism (specificity in inheritance reasoning, for example, can be handled this way), or they may be given abstractly as an ordering on the default rules. In this article we investigate formalizations of the latter approach in Reiter's default logic. Our goal is to analyze and compare the computational properties of three such formalizations in terms of their computational complexity: the prioritized default logics of Baader and Hollunder and of Brewka, and a prioritized default logic based on lexicographic comparison. The analysis locates the propositional variants of these logics on the second and third levels of the polynomial hierarchy, and identifies the boundary between tractable and intractable inference for restricted classes of prioritized default theories.
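
    To illustrate the kind of lexicographic comparison the abstract mentions, here is a minimal sketch in Python, with all names hypothetical: given a strict priority order on default rules, one extension is preferred to another if, at the highest-priority rule where they differ, the preferred extension applies the default. This is only an illustration of the comparison idea, not the paper's formal semantics.

```python
# Minimal sketch of lexicographic comparison between two default-logic
# extensions, each identified by the set of default rules it applies.
# All names are illustrative; this is not the paper's formal semantics.

def lex_preferred(applied_a, applied_b, priority_order):
    """True if extension A is strictly preferred to extension B.

    priority_order lists default rules from highest to lowest priority;
    A wins if, at the first rule where the two extensions differ,
    A applies that rule and B does not.
    """
    for rule in priority_order:           # scan from highest priority down
        in_a, in_b = rule in applied_a, rule in applied_b
        if in_a != in_b:                  # first point of difference decides
            return in_a
    return False                          # extensions apply the same rules

# Example with priorities d1 > d2 > d3:
# A applies {d1, d3}, B applies {d2, d3}; A wins on d1.
print(lex_preferred({"d1", "d3"}, {"d2", "d3"}, ["d1", "d2", "d3"]))  # True
```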

    Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME

    We present a heuristic-based algorithm to induce non-monotonic logic programs that explain the behavior of XGBoost-trained classifiers. We use a technique based on the LIME approach to locally select the most important features contributing to the classification decision. Then, in order to explain the model's global behavior, we propose the LIME-FOLD algorithm, a heuristic-based inductive logic programming (ILP) algorithm capable of learning non-monotonic logic programs, which we apply to a transformed dataset produced by LIME. Our proposed approach is agnostic to the choice of ILP algorithm. Our experiments on standard UCI benchmarks suggest a significant improvement in classification evaluation metrics, while the number of induced rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system.
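
    The abstract describes a two-stage pipeline: LIME selects the locally most important features of an XGBoost model, and an ILP learner is then run on a dataset transformed accordingly. The sketch below (assuming the standard xgboost and lime packages; the dataset, hyperparameters, and helper names are illustrative) shows only the first stage; LIME-FOLD itself is not reproduced.

```python
# Sketch of the feature-selection stage: train an XGBoost classifier,
# then use LIME to extract the locally most important features per
# instance. The ILP stage (LIME-FOLD / ALEPH) is not reproduced here.
from sklearn.datasets import load_iris
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()                        # stand-in for a UCI benchmark
X, y = data.data, data.target

model = XGBClassifier(n_estimators=50)    # hyperparameters illustrative
model.fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=data.feature_names, class_names=list(data.target_names)
)

def top_features(x, k=2):
    """Return the k feature conditions LIME weights most heavily for x."""
    exp = explainer.explain_instance(x, model.predict_proba, num_features=k)
    return exp.as_list()                  # [(feature condition, weight), ...]

# A table of such per-instance conditions, binarized, is the kind of
# transformed dataset an ILP learner could then generalize over.
print(top_features(X[0]))
```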

    Perspectives in deductive databases

    I discuss my experiences, some of the work that I have done, and related work that influenced me, concerning deductive databases over the last 30 years. I divide this time period into three roughly equal parts: 1957–1968, 1969–1978, and 1979–present. For the first, I describe how my interest in deductive databases started in 1957, at a time when the field of databases did not even exist. I describe work in the beginning years, leading to the start of deductive databases around 1968 with the work of Cordell Green and Bertram Raphael. The second period saw a great deal of work in theorem proving as well as the introduction of logic programming. The existence and importance of deductive databases as a formal and viable discipline received its impetus at a workshop held in Toulouse, France, in 1977, which culminated in the book Logic and Data Bases. The relationship of deductive databases and logic programming was recognized at that time. During the third period we have seen formal theories of databases come about as an outgrowth of that work, and the recognition that artificial intelligence and deductive databases are closely related, at least through the so-called expert database systems. I expect that the relationships between techniques from formal logic, databases, logic programming, and artificial intelligence will continue to be explored, and that the field of deductive databases will become a more prominent area of computer science in coming years.

    Incremental Recompilation of Knowledge

    Approximating a general formula from above and below by Horn formulas (its Horn envelope and Horn core, respectively) was proposed by Selman and Kautz (1991, 1996) as a form of "knowledge compilation," supporting rapid approximate reasoning; on the negative side, this scheme is static in that it supports no updates, and it has certain complexity drawbacks pointed out by Kavvadias, Papadimitriou and Sideri (1993). On the other hand, the many frameworks and schemes proposed in the literature for theory update and revision are plagued by serious complexity-theoretic impediments, even in the Horn case, as was pointed out by Eiter and Gottlob (1992) and is further demonstrated in the present paper. More fundamentally, these schemes are not inductive, in that they may lose in a single update any positive properties of the represented sets of formulas (small size, Horn structure, etc.). In this paper we propose a new scheme, incremental recompilation, which combines Horn approximation and model-based updates; this scheme is inductive and very efficient, and free of the problems facing its constituents. A set of formulas is represented by an upper and a lower Horn approximation. To update, we replace the upper Horn formula by the Horn envelope of its minimum-change update, and similarly the lower one by the Horn core of its update; the key fact that enables this scheme is that Horn envelopes and cores are easy to compute when the underlying formula is the result of a minimum-change update of a Horn formula by a clause. We conjecture that efficient algorithms are possible for more complex updates.
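
    A model-theoretic fact behind the scheme may help: the models of a Horn formula are exactly a set of assignments closed under intersection, and the Horn envelope of a formula is the Horn formula whose models are the intersection-closure of the original models. The Python sketch below (representation illustrative: a model is the set of variables assigned true) computes such a closure; it is a conceptual aid, not the paper's algorithm.

```python
# Sketch of the intersection-closure view of Horn envelopes: closing a
# formula's set of models under pairwise intersection yields the model
# set of its Horn envelope. A model is a frozenset of true variables.

def intersection_closure(models):
    """Close a set of models under pairwise intersection."""
    closed = set(models)
    changed = True
    while changed:
        changed = False
        for m in list(closed):
            for n in list(closed):
                meet = m & n              # intersection of two models
                if meet not in closed:
                    closed.add(meet)
                    changed = True
    return closed

# Models of (x or y) are {x}, {y}, {x, y}; the closure adds {} as well,
# so no Horn formula has exactly the models of (x or y).
models = {frozenset({"x"}), frozenset({"y"}), frozenset({"x", "y"})}
print(sorted(map(sorted, intersection_closure(models))))
```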

    On Properties of Update Sequences Based on Causal Rejection

    We consider an approach to updating nonmonotonic knowledge bases represented as extended logic programs under the answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle, which enforces that, in case of conflict, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to updating logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of the properties of our approach in order to gain a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and we consider some new properties as well. We then consider refinements of our semantics that incorporate a notion of minimality of change. We also investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al., and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of the causal rejection principle apply to these approaches as well. Finally, we address the computational complexity of our approach, and we outline how the update semantics and its refinements can be implemented on top of existing logic programming engines.
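
    To make the causal rejection principle concrete, the sketch below (a deliberate simplification in Python; the rule representation and all names are illustrative) checks, for an update pair (p_old, p_new) and a candidate interpretation, which older rules are rejected because a more recent rule with a complementary head has a satisfied body. It illustrates only the rejection test, not the full update answer set semantics.

```python
# Simplified illustration of causal rejection for an update pair
# (p_old, p_new): an older rule is rejected under interpretation I when
# a newer rule whose head is the complementary literal has its body
# satisfied in I. Literals are strings; "-a" is the classical negation
# of "a". This is not the full update answer set semantics.

def complement(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def body_satisfied(body, interp):
    return all(lit in interp for lit in body)

def rejected(p_old, p_new, interp):
    """Rules of p_old overridden by p_new under interp."""
    out = []
    for head, body in p_old:
        for new_head, new_body in p_new:
            if new_head == complement(head) and body_satisfied(new_body, interp):
                out.append((head, body))  # newer conflicting rule fires
                break
    return out

# Older rule "sleep <- night" is overridden by newer "-sleep <- alarm"
# whenever the alarm goes off.
p_old = [("sleep", ["night"])]
p_new = [("-sleep", ["alarm"])]
print(rejected(p_old, p_new, {"night", "alarm"}))  # [('sleep', ['night'])]
```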