
    The CIFF Proof Procedure for Abductive Logic Programming with Constraints: Theory, Implementation and Experiments

    We present the CIFF proof procedure for abductive logic programming with constraints, and we prove its correctness. CIFF is an extension of the IFF proof procedure for abductive logic programming, relaxing the original restrictions on variable quantification (allowedness conditions) and incorporating a constraint solver to deal with numerical constraints as in constraint logic programming. Finally, we describe the CIFF system, comparing it with state-of-the-art abductive systems and answer set solvers and showing how to use it to program some applications. (To appear in Theory and Practice of Logic Programming - TPLP.)
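
    The flavour of the problem CIFF addresses can be pictured with a toy example. The sketch below is not the CIFF procedure itself: it naively enumerates sets of abducible facts that explain an observation while respecting a numerical constraint, and all predicate names and the constraint are illustrative assumptions.

```python
# Toy abductive problem with a numerical constraint (illustrative only, not CIFF).
# Hypothetical program:  alarm(T) :- intrusion(T).   alarm(T) :- test(T).
# Abducibles: intrusion(T), test(T).  Constraint: tests only run between 9 and 17.
from itertools import combinations

ABDUCIBLES = [("intrusion", t) for t in range(24)] + [("test", t) for t in range(24)]

def entails_alarm_at(hypotheses, t):
    # alarm(t) follows from the rules if some abduced fact mentions time t
    return any(time == t for (_, time) in hypotheses)

def constraints_ok(hypotheses):
    # numerical constraint: 9 <= T <= 17 for every abduced test(T)
    return all(9 <= time <= 17 for (pred, time) in hypotheses if pred == "test")

def explanations(observed_time, max_size=1):
    """Enumerate sets of abducibles (up to max_size) explaining alarm(observed_time)."""
    for size in range(1, max_size + 1):
        for hyp in combinations(ABDUCIBLES, size):
            if entails_alarm_at(hyp, observed_time) and constraints_ok(hyp):
                yield set(hyp)

print(list(explanations(3)))   # only {intrusion(3)}: the constraint rules out test(3)
print(list(explanations(10)))  # both {intrusion(10)} and {test(10)}
```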

    Coherent Integration of Databases by Abductive Logic Programming

    We introduce an abductive method for the coherent integration of independent data sources. The idea is to compute a list of data facts that should be inserted into the amalgamated database, or retracted from it, in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a pure model-theoretic analysis of the possible ways to 'recover' consistent data from an inconsistent database, in terms of those models of the database that exhibit as little inconsistent information as reasonably possible. This allows us to characterize the 'recovered databases' in terms of the 'preferred' (i.e., most consistent) models of the theory. The outcome is an abduction-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (thus more general) than any other implementation of coherent integration of databases.
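
    The repair idea can be sketched in a few lines. The following is a minimal illustration, not the Asystem implementation: it searches for subset-minimal sets of facts to retract from an amalgamated database so that an integrity constraint holds again (insertions are handled analogously); the database, constraint, and predicate names are made-up examples.

```python
# Minimal "repair by retraction" sketch (illustrative only, not Asystem).
from itertools import chain, combinations

db = {("employed", "ann"), ("unemployed", "ann"), ("employed", "bob")}

def consistent(database):
    # integrity constraint: nobody may be both employed and unemployed
    return not any(("employed", x) in database and ("unemployed", x) in database
                   for (_, x) in database)

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def minimal_repairs(database):
    """Enumerate subset-minimal sets of retractions that restore consistency."""
    repairs = []
    for retracted in map(set, powerset(database)):   # enumerated smallest-first
        if consistent(database - retracted):
            if not any(r < retracted for r in repairs):   # keep only minimal ones
                repairs.append(retracted)
    return repairs

# Two 'preferred' recovered databases: drop employed(ann) or drop unemployed(ann).
print(minimal_repairs(db))
```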

    Computing abduction by using TMS with top-down expectation

    We present a method to compute abduction in logic programming. We translate an abductive framework into a normal logic program with integrity constraints and show the correspondence between generalized stable models and the stable models of the translation of the abductive framework. Abductive explanations for an observation can be found from the stable models of the translated program by adding a special kind of integrity constraint for the observation. We then present a bottom-up procedure to compute stable models of a normal logic program with integrity constraints. The proposed procedure avoids the unnecessary construction of stable models at early stages by checking integrity constraints during the construction and by deriving some facts from the integrity constraints. Although a bottom-up procedure in general has the disadvantage of constructing stable models unrelated to the observation when computing abductive explanations, our procedure avoids this disadvantage by anticipating (expecting) which rule should be used to satisfy the integrity constraints and starting the bottom-up computation from that expectation. This expectation is not only a technique to narrow rule selection but also an indispensable part of our stable model construction, because the expectation is made for dynamically generated constraints as well as for the constraint associated with the observation.
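
    A naive guess-and-check version of the translation idea can make it concrete. The sketch below is illustrative only (the paper's procedure is bottom-up and avoids this enumeration): the single abducible a is encoded with the usual pair of even-loop rules, the observation is enforced by an integrity constraint, and the explanation is read off the surviving stable model.

```python
# Guess-and-check stable models of a tiny translated abductive framework (illustrative only).
from itertools import chain, combinations

ATOMS = {"a", "a_star", "obs"}
# Rules as (head, positive_body, negative_body); 'a' is the only abducible.
RULES = [
    ("a", set(), {"a_star"}),       # a      :- not a_star.
    ("a_star", set(), {"a"}),       # a_star :- not a.
    ("obs", {"a"}, set()),          # obs    :- a.
]
CONSTRAINT = "obs"                  # integrity constraint ":- not obs" for the observation

def least_model(reduct):
    """Least model of a negation-free program given as (head, positive_body) pairs."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models():
    """Guess an interpretation, reduce the program by it, and check the fixpoint."""
    subsets = chain.from_iterable(combinations(sorted(ATOMS), r)
                                  for r in range(len(ATOMS) + 1))
    for guess in map(set, subsets):
        reduct = [(h, pos) for (h, pos, neg) in RULES if not (neg & guess)]
        if least_model(reduct) == guess and CONSTRAINT in guess:
            yield guess

# The surviving stable model contains the abducible 'a': the explanation of 'obs'.
print(list(stable_models()))       # [{'a', 'obs'}] (set ordering may vary)
```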

    Abduction in Well-Founded Semantics and Generalized Stable Models

    Abductive logic programming offers a formalism to declaratively express and solve problems in areas such as diagnosis, planning, belief revision and hypothetical reasoning. Tabled logic programming offers a computational mechanism that provides a level of declarativity superior to that of Prolog, and which has supported successful applications in fields such as parsing, program analysis, and model checking. In this paper we show how to use tabled logic programming to evaluate queries to abductive frameworks with integrity constraints when these frameworks contain both default and explicit negation. The result is the ability to compute abduction over the well-founded semantics with explicit negation and over answer sets. Our approach consists of a transformation and an evaluation method. The transformation adjoins to each objective literal O in a program an objective literal not(O), along with rules that ensure that not(O) will be true if and only if O is false. We call the resulting program a dual program. The evaluation method then operates on the dual program. It is sound and complete for evaluating queries to abductive frameworks whose entailment method is based either on the well-founded semantics with explicit negation or on answer sets. Further, it is asymptotically as efficient as any known method for either class of problems. In addition, when abduction is not desired, the method, operating on a dual program, provides a novel tabling method for evaluating queries to ground extended programs, with complexity and termination properties similar to those of the best tabling methods for the well-founded semantics. A publicly available meta-interpreter implementing the method has been developed using the XSB system. (48 pages; to appear in Theory and Practice of Logic Programming.)
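
    The dual transformation can be illustrated on a tiny ground program. The sketch below is a simplified illustration and not the paper's full transformation (which also handles explicit negation and non-ground programs): for every atom O it builds rules for a fresh atom not_O that succeed exactly when every rule body for O contains a failed literal.

```python
# Simplified dual-program construction (illustrative only).
from itertools import product

# A rule body is a list of literals: an atom, or ("not", atom) for default negation.
program = {
    "p": [["q"], [("not", "r")]],   # p :- q.      p :- not r.
    "q": [],                        # no rules for q
    "r": [],                        # no rules for r
}

def complement(lit):
    """The complement of a literal: not q <-> q."""
    return lit[1] if isinstance(lit, tuple) else ("not", lit)

def dualize(rules):
    """For each atom H, build rules for not_H that succeed iff every rule for H fails."""
    dual = {}
    for head, bodies in rules.items():
        if not bodies:
            dual["not_" + head] = [[]]   # H has no rules, so not_H holds unconditionally
        else:
            # pick one literal to falsify in each body (one dual rule per combination)
            dual["not_" + head] = [[complement(lit) for lit in choice]
                                   for choice in product(*bodies)]
    return dual

def show(lit):
    return "not " + lit[1] if isinstance(lit, tuple) else lit

for head, bodies in dualize(program).items():
    for body in bodies:
        print(head, ":-", ", ".join(show(l) for l in body) or "true", end=".\n")
# prints:  not_p :- not q, r.   not_q :- true.   not_r :- true.
```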

    Recycling Computed Answers in Rewrite Systems for Abduction

    In rule-based systems, goal-oriented computations correspond naturally to the possible ways that an observation may be explained. In some applications, we need to compute explanations for a series of observations over the same domain. The question arises whether previously computed answers can be recycled; a positive answer could yield substantial savings by avoiding repeated computation. For systems based on classical logic, the answer is yes. For nonmonotonic systems, however, one tends to believe that the answer should be no, since recycling is a form of adding information. In this paper, we show that computed answers can always be recycled, in a nontrivial way, for the class of rewrite procedures that we proposed earlier for logic programs with negation. We present some experimental results on an encoding of the logistics domain. (20 pages; full version of our IJCAI-03 paper.)

    CRAFTING THE MIND OF PROSOCS AGENTS

    PROSOCS agents are software agents that are built according to the KGP model of agency. KGP is used as a model for the mind of the agent, so that the agent can act autonomously using a collection of logic theories that provide the mind's reasoning functionalities. The behavior of the agent is controlled by a cycle theory that specifies the agent's preferred patterns of operation. The implementation of the mind's generic functionality in PROSOCS is worked out in such a way that it can be instantiated by the platform for different agents across applications. In this context, the development of a concrete example illustrates how an agent developer might program the generic functionality of the mind for a simple application.
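
    The role of a cycle theory can be pictured with a very small sketch. The following is illustrative only and not the PROSOCS/KGP implementation: the cycle theory is reduced to a function that, given the capability executed last, selects the next one, yielding a preferred observe-plan-act pattern. All capability names, state fields and transitions are assumptions made for the example.

```python
# Toy agent cycle driven by a "cycle theory" (illustrative only, not PROSOCS).
def passive_observation(state):
    state["beliefs"].add("saw_obstacle")          # pretend a sensor fired

def planning(state):
    if "saw_obstacle" in state["beliefs"]:
        state["plan"].append("turn_left")

def action_execution(state):
    if state["plan"]:
        print("executing:", state["plan"].pop(0))

def cycle_theory(state, last):
    """Preferred pattern of operation: observe, then plan, then act (if a plan exists)."""
    if last is None or last is action_execution:
        return passive_observation
    if last is passive_observation:
        return planning
    return action_execution if state["plan"] else passive_observation

state = {"beliefs": set(), "plan": []}
step = None
for _ in range(6):                                # run a few cycle steps
    step = cycle_theory(state, step)
    step(state)
```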

    Distributed Abductive Reasoning: Theory, Implementation and Application

    Abductive reasoning is a powerful logic inference mechanism that allows assumptions to be made during the computation of answers to a query, and is thus suitable for reasoning over incomplete knowledge. Multi-agent hypothetical reasoning is the application of abduction in a distributed setting, where each computational agent has local knowledge representing a partial view of the world and the union of all agents' knowledge is still incomplete. It differs from simple distributed query processing because the assumptions made by the agents must also be consistent with global constraints. Multi-agent hypothetical reasoning has many potential applications, such as collaborative planning and scheduling, distributed diagnosis and cognitive perception. Many of these applications require the representation of arithmetic constraints in their problem specifications as well as constraint satisfaction support during the computation. In addition, some applications may have confidentiality concerns that restrict the information that can be exchanged between the agents during their collaboration. Although a limited number of distributed abductive systems have been developed, none of them is generic enough to support the above requirements. In this thesis we develop, in the spirit of Logic Programming, a generic and extensible distributed abductive system that has the potential to target a wide range of distributed problem solving applications. The underlying distributed inference algorithm incorporates constraint satisfaction and allows non-ground conditional answers to be computed; its soundness and completeness have been proved. The algorithm is customisable in that different inference and coordination strategies (such as goal selection and agent selection strategies) can be adopted while maintaining correctness. A customisation that supports confidentiality during problem solving has been developed, and is used in application domains such as distributed security policy analysis. Finally, for evaluation purposes, a flexible experimental environment has been built for automatically generating different classes of distributed abductive constraint logic programs. This environment has been used to conduct an empirical investigation of the performance of the customised system.
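
    The coordination requirement, that locally made assumptions must remain consistent with global constraints, can be sketched compactly. The example below is a centralised, illustrative toy and not the thesis's distributed algorithm; the agents, their local explanations, and the global constraint are all assumptions made for the example.

```python
# Toy combination of local abductive explanations under a global constraint (illustrative only).
from itertools import product

# Each agent maps a goal to the alternative assumption sets that explain it locally.
agent_explanations = {
    "alice": {"deliver(pkg)": [{"truck_free(9)"}, {"truck_free(14)"}]},
    "bob":   {"route_open":   [{"no_roadworks(9)"}, {"no_roadworks(17)"}]},
}

def globally_consistent(assumptions):
    """Global constraint: the free-truck slot and the open-route slot must coincide."""
    hours = lambda atoms: {a.split("(")[1].rstrip(")") for a in atoms}
    trucks = {a for a in assumptions if a.startswith("truck_free")}
    roads = {a for a in assumptions if a.startswith("no_roadworks")}
    return hours(trucks) == hours(roads)

def distributed_explanations(goals):
    """Combine one local explanation per (agent, goal) and keep globally consistent unions."""
    alternatives = [agent_explanations[agent][goal] for agent, goal in goals]
    for combo in product(*alternatives):
        assumptions = set().union(*combo)
        if globally_consistent(assumptions):
            yield assumptions

query = [("alice", "deliver(pkg)"), ("bob", "route_open")]
print(list(distributed_explanations(query)))   # only the matching 9 o'clock assumptions survive
```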