9 research outputs found

    Negative Reinforcement and Backtrack-Points for Recurrent Neural Networks for Cost-Based Abduction

    Abduction is the process of proceeding from data describing a set of observations or events to a set of hypotheses which best explains or accounts for the data. Cost-based abduction (CBA) is an AI formalism in which the evidence to be explained is treated as a goal to be proven, proofs have costs based on how much must be assumed to complete the proof, and the set of assumptions needed to complete the least-cost proof is taken as the best explanation for the given evidence. In this paper, we introduce two techniques for improving the performance of high-order recurrent networks (HORN) applied to cost-based abduction. In the backtrack-points technique, we use heuristics to recognize early that the network trajectory is moving in the wrong direction; we then restore the network state to a previously stored point and apply heuristic perturbations to nudge the network trajectory in a different direction. In the negative-reinforcement technique, we add hyperedges to the network to reduce the attractiveness of local minima. We apply these techniques to a particularly difficult 300-hypothesis, 900-rule instance of CBA.
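    The control strategy described in this abstract can be pictured with a short sketch. The Python below is a minimal, hypothetical illustration of the backtrack-points idea only: `step`, `energy`, and `looks_unpromising` stand in for the paper's HORN update rule, energy function, and heuristics, and none of these names, parameters, or defaults come from the paper itself.

```python
import numpy as np

def run_with_backtracking(state, step, energy, looks_unpromising,
                          max_iters=10_000, perturb_scale=0.1, seed=0):
    """Search loop with stored backtrack-points (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    checkpoint_state, checkpoint_energy = state.copy(), energy(state)
    for _ in range(max_iters):
        state = step(state)                        # one network update
        e = energy(state)
        if e < checkpoint_energy:                  # better point reached: store it
            checkpoint_state, checkpoint_energy = state.copy(), e
        elif looks_unpromising(state, checkpoint_energy):
            # restore the stored backtrack-point and perturb it so the
            # network trajectory moves off in a different direction
            state = checkpoint_state + perturb_scale * rng.standard_normal(state.shape)
    return checkpoint_state, checkpoint_energy
```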

    A linear constraint satisfaction approach to cost-based abduction

    Abstract: Santos Jr., E., A linear constraint satisfaction approach to cost-based abduction, Artificial Intelligence 65 (1994) 1-27. Abduction is the problem of finding the best explanation for a given set of observations. Within AI, this has been modeled as proving the observation by assuming some set of hypotheses. Cost-based abduction associates a cost with each hypothesis. The best proof is the one which assumes the least costly set. Previous approaches to finding the least-cost set have formalized cost-based abduction as a heuristic graph-search problem. However, efficient admissible heuristics have proven difficult to find. In this paper, we present a new technique for finding least-cost sets by using linear constraints to represent causal relationships. In particular, we are able to recast the problem as a 0-1 integer linear programming problem. We can then use the highly efficient optimization tools of operations research, yielding a computationally efficient method for solving cost-based abduction problems. Experiments comparing our linear constraint satisfaction approach to standard graph-search methodologies suggest that our approach is superior to existing search techniques in that it exhibits an expected-case polynomial run-time growth rate.
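    The recasting into a 0-1 integer linear program can be illustrated with a toy model. The sketch below uses the PuLP modelling library on an invented two-rule knowledge base; the variable layout and constraints are an illustrative assumption, not the paper's exact formulation.

```python
# Toy 0-1 ILP encoding of a cost-based abduction problem (illustrative only).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

# Assumable hypotheses with their assumption costs (made-up numbers).
costs = {"h1": 3.0, "h2": 5.0, "h3": 2.0}
# Rules: each proves the goal from a conjunction of hypotheses (made-up).
rules = {"r1": ["h1", "h2"], "r2": ["h3"]}

prob = LpProblem("cost_based_abduction", LpMinimize)
h = {n: LpVariable(n, cat="Binary") for n in costs}   # assume hypothesis n?
r = {n: LpVariable(n, cat="Binary") for n in rules}   # use rule n?

prob += lpSum(costs[n] * h[n] for n in costs)         # minimize total assumption cost
prob += lpSum(r.values()) >= 1                        # the goal must be proven by some rule
for name, body in rules.items():
    for hyp in body:
        prob += h[hyp] >= r[name]                     # a used rule needs all its antecedents assumed

prob.solve(PULP_CBC_CMD(msg=False))
print("least-cost explanation:", [n for n in costs if h[n].value() >= 0.5])
```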

    On the Generation of Alternative Explanations with Implications for Belief Revision

    In general, the best explanation for a given observation carries no guarantee of how good it is relative to alternative explanations. A major deficiency of message-passing schemes for belief revision in Bayesian networks is their inability to generate alternatives beyond the second best. In this paper, we present a general approach based on linear constraint systems that naturally generates alternative explanations in an orderly and highly efficient manner. This approach is then applied to cost-based abduction problems as well as belief revision in Bayesian networks.
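    One standard way to obtain an ordered stream of alternative solutions from a 0-1 constraint formulation is to re-solve after excluding each solution already found. The sketch below illustrates that generic enumeration pattern on a toy cost-based abduction model; it is offered only as an illustration and may differ from the paper's own constraint-system mechanism.

```python
# Enumerate best, second-best, third-best explanations by adding an
# exclusion ("no-good") constraint after each solve (illustrative only).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpStatusOptimal, PULP_CBC_CMD

costs = {"h1": 3.0, "h2": 5.0, "h3": 2.0}        # made-up hypothesis costs
prob = LpProblem("alternative_explanations", LpMinimize)
h = {n: LpVariable(n, cat="Binary") for n in costs}
prob += lpSum(costs[n] * h[n] for n in costs)    # minimize total assumption cost
prob += h["h1"] + h["h3"] >= 1                   # toy constraint: the observation must be explained

for k in range(3):                               # ask for the three best explanations
    if prob.solve(PULP_CBC_CMD(msg=False)) != LpStatusOptimal:
        break
    chosen = [n for n in costs if h[n].value() >= 0.5]
    print(f"explanation #{k + 1}:", chosen)
    # exclude exactly this assignment so the next solve yields the next-best one
    prob += (lpSum(h[n] for n in chosen)
             - lpSum(h[n] for n in costs if n not in chosen)) <= len(chosen) - 1
```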