
    Reformulating Non-Monotonic Theories for Inference and Updating

    We aim to help build programs that do large-scale, expressive non-monotonic reasoning (NMR): in particular, 'learning agents' that store and revise a body of conclusions while continually acquiring new, possibly defeasible, premise beliefs. Currently available procedures for forward inference and belief revision are exhaustive, and thus badly intractable: they compute the entire non-monotonic theory, then re-compute from scratch upon updating with new axioms. In most theories of interest, even backward reasoning is combinatorial (at least NP-hard). Here, we give theoretical results for prioritized circumscription that show how to reformulate default theories so as to make forward inference selective and concurrent, and to restrict belief revision to a part of the theory. We elaborate a detailed divide-and-conquer strategy. We develop concepts of structure in NM theories by showing how to reformulate them so that they decompose conjunctively into a collection of smaller 'part' theories. We identify two well-behaved special cases that are easily recognized in terms of syntactic properties: disjoint appearances of predicates, and disjoint appearances of individuals (terms). As part of this, we also definitionally reformulate the global axioms, one by one, in addition to applying decomposition. We identify a broad class of prioritized default theories, generalizing default inheritance, for which our results especially bear fruit. For this 'asocially monadic' class, decomposition permits reasoning to be localized to individuals (ground terms) and reduced to propositional reasoning. Our reformulation methods are implementable in polynomial time, and apply to several other NM formalisms beyond circumscription.
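
    The decomposition by disjoint predicate appearances can be pictured with a small sketch. The following is a hypothetical illustration, not the paper's implementation: axioms are abstracted to the sets of predicate symbols they mention, and axioms sharing a predicate are grouped into the same 'part' theory, so an update with a new axiom forces revision of only one part.

```python
# Hypothetical sketch (not the paper's implementation): group axioms that
# share a predicate symbol into the same 'part' theory via union-find.
from collections import defaultdict

def decompose(axioms):
    """axioms: list of (axiom_id, set of predicate symbols it mentions).
    Returns part theories as lists of axiom_ids; no predicate symbol
    appears in two distinct parts."""
    parent = {}

    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for ax_id, preds in axioms:
        for p in preds:
            union(('ax', ax_id), ('pred', p))

    parts = defaultdict(list)
    for ax_id, _ in axioms:
        parts[find(('ax', ax_id))].append(ax_id)
    return list(parts.values())

# Bird/Flies axioms share no predicate with the Fish/Swims axiom, so the
# theory splits into two parts that can be reasoned about separately.
theory = [
    (1, {'Bird', 'Flies'}),
    (2, {'Penguin', 'Bird'}),
    (3, {'Fish', 'Swims'}),
]
print(decompose(theory))  # [[1, 2], [3]]
```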

    Reasoning Biases, Non-Monotonic Logics and Belief Revision

    A range of formal models of human reasoning have been proposed in fields such as philosophy, logic, artificial intelligence, computer science, psychology, and cognitive science: various logics (epistemic logics; non-monotonic logics), probabilistic systems (most notably, but not exclusively, Bayesian probability theory), belief revision systems, and neural networks, among others. Now, it seems reasonable to require that formal models of human reasoning be (minimally) empirically adequate if they are to be viewed as models of the phenomena in question. How are formal models of human reasoning typically put to empirical test? One way is to isolate a number of key principles of the system and design experiments to gauge the extent to which participants do or do not follow them in reasoning tasks. Another is to take relevant existing results and check whether a particular formal model predicts them. The present investigation provides an illustration of the second kind of empirical testing by comparing two formal models of reasoning, namely the non-monotonic logic known as preferential logic and a particular version of belief revision theory, screened belief revision, against the reasoning phenomenon known as belief bias in the psychology of reasoning literature: human reasoners typically seek to maintain the beliefs they already hold and, conversely, to reject contradicting incoming information. The conclusion of our analysis will be that screened belief revision is more empirically adequate with respect to belief bias than preferential logic and non-monotonic logics in general, as what participants seem to be doing is above all a form of belief management on the basis of background knowledge. The upshot is thus that, while it may offer valuable insights into the nature of human reasoning, preferential logic (and non-monotonic logics in general) is ultimately inadequate as a formal model of the phenomena in question.
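
    To make the contrast concrete, here is a minimal, hypothetical propositional sketch of the screening step in screened belief revision (the function names and literal encoding are illustrative, and full AGM-style contraction is omitted): input that contradicts a protected core of background beliefs is rejected outright, which mirrors the belief-bias pattern described above.

```python
# Hypothetical sketch of the screening step. Literals are signed ints:
# p and -p are complementary.

def consistent(literals):
    """A set of literals is consistent iff it contains no pair p, -p."""
    return all(-lit not in literals for lit in literals)

def screened_revise(beliefs, core, new):
    """beliefs, core: sets of literals, core a subset of beliefs;
    new: one incoming literal."""
    if not consistent(core | {new}):
        return set(beliefs)                 # screen: reject the input outright
    revised = {lit for lit in beliefs if lit != -new}  # retract the conflict
    return revised | {new}                  # then accept the new information

# 1 = a protected background belief, 2 = a peripheral belief.
beliefs, core = {1, 2}, {1}
print(screened_revise(beliefs, core, -1))  # {1, 2}: contradicts the core, rejected
print(screened_revise(beliefs, core, -2))  # {1, -2}: peripheral belief revised
```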

    Conditionals and modularity in general logics

    In this work in progress, we discuss independence, interpolation, and related topics for classical, modal, and non-monotonic logics.
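
    For orientation only (the abstract itself does not state it): the classical Craig interpolation property, which discussions of interpolation in general logics start from, can be written as follows.

```latex
% Classical Craig interpolation (standard background, not a claim of the
% abstract above): if \varphi entails \psi, there is an interpolant \chi
% whose non-logical vocabulary L(\chi) is common to both formulas.
\varphi \models \psi
\;\Longrightarrow\;
\exists \chi :\quad
\varphi \models \chi, \quad
\chi \models \psi, \quad
L(\chi) \subseteq L(\varphi) \cap L(\psi)
```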

    Tractability of Theory Patching

    In this paper we consider the problem of 'theory patching', in which we are given a domain theory, some of whose components are indicated as possibly flawed, and a set of labeled training examples for the domain concept. The theory patching problem is to revise only the indicated components of the theory, such that the resulting theory correctly classifies all the training examples. Theory patching is thus a type of theory revision in which revisions are made to individual components of the theory. Our concern in this paper is to determine for which classes of logical domain theories the theory patching problem is tractable. We consider both propositional and first-order domain theories, and show that the theory patching problem is equivalent to determining what information contained in a theory is 'stable' regardless of what revisions might be performed to the theory. We show that determining stability is tractable if the input theory satisfies two conditions: that revisions to each theory component have monotonic effects on the classification of examples, and that theory components act independently in the classification of examples. We also show how the concepts introduced can be used to determine the soundness and completeness of particular theory patching algorithms.
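
    The stability idea admits a small illustration. The following is a hypothetical sketch, not the paper's algorithm: the theory is a conjunction of components, and the only revision modeled is deleting a flawed component (a deliberately narrow choice). Since deletion can only flip a label from negative to positive, and components test each example independently, a label is stable under every revision exactly when the two extremes agree.

```python
# Hypothetical sketch (not the paper's algorithm). Theory = conjunction of
# components; the only revision modeled here is deleting a flawed component.
# Deletion can only flip a label from negative to positive (monotonic effect),
# so a label is stable across all revisions iff the two extremes agree:
# every flawed component kept vs. every flawed component deleted.

def classify(example, components):
    """Positive iff every component holds of the example."""
    return all(test(example) for test in components)

def stable(example, fixed, flawed):
    keep_all = classify(example, fixed + flawed)  # no component deleted
    drop_all = classify(example, fixed)           # every flawed one deleted
    return keep_all == drop_all

# Toy domain: examples are feature dicts; one component is suspected flawed.
fixed = [lambda e: e['has_wings']]
flawed = [lambda e: e['lays_eggs']]

bat = {'has_wings': True, 'lays_eggs': False}
rock = {'has_wings': False, 'lays_eggs': False}
print(stable(bat, fixed, flawed))   # False: the label depends on the patch
print(stable(rock, fixed, flawed))  # True: negative under every revision
```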