2 research outputs found

    Extending Automated Deduction for Commonsense Reasoning

    Commonsense reasoning has long been considered one of the holy grails of artificial intelligence. Most of the recent progress in the field has been achieved by novel machine learning algorithms for natural language processing. However, without incorporating logical reasoning, these algorithms remain arguably shallow. With some notable exceptions, developers of practical automated logic-based reasoners have mostly avoided the problem. The paper argues that the methods and algorithms used by existing automated reasoners for classical first-order logic can be extended towards commonsense reasoning. Instead of devising new specialized logics, we propose a framework of extensions to the mainstream resolution-based search methods to make them capable of performing practical commonsense reasoning search tasks with reasonable efficiency. The proposed extensions mostly rely on operating on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies, default rules, taxonomies, topics, relevance, confidence and similarity measures. We claim that machine learning is best suited for the construction of commonsense knowledge bases, while the extended logic-based methods would be well suited for actually answering queries from these knowledge bases.
    Comment: 19 pages, no figures
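
    The abstract's idea of attaching confidence measures to resolution can be made concrete with a small sketch. The Python snippet below is an illustrative assumption on our part, not the paper's actual algorithm: clauses carry a confidence in [0, 1], a resolvent inherits the minimum of its parents' confidences, and a derived contradiction therefore arrives with a strength indicating how firmly the weakest rule it rests on was held.

        from itertools import combinations

        def resolvents(c1, c2):
            """All clauses obtained by resolving c1 and c2 on a complementary literal.
            Clauses are frozensets of nonzero ints; -n is the negation of n."""
            return [frozenset((c1 - {lit}) | (c2 - {-lit}))
                    for lit in c1 if -lit in c2]

        def saturate(kb):
            """kb maps clause -> confidence. Saturate under resolution, keeping the
            best confidence derived for each clause (min-combination of parents)."""
            clauses = dict(kb)
            changed = True
            while changed:
                changed = False
                for c1, c2 in combinations(list(clauses), 2):
                    conf = min(clauses[c1], clauses[c2])
                    for r in resolvents(c1, c2):
                        if clauses.get(r, -1.0) < conf:
                            clauses[r] = conf
                            changed = True
            return clauses

        # Tweety: a default rule with confidence 0.9 against hard facts.
        # Literals: 1 = bird, 2 = penguin, 3 = flies.
        kb = {
            frozenset({-1, 3}): 0.9,   # birds fly (default)
            frozenset({-2, -3}): 1.0,  # penguins do not fly
            frozenset({-2, 1}): 1.0,   # penguins are birds
            frozenset({2}): 1.0,       # Tweety is a penguin
        }
        print(saturate(kb).get(frozenset()))  # 0.9: the contradiction traces to the default

    The min() rule is only one possible combination; a product or noisy-or style rule would fit the same loop, and choosing among such rules is exactly the kind of design question the proposed framework concerns.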

    The Relevance of Proofs of the Rationality of Probability Theory to Automated Reasoning and Cognitive Models

    A number of well-known theorems, such as Cox's theorem and de Finetti's theorem, prove that any model of reasoning with uncertain information that satisfies specified conditions of "rationality" must satisfy the axioms of probability theory. We argue here that these theorems do not in themselves demonstrate that probabilistic models are in fact suitable for any specific task in automated reasoning or plausible for cognitive models. First, the theorems only establish the existence of some probabilistic model; they do not establish that there exists a useful probabilistic model, i.e., one with a tractably small number of numerical parameters and a large number of independence assumptions. Second, there are in general many different probabilistic models for a given situation, many of which may be far more irrational, in the usual sense of the term, than a model that violates the axioms of probability theory. We illustrate this second point with an extended example of two tasks of induction, similar in structure, where the reasonable probabilistic models are very different. Advocates of probabilistic methods in artificial intelligence (AI) and cognitive modeling have often claimed that the only rational approach to representing and reasoning with uncertain knowledge is to use models based on the standard theory of probability, and that the only rational approach to making decisions with …
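
    The abstract's second point, that many mutually incompatible models can all satisfy the axioms of probability, is easy to see in a toy induction task (our illustration, not the paper's own example). Both models below are fully coherent probability models, yet after five red draws in a row one predicts the next draw is likely red and the other that it is likely not:

        def laplace(successes, trials):
            """Rule of succession: iid draws with a uniform prior on the red fraction.
            Each observed red makes the next red MORE likely (inductive)."""
            return (successes + 1) / (trials + 2)

        def urn_without_replacement(successes, trials, reds=10, total=20):
            """Draws without replacement from an urn known to hold 10 red of 20 balls.
            Each observed red makes the next red LESS likely (anti-inductive),
            yet every axiom of probability theory is satisfied."""
            return (reds - successes) / (total - trials)

        print(laplace(5, 5))                  # 6/7  = 0.857...
        print(urn_without_replacement(5, 5))  # 5/15 = 0.333...

    Which model is the rational one depends entirely on background knowledge about the situation; the coherence theorems are silent on that choice.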