
    A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond

    Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring implicit derivation of knowledge from explicitly known facts. A prominent example domain benefiting from these formalisms since the 1990s is the biomedical field. This area contributes a vast amount of facts and relations between low- and high-level concepts, such as the constitution of cells or interactions between studied illnesses, their symptoms and remedies. DLs are well-suited for handling large formal knowledge repositories and computing inferable coherences throughout such data, relying on their well-founded first-order semantics. In particular, DLs of reduced expressivity have proven tremendously valuable for handling large ontologies due to their computational tractability.

    In spite of these assets and prevailing influence, classical DLs are not well-suited to adequately model some of the most intuitive forms of reasoning. The capability for defeasible reasoning is imperative for any field subject to incomplete knowledge and the motivation to complete it with typical expectations. When such default expectations receive contradicting evidence, a defeasible formalism is able to retract previously drawn, conflicting conclusions. Common examples include human reasoning or the default characterisation of properties in biology, such as the normal arrangement of organs in the human body. Treatment of such defeasible knowledge must be aware of exceptional cases - such as a human suffering from the congenital condition situs inversus - and must therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion. Specifically tailored non-monotonic semantics have been continuously investigated for DLs over the past 30 years. A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor on preferential (propositional) logics and Rational Closure (RC). The biggest advantages of RC are its good behaviour with respect to formal inference postulates and the efficient computation of defeasible entailments, which relies on a tractable reduction to classical reasoning in the underlying formalism.

    A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation. This framework is then easily instantiated to provide the reduction method for RC in DLs as well as more advanced closure operators, such as Relevant or Lexicographic Closure. In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that occur only in the relational neighbourhood of the inspected elements. More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and to describe aspects of a related concept in terms of existential and universal quantification. Previous approaches to RC (and more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification. The main contribution of this work is to introduce stronger semantics for the lightweight DL EL_bot with the capability to infer the expected entailments, while maintaining a close relation to the reduction method. We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information on its elements directly.
This allows us to compare the level of typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood. A typicality preference relation then provides the means to single out those sets of models with maximal typicality. Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the entailments missing under RC for arbitrarily nested quantified concepts. As a proof of versatility for our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations. An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional effort, while the selective semantics stay within the complexity of classical reasoning in the underlying DL, which remains tractable in our case.
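    The reduction of Rational Closure to classical reasoning mentioned in the abstract is easiest to see in the propositional setting. The sketch below is a minimal, illustrative implementation of the standard RC ranking and entailment check over propositional defaults, using the classic bird/penguin example rather than the thesis's DL setting; all names and helper functions are our own assumptions, not code from the thesis.

```python
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

# A "formula" is a function from an assignment (dict atom -> bool) to bool.
def implies(a, b): return lambda w: (not a(w)) or b(w)
def neg(a):        return lambda w: not a(w)
def atom(name):    return lambda w: w[name]

def entails(premises, goal):
    """Classical propositional entailment via truth-table enumeration."""
    for values in product([False, True], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, values))
        if all(p(w) for p in premises) and not goal(w):
            return False
    return True

# Defeasible conditionals "antecedent typically implies consequent".
defaults = [
    (atom("bird"),    atom("flies")),        # birds typically fly
    (atom("penguin"), neg(atom("flies"))),   # penguins typically do not fly
    (atom("penguin"), atom("bird")),         # penguins are typically birds
]

def materialise(ds):
    return [implies(a, c) for a, c in ds]

def rank_defaults(ds):
    """Rational-closure ranking: a default is exceptional w.r.t. a set D
    if the materialisation of D entails the negation of its antecedent."""
    ranks, current, level = {}, list(ds), 0
    while current:
        mat = materialise(current)
        exceptional = [(a, c) for a, c in current if entails(mat, neg(a))]
        if len(exceptional) == len(current):   # remaining defaults never applicable
            for d in current:
                ranks[d] = float("inf")
            break
        for d in current:
            if d not in exceptional:
                ranks[d] = level
        current, level = exceptional, level + 1
    return ranks

def rc_entails(ds, antecedent, consequent):
    """Does 'antecedent |~ consequent' hold under Rational Closure?"""
    ranks = rank_defaults(ds)
    max_rank = max((r for r in ranks.values() if r != float("inf")), default=0)
    for level in range(int(max_rank) + 2):
        kept = materialise([d for d in ds if ranks[d] >= level])
        if not entails(kept, neg(antecedent)):   # antecedent consistent at this level
            return entails(kept + [antecedent], consequent)
    return True

print(rc_entails(defaults, atom("bird"), atom("flies")))          # True
print(rc_entails(defaults, atom("penguin"), atom("flies")))       # False
print(rc_entails(defaults, atom("penguin"), neg(atom("flies"))))  # True
```

    As the abstract points out, a ranking-and-reduction procedure of this kind says nothing about elements reached only through quantification (e.g. role successors in EL_bot), which is exactly the gap the thesis's nested rational semantics address.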

    Logic and Commonsense Reasoning: Lecture Notes

    These are the lecture notes of a course on logic and commonsense reasoning given to master's students in philosophy at the University of Rennes 1. N.B.: Some parts of these lecture notes are largely based on, or copied verbatim from, publications of other authors. When this is the case, these parts are mentioned at the end of each chapter in the section “Further reading”.

    Default reasoning using maximum entropy and variable strength defaults

    The thesis presents a computational model for reasoning with partial information which uses default rules or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions, while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules, and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with the application of a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and the algorithms for the ME approach and argues that it should be considered as a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples required of nonmonotonic behaviour, and it is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and system Z+. It is shown that the former can be equated with ME under suitable conditions, making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflicts which the ME approach allows. Finally, a program called DRS is described which implements all systems discussed in the thesis and provides a tool for testing their behaviours. (Funded by the Engineering and Physical Sciences Research Council, EPSRC.)
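    To make the maximum-entropy reading of defaults concrete, the sketch below computes an ME distribution over propositional worlds subject to conditional-probability constraints, using scipy for the constrained optimisation. The atoms, constraint strengths and helper names are illustrative assumptions of ours; the thesis's own machinery (variable-strength defaults with an ε-semantics) is considerably more refined than this toy.

```python
import numpy as np
from itertools import product
from scipy.optimize import minimize

ATOMS = ["bird", "penguin", "flies"]
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([False, True], repeat=3)]

def indicator(pred):
    """0/1 vector over worlds for a predicate on assignments."""
    return np.array([1.0 if pred(w) else 0.0 for w in WORLDS])

# Defaults read as constraints P(consequent | antecedent) = strength.
# Strengths are illustrative placeholders, not values from the thesis.
defaults = [
    (lambda w: w["bird"],    lambda w: w["flies"],     0.99),  # birds fly
    (lambda w: w["penguin"], lambda w: not w["flies"], 0.99),  # penguins don't
    (lambda w: w["penguin"], lambda w: w["bird"],      0.99),  # penguins are birds
]

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

cons = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
for ante, conseq, strength in defaults:
    a = indicator(ante)
    both = indicator(lambda w, A=ante, C=conseq: A(w) and C(w))
    # P(A & C) - strength * P(A) = 0  encodes  P(C | A) = strength
    cons.append({"type": "eq",
                 "fun": lambda p, a=a, both=both, s=strength: both @ p - s * (a @ p)})

x0 = np.full(len(WORLDS), 1.0 / len(WORLDS))
res = minimize(neg_entropy, x0, constraints=cons, bounds=[(0, 1)] * len(WORLDS))
p_me = res.x

def prob(pred, given=lambda w: True):
    joint = indicator(lambda w: pred(w) and given(w)) @ p_me
    return joint / (indicator(given) @ p_me)

# Directly constrained quantity (should be close to 0.99):
print("P(flies | bird) =", round(prob(lambda w: w["flies"], lambda w: w["bird"]), 3))
# Not directly constrained; the more specific penguin default keeps it low:
print("P(flies | penguin & bird) =",
      round(prob(lambda w: w["flies"], lambda w: w["penguin"] and w["bird"]), 3))
```

    The second query illustrates the kind of conflict resolution the abstract refers to: the ME distribution respects the more specific penguin default without any extra design features.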

    The Boltzmann Machine: a Connectionist Model for Supra-Classical Logic

    This thesis moves towards a reconciliation of two of the major paradigms of artificial intelligence by exploring the representation of symbolic logic in an artificial neural network. Previous attempts at the machine representation of classical logic are reviewed. We, however, consider the requirements of inference in the broader realm of supra-classical, non-monotonic logic. This logic is concerned with the tolerance of exceptions, thought to be associated with common-sense reasoning. Biological plausibility extends these requirements in the context of human cognition. The thesis identifies the requirements of supra-classical, non-monotonic logic in relation to the properties of candidate neural networks. Previous research has theoretically identified the Boltzmann machine as a potential candidate. We provide experimental evidence supporting a version of the Boltzmann machine as a practical representation of this logic. The theme is pursued by looking at the benefits of utilising the relationship between the logic and the Boltzmann machine in two areas. We report adaptations to the machine architecture which select for different information distributions. These distributions correspond to state preference in traditional logic versus the concept of atomic typicality in contemporary approaches to logic. We also show that the learning algorithm of the Boltzmann machine can be adapted to implement pseudo-rehearsal during retraining. The results of machine retraining are then utilised to consider the plausibility of some current theories of belief revision in logic. Furthermore, we propose an alternative approach to belief revision based on the experimental results of retraining the Boltzmann machine.
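    As a rough illustration of the correspondence the abstract draws between state preference in logic and the Boltzmann machine, the sketch below defines a tiny machine, computes its exact state distribution, and orders states by probability; lower-energy, more probable states play the role of more "typical" or preferred worlds. The weights, unit labels and Gibbs sampler are hand-picked assumptions for illustration, not the trained networks of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny Boltzmann machine over three binary units.
# Units: 0 = "bird", 1 = "penguin", 2 = "flies".
W = np.array([[ 0.0,  1.0,  2.0],
              [ 1.0,  0.0, -3.0],
              [ 2.0, -3.0,  0.0]])
b = np.array([0.5, -1.0, 0.0])

def energy(s):
    """Standard Boltzmann-machine energy: lower energy = more 'preferred' state."""
    return -0.5 * s @ W @ s - b @ s

def boltzmann_distribution(T=1.0):
    """Exact state probabilities p(s) proportional to exp(-E(s)/T) over the 2^3 states."""
    states = [np.array([i, j, k], dtype=float) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    weights = np.array([np.exp(-energy(s) / T) for s in states])
    return states, weights / weights.sum()

def gibbs_sample(steps=5000, T=1.0):
    """Gibbs sampling: repeatedly resample one unit from its conditional distribution."""
    s = rng.integers(0, 2, size=3).astype(float)
    for _ in range(steps):
        i = rng.integers(0, 3)
        activation = (W[i] @ s - W[i, i] * s[i] + b[i]) / T
        s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-activation)))
    return s

# States ordered by probability play the role of a preference/typicality order on worlds.
states, probs = boltzmann_distribution()
for s, p in sorted(zip(states, probs), key=lambda x: -x[1])[:4]:
    print(s.astype(int), round(p, 3))
print("Gibbs sample after burn-in:", gibbs_sample().astype(int))
```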

    Computational Complexity of Strong Admissibility for Abstract Dialectical Frameworks

    Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation that allows general logical satisfaction conditions. The different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. Recently, the notion of strong admissibility has been introduced for ADFs. In the current work we study the computational complexity of the following reasoning tasks under strong admissibility semantics: (1) the credulous/skeptical decision problem; (2) the verification problem; (3) the strong justification problem; and (4) the problem of finding a smallest witness of strong justification of a queried argument.
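    For readers unfamiliar with ADFs, the sketch below shows the basic machinery the decision problems above are defined over: arguments with Boolean acceptance conditions and the three-valued characteristic operator, iterated to the grounded interpretation, which in the strong admissibility literature bounds the strongly admissible interpretations from above. The example framework and all names are hypothetical, and the brute-force operator is an illustrative baseline, not the paper's algorithms or complexity results.

```python
from itertools import product

# Each argument has a list of parents and an acceptance condition, a Boolean
# function of the parents' truth values. This toy ADF is made up for illustration.
PARENTS = {"a": [], "b": ["a"], "c": ["a", "b"]}
ACCEPT = {
    "a": lambda v: True,                   # a is accepted unconditionally
    "b": lambda v: not v["a"],             # b is accepted only if a is rejected
    "c": lambda v: v["a"] and not v["b"],  # c needs a accepted and b rejected
}

UNDEC = "u"

def completions(v, args):
    """All two-valued interpretations of args extending the three-valued v."""
    unknown = [x for x in args if v[x] == UNDEC]
    for vals in product([False, True], repeat=len(unknown)):
        w = {x: v[x] for x in args}
        w.update(dict(zip(unknown, vals)))
        yield w

def gamma(v):
    """Three-valued characteristic operator: an argument becomes true (false)
    iff its acceptance condition holds (fails) in every completion of v."""
    new = {}
    for arg in PARENTS:
        vals = {ACCEPT[arg](w) for w in completions(v, PARENTS[arg])}
        new[arg] = True if vals == {True} else False if vals == {False} else UNDEC
    return new

def grounded():
    """Least fixed point of gamma, starting from the all-undecided interpretation."""
    v = {arg: UNDEC for arg in PARENTS}
    while (nxt := gamma(v)) != v:
        v = nxt
    return v

print(grounded())   # {'a': True, 'b': False, 'c': True}
```

    The decision problems studied in the paper ask, roughly, whether a queried argument can be strongly justified within this lattice of interpretations, how to verify a candidate interpretation, and how small a witness of strong justification can be; the paper's contribution is pinning down the exact complexity of these tasks.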