Monotonicity and Persistence in Preferential Logics
An important characteristic of many logics for Artificial Intelligence is
their nonmonotonicity. This means that adding a formula to the premises can
invalidate some of the consequences. There may, however, exist formulae that
can always be safely added to the premises without destroying any of the
consequences: we say they respect monotonicity. Also, there may be formulae
that, when they are a consequence, cannot be invalidated by adding any
formula to the premises: we call them conservative. We study these two classes
of formulae for preferential logics, and show that they are closely linked to
the formulae whose truth-value is preserved along the (preferential) ordering.
We will consider some preferential logics for illustration, and prove syntactic
characterization results for them. The results in this paper may improve the
efficiency of theorem provers for preferential logics.
Comment: See http://www.jair.org/ for any accompanying file.
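The nonmonotonicity described in this abstract can be made concrete with a toy preferential semantics, where a conclusion follows iff it holds in all most-preferred models of the premises. The sketch below is a minimal illustration; the atoms, the background knowledge, and the abnormality-based preference order are assumptions for the example, not taken from the paper:

```python
from itertools import product

# Toy preferential consequence: a conclusion follows from the premises
# iff it holds in all *most normal* models of the premises, where
# normality prefers worlds with fewer abnormal (non-flying) birds.
ATOMS = ["bird", "penguin", "flies"]

def worlds():
    # background knowledge: penguins are birds, and penguins do not fly
    for vals in product([False, True], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, vals))
        if w["penguin"] and not (w["bird"] and not w["flies"]):
            continue
        yield w

def abnormality(w):
    # a bird that does not fly counts as abnormal
    return int(w["bird"] and not w["flies"])

def pref_entails(premises, conclusion):
    models = [w for w in worlds() if all(p(w) for p in premises)]
    if not models:
        return True
    least = min(abnormality(w) for w in models)
    return all(conclusion(w) for w in models if abnormality(w) == least)

bird    = lambda w: w["bird"]
penguin = lambda w: w["penguin"]
flies   = lambda w: w["flies"]

# From "bird" alone we preferentially conclude "flies" ...
print(pref_entails([bird], flies))           # True
# ... but adding "penguin" invalidates that consequence (nonmonotonicity)
print(pref_entails([bird, penguin], flies))  # False
```

Here "flies" behaves exactly as the abstract describes: it is a preferential consequence of "bird" that is destroyed by strengthening the premises.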
A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond
Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring implicit derivation of knowledge from explicitly known facts.
A prominent example domain benefiting from these formalisms since the 1990s is the biomedical field.
This area contributes a vast number of facts and relations between low- and high-level concepts, such as the constitution of cells or interactions between studied illnesses, their symptoms and remedies.
DLs are well-suited for handling large formal knowledge repositories and computing inferable coherences throughout such data, relying on their well-founded first-order semantics.
In particular, DLs of reduced expressivity have proven tremendously valuable for handling large ontologies due to their computational tractability.
In spite of these assets and prevailing influence, classical DLs are not well-suited to adequately model some of the most intuitive forms of reasoning.
The capability for defeasible reasoning is imperative for any field subject to incomplete knowledge and the motivation to complete it with typical expectations.
When such default expectations receive contradicting evidence, a defeasible formalism is able to retract previously drawn, conflicting conclusions.
Common examples often include human reasoning or a default characterisation of properties in biology, such as the normal arrangement of organs in the human body.
Treatment of such defeasible knowledge must be aware of exceptional cases - such as a human suffering from the congenital condition situs inversus - and therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion.
Specifically tailored non-monotonic semantics have been continuously investigated for DLs in the past 30 years.
A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor on preferential (propositional) logics and Rational Closure (RC).
The biggest advantages of RC are its good behaviour with respect to formal inference postulates and the efficient computation of defeasible entailments, relying on a tractable reduction to classical reasoning in the underlying formalism.
A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation.
This framework is then easily instantiated to provide the reduction method for RC in DLs as well as more advanced closure operators, such as Relevant or Lexicographic Closure.
In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that only occur in the relational neighbourhood of the inspected elements.
More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and describe aspects of a related concept in terms of existential and universal quantification.
Previous approaches to RC (and more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification.
The main contribution of this work is to introduce stronger semantics for the lightweight DL EL_bot with the capability to infer the expected entailments, while maintaining a close relation to the reduction method.
We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information on its elements directly.
This makes it possible to compare the level of typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood.
A typicality preference relation then provides the means to single out those sets of models with maximal typicality.
Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the missing entailments under RC for arbitrarily nested quantified concepts.
As a proof of versatility for our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations.
An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional effort, while the selective semantics resides within the complexity of classical reasoning in the underlying DL, which remains tractable in our case.
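The reduction of Rational Closure to classical reasoning that this abstract relies on can be illustrated in miniature for propositional defeasible rules. The sketch below computes the standard RC ranking: an antecedent is exceptional with respect to a rule set when the material counterparts of those rules classically entail its negation. The atoms, rule names, and the brute-force entailment check are illustrative assumptions; this is not the thesis's EL_bot procedure:

```python
from itertools import product

# Rational Closure ranking for propositional defeasible rules "a |~ b",
# via the standard reduction: antecedent a is exceptional w.r.t. a rule
# set R iff the material implications of R classically entail "not a".
ATOMS = ["b", "p", "f"]  # bird, penguin, flies (illustrative)

def models(formula):
    for vals in product([False, True], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, vals))
        if formula(w):
            yield w

def entails(premise, conclusion):
    # brute-force classical entailment over the eight valuations
    return all(conclusion(w) for w in models(premise))

def material(rules):
    # conjunction of the material implications a -> b
    return lambda w: all((not a(w)) or b(w) for a, b in rules)

def rc_ranks(rules):
    ranks, rest, i = {}, list(rules), 0
    while rest:
        exceptional = [(a, b) for a, b in rest
                       if entails(material(rest), lambda w, a=a: not a(w))]
        if len(exceptional) == len(rest):
            break  # remaining rules would be infinitely exceptional
        for r in rest:
            if r not in exceptional:
                ranks[r] = i
        rest, i = exceptional, i + 1
    return ranks

b = lambda w: w["b"]
p = lambda w: w["p"]
f = lambda w: w["f"]
not_f = lambda w: not w["f"]

birds_fly     = (b, f)      # birds typically fly
penguins_dont = (p, not_f)  # penguins typically do not fly
p_implies_b   = (p, b)      # penguins are typically birds

ranks = rc_ranks([birds_fly, penguins_dont, p_implies_b])
print(ranks[birds_fly])      # 0: "birds fly" is not exceptional
print(ranks[penguins_dont])  # 1: the penguin rules are more specific
```

The ranking then drives defeasible entailment by discarding the lowest ranks until the query's antecedent becomes classically consistent with the remaining material implications, which is exactly why RC inherits the tractability of the underlying classical formalism.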
Modeling Belief in Dynamic Systems, Part II: Revision and Update
The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. In a companion paper (Friedman & Halpern,
1997), we introduce a new framework to model belief change. This framework
combines temporal and epistemic modalities with a notion of plausibility,
allowing us to examine the change of beliefs over time. In this paper, we show
how belief revision and belief update can be captured in our framework. This
allows us to compare the assumptions made by each method, and to better
understand the principles underlying them. In particular, it shows that Katsuno
and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on
several strong assumptions that may limit its applicability in artificial
intelligence. Finally, our analysis allows us to identify a notion of minimal
change that underlies a broad range of belief change operations including
revision and update.
Comment: See http://www.jair.org/ for other files accompanying this article.
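The difference between revision and update that this paper analyses can be seen on a small possible-worlds example. The sketch below contrasts a Dalal-style revision (globally closest new-information worlds, with Hamming distance assumed as the measure of closeness for illustration) with Katsuno-Mendelzon update (closest worlds computed per belief world, then unioned):

```python
# Belief revision vs. belief update over possible worlds (bit-tuples).
# Revision keeps the phi-worlds globally closest to the belief set;
# update revises each belief world separately and takes the union.

def dist(w, v):
    # Hamming distance between two worlds (an illustrative choice)
    return sum(x != y for x, y in zip(w, v))

def revise(belief, phi):
    d = min(dist(w, v) for v in phi for w in belief)
    return {v for v in phi if min(dist(w, v) for w in belief) == d}

def update(belief, phi):
    result = set()
    for w in belief:
        d = min(dist(w, v) for v in phi)
        result |= {v for v in phi if dist(w, v) == d}
    return result

belief = {(0, 0), (1, 1)}   # two equally plausible worlds
phi    = {(1, 0), (1, 1)}   # new information

print(sorted(revise(belief, phi)))  # [(1, 1)]          -- global minimal change
print(sorted(update(belief, phi)))  # [(1, 0), (1, 1)]  -- pointwise minimal change
```

Because update must respect each belief world individually, it retains a world that revision discards; this pointwise behaviour is one of the strong assumptions behind Katsuno and Mendelzon's operator that the paper examines.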
Logic and Commonsense Reasoning: Lecture Notes
These are the lecture notes of a course on logic and commonsense reasoning given to master students in philosophy at the University of Rennes 1. N.B.: Some parts of these lecture notes are largely based on or copied verbatim from publications of other authors. When this is the case, those parts are mentioned at the end of each chapter in the section “Further reading”.
Ceteris Paribus Laws
Laws of nature take center stage in philosophy of science. Laws are usually believed to stand in a tight conceptual relation to many important key concepts such as causation, explanation, confirmation, determinism, counterfactuals etc. Traditionally, philosophers of science have focused on physical laws, which were taken to be at least true, universal statements that support counterfactual claims. But, although this claim about laws might be true with respect to physics, laws in the special sciences (such as biology, psychology, economics etc.) appear to have—maybe not surprisingly—different features than the laws of physics. Special science laws—for instance, the economic law “Under the condition of perfect competition, an increase of demand of a commodity leads to an increase of price, given that the quantity of the supplied commodity remains constant” and, in biology, Mendel's Laws—are usually taken to “have exceptions”, to be “non-universal” or “to be ceteris paribus laws”. How and whether the laws of physics and the laws of the special sciences differ is one of the crucial questions motivating the debate on ceteris paribus laws. Another major, controversial question concerns the determination of the precise meaning of “ceteris paribus”. Philosophers have attempted to explicate the meaning of ceteris paribus clauses in different ways. The question of meaning is connected to the problem of empirical content, i.e., the question of whether ceteris paribus laws have non-trivial and empirically testable content. Since many philosophers have argued that ceteris paribus laws lack empirically testable content, this problem constitutes a major challenge to a theory of ceteris paribus laws.
Practical reasoning for defeasible description logics
Doctor of Philosophy in Mathematics, Statistics and Computer Science. University of KwaZulu-Natal, Durban, 2016.
Description Logics (DLs) are a family of logic-based languages for formalising
ontologies. They have useful computational properties allowing the development
of automated reasoning engines to infer implicit knowledge from
ontologies. However, classical DLs do not tolerate exceptions to specified
knowledge. This led to the prominent research area of nonmonotonic or defeasible
reasoning for DLs, where most techniques were adapted from seminal
works for propositional and first-order logic.
Despite the topic's attention in the literature, there remains no consensus
on what "sensible" defeasible reasoning means for DLs. Furthermore, there
are solid foundations for several approaches and yet no serious implementations
and practical tools. In this thesis we address the aforementioned issues
in a broad sense. We identify the preferential approach, by Kraus, Lehmann
and Magidor (KLM) in propositional logic, as a suitable abstract framework
for defining and studying the precepts of sensible defeasible reasoning.
We give a generalisation of KLM's precepts, and their arguments motivating
them, to the DL case. We also provide several preferential algorithms
for defeasible entailment in DLs; evaluate these algorithms, and the main
alternatives in the literature, against the agreed upon precepts; extensively
test the performance of these algorithms; and ultimately consolidate our implementation
in a software tool called Defeasible-Inference Platform (DIP).
We found some useful entailment regimes within the preferential context
that satisfy all the KLM properties, and some that have scalable performance
in real-world ontologies even without extensive optimisation.
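The KLM precepts this thesis evaluates against can be checked mechanically on small models. The sketch below induces a defeasible consequence relation from a toy ranked model (as in the preferential approach) and exhaustively verifies Cautious Monotonicity, one of the KLM postulates, for conjunctions of atoms. The worlds, ranks, and atom names are illustrative assumptions, not from the thesis:

```python
from itertools import combinations

# A toy ranked model: worlds are atom tuples, lower rank = more typical.
WORLDS = {
    (): 0,
    ("bird", "flies"): 0,
    ("bird",): 1,
    ("bird", "penguin"): 1,
}
ATOMS = ["bird", "flies", "penguin"]

def holds(atoms, world):
    # a set of atoms, read conjunctively, holds in a world
    return atoms <= set(world)

def defeasibly_entails(A, B):
    # A |~ B iff B holds in all minimal-rank worlds satisfying A
    models = [w for w in WORLDS if holds(A, w)]
    if not models:
        return True
    least = min(WORLDS[w] for w in models)
    return all(holds(B, w) for w in models if WORLDS[w] == least)

def check_cautious_monotonicity():
    # KLM Cautious Monotonicity: A |~ B and A |~ C imply (A and B) |~ C
    subsets = [set(c) for r in range(len(ATOMS) + 1)
               for c in combinations(ATOMS, r)]
    return all(defeasibly_entails(A | B, C)
               for A in subsets for B in subsets for C in subsets
               if defeasibly_entails(A, B) and defeasibly_entails(A, C))

print(defeasibly_entails({"bird"}, {"flies"}))             # True
print(defeasibly_entails({"bird", "penguin"}, {"flies"}))  # False
print(check_cautious_monotonicity())                       # True
```

Postulate checks of this exhaustive kind are what separates "sensible" defeasible entailment regimes from ad hoc ones: a regime failing such a test on some model violates the corresponding KLM property outright.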