
    Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination

    In the context of the Semantic Web, several approaches have been proposed for combining ontologies, given as theories of classical first-order logic, with rule bases. They either cast rules into classical logic or limit the interaction between rules and ontologies. Autoepistemic logic (AEL) is an attractive formalism for overcoming these limitations, since it can serve as a uniform host language into which both ontologies and nonmonotonic logic programs are embedded. For the latter, only the propositional setting has been considered so far. In this paper, we present three embeddings of normal and three embeddings of disjunctive non-ground logic programs under the stable model semantics into first-order AEL. While all the embeddings coincide on objective ground atoms, differences arise for non-atomic formulas and for combinations with first-order theories. We compare the embeddings with respect to stable expansions and autoepistemic consequences, considering the embeddings both by themselves and in combination with classical theories. Our results reveal differences and correspondences among the embeddings and provide useful guidance in the choice of a particular embedding for knowledge combination. Comment: 52 pages, submitted
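    As an illustration of the propositional case that this work generalizes, one well-known embedding (in the style of Gelfond) reads default negation as "not believed", using the autoepistemic belief operator L. The exact placement of L varies between embeddings, which is precisely where the differences studied in the paper arise; the following is only a sketch of the classical variant, not the paper's own definitions.

```latex
% A normal rule with default negation:
%   p <- q, not r
% is embedded into AEL as the formula
\[
  \bigl( q \land \lnot L\, r \bigr) \rightarrow p
\]
% where L is the autoepistemic belief operator, so "not r" becomes
% "r is not believed". Other embeddings instead guard positive body
% atoms with L as well, e.g. (L q \land \lnot L r) \rightarrow p.
```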

    Logical Reduction of Metarules

    Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows with the number of metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.
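    To make the subsumption reduction concrete, here is a minimal sketch of θ-subsumption between metarules, one of the two traditional reductions the paper considers. The clause representation, the brute-force matcher, and the greedy reduction loop are simplifications for illustration, not the authors' implementation; in a metarule both predicate symbols and term arguments are second-order/first-order variables, so the substitution may map all of them.

```python
# Sketch (assumed representation): a metarule is a list of literals
# (sign, predicate_variable, argument_tuple); the first literal is the head.
# Clause c theta-subsumes d if some substitution maps c into a subset of d.

def subsumes(c, d):
    """True if c * theta is a subset of d for some substitution theta."""
    d = list(d)

    def extend(lits, theta):
        if not lits:
            return True
        (sign, pred, args), rest = lits[0], lits[1:]
        for (dsign, dpred, dargs) in d:
            if dsign != sign or len(dargs) != len(args):
                continue
            trial = dict(theta)
            if all(trial.setdefault(v, t) == t            # one-way binding:
                   for v, t in zip((pred,) + args,        # c-symbols map to
                                   (dpred,) + dargs)):    # d-symbols only
                if extend(rest, trial):
                    return True
        return False

    return extend(list(c), {})

def subsumption_reduce(clauses):
    """Greedily drop every metarule subsumed by an already-kept one."""
    kept = []
    for c in clauses:
        if not any(subsumes(k, c) for k in kept):
            kept.append(c)
    return kept

# Hypothetical metarules: identity-shaped, chain, and a wider clause.
ident = [('+', 'P', ('A', 'B')), ('-', 'Q', ('A', 'B'))]
chain = [('+', 'P', ('A', 'B')), ('-', 'Q', ('A', 'C')), ('-', 'R', ('C', 'B'))]
wide  = [('+', 'S', ('X', 'Y')), ('-', 'T', ('X', 'Y')), ('-', 'U', ('Y', 'Z'))]
# ident subsumes wide (P->S, Q->T, A->X, B->Y) but not chain,
# so reducing [ident, wide, chain] keeps only ident and chain.
```

    Derivation reduction, the paper's new technique, would instead ask whether a metarule is SLD-derivable from the others, which is strictly weaker than subsumption by a single clause.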

    The evolutionary explanation: the limits of the desire theories of unpleasantness

    Several theorists have argued that unpleasantness can be explained by appealing to (intrinsic, simultaneous, de re) desires for certain experiences not to be occurring. In a nutshell, experiences are unpleasant because we do not want them, and not vice versa. A common criticism of this approach takes the form of a Euthyphro dilemma. Even if there is a solution to this criticism, I argue that the approach is limited in two important ways: it can explain neither i) the motivation, from a conscious psychological point of view, for having the relevant desires, nor ii) any non-instrumental justification for having them. The lack of these explanations matters, since they are precisely the kinds of clarification we would expect from a theory of unpleasantness.

    Consequentializing Moral Dilemmas

    The aim of the consequentializing project is to show that, for every plausible ethical theory, there is a version of consequentialism that is extensionally equivalent to it. One challenge this project faces is that there are common-sense ethical theories that posit moral dilemmas. There has been some speculation about how consequentializers should react to these theories, but so far there has not been a systematic treatment of the topic. In this article, I show that there are at least five ways in which we can construct versions of consequentialism that are extensionally equivalent to the ethical theories that contain moral dilemmas. I argue that all these consequentializing strategies face a dilemma: either they must posit moral dilemmas in unintuitive cases or they must rely on unsupported assumptions about value, permissions, requirements, or options. I also consider this result's consequences for the consequentializing project.

    Induction of Interpretable Possibilistic Logic Theories from Relational Data

    The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented as weighted logical formulas, which makes them considerably more interpretable than models obtained by, e.g., neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways, and the weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models. Comment: Longer version of a paper appearing in IJCAI 201
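    The stratified reading of a possibilistic theory can be sketched as weighted Horn inference: a conclusion inherits the minimum weight used along a derivation, and the best derivation (maximum over those minima) gives its certainty level. The encoding below, including the toy domain and the function name, is an illustrative assumption of the standard possibilistic-logic semantics, not the paper's actual learner.

```python
# Sketch: a possibilistic theory as weighted facts and Horn rules.
# facts: {atom: necessity weight}; rules: [(body_atoms, head_atom, weight)].

def possibilistic_chase(facts, rules):
    """Forward-chain to the certainty (necessity degree) of each atom."""
    certainty = dict(facts)
    changed = True
    while changed:
        changed = False
        for body, head, w in rules:
            if all(b in certainty for b in body):
                # a derivation is only as certain as its weakest step
                derived = min([w] + [certainty[b] for b in body])
                if derived > certainty.get(head, 0.0):
                    certainty[head] = derived
                    changed = True
    return certainty

# Hypothetical stratified theory over a toy domain:
facts = {"bird(tweety)": 1.0}
rules = [
    (["bird(tweety)"], "flies(tweety)", 0.8),        # birds usually fly
    (["flies(tweety)"], "nests_high(tweety)", 0.6),  # fliers often nest high
]
# -> flies(tweety) is derivable at level 0.8, nests_high(tweety) at 0.6
```

    Reading off the strata (here 1.0 > 0.8 > 0.6) is what makes such a model directly inspectable, in contrast to MLN weights, which only acquire meaning relative to the whole ground network.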