187 research outputs found

    A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems

    We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining, and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature. (Comment: 12 pages, 55 references.)
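    The "boxology" idea of composing typed components can be sketched as a small data structure. This is a minimal illustrative sketch, not the paper's formalism; the component names and the `Pattern` class are hypothetical.

```python
# Sketch (not from the paper): "boxes" (ML or KR components) connected by
# arrows form a design pattern; patterns compose by merging their graphs.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Box:
    name: str
    kind: str  # e.g. "ML" (statistical) or "KR" (symbolic)

@dataclass
class Pattern:
    boxes: list = field(default_factory=list)
    arrows: list = field(default_factory=list)  # (source, target) pairs

    def compose(self, other):
        """Compose two patterns by taking the union of boxes and arrows."""
        merged = Pattern(list(self.boxes), list(self.arrows))
        merged.boxes += [b for b in other.boxes if b not in merged.boxes]
        merged.arrows += other.arrows
        return merged

# A classic hybrid pattern: an ML component feeds symbols to a reasoner.
ml = Box("neural-classifier", "ML")
kr = Box("ontology-reasoner", "KR")
p = Pattern([ml, kr], [(ml, kr)])
print([b.name for b in p.boxes])  # ['neural-classifier', 'ontology-reasoner']
```

    Composition here is simply graph union, which is enough to express patterns such as "learn, then reason" chains.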

    Maybe Eventually? Towards Combining Temporal and Probabilistic Description Logics and Queries: Extended Version

    We present some initial results on ontology-based query answering with description logic ontologies that may employ temporal and probabilistic operators on concepts and axioms. Specifically, we consider description logics extended with operators from linear temporal logic (LTL), as well as subjective probability operators, and an extended query language in which conjunctive queries can be combined using these operators. We first show some complexity results for the setting in which either only temporal operators or only probabilistic operators may be used, both in the ontology and in the query, and then show a 2-ExpSpace lower bound for the setting in which both types of operators can be used together. This is an extended version of an article accepted at Description Logics 2019.
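    As an illustration of how such operators might be combined (notation approximated from standard LTL and probabilistic DL syntax, not taken verbatim from the paper), a query mixing a temporal and a subjective probability operator could look like:

```latex
\varphi \;=\; \Diamond\, P_{>0.5}\bigl( \mathrm{Patient}(x) \wedge \exists y.\, \mathrm{hasFinding}(x,y) \wedge \mathrm{Tumor}(y) \bigr)
```

    read as: at some future time point, the subjective probability that x is a patient with a tumor finding exceeds 0.5.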

    Recovering parasites from mummies and coprolites: An epidemiological approach

    In the field of archaeological parasitology, researchers have long documented the distribution of parasites in archaeological time and space through the analysis of coprolites and human remains. This area of research defined the origin and migration of parasites through presence/absence studies. By the end of the 20th century, the field of pathoecology had emerged as researchers developed an interest in the ancient ecology of parasite transmission. Supporting studies were conducted to establish the relationships between parasites and humans, including cultural, subsistence, and ecological reconstructions. Parasite prevalence data were collected to infer the impact of parasitism on human health. In the last few decades, a paleoepidemiological approach has emerged with a focus on applying statistical techniques for quantification. The application of eggs per gram (EPG) quantification methods provides data about parasite prevalence in ancient populations and also identifies the pathological potential that parasitism presented in different time periods and geographic places. Herein, we compare the methods used in several laboratories for reporting parasite prevalence and EPG quantification. We present newer quantification methods to explore patterns of parasite overdispersion among ancient people. These new methods will be able to produce more realistic measures of parasite infections among people of the past. These measures allow researchers to compare epidemiological patterns in both ancient and modern populations.
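    The core quantities mentioned above can be sketched in a few lines. This is a minimal illustrative computation, not the laboratories' protocols; the egg counts below are invented, and overdispersion is flagged with a simple variance-to-mean ratio rather than a fitted distribution.

```python
# Sketch: eggs-per-gram (EPG) quantification, prevalence, and a basic
# overdispersion check. Counts are hypothetical 1 g coprolite aliquots.

def epg(egg_count, sample_grams, dilution_factor=1.0):
    """EPG = eggs counted per gram examined, scaled by any dilution."""
    return egg_count * dilution_factor / sample_grams

counts = [0, 0, 2, 0, 150, 1, 0, 3, 0, 480]   # invented data
epgs = [epg(c, 1.0) for c in counts]

prevalence = sum(1 for c in counts if c > 0) / len(counts)
mean = sum(epgs) / len(epgs)
var = sum((x - mean) ** 2 for x in epgs) / (len(epgs) - 1)
vmr = var / mean  # >> 1 signals overdispersion: a few heavily infected hosts
print(f"prevalence={prevalence:.0%}, mean EPG={mean:.1f}, VMR={vmr:.1f}")
```

    A variance-to-mean ratio far above 1, as here, is the aggregation pattern the abstract refers to: most individuals carry few or no parasites while a minority carry heavy burdens.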

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and on its challenges, along with key priorities for the next decade.

    Privacy-Preserving Ontology Publishing: The Case of Quantified ABoxes w.r.t. a Static Cycle-Restricted EL TBox: Extended Version

    We review our recent work on how to compute optimal repairs, optimal compliant anonymizations, and optimal safe anonymizations of ABoxes containing possibly anonymized individuals. The results can be used both to remove erroneous consequences from a knowledge base and to hide secret information before publication of the knowledge base, while keeping as much as possible of the original information. Updated on August 27, 2021. This is an extended version of an article accepted at DL 2021.

    Atacama Cosmology Telescope: Weighing Distant Clusters with the Most Ancient Light

    We use gravitational lensing of the cosmic microwave background (CMB) to measure the mass of the most distant blindly selected sample of galaxy clusters on which a lensing measurement has been performed to date. In CMB data from the Atacama Cosmology Telescope and the Planck satellite, we detect the stacked lensing effect from 677 near-infrared-selected galaxy clusters from the Massive and Distant Clusters of WISE Survey (MaDCoWS), which have a mean redshift of ⟨z⟩ = 1.08. There are currently no representative optical weak lensing measurements of clusters that match the distance and average mass of this sample. We detect the lensing signal with a significance of 4.2σ. We model the signal with a halo model framework to find the mean mass of the population from which these clusters are drawn. Assuming that the clusters follow Navarro–Frenk–White (NFW) density profiles, we infer a mean mass of ⟨M_(500c)⟩ = (1.7±0.4)×10¹⁴ M⊙. We consider systematic uncertainties from cluster redshift errors, centering errors, and the shape of the NFW profile. These are all smaller than 30% of our reported uncertainty. This work highlights the potential of CMB lensing to enable cosmological constraints from the abundance of distant clusters populating ever larger volumes of the observable universe, beyond the capabilities of optical weak lensing measurements.
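    The NFW profile assumed in the mass inference has a closed-form enclosed mass, which can be normalized to the quoted mean mass. This is an illustrative sketch only: the concentration `c500` and radius `r500` below are invented placeholders, not values from the paper.

```python
# Sketch: normalize an NFW profile rho(x) = rho_0 / (x (1+x)^2), x = r/r_s,
# so that the mass enclosed at r500 matches the quoted mean M500c.
import math

def nfw_mass_enclosed(r, r_s, rho_0):
    """Mass inside radius r for an NFW density profile."""
    x = r / r_s
    return 4.0 * math.pi * rho_0 * r_s**3 * (math.log(1.0 + x) - x / (1.0 + x))

M500 = 1.7e14   # Msun, the mean mass quoted in the abstract
c500 = 4.0      # assumed concentration (hypothetical)
r500 = 0.8      # Mpc (hypothetical)
r_s = r500 / c500
mu = math.log(1.0 + c500) - c500 / (1.0 + c500)
rho_0 = M500 / (4.0 * math.pi * r_s**3 * mu)

# Consistency check: enclosed mass at r500 reproduces M500.
print(f"{nfw_mass_enclosed(r500, r_s, rho_0):.3e}")  # 1.700e+14
```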

    Description Logics Go Second-Order -- Extending EL with Universally Quantified Concepts

    The study of Description Logics has historically focused mostly on features that can be translated to decidable fragments of first-order logic. In this paper, we leave this restriction behind and look for useful and decidable extensions outside first-order logic. We introduce universally quantified concepts, which take the form of variables that can be replaced with arbitrary concepts, and define two semantics of this extension. A schema semantics allows replacements of concept variables only by concepts from a particular language, giving us axiom schemata similar to modal logics. A second-order semantics allows replacement of concept variables with arbitrary subsets of the domain, which is similar to quantified predicates in second-order logic. To study the proposed semantics, we focus on the extension of the description logic EL. We show that for a useful fragment of the extension, the conclusions entailed by the different semantics coincide, allowing us to use classical EL reasoning algorithms even for the second-order semantics. For a slightly smaller, but still useful, fragment, we were also able to show polynomial decidability of the extension. This fragment, in particular, can express a generalized form of role chain axioms, positive self restrictions, and some forms of (local) role-value-maps from KL-ONE, without requiring any additional constructors.
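    Under the schema semantics, an axiom with a concept variable stands for all of its instantiations by concrete concepts. The sketch below illustrates that idea with plain string substitution; the notation and the example schema are simplified inventions, not the paper's calculus.

```python
# Sketch: an axiom schema (lhs subsumed by rhs) with a concept variable X,
# instantiated by concrete EL concepts under a schema-style semantics.

def substitute(axiom, var, concept):
    """Replace every occurrence of a concept variable in a template axiom."""
    lhs, rhs = axiom
    return (lhs.replace(var, concept), rhs.replace(var, concept))

# A generalized role-chain-style schema: "exists r.X" subsumed by "exists s.X".
schema = ("exists r.X", "exists s.X")
for c in ["Human", "exists hasPart.Organ"]:
    print(substitute(schema, "X", c))
# first line: ('exists r.Human', 'exists s.Human')
```

    The second-order semantics instead lets X range over arbitrary subsets of the domain; the paper's coincidence result says that, on a suitable fragment, both views entail the same conclusions.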

    Privacy-Preserving Ontology Publishing for EL Instance Stores: Extended Version

    We make a first step towards adapting an existing approach for privacy-preserving publishing of linked data to Description Logic (DL) ontologies. We consider the case where both the knowledge about individuals and the privacy policies are expressed using concepts of the DL EL, which corresponds to the setting where the ontology is an EL instance store. We introduce the notions of compliance of a concept with a policy and of safety of a concept for a policy, and show how optimal compliant (safe) generalizations of a given EL concept can be computed. In addition, we investigate the complexity of the optimality problem.
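    The flavor of the compliance notion can be sketched for the special case of purely conjunctive concepts, where subsumption reduces to set inclusion of concept names. This is an illustrative simplification: real EL concepts include existential restrictions, which this toy check ignores, and the concept and policy names are invented.

```python
# Sketch: compliance of a conjunctive concept with a policy, reading
# "compliant" as "not subsumed by any policy concept".

def subsumed_by(concept, policy_concept):
    # For conjunctions of atomic concepts, C is subsumed by D iff
    # every conjunct of D already occurs in C.
    return policy_concept <= concept

def compliant(concept, policy):
    return not any(subsumed_by(concept, p) for p in policy)

patient = {"Patient", "HasDiagnosis"}
policy = [{"Patient", "HasDiagnosis"}]   # do not reveal diagnosed patients
print(compliant(patient, policy))        # False: the concept violates the policy
print(compliant({"Patient"}, policy))    # True: the generalization is compliant
```

    Dropping the `HasDiagnosis` conjunct is exactly the kind of generalization the paper's optimal-compliance computation searches for, except that it must also handle nested existential restrictions.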

    Rational Defeasible Belief Change

    We present a formal framework for modelling belief change within a non-monotonic reasoning system. Belief change and non-monotonic reasoning are two areas that are formally closely related, with recent attention being paid towards the analysis of belief change within a non-monotonic environment. In this paper we consider the classical AGM belief change operators, contraction and revision, applied to a defeasible setting in the style of Kraus, Lehmann, and Magidor. The investigation leads us to the formal characterisation of a number of classes of defeasible belief change operators. For the most interesting classes we need to consider the problem of iterated belief change, generalising the classical work of Darwiche and Pearl in the process. Our work involves belief change operators aimed at ensuring logical consistency, as well as the characterisation of analogous operators aimed at obtaining coherence, an important notion within the field of logic-based ontologies.
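    The contraction and revision operators mentioned above can be illustrated on a toy propositional setting. This is a deliberately naive syntactic sketch: real AGM operators act on deductively closed theories and must satisfy the AGM postulates, which this toy version does not fully do.

```python
# Sketch: expansion, contraction, and revision on a finite set of literals,
# with revision defined via the Levi identity.

def expand(beliefs, phi):
    return beliefs | {phi}

def contract(beliefs, phi):
    return beliefs - {phi}

def negate(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

def revise(beliefs, phi):
    # Levi identity: revise by phi = contract by not-phi, then expand by phi.
    return expand(contract(beliefs, negate(phi)), phi)

b = {"bird", "flies"}
print(revise(b, "~flies"))  # {'bird', '~flies'} (set order may vary)
```

    The defeasible setting of the paper replaces classical formulas with conditionals in the Kraus-Lehmann-Magidor style, but the contract-then-expand shape of revision carries over.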

    Graduality in Probabilistic Argumentation Frameworks

    Gradual semantics are methods that evaluate overall strengths of individual arguments in graphs. In this paper, we investigate gradual semantics for extended frameworks in which probabilities are used to quantify the uncertainty about arguments and attacks belonging to the graph. We define the likelihoods of an argument’s possible strengths when facing uncertainty about the topology of the argumentation framework. We also define an approach to compare the strengths of arguments in this probabilistic setting. Finally, we propose a method to calculate the overall strength of each argument in the framework, and we evaluate this method against a set of principles.