
    Completion-based computation of most specific concepts with limited role-depth for EL and Prob-EL⁰¹

    In Description Logics, the reasoning service most specific concept (msc) constructs a concept description that generalizes an ABox individual. For the Description Logic EL the msc need not exist if computed with respect to general EL-TBoxes or cyclic ABoxes. However, it is still possible to find a concept description that is the msc up to a fixed role-depth, i.e., with respect to a maximal nesting of quantifiers. In this report we present a practical approach for computing the role-depth bounded msc, based on the polynomial-time completion algorithm for EL. We extend these methods to Prob-EL⁰¹c, a probabilistic variant of EL. Together with the companion report [9], this report devises computation methods for the bottom-up construction of knowledge bases for EL and Prob-EL⁰¹c.
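
    A small worked example may help; the ABox and the depth bounds used here are illustrative assumptions, not taken from the report. For the cyclic ABox

        \mathcal{A} = \{\, A(a),\; r(a,a) \,\}

    the exact msc of a would have to capture the infinite unfolding of a along its own r-loop and therefore does not exist in EL, whereas the role-depth bounded msc simply truncates that unfolding at the chosen depth:

        \mathrm{msc}_{1}(a) \,=\, A \sqcap \exists r.A, \qquad
        \mathrm{msc}_{2}(a) \,=\, A \sqcap \exists r.(A \sqcap \exists r.A).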

    Completion-based computation of least common subsumers with limited role-depth for EL and Prob-EL⁰¹

    The least common subsumer (lcs) w.r.t. general EL-TBoxes need not exist, due to cyclic axioms. In this report we present an algorithm for computing the role-depth bounded EL-lcs, based on the completion algorithm for EL. We extend this computation algorithm to a recently introduced probabilistic variant of EL: Prob-EL⁰¹.
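
    For illustration (the TBox below is an invented toy example, not one from the report): w.r.t. the cyclic TBox

        \mathcal{T} = \{\, A \sqsubseteq B \sqcap \exists r.A,\;\; C \sqsubseteq B \sqcap \exists r.C \,\}

    no finite EL concept is the exact lcs of A and C, because any candidate would have to contain an unbounded nesting of existential restrictions; the role-depth bounded versions, in contrast, are finite:

        \mathrm{lcs}_{1}(A,C) \,=\, B \sqcap \exists r.B, \qquad
        \mathrm{lcs}_{2}(A,C) \,=\, B \sqcap \exists r.(B \sqcap \exists r.B).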

    Formal Concept Analysis Methods for Description Logics

    This work presents two main contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting the bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research concerns the computational complexity of computing generators of closed sets.

    Reasoning-Supported Quality Assurance for Knowledge Bases

    The increasing application of ontology reuse and automated knowledge acquisition tools in ontology engineering brings about a shift of development efforts from knowledge modeling towards quality assurance. Despite the high practical importance, there has been a substantial lack of support for ensuring semantic accuracy and conciseness. In this thesis, we take a significant step forward in ontology engineering by developing support for two such essential quality assurance activities.

    Constructing and Extending Description Logic Ontologies using Methods of Formal Concept Analysis

    Description Logic (abbrv. DL) belongs to the field of knowledge representation and reasoning. DL researchers have developed a large family of logic-based languages, so-called description logics (abbrv. DLs). These logics allow their users to explicitly represent knowledge as ontologies, which are finite sets of (human- and machine-readable) axioms, and provide them with automated inference services to derive implicit knowledge. The landscape of decidability and computational complexity of common reasoning tasks for various description logics has been explored in large parts: there is always a trade-off between expressibility and reasoning costs. It is therefore not surprising that DLs are nowadays applied in a large variety of domains: agriculture, astronomy, biology, defense, education, energy management, geography, geoscience, medicine, oceanography, and oil and gas. Furthermore, the most notable success of DLs is that they constitute the logical underpinning of the Web Ontology Language (abbrv. OWL) in the Semantic Web.

    Formal Concept Analysis (abbrv. FCA) is a subfield of lattice theory that makes it possible to analyze data-sets that can be represented as formal contexts. Put simply, such a formal context binds a set of objects to a set of attributes by specifying which objects have which attributes. There are two major techniques that can be applied in various ways for purposes of conceptual clustering, data mining, machine learning, knowledge management, knowledge visualization, etc. On the one hand, it is possible to describe the hierarchical structure of such a data-set in the form of a formal concept lattice. On the other hand, the theory of implications (dependencies between attributes) valid in a given formal context can be axiomatized in a sound and complete manner by the so-called canonical base, which furthermore contains a minimal number of implications w.r.t. the properties of soundness and completeness.

    In spite of the different notions used in FCA and in DLs, there has been a very fruitful interaction between these two research areas. My thesis continues this line of research and, more specifically, I will describe how methods from FCA can be used to support the automatic construction and extension of DL ontologies from data.
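
    The derivation operators underlying this machinery can be sketched in a few lines of code. The following Python snippet uses a toy formal context invented purely for illustration (it is not data from the thesis); it computes the two derivation operators and uses them to check whether an attribute implication holds in the context:

        # Toy formal context: each object is mapped to the set of attributes it has.
        CONTEXT = {
            "sparrow": {"bird", "flies"},
            "penguin": {"bird"},
            "bat":     {"mammal", "flies"},
        }
        ALL_ATTRIBUTES = set().union(*CONTEXT.values())

        def intent(objects):
            """Attributes shared by every given object (derivation of an object set)."""
            sets = [CONTEXT[o] for o in objects]
            return set.intersection(*sets) if sets else set(ALL_ATTRIBUTES)

        def extent(attributes):
            """Objects that have all of the given attributes (derivation of an attribute set)."""
            return {o for o, attrs in CONTEXT.items() if set(attributes) <= attrs}

        def implication_holds(premise, conclusion):
            """premise -> conclusion holds iff every object with the premise also has the conclusion."""
            return set(conclusion) <= intent(extent(premise))

        print(implication_holds({"bird"}, {"flies"}))    # False: "penguin" is a counterexample
        print(implication_holds({"mammal"}, {"flies"}))  # True in this toy context

    The canonical base mentioned above is, roughly, a smallest set of such implications from which every implication that holds in the context follows.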

    Learning General Concept Inclusions in Probabilistic Description Logics

    Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as the results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilised to reduce the construction of a base of general concept inclusions (GCIs) of a probabilistic interpretation to the crisp case, for which a method for the axiomatisation of a base of GCIs is well known.
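
    As a toy illustration (the numbers and the probability constructor P_{≥p} are our own exemplification of the kind of probabilistic logic involved, not material from the report): consider the probabilistic interpretation

        \mathcal{I} \,=\, \big(\{\mathcal{I}_1, \mathcal{I}_2\}, \mu\big), \qquad \mu(\mathcal{I}_1) = 0.7,\ \mu(\mathcal{I}_2) = 0.3,
        A^{\mathcal{I}_1} = A^{\mathcal{I}_2} = \{d\}, \qquad B^{\mathcal{I}_1} = \{d\},\ B^{\mathcal{I}_2} = \emptyset.

    Then

        A \sqsubseteq B \quad\text{(fails: } \mathcal{I}_2 \text{ is a counterexample)}, \qquad A \sqsubseteq P_{\geq 0.7}\,B \quad\text{(holds)},

    since d belongs to B in worlds of total measure 0.7; a base of GCIs for such an interpretation has to account for exactly these probabilistic inclusions.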

    A Machine Learning Approach for Optimizing Heuristic Decision-making in OWL Reasoners

    Description Logics (DLs) are formalisms for representing knowledge bases of application domains. The Web Ontology Language (OWL) is a syntactic variant of a very expressive description logic. OWL reasoners can infer implied information from OWL ontologies. The performance of OWL reasoners can be severely affected by situations that require decision-making over many alternatives. Such non-deterministic behavior is often controlled by heuristics that are based on insufficient information. This thesis proposes a novel OWL reasoning approach that applies machine learning (ML) to implement pragmatic and optimal decision-making strategies in such situations. Disjunctions occurring in ontologies are one source of non-deterministic actions in reasoners. We propose two ML-based approaches to reduce the non-determinism caused by dealing with disjunctions. The first approach is restricted to propositional description logic, while the second one can deal with standard description logic. The first approach builds a logistic regression classifier that chooses a proper branching heuristic for an input ontology. Branching heuristics were originally developed to help propositional satisfiability (SAT) solvers decide which branch to pick at each branching level. The second approach is a further development of the first: an SVM (Support Vector Machine) classifier is designed to select an appropriate expansion-ordering heuristic for an input ontology. The built-in heuristics are designed for the expansion ordering of satisfiability testing in OWL reasoners; they determine the order of branches in search trees. Both of the above approaches speed up our ML-based reasoner by up to two orders of magnitude in comparison to the non-ML reasoner. Another source of non-deterministic actions is the order in which tableau rules should be applied. On average, our ML-based approach, an SVM classifier, achieves a speedup of two orders of magnitude when compared to the most expensive rule ordering of the non-ML reasoner.
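
    A minimal sketch, in the spirit of the second approach, of how such a classifier could be set up; the ontology metrics, heuristic labels, and training values below are hypothetical placeholders, not the features or heuristics actually used in the thesis:

        # Hypothetical example: predict which expansion-ordering heuristic to enable
        # for an input ontology, based on simple ontology metrics.
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Each row: [number of axioms, fraction of axioms with disjunctions, average concept size]
        X_train = [
            [1200,  0.05, 3.1],
            [45000, 0.40, 7.8],
            [800,   0.30, 2.4],
            [90000, 0.10, 5.5],
        ]
        # Label: the heuristic that performed best on that ontology in earlier timing runs.
        y_train = ["depth-first", "widest-or-branch-first", "widest-or-branch-first", "depth-first"]

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(X_train, y_train)

        # At reasoning time, compute the same metrics for the new ontology and
        # let the classifier pick the heuristic before satisfiability testing starts.
        print(model.predict([[30000, 0.35, 6.0]]))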

    Standard and Non-standard reasoning in Description Logics

    The present work deals with Description Logics (DLs), a class of knowledge representation formalisms used to represent and reason about classes of individuals and relations between such classes in a formally well-defined way. We provide novel results in three main directions.

    (1) Tractable reasoning revisited: in the 1990s, DL research had largely answered the question of whether practically relevant yet tractable DL formalisms exist in the negative. Due to novel application domains, especially the Life Sciences, and a surprising tractability result by Baader, we have revisited this question, this time looking in a new direction: general terminologies (TBoxes), and extensions thereof, defined over the DL EL and its extensions. As the main positive result, we devise EL++(D)-CBoxes as a tractable DL formalism with optimal expressivity, in the sense that any additional standard DL constructor, extension of the TBox formalism, or more powerful concrete domain makes reasoning intractable.

    (2) Non-standard inferences for knowledge maintenance: non-standard inferences, such as matching, can support domain experts in maintaining DL knowledge bases in a structured and well-defined way. In order to extend their availability and promote their use, the present work extends the state of the art of non-standard inferences both w.r.t. theory and implementation. Our main results are implementations and performance evaluations of known matching algorithms for the DLs ALE and ALN, optimal non-deterministic polynomial-time algorithms for matching under acyclic side conditions in ALN and its sublanguages, and optimal algorithms for matching w.r.t. cyclic (and hybrid) EL-TBoxes.

    (3) Non-standard inferences over general concept inclusion (GCI) axioms: the utility of GCIs in modern DL knowledge bases and the relevance of non-standard inferences to knowledge maintenance naturally motivate the question of a tractable DL formalism in which both can be provided. As the main result, we propose hybrid EL-TBoxes as a solution to this hitherto open question.
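
    To make the notion of matching concrete, here is a schematic example with concept names chosen by us (it is not taken from the thesis). A matching problem asks for a substitution of the concept variables occurring in a pattern that makes the pattern equivalent to a given concept, e.g.

        C \,=\, A \sqcap \exists r.(B \sqcap \exists s.A), \qquad D \,=\, A \sqcap \exists r.X,

    where X is a concept variable; the substitution

        \sigma(X) \,=\, B \sqcap \exists s.A \quad\text{solves}\quad C \equiv^{?} D.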

    Probabilistic description logics for subjective uncertainty

    We propose a family of probabilistic description logics (DLs) that are derived in a principled way from Halpern's probabilistic first-order logic. The resulting probabilistic DLs have a two-dimensional semantics similar to temporal DLs and are well-suited for representing subjective probabilities. We carry out a detailed study of reasoning in the new family of logics, concentrating on probabilistic extensions of the DLs ALC and EL, and showing that the complexity ranges from PTime via ExpTime and 2ExpTime to undecidable.
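
    As an illustration of the intended subjective reading (the concept below is our own example in the style of such logics, not one from the paper), a probabilistic concept such as

        \exists \mathsf{hasSymptom}.\mathsf{Fever} \;\sqcap\; P_{\geq 0.8}\,\mathsf{Infected}

    describes patients who have a fever and are infected with subjective probability at least 0.8: the existential restriction is evaluated inside the current world, while the probability constructor aggregates over all worlds of the two-dimensional model according to its probability measure.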