
    Predicting the understandability of OWL inferences

    In this paper, we describe a method for predicting the understandability level of OWL inferences. Specifically, we present a model for measuring the understandability of a multiple-step inference based on measurements of the understandability of the individual inference steps. We also present an evaluation study which confirms that our model works relatively well for two-step OWL inferences. This model has been applied in our research on generating accessible explanations for entailments of OWL ontologies, to determine the most understandable inference among alternatives, from which the final explanation is generated.
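The abstract does not give the combination rule, but the idea of deriving a multi-step score from per-step scores can be sketched as follows. The product rule below is purely an assumption for illustration (a chain is followed only if every step is), not the paper's actual model.

```python
# Hypothetical sketch: predict the understandability of a multi-step
# inference from per-step scores. The product combination rule is an
# illustrative assumption, not the paper's exact model.

def inference_understandability(step_scores):
    """Each score in [0, 1] estimates how likely a reader is to follow
    one inference step; the chain succeeds only if every step does,
    so the scores are multiplied."""
    result = 1.0
    for s in step_scores:
        if not 0.0 <= s <= 1.0:
            raise ValueError("scores must lie in [0, 1]")
        result *= s
    return result
```

Under this rule, a two-step inference with step scores 0.9 and 0.8 would receive an overall score of 0.72, so adding steps can only lower the predicted understandability.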

    Parallelizing Description Logic Reasoning

    Description Logic has become one of the primary knowledge representation and reasoning methodologies over the last twenty years, and many areas benefit from description-logic-based technologies. Description logic reasoning algorithms, and a number of optimization techniques for them, play an important role and have been intensively researched. However, few of them have been systematically investigated in a concurrency context, despite the growing availability of multi-processor hardware. Meanwhile, the Semantic Web, an application domain of description logic, is producing vast amounts of knowledge data on the Internet, which must be handled by scalable solutions. This situation requires description logic reasoners to offer reasoning scalability. This research introduces concurrent computing in two areas: classification, and tableau-based description logic reasoning. Classification is a core description logic reasoning service. Over more than two decades, many research efforts have been devoted to optimizing classification, and those optimization algorithms have proven effective for sequential processing. However, as concurrent computing becomes widely available, new classification algorithms that are well suited to parallelization need to be developed. This need is further supported by the observation that most available OWL reasoners, which are usually based on tableau reasoning, can utilize only a single processor. Such an inadequacy often frustrates ontology developers, especially when their ontologies are complex and require long processing times. The classification service computes all subsumption relationships between named concepts entailed by a knowledge base. Each subsumption test involves two concepts and is independent of the others; at most n^2 subsumption tests are needed for a knowledge base containing n concepts.
As the first contribution of this research, we developed an algorithm and a corresponding architecture showing that reasoning scalability can be gained by using concurrent computing. Further, this research investigated how concurrent computing can increase the performance of tableau-based description logic reasoning algorithms. Tableau-based description logic reasoning decides a problem by constructing an AND-OR tree. Prior work has shown the effectiveness of parallelizing the processing of disjunction branches of a tableau expansion tree; our research shows how reasoning scalability can also be gained by processing conjunction branches in parallel. In addition, this research developed an algorithm, merge classification, that uses a divide-and-conquer strategy to parallelize classification. This method applies concurrent computing to the more efficient classification algorithm, top-search & bottom-search, which has been adopted as a standard procedure for classification. Reasoning scalability can be observed in a number of real-world cases by using this algorithm.
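Because the up-to-n^2 subsumption tests are mutually independent, the basic parallelization opportunity described above can be sketched with a worker pool. The `subsumes` function here is a stand-in placeholder (a lookup in a toy hierarchy), where a real reasoner would run a tableau-based satisfiability test; nothing below is the thesis's actual architecture.

```python
# Minimal sketch of parallel classification: every ordered pair of
# named concepts yields an independent subsumption test, so the tests
# can be distributed over a pool of workers.
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations

def subsumes(sub, sup, hierarchy):
    # Placeholder: a real reasoner would run a (possibly expensive)
    # tableau-based subsumption test here.
    return sup in hierarchy.get(sub, ())

def classify(concepts, hierarchy, workers=4):
    pairs = list(permutations(concepts, 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda p: subsumes(p[0], p[1], hierarchy), pairs)
    # Keep the pairs whose subsumption test succeeded.
    return {p for p, holds in zip(pairs, results) if holds}
```

Optimized classification algorithms such as top-search & bottom-search avoid enumerating all n^2 pairs; the sketch only illustrates why the individual tests parallelize so naturally.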

    Context Reasoning for Role-Based Models

    In the modern world, software systems are literally everywhere, and they must cope with very complex scenarios, including context-awareness and self-adaptability. The concept of roles provides the means to model such complex, context-dependent systems. In role-based systems, the relational and context-dependent properties of objects are transferred to the roles that an object plays in a certain context. However, even if the domain can be expressed in a well-structured and modular way, role-based models can still be hard to comprehend due to the sophisticated semantics of roles, contexts, and various constraints; unintended implications or inconsistencies may be overlooked. A feasible logical formalism is required here. In this setting, Description Logics (DLs) fit very well as a starting point for further considerations since, as decidable fragments of first-order logic, they have both an underlying formal semantics and decidable reasoning problems. DLs are a well-understood family of knowledge representation formalisms that allow application domains to be represented in a well-structured way by DL-concepts, i.e. unary predicates, and DL-roles, i.e. binary predicates. However, classical DLs lack the expressive power to formalise contextual knowledge, which is crucial for formalising role-based systems. We investigate a novel family of contextualised description logics that is capable of expressing contextual knowledge and preserves decidability even in the presence of rigid DL-roles, i.e. relational structures that are context-independent. For these contextualised description logics, we thoroughly analyse the complexity of the consistency problem. Furthermore, we present a mapping algorithm that allows for an automated translation from a formal role-based model, namely a Compartment Role Object Model (CROM), into a contextualised DL ontology.
We prove the semantic correctness of this translation and provide ideas for how features extending CROM can be expressed in our contextualised DLs. As a final step towards a completely automated analysis of role-based models, we investigate a practical reasoning algorithm and implement the first reasoner that can process contextual ontologies.
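The flavour of a contextualised axiom can be illustrated schematically. The notation below is only suggestive and is not the thesis's exact syntax: a context concept qualifies the axioms that hold while that context applies, whereas a rigid DL-role keeps the same extension in every context.

```latex
% Suggestive notation only: [[C]] phi states that axiom phi holds in
% every context satisfying the context concept C.
\[
  [\![\,\mathsf{Bank}\,]\!]\;\bigl(\mathsf{Person} \sqsubseteq \exists \mathsf{plays}.\mathsf{Customer}\bigr)
\]
% A rigid DL-role, by contrast, is interpreted identically across all
% contexts; preserving decidability in the presence of such roles is
% the distinguishing feature of the investigated logics.
```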

    Ontology-based transformation of natural language queries into SPARQL queries by evolutionary algorithms

    This thesis presents an ontology-driven evolutionary learning system for natural language querying of RDF graphs. The learning system itself does not answer the query, but generates a SPARQL query against the database. For this purpose, the Evolutionary Dataflow Agents framework is introduced: a general learning framework that, based on evolutionary algorithms, creates agents that learn to solve a problem. The main idea of the framework is to support problems that combine a medium-sized search space (use case: analysis of natural language queries) of strictly and formally structured solutions (use case: synthesis of database queries) with rather local, classical structural and algorithmic aspects. For this, the agents combine the local algorithmic functionality of nodes with a flexible dataflow between the nodes into a global problem-solving process. Roughly, there are nodes that generate informational fragments by combining input data and/or earlier fragments, often using heuristics-based guessing. Other nodes combine, collect, and reduce such fragments towards possible solutions, narrowing these down to the final solution. For this, informational items flow through the agents. The configuration of these agents, which nodes they combine, and where exactly the data items flow, is the subject of learning. The training starts with simple agents which, as usual in learning frameworks, solve a set of tasks and are evaluated on them. Since the produced answers usually have complex structures, the framework employs a novel fine-grained, energy-based evaluation and selection step. The selected agents then form the basis for the population of the next round. Evolution is provided, as usual, by mutations and agent fusion.
As a use case, EvolNLQ has been implemented, a system for answering natural language queries against RDF databases. For this, the underlying ontology metadata is (externally) algorithmically preprocessed. For the agents, appropriate data item types and node types are defined that break down the processes of language analysis and query synthesis into more or less elementary operations. The "size" of the operations is determined by the border between computations, i.e. purely algorithmic steps (implemented in individual powerful nodes) and simple heuristic steps (also realized by simple nodes), and free dataflow, which allows for arbitrary chaining and branching configurations of the agents. EvolNLQ is compared with several other approaches and shows competitive results.
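The train-evaluate-select-mutate cycle described above is the standard evolutionary loop. The sketch below shows that generic loop only; the agent representation, the fitness function, and the mutation operator are placeholders, and EvolNLQ's fine-grained energy-based selection and agent fusion are not modelled here.

```python
# Hedged sketch of a generic evolutionary loop: score a population,
# keep the fittest fraction, and refill the next generation by
# mutating survivors. Not EvolNLQ's actual agents or selection step.
import random

def evolve(population, fitness, mutate, generations=10, keep=0.5, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: max(1, int(len(ranked) * keep))]
        # Refill the population by mutating randomly chosen survivors.
        children = [mutate(rng.choice(survivors), rng)
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)
```

For example, with individuals that are plain numbers, `fitness = lambda x: -(x - 3) ** 2` and `mutate = lambda x, rng: x + rng.uniform(-1, 1)`, repeated generations drive the best individual towards 3.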

    A Hypertableau Calculus for SHIQ

    We present a novel reasoning calculus for the Description Logic SHIQ. In order to reduce the nondeterminism due to general inclusion axioms, we base our calculus on hypertableau and hyperresolution calculi, which we extend with a blocking condition to ensure termination. To prevent the calculus from generating large models, we introduce "anywhere" pairwise blocking. Our preliminary implementation shows significant performance improvements on several well-known ontologies. To the best of our knowledge, our reasoner is currently the only one that can classify the original version of the GALEN terminology.
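The pairwise blocking condition mentioned above can be given as a simplified, didactic check: a node is blocked by a node occurring anywhere in the model under construction when both the node-predecessor label pairs and the connecting edge labels coincide. The real calculus additionally constrains which nodes may block (e.g. via node-creation order and blocked-predecessor status); the sketch below omits those details and is not the paper's implementation.

```python
# Didactic approximation of "anywhere" pairwise blocking: v is blocked
# by w when v and w carry equal concept labels, their predecessors
# carry equal labels, and the connecting edges carry equal labels.

def pairwise_blocked(v, w, label, parent, edge_label):
    pv, pw = parent[v], parent[w]
    if pv is None or pw is None:
        return False  # root nodes have no predecessor pair to compare
    return (label[v] == label[w]
            and label[pv] == label[pw]
            and edge_label[(pv, v)] == edge_label[(pw, w)])
```

Because the pair (label of node, label of predecessor, edge label) ranges over a finite set, every sufficiently long path must repeat such a pair, which is what makes the condition a termination guarantee.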