63 research outputs found
Towards ensuring Satisfiability of Merged Ontology
The last decade has seen researchers develop efficient algorithms for mapping and merging ontologies to meet the demands of interoperability between heterogeneous, distributed information systems. Still, state-of-the-art ontology mapping and merging systems are semi-automatic: they reduce the burden of manually creating and maintaining mappings, but require human intervention to validate them. The contribution presented in this paper pushes human intervention one step further back by automatically identifying semantic inconsistencies in the early stages of ontology merging. Our methodology detects inconsistencies arising from structural mismatches: conflicts among the set of Generalized Concept Inclusions, and conflicts among Disjoint Relations caused by differences between disjoint partitions in the local heterogeneous ontologies. We present novel methodologies to detect and repair semantic inconsistencies in the list of initial mappings. The result is a global merged ontology free from 'circulatory error in class/property hierarchy', 'common class/instance between disjoint classes', 'redundancy of subclass/subproperty relations', 'redundancy of disjoint relations', and other kinds of semantic inconsistency. In this way, our methodology saves the time and cost of traversing local ontologies to validate mappings, improves performance by producing only consistent, accurate mappings, and reduces reliance on the user to ensure the satisfiability and consistency of the merged ontology. Experiments show that the new approach with automatic inconsistency detection yields significantly higher precision.
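As a concrete illustration of one of the error classes named above, the 'circulatory error in class hierarchy' amounts to a cycle in the subclass graph of the merged ontology. A minimal, hypothetical sketch of such a check, assuming the merged hierarchy is given as a mapping from each class to its direct superclasses (the function and data shapes are illustrative, not the paper's actual method):

```python
def find_subclass_cycle(subclass_of):
    """Return one cycle (list of class names ending where it starts) in the
    subclass hierarchy, or None if the hierarchy is acyclic.
    subclass_of maps each class name to a list of its direct superclasses."""
    visiting, done = set(), set()

    def dfs(cls, path):
        visiting.add(cls)
        path.append(cls)
        for sup in subclass_of.get(cls, ()):
            if sup in visiting:               # back edge: a subclass cycle
                return path[path.index(sup):] + [sup]
            if sup not in done:
                found = dfs(sup, path)
                if found:
                    return found
        visiting.discard(cls)
        done.add(cls)
        path.pop()
        return None

    for cls in list(subclass_of):
        if cls not in done:
            cycle = dfs(cls, [])
            if cycle:
                return cycle
    return None
```

A mapping that (directly or transitively) makes two classes subclasses of each other would be flagged by such a check before the merge is finalized.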
Ontology Evaluation
Ontology evaluation is the task of measuring the quality of an ontology. It enables us to answer the main question: how can the quality of an ontology for the Web be assessed? This thesis presents a theoretical framework and several methods that breathe life into it. The application of the framework to practical scenarios is explored, and its theoretical foundations are thoroughly grounded in the practical usage of the emerging Semantic Web.
Capture and Maintenance of Constraints in Engineering Design
The thesis investigates two domains: initially the kite domain and then part of a more demanding Rolls-Royce domain (jet engine design). Four main types of refinement rules are proposed that use the associated application conditions and domain ontology to support the maintenance of constraints. The refinement rules have been implemented in ConEditor, and the extended system is known as ConEditor+. With the help of ConEditor+, the thesis demonstrates that an explicit representation of application conditions, together with the corresponding constraints and the domain ontology, can be used to detect inconsistencies, redundancy, subsumption, and fusion between pairs of constraints, reduce the number of spurious inconsistencies, and prevent inappropriate refinements of redundancy, subsumption, and fusion.
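The redundancy and subsumption checks described above can be illustrated with a deliberately simplified model, assuming each design constraint is a numeric bound on a parameter guarded by an application condition (a set of context tags). This representation is hypothetical, not ConEditor+'s actual one:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    parameter: str         # e.g. "blade_length_mm" (illustrative name)
    low: float             # inclusive lower bound
    high: float            # inclusive upper bound
    condition: frozenset   # application condition: tags that must all hold

def is_subsumed(a, b):
    """True if constraint a is redundant in the presence of b: wherever a
    applies, b also applies (b's condition is weaker, i.e. a subset of a's)
    and b's interval is at least as strict, so satisfying b satisfies a."""
    return (a.parameter == b.parameter
            and b.condition <= a.condition       # b applies whenever a does
            and a.low <= b.low and b.high <= a.high)  # b's bound is tighter

def redundant_constraints(constraints):
    """Constraints made redundant by some other constraint in the set."""
    return [a for a in constraints
            if any(b is not a and is_subsumed(a, b) for b in constraints)]
```

The same pairwise comparison, run whenever a constraint or its application condition is edited, is the kind of maintenance support the refinement rules automate.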
Minimizing conservativity violations in ontology alignments: algorithms and evaluation
In order to enable interoperability between ontology-based systems, ontology matching techniques have been proposed. However, when the generated mappings lead to undesired logical consequences, their usefulness may be diminished. In this paper, we present an approach to detect and minimize violations of the so-called conservativity principle, under which novel subsumption entailments between named concepts in one of the input ontologies are considered unwanted. The practical applicability of the proposed approach is demonstrated experimentally on datasets from the Ontology Alignment Evaluation Initiative.
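A rough sketch of what a conservativity check can look like, assuming entailment is approximated by the transitive closure of told subclass axioms (a much weaker notion than the description-logic reasoning the paper relies on; all names and data shapes are illustrative):

```python
from itertools import product

def closure(axioms):
    """Transitive closure of a set of subclass pairs (sub, sup)."""
    reach = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(reach), list(reach)):
            if b == c and (a, d) not in reach:
                reach.add((a, d))     # a ⊑ b and b ⊑ d entail a ⊑ d
                changed = True
    return reach

def conservativity_violations(input_axioms, alignment_axioms, input_signature):
    """Subsumptions between concepts of one input ontology that hold in the
    aligned (merged) ontology but not in the input ontology alone."""
    input_cl = closure(set(input_axioms))
    merged_cl = closure(set(input_axioms) | set(alignment_axioms))
    return {(a, b) for (a, b) in merged_cl - input_cl
            if a in input_signature and b in input_signature and a != b}
```

Each violation returned is a candidate mapping to weaken or remove, which is the minimization step the approach automates.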
Automatic extraction of facts, relations, and entities for web-scale knowledge base population
Equipping machines with knowledge, through the construction of machine-readable knowledge bases, is a key asset for semantic search, machine translation, question answering, and other formidable challenges in artificial intelligence. However, human knowledge predominantly resides in books and other natural-language text, so knowledge bases must be extracted and synthesized from natural-language text. When the source of text is the Web, extraction methods must cope with ambiguity, noise, scale, and updates. The goal of this dissertation is to develop knowledge-base population methods that address these characteristics of Web text. The dissertation makes three contributions. The first is a method for mining high-quality facts at scale, through distributed constraint reasoning and a pattern-representation model that is robust against noisy patterns. The second is a method for mining a large, comprehensive collection of relation types beyond those commonly found in existing knowledge bases. The third is a method for extracting facts from dynamic Web sources such as news articles and social media, where one of the key challenges is the constant emergence of new entities. All methods have been evaluated through experiments on Web-scale text collections.
Reasoning-Supported Quality Assurance for Knowledge Bases
The increasing application of ontology reuse and automated knowledge-acquisition tools in ontology engineering brings about a shift of development effort from knowledge modeling towards quality assurance. Despite its high practical importance, support for ensuring semantic accuracy and conciseness has been substantially lacking. In this thesis, we take a significant step forward in ontology engineering by developing support for two such essential quality-assurance activities.