DeepOnto: A Python Package for Ontology Engineering with Deep Learning
Applying deep learning techniques, particularly language models (LMs), in
ontology engineering has raised widespread attention. However, deep learning
frameworks like PyTorch and TensorFlow are predominantly developed for Python
programming, while widely-used ontology APIs, such as the OWL API and Jena, are
primarily Java-based. To facilitate seamless integration of these frameworks
and APIs, we present DeepOnto, a Python package designed for ontology
engineering. The package encompasses a core ontology processing module founded
on the widely-recognised and reliable OWL API, encapsulating its fundamental
features in a more "Pythonic" manner and extending its capabilities with
other essential components, including reasoning, verbalisation, normalisation,
projection, and more. Building on this module, DeepOnto offers a suite of
tools, resources, and algorithms that support various ontology engineering
tasks, such as ontology alignment and completion, by harnessing deep learning
methodologies, primarily pre-trained LMs. In this paper, we also demonstrate
the practical utility of DeepOnto through two use cases: the Digital Health
Coaching in Samsung Research UK and the Bio-ML track of the Ontology Alignment
Evaluation Initiative (OAEI). (Under review at the Semantic Web Journal.)
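
As a rough illustration of the "Pythonic" ontology access described above, the sketch below loads an ontology and lists its classes. The import path follows DeepOnto's documentation, but the file name is a placeholder and the attribute names should be checked against the current DeepOnto API.

```python
# A minimal sketch of Pythonic ontology access via DeepOnto
# (pip install deeponto; a JDK is needed for the underlying OWL API).
# "example.owl" is a placeholder; verify attribute names against the
# current DeepOnto documentation.
from deeponto.onto import Ontology

onto = Ontology("example.owl")  # loads through the Java-based OWL API

# Iterate over the named classes exposed by the wrapper.
for iri in onto.owl_classes:
    print(iri)
```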
Contextualized Structural Self-supervised Learning for Ontology Matching
Ontology matching (OM) entails the identification of semantic relationships
between concepts within two or more knowledge graphs (KGs) and serves as a
critical step in integrating KGs from various sources. Recent advancements in
deep OM models have harnessed the power of transformer-based language models
and the advantages of knowledge graph embedding. Nevertheless, these OM models
still face persistent challenges, such as a lack of reference alignments,
runtime latency, and the underexplored use of different graph structures
within an end-to-end framework. In this study, we introduce LaKERMap, a novel
self-supervised learning OM framework that operates directly on the input
ontologies. The framework capitalizes on
the contextual and structural information of concepts by integrating implicit
knowledge into transformers. Specifically, we aim to capture multiple
structural contexts, encompassing both local and global interactions, by
employing distinct training objectives. To assess our methods, we utilize the
Bio-ML datasets and tasks. The findings from our innovative approach reveal
that LaKERMap surpasses state-of-the-art systems in terms of alignment quality
and inference time. Our models and code are available at:
https://github.com/ellenzhuwang/lakermap
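
The abstract does not spell out LaKERMap's training objectives, but the transformer-based matching step it builds on can be sketched generically: embed concept labels with a pre-trained LM and rank candidate pairs by cosine similarity. This illustrates the general technique, not LaKERMap's code; the model name and toy labels are arbitrary choices.

```python
# Generic transformer-based concept matching (an illustration of the
# technique, not LaKERMap itself). Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary small LM

source_labels = ["myocardial infarction", "renal failure"]    # toy concepts
target_labels = ["heart attack", "kidney failure", "asthma"]

src_emb = model.encode(source_labels, convert_to_tensor=True)
tgt_emb = model.encode(target_labels, convert_to_tensor=True)

# Cosine-similarity matrix: rows are source concepts, columns are targets.
scores = util.cos_sim(src_emb, tgt_emb)
for i, label in enumerate(source_labels):
    j = int(scores[i].argmax())
    print(f"{label} -> {target_labels[j]} ({scores[i][j].item():.2f})")
```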
VersaMatch: ontology matching with weak supervision
Ontology matching is crucial to data integration for cross-silo data sharing and has mainly been addressed with heuristic and machine learning (ML) methods. While heuristic methods are often inflexible and hard to extend to new domains, ML methods rely on substantial, hard-to-obtain amounts of labeled training data. To overcome these limitations, we propose VersaMatch, a flexible, weakly supervised ontology matching system. VersaMatch employs various weak supervision sources, such as heuristic rules, pattern matching, and external knowledge bases, to produce labels from a large amount of unlabeled data for training a discriminative ML model. For prediction, VersaMatch develops a novel ensemble model combining the weak supervision sources with the discriminative model to support generalization while retaining high precision. Our ensemble method boosts end-model performance by 4 points compared to a traditional weak-supervision baseline. In addition, compared to state-of-the-art ontology matchers, VersaMatch achieves an overall 4-point improvement in F1 score across 26 ontology combinations from different domains. For recently released, in-the-wild datasets, VersaMatch beats the next best matchers by 9 points in F1. Furthermore, its core weak-supervision logic can easily be improved by adding more knowledge sources and collecting more unlabeled data for training.
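
To make the weak-supervision idea concrete, here is a toy sketch in the spirit of the approach (not VersaMatch's actual implementation): several noisy labeling heuristics vote on unlabeled concept pairs, and their combined vote provides training labels for a downstream discriminative model.

```python
# Toy weak-supervision sketch (illustrative; not VersaMatch's code).
# Noisy heuristics label concept pairs; a simple vote combines them.
ABSTAIN, MATCH, NO_MATCH = -1, 1, 0

def lf_exact_string(a, b):  # heuristic rule: exact label match
    return MATCH if a.lower() == b.lower() else ABSTAIN

def lf_token_overlap(a, b):  # pattern matching on token overlap
    ta, tb = set(a.lower().split()), set(b.lower().split())
    jaccard = len(ta & tb) / max(len(ta | tb), 1)
    if jaccard > 0.5:
        return MATCH
    return NO_MATCH if jaccard == 0.0 else ABSTAIN

def lf_known_synonyms(a, b, kb={("heart attack", "myocardial infarction")}):
    # external knowledge base of synonym pairs (toy example)
    return MATCH if (a.lower(), b.lower()) in kb else ABSTAIN

def weak_label(pair, lfs=(lf_exact_string, lf_token_overlap, lf_known_synonyms)):
    votes = [lf(*pair) for lf in lfs]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN  # no source fired; drop the pair from training
    return MATCH if sum(votes) >= len(votes) / 2 else NO_MATCH

print(weak_label(("Heart Attack", "myocardial infarction")))  # -> 1
```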
Truveta Mapper: A Zero-shot Ontology Alignment Framework
In this paper, a new perspective is suggested for unsupervised Ontology
Matching (OM) or Ontology Alignment (OA) by treating it as a translation task.
Ontologies are represented as graphs, and the translation is performed from a
node in the source ontology graph to a path in the target ontology graph. The
proposed framework, Truveta Mapper (TM), leverages a multi-task
sequence-to-sequence transformer model to perform alignment across multiple
ontologies in a zero-shot, unified and end-to-end manner. Multi-tasking enables
the model to implicitly learn the relationship between different ontologies via
transfer learning, without requiring any manually labeled cross-ontology
data. This also enables the proposed framework to outperform existing
solutions for both runtime latency and alignment quality. The model is
pre-trained and fine-tuned only on a publicly available text corpus and data
internal to the ontologies themselves. The proposed solution outperforms
state-of-the-art approaches such as Edit-Similarity, LogMap, AML, and BERTMap,
as well as the new OM frameworks presented in the Ontology Alignment
Evaluation Initiative 2022 (OAEI22); it offers log-linear complexity, in
contrast to the quadratic complexity of existing end-to-end methods, and
overall makes the OM task efficient and more straightforward, without much
post-processing involving mapping extension or mapping repair.
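
The node-to-path translation framing can be illustrated with standard seq2seq tooling; the sketch below only shows the interface, with an off-the-shelf, untuned checkpoint standing in for Truveta Mapper's model (its actual checkpoints and input/output formats are not described in the abstract).

```python
# Framing OM as translation: source-node text in, target-ontology path out.
# Uses an untuned t5-small as a stand-in; without Truveta Mapper's
# pre-training and fine-tuning, the generated path is not meaningful.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Hypothetical output format: "disorder / cardiac disorder / myocardial infarction"
inputs = tok("match source concept: heart attack", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```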
Results of the Ontology Alignment Evaluation Initiative 2023
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities. The OAEI 2023 campaign offered 15 tracks and was attended by 16 participants. This paper is an overall presentation of that campaign.
LogMap Family Participation in the OAEI 2023
We present the participation of LogMap and its variants in the OAEI 2023 campaign. The LogMap project started in January 2011 with the objective of developing a scalable and logic-based ontology matching system.
The British Geological Survey Rock Classification Scheme, its representation as linked data, and a comparison with some other lithology vocabularies
Controlled vocabularies are critical to constructing FAIR (findable, accessible, interoperable, reusable) data. One of the most widely required, yet complex, vocabularies in earth science is for rock and sediment type, or ‘lithology’. Since 1999 the British Geological Survey (BGS) has used its own Rock Classification Scheme (RCS) in many of its workflows and products, including the national digital geological map. This scheme pre-dates others that have been published, and is deeply embedded in BGS’ processes. By publishing this classification scheme now as a Simple Knowledge Organisation System (SKOS) machine-readable informal ontology, we make it available for ourselves and third parties to use in modern semantic applications, and we open the future possibility of using the tools SKOS provides to align our scheme with other published schemes. These include the IUGS-CGI Simple Lithology Scheme, the European Commission INSPIRE Lithology Code List, the Queensland Geological Survey Lithotype Scheme, the USGS Lithologic Classification of Geologic Map Units, and Mindat.org. The BGS lithology classification was initially based on four narrative reports that can be downloaded from the BGS website, although it has subsequently been extended. The classification is almost entirely mono-hierarchical in nature and includes 3454 currently valid concepts in a classification 11 levels deep. It includes igneous rocks and sediments, metamorphic rocks, sediments and sedimentary rocks, and superficial deposits including anthropogenic deposits. The SKOS informal ontology built on it is stored in a triplestore, and the triples are updated nightly by extraction from a relational database where the ontology is maintained. Bulk downloads and version history are available on GitHub. The RCS concepts themselves are used in other BGS linked data, namely the Lexicon of Named Rock Units and the linked data representation of the 1:625 000 scale geological map of the UK. Comparing the RCS with the other published lithology schemes, all are broadly similar but show characteristics that reveal the interests and requirements of the groups that developed them, in terms of their level of detail both overall and in constituent parts. It should be possible to align the RCS with the other classifications; future work will focus on automated mechanisms to do this, and possibly on constructing a formal ontology for the RCS.
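
For readers unfamiliar with SKOS, the snippet below shows how a published scheme like the RCS can be consumed programmatically with rdflib; the file name is a placeholder, since the abstract does not give download details.

```python
# Walking a SKOS vocabulary such as the RCS with rdflib (pip install rdflib).
# "rcs.ttl" is a placeholder for a local copy of the published scheme.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("rcs.ttl", format="turtle")

# Print each concept's preferred label with its narrower (child) concepts,
# following the scheme's mono-hierarchy downwards.
for concept, label in g.subject_objects(SKOS.prefLabel):
    for narrower in g.objects(concept, SKOS.narrower):
        print(f"{label} > {g.value(narrower, SKOS.prefLabel)}")
```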
Uncertainty in Automated Ontology Matching: Lessons Learned from an Empirical Experimentation
Data integration is considered a classic research field and a pressing need
within the information science community. Ontologies play a critical role in
such a process by providing well-consolidated support to link and semantically
integrate datasets via interoperability. This paper approaches data integration
from an application perspective, looking at techniques based on ontology
matching. An ontology-based process can only be considered adequate if the
different sources of information are matched manually. However, since the
approach becomes unrealistic once the system scales up, automation of the
matching process becomes a compelling need. Therefore, we have conducted
experiments on actual data with the support of existing tools for automatic
ontology matching from the scientific community. Even considering a relatively
simple case study (i.e., the spatio-temporal alignment of global indicators),
outcomes clearly show significant uncertainty resulting from errors and
inaccuracies throughout the automated matching process. More concretely, this paper
aims to test on real-world data a bottom-up knowledge-building approach,
discuss the lessons learned from the experimental results of the case study,
and draw conclusions about uncertainty and uncertainty management in an
automated ontology matching process. While the most common evaluation metrics
clearly demonstrate the unreliability of fully automated matching solutions,
properly designed semi-supervised approaches seem mature enough for more
generalized application.
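
Since the study's conclusions rest on the most common evaluation metrics, a compact reminder of how alignments are scored against a reference may help; the mapping sets below are toy data.

```python
# Standard OM evaluation: precision, recall, and F1 of a system alignment
# against a reference alignment, each a set of (source, target) pairs (toy data).
system = {("a1", "b1"), ("a2", "b3"), ("a4", "b4")}
reference = {("a1", "b1"), ("a2", "b2"), ("a4", "b4")}

tp = len(system & reference)   # correctly found mappings
precision = tp / len(system)   # fraction of found mappings that are correct
recall = tp / len(reference)   # fraction of reference mappings that were found
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")  # P=0.67 R=0.67 F1=0.67
```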
A Data-driven Approach to Large Knowledge Graph Matching
In the last decade, a remarkable number of open Knowledge Graphs (KGs) have been developed, such as DBpedia, NELL, and YAGO. While some of these KGs are curated via crowdsourcing platforms, others are semi-automatically constructed. This has resulted in a significant degree of semantic heterogeneity and overlapping facts. KGs are highly complementary; thus, mapping them can benefit intelligent applications that require integrating different KGs, such as recommendation systems, query answering, and semantic web navigation.
Although the problem of ontology matching has been investigated and a significant number of systems have been developed, the challenges of mapping large-scale KGs remain significant. KG matching has been a topic of interest in the Semantic Web community since it was introduced to the Ontology Alignment Evaluation Initiative (OAEI) in 2018. Nonetheless, a major limitation of the current benchmarks is their lack of representation of real-world KGs. This work also highlights a number of limitations of current matching methods: (i) they are highly dependent on string-based similarity measures, and (ii) they are primarily built to handle well-formed ontologies. These features make them unsuitable for large, (semi/fully) automatically constructed KGs with hundreds of classes and millions of instances. Another limitation of current work is the lack of benchmark datasets that represent the challenging task of matching real-world KGs.
This work addresses the limitations of the current datasets by first introducing two gold standard datasets for matching the schema of large, automatically constructed, less-well-structured KGs based on common KGs such as NELL, DBpedia, and Wikidata. We believe that the datasets we make public in this work constitute the largest domain-independent benchmarks for matching KG classes. As many state-of-the-art methods are not suitable for matching large-scale, cross-domain KGs that often suffer from highly imbalanced class distributions, recent studies have revisited instance-based matching techniques for this task. This is because such large KGs often lack a well-defined structure and descriptive metadata about their classes, but contain numerous class instances. Therefore, inspired by the role of instances in KGs, we propose a hybrid matching approach. Our method combines an instance-based matcher, which casts the schema-matching process as a text classification task by exploiting instances of KG classes, with a string-based matcher. Our method is domain-independent and is able to handle KG classes with imbalanced populations. Further, we show that incorporating an instance-based approach with an appropriate data balancing strategy yields significant improvements in matching large and common KG classes.
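
A minimal version of the instance-based idea reads as follows: train a text classifier on instances of one KG's classes, classify the other KG's instances, and align each class to the class its instances most often receive. The scikit-learn components are real; the KG data is toy.

```python
# Instance-based schema matching cast as text classification (simplified
# sketch of the general idea; the instance data is toy).
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Instances of KG1's classes serve as labeled training text.
kg1_instances = ["paris france capital", "berlin germany city",
                 "albert einstein physicist", "marie curie chemist"]
kg1_classes = ["City", "City", "Person", "Person"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(kg1_instances, kg1_classes)

# Align each KG2 class to the KG1 class its instances are most often
# assigned to by the classifier.
kg2 = {"Settlement": ["london uk city", "madrid spain capital"],
       "Scientist": ["isaac newton physicist"]}
for kg2_class, instances in kg2.items():
    votes = Counter(clf.predict(instances))
    print(kg2_class, "->", votes.most_common(1)[0][0])
```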
A Simple Standard for Sharing Ontological Mappings (SSSOM)
Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM), which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec
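
To show how the "simple table-based format" plays out in practice, here is a tiny SSSOM-style table read with pandas; the column names follow the SSSOM specification's core slots, but the mapping rows are invented.

```python
# Reading a minimal SSSOM mapping table with pandas. Column names follow
# the SSSOM spec's core slots; the mapping rows themselves are invented.
import io
import pandas as pd

sssom_tsv = (
    "subject_id\tpredicate_id\tobject_id\tmapping_justification\tconfidence\n"
    "MONDO:0005267\tskos:exactMatch\tDOID:114\tsemapv:ManualMappingCuration\t0.99\n"
    "MONDO:0004979\tskos:closeMatch\tDOID:2841\tsemapv:LexicalMatching\t0.80\n"
)

df = pd.read_csv(io.StringIO(sssom_tsv), sep="\t")

# Downstream pipelines can filter on mapping semantics directly, e.g.
# keeping only precise matches for high-stakes use cases.
exact = df[df["predicate_id"] == "skos:exactMatch"]
print(exact[["subject_id", "object_id", "confidence"]])
```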