Framework for Enhanced Ontology Alignment using BERT-Based
This framework combines several approaches to improve ontology alignment, pairing data mining methods with BERT. Data mining techniques identify the optimal characteristics for selecting the instance attributes used to match ontologies, and the framework was designed to improve the precision and recall of current ontology matching techniques. Since the beginnings of knowledge integration, ontology alignment has relied mainly on syntactic and structural matching. This article presents a new approach that employs data mining and BERT embeddings to produce broader, contextually aware ontology alignments. The proposed system exploits BERT's contextual representations and semantic understanding together with feature extraction and pattern recognition from data mining. The objective is to combine data-driven insights with the advantages of semantic representation to improve the accuracy and efficiency of the ontology alignment process. An evaluation on annotated datasets, compared against traditional approaches, demonstrates that the proposed framework is effective and adaptable across several domains.
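The embedding-based matching step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: `embed` is a hypothetical stand-in (character-trigram hashing) for a real BERT sentence encoder, and `align` is an assumed greedy matcher over cosine similarity of label embeddings.

```python
import numpy as np

def embed(label, dim=64):
    # Stand-in for a BERT sentence embedding: hash character trigrams
    # into a fixed-size unit vector. In the paper's setting this would
    # be the pooled output of a BERT encoder applied to the label.
    v = np.zeros(dim)
    text = f"  {label.lower()}  "
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

def align(source_labels, target_labels, threshold=0.8):
    """Greedy alignment: each source label is matched to the target label
    with the highest cosine similarity, if it exceeds the threshold."""
    matches = []
    for s in source_labels:
        best, best_sim = None, threshold
        for t in target_labels:
            sim = float(embed(s) @ embed(t))
            if sim > best_sim:
                best, best_sim = t, sim
        if best is not None:
            matches.append((s, best, round(best_sim, 3)))
    return matches
```

With a real encoder, labels that share no surface characters but have related meanings (e.g. "Author" and "Writer") would also score highly, which is the contextual advantage the abstract claims.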
Dealing with uncertain entities in ontology alignment using rough sets
This is the author's accepted manuscript. The final published article is available from the link below. Copyright @ 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

Ontology alignment facilitates the exchange of knowledge among heterogeneous data sources. Many approaches to ontology alignment use multiple similarity measures to map entities between ontologies. However, dealing with uncertain entities, for which the employed similarity measures produce conflicting results on the mapped entities, remains a key challenge. This paper presents OARS, a rough-set based approach to ontology alignment that achieves a high degree of accuracy in situations where uncertainty arises from the conflicting results generated by different similarity measures. OARS employs a combinational approach, considering both lexical and structural similarity measures. OARS is extensively evaluated on the benchmark ontologies of the Ontology Alignment Evaluation Initiative (OAEI) 2010; it performs best in recall in comparison with a number of alignment systems while achieving comparable precision.
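The rough-set intuition behind handling conflicting measures can be illustrated with lower and upper approximations of the candidate matches. This is a minimal sketch of the general idea under an assumed simple thresholding scheme, not OARS's actual decision rules:

```python
def rough_alignment(candidates, measures, threshold=0.7):
    """Rough-set view of conflicting similarity measures.
    A candidate pair is in the lower approximation (certain match) if
    every measure scores it at or above the threshold, and in the upper
    approximation (possible match) if at least one measure does; the
    difference is the uncertain boundary region, which is where the
    measures conflict and further evidence is needed."""
    lower, upper = set(), set()
    for pair in candidates:
        votes = [m(*pair) >= threshold for m in measures]
        if any(votes):
            upper.add(pair)
        if all(votes):
            lower.add(pair)
    boundary = upper - lower
    return lower, boundary
```

Pairs in the boundary region are exactly the "uncertain entities" of the abstract: a lexical measure and a structural measure disagree, so neither a confident accept nor a confident reject is warranted.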
Comparison of ontology alignment systems across a single matching task via McNemar's test
Ontology alignment is widely used to find the correspondences between different ontologies in diverse fields. Once the alignments are discovered, several performance scores are available to evaluate them. These scores typically require the identified alignment and a reference containing the actual underlying correspondences of the given ontologies. The current trend in alignment evaluation is to put forward a new score (e.g., precision, weighted precision, etc.) and to compare various alignments by juxtaposing the obtained scores. However, selecting one measure among others for comparison is substantially contentious. On top of that, the claim that one system performs better than another cannot be substantiated solely by comparing two scalars. In this paper, we propose statistical procedures that make it possible to favor one system over another on theoretical grounds. McNemar's test is the statistical means by which two ontology alignment systems are compared over one matching task. The test applies to a 2x2 contingency table, which can be constructed in two different ways from the alignments, each with its own merits and pitfalls. The ways of constructing the contingency table and various apposite statistics from McNemar's test are elaborated in detail. When more than two alignment systems are compared, the family-wise error rate is expected to grow, so ways of preventing such error are also discussed. A directed graph visualizes the outcome of McNemar's test in the presence of multiple alignment systems. From this graph, it is readily seen whether one system is better than another or whether their differences are imperceptible. The proposed statistical methodologies are applied to the systems that participated in the OAEI 2016 anatomy track, and several well-known similarity metrics are also compared on the same matching problem.
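The statistic itself is computed from the discordant cells of the 2x2 table: the correspondences one system judged correctly and the other did not. A minimal sketch, with the standard continuity-corrected chi-squared statistic and the exact binomial alternative for small counts (variable names are mine, not the paper's):

```python
from math import comb

def mcnemar(n10, n01):
    """McNemar's test on the discordant cells of a 2x2 paired table.
    n10: cases system A got right and system B got wrong; n01: the reverse.
    Returns (chi-squared statistic with continuity correction, exact
    two-sided binomial p-value). Under the null hypothesis that the two
    systems perform equally, each discordant case is equally likely to
    favor either system (a fair coin), hence the binomial with p = 1/2."""
    n = n10 + n01
    if n == 0:
        return 0.0, 1.0
    chi2 = (abs(n10 - n01) - 1) ** 2 / n
    k = min(n10, n01)
    p_exact = min(1.0, 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n)
    return chi2, p_exact
```

The exact p-value is generally preferred when the discordant count n10 + n01 is small, where the chi-squared approximation is unreliable.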
A Large Scale Dataset for the Evaluation of Ontology Matching Systems
Recently, the number of ontology matching techniques and systems has increased significantly. This makes the issue of their evaluation and comparison more severe. One of the challenges of ontology matching evaluation is building large-scale evaluation datasets. In fact, the number of possible correspondences between two ontologies grows quadratically with respect to the numbers of entities in these ontologies. This often makes the manual construction of evaluation datasets demanding to the point of being infeasible for large-scale matching tasks. In this paper we present an ontology matching evaluation dataset composed of thousands of matching tasks, called TaxME2. It was built semi-automatically out of the Google, Yahoo and Looksmart web directories. We evaluated TaxME2 by exploiting the results of almost two dozen state-of-the-art ontology matching systems. The experiments indicate that the dataset possesses the desired key properties, namely it is error-free, incremental, discriminative, monotonic, and hard for the state-of-the-art ontology matching systems. The paper has been accepted for publication in "The Knowledge Engineering Review", Cambridge University Press (ISSN: 0269-8889, EISSN: 1469-8005).
Dividing the Ontology Alignment Task with Semantic Embeddings and Logic-based Modules
Large ontologies still pose serious challenges to state-of-the-art ontology alignment systems. In this paper we present an approach that combines a neural embedding model and logic-based modules to accurately divide an input ontology matching task into smaller and more tractable matching (sub)tasks. We have conducted a comprehensive evaluation using the datasets of the Ontology Alignment Evaluation Initiative. The results are encouraging and suggest that the proposed method is adequate in practice and can be integrated within the workflow of systems unable to cope with very large ontologies.
Evaluating the semantic web: a task-based approach
The increased availability of online knowledge has led to the design of several algorithms that solve a variety of tasks by harvesting the Semantic Web, i.e. by dynamically selecting and exploring a multitude of online ontologies. Our hypothesis is that the performance of such novel algorithms implicitly provides an insight into the quality of the used ontologies and thus opens the way to a task-based evaluation of the Semantic Web. We have investigated this hypothesis by studying the lessons learnt about online ontologies when used to solve three tasks: ontology matching, folksonomy enrichment, and word sense disambiguation. Our analysis leads to a suite of conclusions about the status of the Semantic Web, which highlight a number of strengths and weaknesses of the semantic information available online and complement the findings of other analyses of the Semantic Web landscape.
SANA NetGO: A combinatorial approach to using Gene Ontology (GO) terms to score network alignments
Gene Ontology (GO) terms are frequently used to score alignments between
protein-protein interaction (PPI) networks. Methods exist to measure the GO
similarity between two proteins in isolation, but pairs of proteins in a
network alignment are not isolated: each pairing is implicitly dependent upon
every other pairing via the alignment itself. Current methods fail to take into
account the frequency of GO terms across the networks, and attempt to account
for common GO terms in an ad hoc fashion by imposing arbitrary rules on when to
"allow" GO terms based on their location in the GO hierarchy, rather than using
readily available frequency information in the PPI networks themselves. Here we
develop a new measure, NetGO, that naturally weighs infrequent, informative GO
terms more heavily than frequent, less informative GO terms, without requiring
arbitrary cutoffs. In particular, NetGO down-weights the score of frequent GO
terms according to their frequency in the networks being aligned. This is a
global measure applicable only to alignments, independent of pairwise GO
measures, in the same sense that the edge-based EC or S3 scores are global
measures of topological similarity independent of pairwise topological
similarities. We demonstrate the superiority of NetGO by creating alignments of
predetermined quality based on homologous pairs of nodes and show that NetGO
correlates with alignment quality much better than any existing GO-based
alignment measures. We also demonstrate that NetGO provides a measure of
taxonomic similarity between species, consistent with existing taxonomic
measures, a feature not shared by existing GO-based network alignment
measures. Finally, we re-score alignments produced by almost a dozen aligners
from a previous study and show that NetGO does a better job than existing
measures at separating good alignments from bad ones.
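The frequency-weighting idea can be sketched as follows. This is a plausible instantiation of down-weighting by network-wide GO-term frequency, assumed for illustration, not NetGO's published formula:

```python
from collections import Counter

def netgo_like_score(alignment, go1, go2):
    """Frequency-weighted GO agreement for a network alignment.
    alignment: dict mapping nodes of network 1 to nodes of network 2;
    go1 / go2: dict mapping each node to its set of GO terms.
    A GO term g shared by an aligned pair contributes 1/freq(g), where
    freq(g) counts annotations of g across both networks, so rare
    (informative) terms dominate and ubiquitous terms contribute little.
    The total is normalized by the number of aligned pairs."""
    freq = Counter(g for ann in (go1, go2)
                     for terms in ann.values()
                     for g in terms)
    score = sum(1.0 / freq[g]
                for u, v in alignment.items()
                for g in go1.get(u, set()) & go2.get(v, set()))
    return score / max(len(alignment), 1)
```

No cutoff on the GO hierarchy is needed: a term annotating nearly every protein simply carries a weight close to zero, which is the "no arbitrary rules" property the abstract emphasizes.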
Introducing fuzzy trust for managing belief conflict over semantic web data
Interpreting Semantic Web data by different human experts can lead to scenarios in which each expert arrives at different and conflicting ideas of what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web face similar scenarios, where interpretations of the Semantic Web data describing heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarities have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents, based on the fuzzy voting model.
Fair Evaluation of Global Network Aligners
Biological network alignment identifies topologically and functionally
conserved regions between networks of different species. It encompasses two
algorithmic steps: node cost function (NCF), which measures similarities
between nodes in different networks, and alignment strategy (AS), which uses
these similarities to rapidly identify high-scoring alignments. Different
methods use both different NCFs and different ASs. Thus, it is unclear whether
the superiority of a method comes from its NCF, its AS, or both. We already
showed on MI-GRAAL and IsoRankN that combining NCF of one method and AS of
another method can lead to a new superior method. Here, we evaluate MI-GRAAL
against newer GHOST to potentially further improve alignment quality. Also, we
approach several important questions that have not been asked systematically
thus far. First, we ask how much of the node similarity information in NCF
should come from sequence data compared to topology data. Existing methods
determine this more or less arbitrarily, which could affect the resulting
alignment(s). Second, when topology is used in NCF, we ask how large the size
of the neighborhoods of the compared nodes should be. Existing methods assume
that larger neighborhood sizes are better.
We find that MI-GRAAL's NCF is superior to GHOST's NCF, while the performance
of the methods' ASs is data-dependent. Thus, the combination of MI-GRAAL's NCF
and GHOST's AS could be a new superior method for certain data. Also, which
amount of sequence information is used within NCF does not affect alignment
quality, while the inclusion of topological information is crucial. Finally,
larger neighborhood sizes are preferred, but often, it is the second largest
size that is superior, and using this size would decrease computational
complexity.
Together, our results give several general recommendations for a fair
evaluation of network alignment methods.

Comment: 19 pages, 10 figures. Presented at the 2014 ISMB Conference, July 13-15, Boston, MA.
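The sequence-versus-topology question above concerns the weighting inside the NCF, which in most aligners takes the form of a convex combination. A minimal sketch (the parameter name `alpha` is mine, not from the study; the reported finding is that alignment quality is largely insensitive to this weight, as long as topology contributes at all):

```python
def node_cost(seq_sim, topo_sim, alpha=0.5):
    """Node cost function (NCF) as a convex combination of sequence
    similarity and topological similarity between a pair of nodes.
    alpha is the more or less arbitrary sequence weight that existing
    methods fix by convention."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return alpha * seq_sim + (1.0 - alpha) * topo_sim
```

The alignment strategy (AS) then searches for a node mapping that maximizes the total of these pairwise costs, which is why swapping the NCF of one method with the AS of another is a well-defined experiment.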