Ontology selection: ontology evaluation on the real Semantic Web
The increasing number of ontologies on the Web and the appearance of large scale ontology repositories has brought the topic of ontology selection in the focus of the semantic web research agenda. Our view is that ontology evaluation is core to ontology selection and that, because ontology selection is performed in an open Web environment, it brings new challenges to ontology evaluation.
Unfortunately, current research regards ontology selection and evaluation as two separate topics. Our goal in this paper is to explore how these two tasks relate. In particular, we are interested in gaining a better understanding of the ontology selection task and identifying the challenges it brings to ontology evaluation. We discuss the requirements posed by the open Web environment on ontology selection, overview existing work on selection, and point out future directions. Our major conclusion is that, even if selection methods still need further development, they have already brought novel approaches to ontology evaluation.
A Large Scale Dataset for the Evaluation of Ontology Matching Systems
Recently, the number of ontology matching techniques and systems has increased significantly. This makes the issue of their evaluation and comparison more pressing. One of the challenges of ontology matching evaluation is building large-scale evaluation datasets. In fact, the number of possible correspondences between two ontologies grows quadratically with the number of entities in these ontologies. This often makes the manual construction of evaluation datasets demanding to the point of being infeasible for large-scale matching tasks. In this paper we present an ontology matching evaluation dataset composed of thousands of matching tasks, called TaxME2. It was built semi-automatically out of the Google, Yahoo and Looksmart web directories. We evaluated TaxME2 by exploiting the results of almost two dozen state-of-the-art ontology matching systems. The experiments indicate that the dataset possesses the desired key properties, namely it is error-free, incremental, discriminative, monotonic, and hard for state-of-the-art ontology matching systems. The paper has been accepted for publication in "The Knowledge Engineering Review", Cambridge University Press (ISSN: 0269-8889, EISSN: 1469-8005).
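The quadratic growth mentioned above can be made concrete with a minimal sketch (not from the paper): treating every entity pair across two ontologies as a candidate correspondence, the candidate count is the product of the two entity counts, so doubling both ontologies quadruples the manual-evaluation workload.

```python
def candidate_correspondences(n_entities_a: int, n_entities_b: int) -> int:
    """Count all possible entity-to-entity pairs between two ontologies.

    Each pair is a candidate correspondence a human would have to check
    when building an evaluation dataset by hand.
    """
    return n_entities_a * n_entities_b

# Doubling both ontologies quadruples the number of pairs to inspect.
small = candidate_correspondences(100, 100)   # 10,000 candidate pairs
large = candidate_correspondences(200, 200)   # 40,000 candidate pairs
assert large == 4 * small
```

This is why semi-automatic construction (as done for TaxME2) becomes necessary at scale.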
Data driven ontology evaluation
The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge, and its use allows a measure to be derived of the 'fit' between an ontology and a domain of knowledge. We consider a number of methods for measuring this 'fit', and propose a measure to evaluate structural fit and a probabilistic approach to identifying the best ontology.
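One simple way to picture a corpus-driven 'fit' measure is lexical coverage: the fraction of an ontology's concept labels that actually occur in a domain corpus. This is only an illustrative sketch with hypothetical names; the paper's actual measures (structural fit, a probabilistic ranking) are richer than this.

```python
def lexical_fit(concept_labels, corpus_tokens):
    """Fraction of ontology concept labels that appear in the corpus.

    A crude, illustrative proxy for how well an ontology 'fits' a domain:
    1.0 means every label is attested in the corpus, 0.0 means none are.
    """
    corpus = {token.lower() for token in corpus_tokens}
    if not concept_labels:
        return 0.0
    hits = sum(1 for label in concept_labels if label.lower() in corpus)
    return hits / len(concept_labels)

# 'gene' and 'protein' occur in the corpus; 'invoice' does not.
ontology = ["gene", "protein", "invoice"]
corpus = ["Gene", "expression", "protein", "sequence"]
print(round(lexical_fit(ontology, corpus), 2))  # 0.67
```

A higher score would suggest the ontology's vocabulary matches the domain; ranking several candidate ontologies by such a score is one data-driven selection strategy.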
The SEALS Yardsticks for Ontology Management
This paper describes the first SEALS evaluation campaign over ontology engineering tools (i.e., the SEALS Yardsticks for Ontology Management). It presents the different evaluation scenarios defined to evaluate the conformance, interoperability and scalability of these tools, and the test data used in these scenarios.
Results of the ontology alignment evaluation initiative 2019
The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity (from simple thesauri to expressive OWL ontologies) and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2019 campaign offered 11 tracks with 29 test cases, and was attended by 20 participants. This paper is an overall presentation of that campaign.
Dividing the Ontology Alignment Task with Semantic Embeddings and Logic-based Modules
Large ontologies still pose serious challenges to state-of-the-art ontology alignment systems. In this paper we present an approach that combines a neural embedding model and logic-based modules to accurately divide an input ontology matching task into smaller and more tractable matching (sub)tasks. We have conducted a comprehensive evaluation using the datasets of the Ontology Alignment Evaluation Initiative. The results are encouraging and suggest that the proposed method is adequate in practice and can be integrated into the workflow of systems unable to cope with very large ontologies.
Pruning-based identification of domain ontologies
We present a novel approach to extracting a domain ontology from large-scale thesauri. Concepts are identified as relevant for a domain based on their frequent occurrence in domain texts. The approach allows the ontology engineering process to be bootstrapped from given legacy thesauri and identifies an initial domain ontology that may easily be refined by experts at a later stage. We present a thorough evaluation of the results obtained in building a biosecurity ontology for the UN FAO AOS project.
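The frequency-based pruning idea described above can be sketched in a few lines. This is a hedged illustration with hypothetical names, not the paper's actual method: keep only those thesaurus concepts whose labels occur at least a minimum number of times in the domain corpus.

```python
from collections import Counter

def prune_thesaurus(concepts, domain_texts, min_count=2):
    """Keep thesaurus concepts that occur at least `min_count` times
    in the domain texts; discard the rest as out-of-domain.

    Illustrative only: a real system would also match multi-word labels
    and normalize morphology, not just split on whitespace.
    """
    counts = Counter(
        token.lower() for text in domain_texts for token in text.split()
    )
    return [c for c in concepts if counts[c.lower()] >= min_count]

thesaurus = ["pathogen", "quarantine", "opera"]
texts = [
    "pathogen outbreak quarantine",
    "quarantine of pathogen carriers",
]
print(prune_thesaurus(thesaurus, texts))  # ['pathogen', 'quarantine']
```

The surviving concepts form an initial domain ontology that experts can then refine, as the abstract describes.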