
    Some Ideas and Examples to Evaluate Ontologies

    The lack of methods for evaluating ontologies in laboratories can be an obstacle to their use in companies. This paper presents a set of emerging ideas on the evaluation of ontologies, useful for: (1) ontology developers in the lab, as a foundation from which to perform technical evaluations; (2) end users of ontologies in companies, as a point of departure in the search for the best ontology for their systems; and (3) future research, as a basis upon which to perform progressive and disciplined investigations in this area. After briefly exploring some general questions, such as why, what, when, how and where to evaluate, who evaluates, and what to evaluate against, we focus on the definition of a set of criteria useful in the evaluation process. Finally, we use some of these criteria in the evaluation of the Bibliographic-Data [5] ontology.
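
    As a toy illustration of criteria-driven checking (the class hierarchy and the two criteria below are invented for this sketch, not taken from the paper), one can test a small subclass hierarchy for circular definitions and for parent classes that are never defined:

        # Minimal sketch of two structural evaluation criteria on a toy class
        # hierarchy: circular subclass chains and references to undefined classes.
        # Class names and criteria are illustrative, not the paper's checklist.
        subclass_of = {                      # child -> parent assertions
            "Book": "Document",
            "Article": "Document",
            "Document": "Thing",
            "Proceedings": "PublishedWork",  # "PublishedWork" is never defined
        }
        defined = set(subclass_of) | {"Thing"}

        def has_cycle(cls, seen=()):
            """Follow subclass links from cls and report a circularity error."""
            if cls in seen:
                return True
            parent = subclass_of.get(cls)
            return parent is not None and has_cycle(parent, seen + (cls,))

        undefined = {p for p in subclass_of.values() if p not in defined}
        cycles = [c for c in subclass_of if has_cycle(c)]
        print("parents never defined:", undefined)   # -> {'PublishedWork'}
        print("circular classes:", cycles)           # -> []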

    Unsupervised Terminological Ontology Learning based on Hierarchical Topic Modeling

    In this paper, we present hierarchical relation-based latent Dirichlet allocation (hrLDA), a data-driven hierarchical topic model for extracting terminological ontologies from a large number of heterogeneous documents. In contrast to traditional topic models, hrLDA relies on noun phrases instead of unigrams, considers syntax and document structure, and enriches topic hierarchies with topic relations. Through a series of experiments, we demonstrate the superiority of hrLDA over existing topic models, especially for building hierarchies. Furthermore, we illustrate the robustness of hrLDA on noisy data sets, which are likely to occur in many practical scenarios. Our ontology evaluation results show that ontologies extracted with hrLDA are very competitive with ontologies created by domain experts.
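
    The snippet below only illustrates the "phrases instead of unigrams" idea with off-the-shelf components (a naive noun-phrase stand-in plus scikit-learn's flat LDA); it is not hrLDA, which additionally models hierarchy, syntax and topic relations. Documents and phrase extraction are made up for the sketch.

        # Illustrative only: plain LDA over crude noun-phrase tokens. hrLDA itself
        # builds topic hierarchies and relations on top of this and is not
        # reproduced here.
        import re
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "The knowledge base stores ontology axioms and class hierarchies.",
            "Topic models extract latent topics from large document collections.",
            "Class hierarchies and object properties define the ontology schema.",
        ]

        def noun_phrases(text):
            # Naive stand-in for a real NP chunker: lowercase two-word sequences.
            words = re.findall(r"[a-z]+", text.lower())
            return [" ".join(p) for p in zip(words, words[1:])]

        vec = CountVectorizer(tokenizer=noun_phrases, token_pattern=None, lowercase=False)
        X = vec.fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
        terms = vec.get_feature_names_out()
        for k, row in enumerate(lda.components_):
            print(f"topic {k}:", [terms[i] for i in row.argsort()[-3:]])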

    MultiFarm: A benchmark for multilingual ontology matching

    In this paper we present the MultiFarm dataset, which has been designed as a benchmark for multilingual ontology matching. The MultiFarm dataset is composed of a set of ontologies translated into different languages and the corresponding alignments between these ontologies. It is based on the OntoFarm dataset, which has been used successfully for several years in the Ontology Alignment Evaluation Initiative (OAEI). By translating the ontologies of the OntoFarm dataset into eight different languages (Chinese, Czech, Dutch, French, German, Portuguese, Russian, and Spanish), we created a comprehensive set of realistic test cases. Based on these test cases, it is possible to evaluate and compare the performance of matching approaches with a special focus on multilingualism.
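
    A benchmark like this is typically used by scoring a system's output alignment against the reference alignment. The sketch below shows the standard precision/recall/F-measure computation over correspondence sets; the correspondences themselves are invented for illustration and are not MultiFarm data.

        # Scoring a produced alignment against a reference alignment, as done when
        # evaluating matchers on benchmarks such as MultiFarm. The correspondences
        # (source entity, target entity, relation) are invented for illustration.
        reference = {("o1:Paper", "o2:Artikel", "="), ("o1:Author", "o2:Autor", "=")}
        produced  = {("o1:Paper", "o2:Artikel", "="), ("o1:Review", "o2:Autor", "=")}

        tp = len(produced & reference)
        precision = tp / len(produced)                 # 0.5
        recall    = tp / len(reference)                # 0.5
        f_measure = 2 * precision * recall / (precision + recall)
        print(precision, recall, f_measure)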

    Extracting ontologies from software documentation: a semi-automatic method and its evaluation

    Rich and generic ontologies about web service functionalities are a prerequisite for performing complex reasoning tasks with web service descriptions. However, their acquisition is time-consuming and constrained by the small number of web services available in certain domains. As a solution, we describe a semi-automatic method to extract such ontologies from software documentation, motivated by the observation that web services reflect the functionality of their underlying implementation. Further, we report on fine-tuning the extraction process by using a multi-stage evaluation method.
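
    One common way to bootstrap candidate concepts and subclass links from documentation text is lexico-syntactic (Hearst-style) patterns. The sketch below is a generic illustration of that idea, not the paper's own method; the documentation sentences are invented.

        # Generic "X such as Y, Z and W" pattern extraction, shown only to
        # illustrate semi-automatic concept harvesting from documentation text.
        import re

        doc = ("The service exposes operations such as createOrder and cancelOrder. "
               "Payment methods such as credit card and invoice.")

        for hypernym, tail in re.findall(r"(\w+) such as ([^.]+)", doc):
            for hyponym in re.split(r", | and ", tail):
                print(f"candidate: {hyponym.strip()} IS-A {hypernym}")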

    A Large Scale Dataset for the Evaluation of Ontology Matching Systems

    Recently, the number of ontology matching techniques and systems has increased significantly. This makes the issue of their evaluation and comparison more pressing. One of the challenges of ontology matching evaluation is building large-scale evaluation datasets. In fact, the number of possible correspondences between two ontologies grows quadratically with the number of entities in these ontologies. This often makes the manual construction of evaluation datasets demanding to the point of being infeasible for large-scale matching tasks. In this paper we present an ontology matching evaluation dataset composed of thousands of matching tasks, called TaxME2. It was built semi-automatically out of the Google, Yahoo and Looksmart web directories. We evaluated TaxME2 by exploiting the results of almost two dozen state-of-the-art ontology matching systems. The experiments indicate that the dataset possesses the desired key properties: it is error-free, incremental, discriminative, monotonic, and hard for state-of-the-art ontology matching systems. The paper has been accepted for publication in The Knowledge Engineering Review, Cambridge University Press (ISSN: 0269-8889, EISSN: 1469-8005).
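
    The quadratic growth mentioned above is easy to make concrete: comparing every entity of one ontology with every entity of another yields |O1| x |O2| candidate correspondences, which is why fully manual reference alignments stop scaling. A back-of-the-envelope illustration with made-up sizes:

        # Candidate correspondences grow as |O1| * |O2|; the figures are illustrative.
        for n1, n2 in [(100, 100), (1_000, 2_000), (50_000, 70_000)]:
            pairs = n1 * n2
            days = pairs / (8 * 60 * 60)   # one manual judgement per second, 8h/day
            print(f"{n1:>6} x {n2:>6} entities -> {pairs:>13,} candidate pairs "
                  f"(~{days:,.0f} working days to judge manually)")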

    Evaluating the semantic web: a task-based approach

    The increased availability of online knowledge has led to the design of several algorithms that solve a variety of tasks by harvesting the Semantic Web, i.e., by dynamically selecting and exploring a multitude of online ontologies. Our hypothesis is that the performance of such novel algorithms implicitly provides an insight into the quality of the ontologies used, and thus opens the way to a task-based evaluation of the Semantic Web. We have investigated this hypothesis by studying the lessons learnt about online ontologies when used to solve three tasks: ontology matching, folksonomy enrichment, and word sense disambiguation. Our analysis leads to a suite of conclusions about the status of the Semantic Web, which highlight a number of strengths and weaknesses of the semantic information available online and complement the findings of other analyses of the Semantic Web landscape.

    An Analysis of Service Ontologies

    Services are increasingly shaping the world’s economic activity. Service provision and consumption have been profiting from advances in ICT, but the decentralization and heterogeneity of the involved service entities still pose engineering challenges. One of these challenges is to achieve semantic interoperability among these autonomous entities. Semantic web technology aims at addressing this challenge on a large scale and has matured over the last years. This is evident from the various efforts reported in the literature in which service knowledge is represented in terms of ontologies developed either in individual research projects or in standardization bodies. This paper analyzes the most relevant service ontologies available today for their suitability to cope with the service semantic interoperability challenge. We take the vision of the Internet of Services (IoS) as our motivation to identify the requirements for service ontologies. We adopt a formal approach to ontology design and evaluation in our analysis. We start by defining informal competency questions derived from a motivating scenario, and we identify relevant concepts and properties in service ontologies that match the formal ontological representation of these questions. We analyze the service ontologies with our concepts and questions, so that each ontology is positioned and evaluated according to its utility. The gaps we identify as the result of our analysis indicate open challenges and future work.
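
    Competency questions are typically made operational by turning them into queries over a candidate ontology and checking whether the ontology can answer them. The sketch below shows that general pattern with rdflib; the file name, namespace, property name and the question itself are placeholders, not the ontologies or questions analysed in the paper.

        # Sketch: can the ontology answer the (hypothetical) competency question
        # "which providers offer which services?". File name, namespace and
        # property are placeholders.
        from rdflib import Graph

        g = Graph()
        g.parse("service-ontology.ttl", format="turtle")   # hypothetical local file

        q = """
        PREFIX ex: <http://example.org/service#>
        SELECT ?provider ?service WHERE {
            ?provider ex:offers ?service .
        }
        """
        rows = list(g.query(q))
        print(f"{len(rows)} answers" if rows else "ontology cannot answer this question")
        for provider, service in rows:
            print(provider, service)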