
    Data driven ontology evaluation

    The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact such as an ontology, and propose that one approach to ontology evaluation should be corpus- or data-driven. A corpus is the most accessible form of knowledge, and its use allows a measure of the 'fit' between an ontology and a domain of knowledge to be derived. We consider several methods for measuring this 'fit', propose a measure for evaluating structural fit, and present a probabilistic approach to identifying the best ontology.
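    The paper's own measures are not reproduced in this abstract. As a rough illustration of the idea, the sketch below computes one plausible lexical notion of corpus 'fit': the fraction of corpus token mass covered by an ontology's term labels. The function names, the tokenisation, and the example data are illustrative assumptions, not the authors' method.

    ```python
    # Minimal sketch of a data-driven 'fit' score between an ontology's
    # vocabulary and a domain corpus: an illustrative lexical-overlap
    # measure, not the measure proposed in the paper.
    import re
    from collections import Counter

    def tokenize(text):
        """Lowercase word tokens; a stand-in for real NLP preprocessing."""
        return re.findall(r"[a-z]+", text.lower())

    def corpus_fit(ontology_terms, corpus_docs):
        """Fraction of corpus tokens covered by the ontology's term labels."""
        counts = Counter()
        for doc in corpus_docs:
            counts.update(tokenize(doc))
        total = sum(counts.values())
        vocab = {tok for term in ontology_terms for tok in tokenize(term)}
        covered = sum(c for tok, c in counts.items() if tok in vocab)
        return covered / total if total else 0.0

    # Tiny example: a 'wine' vocabulary against a two-document corpus.
    terms = ["Wine", "Grape", "Winery"]
    docs = ["The winery ferments grape juice into red wine.",
            "Wine tasting notes mention grape variety and region."]
    print(corpus_fit(terms, docs))  # 0.3125: 5 of 16 tokens are covered
    ```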

    Ontology selection: ontology evaluation on the real Semantic Web

    The increasing number of ontologies on the Web and the appearance of large-scale ontology repositories have brought the topic of ontology selection into the focus of the Semantic Web research agenda. Our view is that ontology evaluation is core to ontology selection and that, because ontology selection is performed in an open Web environment, it brings new challenges to ontology evaluation. Unfortunately, current research regards ontology selection and evaluation as two separate topics. Our goal in this paper is to explore how these two tasks relate. In particular, we are interested in gaining a better understanding of the ontology selection task and in distilling the challenges that it brings to ontology evaluation. We discuss the requirements posed by the open Web environment on ontology selection, survey existing work on selection, and point out future directions. Our major conclusion is that, even if selection methods still need further development, they have already brought novel approaches to ontology evaluation.

    Benefits of Natural Language Techniques in Ontology Evaluation : the OOPS! Case

    Natural language techniques play an important role in Ontology Engineering. Developing ontologies manually is a complex and time-consuming process, which implies the participation of domain experts and ontology engineers to build and evaluate them. Natural language techniques traditionally help to (semi-)automatically build ontologies and to populate them. However, the prevailing approaches to evaluating ontologies are expert reviewing, evaluating quality dimensions and criteria, and evaluating against existing ontologies and sets of common errors. That is, the use of natural language techniques in ontology evaluation is not yet widespread. In this paper we therefore advocate the use of natural language techniques during the ontology evaluation process. In particular, we propose a first attempt towards a language-based enhancement of the pitfall detection process within the ontology evaluation tool OOPS!
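    The paper's concrete enhancement is not detailed in this abstract. As one hedged example of a language-based pitfall check, the sketch below flags pairs of class names that WordNet treats as synonyms, a possible symptom of the common pitfall of modelling synonyms as separate classes. The check, the single-word-name assumption, and the example classes are illustrative; this is not the OOPS! implementation.

    ```python
    # Illustrative language-based pitfall check: flag class-name pairs that
    # share a WordNet synset, a possible sign of synonyms modelled as
    # distinct classes. Simplified assumption: single-word class names.
    # Requires: pip install nltk, then nltk.download('wordnet') once.
    from itertools import combinations
    from nltk.corpus import wordnet as wn

    def synonym_class_pairs(class_names):
        """Yield pairs of class names sharing at least one WordNet synset."""
        for a, b in combinations(class_names, 2):
            if set(wn.synsets(a.lower())) & set(wn.synsets(b.lower())):
                yield a, b

    classes = ["Car", "Automobile", "Doctor", "Physician", "Wheel"]
    for a, b in synonym_class_pairs(classes):
        print(f"Possible synonym classes: {a} / {b}")
    # -> Car / Automobile and Doctor / Physician
    ```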

    Unsupervised Terminological Ontology Learning based on Hierarchical Topic Modeling

    In this paper, we present hierarchical relation-based latent Dirichlet allocation (hrLDA), a data-driven hierarchical topic model for extracting terminological ontologies from a large number of heterogeneous documents. In contrast to traditional topic models, hrLDA relies on noun phrases instead of unigrams, considers syntax and document structure, and enriches topic hierarchies with topic relations. Through a series of experiments, we demonstrate the superiority of hrLDA over existing topic models, especially for building hierarchies. Furthermore, we illustrate the robustness of hrLDA on noisy data sets, which are likely to occur in many practical scenarios. Our ontology evaluation results show that ontologies extracted with hrLDA are very competitive with ontologies created by domain experts.
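    hrLDA itself is not reproduced here; as a minimal sketch of the representation it builds on, the code below replaces unigrams with noun phrases (via spaCy) and feeds them to a flat LDA model (via gensim) that stands in for the hierarchical, relation-enriched model. Library choices and example documents are assumptions for illustration.

    ```python
    # Sketch of the noun-phrase document representation hrLDA builds on:
    # bags of noun phrases rather than unigrams. A flat LDA stands in for
    # the hierarchical, relation-enriched model described in the paper.
    # Requires: pip install spacy gensim
    #           python -m spacy download en_core_web_sm
    import spacy
    from gensim import corpora
    from gensim.models import LdaModel

    nlp = spacy.load("en_core_web_sm")

    def noun_phrases(text):
        """Lowercased noun chunks used as multiword 'tokens'."""
        return [chunk.text.lower() for chunk in nlp(text).noun_chunks]

    docs = ["Ontology evaluation measures the quality of an ontology.",
            "Topic models extract latent topics from document collections."]
    tokenized = [noun_phrases(d) for d in docs]
    dictionary = corpora.Dictionary(tokenized)
    bow = [dictionary.doc2bow(toks) for toks in tokenized]
    lda = LdaModel(bow, num_topics=2, id2word=dictionary, random_state=0)
    for topic_id, words in lda.print_topics():
        print(topic_id, words)
    ```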

    Improving Ontology Recommendation and Reuse in WebCORE by Collaborative Assessments

    In this work, we present an extension of CORE [8], a tool for Collaborative Ontology Reuse and Evaluation. The system receives an informal description of a specific semantic domain and determines which ontologies from a repository are the most appropriate to describe the given domain. For this task, the environment is divided into three modules. The first component receives the problem description as a set of terms and allows the user to refine and enlarge it using WordNet. The second module applies multiple automatic criteria to evaluate the ontologies in the repository and determines which ones best fit the problem description. A ranked list of ontologies is returned for each criterion, and the lists are combined by means of rank fusion techniques (one standard option is sketched below). The third component then uses manual user evaluations to incorporate a human, collaborative assessment of the ontologies. The new version of the system incorporates several novelties, such as its implementation as a web application; the incorporation of an NLP module to manage the problem definitions; modifications to the automatic ontology retrieval strategies; and a collaborative framework to find potentially relevant terms according to previous user queries. Finally, we present some early experiments on ontology retrieval and evaluation, showing the benefits of our system.
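    The abstract leaves the rank fusion technique unspecified; below is a minimal sketch of one standard option, Borda-count fusion, which combines the per-criterion ranked lists by position scores. The candidate ontology names are placeholders.

    ```python
    # Borda-count rank fusion over per-criterion ranked lists (best first).
    # One standard fusion scheme, chosen for illustration; not necessarily
    # the technique WebCORE uses.
    from collections import defaultdict

    def borda_fuse(rankings):
        """Return items sorted by total Borda score (higher is better)."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for pos, item in enumerate(ranking):
                scores[item] += n - pos  # best gets n points, worst gets 1
        return sorted(scores, key=scores.get, reverse=True)

    # Three criteria rank four candidate ontologies differently:
    by_coverage  = ["wine.owl", "food.owl", "travel.owl", "pizza.owl"]
    by_structure = ["food.owl", "wine.owl", "pizza.owl", "travel.owl"]
    by_lexical   = ["wine.owl", "pizza.owl", "food.owl", "travel.owl"]
    print(borda_fuse([by_coverage, by_structure, by_lexical]))
    # -> ['wine.owl', 'food.owl', 'pizza.owl', 'travel.owl']
    ```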

    A Double Classification of Common Pitfalls in Ontologies

    The application of methodologies for building ontologies has improved ontology quality. However, quality is not fully guaranteed, owing to the difficulties involved in ontology modelling. These difficulties are related to the inclusion of anomalies or worst practices in the models. In this context, our aim in this paper is twofold: (1) to provide a catalogue of common worst practices, which we call pitfalls, and (2) to present a double classification of such pitfalls. These two products serve ontology development in two ways: (a) to avoid the appearance of pitfalls during ontology modelling, and (b) to evaluate and correct ontologies in order to improve their quality.
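    The catalogue and classification are the paper's contribution; as a hedged illustration of how one commonly catalogued pitfall can be checked automatically, the sketch below uses rdflib to list object properties declared without rdfs:domain or rdfs:range. The file name is a placeholder, and this single check stands in for no part of the paper's classification.

    ```python
    # Automated check for one commonly catalogued pitfall: object
    # properties declared without rdfs:domain or rdfs:range.
    # Requires: pip install rdflib
    from rdflib import Graph, RDF, RDFS, OWL

    g = Graph()
    g.parse("example.owl")  # placeholder path, RDF/XML assumed

    for prop in g.subjects(RDF.type, OWL.ObjectProperty):
        missing = [label for label, pred in
                   (("domain", RDFS.domain), ("range", RDFS.range))
                   if g.value(prop, pred) is None]
        if missing:
            print(f"Pitfall candidate: {prop} lacks {' and '.join(missing)}")
    ```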

    The evaluation of ontologies: quality, reuse and social factors

    Finding a “good” or the “right” ontology is a growing challenge in the ontology domain, where one of the main aims is to share and reuse existing semantics and knowledge. Before reusing an ontology, knowledge engineers not only have to find a set of appropriate ontologies for their search query, but they should also be able to evaluate those ontologies according to different internal and external criteria. Therefore, ontology evaluation is at the heart of ontology selection and has received a considerable amount of attention in the literature. Despite the importance of ontology evaluation and selection and the widespread research on these topics, there are still many unanswered questions and challenges when it comes to evaluating and selecting ontologies for reuse. Most of the evaluation metrics and frameworks in the literature are based on a limited set of internal characteristics, e.g., the content and structure of ontologies, and ignore how ontologies are used and evaluated by communities. This thesis investigates the notions of quality and reusability in the ontology domain, and explores and identifies the set of metrics that can affect the process of ontology evaluation and selection for reuse. [Continues.]

    OntoKBEval : a support tool for OWL ontology evaluation

    The Support Tool for OWL Ontology Evaluation (OntoKBEval) has been developed to apply Description Logics reasoning to ontology evaluation by deriving information from knowledge bases. Its principal objective is to evaluate ontologies and to present the results to users through a user-friendly visual interface. OntoKBEval offers hierarchical diagrams describing the structure of OWL-DL ontologies, divided into the Description Logics views of TBoxes and ABoxes, together with corresponding detailed information to guide further evaluation. Its three main methods for ontology evaluation are: (i) quick-view ontology evaluation (providing a keyword search over named concepts); (ii) general ontology evaluation (performing a more comprehensive TBox- and ABox-based evaluation); and (iii) multi-file ontology evaluation (facilitating the evaluation of multiple OWL ontologies by offering basic TBox and ABox information). The implementation relies on the OWL-DL reasoner RacerPro to support the reasoning functionalities.
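    OntoKBEval itself relies on RacerPro and is not reproduced here; the sketch below uses owlready2 to approximate the same kind of TBox/ABox summary and quick-view keyword search over named concepts. The library choice and file name are assumptions.

    ```python
    # Approximation of a TBox/ABox summary plus a quick-view keyword
    # search over named concepts, using owlready2 instead of RacerPro.
    # Requires: pip install owlready2
    from owlready2 import get_ontology

    onto = get_ontology("file://example.owl").load()  # placeholder path

    tbox_classes = list(onto.classes())          # TBox: named concepts
    abox_individuals = list(onto.individuals())  # ABox: instances
    print(f"TBox: {len(tbox_classes)} classes, "
          f"ABox: {len(abox_individuals)} individuals")

    # Quick-view style keyword search over named concepts:
    keyword = "wine"
    hits = [c for c in tbox_classes if keyword in c.name.lower()]
    print("Matching concepts:", hits)
    ```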

    Ontology Evaluation

    The evaluation of ontologies is an emerging field. At present, an established core of ideas and guidelines for evaluating ontologies is still missing. This paper presents a brief summary of previous work on evaluating ontologies and of the criteria (consistency, completeness, conciseness, expandability and sensitiveness) used to evaluate and assess ontologies.
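    Of the criteria listed, consistency is the one most directly checkable by a DL reasoner. The sketch below, an assumed setup using owlready2's bundled HermiT reasoner (which needs Java) rather than any tooling from the paper, reports whether an ontology is logically consistent.

    ```python
    # Machine check of the consistency criterion with a DL reasoner.
    # Requires: pip install owlready2 (HermiT ships with it; Java needed).
    from owlready2 import (get_ontology, sync_reasoner,
                           OwlReadyInconsistentOntologyError)

    onto = get_ontology("file://example.owl").load()  # placeholder path
    try:
        with onto:
            sync_reasoner()  # classifies; raises on inconsistency
        print("Ontology is logically consistent.")
    except OwlReadyInconsistentOntologyError:
        print("Ontology is inconsistent.")
    ```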

    Ontology Evaluation

    Ontology evaluation is the task of measuring the quality of an ontology. It enables us to answer the following main question: how can the quality of an ontology for the Web be assessed? In this thesis, a theoretical framework and several methods breathing life into the framework are presented. The application of the framework to several scenarios is explored, and the theoretical foundations are thoroughly grounded in the practical usage of the emerging Semantic Web.