    Overview of methodologies for building ontologies

    A few research groups are now proposing series of steps and methodologies for developing ontologies. However, mainly because Ontological Engineering is still a relatively immature discipline, each work group employs its own methodology. Our goal is to present the most representative methodologies used in ontology development and to analyse them against the same frame of reference. The goal of this paper is therefore not to provide new insights about methodologies, but to gather them in one place and help readers select which methodology to use.

    Some Ideas and Examples to Evaluate Ontologies

    The lack of methods for evaluating ontologies in laboratories can be an obstacle to their use in companies. This paper presents a set of emerging ideas in the evaluation of ontologies, useful for: (1) ontology developers in the lab, as a foundation from which to perform technical evaluations; (2) end users of ontologies in companies, as a point of departure in the search for the best ontology for their systems; and (3) future research, as a basis upon which to perform progressive and disciplined investigations in this area. After briefly exploring some general questions, such as why, what, when, how and where to evaluate, who evaluates, and what to evaluate against, we focus on the definition of a set of criteria useful in the evaluation process. Finally, we use some of these criteria in the evaluation of the Bibliographic-Data [5] ontology.

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. The book presents an introduction to the field of applied evaluative informetrics and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, focusing on the role of extra-informetric factors in the development of indicators and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools.

    Benchmarking Semantic Web Technology

    This paper summarises the research problem addressed by the PhD thesis I am currently writing. It presents an overview of the thesis: its goals, which take into account the deficiencies of the state of the art; the approach followed; and the work performed for the thesis since its beginning in 2004.

    Initial specification of the evaluation tasks "Use cases to bridge validation and benchmarking" PROMISE Deliverable 2.1

    Evaluation of multimedia and multilingual information access systems needs to be performed from a usage-oriented perspective. This document outlines use cases from the three use-case domains of the PROMISE project and gives some initial pointers to how their respective characteristics can be extrapolated to determine and guide evaluation activities, with respect both to benchmarking and to validation of the usage hypotheses. The use cases will be developed further during the course of the evaluation activities and the workshops projected to occur at coming CLEF conferences.

    From Knowledge Based Systems to Knowledge Sharing Technology: Evaluation and Assessment

    There is no set of general guidelines for evaluating Knowledge Sharing Technology, nor are there specific ideas for evaluating user-independent ontologies across their whole life cycle, whether their definitions are reused by knowledge-based systems (KBSs) or shared among software agents. Rather than starting from scratch, this paper discusses similarities and differences between knowledge bases and ontologies. The idea is to learn from the evaluation and assessment of KBSs by picking up some successful ideas and adapting them to the domain of ontologies, and to learn from their mistakes by avoiding them. The paper also describes how agents that use ontologies with different aims have different concerns in the evaluation and assessment processes. Definitions of the terms evaluation, verification, validation and assessment in the knowledge sharing domain are also given.

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Based on the information provided by European projects and national initiatives related to multimedia search, as well as by domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view of content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmarking initiatives that measure the performance of multimedia search engines. From a socio-economic perspective, we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Technological infrastructure of semantic services for the Semantic Web

    This project aims at creating a network of distributed, interoperable semantic services from which more complex ones can be built. These services will be available in semantic Web service libraries so that they can be invoked by other systems (e.g., semantic portals, software agents, etc.). To accomplish this objective, the project proposes: a) to create specific technology for developing and composing Semantic Web Services; b) to migrate the WebODE ontology development workbench to this new distributed, interoperable semantic service architecture; c) to develop new semantic services (ontology learning, ontology mappings, incremental ontology evaluation, and ontology evolution); and d) to develop technological support that eases semantic portal interoperability, using Web services and Semantic Web Services. The project results will be open source, so as to improve their technological transfer, and their quality is ensured by a benchmarking process. Keywords: Ontologies and Semantic Web

    Application of Competitive Intelligence and Benchmarking of new theories for the development of a Strategic and Sustainable Plan for the Naval Industry

    Since their beginnings, companies have established procedures for observing their competitors. Methods for obtaining this kind of information have evolved with the internet era, and a plethora of tools is nowadays available for the job. As a consequence, a new problem has emerged: documentary noise, which keeps companies from processing and benefiting from the huge amount of information gathered. Strategic planning relies mainly on obtaining knowledge of the environment, so companies need help in dealing with this documentary noise; technological surveillance and benchmarking are the preferred methodologies for achieving this objective, coping with the data produced by automatic internet tools such as search engines. Better-quality results are produced by bringing new theories on information gathering and processing into both tools. This article presents empirical results on the application of a demonstrative technological surveillance system based on different R&D management structures, relying on benchmarking indicators for the naval and aeronautics industries.