17 research outputs found

    Ontology translation approaches for interoperability: A case study with Protege-2000 and WebODE

    We describe four ontology translation approaches that can be used to exchange ontologies between ontology tools and/or ontology languages. These approaches are analysed with regard to two main features: how they preserve the ontology semantics after the translation process (a.k.a. semantic or consequence preservation) and how they allow end users and ontology-based applications to understand the resulting ontology in the target format (a.k.a. pragmatic preservation). These approaches are illustrated with practical examples that show how they can be applied to achieve interoperability between the ontology tools Protege-2000 and WebODE.

    Challenges in Capitalizing Knowledge in Innovative Product Design Process.

    Capitalizing on a company’s knowledge is increasingly recognized in private organizations, since managing knowledge productively is considered a source of competitive advantage. In this paper we present a generalization of the GAMETH framework, which plays an important role in identifying the crucial knowledge used and created in the innovative product design process. We have developed a method based on three phases. In the first phase, we use GAMETH to identify the set of “reference knowledge”. During the second phase, decision rules are inferred, through rough sets theory, from decision assignments provided by the decision maker(s). In the third phase, a multicriteria classification of “potential crucial knowledge” is performed on the basis of the decision rules that have been collectively identified by the decision maker(s).
    Keywords: dominance rough set approach; decision rules; multicriteria classification; crucial knowledge; knowledge capitalizing

    A Fuzzy-Based Inference Mechanism of Trust for Improved Social Recommenders

    This paper presents a stochastic model based on Monte Carlo simulation techniques for measuring the performance of recommenders. A general procedure to assess the accuracy of recommendation predictions is presented and implemented in a typical case study where input parameters are treated as random values and recommender errors are estimated using sensitivity analysis. The results obtained are presented, and a new perspective on the evaluation and assessment of recommender systems is discussed.
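    The Monte Carlo procedure sketched in the abstract, treating input ratings as random values and collecting the error distribution over many trials, could look roughly like the following. This is a minimal illustrative sketch, not the authors' model; the function name `evaluate_recommender`, the Gaussian noise assumption, and the mean-absolute-error metric are all assumptions for illustration.

    ```python
    import random

    def evaluate_recommender(predict, ratings, n_trials=1000, noise=0.1):
        """Monte Carlo estimate of recommender error: observed ratings are
        treated as random values (perturbed with Gaussian noise), and the
        mean absolute prediction error is collected over many trials."""
        errors = []
        for _ in range(n_trials):
            total = 0.0
            for user, item, rating in ratings:
                # treat the observed rating as a random input parameter
                noisy = rating + random.gauss(0, noise)
                total += abs(predict(user, item) - noisy)
            errors.append(total / len(ratings))
        mean = sum(errors) / len(errors)
        # return the mean error plus the spread seen across trials,
        # which serves as a crude sensitivity analysis
        return mean, min(errors), max(errors)
    ```

    Varying `noise` (or perturbing other input parameters the same way) shows how sensitive the reported accuracy is to uncertainty in the inputs.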

    Generalizing GAMETH: Inference rule procedure.

    In this paper we present a generalisation of the GAMETH framework, which plays an important role in identifying crucial knowledge. We have developed a method based on three phases. In the first phase, we use GAMETH to identify the set of “reference knowledge”. During the second phase, decision rules are inferred, through rough sets theory, from decision assignments provided by the decision maker(s). In the third phase, a multicriteria classification of “potential crucial knowledge” is performed on the basis of the decision rules that have been collectively identified by the decision maker(s).
    Keywords: knowledge management; knowledge capitalizing; managing knowledge; crucial knowledge
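    The second phase, inferring decision rules from decision assignments via rough sets theory, can be illustrated with a toy example. In rough-set terms, a condition pattern yields a *certain* rule only when it belongs to the lower approximation of a decision class, i.e. every object matching that pattern carries the same decision. The sketch below is a simplification of that idea (the dominance-based variant the paper names is more involved); the function name and table encoding are assumptions.

    ```python
    from collections import defaultdict

    def certain_rules(table):
        """Infer certain decision rules from a decision table given as
        (condition_tuple, decision) pairs: a condition pattern produces a
        rule only if all matching objects share one decision (the
        rough-set lower approximation of that decision class)."""
        by_condition = defaultdict(set)
        for conditions, decision in table:
            by_condition[conditions].add(decision)
        # keep only consistent patterns; inconsistent ones fall into the
        # boundary region and give no certain rule
        return {cond: decisions.pop()
                for cond, decisions in by_condition.items()
                if len(decisions) == 1}
    ```

    Patterns whose matching objects disagree on the decision are excluded, mirroring how rough sets separate certain knowledge from the boundary region.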

    KATS: A Knowledge Acquisition Tool Based on Electronic Document Processing

    This paper describes a knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. This tool has been developed following an innovative approach aimed at end users who are not familiar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), so that the whole knowledge model is presented to the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of such models. Then, the paper describes the KATS tool, a software application that we have designed to help with this task, intended for users who are not experts in computer programming. Finally, the paper compares KATS with other approaches to knowledge acquisition.

    Formal Concept Analysis in the service of building and enriching an ontology

    In this article, we propose a methodology called PACTOLE ("Property And Class Caracterisation from Text to OntoLogy Enrichment") for building an ontology in a specific domain and for a given application. PACTOLE merges and combines different resources using Formal Concept Analysis (FCA) and its extension, Relational Concept Analysis (RCA). The expressions produced by FCA/RCA are represented as expressions of a Description Logic (here FLE) and then implemented in OWL, after which it is possible to reason over them. The methodology is applied to the domain of astronomy, and we also show how we formalised and answered some of the questions that astronomers ask.
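    The core FCA step that PACTOLE relies on, grouping objects by shared attributes into formal concepts, can be shown on a tiny astronomy-flavoured context. This is a generic brute-force FCA sketch, not the PACTOLE implementation (which additionally uses RCA and translates the result into Description Logic); the function name, the dictionary encoding of the context, and the example objects are assumptions.

    ```python
    def formal_concepts(context):
        """Enumerate the formal concepts of a binary context given as
        {object: set_of_attributes}, by closing the object intents under
        intersection (brute force, fine for small contexts)."""
        objects = list(context)
        all_attrs = set().union(*context.values()) if context else set()
        # every concept intent is an intersection of object intents;
        # the full attribute set is the intent of the empty extent
        intents = {frozenset(all_attrs)}
        for obj in objects:
            intents |= {frozenset(context[obj]) & intent for intent in intents}
        # pair each intent with its extent: the objects having all its attributes
        return [({o for o in objects if intent <= context[o]}, intent)
                for intent in intents]
    ```

    Each (extent, intent) pair is one node of the concept lattice; in PACTOLE such concepts become candidate classes for the ontology.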

    Mining Meaning from Wikipedia

    Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and treating it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.
    Comment: An extensive survey of re-using information in Wikipedia in natural language processing, information retrieval and extraction, and ontology building. Accepted for publication in the International Journal of Human-Computer Studies.

    Facilitating Ontology Reuse Using User-Based Ontology Evaluation
