    Fuzzy ontologies in semantic similarity measures

    © 2016 IEEE. Ontologies are a fundamental part of the development of short text semantic similarity measures. The best-known ontology in the field was developed from the lexical database WordNet, which is used as a semantic resource for determining word similarity from the semantic distance between words. The original WordNet hierarchy does not include fuzzy words - words that are subjective to humans and often context dependent. The recent development of fuzzy semantic similarity measures therefore requires research into ontological structures suitable for representing fuzzy categories of words, in which the words are quantified by human participants. This paper proposes two fuzzy ontology structures based on a human-quantified scale for a collection of fuzzy words across six fuzzy categories. The methodology of ontology creation uses human participants to populate the fuzzy categories and to quantify the fuzzy words. Each ontology is evaluated within a known fuzzy semantic similarity measure, and experiments are conducted using human participants and two benchmark fuzzy word datasets. Correlations with human similarity ratings show that only one of the ontological structures is naturally representative of human perceptions of fuzzy words.
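
    For context, semantic-distance measures of this kind are commonly formulated as a function of the shortest path between two word senses in the hierarchy and the depth of their deepest common subsumer. The Python sketch below illustrates that style of measure; the function name, the parameter values alpha and beta, and the tanh-shaped depth term are illustrative assumptions rather than the exact formulation evaluated in the paper.

        import math

        def word_similarity(path_length, depth, alpha=0.2, beta=0.45):
            """Illustrative path/depth word similarity score in [0, 1]."""
            # Similarity decays exponentially with the path length between the senses.
            length_factor = math.exp(-alpha * path_length)
            # Similarity grows with the depth of the deepest common subsumer
            # (tanh-shaped, so deeper subsumers add progressively less).
            depth_factor = math.tanh(beta * depth)
            return length_factor * depth_factor

        # Example: senses three edges apart beneath a common subsumer at depth five.
        print(word_similarity(path_length=3, depth=5))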

    Using Fuzzy Set Similarity in Sentence Similarity Measures

    Sentence similarity measures the similarity between two blocks of text. STASIS uses a semantic similarity measure between individual pairs of words, each taken from one of the two blocks of text. Word similarity is measured from the distance between the words in the WordNet ontology. If vague words, referred to as fuzzy words, are not found in WordNet, their semantic similarity cannot be used in the sentence similarity measure. FAST and FUSE transform these vague words into fuzzy set representations, type-1 and type-2 respectively, to create ontological structures in which the same semantic similarity measure used with WordNet can be applied. This paper investigates eliminating the process of building an ontology from the fuzzy words and instead directly using fuzzy set similarity measures between the fuzzy words in the task of sentence similarity measurement. Performance is evaluated by correlation with human judgments of sentence similarity. In addition, statistical tests showed no significant difference between the sentence similarity values produced using fuzzy set similarity measures between fuzzy sets representing fuzzy words and those produced using FAST semantic similarity within ontologies representing fuzzy words.
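
    One common family of fuzzy set similarity measures is the Jaccard-style ratio of the intersection to the union of two membership functions. The sketch below shows that idea for discrete type-1 fuzzy sets; the membership values for the fuzzy words and the choice of Jaccard similarity are assumptions made for illustration, not necessarily the specific measures evaluated in the paper.

        def jaccard_similarity(fuzzy_a, fuzzy_b):
            """Jaccard similarity between two discrete type-1 fuzzy sets.

            fuzzy_a, fuzzy_b map domain points (e.g. a 0-10 scale) to
            membership grades in [0, 1].
            """
            domain = set(fuzzy_a) | set(fuzzy_b)
            # Fuzzy intersection takes the min of the memberships, union the max.
            inter = sum(min(fuzzy_a.get(x, 0.0), fuzzy_b.get(x, 0.0)) for x in domain)
            union = sum(max(fuzzy_a.get(x, 0.0), fuzzy_b.get(x, 0.0)) for x in domain)
            return inter / union if union else 0.0

        # Example: hypothetical membership functions for "huge" and "enormous"
        # on a 0-10 magnitude scale (values are illustrative only).
        huge = {7: 0.4, 8: 0.8, 9: 1.0, 10: 1.0}
        enormous = {8: 0.5, 9: 0.9, 10: 1.0}
        print(jaccard_similarity(huge, enormous))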

    On the similarity relation within fuzzy ontology components

    Ontology reuse is an important research issue. Ontology merging, integration, mapping, alignment and versioning are some of its subprocesses, and considerable research has been conducted on them. One issue common to these subprocesses is the problem of defining similarity relations among ontology components. Crisp ontologies become less suitable in domains in which the concepts to be represented have vague, uncertain and imprecise definitions. Fuzzy ontologies have been developed to cope with these aspects, and they are equally concerned with the problem of ontology reuse. Defining similarity relations in a fuzzy context may be based on the linguistic similarity among ontology components or may be deduced from their intensional definitions. The latter approach needs to be dealt with differently in crisp and fuzzy ontologies, and that is the scope of this paper.

    Introducing fuzzy trust for managing belief conflict over semantic web data

    When Semantic Web data is interpreted by different human experts, each expert can come up with different and conflicting ideas of what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web face similar scenarios, in which the interpretations of Semantic Web data describing heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarities have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents based on the fuzzy voting model.
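
    As a rough illustration of trusting agents according to how well their beliefs agree, the sketch below scores each mapping agent by the fraction of other agents that cast the same linguistic "vote" about a candidate mapping. The label thresholds, the agreement rule and all names are illustrative assumptions; the fuzzy voting model used in the paper is more involved than this majority check.

        def linguistic_label(similarity):
            """Map a numeric similarity in [0, 1] to a coarse linguistic label."""
            if similarity < 0.33:
                return "low"
            if similarity < 0.66:
                return "medium"
            return "high"

        def agent_trust(similarities):
            """Assign each agent a trust score from agreement with the other agents.

            similarities: dict of agent name -> similarity belief for one mapping.
            """
            votes = {agent: linguistic_label(s) for agent, s in similarities.items()}
            trust = {}
            for agent, vote in votes.items():
                others = [a for a in votes if a != agent]
                agreeing = sum(1 for a in others if votes[a] == vote)
                trust[agent] = agreeing / len(others) if others else 1.0
            return trust

        # Example: three similarity measures assess one candidate concept mapping.
        print(agent_trust({"edit_distance": 0.72, "wordnet": 0.68, "structural": 0.30}))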

    A Survey of Volunteered Open Geo-Knowledge Bases in the Semantic Web

    Over the past decade, rapid advances in web technologies, coupled with innovative models of spatial data collection and consumption, have generated a robust growth in geo-referenced information, resulting in spatial information overload. Increasing 'geographic intelligence' in traditional text-based information retrieval has become a prominent approach to respond to this issue and to fulfill users' spatial information needs. Numerous efforts in the Semantic Geospatial Web, Volunteered Geographic Information (VGI), and the Linking Open Data initiative have converged in a constellation of open knowledge bases, freely available online. In this article, we survey these open knowledge bases, focusing on their geospatial dimension. Particular attention is devoted to the crucial issue of the quality of geo-knowledge bases, as well as of crowdsourced data. A new knowledge base, the OpenStreetMap Semantic Network, is outlined as our contribution to this area. Research directions in information integration and Geographic Information Retrieval (GIR) are then reviewed, with a critical discussion of their current limitations and future prospects.

    Dealing with uncertain entities in ontology alignment using rough sets

    © 2012 IEEE. Ontology alignment facilitates exchange of knowledge among heterogeneous data sources. Many approaches to ontology alignment use multiple similarity measures to map entities between ontologies. However, dealing with uncertain entities, for which the employed similarity measures produce conflicting results, remains a key challenge. This paper presents OARS, a rough-set based approach to ontology alignment that achieves a high degree of accuracy in situations where uncertainty arises from the conflicting results generated by different similarity measures. OARS employs a combinational approach and considers both lexical and structural similarity measures. OARS is extensively evaluated with the benchmark ontologies of the Ontology Alignment Evaluation Initiative (OAEI) 2010; it performs best in terms of recall in comparison with a number of alignment systems while delivering comparable precision.
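
    A minimal sketch of the rough-set idea behind this kind of approach: the lower approximation holds the mappings that every similarity measure accepts, the upper approximation holds those that at least one measure accepts, and the boundary region between them contains the uncertain entities that need further treatment. The acceptance threshold, the data layout and the function name are assumptions for illustration, not the OARS algorithm itself.

        def classify_mappings(scores, threshold=0.7):
            """Split candidate entity mappings into rough-set regions.

            scores: dict of (entity_a, entity_b) -> list of similarity scores
            from different measures (e.g. lexical and structural).
            """
            lower, upper = set(), set()
            for pair, values in scores.items():
                accepted = [v >= threshold for v in values]
                if all(accepted):
                    lower.add(pair)   # certainly aligned
                if any(accepted):
                    upper.add(pair)   # possibly aligned
            boundary = upper - lower  # uncertain entities
            return lower, upper, boundary

        # Example: two measures agree on one pair and conflict on another.
        candidates = {("Author", "Writer"): [0.9, 0.8], ("Paper", "Article"): [0.85, 0.4]}
        print(classify_mappings(candidates))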

    Probabilistic latent semantic analysis as a potential method for integrating spatial data concepts

    In this paper we explore the use of Probabilistic Latent Semantic Analysis (PLSA) as a method for quantifying semantic differences between land cover classes. The results are promising, revealing ‘hidden’ or not easily discernible data concepts. PLSA provides a ‘bottom-up’ approach to interoperability problems for users, in the face of the ‘top-down’ solutions provided by formal ontologies. We note the potential for a meta-problem of how to interpret the concepts, and the need for further research to reconcile the top-down and bottom-up approaches.
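
    For reference, PLSA models the probability of observing a word w together with a document d through latent aspects z, fitted with the EM algorithm; in the spatial setting above, w and d can be read as land cover class labels and data sources, with z playing the role of a ‘hidden’ concept, although that mapping is our reading of the abstract rather than a stated detail. The standard aspect-model decomposition is

        P(w, d) = \sum_{z} P(z)\, P(w \mid z)\, P(d \mid z)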