    Domain vocabulary alignment using the AML and LogMap systems

    Introduction: In the context of the Semantic Web, interoperability among heterogeneous ontologies is a challenge due to several factors, among which semantic ambiguity and redundancy stand out. To overcome these challenges, systems and algorithms are adopted to align different ontologies. In this study, controlled vocabularies are understood as a particular form of ontology. Objective: to obtain a vocabulary, in the Computing Sciences domain, resulting from the alignment and fusion of the Scientific Domains and Scientific Areas vocabularies of the Foundation for Science and Technology (FCT), the European Science Vocabulary (EuroSciVoc), and the United Nations Educational, Scientific and Cultural Organization (UNESCO) nomenclature for fields of Science and Technology, to be used in the IViSSEM project. Methodology: a literature review on systems/algorithms for ontology alignment, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology; alignment of the three vocabularies; and validation of the resulting vocabulary by means of a Delphi study. Results: we analyzed the 25 ontology alignment systems and variants that participated in at least one track of the Ontology Alignment Evaluation Initiative competition between 2018 and 2019. From these, AgreementMakerLight (AML) and LogMap were selected to align the three vocabularies, restricting the scope to the Computer Science area. Conclusion: the final vocabulary was produced with AgreementMakerLight, which showed the better performance. The result is a vocabulary of 98 terms in the Computer Science domain to be adopted by the IViSSEM project. The alignment combined the vocabularies used by FCT (Portugal), the one adopted by the European Union (EuroSciVoc), and one from the Science & Technology domain (UNESCO). This result is useful to other universities and projects, as well as to FCT itself.
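    As a rough illustration of the lexical-matching step that alignment systems such as AgreementMakerLight and LogMap perform (their actual pipelines are far richer, combining lexical, structural, and logic-based techniques), the Python sketch below aligns two toy controlled vocabularies by normalized label similarity. The term lists and the 0.8 threshold are hypothetical examples, not data from the study.

```python
# Toy sketch of lexical vocabulary alignment. Real matchers such as
# AgreementMakerLight and LogMap use far richer strategies; the term
# lists and threshold here are hypothetical.
from difflib import SequenceMatcher

fct_terms = ["Computer Science", "Artificial Intelligence", "Data Bases"]
euroscivoc_terms = ["computer sciences", "artificial intelligence", "databases"]

def normalize(label: str) -> str:
    """Lowercase and collapse whitespace so near-identical labels compare."""
    return " ".join(label.lower().split())

def align(source, target, threshold=0.8):
    """Return (source_term, target_term, score) triples whose normalized
    label similarity reaches the threshold."""
    matches = []
    for s in source:
        for t in target:
            score = SequenceMatcher(None, normalize(s), normalize(t)).ratio()
            if score >= threshold:
                matches.append((s, t, round(score, 2)))
    return matches

print(align(fct_terms, euroscivoc_terms))
```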

    Results of the Ontology Alignment Evaluation Initiative 2021

    The Ontology Alignment Evaluation Initiative (OAEI) aims at comparing ontology matching systems on precisely defined test cases. These test cases can be based on ontologies of different levels of complexity and use different evaluation modalities (e.g., blind evaluation, open evaluation, or consensus). The OAEI 2021 campaign offered 13 tracks and was attended by 21 participants. This paper is an overall presentation of that campaign.
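    As a minimal sketch of the precision/recall/F-measure scoring that such campaigns rest on, the snippet below evaluates a system alignment against a reference alignment. Correspondences are modeled as plain (entity1, entity2) pairs, whereas the actual OAEI evaluation uses the Alignment API's RDF format and, for some tracks, consensus-based modalities; the example data is invented.

```python
# Sketch of the precision/recall/F-measure scoring used to compare
# matching systems against a reference alignment. Correspondences are
# modeled as (entity1, entity2) pairs; OAEI itself uses the Alignment
# API's RDF format. The example data is invented.

def evaluate(system: set, reference: set) -> dict:
    """Score a system alignment against a reference alignment."""
    hits = len(system & reference)
    precision = hits / len(system) if system else 0.0
    recall = hits / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f-measure": f_measure}

# Hypothetical run: two correct correspondences, one spurious, one missed.
reference = {("a:Person", "b:Human"), ("a:City", "b:Town"), ("a:Car", "b:Auto")}
system = {("a:Person", "b:Human"), ("a:City", "b:Town"), ("a:Dog", "b:Cat")}
print(evaluate(system, reference))
```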

    Scalable Data Integration for Linked Data

    Linked Data describes an extensive set of structured but heterogeneous data sources where entities are connected by formal semantic descriptions. In the vision of the Semantic Web, these semantic links are extended towards the World Wide Web to provide as much machine-readable data as possible for search queries. The resulting connections allow an automatic evaluation to find new insights into the data. Identifying these semantic connections between two data sources with automatic approaches is called link discovery. We derive common requirements and a generic link discovery workflow based on similarities between entity properties and associated properties of ontology concepts. Most of the existing link discovery approaches disregard the fact that in times of Big Data, an increasing volume of data sources poses new demands on link discovery. In particular, the problem of complex and time-consuming link determination escalates with an increasing number of intersecting data sources. To overcome the restriction of pairwise linking of entities, holistic clustering approaches are needed to link equivalent entities of multiple data sources to construct integrated knowledge bases. In this context, the focus on efficiency and scalability is essential. For example, reusing existing links or background information can help to avoid redundant calculations. However, when dealing with multiple data sources, additional data quality problems must also be dealt with. This dissertation addresses these comprehensive challenges by designing holistic linking and clustering approaches that enable reuse of existing links. Unlike previous systems, we execute the complete data integration workflow via a distributed processing system. At first, the LinkLion portal will be introduced to provide existing links for new applications. These links act as a basis for a physical data integration process to create a unified representation for equivalent entities from many data sources. We then propose a holistic clustering approach to form consolidated clusters for same real-world entities from many different sources. At the same time, we exploit the semantic type of entities to improve the quality of the result. The process identifies errors in existing links and can find numerous additional links. Additionally, the entity clustering has to react to the high dynamics of the data. In particular, this requires scalable approaches for continuously growing data sources with many entities as well as additional new sources. Previous entity clustering approaches are mostly static, focusing on the one-time linking and clustering of entities from few sources. Therefore, we propose and evaluate new approaches for incremental entity clustering that support the continuous addition of new entities and data sources. To cope with the ever-increasing number of Linked Data sources, efficient and scalable methods based on distributed processing systems are required. Thus we propose distributed holistic approaches to link many data sources based on a clustering of entities that represent the same real-world object. The implementation is realized on Apache Flink. In contrast to previous approaches, we utilize efficiency-enhancing optimizations for both distributed static and dynamic clustering. An extensive comparative evaluation of the proposed approaches with various distributed clustering strategies shows high effectiveness for datasets from multiple domains as well as scalability on a multi-machine Apache Flink cluster.
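    As a single-machine illustration of the core idea, forming clusters of equivalent entities from pairwise links across several sources, the sketch below merges links via union-find (connected components). The dissertation's approaches additionally exploit semantic types, repair erroneous links, support incremental updates, and run distributed on Apache Flink; none of that is modeled here, and the entity identifiers are invented.

```python
# Sketch of holistic entity clustering: equivalence links gathered from
# several sources are merged into clusters with union-find (connected
# components). The actual approaches add semantic-type constraints,
# link repair, incremental updates, and Apache Flink distribution.

def cluster(links):
    """Group entities connected by equivalence links into clusters."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in links:
        union(a, b)

    clusters = {}
    for entity in parent:
        clusters.setdefault(find(entity), set()).add(entity)
    return list(clusters.values())

# Invented links between entities of three sources (dbp:, nell:, geo:).
links = [("dbp:Leipzig", "nell:leipzig"), ("nell:leipzig", "geo:Leipzig_DE"),
         ("dbp:Berlin", "geo:Berlin_DE")]
print(cluster(links))  # two clusters: the Leipzig trio and the Berlin pair
```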

    A gold standard dataset for large knowledge graphs matching

    In the last decade, a remarkable number of Knowledge Graphs (KGs) were developed, such as DBpedia, NELL and the Google knowledge graph. These KGs are the core of many web-based applications such as query answering and semantic web navigation. The majority of these KGs are semi-automatically constructed, which has resulted in a significant degree of heterogeneity. KGs are highly complementary; thus, mapping them can benefit intelligent applications that require integrating different KGs, such as recommendation systems and search engines. Although the problem of ontology matching has been investigated and a significant number of systems have been developed, the challenges of mapping large-scale KGs remain significant. In 2018, the OAEI introduced a specific track for KG matching systems. Nonetheless, a major limitation of the current benchmark is its lack of representation of real-world KGs. In this work we introduce a gold standard dataset for matching the schema of large, automatically constructed, less well-structured KGs based on DBpedia and NELL. We evaluate the OAEI's various participating systems on this dataset, and show that matching large-scale and domain-independent KGs is a more challenging task. We believe that the dataset we make public in this work constitutes the largest domain-independent gold standard dataset for matching KG classes.
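    As a hedged illustration of schema-level KG matching, the sketch below runs a naive exact-label baseline over toy class lists and checks it against a toy gold standard. The class names and gold pairs are invented; the DBpedia/NELL gold standard described in the paper is far larger, and the paper's point is precisely that such KGs are much harder to match than label equality suggests.

```python
# Illustration of schema-level KG matching: a naive baseline that
# matches classes by normalized label equality, checked against a toy
# gold standard. Class names are hypothetical; the actual DBpedia/NELL
# gold standard is far larger and harder.

def label_key(cls: str) -> str:
    """Normalize a class label: strip namespace, lowercase, drop separators."""
    local = cls.split(":", 1)[-1]
    return local.replace("_", "").replace("-", "").lower()

def naive_match(classes_a, classes_b):
    """Pair up classes whose normalized labels are identical."""
    index = {label_key(c): c for c in classes_b}
    return {(a, index[label_key(a)]) for a in classes_a if label_key(a) in index}

dbpedia = ["dbo:SoccerPlayer", "dbo:City", "dbo:University"]
nell = ["nell:soccer_player", "nell:city", "nell:sports_team"]
gold = {("dbo:SoccerPlayer", "nell:soccer_player"), ("dbo:City", "nell:city"),
        ("dbo:University", "nell:university")}

found = naive_match(dbpedia, nell)
print(f"found {len(found & gold)} of {len(gold)} gold correspondences")
```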