28 research outputs found

    (Linked) Data Quality Assessment: An Ontological Approach

    The effective functioning of data-intensive applications usually requires datasets of high quality. The required quality depends on the task the data will be used for. However, it is possible to identify task-independent data quality dimensions that relate solely to the data themselves and can be extracted with the help of rule mining/pattern mining. In order to assess and improve data quality, we propose an ontological approach to report quality-violating triples. Our goal is to provide data stakeholders with a set of methods and techniques to guide them in assessing and improving data quality.
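
    A minimal sketch of this idea, assuming rdflib and hypothetical ex:/dq: namespaces: a task-independent rule (here, "objects of ex:birthDate must be typed xsd:date") is checked against the data, and every violating triple is reported as an instance of a violation class, roughly in the spirit of the ontological reporting described above.

        # Sketch only: rule-based reporting of quality-violating triples (namespaces hypothetical).
        from rdflib import Graph, Namespace, Literal, BNode, RDF, XSD

        EX = Namespace("http://example.org/")        # hypothetical data namespace
        DQ = Namespace("http://example.org/dq#")     # hypothetical quality-violation vocabulary

        data = Graph()
        data.add((EX.alice, EX.birthDate, Literal("not-a-date")))                   # violates the rule
        data.add((EX.bob, EX.birthDate, Literal("1980-02-01", datatype=XSD.date)))  # conforms

        report = Graph()
        # Task-independent rule: objects of ex:birthDate must be typed xsd:date.
        for s, p, o in data.triples((None, EX.birthDate, None)):
            if getattr(o, "datatype", None) != XSD.date:
                v = BNode()
                report.add((v, RDF.type, DQ.DatatypeViolation))
                report.add((v, DQ.subject, s))
                report.add((v, DQ.predicate, p))
                report.add((v, DQ.object, o))

        print(report.serialize(format="turtle"))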

    QUAL : A Provenance-Aware Quality Model

    The research described here is supported by the award made by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. Peer reviewed. Postprint.

    KGMM -- A Maturity Model for Scholarly Knowledge Graphs based on Intertwined Human-Machine Collaboration

    Knowledge Graphs (KG) have gained increasing importance in science, business and society in recent years. However, most knowledge graphs were either extracted or compiled from existing sources; there are only relatively few examples where knowledge graphs were genuinely created through an intertwined human-machine collaboration. Also, since the quality of data and knowledge graphs is of paramount importance, a number of data quality assessment models have been proposed. However, they do not take the specific aspects of intertwined human-machine curated knowledge graphs into account. In this work, we propose a graded maturity model for scholarly knowledge graphs (KGMM), which specifically focuses on aspects related to the joint, evolutionary curation of knowledge graphs for digital libraries. Our model comprises 5 maturity stages with 20 quality measures. We demonstrate the implementation of our model in a large-scale scholarly knowledge graph curation effort. Comment: Accepted as a full paper at ICADL 2022: International Conference on Asian Digital Libraries 2022.
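
    A minimal sketch, with invented stage names, measure identifiers, and ordering, of how a graded maturity model of this kind could be represented and evaluated; the actual KGMM stages and 20 measures are defined in the paper.

        # Sketch only: a graded maturity model as data plus a stage-evaluation function (names hypothetical).
        from dataclasses import dataclass

        @dataclass
        class MaturityStage:
            level: int
            name: str
            measures: list   # identifiers of the quality measures required at this stage

        STAGES = [
            MaturityStage(1, "Initial",   ["basic_provenance"]),
            MaturityStage(2, "Curated",   ["human_review_ratio", "schema_conformance"]),
            MaturityStage(3, "Linked",    ["external_links", "resolvable_iris"]),
            MaturityStage(4, "Trusted",   ["completeness", "consistency"]),
            MaturityStage(5, "Sustained", ["update_frequency", "community_activity"]),
        ]

        def maturity(satisfied: set) -> int:
            """Highest stage reached: all measures of that stage and of every lower stage are satisfied."""
            reached = 0
            for stage in STAGES:
                if all(m in satisfied for m in stage.measures):
                    reached = stage.level
                else:
                    break
            return reached

        print(maturity({"basic_provenance", "human_review_ratio", "schema_conformance"}))   # -> 2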

    An approach for measuring RDF data completeness

    International audience.

    Interactive multidimensional modeling of linked data for exploratory OLAP

    Exploratory OLAP aims at coupling the precision and detail of corporate data with the information wealth of LOD. While some techniques to create, publish, and query RDF cubes are already available, little has been said about how to contextualize these cubes with situational data in an on-demand fashion. In this paper we describe an approach, called iMOLD, that enables non-technical users to enrich an RDF cube with multidimensional knowledge by discovering aggregation hierarchies in LOD. This is done through a user-guided process that recognizes in the LOD the recurring modeling patterns that express roll-up relationships between RDF concepts, then translates these patterns into aggregation hierarchies to enrich the RDF cube. Two families of aggregation patterns are identified, based on associations and generalization respectively, and the algorithms for recognizing them are described. To evaluate iMOLD in terms of efficiency and effectiveness, we compare it with a related approach from the literature, propose a case study based on DBpedia, and discuss the results of a test made with real users. Peer reviewed. Postprint (author's final draft).
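
    A minimal sketch, not the iMOLD implementation, of the two pattern families on a toy rdflib graph: an association-based roll-up candidate (a property that is single-valued for every instance of a class) and generalization-based candidates (rdfs:subClassOf edges). Namespaces and data are hypothetical.

        # Sketch only: detecting candidate roll-up (aggregation) edges in an RDF graph.
        from collections import defaultdict
        from rdflib import Graph, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.store1, RDF.type, EX.Store))
        g.add((EX.store1, EX.inCity, EX.bologna))
        g.add((EX.store2, RDF.type, EX.Store))
        g.add((EX.store2, EX.inCity, EX.turin))
        g.add((EX.City, RDFS.subClassOf, EX.PopulatedPlace))

        # Association pattern: if ex:inCity is single-valued for every ex:Store instance,
        # Store --inCity--> City is a candidate roll-up edge.
        targets = defaultdict(set)
        for store in g.subjects(RDF.type, EX.Store):
            for city in g.objects(store, EX.inCity):
                targets[store].add(city)
        if targets and all(len(t) == 1 for t in targets.values()):
            print("candidate roll-up via association: Store --inCity--> City")

        # Generalization pattern: rdfs:subClassOf edges become levels of a hierarchy.
        for sub, sup in g.subject_objects(RDFS.subClassOf):
            print(f"candidate roll-up via generalization: {sub} -> {sup}")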

    An Environment for the Analysis and Quality Assessment of Big and Linked Data

    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. To accomplish this, the process comprises several design decisions based on the Linked Data principles, which on the one hand recommend using standards for the representation of and access to data on the Web, and on the other hand recommend setting hyperlinks between data from different sources. Despite the efforts of the World Wide Web Consortium (W3C), the main international standards organization for the World Wide Web, there is no single tailored formula for publishing data as Linked Data. In addition, the quality of the published Linked Open Data (LOD) is a fundamental issue, and it has yet to be thoroughly managed and considered. The main objective of this doctoral thesis is to design and implement a novel framework for selecting, analyzing, converting, interlinking, and publishing data from diverse sources, while paying close attention to quality assessment throughout all steps and modules of the framework. The goal is to examine whether and to what extent Semantic Web technologies are applicable for merging data from different sources and enabling end-users to obtain additional information that was not available in the individual datasets, in addition to integration into the Semantic Web community space. Additionally, the Ph.D. thesis intends to validate the applicability of the process in a specific and demanding use case, i.e. creating and publishing an Arabic Linked Drug Dataset based on open drug datasets from selected Arabic countries, and to discuss the quality issues observed in the linked data life-cycle. To that end, a Semantic Data Lake was established in the pharmaceutical domain that allows further integration and the development of different business services on top of the integrated data sources. Through data representation in an open, machine-readable format, the approach offers an optimal solution for information and data dissemination for building domain-specific applications, and for enriching and gaining value from the original dataset. This thesis showcases how the pharmaceutical domain benefits from evolving research trends for building competitive advantages. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required to extend the use of linked data technologies in targeted Arabic organizations.
    Linking and publishing data in the Linked Open Data format increases the interoperability and discoverability of resources over the Web. The process is based on the Linked Data principles (W3C, 2006), which on the one hand elaborate standards for the representation of and access to data on the Web (RDF, OWL, SPARQL), and on the other hand suggest the use of hyperlinks between data from different sources. Despite the efforts of the W3C consortium (the main international standards organization for the Web), there is no single formula for implementing the process of publishing data in the Linked Data format. Given that the quality of published Linked Open Data is decisive for the future development of the Web, the main goals of this doctoral dissertation are (1) the design and implementation of an innovative framework for selecting, analyzing, converting, interlinking, and publishing data from diverse sources, and (2) an analysis of the application of this approach in the pharmaceutical domain.
    The proposed doctoral dissertation investigates in detail the question of quality in big and linked data ecosystems (Linked Data Ecosystems), taking into account the possibility of reusing open data. The work is motivated by the need to enable researchers from Arabic countries to use Semantic Web technologies to link their data with open data, such as DBpedia. The goal is to examine whether open data from Arabic countries enable end-users to obtain additional information that is not available in the individual datasets, in addition to integration into the Semantic Web space. The dissertation proposes a methodology for developing Linked Data applications and implements a software solution that enables querying a consolidated drug dataset from selected Arabic countries. The consolidated dataset is implemented in the form of a Semantic Data Lake. This thesis shows how the pharmaceutical industry benefits from the application of innovative technologies and research trends from the field of semantic technologies. However, as elaborated in this thesis, a better understanding of the specifics of the Arabic language is required in order to implement Linked Data tools and apply them to data from Arabic countries.
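
    A minimal sketch, assuming rdflib and invented stage names, of a pipeline that interleaves a simple quality check with the selection/conversion, interlinking, and publishing steps described above; a real framework would plug dimension-specific quality metrics into the assessment step.

        # Sketch only: a linked-data pipeline with quality assessment at every stage.
        from rdflib import Graph, RDF

        def assess(graph: Graph, stage: str) -> None:
            # Placeholder check: count triples and subjects without an rdf:type;
            # a real framework would compute dimension-specific quality metrics here.
            untyped = sum(1 for s in set(graph.subjects())
                          if (s, RDF.type, None) not in graph)
            print(f"[{stage}] triples={len(graph)} untyped_subjects={untyped}")

        def pipeline(source_files):
            g = Graph()
            for path in source_files:                 # select + convert
                g.parse(path)                         # rdflib infers the serialization from the file
                assess(g, f"converted {path}")
            # Interlinking (e.g. adding owl:sameAs links to external datasets) would go here.
            assess(g, "interlinked")
            g.serialize(destination="published.ttl", format="turtle")   # publish
            assess(g, "published")
            return g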

    Building Semantic Knowledge Graphs from (Semi-)Structured Data: A Review

    Knowledge graphs have, for the past decade, been a hot topic both in public and private domains, typically used for large-scale integration and analysis of data using graph-based data models. One of the central concepts in this area is the Semantic Web, with the vision of providing a well-defined meaning to information and services on the Web through a set of standards. In particular, linked data and ontologies have been quite essential for data sharing, discovery, integration, and reuse. In this paper, we provide a systematic literature review on knowledge graph creation from structured and semi-structured data sources using Semantic Web technologies. The review takes into account four prominent publication venues, namely the Extended Semantic Web Conference, the International Semantic Web Conference, the Journal of Web Semantics, and the Semantic Web Journal. The review highlights the tools, methods, types of data sources, ontologies, and publication methods, together with the challenges, limitations, and lessons learned in the knowledge graph creation processes. Published version.

    Linking open data and the crowd for real-time passenger information

    The research described here was supported by the award made by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. Peer reviewed. Postprint.

    Bridging DBpedia Categories and DL-Concept Definitions using Formal Concept Analysis

    International audience. The popularization and quick growth of Linked Open Data (LOD) has led to challenging aspects regarding quality assessment and data exploration of the RDF triples that shape the LOD cloud. In particular, we are interested in the completeness of data and in its potential to provide concept definitions in terms of necessary and sufficient conditions. In this work we propose a novel technique based on Formal Concept Analysis which organizes RDF data into a concept lattice. This allows the discovery of implications, which are used to automatically detect missing information and then to complete the RDF data.
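
    A minimal sketch with toy data of the underlying idea: treat resources as FCA objects and their predicates/categories as attributes, check whether an implication holds over the resulting formal context, and flag counter-examples as candidates for completing the RDF data. The attribute names only mimic DBpedia vocabulary and are not drawn from the paper.

        # Sketch only: implication checking over a formal context built from RDF resources.
        context = {
            "Berlin":      {"dbo:country", "dbo:populationTotal", "dct:subject:Capitals"},
            "Paris":       {"dbo:country", "dbo:populationTotal", "dct:subject:Capitals"},
            "Hamburg":     {"dbo:country", "dbo:populationTotal"},
            "Springfield": {"dbo:country"},   # missing population -> completion candidate
        }

        def implication_holds(premise, conclusion):
            """premise -> conclusion holds if every object having the premise also has the conclusion."""
            holders = [attrs for attrs in context.values() if premise <= attrs]
            return bool(holders) and all(conclusion <= attrs for attrs in holders)

        premise, conclusion = {"dbo:country"}, {"dbo:populationTotal"}
        if not implication_holds(premise, conclusion):
            # The implication is violated; its counter-examples are candidates for missing triples.
            for name, attrs in context.items():
                if premise <= attrs and not conclusion <= attrs:
                    print(f"{name}: candidate missing triple(s) for {conclusion - attrs}")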

    A Quality Assessment Approach for Evolving Knowledge Bases

    Knowledge bases are nowadays essential components for any task that requires automation with some degree of intelligence. Assessing the quality of a Knowledge Base (KB) is a complex task, as it often means measuring the quality of structured information, ontologies and vocabularies, and queryable endpoints. Popular knowledge bases such as DBpedia, YAGO2, and Wikidata have chosen the RDF data model to represent their data due to its capabilities for semantically rich knowledge representation. Despite its advantages, there are challenges in using the RDF data model, for example data quality assessment and validation. In this paper, we present a novel knowledge base quality assessment approach that relies on evolution analysis. The proposed approach uses data profiling on consecutive knowledge base releases to compute quality measures that allow detecting quality issues. Our quality characteristics are based on KB evolution analysis, and we use high-level change detection for the measurement functions. In particular, we propose four quality characteristics: Persistency, Historical Persistency, Consistency, and Completeness. The Persistency and Historical Persistency measures concern the degree of change and the lifespan of each entity type. The Consistency and Completeness measures identify properties with incomplete information and contradictory facts. The approach has been assessed both quantitatively and qualitatively on a series of releases from two knowledge bases: eleven releases of DBpedia and eight releases of 3cixty. The capability of the Persistency and Consistency characteristics to detect quality issues varies significantly between the two case studies. The Persistency measure gives observational results for evolving KBs; it is highly effective in the case of KBs with periodic updates, such as the 3cixty KB. The Completeness characteristic is extremely effective and was able to achieve 95% precision in error detection for both use cases. The measures are based on simple statistical operations that make the solution both flexible and scalable.
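
    A minimal sketch, with invented counts, of release-to-release profiling in the spirit of the measures above: here Persistency is read as "entity counts of a type do not drop between consecutive releases", and Completeness as the share of instances carrying a given property in one release.

        # Sketch only: simple statistics over consecutive KB releases (all numbers invented).
        releases = {   # class -> entity counts over consecutive releases
            "dbo:Place":  [752_000, 816_000, 803_000],
            "dbo:Person": [1_450_000, 1_520_000, 1_610_000],
        }

        def persistency(counts):
            """1 if the entity count of a class never decreases across releases, else 0."""
            return int(all(later >= earlier for earlier, later in zip(counts, counts[1:])))

        for cls, counts in releases.items():
            print(cls, "persistency =", persistency(counts))   # dbo:Place -> 0, dbo:Person -> 1

        def completeness(instances_with_property, total_instances):
            """Share of instances of a class that carry a given property in one release."""
            return instances_with_property / total_instances if total_instances else 0.0

        print("dbo:birthDate completeness =", round(completeness(1_230_000, 1_610_000), 3))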