
    Reconciling Information in DBpedia through a Question Answering System

    Results obtained by querying the SPARQL endpoints of language-specific DBpedia chapters for the same query can be related by several heterogeneous relations, or can contain an inconsistent set of information about the same topic. To overcome this issue in question answering systems over language-specific DBpedia chapters, we propose the RADAR framework for information reconciliation. Starting from a categorization of the possible relations among the resulting instances, the framework: (i) classifies these relations, (ii) reconciles the obtained information using argumentation theory, (iii) ranks the alternative results depending on the confidence of the source in case of inconsistencies, and (iv) explains the reasons underlying the proposed ranking.
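    The reconciliation problem arises because each chapter endpoint can return a different value for the same query. As a minimal, hypothetical sketch (not the RADAR implementation), the snippet below uses the SPARQLWrapper library to pose one query to several chapter endpoints and collect the per-source answers that a framework like RADAR would then classify, reconcile, and rank; the endpoint URLs and the example query are illustrative.

```python
# Minimal sketch, not the RADAR implementation. Endpoint URLs and the query
# are illustrative; real chapters also use localized resource URIs, which a
# production system would have to map (e.g. via owl:sameAs links).
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINTS = {
    "en": "https://dbpedia.org/sparql",
    "fr": "https://fr.dbpedia.org/sparql",
    "de": "https://de.dbpedia.org/sparql",
}

QUERY = """
SELECT ?population WHERE {
  <http://dbpedia.org/resource/Turin>
      <http://dbpedia.org/ontology/populationTotal> ?population .
}
"""

def collect_answers(query: str) -> dict:
    """Run the same query on every chapter, keeping answers per source."""
    answers = {}
    for chapter, url in ENDPOINTS.items():
        sparql = SPARQLWrapper(url)
        sparql.setQuery(query)
        sparql.setReturnFormat(JSON)
        bindings = sparql.query().convert()["results"]["bindings"]
        answers[chapter] = [b["population"]["value"] for b in bindings]
    return answers

if __name__ == "__main__":
    # Disagreements across chapters are exactly the inconsistencies a
    # reconciliation framework must classify, rank, and explain.
    print(collect_answers(QUERY))
```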

    Fusing Automatically Extracted Annotations for the Semantic Web

    This research focuses on the problem of semantic data fusion. Although various solutions have been developed in the database and formal-logic research communities, the choice of an appropriate algorithm is non-trivial because the performance of each algorithm and its optimal configuration parameters depend on the type of data to which the algorithm is applied. In order to be reusable, a fusion system must be able to select appropriate techniques and use them in combination. Moreover, because of the varying reliability of data sources and of the algorithms performing fusion subtasks, uncertainty is an inherent feature of semantically annotated data and has to be taken into account by the fusion system. Finally, schema heterogeneity can have a negative impact on fusion performance. To address these issues, we propose KnoFuss: an architecture for Semantic Web data integration based on the principles of problem-solving methods. Algorithms dealing with different fusion subtasks are represented as components of a modular architecture, and their capabilities are described formally. This allows the architecture to select appropriate methods and configure them depending on the processed data. To handle uncertainty, we propose a novel algorithm based on Dempster-Shafer belief propagation. KnoFuss employs this algorithm to reason about uncertain data and method results in order to refine the fused knowledge base. Tests show that these solutions lead to improved fusion performance. Finally, we addressed the problem of data fusion in the presence of schema heterogeneity. We extended the KnoFuss framework to exploit the results of automatic schema alignment tools and proposed our own schema matching algorithm aimed at facilitating data fusion in the Linked Data environment. We conducted experiments with this approach and obtained a substantial improvement in performance in comparison with public data repositories.
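    The uncertainty-handling core named above is Dempster-Shafer theory. As a hedged illustration of its basic operation (Dempster's rule of combination, not the KnoFuss belief-propagation algorithm itself), the sketch below combines the mass functions of two unreliable sources judging whether two instances denote the same entity; all names and numbers are invented.

```python
# Hedged sketch of Dempster's rule of combination over frozenset focal
# elements; the abstract's actual algorithm (belief propagation over a
# network of fusion decisions) is more involved.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions, renormalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two unreliable extractors judging whether two instances are the same entity.
SAME, DIFF = frozenset({"same"}), frozenset({"different"})
BOTH = SAME | DIFF  # ignorance: mass on the whole frame of discernment
m_extractor = {SAME: 0.6, BOTH: 0.4}
m_matcher = {SAME: 0.5, DIFF: 0.3, BOTH: 0.2}
print(combine(m_extractor, m_matcher))  # belief in "same" rises to ~0.76
```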

    Using ontologies: understanding the user experience

    Drawing on 118 responses to a survey of ontology use, this paper describes the experiences of those who create and use ontologies. Responses to questions about language and tool use illustrate the dominant position of OWL and provide information about the OWL profiles and the particular Description Logic features used. The paper suggests that further research is required into the difficulties experienced with OWL constructs and with modelling in OWL. The survey also reports on the use of ontology visualization software, finding that the importance of visualization to ontology users varies considerably; this is another area that requires further investigation. The use of ontology patterns is examined, drawing on further input from a follow-up study devoted exclusively to this topic. Evidence suggests that pattern creation and use are frequently informal processes and that there is a need for improved tools. A classification of ontology users into four groups is suggested, and it is proposed that this categorisation of users and user behaviour should be taken into account when designing ontology tools and methodologies, enabling rigorous, user-specific use cases.

    Automated Knowledge Base Quality Assessment and Validation based on Evolution Analysis

    In recent years, numerous efforts have been put towards sharing Knowledge Bases (KBs) in the Linked Open Data (LOD) cloud. These KBs are used for various tasks, including performing data analytics and building question answering systems. Such KBs evolve continuously: their data (instances) and schemas can be updated, extended, revised, and refactored. However, unlike in more controlled types of knowledge bases, the evolution of KBs exposed in the LOD cloud is usually unrestrained, which may cause the data to suffer from a variety of quality issues at both the semantic and the pragmatic level. This situation negatively affects data stakeholders such as consumers and curators. Data quality is commonly related to the perception of fitness for use for a certain application or use case; ensuring the quality of an evolving knowledge base is therefore vital. Since the data is derived from autonomous, evolving, and increasingly large data providers, manual data curation is impractical, and continuous automatic assessment of data quality is very challenging. Ensuring the quality of a KB is a non-trivial task, since KBs are based on a combination of structured information supported by models, ontologies, and vocabularies, as well as queryable endpoints, links, and mappings. In this thesis, we therefore explored two main areas in assessing KB quality: (i) quality assessment using KB evolution analysis, and (ii) validation using machine learning models. The evolution of a KB can be analyzed using fine-grained “change” detection at a low level or using the “dynamics” of a dataset at a high level. In this thesis, we present a novel knowledge base quality assessment approach using evolution analysis. The proposed approach uses data profiling on consecutive knowledge base releases to compute quality measures that allow detecting quality issues. The first step in building the quality assessment approach was to identify the quality characteristics. Using high-level change detection as measurement functions, we present four quality characteristics: Persistency, Historical Persistency, Consistency, and Completeness. The persistency and historical persistency measures concern the degree of change and the lifespan of each entity type; the consistency and completeness measures identify properties with incomplete information and contradictory facts. The approach has been assessed both quantitatively and qualitatively on a series of releases from two knowledge bases: eleven releases of DBpedia and eight releases of 3cixty Nice. However, high-level changes, being coarse-grained, cannot capture all possible quality issues. In this context, we present a validation strategy whose rationale is twofold: first, use manual validation from the qualitative analysis to identify the causes of quality issues; then, use RDF data profiling information to generate integrity constraints. The validation approach relies on the idea of inducing RDF shapes by exploiting SHACL constraint components. In particular, this approach learns which integrity constraints can be applied to a large KB by running a statistical analysis followed by a learning model. We illustrate the performance of our validation approach using five learning models over three sub-tasks, namely minimum cardinality, maximum cardinality, and range constraints.
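    As a hedged sketch of the profiling idea (not the thesis pipeline), the snippet below computes a minimal persistency-style check over two consecutive releases with rdflib: it counts typed instances per class in each release and flags classes whose counts shrink, i.e. entities that disappeared between releases. File names and the exact measure definition are assumptions.

```python
# Minimal persistency-style check between two KB releases; an assumption-laden
# illustration of profiling consecutive releases, not the thesis's measures.
from collections import Counter
from rdflib import Graph, RDF

def class_counts(path: str) -> Counter:
    """Profile one release: number of typed instances per class."""
    g = Graph()
    g.parse(path)  # rdflib guesses the format from the file extension
    return Counter(o for _, _, o in g.triples((None, RDF.type, None)))

def persistency_issues(prev_path: str, curr_path: str) -> dict:
    prev, curr = class_counts(prev_path), class_counts(curr_path)
    # A class is flagged when its instance count shrinks across releases,
    # i.e. previously present entities have vanished.
    return {cls: (prev[cls], curr[cls])
            for cls in prev if curr[cls] < prev[cls]}

if __name__ == "__main__":
    # Hypothetical release dumps of the same knowledge base.
    for cls, (before, after) in persistency_issues(
            "release_v1.ttl", "release_v2.ttl").items():
        print(f"{cls}: {before} -> {after} instances")
```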
The techniques of quality assessment and validation developed in this work are automatic and can be applied to different knowledge bases independently of the domain. Furthermore, the measures are based on simple statistical operations, which makes the solution both flexible and scalable.
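    The validation side can be illustrated in the same hedged spirit: the sketch below induces a SHACL property shape with minimum and maximum cardinality constraints from simple profiling statistics, standing in for the learned models described above; the vocabulary and file names are placeholders.

```python
# Hedged sketch of shape induction: profile how often instances of a class use
# a property, then emit sh:minCount/sh:maxCount from plain min/max statistics
# (the thesis instead trains learning models on the profiling data).
from rdflib import Graph, Namespace, RDF, BNode, Literal, URIRef

SH = Namespace("http://www.w3.org/ns/shacl#")

def induce_cardinality_shape(data: Graph, cls: URIRef, prop: URIRef) -> Graph:
    subjects = list(data.subjects(RDF.type, cls))
    if not subjects:
        raise ValueError("no instances of the target class to profile")
    counts = [len(list(data.objects(s, prop))) for s in subjects]

    shape = Graph()
    shape.bind("sh", SH)
    node = URIRef(str(cls) + "Shape")  # naming convention is an assumption
    pshape = BNode()
    shape.add((node, RDF.type, SH.NodeShape))
    shape.add((node, SH.targetClass, cls))
    shape.add((node, SH.property, pshape))
    shape.add((pshape, SH.path, prop))
    shape.add((pshape, SH.minCount, Literal(min(counts))))
    shape.add((pshape, SH.maxCount, Literal(max(counts))))
    return shape

if __name__ == "__main__":
    kb = Graph().parse("kb.ttl")  # hypothetical release dump
    ex = Namespace("http://example.org/")
    print(induce_cardinality_shape(kb, ex.Place, ex.label).serialize())
```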

    Revisiting Urban Dynamics through Social Urban Data:

    The study of dynamic spatial and social phenomena in cities has evolved rapidly in recent years, yielding new insights into urban dynamics. This evolution is strongly related to the emergence of new sources of data for cities (e.g. sensors, mobile phones, online social media, etc.), which have the potential to capture dimensions of social and geographic systems that are difficult to detect in traditional urban data (e.g. census data). However, as the available sources increase in number, the produced datasets increase in diversity. Besides heterogeneity, emerging social urban data are also characterized by multidimensionality: the information they contain may simultaneously address spatial, social, temporal, and topical attributes of people and places. Integration and geospatial (statistical) analysis of multidimensional data therefore remain a challenge. The question that arises, then, is how to integrate heterogeneous and multidimensional social urban data into the analysis of human activity dynamics in cities. To address this challenge, this thesis proposes a framework of novel methods and tools for the integration, visualization, and exploratory analysis of large-scale and heterogeneous social urban data to facilitate the understanding of urban dynamics. The research focuses particularly on the spatiotemporal dynamics of human activity in cities, as inferred from different sources of social urban data. The main objective is to provide new means of incorporating heterogeneous social urban data into city analytics, and to explore the influence of emerging data sources on the understanding of cities and their dynamics. To mitigate the various heterogeneities, a methodology for transforming heterogeneous data for cities into multidimensional linked urban data is designed. The methodology follows an ontology-based data integration approach and accommodates a variety of semantic (web) and linked data technologies. A use case of data interlinkage serves as a demonstrator of the proposed methodology, employing nine real-world large-scale spatiotemporal datasets from three public transportation organizations, covering the entire public transport network of the city of Athens, Greece. To further encourage the consumption of linked urban data by planners and policy-makers, a set of web-based tools for the visual representation of ontologies and linked data is designed and developed. The tools, which comprise the OSMoSys framework, provide graphical user interfaces for the visual representation, browsing, and interactive exploration of both ontologies and linked urban data. After introducing methods and tools for data integration, visual exploration of linked urban data, and the derivation of various attributes of people and places from different social urban data, the thesis examines how they can all be combined into a single platform. To achieve this, a novel web-based system (coined SocialGlass) for the visualization and exploratory analysis of human activity dynamics is designed. The system combines data from various geo-enabled social media (i.e. Twitter, Instagram, Sina Weibo) and LBSNs (i.e. Foursquare), sensor networks (i.e. GPS trackers, Wi-Fi cameras), and conventional socioeconomic urban records, and it can also employ custom datasets from other sources. A real-world case study demonstrates the capacities of the proposed web-based system in the study of urban dynamics.
The case study explores the potential impact of a city-scale event (the Amsterdam Light Festival 2015) on the activity and movement patterns of different social categories (residents, non-residents, foreign tourists), as compared to their daily and hourly routines in the periods before and after the event. The aim of the case study is twofold: first, to assess the potential and limitations of the proposed system and, second, to investigate how different sources of social urban data could influence the understanding of urban dynamics. The contribution of this doctoral thesis is the design and development of a framework of novel methods and tools that enables the fusion of heterogeneous multidimensional data for cities. The framework could help planners, researchers, and policy makers capitalize on the new possibilities offered by emerging social urban data. A deep understanding of the spatiotemporal dynamics of cities, and especially of the activity and movement behavior of people, is expected to play a crucial role in addressing the challenges of rapid urbanization. Overall, the framework proposed by this research has the potential to open avenues for the quantitative exploration of urban dynamics, contributing to the development of a new science of cities.
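    As a hedged sketch of the ontology-based lifting step described above (not the thesis's actual methodology or vocabulary), the snippet below converts rows of a tabular transport record into RDF with rdflib; the ex: vocabulary, column names, and file name are invented placeholders.

```python
# Minimal CSV-to-linked-data lifting sketch; vocabulary and columns are
# placeholders, not the ontology or datasets used in the thesis.
import csv
from rdflib import Graph, Namespace, Literal, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/urban#")
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")

def lift(csv_path: str) -> Graph:
    """Turn each row of a hypothetical transport log into an RDF observation."""
    g = Graph()
    g.bind("ex", EX)
    g.bind("geo", GEO)
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            obs = URIRef(f"http://example.org/urban/observation/{i}")
            g.add((obs, RDF.type, EX.Observation))
            g.add((obs, EX.stop, Literal(row["stop_name"])))
            g.add((obs, GEO.lat, Literal(row["lat"], datatype=XSD.decimal)))
            g.add((obs, GEO.long, Literal(row["lon"], datatype=XSD.decimal)))
            g.add((obs, EX.timestamp,
                   Literal(row["time"], datatype=XSD.dateTime)))
    return g

if __name__ == "__main__":
    print(lift("transport_log.csv").serialize())
```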