3,620 research outputs found

    Data quality evaluation through data quality rules and data provenance.

    The application and exploitation of large amounts of data play an ever-increasing role in today's research, government, and economy. Data understanding and decision making rely heavily on high-quality data; therefore, in many different contexts, it is important to assess the quality of a dataset in order to determine whether it is suitable for a specific purpose. Moreover, as access to and exchange of datasets have become easier and more frequent, and as scientists increasingly use the World Wide Web to share scientific data, there is a growing need to know the provenance of a dataset (i.e., information about the processes and data sources that led to its creation) in order to evaluate its trustworthiness. In this work, data quality rules and data provenance are used to evaluate the quality of datasets. Concerning the first topic, the applied solution consists in identifying types of data constraints that can be useful as data quality rules and in developing a software tool that evaluates a dataset against a set of rules expressed in XML. We selected some of the data constraints and dependencies already considered in the data quality field, but we also used order dependencies and existence constraints as quality rules. In addition, we developed algorithms to discover the types of dependencies used in the tool. To deal with the provenance of data, the Open Provenance Model (OPM) was adopted, an experimental query language for querying OPM graphs stored in a relational database was implemented, and an approach to designing OPM graphs was proposed.
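    To make the idea concrete, here is a minimal sketch of rule-based quality checking over two of the rule types the abstract mentions, an existence constraint and an order dependency. The XML dialect, tag and attribute names, and the check function are invented for illustration and are not the tool's actual rule format.

    # A minimal sketch of XML-driven quality checking; the rule format below
    # is hypothetical, not the format used by the tool described above.
    import xml.etree.ElementTree as ET

    RULES_XML = """
    <rules>
      <existence column="email"/>                   <!-- email must be non-empty -->
      <orderDependency left="zip" right="city_id"/> <!-- ordering by zip orders city_id -->
    </rules>
    """

    def check(rows, rules_xml=RULES_XML):
        """Return human-readable violations of the XML quality rules."""
        violations = []
        for rule in ET.fromstring(rules_xml):
            if rule.tag == "existence":
                col = rule.get("column")
                violations += [f"existence({col}) violated by row {i}"
                               for i, r in enumerate(rows) if not r.get(col)]
            elif rule.tag == "orderDependency":
                a, b = rule.get("left"), rule.get("right")
                # pairwise check: t1[a] < t2[a] must imply t1[b] <= t2[b]
                for i, t1 in enumerate(rows):
                    for j, t2 in enumerate(rows):
                        if t1[a] < t2[a] and t1[b] > t2[b]:
                            violations.append(f"OD({a} -> {b}) violated by rows {i},{j}")
        return violations

    rows = [{"email": "a@x.org", "zip": 10115, "city_id": 1},
            {"email": "",        "zip": 20095, "city_id": 3},
            {"email": "c@x.org", "zip": 80331, "city_id": 2}]
    print(check(rows))  # flags the empty email and the zip/city_id order violation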

    First IJCAI International Workshop on Graph Structures for Knowledge Representation and Reasoning (GKR@IJCAI'09)

    The development of effective techniques for knowledge representation and reasoning (KRR) is a crucial aspect of successful intelligent systems. Different representation paradigms, as well as their use in dedicated reasoning systems, have been extensively studied in the past. Nevertheless, new challenges, problems, and issues have emerged in the context of knowledge representation in Artificial Intelligence (AI), involving the logical manipulation of increasingly large information sets (see, for example, the Semantic Web, bioinformatics, and so on). Improvements in storage capacity and performance of computing infrastructure have also affected the nature of KRR systems, shifting their focus towards representational power and execution performance. Therefore, KRR research is faced with the challenge of developing knowledge representation structures optimized for large-scale reasoning. This new generation of KRR systems includes graph-based knowledge representation formalisms such as Bayesian Networks (BNs), Semantic Networks (SNs), Conceptual Graphs (CGs), Formal Concept Analysis (FCA), CP-nets, and GAI-nets, all of which have been successfully used in a number of applications. The goal of this workshop is to bring together researchers involved in the development and application of graph-based knowledge representation formalisms and reasoning techniques.

    Inferring Anomalies from Data using Bayesian Networks

    Existing studies on data mining have largely focused on the design of measures and algorithms to identify outliers in large, high-dimensional categorical and numeric databases. However, not much attention has been given to the interestingness of the reported outliers. One way to ascertain the interestingness and usefulness of a reported outlier is to make use of domain knowledge. In this thesis, we present measures to discover outliers based on background knowledge, represented by a Bayesian network. Using causal relationships between attributes encoded in the Bayesian framework, we demonstrate that meaningful outliers, i.e., outliers which encode important or new information, are those which violate causal relationships encoded in the model. Depending upon the nature of the data, several approaches are proposed to identify and explain anomalies using Bayesian knowledge. Outliers are often identified as data points which are "rare", "isolated", or "far away from their nearest neighbors". We show that these characteristics may not be an accurate way of describing interesting outliers. Through a critical analysis of several existing outlier detection techniques, we show why there is a mismatch between outliers as entities described by these characteristics and "real" outliers as identified using the Bayesian approach. We show that the Bayesian approaches presented in this thesis have better accuracy in mining genuine outliers, while keeping a low false positive rate, compared to traditional outlier detection techniques.
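    A minimal sketch of the underlying idea, not the thesis code: records are scored by their joint probability under a hand-specified Bayesian network, and low-probability records, i.e., those that contradict the encoded causal relationships, are flagged as anomalies. The network structure, the CPT values, and the threshold below are all invented for illustration.

    # Toy Bayesian network over booleans: rain -> wet_grass <- sprinkler.
    # Records violating the encoded causal relationships get a low joint
    # probability and are reported as anomalies. All numbers are invented.
    import math

    P_RAIN = {True: 0.2, False: 0.8}
    P_SPRINKLER = {True: 0.3, False: 0.7}
    # P(wet_grass = True | rain, sprinkler)
    P_WET = {(True, True): 0.99, (True, False): 0.9,
             (False, True): 0.85, (False, False): 0.01}

    def log_prob(rec):
        """Joint log-probability of one record under the network."""
        p_wet_true = P_WET[(rec["rain"], rec["sprinkler"])]
        p_wet = p_wet_true if rec["wet_grass"] else 1.0 - p_wet_true
        return (math.log(P_RAIN[rec["rain"]])
                + math.log(P_SPRINKLER[rec["sprinkler"]])
                + math.log(p_wet))

    records = [
        {"rain": True, "sprinkler": False, "wet_grass": True},   # plausible
        {"rain": True, "sprinkler": True,  "wet_grass": False},  # violates causality
    ]
    THRESHOLD = math.log(0.01)  # illustrative cutoff
    for rec in records:
        lp = log_prob(rec)
        print(rec, "anomaly" if lp < THRESHOLD else "ok", round(lp, 2))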

    Discovery and application of data dependencies

    Advisor: Prof. Dr. Eduardo Cunha de Almeida. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 08/09/2020. Includes references: p. 126-140. Area of concentration: Computer Science.

    Abstract: Data dependencies (or dependencies, for short) have a fundamental role in many facets of data management. As a result, recent research has been continually driving contributions to central problems in connection with dependencies. This thesis makes contributions that reach two of these problems. The first problem regards the discovery of dependencies of high expressive power. The goal is to replace the error-prone process of manual design of dependencies with an algorithm capable of discovering dependencies using only data. In this thesis, we study the discovery of denial constraints, a type of dependency that circumvents many expressiveness drawbacks. Denial constraints have enough expressive power to generalize other important types of dependencies and to express complex business rules. However, their discovery is computationally hard, since it regards a search space that is bigger than the search space seen in the discovery of simpler dependencies. This thesis introduces novel algorithmic techniques in the form of an algorithm for the discovery of denial constraints. We evaluate the design of our algorithm in a variety of scenarios: real and synthetic datasets, and varying numbers of records and columns. Our evaluation shows that, compared to state-of-the-art solutions, our algorithm significantly improves the efficiency of denial constraint discovery in terms of runtime. The second problem concerns the application of dependencies in data management. We first study the application of dependencies for improving data consistency, a critical aspect of data quality. A common way to model data inconsistencies is by identifying violations of dependencies. In that context, this thesis presents a method that extends our algorithm for the discovery of denial constraints such that it can return reliable results even if the algorithm runs on data containing some inconsistent records. A central insight is that it is possible to extract evidence from datasets to discover denial constraints that almost hold in the dataset. Our evaluation shows that our method returns denial constraints that can identify, with good precision and recall, inconsistencies in the input dataset. This thesis makes one more contribution regarding the application of dependencies for improving data consistency: it presents a system for detecting violations of dependencies efficiently. We perform an extensive evaluation of our system that includes comparisons with several different approaches; real-world and synthetic data; and various kinds of denial constraints. We show that the tested commercial database management systems start underperforming for relatively small datasets and some types of denial constraints. Our system, in turn, is up to three orders of magnitude faster than related solutions, especially for larger datasets and massive numbers of identified violations. Our final contribution regards the application of dependencies in query optimization. In particular, this thesis presents a system for the automatic discovery and selection of functional dependencies that potentially improve query executions. Our system combines representations of the functional dependencies discovered in a dataset with representations extracted from the query workloads that run on that dataset. This combination guides the selection of functional dependencies that can produce query rewritings for the incoming queries. Our experimental evaluation shows that our system selects relevant functional dependencies, which can help in reducing the overall query response time. Keywords: Data profiling. Data quality. Data cleaning. Data dependencies. Query execution.
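    As a concrete illustration of what detecting a denial constraint violation involves, the sketch below checks the classic salary/tax constraint "no tuple may have a higher salary but a lower tax than another" by enumerating tuple pairs. The relation and the quadratic pairwise scan are illustrative assumptions only; the system described in the thesis uses far more efficient techniques.

    # Naive denial-constraint violation detection: find all tuple pairs with
    # t1.salary > t2.salary and t1.tax < t2.tax. Illustrative only; real
    # detectors avoid the quadratic pairwise scan.
    from itertools import permutations

    employees = [
        {"name": "ann", "salary": 90_000, "tax": 22_000},
        {"name": "bob", "salary": 60_000, "tax": 25_000},  # more tax on less pay
        {"name": "eve", "salary": 60_000, "tax": 15_000},
    ]

    def dc_violations(rows):
        """Pairs (t1, t2) violating the salary/tax denial constraint."""
        return [(t1["name"], t2["name"])
                for t1, t2 in permutations(rows, 2)
                if t1["salary"] > t2["salary"] and t1["tax"] < t2["tax"]]

    print(dc_violations(employees))  # [('ann', 'bob')]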

    Database repairs with answer set programming

    Dissertation submitted for the degree of Master in Engenharia Informática. Integrity constraints play an important part in database design. They are what allow databases to store accurate information, since they impose properties that must always hold. However, none of the existing database management systems allows the specification of new integrity constraints if the stored information already violates them. In this dissertation, we developed DRSys, an application that allows the user to specify integrity constraints that they wish to enforce in the database. If the database becomes inconsistent with respect to such integrity constraints, DRSys returns to the user possible ways to restore consistency by inserting tuples into or deleting tuples from the original database, thus creating a new consistent database: a database repair. Since we are dealing with databases, we also want to change as little information as possible, so DRSys offers the user two distinct minimality criteria when repairing the database: minimality under set inclusion or minimality under cardinality of operations. We approached the database repair problem using the problem-solving capacity of Answer Set Programming (ASP), which benefits from the simple specification of problems and from the existence of solvers that solve those problems efficiently. DRSys is a database repair application built on top of the database management system PostgreSQL. Furthermore, we developed a graphical user interface to aid the user in the whole process of defining new integrity constraints and repairing the database. We evaluate the performance and scalability of DRSys by presenting several tests in different situations, also exploring particular features of the system.
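    The following sketch shows the flavor of ASP-based repair, assuming the clingo Python package; the emp relation, the key constraint, and the encoding are invented for illustration and are not DRSys's actual programs. Each answer set describes one repair, and the #minimize directive realizes the minimality-under-cardinality criterion (minimality under set inclusion would be encoded differently).

    # A minimal ASP repair sketch using the clingo Python API (pip install clingo).
    # The facts, key constraint, and encoding below are illustrative only.
    import clingo

    PROGRAM = """
    % possibly inconsistent database: id 1 appears twice
    emp(1, alice). emp(1, bob). emp(2, carol).

    % every stored tuple is either kept or deleted
    { del(I, N) } :- emp(I, N).
    kept(I, N) :- emp(I, N), not del(I, N).

    % integrity constraint: the first attribute is a key
    :- kept(I, N1), kept(I, N2), N1 != N2.

    % minimality under cardinality: delete as few tuples as possible
    #minimize { 1,I,N : del(I, N) }.
    #show del/2.
    """

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    # the last model reported is an optimal (cardinality-minimal) repair,
    # e.g. del(1,alice) or del(1,bob)
    ctl.solve(on_model=lambda m: print(m.symbols(shown=True)))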