6 research outputs found

    Discovery Agent: An Interactive Approach for the Discovery of Inclusion Dependencies

    The information integration problem is a hard yet important problem in the field of databases. The goal of information integration is to provide unified views over diverse data spread across several sources. This subject has been studied for a long time, and the integration can be performed in several ways; schema integration using inclusion dependency constraints is one of them. The problem of discovering inclusion dependencies among input relations is NP-complete in the number of attributes. Two significant algorithms address this problem: FIND2 by Andreas Koeller and Zigzag by Fabien De Marchi. Both algorithms discover inclusion dependencies among input relations on small-scale databases with relatively few attributes. Because of discrepancies in the data, they do not scale well to higher numbers of attributes. We propose an approach that incorporates human intelligence into the algorithmic discovery of inclusion dependencies. To use human intelligence, we design an agent, called the discovery agent, that provides a communication bridge between an algorithm and a user. The discovery agent shows the progress of the discovery process and provides sufficient user controls to steer the discovery process in the right direction. In this thesis, we present a prototype of the discovery agent based on the FIND2 algorithm, which exploits most of the phase-wise behavior of the algorithm, and we demonstrate how the human observer and the algorithm work together to achieve higher performance and better output accuracy. The goal of the discovery agent is to make the discovery process truly interactive between system and user and to produce the desired, accurate results. The discovery agent can deliver an applicable and feasible approximation to an NP-complete problem with the help of a suitable algorithm and appropriate human expertise.
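    As a brief, hedged illustration of the basic test that discovery algorithms such as FIND2 build on, the sketch below checks a single candidate unary inclusion dependency (every value of one column appears in another column). The relations, column names, and data are hypothetical, and real discovery must also search the exponential space of higher-arity candidates, which is where interactive user guidance can help.

```python
# Minimal sketch of a unary inclusion dependency (IND) test, assuming the
# relations fit in memory as lists of dicts. Discovery algorithms such as
# FIND2 work against a database and explore combinations of attributes;
# this only shows the basic subset test they build on.

def holds_ind(lhs_rows, lhs_attr, rhs_rows, rhs_attr):
    """Return True if every non-null value of lhs_attr appears in rhs_attr."""
    rhs_values = {row[rhs_attr] for row in rhs_rows if row[rhs_attr] is not None}
    return all(row[lhs_attr] in rhs_values
               for row in lhs_rows if row[lhs_attr] is not None)

# Hypothetical data: orders.customer_id should be included in customers.id.
customers = [{"id": 1}, {"id": 2}, {"id": 3}]
orders = [{"customer_id": 1}, {"customer_id": 3}]

print(holds_ind(orders, "customer_id", customers, "id"))  # True
```

    In the interactive setting described above, a user observing the discovery process could discard candidates whose direction of inclusion is semantically meaningless, pruning the search before higher-arity combinations are explored.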

    Application of Definability to Query Answering over Knowledge Bases

    Answering object queries (i.e., instance retrieval) is a central task in ontology-based data access (OBDA). Performing this task involves reasoning with respect to a knowledge base K (i.e., an ontology) over some description logic (DL) dialect L. As the expressive power of L grows, so does the complexity of reasoning with respect to K. Therefore, eliminating the need to reason with respect to a knowledge base K is desirable. In this work, we propose an optimization that improves the performance of answering object queries by eliminating the need to reason with respect to the knowledge base and, instead, utilizing cached query results when possible. In particular, given a DL dialect L, an object query C over some knowledge base K, and a set of cached query results S = {S1, ..., Sn} obtained from evaluating past queries, we rewrite C into an equivalent query D that can be evaluated with respect to an empty knowledge base, using cached query results S' = {Si1, ..., Sim}, where S' is a subset of S. The new query D is an interpolant for the original query C with respect to K and S. To find D, we leverage a tool for enumerating interpolants of a given sentence with respect to some theory. We describe a procedure that maps a knowledge base K, expressed in a description logic dialect of first-order logic, and an object query C into an equivalent theory and query that are input to the interpolant-enumerating tool, and that maps the resulting interpolants into an object query D that can be evaluated over an empty knowledge base. We show the efficacy of our approach through experimental evaluation on a Lehigh University Benchmark (LUBM) data set, as well as on a synthetic data set, LUBMMOD, that we created by augmenting an LUBM ontology with additional axioms.
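    As a toy, hedged illustration of the payoff described above (not the thesis procedure itself): once a query has been rewritten into a combination of previously cached query results, it can be answered with plain set operations and no DL reasoning. The concept names, cached extensions, and the particular rewriting below are all hypothetical.

```python
# Toy illustration: if the knowledge base entailed, say,
#   Student == GraduateStudent OR UndergraduateStudent,
# then an object query for Student could be rewritten into a query D over the
# cached extensions of GraduateStudent and UndergraduateStudent and answered
# with set operations over an empty knowledge base.

cache = {
    "GraduateStudent": {"alice", "bob"},
    "UndergraduateStudent": {"carol"},
    "Professor": {"dan"},
}

def evaluate(rewritten, cache):
    """Evaluate a rewritten query given as a cached concept name or as
    nested ('union' | 'intersect', arg1, arg2, ...) tuples."""
    if isinstance(rewritten, str):
        return cache[rewritten]
    op, *args = rewritten
    sets = [evaluate(a, cache) for a in args]
    return set.union(*sets) if op == "union" else set.intersection(*sets)

# Hypothetical rewriting D of the query "Student":
D = ("union", "GraduateStudent", "UndergraduateStudent")
print(sorted(evaluate(D, cache)))  # ['alice', 'bob', 'carol']
```

    The hard part, which the thesis delegates to an interpolant-enumerating tool, is establishing that such a rewriting is actually equivalent to the original query with respect to K.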

    Definition of cross-domain indexes and ordering functions in relational algebra and its usage in relational database management systems

    In this thesis, a mathematical model that describes a “Unique Constraint Domain” is defined. Next, the “Ordered Unique Constraint Domain” is also mathematically defined. With those definitions, a cross-domain ordering is defined as well. It is then shown that relationships between tables in a relational database management system can be defined in forms other than the usual ones, using cross-domain indexes based on cross-domain ordering. It is shown that all foreign keys in a database can be transformed into indexes, with the benefit of speeding up data access, and that this technique is consistent with current modeling techniques. It is shown how the index structure, with indexes defined as functions, can provide support for relationship roles, for relationships involving more than two tables, and for special sorting orders. Adding to a relation a mathematical function that can sort it, while demonstrating that the closure property of relations is still preserved, shows that this mathematical model can be used as an extension of the base relational model. Next, it is shown that with this new technique commercial database engines should not suffer degraded performance, because all supporting structures are already present, and that in some cases better performance might even be achieved. Code for a prototype based on a commercial database engine has been included as an annex to show how this new technique can be used.
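    The following is only a rough in-memory analogue, under assumed table and column names, of the idea that an index keyed by ordering functions over two different domains can answer a lookup that would otherwise go through a foreign-key join; the thesis develops this formally in relational algebra and prototypes it on a commercial database engine.

```python
# In-memory analogue of a cross-domain index: one sorted structure whose key
# functions map values from two different domains (hypothetically, order ids
# and invoice numbers) onto a shared ordered key space, so rows from both
# relations are located with a single lookup.
import bisect

orders   = [{"order_id": 10, "item": "disk"}, {"order_id": 12, "item": "cpu"}]
invoices = [{"invoice_no": "INV-10"}, {"invoice_no": "INV-12"}]

def key_orders(row):      # ordering function for the first domain
    return row["order_id"]

def key_invoices(row):    # ordering function for the second domain
    return int(row["invoice_no"].split("-")[1])

# Build one index over both relations, ordered by the shared key space.
index = sorted(
    [(key_orders(r), "orders", r) for r in orders] +
    [(key_invoices(r), "invoices", r) for r in invoices]
)
keys = [k for k, _, _ in index]

def lookup(k):
    """Return all (relation name, row) pairs whose ordering key equals k."""
    i = bisect.bisect_left(keys, k)
    out = []
    while i < len(index) and index[i][0] == k:
        out.append(index[i][1:])
        i += 1
    return out

print(lookup(12))  # matching rows from both 'orders' and 'invoices'
```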

    Discovery and application of data dependencies

    Advisor: Prof. Dr. Eduardo Cunha de Almeida. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 08/09/2020. Includes references: p. 126-140. Area of concentration: Computer Science.
    Abstract: Data dependencies (or dependencies, for short) have a fundamental role in many facets of data management. As a result, recent research has been continually driving contributions to central problems in connection with dependencies. This thesis makes contributions that reach two of these problems. The first problem regards the discovery of dependencies of high expressive power. The goal is to replace the error-prone process of manually designing dependencies with an algorithm capable of discovering dependencies using only data. In this thesis, we study the discovery of denial constraints, a type of dependency that circumvents many expressiveness drawbacks. Denial constraints have enough expressive power to generalize other important types of dependencies and to express complex business rules. However, their discovery is computationally hard, since it involves a search space that is bigger than the search space seen in the discovery of simpler dependencies. This thesis introduces novel algorithmic techniques in the form of an algorithm for the discovery of denial constraints. We evaluate the design of our algorithm in a variety of scenarios: real and synthetic datasets, and varying numbers of records and columns. Our evaluation shows that, compared to state-of-the-art solutions, our algorithm significantly improves the efficiency of denial constraint discovery in terms of runtime. The second problem concerns the application of dependencies in data management. We first study the application of dependencies for improving data consistency, a critical aspect of data quality. A common way to model data inconsistencies is by identifying violations of dependencies. In that context, this thesis presents a method that extends our algorithm for the discovery of denial constraints so that it can return reliable results even if the algorithm runs on data containing some inconsistent records. A central insight is that it is possible to extract evidence from datasets to discover denial constraints that almost hold in the dataset. Our evaluation shows that our method returns denial constraints that can identify, with good precision and recall, inconsistencies in the input dataset. This thesis makes one more contribution regarding the application of dependencies for improving data consistency: it presents a system for detecting violations of dependencies efficiently. We perform an extensive evaluation of this system that includes comparisons with several different approaches, real-world and synthetic data, and various kinds of denial constraints. We show that the tested commercial database management systems start underperforming for relatively small datasets and certain kinds of denial constraints. Our system, in turn, is up to three orders of magnitude faster than related solutions, especially for larger datasets and massive numbers of identified violations. Our final contribution regards the application of dependencies in query optimization. In particular, this thesis presents a system for the automatic discovery and selection of functional dependencies that potentially improve query execution. Our system combines representations of the functional dependencies discovered in a dataset with representations of the query workloads that run against that dataset. This combination guides the selection of functional dependencies that can produce query rewritings for the incoming queries. Our experimental evaluation shows that our system selects relevant functional dependencies, which can help reduce overall query response time. Keywords: Data profiling. Data quality. Data cleaning. Data dependencies. Query execution.
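    As a small, hedged illustration of what a denial constraint violation is (not of the discovery or detection systems contributed by the thesis), the sketch below checks one classic constraint over pairs of tuples: no tuple may have a higher salary and a lower tax than another. The column names and data are hypothetical, and the quadratic pairwise scan is exactly the kind of cost that efficient violation-detection systems avoid.

```python
# Naive detector for one denial constraint (DC):
#   not( t1.salary > t2.salary  and  t1.tax < t2.tax )
# i.e. a higher salary must not come with a lower tax. Real DC discovery and
# violation detection avoid this O(n^2) pairwise scan; the point here is only
# to show what counts as a violation.
from itertools import permutations

rows = [
    {"name": "ann",  "salary": 9000, "tax": 900},
    {"name": "ben",  "salary": 7000, "tax": 700},
    {"name": "cruz", "salary": 8000, "tax": 950},  # inconsistent with ann
]

def dc_violations(rows):
    """Yield ordered pairs of tuples (by name) that jointly violate the DC."""
    for t1, t2 in permutations(rows, 2):
        if t1["salary"] > t2["salary"] and t1["tax"] < t2["tax"]:
            yield t1["name"], t2["name"]

print(list(dc_violations(rows)))  # [('ann', 'cruz')]
```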

    Exploiting uniqueness in query optimization
