8 research outputs found

    Determining the relative accuracy of attributes


    Unsupervised String Transformation Learning for Entity Consolidation

    Data integration has been a long-standing challenge in data management, with many applications. A key step in data integration is entity consolidation. It takes a collection of clusters of duplicate records as input and produces a single "golden record" for each cluster, which contains the canonical value for each attribute. Truth discovery and data fusion methods, as well as Master Data Management (MDM) systems, can be used for entity consolidation. However, to achieve better results, the variant values (i.e., values that are logically the same but formatted differently) in the clusters need to be consolidated before applying these methods. For this purpose, we propose a data-driven method to standardize the variant values based on two observations: (1) the variant values can usually be transformed to the same representation (e.g., "Mary Lee" and "Lee, Mary"), and (2) the same transformation often appears repeatedly across different clusters (e.g., transposing the first and last name). Our approach first uses an unsupervised method to generate groups of value pairs that can be transformed in the same way (i.e., they share a transformation). The groups are then presented to a human for verification, and the approved ones are used to standardize the data. On a real-world dataset with 17,497 records, our method achieved 75% recall and 99.5% precision in standardizing variant values by asking a human 100 yes/no questions, clearly outperforming a state-of-the-art data wrangling tool.
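
    A minimal sketch of the general idea (not the authors' actual algorithm): variant-value pairs are grouped by a shared token-rearrangement signature, so that a human can approve or reject each group with a single yes/no question. The transformation_signature helper, its signature format and the example pairs are invented here for illustration.

        from collections import defaultdict

        def transformation_signature(src, dst):
            """Describe how the tokens of src are rearranged to form dst.

            Pairs that are rewritten "in the same way" share the same
            signature; returns None when dst contains tokens that cannot
            be explained as a rearrangement of src.
            """
            src_tokens = src.replace(",", " , ").split()
            dst_tokens = dst.replace(",", " , ").split()
            sig = []
            for tok in dst_tokens:
                if tok in src_tokens:
                    sig.append(("move", src_tokens.index(tok)))
                else:
                    return None
            dropped = [t for t in src_tokens if t not in dst_tokens]
            return tuple(sig) + tuple(("drop", t) for t in dropped)

        def group_pairs(pairs):
            """Group variant-value pairs that share one transformation."""
            groups = defaultdict(list)
            for src, dst in pairs:
                sig = transformation_signature(src, dst)
                if sig is not None:
                    groups[sig].append((src, dst))
            return groups

        pairs = [("Lee, Mary", "Mary Lee"),
                 ("Smith, John", "John Smith"),
                 ("IBM Corp.", "IBM")]
        for sig, members in group_pairs(pairs).items():
            # each group would be shown to a human as one yes/no question
            print(len(members), "pair(s):", members)

    In this toy run the two name transpositions land in one group and the company-suffix drop in another, which is the kind of grouping that keeps the number of questions small.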

    Improving data quality: data consistency, deduplication, currency and accuracy

    Data quality is one of the key problems in data management. An unprecedented amount of data has been accumulated and has become a valuable asset of organizations, and the value of the data relies greatly on its quality. However, data is often dirty in real life: it may be inconsistent, duplicated, stale, inaccurate or incomplete, which can reduce its usability and increase costs for businesses. Consequently, the need to improve data quality arises; it comprises five central issues: data consistency, data deduplication, data currency, data accuracy and information completeness. This thesis presents the results of our work on the first four issues: data consistency, deduplication, currency and accuracy.
    The first part of the thesis investigates incremental verification of data consistency in distributed data. Given a distributed database D, a set S of conditional functional dependencies (CFDs), the set V of violations of the CFDs in D, and updates ΔD to D, the problem is to find, with minimum data shipment, the changes ΔV to V in response to ΔD. Although the problems are intractable, we show that they are bounded: there exist algorithms to detect errors such that their computational cost and data shipment are both linear in the size of ΔD and ΔV, independent of the size of the database D. Such incremental algorithms are provided for both vertically and horizontally partitioned data, and we show that the algorithms are optimal.
    The second part of the thesis studies the interaction between record matching and data repairing. Record matching, the main technique underlying data deduplication, aims to identify tuples that refer to the same real-world object, while repairing makes a database consistent by fixing errors in the data using constraints. These are treated as separate processes in most data cleaning systems, based on heuristic solutions. Our studies show, however, that repairing can effectively help identify matches, and vice versa. To capture this interaction, we propose a uniform framework that seamlessly unifies repairing and matching operations to clean a database based on integrity constraints, matching rules and master data.
    The third part of the thesis presents our study of finding certain fixes that are absolutely correct for data repairing. Data repairing methods based on integrity constraints are normally heuristic and may not find certain fixes; worse still, they may even introduce new errors when attempting to repair the data. This is unacceptable when repairing critical data such as medical records, in which a seemingly minor error can have disastrous consequences. We propose a framework and an algorithm to find certain fixes based on master data, a class of editing rules and user interactions, and we develop a prototype system.
    The fourth part of the thesis introduces inferring data currency and consistency for conflict resolution, where data currency aims to identify the current values of entities, and conflict resolution combines tuples that pertain to the same real-world entity into a single tuple while resolving conflicts, which is also an important issue for data deduplication. We show that data currency and consistency help each other in resolving conflicts. We study a number of associated fundamental problems and develop an approach to conflict resolution by inferring data currency and consistency.
    The last part of the thesis reports our study of data accuracy, addressing the long-standing relative accuracy problem: given tuples t1 and t2 that refer to the same entity e, determine whether t1[A] is more accurate than t2[A], i.e., whether t1[A] is closer to the true value of attribute A of e than t2[A]. We introduce a class of accuracy rules and an inference system with a chase procedure to deduce relative accuracy, and we study the related fundamental problems. We also propose a framework and algorithms for inferring accurate values with user interaction.
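
    As a concrete reference point for the first part of the thesis, the sketch below checks a single conditional functional dependency (CFD) over an in-memory table. It is a plain, non-incremental, single-site check, not the incremental algorithms for distributed data described above, and the relation, attributes and pattern are made up.

        def cfd_violations(rows, lhs, rhs, pattern):
            """Return index pairs of rows violating a CFD (lhs -> rhs, pattern).

            lhs:     attribute names on the left-hand side
            rhs:     the single right-hand-side attribute
            pattern: dict of attribute -> required constant; the dependency
                     only applies to rows matching the pattern.
            """
            seen = {}          # lhs values -> (row index, rhs value)
            violations = []
            for i, row in enumerate(rows):
                if any(row.get(a) != v for a, v in pattern.items()):
                    continue   # row is outside the pattern's scope
                key = tuple(row[a] for a in lhs)
                if key in seen and seen[key][1] != row[rhs]:
                    violations.append((seen[key][0], i))
                else:
                    seen.setdefault(key, (i, row[rhs]))
            return violations

        # CFD: for tuples with country = 'UK', zip determines city
        rows = [
            {"country": "UK", "zip": "EH8", "city": "Edinburgh"},
            {"country": "UK", "zip": "EH8", "city": "London"},   # violation
            {"country": "NL", "zip": "EH8", "city": "Utrecht"},  # not covered
        ]
        print(cfd_violations(rows, ["zip"], "city", {"country": "UK"}))

    The incremental problem studied in the thesis would maintain the violation set V under updates ΔD rather than recomputing it from scratch, which this sketch does not attempt.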

    DPIF: A framework for distinguishing unintentional quality problems from potential shilling attacks

    Maliciously manufactured user profiles are often generated in batches for shilling attacks. These profiles may introduce many data quality problems, but they are not worth repairing. Since repairing data is expensive, we need to scrutinize the data and pick out the records that really deserve to be repaired. In this paper, we focus on how to distinguish unintentional data quality problems from the batch-generated fake user profiles of shilling attacks. A two-step framework named DPIF is proposed to make this distinction. Based on the framework, the metrics of homology and suspicious degree are proposed. Homology represents both the textual similarities and the data quality problems shared by different profiles, while the suspicious degree is used to identify potential attacks. Experiments on real-life data verify that the proposed framework and the corresponding metrics are effective.
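
    The abstract does not define the two metrics formally, so the sketch below only illustrates the underlying intuition that batch-generated shilling profiles resemble one another; the Jaccard-based score, the threshold and the example profiles are hypothetical and not taken from the paper.

        def jaccard(a, b):
            """Jaccard similarity of two token sets."""
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def suspicious_degrees(profiles, threshold=0.6):
            """Hypothetical score: fraction of other profiles this one closely resembles.

            Batch-generated shilling profiles tend to look alike, so a high
            score is treated as suspicious; an honest profile with accidental
            quality problems typically does not resemble many others.
            """
            scores = {}
            for name, tokens in profiles.items():
                close = sum(
                    1 for other, other_tokens in profiles.items()
                    if other != name and jaccard(tokens, other_tokens) >= threshold
                )
                scores[name] = close / (len(profiles) - 1)
            return scores

        profiles = {
            "u1": "great product five stars buy now".split(),
            "u2": "great product five stars buy today".split(),
            "u3": "battery died after two weeks disappointed".split(),
        }
        print(suspicious_degrees(profiles))   # u1 and u2 resemble each other; u3 does not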

    Recognizing Determinism in Prioritized Repairing of Inconsistent Databases

    A repair of an inconsistent database is traditionally defined as a consistent database that differs from the inconsistent one in a "minimal way." As there are often reasons to prefer one repair over another, researchers have introduced and investigated the framework of preferred repairs, where a priority relation between facts is lifted to a priority relation between consistent databases, and repairs are restricted to ones that are optimal in the lifted sense. In this paper we describe our recent results on the complexity of deciding whether the priority relation suffices to clean the database unambiguously, or in other words, whether there is exactly one optimal repair. In particular, we show that different conventional semantics of priority lifting entail markedly different complexities.
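
    A much-simplified illustration of when prioritized repairing is deterministic, assuming conflicts come as independent groups and a Pareto-style reading in which an optimal repair keeps one undominated fact per group; the paper's lifting semantics and complexity analysis go well beyond this sketch, and the example facts and priority relation are invented.

        def unique_optimal_repair(conflict_groups, prefers):
            """Return the repair if it is unique, else None.

            conflict_groups: list of sets of mutually conflicting facts
                             (e.g. tuples violating the same key).
            prefers(x, y):   True if fact x has priority over fact y.

            Under the simplified Pareto-style reading used here, a repair
            keeps one fact per group, and it is unique exactly when every
            group has a single undominated fact.
            """
            repair = []
            for group in conflict_groups:
                undominated = [f for f in group
                               if not any(prefers(g, f) for g in group if g != f)]
                if len(undominated) != 1:
                    return None   # ambiguity: more than one optimal repair
                repair.append(undominated[0])
            return repair

        # two facts claim different cities for the same person; the fresher one wins
        def prefers(x, y):
            return x[0] == y[0] and x[2] > y[2]

        facts = {("alice", "Edinburgh", 2021), ("alice", "London", 2019)}
        print(unique_optimal_repair([facts], prefers))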

    Cleaning structured event logs: A graph repair approach


    AcCORD: an asynchronous collaborative model for data reconciliation

    Reconciliation is the process of providing a consistent view of data imported from different sources. Despite some efforts reported in the literature to provide data reconciliation solutions with asynchronous collaboration, the challenge of reconciling data when multiple users work asynchronously over local copies of the same imported data has received less attention. In this thesis we investigate this challenge. We propose AcCORD, an asynchronous collaborative data reconciliation model. It stores users' integration decisions in logs, called repositories. Repositories keep data provenance, that is, the operations applied to the data sources that led to the current state of the data. Each user has her own repository for storing the provenance: whenever inconsistencies among imported sources are detected, the user may autonomously make decisions to resolve them, and integration decisions that are executed locally are registered in her repository. Integration decisions are shared among collaborators by importing each other's repositories. Since users may have different points of view, repositories may themselves be inconsistent. Therefore, AcCORD also introduces several policies that can be applied by different users in order to resolve conflicts among repositories and reconcile their integration decisions. Depending on the applied policy, the final view of the imported sources may either be the same for all users, that is, a single integrated view, or result in distinct local views for each of them. Furthermore, AcCORD encompasses a decision propagation method, which aims to prevent a user from making inconsistent decisions about the same data conflict when it is present in different sources, thus guaranteeing a more effective reconciliation process. AcCORD was validated through performance tests that investigated the proposed policies and through user interviews that investigated not only the proposed policies but also the quality of the multi-user reconciliation. The results demonstrated the efficiency and efficacy of AcCORD and highlighted its flexibility to generate either a single integrated view or different local views. The interviews revealed different perceptions among users regarding the quality of the results provided by AcCORD, including aspects related to consistency, acceptability, correctness, time saving and satisfaction.
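
    A toy sketch of the repository-merging idea, with made-up decision records and two illustrative policies: prefer_latest makes all users converge on a single integrated view, while keep_mine preserves distinct local views. AcCORD's actual provenance model and reconciliation policies are richer than this.

        from dataclasses import dataclass

        @dataclass
        class Decision:
            conflict_id: str     # which data conflict the decision resolves
            chosen_value: str    # the value the user picked
            timestamp: int       # when the decision was taken

        def merge_repositories(mine, theirs, policy="prefer_latest"):
            """Reconcile two users' decision repositories under a simple policy."""
            merged = {d.conflict_id: d for d in mine}
            for d in theirs:
                local = merged.get(d.conflict_id)
                if local is None:
                    merged[d.conflict_id] = d
                elif policy == "prefer_latest" and d.timestamp > local.timestamp:
                    merged[d.conflict_id] = d
                # under "keep_mine" the local decision simply stays
            return list(merged.values())

        mine = [Decision("person1.name", "Mary Lee", 10)]
        theirs = [Decision("person1.name", "Lee, Mary", 12),
                  Decision("person1.email", "mary@example.org", 11)]
        print(merge_repositories(mine, theirs, policy="prefer_latest"))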