Rapidash: Efficient Constraint Discovery via Rapid Verification
Denial Constraint (DC) is a well-established formalism that captures a wide
range of integrity constraints commonly encountered, including candidate keys,
functional dependencies, and ordering constraints, among others. Given their
significance, there has been considerable research interest in achieving fast
verification and discovery of exact DCs within the database community. Despite
the significant advancements in the field, prior work exhibits notable
limitations when confronted with large-scale datasets. The current
state-of-the-art exact DC verification algorithm has worst-case time complexity
quadratic in the number of rows in the dataset. In the
context of DC discovery, existing methodologies rely on a two-step algorithm
that commences with an expensive data structure-building phase, often requiring
hours to complete even for datasets containing only a few million rows.
Consequently, users are left without any insights into the DCs that hold on
their dataset until this lengthy building phase concludes. In this paper, we
introduce Rapidash, a comprehensive framework for DC verification and
discovery. Our work makes a dual contribution. First, we establish a connection
between orthogonal range search and DC verification. We introduce a novel exact
DC verification algorithm that demonstrates near-linear time complexity,
representing a theoretical improvement over prior work. Second, we propose an
anytime DC discovery algorithm that leverages our novel verification algorithm
to gradually provide DCs to users, eliminating the need for the time-intensive
building phase observed in prior work. To validate the effectiveness of our
algorithms, we conduct extensive evaluations on four large-scale production
datasets. Our results reveal that our DC verification algorithm achieves up to
40 times faster performance compared to state-of-the-art approaches.
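To make the range-search connection concrete, here is a minimal sketch (my own illustration, not Rapidash itself) of verifying one two-predicate inequality DC in near-linear time: after sorting by one attribute, a violating pair exists exactly when an order-statistics query over the other attribute finds an earlier tuple with a strictly larger value, which a Fenwick tree answers in O(log n) per tuple instead of comparing all O(n^2) pairs.

```python
class Fenwick:
    """Fenwick (binary indexed) tree over value ranks: point add, prefix-count query."""
    def __init__(self, n):
        self.n, self.tree = n, [0] * (n + 1)

    def add(self, i):                  # 1-based rank
        while i <= self.n:
            self.tree[i] += 1
            i += i & -i

    def count_le(self, i):             # how many inserted ranks are <= i
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i
        return s


def violates_dc(rows):
    """Return True iff some tuple pair (s, t) has s.A > t.A and s.B < t.B,
    i.e. the DC "not (s.A > t.A and s.B < t.B)" is violated, in O(n log n)."""
    rank = {b: r + 1 for r, b in enumerate(sorted({b for _, b in rows}))}
    fw, inserted = Fenwick(len(rank)), 0
    rows = sorted(rows)                # ascending by A, ties grouped together
    i = 0
    while i < len(rows):
        j = i
        while j < len(rows) and rows[j][0] == rows[i][0]:
            j += 1
        for _, b in rows[i:j]:         # query before inserting the tie group
            if inserted - fw.count_le(rank[b]) > 0:   # an earlier tuple has a larger B
                return True
        for _, b in rows[i:j]:
            fw.add(rank[b])
            inserted += 1
        i = j
    return False


# Toy check: (experience, salary) pairs; salary must not drop as experience grows.
print(violates_dc([(1, 50), (3, 60), (5, 55)]))   # True: (5, 55) vs (3, 60)
```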
Extracting and Cleaning RDF Data
The RDF data model has become a prevalent format to represent heterogeneous data because of its versatility. The capability of extracting information from its native formats and representing it in triple format offers a simple yet powerful way of modelling data obtained from multiple sources. In addition, the triple format and schema constraints of the RDF model make RDF data easy to process as labeled, directed graphs.
This graph representation of RDF data supports higher-level analytics by enabling querying with different techniques and query languages, e.g., SPARQL. Analytics that require structured data are supported by transforming the graph data on-the-fly to populate the target schema needed for downstream analysis. These target schemas are defined by downstream applications according to their information needs.
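A small sketch of the triple model and of projecting the graph on the fly into a flat target schema via SPARQL (the rdflib library and the example vocabulary are my choices for illustration, not part of this work):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, FOAF

EX = Namespace("http://example.org/")          # hypothetical vocabulary
g = Graph()

# Each fact is a (subject, predicate, object) triple in a labeled, directed graph.
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, EX.worksFor, EX.acme))
g.add((EX.acme, FOAF.name, Literal("ACME Corp")))

# Populate a flat (person, employer) target schema on the fly via SPARQL.
query = """
SELECT ?personName ?employerName WHERE {
    ?person a foaf:Person ;
            foaf:name ?personName ;
            ex:worksFor ?employer .
    ?employer foaf:name ?employerName .
}
"""
for row in g.query(query, initNs={"foaf": FOAF, "ex": EX}):
    print(row.personName, "works for", row.employerName)
```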
The flexibility of RDF data brings two main challenges. First, the extraction of RDF data is a complex task that may involve domain expertise about the information required to be extracted for different applications. Another significant aspect of analyzing RDF data is its quality, which depends on multiple factors including the reliability of data sources and the accuracy of the extraction systems. The quality of the analysis depends mainly on the quality of the underlying data. Therefore, evaluating and improving the quality of RDF data has a direct effect on the correctness of downstream analytics.
This work presents multiple approaches related to the extraction and quality evaluation of RDF data. To cope with the large amounts of data that need to be extracted, we present DSTLR, a scalable framework to extract RDF triples from semi-structured and unstructured data sources. For rare entities that fall on the long tail of information, there may not be enough signals to support high-confidence extraction. To address this problem, we present an approach to estimate property values for long-tail entities. We also present multiple algorithms and approaches that focus on the quality of RDF data. These include discovering quality constraints from RDF data and utilizing machine learning techniques to repair errors in RDF data.
Discovery and application of data dependencies
Advisor: Prof. Dr. Eduardo Cunha de Almeida. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 08/09/2020. Includes references: p. 126-140. Area of concentration: Computer Science.
Data dependencies (or dependencies, for short) have a fundamental role in many facets of data management. As a result, recent research has been continually driving contributions to central problems in connection with dependencies. This thesis makes contributions that reach two of these problems. The first problem regards the discovery of dependencies of high expressive power. The goal is to replace the error-prone process of manually designing dependencies with an algorithm capable of discovering dependencies using only data. In this thesis, we study the discovery of denial constraints, a type of dependency that circumvents many expressiveness drawbacks. Denial constraints have enough expressive power to generalize other important types of dependencies and to express complex business rules. However, their discovery is computationally hard, since it involves a search space that is bigger than the search space seen in the discovery of simpler dependencies. This thesis introduces novel algorithmic techniques in the form of an algorithm for the discovery of denial constraints. We evaluate the design of our algorithm in a variety of scenarios: real and synthetic datasets, and varying numbers of records and columns. Our evaluation shows that, compared to state-of-the-art solutions, our algorithm significantly improves the efficiency of denial constraint discovery in terms of runtime. The second problem concerns the application of dependencies in data management. We first study the application of dependencies for improving data consistency, a critical aspect of data quality. A common way to model data inconsistencies is by identifying violations of dependencies. In that context, this thesis presents a method that extends our algorithm for the discovery of denial constraints such that it can return reliable results even if the algorithm runs on data containing some inconsistent records. A central insight is that it is possible to extract evidence from datasets to discover denial constraints that almost hold in the dataset. Our evaluation shows that our method returns denial constraints that can identify, with good precision and recall, inconsistencies in the input dataset. This thesis makes one more contribution regarding the application of dependencies for improving data consistency: it presents a system for detecting violations of dependencies efficiently. We perform an extensive evaluation of our system that includes comparisons with several different approaches; real-world and synthetic data; and various kinds of denial constraints. We show that the tested commercial database management systems start underperforming for relatively small datasets and production dependencies in the form of denial constraints. Our system, in turn, is up to three orders of magnitude faster than related solutions, especially for larger datasets and massive numbers of identified violations. Our final contribution regards the application of dependencies in query optimization. In particular, this thesis presents a system for the automatic discovery and selection of functional dependencies that potentially improve query execution. Our system combines representations of the functional dependencies discovered in a dataset with representations of the query workloads that run over that dataset. This combination guides the selection of functional dependencies that can produce query rewritings for the incoming queries. Our experimental evaluation shows that our system selects relevant functional dependencies, which can help reduce the overall query response time. Keywords: Data profiling. Data quality. Data cleaning. Data dependencies. Query execution.
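As a toy illustration of the evidence idea mentioned above (my own sketch, not the thesis's discovery algorithm): for every tuple pair one records which predicates it satisfies, and a denial constraint "almost holds" when only a small fraction of these evidence sets contain all of its predicates.

```python
from itertools import combinations

# Hypothetical predicate space over a small employee table; each predicate
# compares two tuples s and t.
PREDICATES = {
    "s.salary > t.salary": lambda s, t: s["salary"] > t["salary"],
    "s.tax < t.tax":       lambda s, t: s["tax"] < t["tax"],
    "s.state == t.state":  lambda s, t: s["state"] == t["state"],
}

def evidence_sets(rows):
    """For every ordered tuple pair, record the set of satisfied predicates.
    Discovery then looks for predicate combinations that (almost) no evidence set
    contains: those combinations form denial constraints that (almost) hold."""
    evidence = []
    for s, t in combinations(rows, 2):
        for a, b in ((s, t), (t, s)):
            evidence.append(frozenset(n for n, p in PREDICATES.items() if p(a, b)))
    return evidence

rows = [
    {"salary": 90, "tax": 30, "state": "NY"},
    {"salary": 60, "tax": 20, "state": "NY"},
    {"salary": 95, "tax": 25, "state": "CA"},
]
ev = evidence_sets(rows)
# The DC "not (s.salary > t.salary and s.tax < t.tax)" holds on a pair iff its
# evidence set does not contain both predicates; here exactly one pair violates it.
dc = {"s.salary > t.salary", "s.tax < t.tax"}
print(sum(1 for e in ev if dc <= e), "of", len(ev), "ordered pairs violate the DC")
```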
Debugging Machine Learning Pipelines
Machine learning tasks entail the use of complex computational pipelines to
reach quantitative and qualitative conclusions. If some of the activities in a
pipeline produce erroneous or uninformative outputs, the pipeline may fail or
produce incorrect results. Inferring the root cause of failures and unexpected
behavior is challenging; it usually requires substantial human reasoning and is both
time-consuming and error-prone. We propose a new approach that makes use of
iteration and provenance to automatically infer the root causes and derive
succinct explanations of failures. Through a detailed experimental evaluation,
we assess the cost, precision, and recall of our approach compared to the state
of the art. Our source code and experimental data will be available for
reproducibility and enhancement.
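For intuition about how iterating over recorded provenance can localize a fault (a deliberately simplified sketch, not the paper's actual technique), consider a pipeline runner that logs per-step input/output summaries and then blames the upstream step that produced an uninformative intermediate result:

```python
def run_pipeline(steps, data):
    """Run pipeline steps in order while recording simple provenance (row counts in
    and out, success/failure) so a failure can be traced back through intermediates."""
    provenance = []
    for name, fn in steps:
        record = {"step": name, "in_rows": len(data)}
        try:
            data = fn(data)
            record.update(ok=True, out_rows=len(data))
        except Exception as exc:
            record.update(ok=False, error=repr(exc))
            provenance.append(record)
            return None, provenance
        provenance.append(record)
    return data, provenance


def explain_failure(provenance):
    """Tiny root-cause heuristic: if a step failed, blame the earliest upstream step
    whose recorded output already looked uninformative (empty)."""
    failing = next((r for r in reversed(provenance) if not r["ok"]), None)
    if failing is None:
        return "pipeline succeeded"
    suspect = failing
    for rec in provenance[:-1]:
        if rec.get("out_rows", 1) == 0:
            suspect = rec
            break
    return f"step '{failing['step']}' failed; likely root cause: {suspect}"


steps = [
    ("parse",  lambda rows: [r for r in rows if r]),
    ("filter", lambda rows: [r for r in rows if r["score"] > 0.9]),  # too strict: drops all rows
    ("train",  lambda rows: rows[0]),                                # fails on empty input
]
_, prov = run_pipeline(steps, [{"score": 0.3}, {"score": 0.5}])
print(explain_failure(prov))
```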
Scalability aspects of data cleaning
Data cleaning has become one of the important pre-processing steps for many data science, data analytics, and machine learning applications. According to a survey by Gartner, more than 25% of the critical data in the world's top companies is flawed, which can result in economic losses amounting to trillions of dollars a year. Over the past few decades, several algorithms and tools have been developed to clean data. However, many of these solutions struggle to scale as the amount of data has grown. For example, these solutions often involve a quadratic number of tuple-pair comparisons or the generation of all possible column combinations. Both tasks can take days to finish if the dataset has millions of tuples or a few hundred columns, which is usually the case for real-world applications.
The data cleaning tasks often have a trade-off between the scalability and the quality of the solution. One can achieve scalability by performing fewer computations, but at the cost of a lower quality solution. Therefore, existing approaches exploit this trade-off when they need to scale to larger datasets, settling for a lower quality solution. Some approaches have considered re-thinking solutions from scratch to achieve scalability and high quality. However, re-designing these solutions from scratch is a daunting task as it would involve systematically analyzing the space of possible optimizations and then tuning the physical implementations for a specific computing framework, data size, and resources.
Another component of these solutions that becomes critical with increasing data size is how the data is stored and fetched. Smaller datasets mostly fit in memory, so accessing them from a data store is not a bottleneck. For large datasets, however, these solutions need to constantly fetch and write data to a data store. As observed in this dissertation, data cleaning tasks have a lifecycle-driven data access pattern that is not well suited to traditional data stores, making these data stores a bottleneck when cleaning large datasets.
In this dissertation, we consider scalability as a first-class citizen for data cleaning tasks and propose that scalable, high-quality solutions can be achieved by adopting three principles: 1) a new primitive-based rewriting of the existing algorithms that allows efficient implementations across multiple computing frameworks, 2) efficiently involving a domain expert's knowledge to reduce computation and improve quality, and 3) an adaptive data store that can transform the data layout based on the access pattern. We make contributions towards each of these principles. First, we present a set of primitive operations for discovering constraints from data. These primitives facilitate rewriting the existing discovery algorithms as efficient distributed implementations. Next, we present a framework that involves domain experts for faster clustering selection in data de-duplication. This framework asks a domain expert a bounded number of questions and uses the responses to select the best clustering with high accuracy. Finally, we present an adaptive data store that can change the layout of the data based on the workload's access pattern, thereby speeding up data cleaning tasks.
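As a flavor of what such discovery primitives can look like (my own toy example, not the dissertation's actual primitive set), the sketch below checks a functional dependency with a group-by based position list index instead of pairwise tuple comparisons; group-bys distribute naturally across computing frameworks:

```python
from collections import defaultdict

def position_list_index(rows, column):
    """Primitive: group row ids by their value in one column (a position list index).
    This is a plain group-by, so it maps directly onto distributed frameworks
    (e.g., a groupByKey/reduceByKey) instead of comparing all O(n^2) tuple pairs."""
    groups = defaultdict(list)
    for rid, row in enumerate(rows):
        groups[row[column]].append(rid)
    return [ids for ids in groups.values() if len(ids) > 1]   # singleton groups never violate anything

def fd_holds(rows, lhs, rhs):
    """Check the functional dependency lhs -> rhs via the PLI: every group of rows
    agreeing on lhs must also agree on rhs."""
    return all(len({rows[i][rhs] for i in ids}) == 1
               for ids in position_list_index(rows, lhs))

rows = [
    {"zip": "10001", "city": "New York"},
    {"zip": "10001", "city": "New York"},
    {"zip": "60601", "city": "Chicago"},
]
print(fd_holds(rows, "zip", "city"))   # True: zip -> city holds on this sample
```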
Secure Remote Attestation for Safety-Critical Embedded and IoT Devices
In recent years, embedded and cyber-physical systems (CPS), under the guise of Internet-of-Things (IoT), have entered many aspects of daily life. Despite many benefits, this development also greatly expands the so-called attack surface and turns these newly computerized gadgets into attractive attack targets. One key component in securing IoT devices is malware detection, which is typically attained with (secure) remote attestation. Remote attestation is a distinct security service that allows a trusted verifier to verify the internal state of a remote untrusted device. Remote attestation is especially relevant for low/medium-end embedded devices that are incapable of protecting themselves against malware infection. As safety-critical IoT devices become commonplace, it is crucial for remote attestation not to interfere with the device's normal operations. In this dissertation, we identify major issues in reconciling remote attestation and safety-critical application needs. We show that existing attestation techniques require devices to perform uninterruptible (atomic) operations during attestation. Such operations can be time-consuming and thus may be harmful to the device's safety-critical functionality. On the other hand, simply relaxing security requirements of remote attestation can lead to other vulnerabilities. To resolve this conflict, this dissertation presents the design, implementation, and evaluation of several mitigation techniques. In particular, we propose two light-weight techniques capable of providing an interruptible attestation modality. In contrast to traditional techniques, our proposed techniques allow interrupts to occur during attestation while ensuring malware detection via shuffled memory traversals or memory locking mechanisms. Another type of technique pursued in this dissertation aims to minimize the real-time computation overhead during attestation. We propose using periodic self-measurements to measure and record the device's state, resulting in more flexible scheduling of the attestation process and no real-time burden as part of its interaction with the verifier. This technique is particularly suitable for swarm settings with a potentially large number of safety-critical devices. Finally, we develop HYDRA, a remote attestation architecture based on a formally verified component, and use it as a building block in our proposed mitigation techniques. We believe that this architecture may be of independent interest.
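A highly simplified sketch of the shuffled-traversal idea (illustrative only; real attestation runs inside a trusted, isolated environment, and this is not the dissertation's exact design): the verifier's fresh nonce determines a pseudo-random order in which memory blocks are MACed, so the measurement can remain interruptible while staying hard for self-relocating malware to evade.

```python
import hashlib, hmac, random

def attest(memory: bytes, key: bytes, nonce: bytes, block_size: int = 64) -> bytes:
    """Prover side: MAC the device memory in a nonce-derived pseudo-random block order.
    Shuffling the traversal (instead of one long, uninterruptible linear pass) is one
    way to keep attestation interruptible while making it hard for malware to relocate
    itself ahead of the measurement."""
    blocks = [memory[i:i + block_size] for i in range(0, len(memory), block_size)]
    order = list(range(len(blocks)))
    random.Random(nonce).shuffle(order)          # verifier derives the same order from the nonce
    mac = hmac.new(key, nonce, hashlib.sha256)
    for idx in order:
        mac.update(blocks[idx])
    return mac.digest()

# Verifier side: recompute the MAC over the known-good firmware image and compare.
key, nonce = b"pre-shared attestation key", b"fresh-challenge-123"
device_memory  = b"\x00" * 4096                  # what the device actually measured
expected_image = b"\x00" * 4096                  # what an uninfected device should contain
report = attest(device_memory, key, nonce)       # sent by the (simulated) device
ok = hmac.compare_digest(report, attest(expected_image, key, nonce))
print("device healthy" if ok else "possible malware")
```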
Non-Invasive Fairness in Learning through the Lens of Data Drift
Machine Learning (ML) models are widely employed to drive many modern data
systems. While they are undeniably powerful tools, ML models often demonstrate
imbalanced performance and unfair behaviors. The root of this problem often
lies in the fact that different subpopulations commonly display divergent
trends: as a learning algorithm tries to identify trends in the data, it
naturally favors the trends of the majority groups, leading to a model that
performs poorly and unfairly for minority populations. Our goal is to improve
the fairness and trustworthiness of ML models by applying only non-invasive
interventions, i.e., without altering the data or the learning algorithm. We
use a simple but key insight: the divergence of trends between different
populations, and, consequently, between a learned model and minority
populations, is analogous to data drift, which indicates the poor conformance
between parts of the data and the trained model. We explore two strategies
(model-splitting and reweighing) to resolve this drift, aiming to improve the
overall conformance of models to the underlying data. Both our methods
introduce novel ways to employ the recently-proposed data profiling primitive
of Conformance Constraints. Our experimental evaluation over 7 real-world
datasets shows that both DifFair and ConFair improve the fairness of ML models.
We demonstrate scenarios where DifFair has an edge, though ConFair has the
greatest practical impact and outperforms other baselines. Moreover, as a
model-agnostic technique, ConFair stays robust when used against different
models than the ones on which the weights have been learned, which is not the
case for other state-of-the-art approaches.
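As a rough illustration of the non-invasive reweighing idea (this is the classic group/label frequency reweighing with made-up column values, not necessarily the ConFair procedure, which is built on Conformance Constraints):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, label):
    """Kamiran-Calders style reweighing: weight each example so that group membership
    and label look statistically independent, without touching the features or the
    learning algorithm itself."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.ones(len(label))
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            expected = (group == g).mean() * (label == y).mean()   # if independent
            observed = mask.mean()
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Toy data: the minority group (group=1) is under-represented among positive labels.
X = np.random.RandomState(0).normal(size=(8, 2))
group = np.array([0, 0, 0, 0, 0, 1, 1, 1])
y     = np.array([1, 1, 1, 0, 0, 0, 0, 1])

# Non-invasive intervention: same data, same learner, only per-sample weights change.
model = LogisticRegression().fit(X, y, sample_weight=reweigh(group, y))
```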
Information Security Analysis and Auditing of IEC61850 Automated Substations
This thesis is about issues related to the security of electric substations automated by IEC61850, an Ethernet (IEEE 802.3) based protocol. It presents a comprehensive security analysis and develops a viable method of auditing the security of this protocol. The security analysis focuses on the possible threats to an electric substation based on the possible motives of an attacker. Existing methods and metrics for assessing the security of computer networks are explored and examined for suitability of use with IEC61850. Existing methods and metrics focus on conventional computers used in computer networks, which are fundamentally different from the Intelligent Electronic Devices (IEDs) of substations in terms of technical composition and functionality. Hence, there is a need to develop a new method of assessing the security of such devices. The security analysis is then used to derive a new metric scheme to assess the security of IEDs that use IEC61850. This metric scheme is tested in a sample audit on a real IEC61850 network and compared with two other commonly used security metrics. The results show that the new metric is good at assessing the security of the IEDs themselves. Further analysis of IED security is done by conducting simulated cyber attacks. The results are then used to develop an Intrusion Detection System (IDS) to guard against such attacks. The temporal risk of intrusion on an electric substation is also evaluated.