
    A SAT-based System for Consistent Query Answering

    An inconsistent database is a database that violates one or more integrity constraints, such as functional dependencies. Consistent Query Answering is a rigorous and principled approach to the semantics of queries posed against inconsistent databases. The consistent answers to a query on an inconsistent database are the intersection of the answers to the query on every repair, i.e., on every consistent database that differs from the given inconsistent one in a minimal way. Computing the consistent answers of a fixed conjunctive query on a given inconsistent database can be a coNP-hard problem, even though every fixed conjunctive query is efficiently computable on a given consistent database. We designed, implemented, and evaluated CAvSAT, a SAT-based system for consistent query answering. CAvSAT leverages a set of natural reductions from the complement of consistent query answering to SAT and to Weighted MaxSAT. The system is capable of handling unions of conjunctive queries and arbitrary denial constraints, which include functional dependencies as a special case. We report results from experiments evaluating CAvSAT on both synthetic and real-world databases. These results provide evidence that a SAT-based approach can give rise to a comprehensive and scalable system for consistent query answering. Comment: 25 pages including appendix, to appear in the 22nd International Conference on Theory and Applications of Satisfiability Testing.
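    To make the repair semantics concrete, the following minimal sketch (a brute-force illustration, not CAvSAT's SAT encoding) enumerates every repair of a toy relation under a key constraint and intersects the query answers across repairs; the relation, key, and query are invented for the example.

        from itertools import product

        # Toy relation Emp(name, dept); "name" is a key, so tuples sharing a name conflict.
        emp = [("alice", "hr"), ("alice", "it"), ("bob", "it")]

        def repairs(rel, key_index=0):
            """Enumerate all repairs: keep exactly one tuple from each key group."""
            groups = {}
            for t in rel:
                groups.setdefault(t[key_index], []).append(t)
            for choice in product(*groups.values()):
                yield set(choice)

        def query(rel):
            """Example query: the set of departments occurring in the relation."""
            return {dept for _, dept in rel}

        # Consistent answers = intersection of the query answers over every repair.
        answer_sets = [query(r) for r in repairs(emp)]
        print(set.intersection(*answer_sets))  # {'it'}: "hr" disappears in one repair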

    CASTOR end-to-end monitoring

    With the start of the Large Hadron Collider approaching, storage and management of raw event data, as well as reconstruction and analysis data, is of crucial importance for researchers. The CERN Advanced STORage system (CASTOR) is a hierarchical storage system developed at CERN and used to store physics production files and user files. CASTOR, as one of the essential software tools used by the LHC experiments, has to provide reliable services for storing and managing data. Monitoring of this complicated system is mandatory in order to ensure its stable operation and improve its future performance. This paper presents the new monitoring system of CASTOR, which provides operation- and user-request-specific metrics. The system is built around a dedicated, optimized database schema. The schema is populated by PL/SQL procedures, which process a stream of incoming raw metadata from different CASTOR components, initially collected by the Distributed Logging Facility (DLF). A web interface has been developed for the visualization of the monitoring data. The different histograms and plots are created using PHP scripts which query the monitoring database.
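    As an illustration of the aggregation step (the actual system uses PL/SQL procedures over a dedicated schema rather than application code), the sketch below rolls a stream of raw log records up into per-component, per-minute event counts; the record fields and values are assumptions, not the real DLF format.

        from collections import defaultdict
        from datetime import datetime

        # Hypothetical DLF-style raw records: (timestamp, component, message_type)
        raw_stream = [
            ("2009-05-01 12:00:03", "stager", "request_start"),
            ("2009-05-01 12:00:41", "stager", "request_end"),
            ("2009-05-01 12:01:10", "gc", "file_deleted"),
        ]

        def aggregate(stream):
            """Roll raw records up into per-(component, message type, minute) counts."""
            counts = defaultdict(int)
            for ts, component, msg_type in stream:
                minute = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(second=0)
                counts[(component, msg_type, minute)] += 1
            return counts

        for key, n in sorted(aggregate(raw_stream).items()):
            print(key, n)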

    Reinforcement Learning for Data Preparation with Active Reward Learning

    Data cleaning and data preparation have been long-standing challenges in data science to avoid incorrect results, biases, and misleading conclusions obtained from “dirty” data. For a given dataset and data analytics task, a plethora of data preprocessing techniques and alternative data cleaning strategies are available, but they may lead to dramatically different outputs of unequal quality. For adequate data preparation, users generally do not know where to start or which methods to use. Most current work can be classified into two categories: 1) new data cleaning algorithms specific to certain types of data anomalies, usually considered in isolation and without a “pipeline vision” of the entire data preprocessing strategy; 2) automated machine learning (AutoML) approaches that can optimize the hyperparameters of a given ML model over a list of default preprocessing methods. We argue that more effort should be devoted to a principled and adaptive data preparation approach that helps, and learns from, the user in selecting the sequence of data preparation tasks that yields the best quality of the final result. In this paper, we extend Learn2Clean, a method based on Q-Learning, a model-free reinforcement learning technique that selects, for a given dataset, a given ML model, and a pre-selected quality performance metric, the optimal sequence of tasks for preprocessing the data such that the quality metric is maximized. We discuss new results of Learn2Clean for semi-automating data preparation with “the human in the loop” using active reward learning and Q-learning.
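    The core mechanism, Q-learning over sequences of preprocessing operations, can be sketched as follows. This is a toy illustration with invented operation names and a synthetic reward standing in for the downstream quality metric; it is not the Learn2Clean implementation.

        import random

        OPS = ["impute_missing", "remove_outliers", "normalize", "stop"]  # hypothetical actions
        MAX_LEN = 3

        def quality(pipeline):
            """Stand-in for the real metric (e.g. accuracy of the downstream ML model).
            Here one particular ordering is rewarded so that learning is visible."""
            target = ("impute_missing", "remove_outliers", "normalize")
            return sum(a == b for a, b in zip(pipeline, target)) / len(target)

        Q = {}  # state (ops applied so far) -> {action: estimated value}
        alpha, gamma, eps = 0.5, 0.9, 0.2

        def choose(state):
            q = Q.setdefault(state, {a: 0.0 for a in OPS})
            return random.choice(OPS) if random.random() < eps else max(q, key=q.get)

        for _ in range(5000):
            state, done = (), False
            while not done:
                action = choose(state)
                if action == "stop" or len(state) + 1 >= MAX_LEN:
                    final = state if action == "stop" else state + (action,)
                    Q[state][action] += alpha * (quality(final) - Q[state][action])
                    done = True
                else:
                    nxt = state + (action,)
                    best_next = max(Q.setdefault(nxt, {a: 0.0 for a in OPS}).values())
                    Q[state][action] += alpha * (gamma * best_next - Q[state][action])
                    state = nxt

        # Greedy pipeline after training: should recover the rewarded ordering.
        state, pipeline = (), []
        while state in Q and len(pipeline) < MAX_LEN:
            action = max(Q[state], key=Q[state].get)
            if action == "stop":
                break
            pipeline.append(action)
            state += (action,)
        print(pipeline)  # e.g. ['impute_missing', 'remove_outliers', 'normalize']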

    Predictive Data Transformation Suggestions in Grafterizer Using Machine Learning

    Data preprocessing is a crucial step in data analysis. A substantial amount of time is spent on data transformation tasks such as data formatting, modification, extraction, and enrichment, so it is typically more convenient for users to work with systems that can recommend the most relevant transformations for a given dataset. In this paper, we propose an approach for generating relevant data transformation suggestions for tabular data preprocessing using machine learning (specifically, the Random Forest algorithm). The approach is implemented for Grafterizer, a Web-based framework for tabular data cleaning and transformation, and evaluated through a usability study.
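    A minimal sketch of the underlying idea, a Random Forest mapping simple per-column features to a suggested transformation; the features, labels, and training examples are invented for illustration and are not Grafterizer's actual model.

        from sklearn.ensemble import RandomForestClassifier

        # Hypothetical per-column features: [missing_ratio, numeric_ratio, distinct_ratio]
        X = [
            [0.40, 1.0, 0.90],   # numeric column with many missing values
            [0.00, 0.0, 0.05],   # categorical column with few distinct values
            [0.02, 1.0, 1.00],   # clean, identifier-like numeric column
            [0.35, 0.0, 0.10],   # categorical column with many missing values
        ]
        # Hypothetical transformations previously applied to similar columns
        y = ["impute_mean", "to_category", "keep_as_is", "impute_mode"]

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X, y)

        # Suggest a transformation for a new column profile
        print(model.predict([[0.30, 1.0, 0.85]]))  # e.g. ['impute_mean']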

    Querying and Learning in Probabilistic Databases

    Abstract. Probabilistic Databases (PDBs) lie at the expressive intersection of databases, first-order logic, and probability theory. PDBs employ logical deduction rules to process Select-Project-Join (SPJ) queries, which form the basis for a variety of declarative query languages such as Datalog, Relational Algebra, and SQL. They employ logical consistency constraints to resolve data inconsistencies, and they represent query answers via logical lineage formulas (a.k.a. “data provenance”) to trace the dependencies between these answers and the input tuples that led to their derivation. While the literature on PDBs dates back more than 25 years, only fairly recently was the key role of lineage discovered for establishing a closed and complete representation model of relational operations over this kind of probabilistic data. Although PDBs benefit from their efficient and scalable database infrastructures for data storage and indexing, they couple data computation with probabilistic inference, the latter of which remains a #P-hard problem also in the context of PDBs. In this chapter, we provide a review of the key concepts of PDBs with a particular focus on our own recent research results related to this field. We highlight a number of ongoing research challenges related to PDBs, and we keep referring to an information extraction (IE) scenario as a running application to manage uncertain and temporal facts obtained from IE techniques directly inside a PDB setting.
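    To make lineage and the associated probability computation concrete, here is a tiny brute-force sketch for a tuple-independent PDB (illustrative relations and probabilities; real PDB systems avoid this exponential enumeration, which reflects the #P-hardness mentioned above): the probability of a Boolean query is the total probability of the possible worlds in which its lineage formula is true.

        from itertools import product

        # Tuple-independent PDB: each tuple is present independently with the given probability.
        # Hypothetical relations Works(person, company) and Located(company, city).
        works = {("alice", "acme"): 0.9, ("bob", "acme"): 0.5}
        located = {("acme", "paris"): 0.8}
        tuples = list(works.items()) + list(located.items())

        # Boolean query Q: does someone work for a company located in "paris"?
        # Its lineage is (w_alice_acme AND l_acme_paris) OR (w_bob_acme AND l_acme_paris).
        def query_true(world):
            return any(
                (person, company) in world and (company, city) in world
                for (person, company) in works
                for (company2, city) in located
                if company == company2
            )

        prob = 0.0
        for bits in product([True, False], repeat=len(tuples)):
            world = {t for (t, _), present in zip(tuples, bits) if present}
            world_prob = 1.0
            for (t, p), present in zip(tuples, bits):
                world_prob *= p if present else 1.0 - p
            if query_true(world):
                prob += world_prob

        print(round(prob, 4))  # 0.76 = 0.8 * (1 - (1 - 0.9) * (1 - 0.5))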