5 research outputs found

    Static Analysis of Partial Referential Integrity for Better Quality SQL Data

    Referential integrity ensures the consistency of data between database relations. The SQL standard proposes different semantics to deal with partial information under referential integrity. Simple semantics neglects tuples with nulls and enjoys built-in support in commercial database systems. Partial semantics does check tuples with nulls, but does not enjoy built-in support. We investigate this mismatch between the SQL standard and real database systems. In particular, we gain insight into the trade-off between cleaner data under partial semantics and the efficiency of checking simple semantics. The cost of referential integrity checking is evaluated for various dataset sizes, indexing structures and degrees of cleanliness. While the cost of partial semantics exceeds that of simple semantics, their performance trends follow similar patterns under growing database sizes. Applying multiple index structures and exploiting appropriate validation mechanisms increases the efficiency of checking partial semantics.
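
    As an illustration of the difference the abstract describes, the following sketch contrasts simple and partial semantics for a composite foreign key containing nulls, using Python's sqlite3. The customers/orders schema, the sample data, and the formulation of the two checks are assumptions made for illustration, not the paper's benchmark setup.

```python
import sqlite3

# Illustrative schema: a composite foreign key (region, id) with nullable columns.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (region TEXT, id INTEGER, PRIMARY KEY (region, id));
    CREATE TABLE orders (order_no INTEGER, cust_region TEXT, cust_id INTEGER);
    INSERT INTO customers VALUES ('EU', 1), ('US', 2);
    INSERT INTO orders VALUES
        (10, 'EU', 1),      -- fully matching parent tuple
        (11, 'EU', NULL),   -- partially null, non-null part matches -> ok under both
        (12, 'APAC', NULL), -- partially null, 'APAC' has no parent -> violates partial only
        (13, 'US', 9);      -- fully non-null, no parent -> violates both
""")

# Simple semantics: only tuples with NO nulls in the foreign key are checked.
simple_violations = con.execute("""
    SELECT o.order_no FROM orders o
    WHERE o.cust_region IS NOT NULL AND o.cust_id IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM customers c
                      WHERE c.region = o.cust_region AND c.id = o.cust_id)
""").fetchall()

# Partial semantics: tuples with at least one non-null component are also checked;
# the non-null components must match some referenced tuple.
partial_violations = con.execute("""
    SELECT o.order_no FROM orders o
    WHERE (o.cust_region IS NOT NULL OR o.cust_id IS NOT NULL)
      AND NOT EXISTS (SELECT 1 FROM customers c
                      WHERE (o.cust_region IS NULL OR c.region = o.cust_region)
                        AND (o.cust_id IS NULL OR c.id = o.cust_id))
""").fetchall()

print("simple :", simple_violations)   # [(13,)]
print("partial:", partial_violations)  # [(12,), (13,)]
```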

    Evaluating the Semantic and Representational Consistency of Interconnected Structured and Unstructured Data

    In this paper we present research in progress that aims to develop a set of data quality metrics for two aspects of the consistency dimension: the semantic and representational aspects. In the literature, metrics for these two aspects are relatively unexplored, especially in comparison with the data integrity aspect. Our goal is to apply these data quality metrics to interconnected structured and unstructured data. Because of the prevalence of unstructured data in organizations today, many strive for "content convergence" by interconnecting structured and unstructured data. The literature offers few data quality metrics for this type of data, despite the growing recognition of its potential value. We are developing our metrics in the context of data mining, and evaluating their utility using data mining outcomes in an economic context. If our metric development is successful, a well-defined economic utility function for data quality metrics can be of direct use to managers making decisions.
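
    The abstract does not spell out the metrics themselves, so the following is only a generic, hypothetical sketch of what a representational-consistency-style measure could look like: the fraction of non-missing values that conform to an agreed representation (here, an assumed ISO-8601 date format). It is not the metric proposed by the authors.

```python
import re

def representational_consistency(values, pattern=r"^\d{4}-\d{2}-\d{2}$"):
    """Fraction of non-missing values that conform to an expected representation.

    Illustrative only: the pattern (ISO-8601 dates) stands in for whatever
    representation rule the structured and unstructured sources agree on.
    """
    present = [v for v in values if v is not None]
    if not present:
        return 1.0  # vacuously consistent when nothing is present
    conforming = sum(1 for v in present if re.match(pattern, str(v)))
    return conforming / len(present)

# Example: dates extracted from documents vs. a structured table column.
print(representational_consistency(["2021-03-01", "01/03/2021", "2021-03-02", None]))  # ~0.67
```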

    Evaluation of Impact of Data Quality on Clustering with Syntactic Cluster Validity Methods

    In this research the influence of the four most commonly used data quality dimensions (accuracy, completeness, consistency and timeliness) on clustering outcomes was studied. A statistically significant negative effect of low data quality levels on the results of different clustering algorithms was demonstrated. Relationships between data quality concepts and clustering concepts were established, and recommendations on the use of clustering algorithms with respect to data quality level were made.
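
    A minimal sketch of the kind of experiment the abstract describes: degrade a clean synthetic dataset along one quality dimension (accuracy, via injected noise) and compare an internal (syntactic) cluster-validity index before and after. The dataset, clustering algorithm, noise model, and choice of the silhouette index are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Clean synthetic data with a known cluster structure.
X_clean, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.8, random_state=0)

# Degrade accuracy: perturb roughly 20% of the cells with heavy noise.
X_dirty = X_clean.copy()
corrupt = rng.random(X_dirty.shape) < 0.20
X_dirty[corrupt] += rng.normal(0, 5.0, size=corrupt.sum())

for name, X in [("clean", X_clean), ("dirty", X_dirty)]:
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    # Silhouette is an internal validity index: no ground-truth labels needed.
    print(name, round(silhouette_score(X, labels), 3))
```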

    Augmenting data warehousing architectures with Hadoop

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management.
    As the volume of available data increases exponentially, traditional data warehouses struggle to transform this data into actionable knowledge. Data strategies that include the creation and maintenance of data warehouses have much to gain by incorporating technologies from the Big Data spectrum. Hadoop, as a transformation tool, can add a theoretically unbounded dimension of data processing, feeding transformed information into traditional data warehouses that will ultimately retain their value as central components in organizations' decision support systems. This study explores the potential of Hadoop as a data transformation tool in the setting of a traditional data warehouse environment. Hadoop's execution model, which is oriented towards distributed parallel processing, offers great capabilities when the amounts of data to be processed require the infrastructure to expand. Horizontal scalability, a key aspect of a Hadoop cluster, allows processing power to grow in proportion to the volume of data. Using Hive on Tez in a Hadoop cluster, this study transforms television viewing events, extracted from Ericsson's Mediaroom Internet Protocol Television infrastructure, into pertinent audience metrics such as Rating, Reach and Share. These measurements are then made available in a traditional data warehouse, supported by a traditional Relational Database Management System, where they are presented through a set of reports. The main contribution of this research is a proposed augmented data warehouse architecture in which the traditional ETL layer is replaced by a Hadoop cluster, running Hive on Tez, with the purpose of performing the heaviest transformations that convert raw data into actionable information. Through a typification of the SQL statements responsible for the data transformation processes, we were able to understand that Hadoop, and its distributed processing model, delivers outstanding performance in the analytical layer, namely in the aggregation of large data sets. Ultimately, we demonstrate empirically the performance gains that can be extracted from Hadoop, in comparison to an RDBMS, regarding speed, storage usage and scalability potential, and suggest how this can be used to evolve data warehouses into the age of Big Data.
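
    In the dissertation the heavy aggregations run as Hive on Tez queries; the sketch below only illustrates the shape of such an aggregation over viewing events, using Python's sqlite3 as a stand-in engine. The schema and the metric definitions assumed here (Rating as total viewing time over slot length times population, Reach as distinct viewers over population, Share as a channel's viewing time over total viewing time) are common textbook-style definitions, not taken from the text.

```python
import sqlite3

POPULATION = 1000    # assumed size of the audience universe
SLOT_SECONDS = 3600  # assumed length of the analysed time slot

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE viewing_events (viewer_id INTEGER, channel TEXT, seconds INTEGER);
    INSERT INTO viewing_events VALUES
        (1, 'A', 1800), (1, 'B', 600), (2, 'A', 3600), (3, 'B', 1200);
""")

# Per-channel audience metrics; in the dissertation the equivalent aggregation
# is expressed in HiveQL and executed on Tez before loading the warehouse.
rows = con.execute("""
    SELECT channel,
           ROUND(100.0 * SUM(seconds) / (? * ?), 2)              AS rating_pct,
           ROUND(100.0 * COUNT(DISTINCT viewer_id) / ?, 2)       AS reach_pct,
           ROUND(100.0 * SUM(seconds) /
                 (SELECT SUM(seconds) FROM viewing_events), 2)   AS share_pct
    FROM viewing_events
    GROUP BY channel
""", (SLOT_SECONDS, POPULATION, POPULATION)).fetchall()

for channel, rating, reach, share in rows:
    print(channel, rating, reach, share)
```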

    Referential integrity quality metrics

    Referential integrity is an essential global constraint in a relational database that keeps it in a complete and consistent state. In this work, we assume the database may violate referential integrity and relations may be denormalized. We propose a set of quality metrics, defined at four granularity levels: database, relation, attribute and value, that measure referential completeness and consistency. The quality metrics are efficiently computed with standard SQL queries that incorporate two query optimizations: left outer joins on foreign keys and early foreign key grouping. Experiments evaluate our proposed metrics and SQL query optimizations on real and synthetic databases, showing they can help in detecting and explaining referential errors.
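
    The abstract names left outer joins on foreign keys as one of the query optimizations; the snippet below is a minimal sketch of that pattern, counting null and orphaned foreign-key values for a single referencing attribute via sqlite3. The schema and the exact completeness/consistency ratios are illustrative assumptions, not the paper's metric definitions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (order_no INTEGER, customer_id INTEGER);
    INSERT INTO customer VALUES (1), (2);
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 5), (13, NULL);
""")

# Left outer join on the foreign key: referencing rows with a non-null FK but no
# matching referenced row are counted as consistency errors; rows with a NULL FK
# count against referential completeness instead.
total, incomplete, inconsistent = con.execute("""
    SELECT COUNT(*)                                        AS referencing_rows,
           SUM(o.customer_id IS NULL)                      AS incomplete_rows,
           SUM(o.customer_id IS NOT NULL AND c.id IS NULL) AS inconsistent_rows
    FROM orders o
    LEFT OUTER JOIN customer c ON c.id = o.customer_id
""").fetchone()

print("completeness :", 1 - incomplete / total)    # 0.75
print("consistency  :", 1 - inconsistent / total)  # 0.75
```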