
    Corporate data quality management in context

    It is by now well known that poor-quality data costs corporations all over the world large amounts of money. Nevertheless, little research has been done on how organizations deal with data quality management and the strategies they use. This work aims to answer the following questions: which business drivers motivate organizations to engage in a data quality management initiative? How do they implement data quality management? And which objectives have been achieved so far? Given the kind of research questions involved, we adopted multiple exploratory case studies as the research strategy [32]. The case studies were carried out in a telecommunications company (MyTelecom), a public bank (PublicBank) and the central bank (CentralBank) of one European Union country. The results show that the main drivers of the data quality (DQ) initiatives were the reduction of non-quality costs, risk management, mergers, and the improvement of the company's image among its customers, all of which are in line with the literature [7, 8, 20]. The commercial corporations (MyTelecom and PublicBank) began their DQ projects with customer data, in accordance with the literature [18], while CentralBank, which mainly works with analytical systems, began with data source metadata characterization and reuse. None of the organizations uses a formal DQ methodology, but they are using tools for data profiling, standardization and cleaning. PublicBank and CentralBank are working towards a Corporate Data Policy aligned with their Business Policy, which is not the case at MyTelecom. The findings enabled us to prepare a first draft of a "Data Governance strategic impact grid", adapted from Nolan & McFarlan's IT Governance strategic impact grid [17]; this framework still needs further empirical support.
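
    The profiling, standardization and cleaning tooling mentioned above can be illustrated with a minimal pandas sketch. The customer table, column names and rules below are hypothetical illustrations, not data or rules from MyTelecom, PublicBank or CentralBank.

    import pandas as pd

    # Hypothetical customer extract with typical quality defects:
    # inconsistent casing/whitespace, free-text phone values, a duplicate record.
    customers = pd.DataFrame({
        "customer_id": [1, 2, 2, 3],
        "email": ["ana@example.com", "  BO@EXAMPLE.COM ", "bo@example.com", None],
        "phone": ["+351 912 345 678", "912345678", "912345678", "not available"],
    })

    # Profiling: completeness and number of distinct values per column.
    profile = pd.DataFrame({
        "completeness": customers.notna().mean(),
        "distinct_values": customers.nunique(),
    })
    print(profile)

    # Standardization: normalise e-mail casing/whitespace, keep only digits in phones.
    customers["email"] = customers["email"].str.strip().str.lower()
    customers["phone"] = customers["phone"].str.replace(r"\D", "", regex=True)

    # Validity: flag records whose phone field contained no digits at all.
    print(customers[customers["phone"] == ""])

    # Cleaning: drop the duplicate customer record exposed by e-mail standardization.
    customers = customers.drop_duplicates(subset=["customer_id", "email"])
    print(customers)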

    The Data Quality Concept of Accuracy in the Context of Public Use Data Sets

    Like other data quality dimensions, the concept of accuracy is often adopted to characterise a particular data set. However, its common specification basically refers to statistical properties of estimators, which can hardly be proved by means of a single survey at hand. This ambiguity can be resolved by assigning 'accuracy' to the survey processes that are known to affect these properties. In this contribution, we consider the sub-process of imputation as one important step in setting up a data set and argue that the so-called 'hit-rate' criterion, which is intended to measure the accuracy of a data set by some distance function between the 'true' but unobserved values and the imputed values, is neither required nor desirable. In contrast, the so-called 'inference' criterion allows for valid inferences based on a suitably completed data set under rather general conditions. The underlying theoretical concepts are illustrated by means of a simulation study. It is emphasised that the same principal arguments apply to other survey processes that introduce uncertainty into an edited data set.
    Keywords: Survey Quality, Survey Processes, Accuracy, Assessment of Imputation Methods, Multiple Imputation
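
    The contrast between the two criteria can be made concrete with a small simulation in the spirit of the study's illustration. The sketch below assumes a normal variable with values missing completely at random and two simplified imputation rules; it is not the paper's actual simulation design.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    y = rng.normal(loc=10.0, scale=2.0, size=n)      # 'true' values, variance 4
    missing = rng.random(n) < 0.3                    # missing completely at random
    obs = y[~missing]

    # Imputation A: observed mean for every gap -- minimises the distance to the
    # true values (the 'hit-rate' view) but shrinks the spread of the data.
    y_mean_imp = y.copy()
    y_mean_imp[missing] = obs.mean()

    # Imputation B: random draws from the estimated distribution -- individually
    # further from the truth, but the completed data keep the right spread.
    y_draw_imp = y.copy()
    y_draw_imp[missing] = rng.normal(obs.mean(), obs.std(ddof=1), size=missing.sum())

    def rmse(imputed):
        # average distance between imputed and true values ('hit-rate' criterion)
        return np.sqrt(np.mean((imputed[missing] - y[missing]) ** 2))

    print(f"RMSE to truth: mean imputation {rmse(y_mean_imp):.2f}, random draws {rmse(y_draw_imp):.2f}")
    print(f"variance of completed data: mean imputation {y_mean_imp.var(ddof=1):.2f}, "
          f"random draws {y_draw_imp.var(ddof=1):.2f} (true value 4.00)")

    Mean imputation wins on the distance criterion but distorts the variance of the completed data, while the random draws do the opposite, which is the contrast between the 'hit-rate' and 'inference' criteria described above.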

    Data Quality in the Context of Longitudinal Research Studies

    This paper discusses the concept of data quality in the context of longitudinal research. By deconstructing the quality assurance process and data collection strategies through a case study of the "Croatian Birth Cohort Study", we try to define the causes and sources of poor data quality in the context of longitudinal studies. Besides the problems discussed in the literature (panel conditioning, sample attrition, recall bias, temporal and financial demands), we introduce single-source problems, multi-source problems, security problems, questionnaire design problems and QA workflow problems as important aspects in the domain of possible sources of errors. Additionally, we propose models for eliminating these errors through prevention and detection in order to improve data quality.
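
    The detection side of such models can be sketched as simple validation rules run over the collected records. The field names and plausibility limits below are hypothetical examples, not the actual instruments of the Croatian Birth Cohort Study.

    import pandas as pd

    # Hypothetical longitudinal records: two waves for one participant, one for another.
    records = pd.DataFrame({
        "participant_id": [101, 101, 102],
        "wave":           [1, 2, 1],
        "visit_date":     pd.to_datetime(["2019-03-01", "2018-12-15", "2019-04-02"]),
        "birth_weight_g": [3400, 3400, 12500],   # second participant has an implausible value
    })

    problems = []

    # Detection rule 1: visit dates must increase across waves for each participant.
    date_steps = (records.sort_values("wave")
                         .groupby("participant_id")["visit_date"]
                         .diff())
    if (date_steps.dropna() < pd.Timedelta(0)).any():
        problems.append("visit dates do not increase across waves")

    # Detection rule 2: range check on a clinical measurement.
    if ((records["birth_weight_g"] < 300) | (records["birth_weight_g"] > 6500)).any():
        problems.append("birth weight outside plausible range")

    print(problems)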

    An intelligent linked data quality dashboard

    This paper describes a new intelligent, data-driven dashboard for linked data quality assessment. The development goal was to assist data quality engineers in interpreting the data quality problems found when evaluating a dataset with a metrics-based data quality assessment. This required the construction of a graph linking the problematic things identified in the data, the assessment metrics and the source data. This context and the supporting user interfaces help the user to understand data quality problems. An analysis widget also helped the user identify the root cause of multiple problems. This supported the user in identifying and prioritizing the problems that need to be fixed in order to improve data quality. The dashboard was shown to be useful for users cleaning data. A user evaluation was performed with both expert and novice data quality engineers.
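
    A single metrics-based check of the kind such a dashboard aggregates might look like the sketch below (rdflib). The sample triples and the 'every Person needs a foaf:name' completeness rule are illustrative assumptions, not the dashboard's actual metric catalogue.

    from rdflib import Graph
    from rdflib.namespace import RDF, FOAF

    data = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix ex:   <http://example.org/> .
    ex:alice a foaf:Person ; foaf:name "Alice" .
    ex:bob   a foaf:Person .
    """

    g = Graph()
    g.parse(data=data, format="turtle")

    # Metric: completeness of foaf:name over all foaf:Person resources.
    persons = set(g.subjects(RDF.type, FOAF.Person))
    missing_name = [s for s in persons if g.value(s, FOAF.name) is None]
    completeness = 1 - len(missing_name) / len(persons)

    # The score plus the offending subjects give the context a user needs to
    # trace a quality problem back to the source data.
    print(f"foaf:name completeness: {completeness:.2f}")
    print("problematic resources:", [str(s) for s in missing_name])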

    Introduction

    BACKGROUND: National quality registries (NQRs) purportedly facilitate quality improvement, yet neither the extent nor the mechanisms of such a relationship are fully known. The aim of this case study is to describe the experiences of local stakeholders in order to determine the elements that facilitate and hinder clinical quality improvement in relation to participation in a well-known and established NQR on stroke in Sweden. METHODS: A strategic sample was drawn of 8 hospitals in 4 county councils, representing a variety of settings and outcomes according to the NQR's criteria. Semi-structured telephone interviews were conducted with 25 managers, physicians in charge of Riks-Stroke, and registered nurses registering local data at the hospitals. The interviews, which covered barriers and facilitators within the NQR and the local context, were analysed with content analysis. RESULTS: An NQR can provide vital support for evidence-based practice, for example local data, related to national guidelines, which can be used for comparisons over time within the organisation or with other hospitals. Major effort is required to ensure that data entries are accurate and valid, and thus maintaining the trustworthiness of local data output competes for the resources needed for everyday clinical stroke care and quality improvement initiatives. Local stakeholders with knowledge of and interest in both the medical area (in this case stroke) and quality improvement can apply the NQR data to effectively initiate, carry out, and evaluate quality improvement, if supported by managers and co-workers, a common stroke care process and an operational management system that embraces and engages with the NQR data. CONCLUSION: While quality registries are assumed to support adherence to evidence-based guidelines around the world, this study proposes that an NQR can facilitate improvement of care, but that neither the registry itself nor the reporting of data initiates quality improvement. Rather, the local and general evidence provided by the NQR must be considered relevant and must be applied in the local context. Further, the quality improvement process needs to be facilitated by stakeholders collaborating within and outside the local context, who know how to initiate, perform, and evaluate quality improvement, and who have the resources to do so.