    Lessons Learned in Public Reporting: Physician Buy-In Is Key to Success

    Shares lessons from Aligning Forces for Quality communities about securing physicians' support for public reporting of quality performance data, including the need to allow physicians to review their data and to give them easy access to improvement tools and resources.

    Making stillbirths count, making numbers talk - issues in data collection for stillbirths.

    BACKGROUND: Stillbirths need to count. They constitute the majority of the world's perinatal deaths and yet they are largely invisible. Simply counting stillbirths is only the first step in analysis and prevention. From a public health perspective, there is a need for information on the timing and circumstances of death, associated conditions and underlying causes, and the availability and quality of care. This information will guide efforts to prevent stillbirths and improve quality of care.

    DISCUSSION: In this report, we assess how different definitions and limits in registration affect data capture, and we discuss the specific challenges of stillbirth registration, with emphasis on implementation. We identify what data need to be captured, suggest a dataset to cover core needs in registration and analysis of the different categories of stillbirths with causes and quality indicators, and illustrate the experience of stillbirth registration in different cultural settings. Finally, we point out gaps that need attention in the International Classification of Diseases and review the qualities of alternative systems that have been tested in low- and middle-income settings.

    SUMMARY: Obtaining high-quality data will require consistent definitions for stillbirths, systematic population-based registration, better tools for surveys and verbal autopsies, capacity building and training in procedures to identify causes of death, locally adapted quality indicators, improved classification systems, and effective registration and reporting systems.
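
    As a purely illustrative sketch (not the dataset proposed in the paper), a core stillbirth registration record of the kind discussed above might capture timing, circumstances, cause and a basic quality indicator in a structure like the following; all field names and values are assumptions.

        # Hypothetical core record for stillbirth registration; field names and
        # allowed values are illustrative only, not the paper's proposed dataset.
        stillbirth_record = {
            "record_id": "SB-2024-000193",
            "gestational_age_weeks": 34,           # supports a consistent definition
            "birth_weight_grams": 2100,
            "timing_of_death": "intrapartum",      # e.g. "antepartum" or "intrapartum"
            "associated_conditions": ["pre-eclampsia"],
            "underlying_cause": "placental abruption",
            "place_of_care": "district hospital",
            "source": "facility register",         # vs. household survey / verbal autopsy
        }

        # A registry-level quality indicator: share of records with a documented cause.
        records = [stillbirth_record]
        with_cause = sum(1 for r in records if r.get("underlying_cause"))
        print(f"cause of death documented for {with_cause}/{len(records)} record(s)")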

    An intelligent linked data quality dashboard

    This paper describes a new intelligent, data-driven dashboard for linked data quality assessment. The development goal was to assist data quality engineers in interpreting the data quality problems found when evaluating a dataset using a metrics-based data quality assessment. This required constructing a graph linking the problematic things identified in the data, the assessment metrics, and the source data. This context, together with supporting user interfaces, helps the user to understand data quality problems. An analysis widget also helps the user identify the root cause of multiple problems, supporting identification and prioritization of the problems that need to be fixed in order to improve data quality. The dashboard was shown to be useful for cleaning data; a user evaluation was performed with both expert and novice data quality engineers.
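
    A minimal sketch, under our own assumptions rather than the paper's implementation, of the problem-to-metric-to-source linking described above: group assessment problems by metric and by affected subject so a widget can surface likely root causes. Metric names and identifiers are illustrative.

        from collections import defaultdict

        # Hypothetical assessment results: each problem records the failed metric
        # and the subject/predicate/object it was detected on.
        problems = [
            {"metric": "dereferenceability", "subject": "ex:Person/42",
             "predicate": "foaf:homepage", "object": "http://broken.example/"},
            {"metric": "datatype_consistency", "subject": "ex:Person/42",
             "predicate": "ex:birthYear", "object": "'nineteen-eighty'"},
            {"metric": "dereferenceability", "subject": "ex:Person/7",
             "predicate": "foaf:homepage", "object": "http://gone.example/"},
        ]

        # Build a simple context graph: metric -> problems and subject -> problems.
        by_metric, by_subject = defaultdict(list), defaultdict(list)
        for p in problems:
            by_metric[p["metric"]].append(p)
            by_subject[p["subject"]].append(p)

        # Root-cause style summary: metrics ranked by how many problems they raised,
        # which helps prioritise what to fix first.
        for metric, items in sorted(by_metric.items(), key=lambda kv: -len(kv[1])):
            subjects = {i["subject"] for i in items}
            print(f"{metric}: {len(items)} problem(s) affecting {len(subjects)} subject(s)")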

    POIESIS: A tool for quality-aware ETL process redesign

    We present a tool, called POIESIS, for automatic ETL process enhancement. ETL processes are essential data-centric activities in modern business intelligence environments, and in the era of Big Data they need to be examined from a viewpoint that concerns their quality characteristics (e.g., data quality, performance, manageability). POIESIS responds to this need by providing a user-centered environment for quality-aware analysis and redesign of ETL flows. It generates thousands of alternative flows by adding flow patterns to the initial flow, in varying positions and combinations, thus creating alternative design options in a multidimensional space of different quality attributes. Through the demonstration of POIESIS we introduce the tool's capabilities and highlight its efficiency, usability and modifiability, thanks to its polymorphic design.
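
    The flow-generation idea can be illustrated with a small sketch (not the POIESIS code): insert candidate quality patterns into an initial flow at varying positions and combinations, then score each alternative on toy quality attributes. The pattern names and the scoring function are assumptions.

        # Generate alternative ETL flows by inserting quality patterns at different
        # positions; this is an illustrative toy, not the tool's actual algorithm.
        initial_flow = ["extract", "join", "aggregate", "load"]
        patterns = ["dedup_filter", "null_check"]     # hypothetical quality patterns

        def insert_at(flow, pattern, pos):
            return flow[:pos] + [pattern] + flow[pos:]

        alternatives = [initial_flow]
        for pattern in patterns:
            new_alts = []
            for flow in alternatives:
                for pos in range(1, len(flow)):       # keep extract first, load last
                    new_alts.append(insert_at(flow, pattern, pos))
            alternatives.extend(new_alts)

        def score(flow):
            # Toy multidimensional score: checks improve data quality,
            # but every extra step costs performance.
            quality = sum(step in patterns for step in flow)
            performance = -len(flow)
            return (quality, performance)

        best = max(alternatives, key=score)
        print(len(alternatives), "alternative flows; best by toy score:", best)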

    Bridging the biodiversity data gaps: Recommendations to meet users’ data needs

    A strong case has been made for freely available, high-quality data on species occurrence in order to track changes in biodiversity. However, one of the main issues surrounding the provision of such data is that sources vary in quality, scope, and accuracy. Publishers of such data must therefore face the challenge of maximizing quality, utility and breadth of data coverage in order to make the data useful to users. Here, we report a number of recommendations that stem from a content needs assessment survey conducted by the Global Biodiversity Information Facility (GBIF). Through this survey, we aimed to distil the main user needs regarding biodiversity data. We find a broad range of recommendations from the survey respondents, principally concerning issues such as data quality, bias, coverage, and ease of access. We recommend a candidate set of actions for GBIF that fall into three classes: 1) addressing data gaps, data volume, and data quality, 2) aggregating new kinds of data for new applications, and 3) promoting ease of use and providing incentives for wider use. Addressing the challenge of providing high-quality primary biodiversity data can potentially serve the needs of many international biodiversity initiatives, including the new 2020 biodiversity targets of the Convention on Biological Diversity, the emerging global biodiversity observation network (GEO BON), and the new Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

    New advances in aircraft MRO services: data mining enhancement

    Aircraft Maintenance, Repair and Overhaul (MRO) agencies rely largely on raw-data-based quotation systems to select the best suppliers for their customers (airlines). Data quantity and quality become a key issue in determining the success of an MRO job, since both cost and quality benchmarks need to be met. This paper introduces a data mining approach to create an MRO quotation system that enhances data quantity and data quality, and enables significantly more precise MRO job quotations. Regular expressions were used to analyse descriptive textual feedback (i.e. engineers' reports) in order to extract more referable, highly normalised data for job quotation. A text-mining-based key influencer analysis function enables the user to proactively select sub-parts, defects and possible solutions to make queries more accurate. Implementation results show that the system would improve cost quotation in 40% of MRO jobs and would reduce service cost without causing a drop in service quality.
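
    A minimal sketch of the regular-expression extraction described above, not the authors' system: pull structured, referable fields out of a free-text engineer's report so they can feed a quotation database. The sample report, patterns and field names are illustrative assumptions.

        import re

        report = ("Inspected fan blade assembly; found crack near blade root. "
                  "Action: blade replaced and balanced. Part no. FB-1023.")

        # Each field is extracted with a simple pattern over the free text.
        patterns = {
            "sub_part": re.compile(r"(fan blade|bearing|seal|gearbox)", re.I),
            "defect":   re.compile(r"(crack|corrosion|wear|leak)", re.I),
            "action":   re.compile(r"Action:\s*([^.]+)", re.I),
            "part_no":  re.compile(r"Part no\.\s*([A-Z]{2}-\d+)", re.I),
        }

        record = {}
        for field, pattern in patterns.items():
            match = pattern.search(report)
            record[field] = match.group(1).strip() if match else None

        print(record)
        # {'sub_part': 'fan blade', 'defect': 'crack',
        #  'action': 'blade replaced and balanced', 'part_no': 'FB-1023'}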

    Increased demand for rapid access to UK magnetic observatory data: implications for quality control procedures

    During the last decade the demand for magnetic observatory data has steadily increased, both from the scientific community and, in particular, from commercial organisations. Not only is the quantity of data products greater now, but the speed at which they are delivered is faster and the quality of the data provided is better. Modern user requirements for timely data have prompted the need for improved automatic procedures utilising the new technologies available. This has to be balanced against user requirements for accuracy, which necessitate rigorous quality control procedures. While some of these have been automated, as shown in the flow diagram, there remains a requirement for human interpretation and action if and when the data contain errors. Software development to reduce this human intervention is ongoing.
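
    As an illustrative sketch only (not the observatory's actual procedure), an automatic check of this kind might flag suspect one-minute values for an analyst to review rather than correcting them silently; the threshold and data below are placeholders.

        def flag_suspect_samples(values, max_step_nt=50.0):
            """Return (index, reason) pairs for samples that are missing or that
            jump by more than max_step_nt from the previous valid sample."""
            flagged, previous = [], None
            for i, v in enumerate(values):
                if v is None:
                    flagged.append((i, "gap"))
                elif previous is not None and abs(v - previous) > max_step_nt:
                    flagged.append((i, "spike"))
                if v is not None:
                    previous = v
            return flagged

        minute_means = [20101.2, 20101.5, 20390.0, 20101.9, None, 20102.1]
        for index, reason in flag_suspect_samples(minute_means):
            print(f"sample {index}: {reason} -- queued for manual inspection")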

    Toward a framework for data quality in cloud-based health information system

    Cloud computing is a promising platform for health information systems, offering reduced costs and improved accessibility. It represents a shift away from computing purchased as a product towards computing delivered as a service over the Internet, and the cloud computing paradigm is becoming one of the popular IT infrastructures for facilitating Electronic Health Record (EHR) integration and sharing. An EHR is defined as a repository of patient data in digital form; the record is stored and exchanged securely and is accessible by different levels of authorized users. Its key purpose is to support the continuity of care and to allow the exchange and integration of medical information for a patient. However, this cannot be achieved without ensuring the quality of the data populated in healthcare clouds, since data quality can have a great impact on the overall effectiveness of any system. Assuring the quality of the data used in healthcare systems is a pressing need for supporting the continuity and quality of care. Identifying data quality dimensions in healthcare clouds is a challenging issue because the data quality of cloud-based health information systems raises issues such as appropriateness of use and provenance. Some research has proposed frameworks of data quality dimensions without taking the nature of cloud-based healthcare systems into consideration. In this paper, we propose an initial framework of data quality attributes that reflects the main elements of cloud-based healthcare systems and the functionality of the EHR.
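
    A minimal sketch, under our own assumptions rather than the framework proposed in the paper, of how data quality dimensions such as completeness, provenance and timeliness might be encoded as checks against an EHR record held in a cloud system; the dimension names, rules and record fields are illustrative.

        from datetime import datetime, timezone

        # Hypothetical EHR fragment; field names are assumptions.
        record = {
            "patient_id": "P-001",
            "blood_pressure": "120/80",
            "recorded_at": "2023-05-01T10:15:00+00:00",
            "source_system": "clinic-emr-7",   # provenance of the observation
        }

        # Each dimension maps to a simple boolean rule over the record.
        checks = {
            "completeness": lambda r: all(r.get(f) for f in ("patient_id", "blood_pressure", "recorded_at")),
            "provenance":   lambda r: bool(r.get("source_system")),
            "timeliness":   lambda r: (datetime.now(timezone.utc)
                                       - datetime.fromisoformat(r["recorded_at"])).days < 365,
        }

        quality_report = {dimension: rule(record) for dimension, rule in checks.items()}
        print(quality_report)   # e.g. {'completeness': True, 'provenance': True, 'timeliness': False}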