    Influences of hospital information systems, indicator data collection and computation on reported Dutch hospital performance indicator scores

    Background: For health care performance indicators (PIs) to be reliable, the data underlying the PIs must be complete, accurate, consistent and reproducible. Given the lack of regulation of the data systems used in the Netherlands, and the self-report-based indicator scores, one would expect heterogeneity in data collection and in the ways indicators are computed. This might affect the reliability and plausibility of the nationally reported scores. Methods: We aimed to investigate the extent to which local hospital data collection and indicator computation strategies differ, and how this affects the plausibility of self-reported indicator scores, using survey results of 42 hospitals and data from the Dutch national quality database. Results: The data collection and indicator computation strategies of the hospitals were substantially heterogeneous. Moreover, the hip and knee replacement PI scores can be regarded as largely implausible, which was to a great extent related to a limited (computerized) data registry. In contrast, breast cancer PI scores were more plausible, despite the incomplete data registry and limited data access. This might be explained by the role of the regional cancer centers, which collect most of the indicator data for the national cancer registry in a standardized manner; hospitals can report cancer registry indicator scores to the government instead of their own locally collected indicator scores. Conclusions: Indicator developers, users and the scientific field need to focus more on the underlying (heterogeneous) modes of data collection and the supporting data infrastructures. Countries that have a liberal software market and aim to implement a self-report-based performance indicator system to achieve health care transparency should secure the accuracy and precision of the health care data from which the PIs are calculated. Moreover, ongoing research and development of PIs, and deeper insight into the clinical practice of data registration, are warranted.

    Testing the construct validity of hospital care quality indicators: a case study on hip replacement

    BACKGROUND: Quality indicators are increasingly used to measure the quality of care and compare quality across hospitals. In the Netherlands, numerous hospital quality indicators have been developed and reported over the past few years. Dutch indicators are mainly based on expert consensus and face validity, and little is known about their construct validity. We therefore aimed to study the construct validity of a set of national hospital quality indicators for hip replacements. METHODS: We used the scores of 100 Dutch hospitals on national hospital quality indicators covering care delivered over a two-year period. We assessed construct validity by relating structure, process and outcome indicators using chi-square statistics, bootstrapped Spearman correlations, and independent-sample t-tests. We studied indicators that are expected to be associated because they measure the same clinical construct. RESULTS: Among the 28 hypothesized correlations, three associations were significant in the hypothesized direction. Hospitals with low scores on wound infections had high scores on scheduling postoperative appointments (p-value = 0.001) and high scores on not transfusing homologous blood (correlation coefficient = -0.28; p-value = 0.05). Hospitals with high scores on scheduling complication meetings also had high scores on providing thrombosis prophylaxis (correlation coefficient = 0.21; p-value = 0.04). CONCLUSION: Despite the face validity of hospital quality indicators for hip replacement, construct validity seems to be limited. Although the individual indicators might be valid and actionable, overall conclusions based on the whole indicator set should be drawn carefully, as construct validity could not be established. Factors that may explain the lack of construct validity are poor data quality, lack of case-mix adjustment, and statistical uncertainty.
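    The abstract does not include the authors' analysis code. As a rough illustration of the bootstrapped Spearman correlation it describes, the sketch below resamples hospitals with replacement and reports a point estimate with a 95% percentile confidence interval; the function name, parameters, and simulated indicator scores are all hypothetical, not taken from the study.

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(x, y, n_boot=10_000, seed=0):
    """Bootstrapped Spearman correlation between two indicator score vectors.

    Resamples hospitals (paired observations) with replacement and returns
    the point estimate plus a 95% percentile confidence interval.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    point, _ = spearmanr(x, y)          # point estimate on the full sample
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample hospital indices
        boot[i], _ = spearmanr(x[idx], y[idx])
    ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
    return point, (ci_lo, ci_hi)

if __name__ == "__main__":
    # Hypothetical process and outcome scores for 100 hospitals,
    # mimicking the study's sample size only.
    rng = np.random.default_rng(42)
    process = rng.uniform(0, 100, size=100)
    outcome = 0.2 * process + rng.normal(0, 30, size=100)
    r, (lo, hi) = bootstrap_spearman(process, outcome)
    print(f"Spearman rho = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    A correlation whose bootstrap interval excludes zero, in the hypothesized direction, would count as support for construct validity under this kind of analysis; the study found only three of 28 hypothesized associations met that bar.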