Publishing Health IT Evaluation Studies
Progress in science is based on evidence from well-designed studies. However, the publication quality of health IT evaluation studies is often low, making exploitation of published evidence within systematic reviews and meta-analyses a challenging task. Consequently, reporting guidelines have been published and their use recommended. After a short overview of publication guidelines relevant to health IT evaluation studies (such as CONSORT and PRISMA), the STARE-HI guidelines for publishing health IT evaluation studies are presented. Health IT evaluation publications should take published guidelines into account to improve the quality of published evidence. Publication guidelines, by addressing publication bias and low study quality, help strengthen the evidence available in the public domain and enable effective evidence-based health informatics.
Evaluation of Health IT in Low-Income Countries
Low- and middle-income countries (LMICs) bear a disproportionate burden of major global health challenges. Health IT could be a promising solution in these settings, but LMICs have the weakest evidence of applying health IT to enhance quality of care. Various systematic reviews show significant challenges in the implementation and evaluation of health IT. Key barriers to implementation include a lack of adequate infrastructure, inadequate and poorly trained health workers, a lack of appropriate legislation and policies, and inadequate financial resources, indicating the early state of generation of evidence to demonstrate the effectiveness of health IT in improving health outcomes and processes. These implementation challenges need to be addressed. The introduction of new guidelines such as GEP-HI and STARE-HI, as well as models for evaluation such as SEIPS, and the prioritization of evaluations in the eHealth strategies of LMICs provide an opportunity to focus on strategic concepts that transform the demands of a modern integrated health care system into solutions that are secure, efficient and sustainable.
Tertiary Teledermatology: A Systematic Review
Telemedicine is becoming widely used in healthcare. Dermatology, because of its visual character, is especially suitable for telemedicine applications. Most common is teledermatology between general practitioners and dermatologists (secondary teledermatology). Another form of the teledermatology process is communication among dermatologists (tertiary teledermatology). The objective of this systematic review is to give an overview of studies on tertiary teledermatology, with emphasis on the categories of use. A systematic literature search on tertiary teledermatology studies used all databases of the Cochrane Library, MEDLINE (1966-November 2007) and EMBASE (1980-November 2007). Categories of use were identified for all included articles, and the modalities of tertiary teledermatology were extracted, together with the technology, the setting, the outcome measures, and their results. The search resulted in 1,377 publications, of which 11 were included. Four categories of use were found: getting an expert opinion from a specialized, often academic dermatologist (6/11); resident training (2/11); continuing medical education (4/11); and second opinion from a nonspecialized dermatologist (2/11). Three modalities were found: a teledermatology consultation application (7/11), a Web site (2/11), and an e-mail list (1/11). The majority (7/11) used store-and-forward, and 3/11 used both store-and-forward and real-time. Outcome measures mentioned were learning effect (6), costs (5), diagnostic accuracy (1), validity (2), reliability (2), patient and physician satisfaction (1), and efficiency improvement (3). Tertiary teledermatology's main category of use is getting an expert opinion from a specialized, often academic dermatologist. Tertiary teledermatology research is still in early development. Future research should focus on identifying the scale of tertiary teledermatology and on which modality of teledermatology is most suited for what purpose in communication among dermatologists.
Barriers and facilitators to the conduct of critical care research in low and lower-middle income countries: A scoping review
BACKGROUND: Improvements in health-related outcomes for critically ill adults in low and lower-middle income countries need systematic investments in research capacity and infrastructure. High-quality research has been shown to strengthen health systems; yet, research contributions from these regions remain negligible or absent. We undertook a scoping review to describe barriers and facilitators for the conduct of critical care research. METHODS: We searched MEDLINE and EMBASE up to December 2021 using a strategy that combined keyword and controlled vocabulary terms. We included original studies that reported on barriers or facilitators to the conduct of critical care research in these settings. Two reviewers independently reviewed titles and abstracts and, where necessary, the full text to select eligible studies. For each study, reviewers independently extracted data using a standardized data extraction form. Barriers and facilitators were classified along the lines of a previous review and based on additional themes that emerged. Study quality was assessed using appropriate tools. RESULTS: We identified 2693 citations, evaluated 49 studies and identified 6 for inclusion. Of the included studies, four were qualitative, one was a cross-sectional survey and one was reported as an ‘analysis’. The total number of participants ranged from 20 to 100 and included physicians, nurses, allied healthcare workers and researchers. Barriers identified included limited funding, poor institutional and national investment, inadequate access to mentors, absence of training in research methods, limited research support staff, and absence of statistical support. Our review identified potential solutions such as developing a mentorship network, streamlining regulatory processes, implementing a centralized institutional research agenda, developing a core-outcome dataset and enhancing access to low-cost technology. CONCLUSION: Our scoping review highlights important barriers to the conduct of critical care research in low and lower-middle income countries, identifies potential solutions, and informs researchers, policymakers and governments on the steps necessary for strengthening research systems.
Managing Pandemic Responses with Health Informatics – Challenges for Assessing Digital Health Technologies: A Joint Position Paper from the IMIA Technology Assessment & Quality Development in Health Informatics Working Group and EFMI Working Group for Assessment of Health Information Systems
Objectives: To highlight the role of technology assessment in the management of the COVID-19 pandemic. Method: An overview of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. Results: Evaluation of digital health technologies for COVID-19 should be based on their technical maturity as well as the scale of implementation. For mature technologies like telehealth, whose efficacy has been previously demonstrated, pragmatic, rapid evaluation using the complex systems paradigm, which accounts for multiple sociotechnical factors, might be more suitable to examine their effectiveness and emerging safety concerns in new settings. New technologies, particularly those intended for use on a large scale such as digital contact tracing, will require assessment of their usability as well as performance prior to deployment, after which evaluation should shift to using a complex systems paradigm to examine the value of the information provided. The success of a digital health technology depends on the value of the information it provides relative to the sociotechnical context of the setting where it is implemented. Conclusion: Commitment to evaluation using the evidence-based medicine and complex systems paradigms will be critical to ensuring safe and effective use of digital health technologies for COVID-19 and future pandemics. There is an inherent tension between evaluation and the imperative to deploy solutions urgently, and this tension needs to be negotiated.
Adjusting for Disease Severity Across ICUs in Multicenter Studies
Objectives: To compare methods to adjust for confounding by disease severity in multicenter intervention studies in the ICU, when different disease severity measures are collected across centers. Design: In silico simulation study using national registry data. Setting: Twenty mixed ICUs in The Netherlands. Subjects: Fifty-five thousand six hundred fifty-five ICU admissions between January 1, 2011, and January 1, 2016. Interventions: None. Measurements and Main Results: To mimic an intervention study with confounding, a fictitious treatment variable was simulated whose effect on the outcome was confounded by Acute Physiology and Chronic Health Evaluation IV predicted mortality (a common measure of disease severity). Diverse, realistic scenarios were investigated in which the availability of disease severity measures (i.e., Acute Physiology and Chronic Health Evaluation IV, Acute Physiology and Chronic Health Evaluation II, and Simplified Acute Physiology Score II scores) varied across centers. For each scenario, eight different methods to adjust for confounding were used to obtain an estimate of the (fictitious) treatment effect. These were compared in terms of relative (%) and absolute (odds ratio) bias to a reference scenario in which the treatment effect was estimated after correction for the Acute Physiology and Chronic Health Evaluation IV scores from all centers. Complete neglect of differences in disease severity measures across centers resulted in bias ranging from 10.2% to 173.6% across scenarios, and no commonly used methodology, such as two-stage modeling or score standardization, was able to effectively eliminate bias. In scenarios where some of the included centers had only Acute Physiology and Chronic Health Evaluation II or Simplified Acute Physiology Score II available (and not Acute Physiology and Chronic Health Evaluation IV), either restriction of the analysis to Acute Physiology and Chronic Health Evaluation IV centers alone or multiple imputation of Acute Physiology and Chronic Health Evaluation IV scores resulted in the least relative bias (0.0% and 5.1%, respectively, for Acute Physiology and Chronic Health Evaluation II, and 0.0% and 4.6%, respectively, for Simplified Acute Physiology Score II). In scenarios where some centers used Acute Physiology and Chronic Health Evaluation II, regression calibration also yielded low relative bias (12.4%); this was not true if these same centers only had Simplified Acute Physiology Score II available (relative bias, 54.8%). Conclusions: When different disease severity measures are available across centers, the performance of various methods to control for confounding by disease severity may differ substantially. When planning multicenter studies, researchers should make contingency plans to limit the use of, or properly incorporate, different disease severity measures across centers in the statistical analysis.
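The core mechanism of such an in silico study, a treatment whose assignment and outcome both depend on a severity score, can be sketched in a few lines. The sketch below is not the authors' code: the severity distribution, effect sizes, and use of NumPy/statsmodels are illustrative assumptions. It contrasts an unadjusted logistic regression estimate of a fictitious treatment effect with a severity-adjusted one, showing how neglecting severity inflates bias.

```python
# Minimal sketch (assumed parameters, not the study's actual simulation):
# severity confounds a fictitious treatment; compare unadjusted vs. adjusted estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical stand-in for APACHE IV predicted mortality (0-1 scale).
severity = rng.beta(2, 8, size=n)

# Confounded assignment: sicker patients are treated more often.
p_treat = 1 / (1 + np.exp(-(-1.0 + 3.0 * severity)))
treat = rng.binomial(1, p_treat)

# Outcome depends on severity and a true treatment log-odds of -0.5.
true_log_or = -0.5
p_dead = 1 / (1 + np.exp(-(-2.0 + 4.0 * severity + true_log_or * treat)))
dead = rng.binomial(1, p_dead)

for label, X in [("unadjusted", treat.reshape(-1, 1)),
                 ("adjusted", np.column_stack([treat, severity]))]:
    fit = sm.Logit(dead, sm.add_constant(X)).fit(disp=False)
    est = fit.params[1]  # coefficient on the treatment indicator
    rel_bias = 100 * (est - true_log_or) / abs(true_log_or)
    print(f"{label}: log-OR = {est:.3f} (relative bias {rel_bias:.1f}%)")
```

Running this, the unadjusted estimate is pulled toward harm because treated patients are sicker, while adjusting for the severity score recovers the true effect; the study's scenarios extend this idea to centers that record different severity scores.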
Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications
OBJECTIVES: This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance. METHOD: A narrative review of existing research and evaluation approaches, along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. RESULTS: There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts and build on best-practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI that dynamically harnesses vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective on the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators to monitor AI, are also discussed. CONCLUSION: Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements that are required for the new generation of AI-enabled clinical decision support will emerge through practical application.
Influences of definition ambiguity on hospital performance indicator scores: examples from The Netherlands
Reliable and unambiguously defined performance indicators are fundamental to objective and comparable measurements of hospitals' quality of care. In two separate case studies (intensive care and breast cancer care), we investigated whether differences in the interpretation of performance indicator definitions affected the indicator scores. Information about possible definition interpretations was obtained by a short telephone survey and a Web survey. We quantified the interpretation differences using a patient-level dataset from a national clinical registry (Case I) and a hospital's local database (Case II). In Case II, additional textual information about the patients' status was available and was reviewed to gain more insight into the origin of the differences. For Case I, we investigated 15 596 admissions to 33 intensive care units in 2009. Case II consisted of 144 admitted patients with a breast tumour surgically treated in one hospital in 2009. In both cases, hospitals reported different interpretations of the indicators, which led to significant differences in the indicator values. Case II revealed that these differences could be explained by patient-related factors such as severe comorbidity and patients' individual preferences regarding surgery date. With this article, we hope to raise awareness of pitfalls regarding indicator definitions and the quality of the underlying data. To enable objective and comparable measurements of hospitals' quality of care, organizations that request performance information should formalize the indicators they use, including standardization of all data elements of which the indicator is composed (procedures, diagnoses).
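The quantification step described above, applying competing definition interpretations to the same patient-level records, can be illustrated with a minimal sketch. The data, column names, and the 48-hour readmission indicator below are entirely hypothetical and are not taken from the study; they only show how two plausible readings of one definition yield different scores on identical data.

```python
# Minimal sketch (hypothetical data and indicator): two interpretations of
# one indicator definition computed on the same patient-level dataset.
import pandas as pd

admissions = pd.DataFrame({
    "readmitted_within_48h": [True, False, True, False, False, True],
    "planned_readmission":   [True, False, False, False, False, True],
})

# Interpretation A: count every readmission within 48 hours.
rate_a = admissions["readmitted_within_48h"].mean()

# Interpretation B: exclude planned readmissions from the numerator.
unplanned = admissions["readmitted_within_48h"] & ~admissions["planned_readmission"]
rate_b = unplanned.mean()

print(f"Interpretation A: {rate_a:.1%}  Interpretation B: {rate_b:.1%}")
```

On these toy records the two readings give 50.0% versus 16.7%, which is the kind of divergence that makes unstandardized indicator scores incomparable across hospitals.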