Evaluation of an automated safety surveillance system using risk-adjusted sequential probability ratio testing
Abstract

Background: Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT), implemented in an automated tool, to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery.

Methods: A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying within 30 days after surgery were at least twice those expected. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year.

Results: We examined a total of 83 hospital-year observations. The RA-SPRT method alerted 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and a specificity of 98.8% (79/80).

Conclusions: The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false-positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality improvement interventions.
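The abstract's design parameters map onto Wald's classic SPRT boundaries: with type I error α = 0.05 and type II error β = 0.10, the log-likelihood ratio signals when it crosses ln((1−β)/α), and each patient's contribution is computed from their risk-adjusted expected mortality under the null versus an alternative with doubled odds. The sketch below illustrates that mechanism only; the function name, the reset-to-zero convention after crossing the lower boundary, and the toy inputs are assumptions, not details from the paper.

```python
import math

def ra_sprt(outcomes, expected_risks, odds_ratio=2.0, alpha=0.05, beta=0.10):
    """Illustrative risk-adjusted SPRT sketch (not the paper's implementation).

    outcomes        -- per-patient booleans: True if the patient died
    expected_risks  -- per-patient risk-adjusted expected mortality p0 under H0
    Returns the index at which the upper (signal) boundary is crossed,
    or None if the series ends without a signal.
    """
    upper = math.log((1 - beta) / alpha)   # signal boundary, ln(18) ~ 2.89
    lower = math.log(beta / (1 - alpha))   # acceptance boundary
    llr = 0.0
    for i, (died, p0) in enumerate(zip(outcomes, expected_risks)):
        # Under the alternative, the odds p0/(1-p0) are doubled:
        p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)
        llr += math.log(p1 / p0) if died else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return i          # higher-than-expected mortality signal
        if llr <= lower:
            llr = 0.0         # accept H0 and restart (one common convention)
    return None
```

For example, a run of deaths among patients with 2% expected mortality accumulates ln(p1/p0) ≈ 0.67 per death, so the ln(18) ≈ 2.89 boundary is crossed on the fifth death.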
Considerations for addressing bias in artificial intelligence for health equity
Health equity is a primary goal of healthcare stakeholders: patients and their advocacy groups, clinicians, other providers and their professional societies, bioethicists, payors and value-based care organizations, regulatory agencies, legislators, and creators of artificial intelligence/machine learning (AI/ML)-enabled medical devices. Inequitable access to diagnosis and treatment may be mitigated by new digital health technologies, especially AI/ML, but these may also exacerbate disparities, depending on how bias is addressed. We propose an expanded Total Product Lifecycle (TPLC) framework for healthcare AI/ML, describing the sources and impacts of undesirable bias in AI/ML systems in each phase, how these can be analyzed using appropriate metrics, and how they can be potentially mitigated. The goal of these "Considerations" is to educate stakeholders on how potential AI/ML bias may impact healthcare outcomes and how to identify and mitigate inequities; to initiate a discussion between stakeholders on these issues, in order to ensure health equity along the expanded AI/ML TPLC framework; and, ultimately, better health outcomes for all.
Development of an Integrated Platform Using Multidisciplinary Real-World Data to Facilitate Biomarker Discovery for Medical Products
© 2019 The Authors. Clinical and Translational Science published by Wiley Periodicals Inc. on behalf of the American Society of Clinical Pharmacology & Therapeutics. This article has been contributed to by US Government employees and their work is in the public domain in the USA. Translational multidisciplinary research is important for the Center for Devices and Radiological Health's efforts for utilizing real-world data (RWD) to enhance predictive evaluation of medical device performance in patient subpopulations. As part of our efforts for developing new RWD-based evidentiary approaches, including in silico discovery of device-related risk predictors and biomarkers, this study aims to characterize the sex/race-related trends in hip replacement outcomes and identify corresponding candidate single nucleotide polymorphisms (SNPs). Adverse outcomes were assessed by deriving RWD from a retrospective analysis of hip replacement hospital discharge data from the National Inpatient Sample (NIS). Candidate SNPs were explored using pre-existing data from the Personalized Medicine Research Project (PMRP). High-Performance Integrated Virtual Environment was used for analyzing and visualizing putative associations between SNPs and adverse outcomes. Ingenuity Pathway Analysis (IPA) was used for exploring plausibility of the sex-related candidate SNPs and characterizing gene networks associated with the variants of interest. The NIS-based epidemiologic evidence showed that periprosthetic osteolysis (PO) was most prevalent among white men. The PMRP-based genetic evidence associated the PO-related male predominance with rs7121 (odds ratio = 4.89; 95% confidence interval = 1.41–17.05) and other candidate SNPs.
SNP-based IPA analysis of the expected gene expression alterations and corresponding signaling pathways suggested a possible role of sex-related metabolic factors in the development of PO, which was substantiated by an ad hoc epidemiologic analysis identifying the sex-related differences in metabolic comorbidities in men vs. women with hip replacement-related PO. Thus, our in silico study illustrates RWD-based evidentiary approaches that may facilitate cost/time-efficient discovery of biomarkers for informing the use of medical products.
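The reported association for rs7121 (odds ratio = 4.89; 95% confidence interval = 1.41–17.05) is a standard odds ratio from a 2×2 exposure-by-outcome table with a Wald-type confidence interval on the log scale. The sketch below shows that computation in general form; the counts in the usage example are hypothetical and are not the PMRP data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a -- exposed cases        b -- exposed non-cases
    c -- unexposed cases      d -- unexposed non-cases
    Illustrative sketch only; counts must all be nonzero.
    """
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) for a 2x2 table:
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

A CI whose lower bound exceeds 1.0 (as with the reported 1.41) indicates a statistically significant positive association at the 5% level, though a wide interval such as 1.41–17.05 reflects considerable uncertainty in the effect size.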