Description and pilot evaluation of the Metabolic Irregularities Narrowing down Device software: a case analysis of physician programming
Background: There is a gap between the capabilities of Computerized Decision Support Systems (CDSSs) and their everyday applications. This gap is further exacerbated by the different 'worlds' of software designers and clinician end-users: software programmers often lack clinical experience, whereas practicing physicians lack skills in design and engineering.
Objective: Our primary objective was to evaluate the performance of the Metabolic Irregularities Narrowing down Device (MIND), an intelligent medical calculator and differential diagnosis software, through end-user surveys, and to discuss the roles of CDSSs in the inpatient setting.
Setting: A tertiary care, teaching community hospital.
Study participants: Thirty-one responders answered the survey. Responders consisted of medical students, 24%; attending physicians, 16%; and residents, 60%.
Results: About 62.5% of the responders reported that MIND has the ability to potentially improve the quality of care, 20.8% were sure that MIND improves the quality of care, and only 4.2% of the responders felt that it does not improve the quality of care. Ninety-six percent of the responders felt that MIND definitely serves or has the potential to serve as a useful tool for medical students, and only 4% of the responders felt otherwise. Thirty-five percent of the responders rated the differential diagnosis list as excellent, 56% as good, 4% as fair, and 4% as poor.
Discussion: MIND is a suggesting, interpreting, alerting, and diagnosing CDSS with good performance and end-user satisfaction. In the era of the electronic medical record, the ongoing development of efficient CDSS platforms should be carefully considered by practicing physicians and institutions.
Decision support tool for differential diagnosis of Acute Respiratory Distress Syndrome (ARDS) vs Cardiogenic Pulmonary Edema (CPE): a prospective validation and meta-analysis
Introduction: We recently presented a prediction score providing decision support for the often-challenging early differential diagnosis of acute lung injury (ALI) vs cardiogenic pulmonary edema (CPE). To facilitate clinical adoption, our objective was to prospectively validate its performance in an independent cohort.
Methods: Over 9 months, adult patients consecutively admitted to any intensive care unit of a tertiary-care center who developed acute pulmonary edema were identified in real time using validated electronic surveillance. For eligible patients, predictors were abstracted from medical records within 48 hours of the alert. Post-hoc expert review, blinded to the prediction score, established the gold standard diagnosis.
Results: Of 1,516 patients identified by electronic surveillance, data were abstracted for 249 patients (93% within 48 hours of disease onset), of whom expert review (kappa 0.93) classified 72 as ALI, 73 as CPE, and excluded 104 as 'other'. With an area under the curve (AUC) of 0.81 (95% confidence interval 0.73 to 0.88), the prediction score showed discrimination similar to prior cohorts (development AUC = 0.81, P = 0.91; retrospective validation AUC = 0.80, P = 0.92). The Hosmer-Lemeshow test was significant (P = 0.01), but across eight previously defined score ranges the probabilities of ALI vs CPE were the same as in the development cohort (P = 0.60). Results were the same when comparing acute respiratory distress syndrome (ARDS, Berlin definition) vs CPE.
Conclusion: The clinical prediction score reliably differentiates ARDS/ALI vs CPE. Pooled results provide precise estimates of the score's performance, which can be used to screen patient populations or to assess the probability of ALI/ARDS vs CPE in specific patients. The score may thus facilitate early inclusion into research studies and expedite prompt treatment. Electronic supplementary material: The online version of this article (doi:10.1186/s13054-014-0659-x) contains supplementary material, which is available to authorized users.
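The validation above rests on two standard checks: discrimination, summarized by the area under the ROC curve (AUC), and calibration, assessed with a Hosmer-Lemeshow statistic over score ranges. The Python sketch below illustrates how such checks can be computed for a binary score; the decile grouping, variable names, and placeholder data are illustrative assumptions, not the study's code or data.

# Illustrative sketch: discrimination (AUC) and Hosmer-Lemeshow calibration
# for a binary prediction score. Inputs below are placeholders, not study data.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    # Group patients into risk strata and compare observed vs expected events.
    order = np.argsort(y_prob)
    chi2 = 0.0
    for group in np.array_split(order, n_groups):
        observed = y_true[group].sum()
        expected = y_prob[group].sum()
        n = len(group)
        chi2 += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return chi2, 1 - stats.chi2.cdf(chi2, df=n_groups - 2)

# y_true: 1 = ALI/ARDS, 0 = CPE; y_prob: score-derived probability of ALI/ARDS
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 145)
y_prob = np.clip(0.3 * y_true + 0.7 * rng.random(145), 0, 1)
print("AUC:", roc_auc_score(y_true, y_prob))
print("Hosmer-Lemeshow chi2, p:", hosmer_lemeshow(y_true, y_prob))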
Time to diagnostic certainty for saddle pulmonary embolism in hospitalized patients
There is a lack of diagnostic performance measures associated with pulmonary embolism (PE). We aimed to explore the concept of the time to diagnostic certainty, which we defined as the time interval that elapses between a patient's first presentation and a confirmed PE diagnosis on computed tomography pulmonary angiogram (CT PA). This approach could be used to highlight variability in health system diagnostic performance, and to select patient outliers for structured chart review in order to identify underlying contributors to diagnostic error or delay. We performed a retrospective observational study at academic medical centers and associated community-based hospitals in one health system, examining randomly selected adult patients admitted to study sites with a diagnosis of acute saddle PE. One hundred patients were randomly selected from 340 patients discharged with saddle PE. Twenty-four patients were excluded. Among the 76 included patients, time to diagnostic certainty ranged from 1.5 to 310 hours. We found that 73/76 patients were considered to have PE present on admission (CT PA ≤ 48 hours). The proportion of patients with PE present on admission with time to diagnostic certainty of > 6 hours was 26% (19/73). The median (IQR) time to treatment (thrombolytics/anticoagulants) was 3.5 (2.5-5.1) hours among the 73 patients. The proportion of patients with PE present on admission with treatment delays of > 6 hours was 16% (12/73). Three patients acquired PE during hospitalization (CT PA > 48 hours). In this study, we developed and successfully tested the concept of time to diagnostic certainty for saddle PE.
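As defined above, time to diagnostic certainty is the interval between a patient's first presentation and the confirmatory CT PA. A minimal sketch of how that interval and the present-on-admission and delay flags might be derived from timestamps is shown below; the timestamps and field names are hypothetical, and only the 48-hour and 6-hour thresholds come from the abstract.

# Illustrative sketch: time to diagnostic certainty for PE from timestamps.
# The timestamps below are hypothetical, not patient data.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def hours_between(start, end):
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

presentation = "2023-03-01 02:15"     # first presentation to the hospital
ct_pa_confirmed = "2023-03-01 09:45"  # CT pulmonary angiogram confirming PE

ttdc = hours_between(presentation, ct_pa_confirmed)
present_on_admission = ttdc <= 48     # PE considered present on admission
delayed_diagnosis = present_on_admission and ttdc > 6
print(f"Time to diagnostic certainty: {ttdc:.1f} h; delayed: {delayed_diagnosis}")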
A Comparison of Administrative and Physiologic Predictive Models in Determining Risk Adjusted Mortality Rates in Critically Ill Patients
Hospitals are increasingly compared based on clinical outcomes adjusted for severity of illness. Multiple methods exist to adjust for differences between patients. The challenge for consumers of this information, both the public and healthcare providers, is interpreting differences in risk adjustment models, particularly when models differ in their use of administrative and physiologic data. We set out to examine how administrative and physiologic models compare to each other when applied to critically ill patients.
We prospectively abstracted variables for a physiologic and an administrative model of mortality from two intensive care units in the United States. Predicted mortality was compared using Pearson's product-moment coefficient and Bland-Altman analysis. A subgroup of patients admitted directly from the emergency department was analyzed to remove potential confounding from changes in condition prior to ICU admission.
We included 556 patients from two academic medical centers in this analysis. The administrative and physiologic models' predicted mortalities for the combined cohort were 15.3% (95% CI 13.7%, 16.8%) and 24.6% (95% CI 22.7%, 26.5%), respectively (t-test p-value < 0.001). The r² for these models was 0.297. The Bland-Altman plot suggests that at low predicted mortality there was good agreement; however, as mortality increased the models diverged. Similar results were found when analyzing the subgroup of patients admitted directly from the emergency department. When comparing the two hospitals, there was a statistically significant difference when using the administrative model but not the physiologic model. Unexplained mortality, defined as patients who died despite a predicted mortality of less than 10%, was a rare event by either model.
In conclusion, while it has been shown that administrative models provide estimates of mortality similar to those of physiologic models in non-critically ill patients with pneumonia, our results suggest this finding cannot be applied globally to patients admitted to intensive care units. As patients and providers increasingly use publicly reported information in making health care decisions and referrals, it is critical that the provided information be understood. Our results suggest that severity of illness may influence the mortality index in administrative models. We suggest that when interpreting "report cards" or metrics, health care providers determine how the risk adjustment was made and how it compares to other risk-adjustment models.
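The model comparison above relies on Pearson correlation and Bland-Altman analysis of the two sets of predicted mortalities. The short sketch below shows how those quantities are typically computed; the arrays are placeholders, not the study's data.

# Illustrative sketch: comparing administrative vs physiologic predicted
# mortality with Pearson correlation and Bland-Altman limits of agreement.
# The arrays below are placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

admin_pred = np.array([0.05, 0.10, 0.22, 0.15, 0.40, 0.08])    # administrative model
physio_pred = np.array([0.07, 0.18, 0.35, 0.20, 0.60, 0.10])   # physiologic model

r, p = pearsonr(admin_pred, physio_pred)
print(f"Pearson r = {r:.2f} (r^2 = {r*r:.2f}), p = {p:.3f}")

diff = admin_pred - physio_pred        # Bland-Altman: per-patient differences
bias = diff.mean()                     # mean difference between the models
loa = 1.96 * diff.std(ddof=1)          # half-width of 95% limits of agreement
print(f"Bias = {bias:.3f}; limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")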
Validation of Automated Data Abstraction for SCCM Discovery VIRUS COVID-19 Registry: Practical EHR Export Pathways (VIRUS-PEEP)
BACKGROUND: The gold standard for gathering data from electronic health records (EHR) has been manual data extraction; however, this requires vast resources and personnel. Automation of this process reduces resource burdens and expands research opportunities.
OBJECTIVE: This study aimed to determine the feasibility and reliability of automated data extraction in a large registry of adult COVID-19 patients.
MATERIALS AND METHODS: This observational study included data from sites participating in the SCCM Discovery VIRUS COVID-19 registry. Important demographic, comorbidity, and outcome variables were chosen for manual and automated extraction for the feasibility dataset. We quantified the degree of agreement with Cohen's kappa statistics for categorical variables. The sensitivity and specificity were also assessed. Correlations for continuous variables were assessed with Pearson's correlation coefficient and Bland-Altman plots. The strength of agreement was defined as almost perfect (0.81-1.00), substantial (0.61-0.80), and moderate (0.41-0.60) based on kappa statistics. Pearson correlations were classified as trivial (0.00-0.30), low (0.30-0.50), moderate (0.50-0.70), high (0.70-0.90), and extremely high (0.90-1.00).
MEASUREMENTS AND MAIN RESULTS: The cohort included 652 patients from 11 sites. The agreement between manual and automated extraction for categorical variables was almost perfect in 13 (72.2%) variables (Race, Ethnicity, Sex, Coronary Artery Disease, Hypertension, Congestive Heart Failure, Asthma, Diabetes Mellitus, ICU admission rate, IMV rate, HFNC rate, ICU and Hospital Discharge Status), and substantial in five (27.8%) (COPD, CKD, Dyslipidemia/Hyperlipidemia, NIMV, and ECMO rate). The correlations were extremely high in three (42.9%) variables (age, weight, and hospital LOS) and high in four (57.1%) of the continuous variables (Height, Days to ICU admission, ICU LOS, and IMV days). The average sensitivity and specificity for the categorical data were 90.7 and 96.9%.
CONCLUSION AND RELEVANCE: Our study confirms the feasibility and validity of an automated process to gather data from the EHR.
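The agreement metrics reported above, Cohen's kappa for categorical variables together with the sensitivity and specificity of automated extraction against the manual gold standard, can be computed as in the sketch below; the example labels are hypothetical, not registry data.

# Illustrative sketch: agreement between manual (reference) and automated
# extraction for one binary variable. Example labels are hypothetical.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

manual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # manually abstracted values
automated = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]   # values exported from the EHR

kappa = cohen_kappa_score(manual, automated)
tn, fp, fn, tp = confusion_matrix(manual, automated).ravel()
sensitivity = tp / (tp + fn)   # automated capture of true positives
specificity = tn / (tn + fp)   # automated rejection of true negatives
print(f"kappa = {kappa:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")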
- …