14 research outputs found

    A Different Perspective on the Use of Sepsis Alert

    No full text

    Embracing cohort heterogeneity in clinical machine learning development: a step toward generalizable models

    No full text
    Abstract: This study is a simple illustration of the benefit of averaging over cohorts, rather than developing a prediction model from a single cohort. We show that models trained on data from multiple cohorts can perform significantly better in new settings than models based on the same amount of training data but from just a single cohort. Although this concept seems simple and obvious, no current prediction model development guidelines recommend such an approach.
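    The cohort-pooling argument can be made concrete with a small simulation: a minimal, hypothetical sketch (not the authors' code) that generates several synthetic cohorts whose coefficient vectors are perturbed versions of a shared signal, then compares a logistic regression trained on one cohort with one trained on the same total number of samples drawn from four cohorts, both evaluated on an unseen cohort. Cohort-specific quirks tend to average out in the pooled model.

```python
# Minimal simulation of single-cohort vs. multi-cohort training (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Synthetic cohort: 5 features, cohort-specific perturbation of the true coefficients."""
    X = rng.normal(size=(n, 5))
    beta = np.array([1.0, -0.8, 0.5, 0.0, 0.3]) + shift
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    return X, rng.binomial(1, p)

# Four training cohorts and one unseen evaluation cohort, each with its own shift.
train_cohorts = [make_cohort(500, rng.normal(scale=0.4, size=5)) for _ in range(4)]
X_new, y_new = make_cohort(2000, rng.normal(scale=0.4, size=5))

# Single-cohort model: 2,000 samples from one cohort.
X_single, y_single = make_cohort(2000, rng.normal(scale=0.4, size=5))
auc_single = roc_auc_score(
    y_new, LogisticRegression().fit(X_single, y_single).predict_proba(X_new)[:, 1]
)

# Multi-cohort model: the same 2,000 samples, but 500 from each of four cohorts.
X_multi = np.vstack([X for X, _ in train_cohorts])
y_multi = np.concatenate([y for _, y in train_cohorts])
auc_multi = roc_auc_score(
    y_new, LogisticRegression().fit(X_multi, y_multi).predict_proba(X_new)[:, 1]
)

print(f"AUROC on unseen cohort - single: {auc_single:.3f}, multi: {auc_multi:.3f}")
```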

    The usefulness of implementing minimum retest intervals in reducing inappropriate laboratory test requests in a Dutch hospital

    No full text
    Inappropriate use of laboratory testing remains a challenging problem worldwide. Minimum retest intervals (MRIs) are used to reduce inappropriate laboratory testing. However, their effectiveness in reducing inappropriate laboratory testing is still a matter of debate. The aim of this study was to evaluate the effectiveness of broadly implemented MRIs as a means of reducing inappropriate laboratory test requests. We performed a retrospective study in a general care and teaching hospital in the Netherlands, where MRI alerts have been implemented as standard care since June 7th, 2017. Clinical chemistry test orders for adult internal medicine patients placed between July 13th, 2017 and December 31st, 2019 were included. The primary outcome was the effectiveness of MRIs, expressed as the percentage of tests ordered and barred as a result of MRIs. Of a total of 218,511 test requests, 4,159 (1.90%) triggered an MRI alert. These alerts were overruled by physicians in 21.76% of cases. As a result of implementing MRIs, 3,254 (1.49%) tests were barred. The financial savings for the department of internal medicine directly related to the included barred laboratory tests during this period were 11,880 euros, on a total of 636,598 euros for all performed tests. Only a small proportion of laboratory tests are barred after implementation of MRIs, with a limited impact on annual costs. However, MRIs provide a continuous reminder to focus on appropriate testing, and their effectiveness is potentially higher than described in this study.
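    The headline percentages follow directly from the counts given above; the snippet below is a plain arithmetic check (with the overrule rate derived from the reported barred count) that reproduces the 1.90% alert rate, the 21.76% overrule rate, the 1.49% barred rate, and the share of the laboratory budget saved.

```python
# Reconstructing the reported MRI figures from the counts in the abstract.
total_requests = 218_511
mri_alerts = 4_159
tests_barred = 3_254
savings_eur, total_spend_eur = 11_880, 636_598

alert_rate = mri_alerts / total_requests        # ~0.0190 -> 1.90% of requests triggered an alert
overrule_rate = 1 - tests_barred / mri_alerts   # ~0.2176 -> 21.76% of alerts overruled by physicians
barred_rate = tests_barred / total_requests     # ~0.0149 -> 1.49% of requests barred
savings_share = savings_eur / total_spend_eur   # ~1.9% of total laboratory spend

print(f"alerts {alert_rate:.2%}, overruled {overrule_rate:.2%}, "
      f"barred {barred_rate:.2%}, savings {savings_share:.1%} of spend")
```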

    Persistent high IgG phase I antibody levels against Coxiella burnetii among veterinarians compared to patients previously diagnosed with acute Q fever after three years of follow-up.

    No full text
    BACKGROUND: Little is known about the development of chronic Q fever in occupational risk groups. The aim of this study was to perform long-term follow-up of Coxiella burnetii seropositive veterinarians, to investigate the course of IgG phase I and phase II antibodies against C. burnetii antigens, and to compare this course with that in patients previously diagnosed with acute Q fever. METHODS: Veterinarians with IgG phase I ≥ 1:256 (immunofluorescence assay) who participated in a previous seroprevalence study were asked to provide a second blood sample three years later. IgG antibody profiles were compared to those of a group of acute Q fever patients who had IgG phase I ≥ 1:256 twelve months after diagnosis. RESULTS: IgG phase I was detected in all veterinarians (n = 76) and in 85% of Q fever patients (n = 98) after three years (p<0.001). IgG phase I ≥ 1:1,024, indicating possible chronic Q fever, was found in 36% of veterinarians and 12% of patients (OR 3.95, 95% CI: 1.84-8.49). CONCLUSIONS: IgG phase I persists among veterinarians, presumably because of continuous exposure to C. burnetii during their work. Serological and clinical follow-up of occupationally exposed risk groups should be considered.
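    The reported odds ratio and confidence interval can be reproduced from the group sizes if the rounded percentages correspond to 27 of 76 veterinarians and 12 of 98 patients with IgG phase I ≥ 1:1,024; these exact counts are an assumption inferred from the abstract, not stated in it. A minimal sketch using the standard Woolf (log odds ratio) 95% confidence interval:

```python
# Odds ratio with a Woolf (log) 95% CI, using counts inferred from the reported percentages.
import math

vet_pos, vet_n = 27, 76   # ~36% of veterinarians with IgG phase I >= 1:1,024 (assumed count)
pat_pos, pat_n = 12, 98   # ~12% of patients (assumed count)

a, b = vet_pos, vet_n - vet_pos   # exposed group: events / non-events
c, d = pat_pos, pat_n - pat_pos   # comparison group: events / non-events

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")  # ~3.95 (1.84-8.49)
```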

    Implementing artificial intelligence in clinical practice: a mixed-method study of barriers and facilitators

    No full text
    Background: Though artificial intelligence (AI) in healthcare has great potential, medicine has been slow to adopt AI tools. Barriers and facilitators to clinical AI implementation among healthcare professionals (the end-users) are ill defined, and appropriate implementation strategies to overcome them have not been suggested. Therefore, we aim to study these barriers and facilitators, and to find general insights that could be applicable to a wide variety of AI-tool implementations in clinical practice. Methods: We conducted a mixed-methods study encompassing individual interviews, a focus group, and a nationwide survey. End-users of AI in healthcare (physicians) from various medical specialties were included. We performed deductive direct content analysis, using the Consolidated Framework for Implementation Research (CFIR) for coding. CFIR constructs were entered into the Expert Recommendations for Implementing Change (ERIC) tool to find suitable implementation strategies. Quantitative survey data were analyzed descriptively. Results: We performed ten individual interviews and one focus group with five physicians. The most prominent constructs identified during the qualitative interim analyses were incorporated in the nationwide survey, which had 106 respondents. We found nine CFIR constructs important to AI implementation: evidence strength, relative advantage, adaptability, trialability, structural characteristics, tension for change, compatibility, access to knowledge and information, and knowledge and beliefs about the intervention. Consequently, the ERIC tool suggested the following strategies: identify and prepare champions, conduct educational meetings, promote adaptability, develop educational materials, and distribute educational materials. Conclusions: The potential value of AI in healthcare is acknowledged by end-users; however, the current tension for change needs to be sparked to facilitate sustainable implementation. Recommended strategies are: increasing access to knowledge and information through educational meetings and materials with committed local leaders; offering a trial phase for end-users to test and compare AI algorithms; and tailoring algorithms to be adaptable to the local context and existing workflows. Applying these implementation strategies will bring us one step closer to realizing the value of AI in healthcare.

    Diagnostic stewardship for blood cultures in the emergency department: A multicenter validation and prospective evaluation of a machine learning prediction tool

    No full text
    Background: Overuse of blood cultures (BCs) in emergency departments (EDs) leads to low yields and high numbers of contaminated cultures, accompanied by increased diagnostics, antibiotic usage, prolonged hospitalization, and mortality. We aimed to simplify and validate a recently developed machine learning model to help safely withhold BC testing in low-risk patients. Methods: We extracted data from the electronic health records (EHR) for 44,123 unique ED visits with BC sampling in the Amsterdam UMC (locations VUMC and AMC; the Netherlands), Zaans Medical Center (ZMC; the Netherlands), and Beth Israel Deaconess Medical Center (BIDMC; United States) in periods between 2011 and 2021. We trained a machine learning model on the VUMC data to predict blood culture outcomes and validated it in the AMC, ZMC, and BIDMC, with subsequent real-time prospective evaluation in the VUMC. Findings: The model had an Area Under the Receiver Operating Characteristic curve (AUROC) of 0.81 (95% CI: 0.78–0.83) in the VUMC test set. The most important predictors were temperature, creatinine, and C-reactive protein. The AUROCs in the validation cohorts were 0.80 (AMC; 0.78–0.82), 0.76 (ZMC; 0.74–0.78), and 0.75 (BIDMC; 0.74–0.76). During real-time prospective evaluation in the EHR of the VUMC, the model reached an AUROC of 0.76 (0.71–0.81) among 590 patients with BC draws in the ED. The prospective evaluation showed that the model can be used to safely withhold blood culture analyses in at least 30% of patients in the ED. Interpretation: We developed a machine learning model to predict blood culture outcomes in the ED, which retained its performance during external validation and real-time prospective evaluation. Our model can identify patients at low risk of having a positive blood culture. Using the model in practice can significantly reduce the number of blood culture analyses and thus avoid the hidden costs of false-positive culture results. Funding: This research project was funded by the Amsterdam Public Health – Quality of Care program and the Dutch “Doen of Laten” project (project number: 839205002).
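    The workflow described above can be sketched end to end: train a classifier on tabular ED features, report the AUROC, and choose a probability threshold that preserves a high sensitivity for positive cultures to estimate how many cultures could be withheld. The snippet below is an illustration on synthetic data, not the published model; the feature set, the gradient-boosting choice, and the 99% sensitivity target are assumptions.

```python
# Illustrative blood-culture prediction and threshold selection (synthetic data).
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Synthetic stand-ins for the EHR features named in the abstract.
X = pd.DataFrame({
    "temperature": rng.normal(38.0, 1.0, n),    # degrees Celsius
    "creatinine": rng.lognormal(4.4, 0.4, n),   # umol/L
    "crp": rng.lognormal(4.0, 1.0, n),          # mg/L
})
logit = 0.8 * (X["temperature"] - 38.0) + 0.4 * np.log(X["crp"]) - 3.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # 1 = positive blood culture

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = HistGradientBoostingClassifier().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print(f"AUROC: {roc_auc_score(y_test, proba):.2f}")

# Highest threshold that still detects 99% of positive cultures; visits scoring below it
# form the low-risk group in which a culture could potentially be withheld.
fpr, tpr, thresholds = roc_curve(y_test, proba)
threshold = thresholds[np.argmax(tpr >= 0.99)]
withheld = (proba < threshold).mean()
print(f"Threshold {threshold:.3f} -> cultures withheld in {withheld:.0%} of test-set visits")
```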

    Boxplot of IgG phase I antibodies in two samples obtained from veterinarians (n = 78) and Q fever patients (n = 98) over a three-year period.

    No full text
    The horizontal dark lines within the boxes represent the median antibody titer, the lower and upper boundaries of the boxes represent the 25th and 75th percentiles, and the T-bars represent the 2.5th and 97.5th percentiles. Outliers are indicated with dots; extreme outliers (more than three times the height of the box) with asterisks. First serum sample: veterinarians in 2009 or 2010; patients in 2008 or 2009 (twelve months after acute Q fever diagnosis in 2007 or 2008). Follow-up sample: veterinarians in 2013 (three to four years after the first sample); patients in 2011 or 2012 (four years after acute Q fever diagnosis in 2007 or 2008). When no end titration was available, >1:2,048 was categorized as 1:4,096 and >1:4,096 as 1:8,192.
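    The whisker convention in this caption (T-bars at the 2.5th and 97.5th percentiles rather than matplotlib's default 1.5×IQR) can be reproduced as follows; the sketch uses synthetic titer values purely to demonstrate the styling and does not attempt to recreate the study data or the dot/asterisk distinction for extreme outliers.

```python
# Boxplot styled per the caption: whiskers at the 2.5th/97.5th percentiles (synthetic data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical log2 reciprocal IgG phase I titers for the four groups in the figure.
groups = {
    "Vets, first": rng.normal(10.0, 1.5, 78),
    "Vets, follow-up": rng.normal(10.0, 1.5, 78),
    "Patients, first": rng.normal(9.0, 1.5, 98),
    "Patients, follow-up": rng.normal(8.0, 1.5, 98),
}

fig, ax = plt.subplots(figsize=(7, 4))
ax.boxplot(
    list(groups.values()),
    whis=(2.5, 97.5),                 # T-bars at the 2.5th and 97.5th percentiles
    showfliers=True,                  # points beyond the whiskers drawn as outlier markers
    medianprops={"color": "black"},   # dark median line inside each box
)
ax.set_xticklabels(groups.keys(), rotation=15)
ax.set_ylabel("IgG phase I titer (log2 of reciprocal)")
plt.tight_layout()
plt.show()
```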