16 research outputs found

    Contextualized Drug–Drug Interaction Management Improves Clinical Utility Compared With Basic Drug–Drug Interaction Management in Hospitalized Patients

    Drug–drug interactions (DDIs) frequently trigger adverse drug events or reduced efficacy. Most DDI alerts, however, are overridden because of irrelevance for the specific patient. Basic DDI clinical decision support (CDS) systems offer limited possibilities for decreasing the number of irrelevant DDI alerts without missing relevant ones. Computerized decision tree rules were designed to context-dependently suppress irrelevant DDI alerts. A crossover study was performed to compare the clinical utility of contextualized and basic DDI management in hospitalized patients. First, a basic DDI-CDS system was used in clinical practice while contextualized DDI alerts were collected in the background; next, this process was reversed. All medication orders (MOs) from hospitalized patients with at least one DDI alert were included. The following outcome measures were used to assess clinical utility: positive predictive value (PPV), negative predictive value (NPV), number of pharmacy interventions (PIs)/1,000 MOs, and the median time spent on DDI management/1,000 MOs. During the basic DDI management phase, 1,919 MOs/day were included, triggering 220 DDI alerts/1,000 MOs and showing 57 basic DDI alerts/1,000 MOs to pharmacy staff; the PPV was 2.8%, with 1.6 PIs/1,000 MOs costing 37.2 minutes/1,000 MOs. No DDIs were missed by the contextualized CDS system (NPV 100%). During the contextualized DDI management phase, 1,853 MOs/day were included, triggering 244 basic DDI alerts/1,000 MOs and showing 9.6 contextualized DDI alerts/1,000 MOs to pharmacy staff; the PPV was 41.4% (P < 0.01), with 4.0 PIs/1,000 MOs (P < 0.01) and 13.7 minutes/1,000 MOs. The clinical utility of contextualized DDI management exceeds that of basic DDI management.
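
    To make the reported figures concrete, PPV here is the fraction of alerts shown to pharmacy staff that led to a pharmacy intervention. Below is a minimal Python sketch of a context-dependent suppression rule plus the PPV arithmetic; the drug pair, potassium threshold, and rule logic are hypothetical illustrations, not the authors' actual decision tree rules.

    ```python
    # Minimal sketch (assumed logic, not the authors' rule base): suppress a
    # DDI alert when patient context makes it clinically irrelevant.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PatientContext:
        recent_serum_potassium: Optional[float]  # mmol/L; None if not measured

    def show_alert(ddi: str, ctx: PatientContext) -> bool:
        """Return True if the DDI alert should be shown to pharmacy staff."""
        # Hypothetical context rule: a hyperkalaemia-related DDI alert is
        # suppressed when a recent potassium value exists and is normal.
        if ddi == "ACE inhibitor + potassium-sparing diuretic":
            k = ctx.recent_serum_potassium
            if k is not None and 3.5 <= k <= 5.0:
                return False  # irrelevant in this context
        return True  # default: show the alert when context is missing or abnormal

    # PPV = alerts leading to a pharmacy intervention / alerts shown, using the
    # per-1,000-MO figures from the abstract (4.0/9.6 = 41.7%; the reported
    # 41.4% differs slightly because the inputs here are rounded).
    print(f"basic PPV: {1.6 / 57:.1%}")            # -> 2.8%
    print(f"contextualized PPV: {4.0 / 9.6:.1%}")  # -> 41.7%
    ```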

    Biplanar versus conventional two-dimensional ultrasound guidance for radial artery catheterisation

    Background: Ultrasound guidance increases first-pass success rates and decreases the number of cannulation attempts and complications during radial artery catheterisation, but it is debatable whether short-, long-, or oblique-axis imaging is superior for obtaining access. Three-dimensional (3D) biplanar ultrasound combines short- and long-axis views with their respective benefits. This study aimed to determine whether biplanar imaging would improve the accuracy of radial artery catheterisation compared with conventional 2D imaging. Methods: This before-and-after trial included adult patients who required radial artery catheterisation for elective cardiothoracic surgery. The participating anaesthesiologists were experienced in 2D and biplanar ultrasound-guided vascular access. The primary endpoint was successful catheterisation in one skin break without withdrawals. Secondary endpoints were the numbers of punctures and withdrawals, scanning and procedure times, needle visibility, perceived mental effort of the operator, and posterior wall puncture or other mechanical complications. Results: From November 2021 until April 2022, 158 patients were included and analysed (2D=75, biplanar=83), with two failures to catheterise in each group. First-pass success without needle redirections was 58.7% in the 2D group and 60.2% in the biplanar group (difference=1.6%; 95% confidence interval [CI], −14.0% to 17.1%; P=0.84), and first-pass success within one skin break was 77.3% in the 2D group vs 81.9% in the biplanar group (difference=4.6%; 95% CI, −8.1% to 17.3%; P=0.473). None of the secondary endpoints differed significantly. Conclusions: Biplanar ultrasound guidance did not improve success rates or other performance measures of radial artery catheterisation; the additional visual information acquired with biplanar imaging did not offer any benefit. Clinical trial registration: N9687 (Dutch Trial Register).
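
    For readers checking the statistics, the confidence intervals above are consistent with a standard two-proportion (Wald) interval. The Python sketch below back-calculates the group counts from the reported percentages (58.7% of 75 ≈ 44; 60.2% of 83 ≈ 50), so those counts are assumptions, and the paper may have used a different interval method.

    ```python
    # Wald 95% CI for a difference in proportions; the group counts below are
    # back-calculated from the reported percentages and therefore assumptions.
    from math import sqrt

    def diff_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
        """Return (difference, lower, upper) for p2 - p1."""
        p1, p2 = x1 / n1, x2 / n2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        d = p2 - p1
        return d, d - z * se, d + z * se

    # First-pass success without redirections: 2D 44/75 vs biplanar 50/83.
    d, lo, hi = diff_ci(44, 75, 50, 83)
    print(f"difference {d:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
    # -> ~1.6%, ~-13.8% to ~16.9% (abstract: -14.0% to 17.1%; the small gap
    #    likely reflects rounding or a different CI method in the paper)
    ```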

    Development of a text mining algorithm for identifying adverse drug reactions in electronic health records

    Objective: Adverse drug reactions (ADRs) are a significant healthcare concern. They are often documented as free text in electronic health records (EHRs), making them challenging to use in clinical decision support systems (CDSS). This study aimed to develop a text mining algorithm to identify ADRs in the free text of Dutch EHRs. Materials and Methods: In Phase I, our previously developed CDSS algorithm was recoded and improved upon with the same relatively large dataset of 35,000 notes (Step A), using R to identify possible ADRs with Medical Dictionary for Regulatory Activities (MedDRA) terms and the related Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) (Step B). In Phase II, six existing text-mining R-scripts were used to detect and present unique ADRs, and positive predictive value (PPV) and sensitivity were observed. Results: In Phase IA, the recoded algorithm performed better than the previously developed CDSS algorithm, resulting in a PPV of 13% and a sensitivity of 93%. The sensitivity for serious ADRs was 95%. The algorithm identified 58 additional possible ADRs. In Phase IB, the algorithm achieved a PPV of 10%, a sensitivity of 86%, and an F-measure of 0.18. In Phase II, four R-scripts enhanced the sensitivity and PPV of the algorithm, resulting in a PPV of 70%, a sensitivity of 73%, an F-measure of 0.71, and a 63% sensitivity for serious ADRs. Discussion and Conclusion: The recoded Dutch algorithm effectively identifies ADRs from free-text Dutch EHRs using R-scripts and MedDRA/SNOMED-CT. The study details its limitations, highlighting the algorithm's potential and the significant improvements achieved.
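
    The abstract describes a dictionary-based approach: match MedDRA/SNOMED-CT derived terms in free-text notes, then score the output by PPV (precision), sensitivity (recall), and their harmonic mean, the F-measure. The authors worked in R; the sketch below is an illustrative Python analogue, with invented Dutch terms and example counts chosen only to reproduce the Phase II ratios.

    ```python
    # Illustrative Python analogue of the described approach; the term list and
    # note text are invented, and the counts are chosen to match Phase II.
    import re

    # Hypothetical fragment of a MedDRA/SNOMED-CT derived Dutch term list.
    ADR_TERMS = {
        "huiduitslag": "Rash",
        "misselijkheid": "Nausea",
        "duizeligheid": "Dizziness",
    }

    def find_possible_adrs(note: str) -> set:
        """Return MedDRA preferred terms whose Dutch synonyms occur in the note."""
        hits = set()
        for term, preferred in ADR_TERMS.items():
            if re.search(rf"\b{re.escape(term)}\b", note, flags=re.IGNORECASE):
                hits.add(preferred)
        return hits

    def evaluate(tp: int, fp: int, fn: int):
        """PPV (precision), sensitivity (recall), and F-measure."""
        ppv = tp / (tp + fp)
        sens = tp / (tp + fn)
        return ppv, sens, 2 * ppv * sens / (ppv + sens)

    print(find_possible_adrs("Patient meldt misselijkheid na start antibiotica."))
    print(evaluate(tp=70, fp=30, fn=26))  # ~0.70, ~0.73, ~0.71 as in Phase II
    ```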

    Clinical rule-guided pharmacists' intervention in hospitalized patients with hypokalaemia: A time series analysis

    What is known and objective: Physicians' response to moderate and severe hypokalaemia in hospitalized patients is frequently suboptimal, leading to an increased risk of cardiac arrhythmias and sudden death. While actively alerting physicians to all critical care values using telephone calls or electronic pop-ups can improve response, it can also lead to alert fatigue and frustration due to non-specific and overdue alerts. Therefore, a new method was tested: a clinical rule built into a clinical decision support system (CDSS) generated alerts for patients with a low serum potassium level (SPL). Patients aged ≥18 years with SPL <2.9 mmol/L measured at least 24 hours after hospitalization, in whom no potassium supplementation was initiated within 4 hours after measurement and normalization of SPL was not achieved within these 4 hours, were included. Haemodialysis patients were excluded. The percentage of hypokalaemic patients with a subsequent prescription for potassium supplementation, the time to subsequent potassium supplementation prescription, the percentage of patients who achieved normokalaemia (SPL ≥3.0 mmol/L), the time to achieve normokalaemia, and the total duration of hospitalization were compared. Results and discussion: A total of 693 patients were included, of whom 278 participated in the intervention phase. The percentage of patients prescribed supplementation as well as the time to prescription improved, from 76.0% in 31.1 hours to 92.0% in 11.3 hours (P <.01). The time to achieve SPL ≥3.0 mmol/L also improved (P <.009). No changes, however, were observed in the percentage of patients who achieved normokalaemia or the time to reach normokalaemia: 87.5% in 65.2 hours pre-intervention compared to 90.2% (P =.69) in 64.0 hours (P =.71) in the intervention group. A non-significant decrease of 8.2 days was observed in the duration of hospitalization: 25.4 compared to 17.2 days (P =.29). What is new and conclusion: Combining CDSS alerting with a pharmacist evaluation is an effective method to improve response rate, time to supplementation, and time to initial improvement, defined as SPL ≥3.0 mmol/L. However, it showed no significant effect on the percentage of patients achieving normokalaemia, the time to normokalaemia, or the duration of hospitalization. The discrepancy between rapid supplementation and initial improvement on the one hand and the failure to improve time to normokalaemia on the other warrants further study.
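
    The inclusion criteria amount to a simple executable rule. The sketch below encodes them in Python; the data structure and field names are assumptions for illustration, not the actual CDSS rule syntax.

    ```python
    # Sketch of the alerting rule as described; field names are assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Patient:
        age: int
        on_haemodialysis: bool
        admitted_at: datetime
        spl_value: float                        # serum potassium level, mmol/L
        spl_measured_at: datetime
        supplementation_at: Optional[datetime]  # first potassium order, if any
        normalized_at: Optional[datetime]       # SPL normalization, if any

    def should_alert(p: Patient) -> bool:
        """Adults with SPL < 2.9 mmol/L measured >= 24 h after admission, no
        supplementation and no normalization within 4 h; haemodialysis excluded."""
        if p.age < 18 or p.on_haemodialysis:
            return False
        if p.spl_value >= 2.9:
            return False
        if p.spl_measured_at - p.admitted_at < timedelta(hours=24):
            return False
        deadline = p.spl_measured_at + timedelta(hours=4)
        supplemented = (p.supplementation_at is not None
                        and p.supplementation_at <= deadline)
        normalized = p.normalized_at is not None and p.normalized_at <= deadline
        return not (supplemented or normalized)
    ```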

    The impact of a notched peripheral intravenous catheter on the first attempt success rate in hospitalized adults: Block-randomized trial

    INTRODUCTION: Peripheral intravenous cannulation is the preferred method to obtain vascular access, but it is not always successful on the first attempt. Evidence on the impact of the intravenous catheter itself on the success rate is lacking. Faster visualization of blood flashback into the catheter, as a result of a notched needle, is thought to increase the first attempt success rate. The current study aimed to assess whether inserting a notched peripheral intravenous catheter would increase first attempt cannulation success up to 90%, compared with inserting a catheter without a notched needle. DESIGN: In this block-randomized trial, adult patients in the intervention group received a notched peripheral intravenous catheter, while patients in the control group received a traditional non-notched catheter. The primary objective was the first attempt success rate of peripheral intravenous cannulation. Intravenous cannulation was performed according to practice guidelines and hospital policy. RESULTS: In total, 328 patients were included in the intervention group and 330 patients in the control group. First attempt success was 85% and 79% for the intervention and control group, respectively. Among patients at high risk for failed cannulation, first attempt success was remarkably higher in the intervention group (29%) than in the control group (10%). CONCLUSION: This study was unable to reach a first attempt success of 90%, although first attempt cannulation success was higher in patients who received a notched needle than in those who received a non-notched needle, particularly in patients with an individual risk profile for difficult intravenous access.
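
    For readers unfamiliar with the design, block randomization allocates patients in small balanced blocks so the two arms stay close in size throughout recruitment (here 328 vs 330). The sketch below shows the idea in Python; the block size and implementation details are assumptions, not taken from the paper.

    ```python
    # Block randomization sketch; block size is an assumption, not from the paper.
    import random

    def block_randomize(n_patients: int, block_size: int = 4) -> list:
        """Allocate patients to the two arms in balanced, randomly ordered blocks."""
        assert block_size % 2 == 0, "block size must split evenly between two arms"
        allocation = []
        while len(allocation) < n_patients:
            block = (["notched"] * (block_size // 2)
                     + ["non-notched"] * (block_size // 2))
            random.shuffle(block)  # random order within each block keeps arms balanced
            allocation.extend(block)
        return allocation[:n_patients]

    arms = block_randomize(658)  # 658 patients in total, as in the trial
    print(arms[:8])
    ```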

    Procedural sedation in the emergency department by Dutch emergency physicians: A prospective multicentre observational study of 1711 adults

    Objective: To describe our experience performing ED procedural sedation in a country where emergency medicine (EM) is a relatively new specialty. Methods: This is a prospective observational study of adult patients undergoing procedural sedation by emergency physicians (EPs) or EM residents in eight hospitals in the Netherlands. Data were collected on a standardised form, including patient characteristics, sedative and analgesic used, procedural success, adverse events (classified according to World SIVA) and rescue interventions. Results: 1711 adult cases were included from 2006 to 2013. Propofol, midazolam and esketamine (the S+ enantiomer of ketamine) were the most commonly used sedatives (63%, 29% and 8%, respectively). Adverse event data were available for all patients. The overall adverse event rate was 11%, mostly hypoxia or apnoea. There was no difference in adverse event rate between EPs and EM residents; however, the success rate of the procedure was significantly higher when EPs performed the procedural sedation (92% vs 84%). No moderate SIVA outcomes (unplanned hospital admission or escalation of care) or sentinel SIVA outcomes (pulmonary aspiration syndrome, death or permanent neurological deficit) occurred. Conclusion: Adverse events during procedural sedation occurred in 11% of patients. There were no moderate or sentinel outcomes, and all events could be managed by the sedating physician. In a country where EM is a relatively new specialty, procedural sedation appears to be safe when performed by EPs or trained EM residents, with adverse event rates comparable to international studies.