Preventing Health Care-Associated Infection: Development of a Clinical Prediction Rule for Clostridium difficile Infection.
Introduction: The incidence of Clostridium difficile infection has been rising steadily, and the pathogen has grown in virulence, with increasing disease severity and morbidity. A clinical prediction rule (risk score), applied early in, or prior to, hospitalization is a strategy to identify vulnerable patients, target preventive interventions, improve outcomes for Clostridium difficile infection, and translate evidence into clinical practice. Objectives: The purpose of this research was to develop and validate a clinical prediction rule for the risk of Clostridium difficile infection. Methods: Between August 2007 and June 2009, preoperative variables and positive Clostridium difficile assays were collected for adult patients admitted for surgical colectomy at 24 hospitals in Michigan. After univariate analysis of 36 preoperative patient risk factors, variables associated with Clostridium difficile infection at a p value ≤ .15 were advanced into a binary logistic regression model. The regression coefficients of this model were translated into a weighted scoring system to form the clinical prediction rule, and receiver operating characteristic (ROC) curve analysis was used to evaluate the predictive accuracy of the score. Results: 2274 patients underwent colectomy and fulfilled the inclusion criteria. A total of 55 patients (2.4% overall) developed Clostridium difficile infection. Mechanical ventilation (p = .012) and a history of transient ischemic attack (p = .042) were independently associated with Clostridium difficile infection. A clinical prediction rule including the variables from the final model showed that higher scores corresponded to greater patient risk (p ≤ .01). The area under the ROC curve was 0.628 (95% CI, 0.550-0.706). Conclusions: Pulmonary and neurological morbidities emerged as significant preoperative predictors of Clostridium difficile infection in this cohort.
In contrast to previous studies, bowel preparation, with and without antibiotics, was not associated with an increased risk of CDI. Findings from this study suggest pathogen-directed interventions, such as a clinical prediction rule to quantify the risk factors of Clostridium difficile infection, may offer a promising adjunctive strategy to reduce infection and protect vulnerable patient populations. Ph.D. dissertation, Nursing, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/89642/1/krapohlg_1.pd
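The scoring approach the abstract describes (regression coefficients translated into a weighted point score, then evaluated by ROC area) can be sketched in a few lines. The coefficients, patient records, and variable names below are invented for illustration; they are not the study's published model.

```python
# Hypothetical sketch of a coefficient-based clinical risk score.
# All numbers here are made up; only the technique follows the abstract.

def weights_from_coefficients(coefs):
    """Translate logistic regression coefficients into integer point
    weights by dividing each by the smallest coefficient and rounding."""
    base = min(abs(c) for c in coefs.values())
    return {name: round(c / base) for name, c in coefs.items()}

def risk_score(patient, weights):
    """Sum the point weights of the risk factors present for a patient."""
    return sum(w for factor, w in weights.items() if patient.get(factor))

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented coefficients for the two predictors named in the abstract.
coefs = {"mechanical_ventilation": 1.2, "history_of_tia": 0.9}
weights = weights_from_coefficients(coefs)

# A toy cohort: 1 = developed infection, 0 = did not.
patients = [
    {"mechanical_ventilation": True,  "history_of_tia": False},
    {"mechanical_ventilation": False, "history_of_tia": False},
    {"mechanical_ventilation": True,  "history_of_tia": True},
    {"mechanical_ventilation": False, "history_of_tia": True},
]
labels = [1, 0, 1, 0]
scores = [risk_score(p, weights) for p in patients]
print(weights, scores, auc(scores, labels))
```

On real data the weights would come from the fitted model, and the AUC (0.628 in the study) would be computed on the full cohort's scores.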
Urinary Retention Evaluation and Catheterization Algorithm for Adult Inpatients
Importance: Acute urinary retention (UR) is common, yet variations in diagnosis and management can lead to inappropriate catheterization and harm. Objective: To develop an algorithm for screening and management of UR among adult inpatients. Design, Setting, and Participants: In this mixed-methods study using the RAND/UCLA Appropriateness Method and qualitative interviews, an 11-member multidisciplinary expert panel of nurses and physicians from across the US used a formal multi-round process from March to May 2015 to rate 107 clinical scenarios involving diagnosis and management of adult UR in postoperative and medical inpatients. The panel ratings informed the first algorithm draft. Semistructured interviews were conducted from October 2020 to May 2021 with 33 frontline clinicians—nurses and surgeons from 5 Michigan hospitals—to gather feedback and inform algorithm refinements. Main Outcomes and Measures: Panelists categorized scenarios assessing when to use bladder scanners, catheterization at various scanned bladder volumes, and choice of catheterization modalities as appropriate, inappropriate, or uncertain. Next, qualitative methods were used to understand the perceived need, usability, and potential algorithm uses. Results: The 11-member expert panel (10 men and 1 woman) used the RAND/UCLA Appropriateness Method to develop a UR algorithm including the following: (1) bladder scanners were preferred over catheterization for UR diagnosis in symptomatic patients or starting as soon as 3 hours since last void if asymptomatic, (2) bladder scanner volumes appropriate to prompt catheterization were 300 mL or greater in symptomatic patients and 500 mL or greater in asymptomatic patients, and (3) intermittent was preferred to indwelling catheterization for managing lower bladder volumes. Interview findings were organized into 3 domains (perceived need, feedback on algorithm, and implementation suggestions). 
The 33 frontline clinicians (9 men and 24 women) who reviewed the algorithm reported that an evidence-based protocol (1) was needed and could be helpful to clinicians, (2) should be simple and graphically appealing to improve rapid clinician review, and (3) should be integrated within the electronic medical record and prominently displayed in hospital units to increase awareness. The draft algorithm was iteratively refined based on stakeholder feedback. Conclusions and Relevance: In this study using a systematic, multidisciplinary, evidence- and expert opinion–based approach, a UR evaluation and catheterization algorithm was developed to improve patient safety by increasing appropriate use of bladder scanners and catheterization. This algorithm addresses the need for practical guidance to manage UR among adult inpatients.
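The panel's three decision points (scan before catheterizing, volume thresholds of 300 mL symptomatic / 500 mL asymptomatic, and preference for intermittent catheterization) can be expressed as a simple decision function. This is a minimal sketch of the logic reported in the abstract, not the published algorithm; the function name and returned action strings are illustrative only, and no such sketch is a substitute for clinical judgment.

```python
# Illustrative sketch of the UR decision points reported in the abstract.
# Thresholds: >= 300 mL (symptomatic) or >= 500 mL (asymptomatic).

def ur_next_step(symptomatic, hours_since_void, scanned_volume_ml=None):
    """Suggest a next step for a possible urinary retention case."""
    if scanned_volume_ml is None:
        # Bladder scanning is preferred over catheterization for diagnosis:
        # scan symptomatic patients, or asymptomatic patients once 3 or
        # more hours have passed since the last void.
        if symptomatic or hours_since_void >= 3:
            return "bladder scan"
        return "continue monitoring"
    threshold = 300 if symptomatic else 500
    if scanned_volume_ml >= threshold:
        # Intermittent catheterization is preferred over indwelling
        # for managing lower bladder volumes.
        return "intermittent catheterization"
    return "recheck later"
```

For example, an asymptomatic patient scanned at 350 mL would fall below the 500 mL threshold ("recheck later"), while the same volume in a symptomatic patient would exceed 300 mL and prompt intermittent catheterization.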
Building, scaling, and sustaining a learning health system for surgical quality improvement: A toolkit
This article describes how to start, replicate, scale, and sustain a learning health system for quality improvement, based on the experience of the Michigan Surgical Quality Collaborative (MSQC). The key components to operationalize a successful collaborative improvement infrastructure and the features of a learning health system are explained. This information is designed to guide others who desire to implement quality improvement interventions across a regional network of hospitals using a collaborative approach. A toolkit is provided (under Supporting Information) with practical information for implementation. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/156156/3/lrh210215.pdf http://deepblue.lib.umich.edu/bitstream/2027.42/156156/2/lrh210215-sup-0001-supinfo.pdf http://deepblue.lib.umich.edu/bitstream/2027.42/156156/1/lrh210215_am.pd
What are the features of high-performing quality improvement collaboratives? Qualitative case study of a state-wide collaboratives programme
Objectives: Despite their widespread use, the evidence base for the effectiveness of quality improvement collaboratives remains mixed. Lack of clarity about “what good looks like” in collaboratives remains a persistent problem. We aimed to identify the distinctive features of a state-wide collaboratives programme that has demonstrated sustained improvements in quality of care in a range of clinical specialties over a long period.
Design: Qualitative case study involving interviews with purposively-sampled participants, observations, and analysis of documents.
Setting: The Michigan Collaborative Quality Initiatives (CQIs) programme.
Participants: 38 participants, including clinicians and managers from 10 collaboratives, and staff from University of Michigan and Blue Cross Blue Shield of Michigan.
Results: We identified five features that characterised success in the collaboratives programme: learning from positive deviance; high-quality coordination; high-quality measurement and comparative performance feedback; careful use of motivational levers; and mobilising professional leadership and building community. Rigorous measurement, securing professional leadership and engagement, cultivating a collaborative culture, creating accountability for quality, and relieving participating sites of unnecessary burdens associated with programme participation were all important to high performance.
Conclusions: Our findings offer valuable learning for optimising collaboration-based approaches to improvement in healthcare, with implications for the design, structure and resourcing of quality improvement collaboratives. These findings are likely to be useful to clinicians, managers, policymakers and health system leaders engaged in multi-organisational approaches to improving quality and safety. This study was funded by James McGowan’s NIHR Academic Clinical Fellowship (ACF-2016-14-011), by Mary Dixon-Woods’ Wellcome Trust Investigator award (WT097899) and by the Health Foundation’s grant to the University of Cambridge for The Healthcare Improvement Studies (THIS) Institute. THIS Institute is supported by the Health Foundation, an independent charity committed to bringing about better health and health care for people in the UK.