
    Detecting Environmental Contamination of MRSA in Ambulances: A Novel and Efficient Sampling Methodology

    Background: Methicillin-resistant Staphylococcus aureus (MRSA) can be found in emergency medical services (EMS) ambulances, posing both an occupational risk and a patient safety hazard. Screening for environmental contamination is often not performed because of limited resources and logistical challenges. This study's objective was to compare traditional screening of individual surfaces with "pooled sampling" as a way to efficiently identify contamination. Methods: A cross-sectional study, conducted among 145 Ohio EMS ambulances from 84 agencies, tested a novel pooled sampling methodology to detect MRSA-contaminated ambulances. For ambulances screened using pooled sampling, 3 samples were collected within each ambulance: Pool One included cabinets, doorways, and the ceiling bar; Pool Two included the cot, seats, and backboard; Pool Three included the steering wheel, kits, and clipboard. For ambulances screened with the traditional detection technique, each of the 9 aforementioned surfaces was sampled individually. Descriptive statistics and unadjusted and adjusted odds ratios (OR) were calculated to compare the 2 methods. Results: Forty-seven of 145 ambulances (32.4%) had at least 1 of the 9 locations contaminated with MRSA. When the 2 screening methodologies were compared, no significant difference was observed in overall detection of MRSA-contaminated ambulances (24/60 [40%] versus 23/85 [27.6%], P = 0.1000). This suggests that pooled sampling is an efficient method for identifying MRSA-contaminated ambulances. Conclusion: One-third of Ohio ambulances in this study had MRSA contamination, so an efficient methodology for identifying ambulances contaminated with hazardous pathogens is highly valuable. Pooling can save resources and simplify sampling logistics, both of which could positively impact infection control practices in emergency medical services.
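
    The headline comparison above reduces to a 2 x 2 table of detection counts by screening method. A minimal sketch of the unadjusted analysis follows; the uncorrected chi-square test is an assumption about the authors' method, chosen because it yields a P value close to the 0.1000 reported.

        # Sketch: unadjusted comparison of MRSA detection by screening method,
        # built from the counts reported in the abstract. The uncorrected
        # chi-square test here is an assumption, not the authors' stated method.
        from scipy.stats import chi2_contingency

        #          detected, not detected
        table = [[24, 60 - 24],    # pooled sampling:     24/60 ambulances (40%)
                 [23, 85 - 23]]    # individual surfaces: 23/85 ambulances (27.6%)

        odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
        chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
        print(f"unadjusted OR = {odds_ratio:.2f}, P = {p_value:.3f}")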

    Genetic relatedness and molecular characterization of multidrug resistant Acinetobacter baumannii isolated in central Ohio, USA

    Background: Over the last decade, nosocomial infections due to Acinetobacter baumannii have been described with an increasing trend towards multidrug resistance, mostly in intensive care units. The aim of the present study was to determine the clonal relatedness of clinical isolates and to elucidate the genetic basis of imipenem resistance. Methods: A. baumannii isolates (n = 83) originating from two hospital settings in central Ohio were used in this study. Pulsed-field gel electrophoresis genotyping and antimicrobial susceptibility testing for clinically relevant antimicrobials were performed. Resistance determinants were characterized using phenotypic (accumulation assay for efflux) and genotypic (PCR, DNA sequencing, plasmid analysis, and electroporation) approaches. Results: The isolates were predominantly multidrug resistant (>79.5%) and comprised thirteen unique pulsotypes, with genotype VII circulating in both hospitals. The presence of blaOXA-23 in 13% (11/83) and ISAba1-linked blaOXA-66 in 79.5% (66/83) of clinical isolates was associated with high-level imipenem resistance. In this set of OXA-producing isolates, multidrug resistance was conferred by blaADC-25, class 1 integron-borne aminoglycoside-modifying enzymes, sense mutations in gyrA/parC, and active efflux (with evidence for the presence of the adeB efflux gene). Conclusion: This study underscores the major role of carbapenem-hydrolyzing class D β-lactamases, and in particular the acquired OXA-23, in the dissemination of imipenem-resistant A. baumannii. The co-occurrence of additional resistance determinants could also pose a significant threat.

    Developing a risk stratification model for surgical site infection after abdominal hysterectomy

    OBJECTIVE: The incidence of surgical site infection (SSI) after hysterectomy ranges widely, from 2% to 21%. Risk factors are not understood well enough to build a specific risk stratification index. METHODS: Retrospective case-control study of 545 abdominal and 275 vaginal hysterectomies performed from 7/1/03 to 6/30/05 at four institutions. SSIs were defined using CDC/NNIS criteria. Independent risk factors for SSI after abdominal hysterectomy were identified by logistic regression. RESULTS: There were 13 deep incisional, 53 superficial incisional, and 18 organ-space SSIs after abdominal hysterectomy and 14 organ-space SSIs after vaginal hysterectomy. Because risk factors for organ-space SSI differed in univariate analysis, further analyses focused on incisional SSI after abdominal hysterectomy. The maximum serum glucose within 5 days after operation was highest in patients with deep incisional SSI, lower in patients with superficial incisional SSI, and lowest in uninfected patients (median 189, 156, and 141 mg/dL; p = .005). Independent risk factors for incisional SSI included blood transfusion (odds ratio [OR] 2.4) and morbid obesity (body mass index [BMI] > 35, OR 5.7). Duration of operation > 75th percentile (OR 1.7), obesity (BMI 30-35, OR 3.0), and lack of private health insurance (OR 1.7) were marginally associated with increased odds of SSI. CONCLUSIONS: Incisional SSI after abdominal hysterectomy was associated with increased BMI and blood transfusion. Longer operative time and lack of private health insurance were marginally associated with SSI. A specific risk stratification index could help to more accurately predict the risk of incisional SSI following abdominal hysterectomy.
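
    For readers interested in how such adjusted odds ratios are typically produced, the sketch below fits a logistic regression for incisional SSI. The variable names and synthetic data are illustrative assumptions, not the study dataset.

        # Sketch: estimating independent risk factors for incisional SSI with
        # logistic regression. The data frame is synthetic; in the study, rows
        # would be abdominal hysterectomy patients with chart-derived fields.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 545
        df = pd.DataFrame({
            "incisional_ssi": rng.integers(0, 2, n),  # 1 = deep or superficial SSI
            "transfusion":    rng.integers(0, 2, n),  # perioperative blood transfusion
            "morbid_obesity": rng.integers(0, 2, n),  # BMI > 35
            "long_operation": rng.integers(0, 2, n),  # duration > 75th percentile
        })

        fit = smf.logit("incisional_ssi ~ transfusion + morbid_obesity + long_operation",
                        data=df).fit(disp=False)
        odds_ratios = np.exp(fit.params)              # adjusted odds ratios
        print(pd.concat([odds_ratios, np.exp(fit.conf_int())], axis=1))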

    Sentinel-based Surveillance of Coyotes to Detect Bovine Tuberculosis, Michigan

    Bovine tuberculosis (TB) is endemic in white-tailed deer (Odocoileus virginianus) in the northeastern portion of Michigan’s Lower Peninsula. Bovine TB in deer and cattle has created immense financial consequences for the livestock industry and the hunting public. Surveillance identified coyotes (Canis latrans) as potential bio-accumulators of Mycobacterium bovis, a finding that generated interest in their potential to serve as sentinels for monitoring disease risk. We sampled 175 coyotes in the bovine TB–endemic area. Fifty-eight tested positive, and infection prevalence by county ranged from 19% to 52% (mean 33%, SE 0.07). By contrast, prevalence in deer (n = 3,817) was much lower (1.49%; Mann-Whitney U4,4 = 14). Using coyotes as sentinels reduced the sampling effort required to detect M. bovis by 40%. Because of this reduced sampling intensity, sentinel coyote surveys have the potential to be practical indicators of M. bovis presence in wildlife and livestock.
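
    The sentinel argument rests on simple detection arithmetic: at higher prevalence, far fewer animals need to be sampled to find at least one positive. The sketch below uses the standard freedom-from-disease formula as an illustration; it is not necessarily the calculation behind the 40% figure reported above.

        # Sketch: samples needed to detect at least one M. bovis-positive animal,
        # n = ln(1 - confidence) / ln(1 - prevalence), assuming random sampling
        # from a large population and a perfect test (illustrative only).
        import math

        def samples_to_detect(prevalence: float, confidence: float = 0.95) -> int:
            return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

        print("deer   (1.49% prevalence):", samples_to_detect(0.0149))  # ~200 animals
        print("coyote (33% prevalence):  ", samples_to_detect(0.33))    # ~8 animals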

    Enhanced surgical site infection surveillance following hysterectomy, vascular, and colorectal surgery

    Objective. To evaluate the use of inpatient pharmacy and administrative data to detect surgical site infections (SSIs) following hysterectomy and colorectal and vascular surgery. Design. Retrospective cohort study. Setting. Five hospitals affiliated with academic medical centers. Patients. Adults who underwent abdominal or vaginal hysterectomy, colorectal surgery, or vascular surgery procedures between July 1, 2003, and June 30, 2005. Methods. We reviewed the medical records of weighted, random samples drawn from 3,079 abdominal and vaginal hysterectomy, 4,748 colorectal surgery, and 3,332 vascular surgery procedures. We compared routine surveillance with screening of inpatient pharmacy data and diagnosis codes and then performed medical record review to confirm SSI status. Results. Medical records from 823 hysterectomy, 736 colorectal surgery, and 680 vascular surgery procedures were reviewed. SSI rates determined by antimicrobial- and/or diagnosis code-based screening followed by medical record review (enhanced surveillance) were substantially higher than rates determined by routine surveillance (4.3% [95% confidence interval, 3.6%–5.1%] vs 2.7% for hysterectomies, 7.1% [95% confidence interval, 6.7%–8.2%] vs 2.0% for colorectal procedures, and 2.3% [95% confidence interval, 1.9%–2.9%] vs 1.4% for vascular procedures). Enhanced surveillance had substantially higher sensitivity than did routine surveillance to detect SSI (92% vs 59% for hysterectomies, 88% vs 22% for colorectal procedures, and 72% vs 43% for vascular procedures). A review of medical records confirmed SSI for 31% of hysterectomies, 20% of colorectal procedures, and 31% of vascular procedures that met the enhanced screening criteria. Conclusion. Antimicrobial- and diagnosis code-based screening may be a useful method for enhancing and streamlining SSI surveillance for a variety of surgical procedures, including those procedures targeted by the Centers for Medicare and Medicaid Services.
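
    The performance figures quoted above are the usual screening measures: sensitivity against chart review and the positive predictive value of the screening criteria. A minimal sketch, with placeholder counts rather than study data:

        # Sketch: screening performance measures used above (counts are placeholders).
        def sensitivity(true_pos: int, false_neg: int) -> float:
            return true_pos / (true_pos + false_neg)

        def positive_predictive_value(true_pos: int, screen_pos: int) -> float:
            return true_pos / screen_pos

        tp, fn, screen_pos = 46, 4, 150   # hypothetical procedure group
        print(f"sensitivity = {sensitivity(tp, fn):.0%}, "
              f"PPV = {positive_predictive_value(tp, screen_pos):.0%}")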

    Beyond 30 days: Does limiting the duration of surgical site infection follow-up limit detection?

    Concern over the consistency and completeness of surgical site infection (SSI) surveillance has increased due to public reporting of hospital SSI rates and imminent non-payment rules for hospitals that do not meet national benchmarks. Already, hospitals no longer receive additional payment from the Centers for Medicare & Medicaid Services (CMS) for certain infections following coronary artery bypass graft (CABG) surgery, orthopedic procedures, and bariatric surgery. One major concern is incomplete and differential post-discharge surveillance. At present, substantial variation exists in how, and whether, hospitals identify SSI events after the hospitalization in which the surgery occurred. Parameters used for SSI surveillance, such as the duration of the window of time during which surveillance takes place following the surgical procedure, can affect the completeness of surveillance data. Determining the optimal surveillance period involves balancing the potential increase in case ascertainment from a longer follow-up period against the increased resources that would be required. Currently, the time window for identifying potentially preventable SSIs related to events at the time of surgery is not fully standardized. The Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) requires a 365-day postoperative surveillance period for procedures involving implants and a 30-day period for non-implant procedures. In contrast, the National Surgical Quality Improvement Program (NSQIP) and the Society of Thoracic Surgeons (STS) systems employ 30-day postoperative surveillance regardless of implant use. As consensus builds toward national quality measures for hospital-specific SSI rates, it will be important to assess the frequency of events beyond the 30-day post-surgical window in order to quantify the value of various durations of surveillance and, ultimately, to inform the choice of specific outcome measures.

    Deriving measures of intensive care unit antimicrobial use from computerized pharmacy data: Methods, validation, and overcoming barriers

    OBJECTIVE: To outline methods for deriving and validating intensive care unit (ICU) antimicrobial utilization (AU) measures from computerized data and to describe programming problems that emerged. DESIGN: Retrospective evaluation of computerized pharmacy and administrative data. SETTING: ICUs from four academic medical centers over 36 months. INTERVENTIONS: Investigators separately developed and validated programming code to report AU measures in selected ICUs. Antibacterial and antifungal drugs for systemic administration were categorized and expressed as antimicrobial days (each day that each antimicrobial drug was given to each patient) and patient-days on antimicrobials (each day that any antimicrobial drug was given to each patient). Monthly rates were compiled and analyzed centrally with ICU patient-days as the denominator. Results were validated against data collected from manual medical record review. Frequent discussion among investigators aided identification and correction of programming problems. RESULTS: AU data were successfully generated through an iterative process of computer code revision. After major programming errors were identified and resolved, comparison of computerized patient-level data with data collected by manual medical record review revealed discrepancies in antimicrobial days and patient-days on antimicrobials ranging from <1% to 17.7%. The hospital for which numerator data were derived from electronic medication administration records had the least discrepant results. CONCLUSIONS: Computerized AU measures can be derived feasibly, but threats to validity must be sought and corrected. The magnitude of discrepancies between computerized AU data and a gold standard based on manual chart review varies, with electronic medication administration records providing maximal accuracy.
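
    The two numerator measures described above fall out of a de-duplication of patient-level administration records. A minimal sketch, with assumed column names and illustrative rows:

        # Sketch: deriving the two antimicrobial-use measures described above from
        # a patient-level medication administration table (column names assumed).
        import pandas as pd

        admin = pd.DataFrame({                     # illustrative records only
            "patient_id": [1, 1, 1, 2],
            "drug":       ["vancomycin", "cefepime", "vancomycin", "fluconazole"],
            "admin_date": pd.to_datetime(["2004-01-01", "2004-01-01",
                                          "2004-01-02", "2004-01-01"]),
        })
        icu_patient_days = 250                     # denominator from administrative data

        # Antimicrobial days: each day that each drug was given to each patient.
        antimicrobial_days = admin.drop_duplicates(["patient_id", "drug", "admin_date"]).shape[0]
        # Patient-days on antimicrobials: each day that any drug was given to each patient.
        patient_days_on_abx = admin.drop_duplicates(["patient_id", "admin_date"]).shape[0]

        print("antimicrobial days per 1,000 patient-days:",
              1000 * antimicrobial_days / icu_patient_days)
        print("patient-days on antimicrobials per 1,000 patient-days:",
              1000 * patient_days_on_abx / icu_patient_days)

    In practice the denominator would come from the same administrative system, and the drug list would be restricted to systemic antibacterials and antifungals as described above.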

    Multicenter evaluation of computer automated versus traditional surveillance of hospital-acquired bloodstream infections

    Objective. Central line–associated bloodstream infection (BSI) rates are a key quality metric for comparing hospital quality and safety. Traditional BSI surveillance may be limited by interrater variability. We assessed whether a computer-automated method of central line–associated BSI detection can improve the validity of surveillance. Design. Retrospective cohort study. Setting. Eight medical and surgical intensive care units (ICUs) in 4 academic medical centers. Methods. Traditional surveillance (by hospital staff) and computer algorithm surveillance were each compared against a retrospective audit review using a random sample of blood culture episodes during the period 2004–2007 from which an organism was recovered. Episode-level agreement with audit review was measured with κ statistics, and differences were assessed using the test of equal κ coefficients. Linear regression was used to assess the relationship between surveillance performance (κ) and surveillance-reported BSI rates (BSIs per 1,000 central line–days). Results. We evaluated 664 blood culture episodes. Agreement with audit review was significantly lower for traditional surveillance (κ [95% confidence interval (CI)] = 0.44 [0.37–0.51]) than for computer algorithm surveillance (κ 95% CI, 0.52–0.64; P = .001). Agreement between traditional surveillance and audit review was heterogeneous across ICUs (P = .001); furthermore, traditional surveillance performed worse among ICUs reporting lower (better) BSI rates (P = .001). In contrast, computer algorithm performance was consistent across ICUs and across the range of computer-reported central line–associated BSI rates. Conclusions. Compared with traditional surveillance of bloodstream infections, computer-automated surveillance improves accuracy and reliability, making interfacility performance comparisons more valid.
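
    Episode-level agreement with the audit review, as reported above, is Cohen's kappa computed on paired BSI determinations. A minimal sketch with illustrative labels, not study data:

        # Sketch: agreement between a surveillance method and the audit review,
        # measured as Cohen's kappa on per-episode BSI calls (labels illustrative).
        from sklearn.metrics import cohen_kappa_score

        audit_review = [1, 0, 1, 1, 0, 0, 1, 0]   # gold standard: BSI yes/no per episode
        surveillance = [1, 0, 0, 1, 0, 1, 1, 0]   # traditional or algorithmic call

        kappa = cohen_kappa_score(audit_review, surveillance)
        print(f"kappa = {kappa:.2f}")             # compared across methods in the study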

    Implementing automated surveillance for tracking Clostridium difficile infection at multiple healthcare facilities

    Automated surveillance utilizing electronically available data has been found to be accurate and to save time. An automated Clostridium difficile infection (CDI) surveillance algorithm was validated at four CDC Prevention Epicenters hospitals. Electronic surveillance was highly sensitive and specific and showed good to excellent agreement for hospital-onset; community-onset, study facility associated; indeterminate; and recurrent CDI.
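
    The surveillance categories named above are assigned from specimen timing relative to admission, discharge, and any prior positive test. The sketch below uses commonly cited surveillance cutoffs as assumptions; the validated algorithm's exact rules may differ.

        # Sketch of the kind of timing rules such an algorithm applies. The cutoffs
        # below (hospital day 4, 4 and 12 weeks post-discharge, 8 weeks for
        # recurrence) are assumed for illustration, not taken from the study.
        from datetime import date

        def classify_cdi(specimen: date, admit: date, last_discharge: date | None,
                         prior_positive: date | None) -> str:
            if prior_positive and (specimen - prior_positive).days <= 56:   # within 8 weeks
                return "recurrent"
            if (specimen - admit).days >= 3:                                # on/after hospital day 4
                return "hospital-onset"
            if last_discharge is None:
                return "community-associated"                               # not facility associated
            weeks_since_discharge = (specimen - last_discharge).days / 7
            if weeks_since_discharge <= 4:
                return "community-onset, study facility associated"
            if weeks_since_discharge <= 12:
                return "indeterminate"
            return "community-associated"

        print(classify_cdi(date(2009, 3, 10), date(2009, 3, 2), None, None))  # hospital-onset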