8 research outputs found

    International nosocomial infection control consortium (INICC) report, data summary of 36 countries, for 2004-2009

    The results of a surveillance study conducted by the International Nosocomial Infection Control Consortium (INICC) from January 2004 through December 2009 in 422 intensive care units (ICUs) of 36 countries in Latin America, Asia, Africa, and Europe are reported. During the 6-year study period, using Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN; formerly the National Nosocomial Infection Surveillance system [NNIS]) definitions for device-associated health care-associated infections, we gathered prospective data from 313,008 patients hospitalized in the consortium's ICUs for an aggregate of 2,194,897 ICU bed-days. Although device use in the developing countries' ICUs was remarkably similar to that reported in US ICUs in the CDC's NHSN, rates of device-associated nosocomial infection were significantly higher in the ICUs of the INICC hospitals: the pooled rate of central line-associated bloodstream infection in the INICC ICUs, 6.8 per 1,000 central line-days, was more than 3-fold higher than the 2.0 per 1,000 central line-days reported in comparable US ICUs. The overall rate of ventilator-associated pneumonia was also far higher (15.8 vs 3.3 per 1,000 ventilator-days), as was the rate of catheter-associated urinary tract infection (6.3 vs 3.3 per 1,000 catheter-days). Notably, the frequencies of resistance of Pseudomonas aeruginosa isolates to imipenem (47.2% vs 23.0%), Klebsiella pneumoniae isolates to ceftazidime (76.3% vs 27.1%), Escherichia coli isolates to ceftazidime (66.7% vs 8.1%), and Staphylococcus aureus isolates to methicillin (84.4% vs 56.8%) were also higher in the consortium's ICUs, and the crude unadjusted excess mortality of device-related infections ranged from 7.3% (for catheter-associated urinary tract infection) to 15.2% (for ventilator-associated pneumonia). Copyright © 2012 by the Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
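
    The device-associated rates compared above follow the standard NHSN convention of infections per 1,000 device-days. A minimal Python sketch of that arithmetic, using the pooled rates quoted in the abstract (the infection counts below are illustrative back-calculations, not reported values):

    ```python
    # NHSN-style device-associated infection rate:
    # rate = infections / device-days * 1000.
    def rate_per_1000(infections: int, device_days: int) -> float:
        """Infections per 1,000 device-days."""
        return infections / device_days * 1000

    # Illustrative counts chosen to reproduce the pooled rates in the abstract:
    # a CLABSI rate of 6.8 per 1,000 central line-days corresponds to
    # ~6,800 infections per 1,000,000 central line-days.
    inicc_clabsi = rate_per_1000(6_800, 1_000_000)  # 6.8
    us_clabsi = rate_per_1000(2_000, 1_000_000)     # 2.0
    print(f"Rate ratio, INICC vs US: {inicc_clabsi / us_clabsi:.1f}-fold")  # 3.4-fold
    ```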

    The Cholecystectomy As A Day Case (CAAD) Score: A Validated Score of Preoperative Predictors of Successful Day-Case Cholecystectomy Using the CholeS Data Set

    Background: Day-case surgery is associated with significant patient and cost benefits. However, only 43% of cholecystectomy patients are discharged home the same day. One hypothesis is that day-case cholecystectomy rates, defined as the proportion of patients discharged the same day as their operation, may be improved by better assessment of patients using standard preoperative variables. Methods: Data were extracted from a prospectively collected data set of cholecystectomy patients from 166 UK and Irish hospitals (CholeS). Cholecystectomies performed as elective procedures were divided into main (75%) and validation (25%) data sets. Preoperative predictors were identified, and a risk score for failed day-case surgery was devised using multivariate logistic regression. Receiver operating characteristic (ROC) curve analysis was used to validate the score in the validation data set. Results: Of the 7426 elective cholecystectomies performed, 49% were discharged home the same day. Same-day discharge following cholecystectomy was less likely with older patients (OR 0.18, 95% CI 0.15–0.23), higher ASA scores (OR 0.19, 95% CI 0.15–0.23), complicated cholelithiasis (OR 0.38, 95% CI 0.31–0.48), male gender (OR 0.66, 95% CI 0.58–0.74), previous acute gallstone-related admissions (OR 0.54, 95% CI 0.48–0.60) and preoperative endoscopic intervention (OR 0.40, 95% CI 0.34–0.47). The CAAD score was developed using these variables. When applied to the validation subgroup, a CAAD score of ≤5 was associated with 80.8% successful day-case cholecystectomy, compared with 19.2% for a CAAD score >5 (p < 0.001). Conclusions: The CAAD score, which utilises data readily available from clinic letters and electronic sources, can predict same-day discharge following cholecystectomy.
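
    A hedged sketch of the derivation-and-validation workflow the abstract describes (75%/25% split, multivariate logistic regression, ROC validation). The file name and column names are assumptions for illustration, not the actual CholeS schema, and predictors are assumed to be numerically encoded:

    ```python
    # Sketch only: derive a day-case failure model on 75% of the cohort
    # and check discrimination on the held-out 25%.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("choles_elective.csv")  # hypothetical extract
    predictors = ["age_band", "asa", "complicated_cholelithiasis",
                  "male", "prior_acute_admission", "preop_endoscopy"]
    X, y = df[predictors], df["failed_day_case"]

    # 75% derivation / 25% validation split, as in the abstract
    X_dev, X_val, y_dev, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"Validation AUC: {auc:.3f}")
    ```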

    Local Ion Signatures (LIS) for the examination of comprehensive two-dimensional gas chromatography applied to fire debris analysis

    Forensic examination of fire debris evidence is a notoriously difficult analytical task due to the complexity and variability of sample composition. Comprehensive two-dimensional gas chromatography with mass spectrometry detection (GC × GC–MS) couples orthogonal retention mechanisms and therefore offers high peak capacity. We demonstrate recent innovations in combining chemometric techniques for data reduction and feature selection with evaluation of the evidence for forensic questions pertaining to the detection and subsequent classification of ignitable liquid residue (ILR) in fire debris samples. Chromatograms are divided into non-overlapping, spatially delimited regions; for each region a Local Ion Signature (LIS) is computed by summing the intensities per nominal mass-to-charge ratio over all points contained within that region. This yields a reduced feature space representing the original data as a set of consolidated ion traces. Subsequent feature selection is performed by evaluating the individual efficacy of each feature, using a univariate score-based likelihood ratio (LR) approach, for discriminating between pairs of same- or different-type samples. The retained features are used to model each ILR class using linear discriminant analysis (LDA). Results are demonstrated for 155 arson samples containing a diversity of substrate compounds and several known ignitable liquids. ILR detection is performed with 84% accuracy and fewer than 1% false positives, followed by classification of the detected residues. Likelihood ratio distributions are presented for both the detection and classification tasks.
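
    The LIS reduction lends itself to a compact implementation. A minimal sketch, assuming the GC × GC–MS data are available as a dense (first-dimension retention × second-dimension retention × m/z) intensity cube and using an arbitrary tile size; neither assumption reflects the authors' actual data format:

    ```python
    # Tile the GC x GC-MS cube into non-overlapping regions and sum
    # intensities per nominal m/z within each tile: one LIS per region.
    import numpy as np

    def local_ion_signatures(cube: np.ndarray, tile: tuple[int, int]) -> np.ndarray:
        """cube: (rt1, rt2, mz) intensity array; tile: (rows, cols) region size.
        Returns an (n_regions, mz) matrix of consolidated ion traces."""
        rt1, rt2, _ = cube.shape
        tr, tc = tile
        signatures = []
        for i in range(0, rt1 - rt1 % tr, tr):
            for j in range(0, rt2 - rt2 % tc, tc):
                region = cube[i:i + tr, j:j + tc, :]
                signatures.append(region.sum(axis=(0, 1)))  # sum per nominal m/z
        return np.vstack(signatures)

    cube = np.random.rand(120, 80, 200)         # synthetic chromatogram cube
    lis = local_ion_signatures(cube, (20, 20))  # -> (24, 200) feature matrix
    ```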

    Impact of International Nosocomial Infection Control Consortium (INICC) strategy on central line-associated bloodstream infection rates in the intensive care units of 15 developing countries

    BACKGROUND. The International Nosocomial Infection Control Consortium (INICC) was established in 15 developing countries to reduce infection rates in resource-limited hospitals by focusing on education and feedback of outcome surveillance (infection rates) and process surveillance (adherence to infection control measures). We report a time-sequence analysis of the effectiveness of this approach in reducing rates of central line-associated bloodstream infection (CLABSI) and associated deaths in 86 intensive care units with a minimum of 6 months of INICC membership. METHODS. Pooled CLABSI rates during the first 3 months (baseline) were compared with rates at 6-month intervals during the first 24 months in 53,719 patients (190,905 central line-days). Process surveillance results at baseline were compared with intervention period data. RESULTS. During the first 6 months, CLABSI incidence decreased by 33% (from 14.5 to 9.7 CLABSIs per 1,000 central line-days). Over the first 24 months there was a cumulative reduction from baseline of 54% (from 16.0 to 7.4 CLABSIs per 1,000 central line-days; relative risk, 0.46 [95% confidence interval, 0.33–0.63]; P < .001). The number of deaths in patients with CLABSI decreased by 58%. During the intervention period, hand hygiene adherence improved from 50% to 60% (P < .001); the percentage of intensive care units that used maximal sterile barriers at insertion increased from 45% to 85% (P < .001), that adopted chlorhexidine for antisepsis increased from 7% to 27% (P = .018), and that sought to remove unneeded catheters increased from 37% to 83% (P = .004); and the duration of central line placement decreased from 4.1 to 3.5 days (P < .001). CONCLUSIONS. Education, performance feedback, and outcome and process surveillance of CLABSI rates significantly improved infection control adherence, reducing CLABSI incidence by 54% and the number of CLABSI-associated deaths by 58% in INICC hospitals during the first 2 years. © 2010 by The Society for Healthcare Epidemiology of America. All rights reserved.
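
    The headline reduction and relative risk follow from simple rate arithmetic; a worked sketch using the rates quoted above:

    ```python
    # Worked arithmetic behind the headline figures (rates from the abstract).
    baseline = 16.0      # CLABSIs per 1,000 central line-days, first 3 months
    final_interval = 7.4 # pooled rate during the last intervention interval

    relative_risk = final_interval / baseline
    reduction_pct = (1 - relative_risk) * 100
    print(f"RR = {relative_risk:.2f}, cumulative reduction = {reduction_pct:.0f}%")
    # -> RR = 0.46, cumulative reduction = 54%
    ```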

    Predicting the difficult laparoscopic cholecystectomy: development and validation of a pre-operative risk score using an objective operative difficulty grading system

    Background: The prediction of a difficult cholecystectomy has traditionally been based on certain pre-operative clinical and imaging factors. Most of the previous literature reported small patient cohorts and did not use an objective measure of operative difficulty. The aim of this study was to develop a pre-operative score to predict difficult cholecystectomy, as defined by a validated intra-operative difficulty grading scale. Methods: Two cohorts from prospectively maintained databases of patients who underwent laparoscopic cholecystectomy were analysed: the CholeS Study (8755 patients) and a single-surgeon series (4089 patients). Factors potentially predictive of difficulty were correlated with the Nassar intra-operative difficulty scale. A multivariable binary logistic regression analysis was then used to identify factors independently associated with difficult laparoscopic cholecystectomy, defined as operative difficulty grades 3 to 5. The resulting model was converted to a risk score and validated on both internal and external datasets. Results: Increasing age and ASA classification, male gender, a diagnosis of CBD stones or cholecystitis, thick-walled gallbladders, CBD dilation, use of pre-operative ERCP and non-elective operations were found to be significant independent predictors of difficult cases. A risk score based on these factors returned an area under the ROC curve of 0.789 (95% CI 0.773–0.806, p < 0.001) on external validation, with difficult surgeries occurring in 11.0% of patients classified as low risk versus 80.0% of those classified as high risk. Conclusion: We have developed and validated a pre-operative scoring system that uses easily available pre-operative variables to predict difficult laparoscopic cholecystectomies. This scoring system should assist in patient selection for day-case surgery, optimising pre-operative surgical planning (e.g. allocation of the procedure to a suitably trained surgeon) and counselling patients during the consent process. The score could also be used to risk-adjust outcomes in future research.
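
    The abstract does not specify how the regression model was converted to a risk score. One common approach, sketched below with hypothetical placeholder coefficients (not the authors' weights), is to scale each coefficient relative to the smallest and round to integer points:

    ```python
    # Sketch of one conventional coefficient-to-points conversion; the
    # coefficients here are invented placeholders for illustration only.
    coefficients = {
        "age_per_decade": 0.25, "asa_per_class": 0.40, "male": 0.30,
        "cbd_stone_or_cholecystitis": 0.55, "thick_walled_gb": 0.50,
        "cbd_dilation": 0.35, "preop_ercp": 0.45, "non_elective": 0.60,
    }
    base = min(coefficients.values())
    points = {name: round(beta / base) for name, beta in coefficients.items()}
    print(points)  # integer points summed per patient to give the risk score
    ```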

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Get PDF
    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected in the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the proportion of operations lasting > 90 min increasing more than eightfold, from 5.1% to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative duration on validation in an external cohort. As such, the tool may enable organisations to better organise theatre lists and deliver greater efficiencies in care.
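
    A hedged sketch of the external-validation step: apply the derived score to the second cohort and measure discrimination for operations over 90 min. The file and column names are assumptions, and the use of score deciles for the "extremes" comparison is illustrative rather than the authors' definition:

    ```python
    # Sketch only: external validation of a duration risk score.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    val = pd.read_csv("tertiary_centre_cohort.csv")   # hypothetical extract
    long_op = (val["duration_min"] > 90).astype(int)  # outcome: > 90 min
    print("External AUC:", roc_auc_score(long_op, val["score"]))

    # Compare event rates at the extremes of the score distribution
    lo = long_op[val["score"] <= val["score"].quantile(0.1)].mean()
    hi = long_op[val["score"] >= val["score"].quantile(0.9)].mean()
    print(f"> 90 min: {lo:.1%} (lowest decile) vs {hi:.1%} (highest decile)")
    ```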