
    Which comorbid conditions should we be analyzing as risk factors for healthcare-associated infections?

    OBJECTIVE: To determine which comorbid conditions are considered causally related to central-line–associated bloodstream infection (CLABSI) and surgical-site infection (SSI) based on expert consensus.
    DESIGN: Using the Delphi method, we administered an iterative, 2-round survey to 9 infectious disease and infection control experts from the United States.
    METHODS: Based on our selection of components from the Charlson and Elixhauser comorbidity indices, 35 different comorbid conditions were rated from 1 (not at all related) to 5 (strongly related) by each expert, separately for CLABSI and SSI, based on perceived relatedness to the outcome. To assign expert consensus on causal relatedness for each comorbid condition, all 3 of the following criteria had to be met at the end of the second round: (1) a majority (>50%) of experts rating the condition at 3 (somewhat related) or higher, (2) interquartile range (IQR) ≤ 1, and (3) standard deviation (SD) ≤ 1.
    RESULTS: From round 1 to round 2, the IQR and SD decreased for ratings of 21 of 35 (60%) and 33 of 35 (94%) comorbid conditions, respectively, for CLABSI, and for 17 of 35 (49%) and 32 of 35 (91%) comorbid conditions for SSI, suggesting improvement in consensus among this group of experts. At the end of round 2, 13 of 35 (37%) and 17 of 35 (49%) comorbid conditions were perceived as causally related to CLABSI and SSI, respectively.
    CONCLUSIONS: Our results have produced a list of comorbid conditions that should be analyzed as risk factors for, and further explored for risk adjustment of, CLABSI and SSI.
    Infect Control Hosp Epidemiol 2017;38:449–454
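The three consensus criteria are concrete enough to sketch in code. A minimal check, assuming `ratings` holds the nine experts' 1-5 scores for a single comorbid condition (the function name and sample scores are hypothetical, not from the study):

```python
import statistics

def meets_consensus(ratings):
    """Apply the abstract's three criteria to one condition's ratings."""
    # (1) a majority (>50%) of experts rate the condition 3 or higher
    majority = sum(r >= 3 for r in ratings) / len(ratings) > 0.5
    # (2) interquartile range (IQR) <= 1
    q1, _, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
    # (3) standard deviation (SD) <= 1
    return majority and (q3 - q1) <= 1 and statistics.stdev(ratings) <= 1

# Nine hypothetical expert ratings for one comorbid condition
print(meets_consensus([3, 4, 4, 3, 5, 4, 3, 4, 4]))  # True
print(meets_consensus([1, 1, 1, 5, 5, 5, 1, 5, 1]))  # False
```

All three conditions must hold simultaneously, so tight agreement around a low score (everyone rates "1") also fails, via criterion (1).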

    Is there a correlation between infection control performance and other hospital quality measures?

    Quality measures are increasingly reported by hospitals to the Centers for Medicare and Medicaid Services (CMS), yet there may be tradeoffs in performance between infection control (IC) and other quality measures. Hospitals that performed best on IC measures did not perform well on most CMS non–IC quality measures.

    Modelling Segmented Cardiotocography Time-Series Signals Using One-Dimensional Convolutional Neural Networks for the Early Detection of Abnormal Birth Outcomes

    Gynaecologists and obstetricians visually interpret cardiotocography (CTG) traces using the International Federation of Gynaecology and Obstetrics (FIGO) guidelines to assess the wellbeing of the foetus during antenatal care. This approach has raised concerns among professionals regarding inter- and intra-observer variability, where clinical diagnosis has only a 30% positive predictive value when classifying pathological outcomes. Machine learning models, trained with FIGO and other user-derived features extracted from CTG traces, have been shown to increase positive predictive capacity and minimise variability. This is only possible, however, when class distributions are equal, which is rarely the case in clinical trials, where case-control observations are heavily skewed in favour of normal outcomes. Classes can be balanced using either synthetic data derived from resampled case training data or by decreasing the number of control instances; however, this either introduces bias or removes valuable information. Concerns have also been raised regarding machine learning studies and their reliance on manually handcrafted features. While this has led to some interesting results, deriving an optimal set of features is considered to be an art as well as a science, and is often an empirical and time-consuming process. In this paper, we address both of these issues and propose a novel CTG analysis methodology that (a) splits CTG time-series signals into n-size windows with equal class distributions, and (b) automatically extracts features from time-series windows using a one-dimensional convolutional neural network (1DCNN) and multilayer perceptron (MLP) ensemble. Collectively, the proposed approach balances class distributions and removes the need to handcraft features from CTG traces.
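The windowing-plus-balancing step can be sketched as follows. Everything here is a hypothetical stand-in, not the authors' pipeline: `make_windows`, `balance_classes`, and the toy traces are illustrative, and in the actual method the resulting windows feed a 1DCNN/MLP ensemble for feature extraction:

```python
import numpy as np

def make_windows(trace, n):
    """Split a 1-D signal into non-overlapping windows of length n,
    dropping any incomplete tail."""
    k = len(trace) // n
    return trace[: k * n].reshape(k, n)

def balance_classes(X, y, seed=0):
    """Undersample the majority class so both classes contribute the
    same number of windows."""
    rng = np.random.default_rng(seed)
    idx0 = np.flatnonzero(y == 0)
    idx1 = np.flatnonzero(y == 1)
    m = min(len(idx0), len(idx1))
    keep = np.concatenate([rng.choice(idx0, m, replace=False),
                           rng.choice(idx1, m, replace=False)])
    return X[keep], y[keep]

# Hypothetical traces: one "normal" (label 0) and one "pathological"
# (label 1), windowed at n=4 samples per window
normal = make_windows(np.arange(12.0), 4)   # 3 windows
patho = make_windows(np.arange(8.0), 4)     # 2 windows
X = np.vstack([normal, patho])
y = np.array([0, 0, 0, 1, 1])
Xb, yb = balance_classes(X, y)
print(Xb.shape, np.bincount(yb))  # (4, 4) [2 2]
```

Windowing a long trace multiplies the number of minority-class examples before balancing, which is what lets the approach avoid synthetic resampling.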

    The effect of adding comorbidities to the current Centers for Disease Control and Prevention central-line–associated bloodstream infection risk-adjustment methodology

    BACKGROUND: Risk adjustment is needed to fairly compare central-line–associated bloodstream infection (CLABSI) rates between hospitals. Until 2017, the Centers for Disease Control and Prevention (CDC) methodology adjusted CLABSI rates only by type of intensive care unit (ICU). The 2017 CDC models also adjust for hospital size and medical school affiliation. We hypothesized that risk adjustment would be improved by including patient demographics and comorbidities from electronically available hospital discharge codes.
    METHODS: Using a cohort design across 22 hospitals, we analyzed data from ICU patients admitted between January 2012 and December 2013. Demographics and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) discharge codes were obtained for each patient, and CLABSIs were identified by trained infection preventionists. Models adjusting only for ICU type, and for ICU type plus patient case mix, were built and compared using discrimination and the standardized infection ratio (SIR). Hospitals were ranked by SIR under each model to examine and compare changes in rank.
    RESULTS: Overall, 85,849 ICU patients were analyzed, and 162 (0.2%) developed CLABSI. The significant variables added to the ICU model were coagulopathy, paralysis, renal failure, malnutrition, and age. The C statistics were 0.55 (95% CI, 0.51–0.59) for the ICU-type model and 0.64 (95% CI, 0.60–0.69) for the ICU-type plus patient case-mix model. When hospitals were ranked by adjusted SIRs, 10 hospitals (45%) changed rank when comorbidity was added to the ICU-type model.
    CONCLUSIONS: Our risk-adjustment model for CLABSI using electronically available comorbidities demonstrated better discrimination than the CDC model. The CDC should strongly consider comorbidity-based risk adjustment to more accurately compare CLABSI rates across hospitals.
    Infect Control Hosp Epidemiol 2017;38:1019–1024
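The ranking step rests on the standardized infection ratio, observed infections divided by model-predicted infections. A minimal sketch, with entirely hypothetical hospital counts (the function and data are illustrative, not from the study):

```python
def sir(observed, predicted):
    """Standardized infection ratio: observed / model-predicted CLABSIs.
    Values above 1 mean more infections than the risk model predicts."""
    return observed / predicted

# Hypothetical hospitals: (observed CLABSIs, model-predicted CLABSIs)
hospitals = {"A": (8, 10.0), "B": (5, 4.0), "C": (3, 3.0)}

# Rank from lowest (best) to highest SIR
ranked = sorted(hospitals, key=lambda h: sir(*hospitals[h]))
print(ranked)  # ['A', 'C', 'B']
```

Because the predicted denominator comes from the risk model, swapping in a model with different covariates (ICU type only vs ICU type plus comorbidities) changes each hospital's SIR and can reorder the ranking, which is exactly the 45%-of-hospitals rank shift the abstract reports.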

    The impact of universal glove and gown use on Clostridioides difficile acquisition: A cluster-randomized trial

    BACKGROUND: Clostridioides difficile is the most common cause of healthcare-associated infections in the United States. It is unknown whether universal gown and glove use in intensive care units (ICUs) decreases acquisition of C. difficile.
    METHODS: This was a secondary analysis of a cluster-randomized trial in 20 medical and surgical ICUs in 20 US hospitals from 4 January 2012 to 4 October 2012. After a baseline period, ICUs were randomized to standard practice for glove and gown use versus the intervention, in which all healthcare workers were required to wear gloves and gowns for all patient contact and when entering any patient room (contact precautions). The primary outcome was acquisition of toxigenic C. difficile, determined by surveillance cultures collected on admission to and discharge from the ICU.
    RESULTS: A total of 21 845 patients had both admission and discharge perianal swabs cultured for toxigenic C. difficile. On admission, 9.43% (2060/21 845) of patients were colonized with toxigenic C. difficile. No significant difference was observed in the rate of toxigenic C. difficile acquisition with universal gown and glove use. Differences in acquisition rates between the study and baseline periods were 1.49 per 100 patient-days in control ICUs versus 1.68 per 100 patient-days in universal gown and glove ICUs (rate difference, -0.28; generalized linear mixed model, P = .091).
    CONCLUSIONS: Glove and gown use for all patient contact in medical and surgical ICUs did not result in a reduction in the acquisition of C. difficile compared with usual care.
    CLINICAL TRIALS REGISTRATION: NCT01318213

    Electronically available patient claims data improve models for comparing antibiotic use across hospitals: Results from 576 US facilities

    BACKGROUND: The Centers for Disease Control and Prevention (CDC) uses standardized antimicrobial administration ratios (SAARs), that is, observed-to-predicted ratios, to compare antibiotic use across facilities. CDC models adjust for facility characteristics when predicting antibiotic use but do not include patient diagnoses and comorbidities that may also affect utilization. This study aimed to identify comorbidities causally related to appropriate antibiotic use and to compare models that include these comorbidities and other patient-level claims variables against a facility-only model for risk-adjusting inpatient antibiotic utilization.
    METHODS: The study included adults discharged from Premier Database hospitals in 2016-2017. For each admission, we extracted facility, claims, and antibiotic data. We evaluated 7 models to predict an admission's antibiotic days of therapy (DOTs): a CDC facility model, models that added patient clinical constructs in varying layers of complexity, and an external validation of a published patient-variable model. We calculated hospital-specific SAARs to quantify effects on hospital rankings. Separately, we used Delphi consensus methodology to identify Elixhauser comorbidities associated with appropriate antibiotic use.
    RESULTS: The study included 11 701 326 admissions across 576 hospitals. Compared with the CDC facility model, a model that added Delphi-selected comorbidities and a bacterial infection indicator was more accurate for all antibiotic outcomes. For total antibiotic use, it was 24% more accurate (mean absolute errors, 3.11 vs 2.35 DOTs, respectively), resulting in 31-33% more hospitals moving into the bottom or top usage quartiles after adjustment.
    CONCLUSIONS: Adding electronically available patient claims data to facility models consistently improved antibiotic utilization predictions and yielded substantial movement in hospitals' utilization rankings.
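The accuracy metric here is mean absolute error in days of therapy. A sketch with hypothetical per-admission numbers (the 24% improvement in the abstract comes from the real models, not from these toy values):

```python
def mean_absolute_error(actual, predicted):
    """Average absolute gap, in days of therapy (DOT), between an
    admission's actual antibiotic use and a model's prediction."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical DOTs for four admissions, with two models' predictions
actual = [4, 0, 7, 2]
facility_model = [3, 2, 4, 4]   # facility characteristics only
patient_model = [4, 1, 6, 2]    # plus patient-level claims variables
print(mean_absolute_error(actual, facility_model))  # 2.0
print(mean_absolute_error(actual, patient_model))   # 0.5
```

Lower MAE means the predicted denominator of each hospital's SAAR tracks its case mix more closely, which is why better patient-level models shuffled so many hospitals between usage quartiles.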

    Significant regional differences in antibiotic use across 576 US hospitals and 11 701 326 adult admissions, 2016-2017

    BACKGROUND: Quantifying the amount and diversity of antibiotic use in United States hospitals assists antibiotic stewardship efforts but is hampered by limited national surveillance. Our study aimed to address this knowledge gap by examining adult antibiotic use across 576 hospitals and nearly 12 million encounters in 2016-2017.
    METHODS: We conducted a retrospective study of patients aged ≥18 years discharged from hospitals in the Premier Healthcare Database between 1 January 2016 and 31 December 2017. Using daily antibiotic charge data, we mapped antibiotics to mutually exclusive classes and to spectrum-of-activity categories. We evaluated relationships between facility and case-mix characteristics and antibiotic use in negative binomial regression models.
    RESULTS: The study included 11 701 326 admissions, totaling 64 064 632 patient-days, across 576 hospitals. Overall, patients received antibiotics in 65% of hospitalizations, at a crude rate of 870 days of therapy (DOT) per 1000 patient-days. By class, use was highest among β-lactam/β-lactamase inhibitor combinations, third- and fourth-generation cephalosporins, and glycopeptides. Teaching hospitals averaged lower rates of total antibiotic use than nonteaching hospitals (834 vs 957 DOT per 1000 patient-days; P < .001). In adjusted models, teaching hospitals remained associated with lower use of third- and fourth-generation cephalosporins and antipseudomonal agents (adjusted incidence rate ratios [95% confidence intervals], 0.92 [.86-.97] and 0.91 [.85-.98], respectively). Significant regional differences in total and class-specific antibiotic use also persisted in adjusted models.
    CONCLUSIONS: Adult inpatient antibiotic use remains high, driven predominantly by broad-spectrum agents. A better understanding of the reasons for interhospital usage differences, including by region and teaching status, may inform efforts to reduce inappropriate antibiotic prescribing.
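The usage rates above normalize days of therapy per 1000 patient-days. A minimal sketch that back-calculates the abstract's crude overall rate; note the ~55.7 million total DOT is inferred from the reported 870 DOT/1000 patient-days and is not stated in the abstract:

```python
def dot_per_1000_patient_days(days_of_therapy, patient_days):
    """Standard inpatient antibiotic-use rate: DOT per 1000 patient-days."""
    return 1000 * days_of_therapy / patient_days

# Total DOT implied by 870 DOT/1000 patient-days over 64 064 632
# patient-days (back-calculated; the exact figure is hypothetical)
print(round(dot_per_1000_patient_days(55_736_000, 64_064_632)))  # 870
```

Normalizing by patient-days rather than admissions is what makes the teaching vs nonteaching comparison (834 vs 957) meaningful despite differing lengths of stay.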

    Downregulated miR-195 Detected in Preeclamptic Placenta Affects Trophoblast Cell Invasion via Modulating ActRIIA Expression

    Preeclampsia (PE) is a pregnancy-specific syndrome manifested by the onset of hypertension and proteinuria after 20 weeks of gestation. Abnormal placental development is generally accepted as the initial cause of the disorder. Recently, miR-195 was found to be down-regulated in preeclamptic placentas compared with normal ones, suggesting a possible association of this small molecule with the placental pathology of preeclampsia. To date, the function of miR-195 in the development of the placenta has remained unknown.
    A bioinformatic assay predicted ActRIIA as one of the targets of miR-195. Using real-time PCR, western blotting, and a dual-luciferase assay, we validated that ActRIIA is a direct target of miR-195 in human trophoblast cells. A Transwell insert invasion assay showed that miR-195 promotes invasion in the trophoblast cell line HTR8/SVneo, an effect that could be abrogated by overexpressed ActRIIA. In preeclamptic placental tissues, pri-miR-195 and mature miR-195 expression was down-regulated, whereas the ActRIIA level was increased compared with gestational-week-matched normal placentas.
    This is the first report on the function of miR-195 in human placental trophoblast cells, revealing an invasion-promoting effect of this small RNA via repression of ActRIIA. Aberrant expression of miR-195 may contribute to the occurrence of preeclampsia by interfering with Activin/Nodal signaling mediated by ActRIIA in the human placenta.

    Linking Fearfulness and Coping Styles in Fish

    Consistent individual differences in cognitive appraisal and emotional reactivity, including fearfulness, are important personality traits in humans, non-human mammals, and birds. Comparative studies on teleost fishes support the existence of coping styles and behavioral syndromes in poikilothermic animals as well. The functionalist approach to emotions holds that emotions have evolved to ensure appropriate behavioral responses to dangerous or rewarding stimuli. Little information is available, however, on how evolutionarily widespread these putative links between personality and the expression of emotional or affective states such as fear are. Here we show that individual variation in coping style predicts fear responses in Nile tilapia, Oreochromis niloticus, using the principle of avoidance learning. Fish previously screened for coping style were given the possibility to escape a signalled aversive stimulus. Fearful individuals showed a range of typically reactive traits, such as slow recovery of feed intake in a novel environment, neophobia, and high post-stress cortisol levels. Hence, emotional reactivity and appraisal appear to be an essential component of animal personality in species distributed throughout the vertebrate subphylum.