    Incidence and Outcomes Associated With Clostridium difficile Infections: A Systematic Review and Meta-analysis

    Importance: An understanding of the incidence and outcomes of Clostridium difficile infection (CDI) in the United States can inform investments in prevention and treatment interventions. Objective: To quantify the incidence of CDI and its associated hospital length of stay (LOS) in the United States using a systematic literature review and meta-analysis. Data Sources: MEDLINE via Ovid, Cochrane Library Databases via Wiley, Cumulative Index of Nursing and Allied Health Complete via EBSCO Information Services, Scopus, and Web of Science were searched for studies conducted in the United States and published between 2000 and 2019 that evaluated CDI and its associated LOS. Study Selection: Incidence data were collected only from multicenter studies that had at least 5 sites. The LOS studies were included only if they assessed postinfection LOS, used methods accounting for time to infection using a multistate model, or compared propensity score-matched patients with CDI with control patients without CDI. Long-term-care facility studies were excluded. Of the 119 full-text articles, 86 studies (72.3%) met the selection criteria. Data Extraction and Synthesis: Two independent reviewers performed the data abstraction and quality assessment. Incidence data were pooled only when the denominators used the same units (eg, patient-days). These data were pooled by summing the number of hospital-onset CDI incident cases and the denominators across studies. Random-effects models were used to obtain pooled mean differences. Heterogeneity was assessed using the I² value. Data analysis was performed in February 2019. Main Outcomes and Measures: Incidence of CDI and CDI-associated hospital LOS in the United States. Results: When the 13 studies that evaluated incidence data in patient-days due to hospital-onset CDI were pooled, the CDI incidence rate was 8.3 cases per 10 000 patient-days. Among propensity score-matched studies (16 of 20 studies), the CDI-associated mean difference in LOS between patients with and without CDI varied from 3.0 days (95% CI, 1.44-4.63 days) to 21.6 days (95% CI, 19.29-23.90 days). Conclusions and Relevance: Pooled estimates from the currently available literature suggest that CDI places a large burden on the health care system. However, these estimates should be interpreted with caution, and higher-quality studies are needed to guide future evaluations of CDI prevention and treatment interventions.
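
    The two pooling steps described above reduce to simple arithmetic. The sketch below uses made-up study counts, not the review's data, to illustrate both: the incidence rate pooled by summing cases and patient-days across studies, and random-effects pooling of mean LOS differences with the common DerSimonian-Laird estimator (a standard choice; the abstract does not name the specific estimator used), together with the I² heterogeneity statistic.

```python
# Sketch of the pooling methods described in the abstract, with
# illustrative numbers only; not the authors' analysis code.
import numpy as np

# --- Incidence pooling: sum cases and patient-days across studies ---
cases = np.array([1200, 450, 3100])              # hypothetical CDI case counts
patient_days = np.array([1.5e6, 0.5e6, 3.5e6])   # hypothetical denominators
pooled_rate = cases.sum() / patient_days.sum() * 10_000
print(f"Pooled incidence: {pooled_rate:.1f} per 10,000 patient-days")

# --- Random-effects pooling of LOS mean differences (DerSimonian-Laird) ---
md = np.array([3.0, 6.4, 21.6])   # per-study mean LOS differences, days
se = np.array([0.8, 1.2, 1.2])    # per-study standard errors (invented)
w = 1 / se**2                     # fixed-effect (inverse-variance) weights
fe = np.sum(w * md) / w.sum()
q = np.sum(w * (md - fe)**2)      # Cochran's Q
k = len(md)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (se**2 + tau2)         # random-effects weights
pooled_md = np.sum(w_re * md) / w_re.sum()
i2 = max(0.0, (q - (k - 1)) / q) * 100  # I^2 heterogeneity statistic
print(f"Pooled mean LOS difference: {pooled_md:.1f} days, I^2 = {i2:.0f}%")
```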

    Electronic nutritional intake assessment in patients with urolithiasis: A decision impact analysis

    Purpose: To evaluate whether a physician's impression of a urinary stone patient's dietary intake depends on the medium through which the nutritional data are obtained, and to determine whether using an electronic food frequency questionnaire (FFQ) changes dietary recommendations for these patients. Materials and Methods: Seventy-six patients attended the Stone Clinic over a period of 6 weeks; 75 gave consent for enrollment in our study. Patients completed an office-based interview with a fellowship-trained endourologist and an FFQ administered on an iPad. The FFQ assessed intake of various dietary components related to stone development, such as oxalate and calcium. The urologists were blinded to the patients' FFQ results. Based on the office-based interview and the FFQ results, the urologists provided separate assessments of the impact of nutrition and hydration on the patient's stone disease (nutrition impact score and hydration impact score, respectively) and treatment recommendations. Multivariate logistic regressions were used to compare pre-FFQ data with post-FFQ data. Results: Higher FFQ scores for sodium (odds ratio [OR], 1.02; p=0.02) and fluids (OR, 1.03; p=0.04) were associated with a higher nutrition impact score. None of the FFQ parameters affected the hydration impact score. A higher FFQ score for oxalate (OR, 1.07; p=0.02) was associated with the addition of at least one treatment recommendation. Conclusions: Information derived from an FFQ can have a significant impact on a physician's assessment of stone risks and decisions about management of stone disease.
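
    As a rough illustration of the analysis pattern named above, the sketch below fits a multivariate logistic regression of a binary impact score on FFQ component scores and reports exponentiated coefficients as odds ratios. All column names and data are hypothetical; this is not the study's dataset or code.

```python
# Minimal sketch: logistic regression of a binary outcome on FFQ component
# scores, with exponentiated coefficients reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 75  # cohort size matching the abstract; the values below are simulated
df = pd.DataFrame({
    "sodium":  rng.normal(100, 25, n),   # hypothetical FFQ component scores
    "fluids":  rng.normal(80, 20, n),
    "oxalate": rng.normal(60, 15, n),
})
# Simulate a binary "high nutrition impact score" outcome with mild signal.
linpred = 0.02 * df["sodium"] + 0.03 * df["fluids"] - 5.0
df["high_impact"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

model = smf.logit("high_impact ~ sodium + fluids + oxalate", data=df).fit(disp=0)
odds_ratios = np.exp(model.params)  # exponentiated coefficients = odds ratios
print(pd.DataFrame({"OR": odds_ratios, "p": model.pvalues}).round(3))
```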

    Donor whole blood DNA methylation is not a strong predictor of acute graft versus host disease in unrelated donor allogeneic haematopoietic cell transplantation

    Allogeneic hematopoietic cell transplantation (HCT) is used to treat many blood-based disorders and malignancies. While it is an effective treatment, it can result in serious adverse events, such as the development of acute graft-versus-host disease (aGVHD). This study aimed to develop a donor-specific epigenetic classifier that could be used in donor selection for HCT to reduce the incidence of aGVHD. The discovery cohort consisted of 288 donors from a population receiving HLA-A, -B, -C and -DRB1 matched unrelated donor HCT with T cell replete peripheral blood stem cell grafts for treatment of acute leukaemia or myelodysplastic syndromes after myeloablative conditioning. Donors were selected based on recipient aGVHD outcome: 144 cases whose recipients developed aGVHD grades III-IV and 144 controls whose recipients had no aGVHD and survived at least 100 days post-HCT, matched for sex, age, disease and GVHD prophylaxis. Genome-wide DNA methylation was assessed using the Infinium MethylationEPIC BeadChip (Illumina), measuring CpG methylation at >850,000 sites across the genome. Following quality control, pre-processing and exploratory analyses, we applied a machine learning algorithm (Random Forest) to identify CpG sites predictive of aGVHD. Receiver operating characteristic (ROC) curve analysis of these sites yielded a classifier with an encouraging area under the ROC curve (AUC) of 0.91. To test this classifier, we used an independent validation cohort (n=288) selected using the same criteria as the discovery cohort. Attempts to validate the classifier in the independent cohort failed, with the AUC falling to 0.51. These results indicate that donor DNA methylation may not be a suitable predictor of aGVHD in an HCT setting involving unrelated donors, despite the initially promising results in the discovery cohort. Our work highlights the importance of independent validation of machine learning classifiers, particularly when developing classifiers intended for clinical use.
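
    The discovery-versus-validation contrast reported above is easy to reproduce in miniature. The sketch below uses simulated beta values with no true signal and far fewer CpG sites than the >850,000 assayed, and compresses the study's pipeline (which included preprocessing and feature selection) to the essential contrast: a Random Forest can score a high AUC when evaluated on the data it learned from, yet fall to chance on an independent cohort.

```python
# Sketch of discovery-vs-independent-validation evaluation of a Random
# Forest classifier. Data are simulated with no real signal, which
# reproduces the qualitative finding: inflated internal AUC, chance-level
# (~0.5) AUC on an independent cohort.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, p = 288, 2000                      # donors per cohort; CpG count reduced for speed
X_disc = rng.beta(2, 2, size=(n, p))  # simulated methylation beta values
y_disc = rng.integers(0, 2, n)        # aGVHD case/control labels (no true signal)
X_val = rng.beta(2, 2, size=(n, p))   # independent validation cohort
y_val = rng.integers(0, 2, n)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_disc, y_disc)

# AUC on the training cohort is optimistically inflated; the held-out
# cohort reverts to chance because there was never any signal to learn.
auc_disc = roc_auc_score(y_disc, clf.predict_proba(X_disc)[:, 1])
auc_val = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
print(f"discovery AUC: {auc_disc:.2f}, validation AUC: {auc_val:.2f}")
```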

    Impact of Previously Unrecognized HLA Mismatches Using Ultrahigh Resolution Typing in Unrelated Donor Hematopoietic Cell Transplantation

    PURPOSE: Ultrahigh resolution (UHR) HLA matching is reported to result in better outcomes following unrelated donor hematopoietic cell transplantation, improving survival and reducing post-transplant complications. However, most studies have included relatively small numbers of patients. Here we report the findings of a large, multicenter validation study. METHODS: UHR HLA typing was available for 5,140 conventionally 10 out of 10 HLA-matched patients with malignant disease transplanted between 2008 and 2017. RESULTS: After UHR HLA typing, 82% of pairs remained 10 out of 10 UHR-matched; 12.3% of patients were 12 out of 12 UHR HLA-matched. Compared with 12 out of 12 UHR-matched patients, probabilities of grade 2-4 acute graft-versus-host disease (aGVHD) were significantly increased with UHR mismatches (overall P = .0019) and in those patients who were HLA-DPB1 T-cell epitope permissively mismatched or nonpermissively mismatched (overall P = .0011). In the T-cell-depleted subset, the degree of UHR HLA mismatch was associated only with increased transplant-related mortality (TRM) (overall P = .0068). In the T-cell-replete subset, UHR HLA matching was associated with a lower probability of aGVHD (overall P = .0020); 12 out of 12 UHR matching was associated with reduced TRM risk when compared with HLA-DPB1 T-cell epitope permissively mismatched patients, whereas nonpermissive mismatching carried a greater risk (overall P = .0003). CONCLUSION: This study did not confirm that UHR 12 out of 12 HLA matching increases the probability of overall survival, but it does demonstrate that the risk of aGVHD, and in certain settings TRM, is lowest in UHR HLA-matched pairs; UHR matching therefore warrants consideration when multiple 10 out of 10 HLA-matched donors of equivalent age are available.
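
    As a simplified illustration of the group comparisons described above, the sketch below stratifies simulated patients by UHR match category and compares time-to-aGVHD across groups with a log-rank test (lifelines). Everything here is invented; the study analysed registry data with methods that account for competing risks, which this simple comparison deliberately omits in order to show only the grouping structure.

```python
# Simplified sketch: compare time-to-event outcomes across UHR HLA match
# categories. Event times are simulated; no competing-risk handling.
import numpy as np
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(1)
groups = ["12/12 UHR matched",
          "DPB1 permissive mismatch",
          "DPB1 nonpermissive mismatch"]
rows = []
for g, scale in zip(groups, [200, 160, 120]):  # hypothetical event-time scales
    t = rng.exponential(scale, 300)            # days to aGVHD (simulated)
    observed = t < 180                          # event seen within follow-up
    rows.append(pd.DataFrame({"T": np.minimum(t, 180), "E": observed, "group": g}))
df = pd.concat(rows, ignore_index=True)

res = multivariate_logrank_test(df["T"], df["group"], df["E"])
print(f"log-rank p-value across match categories: {res.p_value:.4f}")
```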

    Haplotype motif-based models for KIR-genotype informed selection of hematopoietic cell donors fail to predict outcome of patients with Myelodysplastic Syndromes or secondary Acute Myeloid Leukemia

    Results from registry studies suggest that harnessing Natural Killer (NK) cell reactivity mediated through Killer cell Immunoglobulin-like Receptors (KIR) could reduce the risk of relapse after allogeneic Hematopoietic Cell Transplantation (HCT). Several competing models have been developed to classify donors as KIR-advantageous or KIR-disadvantageous; in essence, they differ in whether they group donors by distinct KIR-KIR-ligand combinations or by haplotype motif assignment. This study aimed to validate different models for unrelated donor selection for patients with Myelodysplastic Syndromes (MDS) or secondary Acute Myeloid Leukemia (sAML). In a joint retrospective study of the European Society for Blood and Marrow Transplantation (EBMT) and the Center for International Blood and Marrow Transplant Research (CIBMTR), registry data from 1704 patients with secondary AML or MDS were analysed. The cohort consisted mainly of older patients (median age 61 years) with high-risk disease who had received chemotherapy-based reduced intensity conditioning and anti-thymocyte globulin prior to allogeneic HCT from well-matched unrelated stem cell donors. The impact of the predictors on Overall Survival (OS) and relapse incidence was tested in Cox regression models adjusted for patient age, a modified disease risk index, performance status, donor age, HLA match, sex match, CMV match, conditioning intensity, type of T-cell depletion and graft type. KIR genes were typed using high-resolution amplicon-based next generation sequencing. In univariable and multivariable analyses, none of the models consistently predicted OS or the risk of relapse. Our results do not support the hypothesis that NK-mediated alloreactivity can be optimized by KIR-genotype informed selection of HLA-matched unrelated donors. However, in the context of allogeneic transplantation, NK-cell biology is complex and only partly understood; KIR genes are highly diverse, and the current assignment of haplotype motifs based on the presence or absence of selected KIR genes is over-simplistic. As a consequence, further research is highly warranted and should integrate cutting-edge knowledge of KIR genetics and NK-cell biology into future studies focused on homogeneous groups of patients and treatment modalities.
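
    The adjusted Cox regression described above can be sketched as follows with lifelines. The binary kir_advantageous flag, the covariate names and every value below are simulated placeholders, not the registry variables; what the sketch shows is the model structure, a KIR predictor tested alongside patient, donor and transplant covariates.

```python
# Sketch of a multivariable Cox proportional hazards model: overall
# survival regressed on a KIR donor classification while adjusting for
# covariates. All data are simulated; exp(coef) is the adjusted hazard
# ratio for each covariate.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1704  # cohort size matching the abstract; values below are simulated
df = pd.DataFrame({
    "T": rng.exponential(900, n),                     # days to death/censoring
    "E": (rng.random(n) < 0.6).astype(int),           # death observed
    "kir_advantageous": rng.integers(0, 2, n),        # predictor under test
    "patient_age": rng.normal(61, 8, n),
    "donor_age": rng.normal(30, 7, n),
    "high_risk_disease": rng.integers(0, 2, n),       # stand-in for disease risk index
    "hla_10_of_10": rng.integers(0, 2, n),            # stand-in for HLA match
})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
# In the study, no KIR model showed a consistent effect on OS or relapse;
# here the simulated predictor is null by construction.
print(cph.summary[["exp(coef)", "p"]])
```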

    Technologies of contraception and abortion

    Soon to turn 60, the oral contraceptive pill still dominates histories of technology in the ‘sexual revolution’ and after. ‘The pill’ was revolutionary for many, though by no means all, women in the west, but there have always been alternatives, and looking globally yields a different picture. The condom, intrauterine device (IUD), surgical sterilization (male and female) and abortion were all transformed in the twentieth century, some more than once. Today, female sterilization (tubal ligation) and IUDs are the world's most commonly used technologies of contraception. The pill is in third place, followed closely by the condom. Long-acting hormonal injections are most frequently used in parts of Africa, male sterilization by vasectomy is unusually prevalent in Britain, and about one in five pregnancies worldwide ends in induced abortion. Though contraceptive use has generally increased in recent decades, the disparity between rich and poor countries is striking: the former tend to use condoms and pills, the latter sterilization and IUDs. Contraception, a term dating from the late nineteenth century and since then often conflated with abortion, has existed in many forms, and techniques have changed and proliferated over time. Diverse local cultures have embraced new technologies while maintaining older practices. Focusing on Britain and the United States, with excursions to India, China and France, this chapter shows how the patterns observed today were established and stabilized, often despite persistent criticism and reform efforts. By examining past innovation, and the distribution and use of a variety of tools and techniques, it reconsiders some widely held assumptions about what counts as revolutionary and for whom. Analytically, it takes up and reflects on one of the main issues raised by feminists and social historians: the agency of users as patients and consumers faced with choice and coercion. By examining practices of contraception alongside those of abortion, it revisits the knotty question of technology in the sexual revolution and the related themes of medical, legal, religious and political forms of control.