60 research outputs found

    Novel Mutation Hotspots within Non-Coding Regulatory Regions of the Chronic Lymphocytic Leukemia Genome

    Mutations in non-coding DNA regions are increasingly recognized as cancer drivers. These mutations can modify gene expression in cis or by inducing higher-order chromatin structure modifications with long-range effects. Previous analyses reported recurrent and functional non-coding DNA mutations in the chronic lymphocytic leukemia (CLL) genome, such as those in the 3' untranslated region of NOTCH1 and in the PAX5 super-enhancer. In this report, we used whole genome sequencing data produced by the International Cancer Genome Consortium to analyze regions with previously reported regulatory activity. This approach enabled the identification of numerous recurrently mutated regions that were frequently positioned in the proximity of genes involved in immune and oncogenic pathways. By correlating these mutations with the expression of their nearest genes, we detected significant transcriptional changes in genes such as PHF2 and S1PR2. More research is needed to clarify the function of these mutations in CLL, particularly those found in intergenic regions.
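    The recurrence search described above can be approximated by a simple binning scheme: count, per fixed-size genomic window, how many distinct patients carry a mutation, and flag windows above a threshold. A minimal sketch, assuming a toy mutation list; the window size and patient threshold are illustrative, not the study's actual pipeline:

```python
from collections import defaultdict

def recurrent_windows(mutations, window=1000, min_patients=3):
    """Group mutations into fixed-size genomic windows and keep windows
    mutated in at least `min_patients` distinct patients.

    `mutations` is an iterable of (patient_id, chrom, position) tuples.
    """
    patients_per_window = defaultdict(set)
    for patient, chrom, pos in mutations:
        patients_per_window[(chrom, pos // window)].add(patient)
    return {
        (chrom, idx * window): sorted(patients)
        for (chrom, idx), patients in patients_per_window.items()
        if len(patients) >= min_patients
    }

# Toy example: three patients share mutations in the same 1 kb window.
muts = [
    ("P1", "chr9", 101_200), ("P2", "chr9", 101_450), ("P3", "chr9", 101_900),
    ("P1", "chr2", 55_000),  # mutated in a single patient: not recurrent
]
hotspots = recurrent_windows(muts)
```

A real analysis would additionally restrict windows to annotated regulatory regions and correct recurrence for local mutation rate, which this sketch omits.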

    Time to Treatment Prediction in Chronic Lymphocytic Leukemia Based on New Transcriptional Patterns

    Chronic lymphocytic leukemia (CLL) is the most frequent lymphoproliferative syndrome in western countries. CLL evolution is frequently indolent, and treatment is mostly reserved for patients with signs or symptoms of disease progression. In this work, we used RNA sequencing data from the International Cancer Genome Consortium CLL cohort to identify new gene expression patterns that correlate with clinical evolution. We determined that a 290-gene expression signature, in addition to immunoglobulin heavy chain variable region (IGHV) mutation status, stratifies patients into four groups with notably different times to first treatment. This finding was confirmed in an independent cohort. Similarly, we present a machine learning algorithm that predicts the need for treatment within the first 5 years following diagnosis using expression data from 2,198 genes. This predictor achieved 90% precision and 89% accuracy when classifying independent CLL cases. Our findings indicate that CLL progression risk largely correlates with particular transcriptomic patterns and pave the way for the identification of high-risk patients who might benefit from prompt therapy following diagnosis.
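    The treatment-need predictor described above can be sketched as a supervised classifier over an expression matrix. The sketch below uses simulated data and a random forest purely for illustration; the study's actual 2,198-gene model, algorithm choice and performance figures are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for an expression matrix: 300 patients x 50 genes
# (the study used 2,198 genes; the signal here is simulated).
X = rng.normal(size=(300, 50))
# "Treatment within 5 years" is made to depend on the first two genes.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

acc = accuracy_score(y_te, pred)    # fraction of correct calls
prec = precision_score(y_te, pred)  # fraction of predicted-positive calls that are correct
```

Holding out an untouched test set, as done here, mirrors the abstract's validation on independent CLL cases.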

    The association of germline variants with chronic lymphocytic leukemia outcome suggests the implication of novel genes and pathways in clinical evolution

    Background Chronic lymphocytic leukemia (CLL) is the most frequent lymphoproliferative disorder in western countries and is characterized by remarkable clinical heterogeneity. During the last decade, multiple genomic studies have identified a myriad of somatic events driving CLL proliferation and aggressiveness. Nevertheless, and despite mounting evidence of inherited risk for CLL development, the existence of germline variants associated with clinical outcomes has not been addressed in depth. Methods Exome sequencing data from control leukocytes of CLL patients enrolled in the International Cancer Genome Consortium (ICGC) were used for genotyping. Cox regression was used to detect variants associated with clinical outcomes. Gene- and pathway-level associations were also calculated. Results Single nucleotide polymorphisms in PPP4R2 and MAP3K4 were associated with earlier treatment need. A gene-level analysis evidenced a significant association of RIPK3 with both treatment need and survival. Furthermore, germline variability in pathways such as apoptosis, cell cycle, pentose phosphate, GNα13 and nitric oxide was associated with overall survival. Conclusion Our results support the existence of inherited determinants of CLL evolution and point towards genes and pathways that may prove useful as biomarkers of disease outcome. More research is needed to validate these findings.
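    The Cox regression step can be illustrated for a single binary covariate (variant carrier vs. non-carrier) by maximizing the Breslow partial likelihood directly; exp(β) is then the hazard ratio for carriers. This is a minimal one-covariate sketch on simulated data, not the study's genome-wide procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cox_beta(times, events, x):
    """Maximum partial-likelihood estimate of a one-covariate Cox model
    (Breslow form). `x` is e.g. a 0/1 variant-carrier flag."""
    times, events, x = map(np.asarray, (times, events, x))

    def neg_log_pl(beta):
        ll = 0.0
        for i in np.flatnonzero(events):
            at_risk = times >= times[i]  # risk set at this event time
            ll += beta * x[i] - np.log(np.exp(beta * x[at_risk]).sum())
        return -ll

    return minimize_scalar(neg_log_pl, bounds=(-5, 5), method="bounded").x

# Toy cohort: carriers (x=1) are simulated to need treatment earlier.
rng = np.random.default_rng(1)
carrier = np.repeat([0, 1], 50)
time_to_treat = rng.exponential(scale=np.where(carrier, 1.0, 3.0))
beta = cox_beta(time_to_treat, np.ones(100, dtype=bool), carrier)
hazard_ratio = float(np.exp(beta))  # > 1: carriers progress faster
```

In practice one would use a dedicated survival library and adjust for covariates such as IGHV status; the sketch only shows the estimand behind the reported associations.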

    Multifaceted role of BTLA in the control of CD8+ T cell fate after antigen encounter

    Purpose: Adoptive T-cell therapy using autologous tumor-infiltrating lymphocytes (TILs) has shown an overall clinical response rate of 40%–50% in metastatic melanoma patients. BTLA (B- and T-lymphocyte attenuator) expression on transferred CD8+ TILs was associated with better clinical outcome. The suppressive function of the ITIM and ITSM motifs of BTLA is well described. Here, we sought to determine the functional characteristics of the CD8+BTLA+ TIL subset and define the contribution of the Grb2 motif of BTLA to T-cell costimulation. Experimental Design: We determined the functional role and downstream signaling of BTLA in both human CD8+ TILs and mouse CD8+ T cells. Functional assays were used, including single-cell analysis, reverse-phase protein array (RPPA), antigen-specific vaccination models with adoptively transferred TCR-transgenic T cells, as well as a patient-derived xenograft (PDX) model using immunodeficient NOD-scid IL2Rgammanull (NSG) tumor-bearing mice treated with autologous TILs. Results: CD8+BTLA− TILs could not control tumor growth in vivo as well as their BTLA+ counterparts, and antigen-specific CD8+BTLA− T cells had an impaired recall response to a vaccine. However, CD8+BTLA+ TILs displayed improved survival following the killing of a tumor target and heightened “serial killing” capacity. Using mutants of BTLA signaling motifs, we uncovered a costimulatory function mediated by Grb2 through enhanced secretion of IL-2 and activation of Src after TCR stimulation. Conclusions: Our data portray BTLA as a molecule with the singular ability to provide both costimulatory and coinhibitory signals to activated CD8+ T cells, resulting in extended survival, improved tumor control, and the development of a functional recall response. Clin Cancer Res; 23(20); 6151–64. ©2017 AACR

    Survival prediction and treatment optimization of multiple myeloma patients using machine-learning models based on clinical and gene expression data

    Multiple myeloma (MM) remains a mostly incurable disease with a heterogeneous clinical evolution. Despite the availability of several prognostic scores, substantial room for improvement still exists. Promising results have been obtained by integrating clinical and biochemical data with gene expression profiling (GEP). In this report, we applied machine learning algorithms to MM clinical and RNAseq data collected by the CoMMpass consortium. We created a 50-variable random forests model (IAC-50) that could predict overall survival with high concordance between both training and validation sets (c-indexes, 0.818 and 0.780). This model included the following covariates: patient age, ISS stage, serum B2-microglobulin, first-line treatment, and the expression of 46 genes. Survival predictions for each patient considering the first line of treatment showed that individuals treated with the best-predicted drug combination were significantly less likely to die than patients treated with other schemes. This was particularly important among patients treated with a triplet combination including bortezomib, an immunomodulatory drug (IMiD), and dexamethasone. Finally, the model showed a trend to retain its predictive value in patients with high-risk cytogenetics. In conclusion, we report a predictive model for MM survival based on the integration of clinical, biochemical, and gene expression data with machine learning tools.
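    The c-indexes quoted above (Harrell's concordance) measure, over all comparable patient pairs, how often the patient with the higher predicted risk actually fails first. A small self-contained implementation, with toy inputs chosen for illustration:

```python
from itertools import combinations

def c_index(times, events, risk):
    """Harrell's concordance index: among comparable pairs, the fraction
    where the higher predicted risk has the earlier observed event."""
    concordant = ties = usable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i  # make patient i the one with shorter follow-up
        if not events[i] or times[i] == times[j]:
            continue  # pair not comparable (censored first, or tied times)
        usable += 1
        if risk[i] > risk[j]:
            concordant += 1
        elif risk[i] == risk[j]:
            ties += 1  # tied predictions count as half-concordant
    return (concordant + 0.5 * ties) / usable

# Perfectly ordered risks give c = 1.0; random risks hover around 0.5.
times  = [2, 5, 7, 10]
events = [1, 1, 0, 1]   # 0 = censored
risk   = [0.9, 0.6, 0.5, 0.1]
```

A c-index of 0.5 corresponds to chance-level ranking, so values near 0.8, as reported for IAC-50, indicate strong discrimination.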

    Improved personalized survival prediction of patients with diffuse large B-cell Lymphoma using gene expression profiling

    BACKGROUND: Thirty to forty percent of patients with Diffuse Large B-cell Lymphoma (DLBCL) have an adverse clinical evolution. The increased understanding of DLBCL biology has shed light on the clinical evolution of this pathology, leading to the discovery of prognostic factors based on gene expression data, genomic rearrangements and mutational subgroups. Nevertheless, additional efforts are needed to enable survival predictions at the patient level. In this study we investigated new machine learning-based models of survival using transcriptomic and clinical data. METHODS: Gene expression profiling (GEP) data from 2 different publicly available retrospective DLBCL cohorts were analyzed. Cox regression and unsupervised clustering were performed to identify probes associated with overall survival in the largest cohort. Random forests were created to model survival using combinations of GEP data, COO classification and clinical information. Cross-validation was used to compare model results in the training set, and Harrell's concordance index (c-index) was used to assess each model's predictive ability. Results were validated in an independent test set. RESULTS: Two hundred thirty-three and sixty-four patients were included in the training and test sets, respectively. Initially, we derived and validated a 4-gene expression clustering that was independently associated with lower survival in 20% of patients. This pattern included the following genes: TNFRSF9, BIRC3, BCL2L1 and G3BP2. Thereafter, we applied machine-learning models to predict survival. A set of 102 genes was highly predictive of disease outcome, outperforming available clinical information and COO classification. The final best model integrated clinical information, COO classification, the 4-gene-based clustering and the expression levels of 50 individual genes (training set c-index 0.8404, test set c-index 0.7942).
CONCLUSION: Our results indicate that DLBCL survival models based on the application of machine learning algorithms to gene expression and clinical data can largely outperform other important prognostic variables such as disease stage and COO. Head-to-head comparisons with other risk stratification models are needed to assess their usefulness.
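    The 4-gene unsupervised clustering step can be sketched as k-means over an expression matrix restricted to TNFRSF9, BIRC3, BCL2L1 and G3BP2. The data below are simulated (a shifted 20% subgroup stands in for the adverse-survival cluster) and the choice of k is an assumption for illustration, not the study's method:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genes = ["TNFRSF9", "BIRC3", "BCL2L1", "G3BP2"]

# Simulated 4-gene expression for 100 patients; the first 20 patients
# get a shifted profile to mimic a distinct ~20% subgroup.
expr = rng.normal(size=(100, len(genes)))
expr[:20] += 2.0

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)
minority = min(np.bincount(labels)) / len(labels)  # share of smaller cluster
```

Once cluster labels are assigned, their prognostic value would be tested with survival analysis on the labelled groups, as the abstract describes.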

    Impact of the first wave of the SARS-CoV-2 pandemic on the outcome of neurosurgical patients: A nationwide study in Spain

    Objective To assess the effect of the first wave of the SARS-CoV-2 pandemic on the outcome of neurosurgical patients in Spain. Settings The initial flood of COVID-19 patients overwhelmed an unprepared healthcare system. Different measures were taken to deal with this overburden. The effect of these measures on neurosurgical patients, as well as the effect of COVID-19 itself, has not been thoroughly studied. Participants This was a multicentre, nationwide, observational retrospective study of patients who underwent any neurosurgical operation from March to July 2020. Interventions An exploratory factorial analysis was performed to select the most relevant variables of the sample. Primary and secondary outcome measures Univariate and multivariate analyses were performed to identify independent predictors of mortality and postoperative SARS-CoV-2 infection. Results Sixteen hospitals registered 1677 operated patients. The overall mortality was 6.4%, and 2.9% (44 patients) suffered a perioperative SARS-CoV-2 infection. Of those infections, 24 were diagnosed postoperatively. Age (OR 1.05), perioperative SARS-CoV-2 infection (OR 4.7), community COVID-19 incidence (cases/10⁵ people/week) (OR 1.006), postoperative neurological worsening (OR 5.9), postoperative need for airway support (OR 5.38), ASA grade ≥3 (OR 2.5) and preoperative GCS 3-8 (OR 2.82) were independently associated with mortality. For SARS-CoV-2 postoperative infection, screening swab test <72 hours preoperatively (OR 0.76), community COVID-19 incidence (cases/10⁵ people/week) (OR 1.011), preoperative cognitive impairment (OR 2.784), postoperative sepsis (OR 3.807) and an absence of postoperative complications (OR 0.188) were independently associated. Conclusions Perioperative SARS-CoV-2 infection in neurosurgical patients was associated with an increase in mortality by almost fivefold. Community COVID-19 incidence (cases/10⁵ people/week) was a statistically independent predictor of mortality.
Trial registration number CEIM 20/217
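    The odds ratios quoted in the mortality analysis above come from multivariate logistic regression: each coefficient, exponentiated, gives the OR for its predictor. A minimal sketch on simulated data (two illustrative predictors only, with invented effect sizes; not the study's model or variables):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two illustrative predictors: standardized age and a binary perioperative
# infection flag, the latter given a strong simulated effect on mortality.
age = rng.normal(size=n)
infected = rng.random(n) < 0.03
logit = -3.0 + 0.5 * age + 1.5 * infected
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, infected])
# Very large C keeps regularization negligible, approximating plain
# maximum-likelihood logistic regression.
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, died)
or_age, or_infection = np.exp(model.coef_[0])  # odds ratio per predictor
```

With the simulated coefficient of 1.5 on infection, the recovered OR should sit near exp(1.5) ≈ 4.5, comparable in spirit to the near-fivefold mortality increase the abstract reports for perioperative infection.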
