
    Recent progress in epicardial and pericardial adipose tissue segmentation and quantification based on deep learning: a systematic review

    Epicardial and pericardial adipose tissues (EAT and PAT), which are located around the heart, have been linked to coronary atherosclerosis, cardiomyopathy, coronary artery disease, and other cardiovascular diseases (CVD). Additionally, the volume and thickness of EAT are good predictors of CVD risk. Manual quantification of these tissues is a tedious and error-prone process. This paper presents a comprehensive and critical overview of research on epicardial and pericardial adipose tissue segmentation and quantification methods, evaluates their effectiveness in terms of segmentation time and accuracy, provides a critical comparison of the methods, and presents ongoing and future challenges in the field. The described methods are classified into pericardial adipose tissue segmentation, direct epicardial adipose tissue segmentation, and epicardial adipose tissue segmentation via pericardium delineation. A comprehensive categorization of the underlying methods is conducted, with insights into their evolution from traditional image processing methods to recent deep learning-based methods. The paper also provides an overview of the research on the clinical significance of epicardial and pericardial adipose tissues, as well as the terminology and definitions used in the medical literature.
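
    The last of these categories effectively reduces EAT quantification on CT to delineating the pericardium and counting fat-attenuation voxels inside it. Below is a minimal sketch of that final quantification step, assuming a pericardium mask has already been produced by one of the reviewed segmentation methods; the function name is hypothetical, and the -190 to -30 HU fat window is one commonly used convention.

```python
# Minimal sketch of epicardial adipose tissue (EAT) quantification from a CT
# volume, assuming a binary pericardium mask is already available (e.g. from
# a segmentation network). Names and the HU window are illustrative.
import numpy as np

def eat_volume_ml(ct_hu: np.ndarray, pericardium_mask: np.ndarray,
                  voxel_spacing_mm: tuple[float, float, float],
                  hu_range: tuple[int, int] = (-190, -30)) -> float:
    """Count fat-range voxels inside the pericardial sac, convert to mL."""
    fat = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1])
    eat_voxels = np.count_nonzero(fat & pericardium_mask.astype(bool))
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return eat_voxels * voxel_mm3 / 1000.0  # mm^3 -> mL
```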

    Spatial cellular architecture predicts prognosis in glioblastoma

    Intra-tumoral heterogeneity and cell-state plasticity are key drivers of the therapeutic resistance of glioblastoma. Here, we investigate the association between spatial cellular organization and glioblastoma prognosis. Leveraging single-cell RNA-seq and spatial transcriptomics data, we develop a deep learning model to predict transcriptional subtypes of glioblastoma cells from histology images. Employing this model, we phenotypically analyze 40 million tissue spots from 410 patients and identify consistent associations between tumor architecture and prognosis across two independent cohorts. Patients with poor prognosis exhibit higher proportions of tumor cells expressing a hypoxia-induced transcriptional program. Furthermore, a clustering pattern of astrocyte-like tumor cells is associated with worse prognosis, while dispersion and connection of the astrocytes with other transcriptional subtypes correlate with decreased risk. To validate these results, we develop a separate deep learning model that utilizes histology images to predict prognosis. Applying this model to spatial transcriptomics data reveals survival-associated regional gene expression programs. Overall, our study presents a scalable approach to unravel the transcriptional heterogeneity of glioblastoma and establishes a critical connection between spatial cellular architecture and clinical outcomes.
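
    As an illustration of the kind of spatial statistic such an analysis relies on, here is a minimal sketch that scores how strongly one predicted subtype clusters with itself on a grid of tissue spots (high values for clustered, low for dispersed). The grid, the four-subtype toy labeling, and the join-count-style score are illustrative assumptions, not the authors' published pipeline.

```python
# Toy clustering-vs-dispersion score for per-spot subtype labels on a grid,
# as produced by a tile-level subtype classifier. Purely illustrative.
import numpy as np

def same_label_neighbor_fraction(labels: np.ndarray, target: int) -> float:
    """Fraction of 4-neighbour edges touching a `target` spot whose other
    endpoint is also `target` (high = clustered, low = dispersed)."""
    same = total = 0
    for a, b in ((labels[:-1, :], labels[1:, :]),   # vertical neighbours
                 (labels[:, :-1], labels[:, 1:])):  # horizontal neighbours
        touches = (a == target) | (b == target)
        same += int(np.count_nonzero((a == target) & (b == target)))
        total += int(np.count_nonzero(touches))
    return same / total if total else 0.0

grid = np.random.randint(0, 4, size=(64, 64))  # toy map with 4 subtypes
print(same_label_neighbor_fraction(grid, target=0))
```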

    Overview of the whole heart and heart chamber segmentation methods

    Background: Preservation and improvement of heart and vessel health is the primary motivation behind cardiovascular disease (CVD) research. The development of advanced imaging techniques can improve our understanding of disease physiology and serve as a monitor of disease progression. Various image processing approaches have been proposed to extract parameters of cardiac shape and function from different cardiac imaging modalities, with the overall intention of providing full cardiac analysis. Due to differences in image modalities, the selection of an appropriate segmentation algorithm may be a challenging task. Purpose: This paper presents a comprehensive and critical overview of research on whole heart, bi-ventricles and left atrium segmentation methods for computed tomography (CT), magnetic resonance imaging (MRI) and echocardiography (echo). The paper aims to: (1) summarize the considerable challenges of cardiac image segmentation, (2) provide a comparison of the segmentation methods, (3) classify significant contributions in the field and (4) critically review approaches in terms of their performance and accuracy. Conclusion: The methods described are classified, based on the segmentation approach used, into (1) edge-based segmentation methods, (2) model-fitting segmentation methods and (3) machine and deep learning segmentation methods, and are further split based on the targeted cardiac structure. Edge-based methods are mostly developed as semi-automatic and allow end-user interaction, which gives physicians extra control over the final segmentation. Model-fitting methods are very robust and resistant to high variability in image contrast and overall image quality; nevertheless, they are often time-consuming and require appropriate models built with prior knowledge. While the emerging deep learning segmentation approaches provide unprecedented performance in some specific scenarios and with appropriate training, their performance depends strongly on data quality and on the amount and accuracy of the provided annotations.
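
    To make the third category concrete, the following is a minimal sketch of a small U-Net-style segmentation network in PyTorch, the encoder-decoder-with-skip-connections design that dominates recent deep learning cardiac segmentation. The two-level depth, channel widths and class count are toy assumptions; published cardiac models are substantially larger and trained on annotated CT/MRI volumes.

```python
# A tiny two-level U-Net-style network: per-pixel class logits for cardiac
# structures. All sizes are toy choices for illustration.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, n_classes=4):  # e.g. background + 3 structures
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)            # 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)   # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)                          # skip connection
        s2 = self.enc2(self.pool(s1))
        d1 = self.dec1(torch.cat([self.up(s2), s1], dim=1))
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 4, 128, 128)
```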

    Multimodal data fusion for cancer biomarker discovery with deep learning

    Technological advances have made it possible to study a patient from multiple angles with high-dimensional, high-throughput, multiscale biomedical data. In oncology, massive amounts of data are being generated, ranging from molecular, histopathology and radiology data to clinical records. The introduction of deep learning has greatly advanced the analysis of biomedical data. However, most approaches focus on single data modalities, leading to slow progress in methods for integrating complementary data types. The development of effective multimodal fusion approaches is becoming increasingly important, as a single modality might not be consistent and sufficient to capture the heterogeneity of complex diseases, tailor medical care and improve personalized medicine. Many initiatives now focus on integrating these disparate modalities to unravel the biological processes involved in multifactorial diseases such as cancer. However, many obstacles remain, including the lack of usable data as well as methods for clinical validation and interpretation. Here, we cover these current challenges and reflect on the opportunities deep learning offers to tackle data sparsity and scarcity, multimodal interpretability and the standardization of datasets.
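
    As a concrete illustration of the simplest fusion strategy discussed in this literature, here is a minimal late-fusion sketch in PyTorch: each modality is encoded separately and the embeddings are concatenated before a shared prediction head. The two-modality setup and all dimensions are illustrative assumptions, not a specific published architecture.

```python
# Minimal late-fusion model: separate encoders per modality, concatenated
# embeddings, shared classification head. Dimensions are illustrative.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, dim_omics=1000, dim_image=512, dim_emb=64, n_out=2):
        super().__init__()
        self.enc_omics = nn.Sequential(nn.Linear(dim_omics, dim_emb), nn.ReLU())
        self.enc_image = nn.Sequential(nn.Linear(dim_image, dim_emb), nn.ReLU())
        self.head = nn.Linear(2 * dim_emb, n_out)  # fused -> class logits

    def forward(self, omics, image_feats):
        z = torch.cat([self.enc_omics(omics), self.enc_image(image_feats)], -1)
        return self.head(z)

model = LateFusion()
logits = model(torch.randn(8, 1000), torch.randn(8, 512))  # batch of 8
```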

    Frozen pretrained transformers for neural sign language translation

    One of the major challenges in translating from a sign language to a spoken language is the lack of parallel corpora. Recent works have achieved promising results on the RWTH-PHOENIX-Weather 2014T dataset, which consists of over eight thousand parallel sentences between German Sign Language and German. However, from the perspective of neural machine translation, this is still a tiny dataset. To improve the performance of models trained on small datasets, transfer learning can be used. While transfer learning has previously been applied in sign language translation for feature extraction, to the best of our knowledge, pretrained language models have not yet been investigated. We use pretrained BERT-base and mBART-50 models to initialize our sign language video to spoken language text translation model. To mitigate overfitting, we apply the frozen pretrained transformer technique: we freeze the majority of parameters during training. Using a pretrained BERT model, we outperform a baseline trained from scratch by 1 to 2 BLEU-4. Our results show that pretrained language models can be used to improve sign language translation performance, and that the self-attention patterns in BERT transfer zero-shot to the encoder and decoder of sign language translation models.
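
    A minimal sketch of that freezing step with Hugging Face transformers, under the assumption that only the layer norm parameters are left trainable (a common frozen-pretrained-transformer recipe; the paper's exact selection of unfrozen parameters may differ):

```python
# Load a pretrained BERT and freeze the majority of its parameters,
# keeping only LayerNorm weights trainable. Checkpoint name is illustrative.
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-cased")
for name, param in bert.named_parameters():
    # True only for LayerNorm parameters; attention, feedforward and
    # embedding weights stay frozen.
    param.requires_grad = "LayerNorm" in name

trainable = sum(p.numel() for p in bert.parameters() if p.requires_grad)
total = sum(p.numel() for p in bert.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```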

    Synthetic whole-slide image tile generation with gene expression profile-infused deep generative models

    In this work, we propose an approach to generate whole-slide image (WSI) tiles by using deep generative models infused with matched gene expression profiles. First, we train a variational autoencoder (VAE) that learns a latent, lower-dimensional representation of multi-tissue gene expression profiles. Then, we use this representation to infuse generative adversarial networks (GANs) that generate lung and brain cortex tissue tiles, resulting in a new model that we call RNA-GAN. Tiles generated by RNA-GAN were preferred by expert pathologists over tiles generated using traditional GANs and, in addition, RNA-GAN needs fewer training epochs to generate high-quality tiles. Finally, RNA-GAN was able to generalize to gene expression profiles outside of the training set, showing imputation capabilities. A web-based quiz where users can play a game of distinguishing real from synthetic tiles is available at https://rna-gan.stanford.edu/, and the code for RNA-GAN is available at https://github.com/gevaertlab/RNA-GAN
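
    To make the infusion idea concrete, here is a minimal sketch of a generator that conditions tile synthesis on the VAE's expression latent by simple concatenation with the noise vector. All class names and dimensions are illustrative assumptions; the actual RNA-GAN architecture is in the linked repository.

```python
# Toy generator conditioned on a gene expression latent code: noise and
# expression latent are concatenated, then mapped to an RGB tile in [-1, 1].
import torch
import torch.nn as nn

class ExpressionInfusedGenerator(nn.Module):
    def __init__(self, noise_dim=128, expr_latent_dim=64, tile_px=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + expr_latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * tile_px * tile_px), nn.Tanh())
        self.tile_px = tile_px

    def forward(self, noise, expr_latent):
        x = self.net(torch.cat([noise, expr_latent], dim=-1))
        return x.view(-1, 3, self.tile_px, self.tile_px)

gen = ExpressionInfusedGenerator()
tiles = gen(torch.randn(4, 128), torch.randn(4, 64))  # (4, 3, 32, 32)
```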

    Whole slide imaging-based prediction of TP53 mutations identifies an aggressive disease phenotype in prostate cancer

    In prostate cancer, there is an urgent need for objective prognostic biomarkers that identify the metastatic potential of a tumor at an early stage. While recent analyses indicated TP53 mutations as candidate biomarkers, molecular profiling in a clinical setting is complicated by tumor heterogeneity. Deep learning models that predict the spatial presence of TP53 mutations in whole slide images (WSIs) offer the potential to mitigate this issue. To assess the potential of WSIs as proxies for spatially resolved profiling and as biomarkers for aggressive disease, we developed TiDo, a deep learning model that achieves state-of-the-art performance in predicting TP53 mutations from WSIs of primary prostate tumors. In an independent multifocal cohort, the model showed successful generalization at both the patient and lesion level. Analysis of model predictions revealed that false positive (FP) predictions could at least partially be explained by TP53 deletions, suggesting that some FPs carry an alteration that leads to the same histological phenotype as TP53 mutations. Comparative expression and histologic cell type analyses identified a TP53-like cellular phenotype triggered by the expression of pathways affecting stromal composition. Together, these findings indicate that WSI-based models might not be able to perfectly predict the spatial presence of individual TP53 mutations, but they have the potential to elucidate the prognosis of a tumor by depicting a downstream phenotype associated with aggressive disease.
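
    As a sketch of the generic tile-to-slide design such WSI mutation predictors share, the following scores individual tiles with a CNN and pools the tile probabilities into one slide-level TP53 call. The ResNet-18 backbone and mean pooling are assumptions for illustration, not TiDo's actual architecture.

```python
# Tile-level CNN scores pooled into a slide-level mutation probability.
# Backbone and pooling choice are illustrative.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SlideLevelPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # tile logit
        self.tile_model = backbone

    def forward(self, tiles):            # tiles: (n_tiles, 3, 224, 224)
        tile_logits = self.tile_model(tiles).squeeze(-1)
        return torch.sigmoid(tile_logits).mean()  # slide-level probability

prob = SlideLevelPredictor()(torch.randn(16, 3, 224, 224))
```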

    GeNNius: an ultrafast drug-target interaction inference method based on graph neural networks

    Motivation: Drug-target interaction (DTI) prediction is a relevant but challenging task in the drug repurposing field. In-silico approaches have drawn particular attention as they can reduce the costs and time commitment associated with traditional methodologies. Yet, current state-of-the-art methods present several limitations: existing DTI prediction approaches are computationally expensive, which hinders their use on large networks and their exploitation of the available datasets; and the generalization of DTI prediction methods to unseen datasets remains unexplored, even though it could improve the accuracy and robustness of the approaches developed for DTI inference. Results: In this work, we introduce GeNNius (Graph Embedding Neural Network Interaction Uncovering System), a Graph Neural Network (GNN)-based method that outperforms state-of-the-art models in terms of both accuracy and time efficiency across a variety of datasets. We also demonstrate its power to uncover new interactions by evaluating previously unknown DTIs for each dataset. We further assess the generalization capability of GeNNius by training and testing it on different datasets, showing that this framework can potentially improve the DTI prediction task by training on large datasets and testing on smaller ones. Finally, we qualitatively investigate the embeddings generated by GeNNius, revealing that the GNN encoder maintains biological information after the graph convolutions while diffusing this information through nodes, eventually distinguishing protein families in the node embedding space. Availability and implementation: The GeNNius code is available at https://github.com/ubioinformat/GeNNius
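
    For intuition, here is a minimal GNN link prediction sketch in PyTorch Geometric in the spirit of GeNNius: a two-layer GraphSAGE encoder over a drug-protein graph and a dot-product decoder that scores candidate DTI edges. For brevity the graph is treated as homogeneous with random toy features; GeNNius itself operates on a heterogeneous graph, so consult the linked repository for the real model.

```python
# GraphSAGE encoder + dot-product decoder for toy DTI link prediction.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class DTIEncoder(torch.nn.Module):
    def __init__(self, in_dim=16, hid_dim=32):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, hid_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

def score_edges(z, edge_index):
    # Probability that each candidate (drug, target) edge exists.
    return torch.sigmoid((z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1))

x = torch.randn(10, 16)                     # toy node features
edge_index = torch.randint(0, 10, (2, 40))  # toy edges
z = DTIEncoder()(x, edge_index)
print(score_edges(z, edge_index[:, :5]))    # scores for 5 candidate edges
```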