11 research outputs found

    PathologyGAN: Learning deep representations of cancer tissue

    We apply Generative Adversarial Networks (GANs) to the domain of digital pathology. Current machine learning research for digital pathology focuses on diagnosis, but we suggest a different approach and advocate that generative models could drive forward the understanding of morphological characteristics of cancer tissue. In this paper, we develop a framework which allows GANs to capture key tissue features and uses these characteristics to give structure to its latent space. To this end, we trained our model on 249K H&E breast cancer tissue images, extracted from 576 TMA images of patients from the Netherlands Cancer Institute (NKI) and Vancouver General Hospital (VGH) cohorts. We show that our model generates high quality images, with a Fréchet Inception Distance (FID) of 16.65. We further assess the quality of the images with cancer tissue characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells), using quantitative information to calculate the FID and showing consistent performance of 9.86. Additionally, the latent space of our model shows an interpretable structure and allows semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones. The code, generated images, and pretrained model are available at https://github.com/AdalbertoCq/Pathology-GAN. Comment: MIDL 2020 final version
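    The Fréchet Inception Distance reported above compares Gaussian fits of feature statistics from real and generated images. A minimal sketch of the underlying Fréchet distance between two Gaussians (feature extraction with an Inception network is omitted, and the toy means and covariances are illustrative only):

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Fréchet distance between Gaussians N(mu1, cov1) and N(mu2, cov2):

        FID = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1 cov2)^{1/2})

    Tr((cov1 cov2)^{1/2}) equals the sum of square roots of the
    eigenvalues of cov1 @ cov2, which are real and non-negative for
    valid covariance matrices."""
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(cov1 @ cov2)
    tr_covmean = np.sum(np.sqrt(np.clip(eigvals.real, 0.0, None)))
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * tr_covmean)

# Toy check: identical Gaussians are at distance 0; shifting the mean by
# (3, 4) with identity covariances gives exactly 3^2 + 4^2 = 25.
mu, cov = np.zeros(2), np.eye(2)
print(frechet_distance(mu, cov, mu, cov))                    # → 0.0
print(frechet_distance(mu, cov, np.array([3.0, 4.0]), cov))  # → 25.0
```

    In practice the means and covariances come from features of many thousands of images, whether Inception activations or, as in the 9.86 figure, quantitative cell-count characteristics.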

    PathologyGAN: Learning deep representations of cancer tissue

    Histopathological images of tumours contain abundant information about how tumours grow and how they interact with their micro-environment. A better understanding of tissue phenotypes in these images could reveal novel determinants of pathological processes underlying cancer, and in turn improve diagnosis and treatment options. Advances in deep learning make it well suited to achieving those goals; however, its application is limited by the cost of obtaining high-quality labels from patient data. Unsupervised learning, in particular deep generative models with representation-learning properties, provides an alternative path to further understand cancer tissue phenotypes, capturing tissue morphologies. In this paper, we develop a framework which allows Generative Adversarial Networks (GANs) to capture key tissue features and uses these characteristics to give structure to its latent space. To this end, we trained our model on two different datasets: H&E colorectal cancer tissue from the National Center for Tumor Diseases (NCT, Germany), composed of 86 slide images, and H&E breast cancer tissue from the Netherlands Cancer Institute (NKI, Netherlands) and Vancouver General Hospital (VGH, Canada) cohorts, composed of 576 tissue micro-arrays (TMAs). We show that our model generates high quality images, with a Fréchet Inception Distance (FID) of 16.65 (breast cancer) and 32.05 (colorectal cancer). We further assess the quality of the images with cancer tissue characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells), using quantitative information to calculate the FID and showing consistent performance of 9.86. Additionally, the latent space of our model shows an interpretable structure and allows semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones. The code, generated images, and pretrained model are available at https://github.com/AdalbertoCq/Pathology-GAN
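    The semantic vector operations described above can be illustrated with plain latent-code arithmetic. The 200-dimensional codes and the "lymphocyte direction" below are hypothetical stand-ins; decoding the edited code into a tissue image would require the trained generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes for two groups of tiles (the real model maps
# such codes to tissue images; here only the arithmetic is illustrated).
z_high_lymph = rng.normal(size=(64, 200))  # codes of lymphocyte-rich tiles
z_low_lymph = rng.normal(size=(64, 200))   # codes of lymphocyte-poor tiles

# A semantic direction is the difference of the two class centroids.
lymphocyte_dir = z_high_lymph.mean(axis=0) - z_low_lymph.mean(axis=0)

# "Adding lymphocytes" to an arbitrary tile is then one vector addition;
# passing z_edited through the generator would yield the transformed image.
z = rng.normal(size=200)
z_edited = z + 1.5 * lymphocyte_dir
print(z_edited.shape)  # → (200,)
```

    The same centroid-difference trick underlies face-attribute arithmetic in GAN latent spaces; the structured latent space is what makes the edit correspond to an interpretable tissue change.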

    Self-supervised learning in non-small cell lung cancer discovers novel morphological clusters linked to patient outcome and molecular phenotypes

    Histopathological images provide the definitive source of cancer diagnosis, containing information used by pathologists to identify and subclassify malignant disease, and to guide therapeutic choices. These images contain vast amounts of information, much of which is currently unavailable to human interpretation. Supervised deep learning approaches have been powerful for classification tasks, but they are inherently limited by the cost and quality of annotations. Therefore, we developed Histomorphological Phenotype Learning, an unsupervised methodology, which requires no annotations and operates via the self-discovery of discriminatory image features in small image tiles. Tiles are grouped into morphologically similar clusters which appear to represent recurrent modes of tumor growth emerging under natural selection. These clusters have distinct features which can be identified using orthogonal methods. Applied to lung cancer tissues, we show that they align closely with patient outcomes, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype.

    Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unannotated pathology slides

    Cancer diagnosis and management depend upon the extraction of complex information from microscopy images by pathologists, which requires time-consuming expert interpretation prone to human bias. Supervised deep learning approaches have proven powerful, but are inherently limited by the cost and quality of annotations used for training. Therefore, we present Histomorphological Phenotype Learning, a self-supervised methodology requiring no labels and operating via the automatic discovery of discriminatory features in image tiles. Tiles are grouped into morphologically similar clusters which constitute an atlas of histomorphological phenotypes (HP-Atlas), revealing trajectories from benign to malignant tissue via inflammatory and reactive phenotypes. These clusters have distinct features which can be identified using orthogonal methods, linking histologic, molecular and clinical phenotypes. Applied to lung cancer, we show that they align closely with patient survival, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype. These properties are maintained in a multi-cancer study.
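    The tile-grouping step behind the HP-Atlas can be sketched with a toy clustering of tile embeddings. The published pipeline applies Leiden community detection to a nearest-neighbour graph of self-supervised features; the k-means below is only a stand-in for that grouping step, and the embeddings are synthetic:

```python
import numpy as np

def farthest_point_kmeans(x, k, iters=20):
    """Tiny k-means with farthest-point initialisation. The HPL pipeline
    instead builds a nearest-neighbour graph over tile embeddings and runs
    Leiden community detection; this only illustrates the grouping idea."""
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min(((x[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(x[np.argmax(d)])  # next seed: farthest remaining point
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs of fake tile embeddings form two "clusters".
rng = np.random.default_rng(1)
tiles = np.vstack([rng.normal(0.0, 0.1, (50, 8)), rng.normal(5.0, 0.1, (50, 8))])
hpc = farthest_point_kmeans(tiles, k=2)
print(sorted(set(hpc.tolist())))  # → [0, 1]
```

    Graph-based community detection has the advantage of not fixing the number of clusters in advance, which is how the atlas discovers its phenotype count from the data.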

    HPL LUAD 20x

    Pre-trained weights for Barlow Twins on LUAD at 20X magnification with a 60% background limit.

    Learning a Low Dimensional Manifold of Real Cancer Tissue with PathologyGAN

    Histopathological images contain information about how a tumor interacts with its micro-environment. A better understanding of such interactions holds the key to improved diagnosis and treatment of cancer. Deep learning shows promise in achieving those goals; however, its application is limited by the cost of high-quality labels. Unsupervised learning, in particular deep generative models with representation-learning properties, provides an alternative path to further understand cancer tissue phenotypes, capturing tissue morphologies. We present a deep generative model that learns to simulate high-fidelity cancer tissue images while mapping the real images onto an interpretable low-dimensional latent space. The key to the model is an encoder trained by a previously developed generative adversarial network, PathologyGAN. Here we provide examples of how the latent space holds morphological characteristics of cancer tissue (e.g. tissue type, or cancer, lymphocyte, and stromal cell content). We tested the general applicability of our representations in three different settings: latent space visualization, training a tissue type classifier over latent representations, and multiple instance learning (MIL). Latent visualizations of breast cancer tissue show that distinct regions of the latent space enfold different characteristics (stroma, lymphocytes, and cancer cells). A logistic regression for colorectal tissue type classification trained over latent projections achieves 87% accuracy. Finally, we used attention-based deep MIL to predict the presence of epithelial cells in colorectal tissue, achieving 90% accuracy. Our results show that PathologyGAN captures distinct phenotype characteristics, paving the way for further understanding of the tumor micro-environment and ultimately refining histopathological classification for diagnosis and treatment.
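    Training a tissue-type classifier over latent projections, as in the 87%-accuracy experiment above, amounts to ordinary logistic regression on fixed embeddings. A self-contained sketch with synthetic "latent projections" (the data, dimensions, and training schedule are illustrative, not the paper's setup):

```python
import numpy as np

def train_logreg(x, y, lr=0.5, steps=300):
    """Plain logistic regression by gradient descent, standing in for the
    classifier trained over fixed PathologyGAN latent projections."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probabilities
        grad = p - y                             # gradient of log-loss
        w -= lr * x.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Fake 16-d "latent projections" for two tissue types, linearly separable.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(-1, 0.3, (100, 16)), rng.normal(1, 0.3, (100, 16))])
y = np.repeat([0.0, 1.0], 100)
w, b = train_logreg(x, y)
acc = ((1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5) == y).mean()
print(acc)  # → 1.0
```

    The point of using such a simple linear model is that any accuracy it achieves is attributable to the quality of the learned representations, not the classifier.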

    Adversarial Learning of Cancer Tissue Representations

    Deep learning based analysis of histopathology images shows promise in advancing the understanding of tumor progression, the tumor micro-environment, and their underpinning biological processes. So far, these approaches have focused on extracting information associated with annotations. In this work, we ask how much information can be learned from the tissue architecture itself. We present an adversarial learning model to extract feature representations of cancer tissue, without the need for manual annotations. We show that these representations are able to identify a variety of morphological characteristics across three cancer types: breast, colon, and lung. This is supported by 1) the separation of morphologic characteristics in the latent space; 2) the ability to classify tissue type with logistic regression using latent representations, with an AUC of 0.97 and 85% accuracy, comparable to supervised deep models; 3) the ability to predict the presence of tumor in Whole Slide Images (WSIs) using multiple instance learning (MIL), achieving an AUC of 0.98 and 94% accuracy. Our results show that our model captures distinct phenotypic characteristics of real tissue samples, paving the way for further understanding of tumor progression and the tumor micro-environment, and ultimately refining histopathological classification for diagnosis and treatment. (The code and pretrained models are available at https://github.com/AdalbertoCq/Adversarial-learning-of-cancer-tissue-representations.)
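    The MIL result above relies on attention-based pooling of tile embeddings into a single slide-level representation, in the style of Ilse et al.'s attention-based deep MIL. A sketch with hypothetical dimensions and random weights:

```python
import numpy as np

def attention_mil_pool(h, v, w):
    """Attention pooling: a_i = softmax_i(w^T tanh(V h_i)); the bag
    (slide) embedding is the attention-weighted sum of tile embeddings."""
    scores = np.tanh(h @ v.T) @ w  # one scalar score per tile, shape (n,)
    scores -= scores.max()         # shift for numerical stability
    a = np.exp(scores) / np.exp(scores).sum()
    return a, a @ h

rng = np.random.default_rng(0)
h = rng.normal(size=(12, 32))  # 12 tile embeddings from one WSI (toy sizes)
v = rng.normal(size=(8, 32))   # attention parameters (hypothetical shapes)
w = rng.normal(size=8)
a, bag = attention_mil_pool(h, v, w)
print(a.sum(), bag.shape)  # weights sum to 1; bag embedding is 32-d
```

    A slide-level classifier then operates on the bag embedding, and the attention weights themselves indicate which tiles drove the prediction, which is useful for pathologist review.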

    Self-supervised learning reveals clinically relevant histomorphological patterns for therapeutic strategies in colon cancer

    Self-supervised learning (SSL) automates the extraction and interpretation of histopathology features on unannotated hematoxylin-and-eosin-stained whole-slide images (WSIs). We trained an SSL Barlow Twins encoder on 435 TCGA colon adenocarcinoma WSIs to extract features from small image patches. Leiden community detection then grouped tiles into histomorphological phenotype clusters (HPCs). HPC reproducibility and predictive ability for overall survival were confirmed in an independent clinical trial cohort (N=1213 WSIs). This unbiased atlas resulted in 47 HPCs displaying unique and shared clinically significant histomorphological traits, highlighting tissue type, quantity, and architecture, especially in the context of tumor stroma. Through in-depth analysis of these HPCs, including immune landscape and gene set enrichment analyses and associations with clinical outcomes, we shed light on the factors influencing survival and responses to treatments such as standard adjuvant chemotherapy and experimental therapies. Further exploration of HPCs may unveil new insights and aid decision-making and personalized treatments for colon cancer patients.
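    Once tiles are assigned to HPCs, a slide is typically summarised by its cluster composition before survival modelling. A minimal sketch of such a composition vector (the tile assignments are invented; 47 matches the HPC count reported above):

```python
from collections import Counter

def hpc_composition(tile_clusters, n_hpcs):
    """Fraction of a slide's tiles falling in each HPC. Per-slide
    composition vectors like this are a natural input to downstream
    outcome models (e.g. Cox regression on HPC fractions)."""
    counts = Counter(tile_clusters)
    total = len(tile_clusters)
    return [counts.get(c, 0) / total for c in range(n_hpcs)]

# A toy WSI whose 8 tiles were assigned to 4 of 47 possible HPCs.
tiles = [0, 0, 3, 3, 3, 7, 12, 12]
vec = hpc_composition(tiles, n_hpcs=47)
print(round(vec[3], 3), sum(vec))  # → 0.375 1.0
```

    Because the vector sums to 1, slides with different tile counts become directly comparable, which is what lets a single atlas serve cohorts of varying slide sizes.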