
    PathologyGAN: Learning deep representations of cancer tissue

    Histopathological images of tumours contain abundant information about how tumours grow and how they interact with their micro-environment. A better understanding of the tissue phenotypes in these images could reveal novel determinants of the pathological processes underlying cancer, and in turn improve diagnosis and treatment options. Advances in deep learning make it well suited to these goals, but its application is limited by the cost of obtaining high-quality labels for patient data. Unsupervised learning, in particular deep generative models with representation learning properties, offers an alternative path to understanding cancer tissue phenotypes by capturing tissue morphologies. In this paper, we develop a framework that allows Generative Adversarial Networks (GANs) to capture key tissue features and uses these characteristics to give structure to the latent space. To this end, we trained our model on two datasets: H&E-stained colorectal cancer tissue from the National Center for Tumor Diseases (NCT, Germany), comprising 86 slide images, and H&E-stained breast cancer tissue from the Netherlands Cancer Institute (NKI, Netherlands) and Vancouver General Hospital (VGH, Canada), comprising 576 tissue micro-arrays (TMAs). We show that our model generates high-quality images, with a Fréchet Inception Distance (FID) of 16.65 (breast cancer) and 32.05 (colorectal cancer). We further assess image quality using cancer tissue characteristics (e.g. counts of cancer cells, lymphocytes, or stromal cells), using this quantitative information to calculate the FID, and show consistent performance with a score of 9.86. Additionally, the latent space of our model has an interpretable structure and supports semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images and real ones.
The code, generated images, and pretrained model are available at https://github.com/AdalbertoCq/Pathology-GA
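
The FID scores above compare Gaussian fits of feature vectors extracted from real and generated images. As a minimal sketch (our own illustration, not the paper's implementation; the fixed Inception feature extractor is omitted and the function name is ours), the distance between two feature sets can be computed with NumPy alone:

```python
import numpy as np

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between two feature sets, each modelled as a
    multivariate Gaussian (mean vector + covariance matrix)."""
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    # tr(sqrtm(cov_r @ cov_g)) equals the sum of the square roots of the
    # eigenvalues of cov_r @ cov_g; clip tiny negatives from round-off.
    eigvals = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_sqrt)
```

In practice the feature vectors come from a fixed Inception network; identical feature distributions give a distance near zero, and lower scores mean the generated distribution is closer to the real one.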

    Towards Robust Deep Learning for Medical Image Analysis

    Multi-dimensional medical data are rapidly being collected to enhance healthcare. With recent advances in artificial intelligence, deep learning techniques have been widely applied to medical images, which constitute a significant proportion of medical data. Automated medical image analysis has the potential to benefit routine clinical procedures, e.g., disease screening, malignancy diagnosis, patient risk prediction, and surgical planning. Although preliminary successes have been achieved, the robustness of these approaches needs to be carefully validated and sufficiently guaranteed before they are applied to real-world clinical problems. In this thesis, we propose different approaches to improve the robustness of deep learning algorithms for automated medical image analysis. (i) In terms of network architecture, we leverage the advantages of both 2D and 3D networks and propose an alternative 2.5D approach for 3D organ segmentation. (ii) To improve data efficiency and utilize large-scale unlabeled medical data, we propose a unified framework for semi-supervised medical image segmentation and domain adaptation. (iii) For safety-critical applications, we design a unified approach for failure detection and anomaly segmentation. (iv) We study federated learning, which enables collaborative learning while preserving data privacy, and improve the robustness of the algorithm in the non-i.i.d. setting. (v) We incorporate multi-phase information for more accurate pancreatic tumor detection. (vi) Finally, we present our findings on potential pancreatic cancer screening from non-contrast CT scans, which outperforms expert radiologists.
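
The 2.5D idea in contribution (i) can be illustrated with a small sketch (our own illustration, not the thesis code, and the function name is ours): instead of feeding a full 3D volume to a 3D network, each axial slice is stacked with its neighbours as extra input channels, so a 2D network still sees local through-plane context:

```python
import numpy as np

def make_2p5d_input(volume, slice_idx, context=1):
    """Build a 2.5D input for a 2D network: the target axial slice plus
    `context` neighbouring slices on each side, stacked as channels.
    `volume` has shape (depth, height, width); indices at the volume
    boundary are clamped to the nearest valid slice."""
    depth = volume.shape[0]
    offsets = range(slice_idx - context, slice_idx + context + 1)
    slices = [volume[min(max(i, 0), depth - 1)] for i in offsets]
    return np.stack(slices, axis=0)  # shape: (2*context + 1, height, width)
```

The resulting array can be treated like a multi-channel 2D image, keeping the memory cost of a 2D model while retaining some 3D context.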