164 research outputs found

    End-to-end Learning for Image-based Detection of Molecular Alterations in Digital Pathology

    Current approaches for classification of whole slide images (WSI) in digital pathology predominantly utilize a two-stage learning pipeline. The first stage identifies areas of interest (e.g. tumor tissue), while the second stage processes cropped tiles from these areas in a supervised fashion. During inference, a large number of tiles are combined into a unified prediction for the entire slide. A major drawback of such approaches is the requirement for task-specific auxiliary labels, which are not acquired in clinical routine. We propose a novel learning pipeline for WSI classification that is trainable end-to-end and does not require any auxiliary annotations. We apply our approach to predict molecular alterations in a number of different use cases, including detection of microsatellite instability in colorectal tumors and prediction of specific mutations for colon, lung, and breast cancer cases from The Cancer Genome Atlas. Results reach AUC scores of up to 94% and are shown to be competitive with state-of-the-art two-stage pipelines. We believe our approach can facilitate future research in digital pathology and contribute to solving a broad range of problems around the prediction of cancer phenotypes, hopefully enabling personalized therapies for more patients in the future.
    Comment: MICCAI 2022; 8.5 pages, 4 figures
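
    As a rough illustration of how such an end-to-end pipeline can be wired up, the sketch below encodes tiles with a small CNN, mean-pools the tile embeddings into one slide vector, and trains against the slide-level label only; the backbone and pooling choice are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an end-to-end WSI classifier trained with slide-level
# labels only (no auxiliary tile annotations). Architecture is illustrative.
import torch
import torch.nn as nn

class EndToEndWSIClassifier(nn.Module):
    def __init__(self, tile_dim=512, num_classes=2):
        super().__init__()
        # Any CNN backbone producing one embedding per tile would do here.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, tile_dim), nn.ReLU(),
        )
        self.head = nn.Linear(tile_dim, num_classes)

    def forward(self, tiles):          # tiles: (num_tiles, 3, H, W)
        emb = self.encoder(tiles)      # (num_tiles, tile_dim)
        slide_emb = emb.mean(dim=0)    # aggregate tiles into one slide vector
        return self.head(slide_emb)    # slide-level logits

model = EndToEndWSIClassifier()
tiles = torch.randn(16, 3, 224, 224)               # 16 tiles from one slide
loss = nn.functional.cross_entropy(
    model(tiles).unsqueeze(0), torch.tensor([1]))  # slide label only
loss.backward()                                    # gradients reach the encoder
```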

    Self-rule to multi-adapt: Generalized multi-source feature learning using unsupervised domain adaptation for colorectal cancer tissue detection.

    Supervised learning is constrained by the availability of labeled data, which are especially expensive to acquire in the field of digital pathology. Making use of open-source data for pre-training, or using domain adaptation, can be a way to overcome this issue. However, pre-trained networks often fail to generalize to new test domains that are not identically distributed, due to variations in tissue staining, type, and texture. Additionally, current domain adaptation methods mainly rely on fully-labeled source datasets. In this work, we propose Self-Rule to Multi-Adapt (SRMA), which takes advantage of self-supervised learning to perform domain adaptation and removes the need for fully-labeled source datasets. SRMA can effectively transfer discriminative knowledge obtained from a few labeled samples of the source domain to a new target domain without requiring additional tissue annotations. Our method harnesses the structure of both domains by capturing visual similarity with intra-domain and cross-domain self-supervision. Moreover, we present a generalized formulation of our approach that allows the framework to learn from multiple source domains. We show that our proposed method outperforms baselines for domain adaptation of colorectal tissue type classification in single- and multi-source settings, and we further validate our approach on an in-house clinical cohort. The code and trained models are available open-source: https://github.com/christianabbet/SRA
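
    A hedged sketch of the intra- and cross-domain self-supervision idea follows; it is a simplification, not SRMA itself. Two augmentations of the same image are pulled together within a domain, while each target-domain embedding is pulled toward its most similar source-domain embedding as a pseudo-positive.

```python
# Simplified intra-domain (InfoNCE) and cross-domain (nearest-neighbour
# pseudo-positive) self-supervision losses. Not the authors' exact method.
import torch
import torch.nn.functional as F

def intra_domain_loss(z1, z2, tau=0.1):
    """InfoNCE between two augmented views of the same batch of images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device) # positives on diagonal
    return F.cross_entropy(logits, targets)

def cross_domain_loss(z_tgt, z_src, tau=0.1):
    """Treat each target sample's nearest source sample as its positive."""
    z_tgt, z_src = F.normalize(z_tgt, dim=1), F.normalize(z_src, dim=1)
    sim = z_tgt @ z_src.t() / tau
    nearest = sim.argmax(dim=1)          # pseudo-positive source per target
    return F.cross_entropy(sim, nearest)
```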

    Automated Grading of Bladder Cancer using Deep Learning

    PhD thesis in Information Technology. Urothelial carcinoma is the most common type of bladder cancer and is among the cancer types with the highest recurrence rate and lifetime treatment cost per patient. Diagnosed patients are stratified into risk groups, mainly based on the histological grade and stage. However, it is well known that correct grading of bladder cancer suffers from intra- and interobserver variability and inconsistent reproducibility between pathologists, potentially leading to under- or overtreatment of patients. The economic burden, unnecessary patient suffering, and additional load on the health care system illustrate the importance of developing new tools to aid pathologists. With the introduction of digital pathology, large amounts of data have become available in the form of digital histological whole-slide images (WSI). However, despite the massive amount of data, annotations for the given data are lacking. Another potential problem is that tissue samples of urothelial carcinoma contain a mixture of damaged tissue, blood, stroma, muscle, and urothelium, where it is mainly the urothelium that is diagnostically relevant for grading. A method for tissue segmentation is investigated, with the aim of segmenting WSIs into six tissue classes: urothelium, stroma, muscle, damaged tissue, blood, and background. Several methods based on convolutional neural networks (CNN) for tile-wise classification are proposed. Both single-scale and multiscale models were explored to see whether including more magnification levels would improve performance. Different techniques, such as unsupervised learning, semi-supervised learning, and domain adaptation, are explored to mitigate the challenge of missing large quantities of annotated data. It is necessary to extract tiles from the WSI, since it is intractable to process an entire WSI at full resolution at once. We have proposed a method to parameterize and automate the task of extracting tiles at different scales, with a region of interest (ROI) defined at one of the scales. The method is reproducible and easy to describe by reporting the parameters. A pipeline for automated diagnostic grading, called TRIgrade, is proposed. First, the tissue segmentation method is used to find the diagnostically relevant urothelium tissue. Then, the parameterized tile extraction method is used to extract tiles from the urothelium regions at three magnification levels from 300 WSIs. The extracted tiles form the training, validation, and test data used to train and test a diagnostic model. The final system outputs a segmented tissue image showing all tissue regions in the WSI, a WHO grade heatmap indicating low- and high-grade carcinoma regions, and, finally, a slide-level WHO grade prediction. The proposed TRIgrade pipeline correctly graded 45 of 50 WSIs, achieving an accuracy of 90%.
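
    A minimal sketch of what such parameterized multiscale tile extraction can look like, assuming per-level downsample factors of the kind exposed by common WSI readers (e.g. OpenSlide); the function and parameter names are illustrative, not the thesis's API.

```python
# Sketch: an ROI defined at one magnification level is mapped to tile
# coordinates at another level, controlled entirely by reported parameters.
def tiles_from_roi(roi, roi_downsample, target_downsample,
                   tile_size=256, overlap=0):
    """roi = (x, y, w, h) in ROI-level pixels; returns top-left coordinates
    (in target-level pixels) of all tiles covering the ROI."""
    scale = roi_downsample / target_downsample      # ROI-level px -> target px
    x0, y0 = int(roi[0] * scale), int(roi[1] * scale)
    w, h = int(roi[2] * scale), int(roi[3] * scale)
    step = tile_size - overlap
    return [(x, y)
            for y in range(y0, y0 + h - tile_size + 1, step)
            for x in range(x0, x0 + w - tile_size + 1, step)]

# ROI drawn at a low magnification (downsample 4); extract 256 px tiles at
# the highest magnification (downsample 1).
coords = tiles_from_roi((100, 200, 512, 512), roi_downsample=4,
                        target_downsample=1, tile_size=256)
```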

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments, which means that images can be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge is that image quality directly impacts the reliability and consistency of classification, and this problem has attracted wide interest within the computer vision community. We propose a transformation step that aims to enhance the generalization ability of CNN models in the presence of noise unseen during training. Concretely, the delineation maps of given images are computed using the CORF push-pull inhibition operator. This operation transforms an input image into a representation that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST dataset with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classifier without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
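
    The CORF operator itself models orientation-selective receptive fields and is not reproduced here; the following is a crude difference-of-Gaussians stand-in that only illustrates the push-pull principle, i.e. an excitatory response inhibited by a weighted response of opposite polarity, applied as a preprocessing step before the CNN.

```python
# Crude stand-in for the push-pull idea, NOT the CORF operator itself: an
# excitatory ("push") edge response is suppressed by a weighted inhibitory
# ("pull") response of opposite polarity, which damps high-frequency noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def push_pull_map(img, sigma=1.0, inhibition=0.8):
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, 2 * sigma)
    push = np.maximum(dog, 0.0)          # response to the preferred polarity
    pull = np.maximum(-dog, 0.0)         # response to the opposite polarity
    return push - inhibition * pull      # inhibited delineation-like map

# Transform images before feeding them to the CNN classifier:
noisy = np.random.rand(28, 28).astype(np.float32)
delineation = push_pull_map(noisy)
```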

    Segmentation of Pathology Images: A Deep Learning Strategy with Annotated Data

    Cancer has significantly threatened human life and health for many years. In the clinic, histopathology image segmentation is the gold standard for evaluating patient prognosis and treatment outcome. Generally, manually labelling tumour regions in hundreds of high-resolution histopathological images is time-consuming and expensive for pathologists. Recently, advancements in hardware and computer vision have allowed deep-learning-based methods to become the mainstream approach for segmenting tumours automatically, significantly reducing the workload of pathologists. However, most current methods rely on large-scale labelled histopathological images. This research therefore studies label-efficient tumour segmentation methods using deep-learning paradigms to relieve the annotation limitations. Chapter 3 proposes an ensemble framework for fully-supervised tumour segmentation. Usually, the performance of an individually trained network is limited by the significant morphological variance in histopathological images. We propose a fully-supervised ensemble fusion model that uses both shallow and deep U-Nets, trained with images of different resolutions and subsets of images, for robust prediction of tumour regions. Noise elimination is achieved with convolutional Conditional Random Fields. Two open datasets are used to evaluate the proposed method: the ACDC@LungHP challenge at ISBI2019 and the DigestPath challenge at MICCAI2019. With a dice coefficient of 79.7%, the proposed method takes third place in ACDC@LungHP. In DigestPath 2019, the proposed method achieves a dice coefficient of 77.3%. Well-annotated images are an indispensable part of training fully-supervised segmentation strategies. However, large-scale histopathology images are rarely finely annotated in clinical practice. It is common for labels to be of poor quality or for only a few images to be manually marked by experts. Consequently, fully-supervised methods cannot perform well in these cases. Chapter 4 proposes self-supervised contrastive learning for tumour segmentation. A self-supervised cancer segmentation framework is proposed to reduce label dependency. An innovative contrastive learning scheme is developed to represent tumour features based on unlabelled images. Unlike a normal U-Net, the backbone is a patch-based segmentation network. Additionally, data augmentation and contrastive losses are applied to improve the discriminability of tumour features. A convolutional Conditional Random Field is used to smooth and eliminate noise. Three labelled and fourteen unlabelled images are collected from a private skin cancer dataset called BSS. Experimental results show that the proposed method achieves better tumour segmentation performance than other popular self-supervised methods. However, when evaluated on the same public datasets as in Chapter 3, the proposed self-supervised method struggles with fine-grained segmentation around tumour boundaries compared to our supervised method. Chapter 5 proposes a sketch-based weakly-supervised tumour segmentation method. To segment tumour regions precisely with coarse annotations, a sketch-supervised method is proposed, containing a dual CNN-Transformer network and a global normalised class activation map. The CNN-Transformer network simultaneously models global and local tumour features. With the global normalised class activation map, a gradient-based tumour representation can be obtained from the dual network predictions. We invited experts to mark fine and coarse annotations in the private BSS and the public PAIP2019 datasets to facilitate reproducible performance comparisons. Using the BSS dataset, the proposed method achieves 76.686% IoU and 86.6% Dice scores, outperforming state-of-the-art methods. Additionally, the proposed method achieves a Dice gain of 8.372% compared with U-Net on the PAIP2019 dataset. The thesis presents three approaches to segmenting cancers from histology images: fully-supervised, self-supervised, and weakly-supervised methods. This research effectively segments tumour regions based on histopathological annotations and well-designed modules. Our studies comprehensively demonstrate label-efficient automatic histopathological image segmentation. Experimental results show that our methods achieve state-of-the-art segmentation performance on private and public datasets. In the future, we plan to integrate more tumour feature representation technologies with other medical modalities and apply them to clinical research.
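
    As an illustration of the Chapter 3 fusion idea, here is a minimal sketch that averages the probability maps of several trained U-Nets and thresholds the result; the CRF post-processing step is omitted, and the model interface is an assumption.

```python
# Minimal sketch of ensemble fusion: average the probability maps of several
# U-Nets trained on different resolutions or image subsets, then threshold.
import torch

@torch.no_grad()
def ensemble_segment(models, image, threshold=0.5):
    """models: trained networks mapping (1, 3, H, W) -> (1, 1, H, W) logits."""
    probs = torch.stack([m(image).sigmoid() for m in models]).mean(dim=0)
    return (probs > threshold).float()   # fused binary tumour mask
```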

    Automatic detection of pathological regions in medical images

    Medical images are an essential tool in the daily clinical routine for the detection, diagnosis, and monitoring of diseases. Different imaging modalities such as magnetic resonance (MR) or X-ray imaging are used to visualize the manifestations of various diseases, providing physicians with valuable information. However, having human experts analyze every single image is a tedious and laborious task. Deep learning methods have shown great potential to support this process, but many images are needed to train reliable neural networks. Besides the accuracy of the final method, the interpretability of the results is crucial for a deep learning method to be established. A fundamental problem in the medical field is the limited availability of sufficiently large datasets, owing to the variability of imaging techniques and their configurations. The aim of this thesis is the development of deep learning methods for the automatic identification of anomalous regions in medical images. Each method is tailored to the amount and type of available data. First, we present a fully supervised segmentation method based on denoising diffusion models. This requires a large dataset with pixel-wise manual annotations of the pathological regions. Due to its implicit ensemble characteristic, our method provides uncertainty maps that make the model's decisions interpretable. Manual pixel-wise annotations are prone to human bias, hard to obtain, and often simply unavailable. Weakly supervised methods avoid these issues by relying only on image-level annotations. We present two different approaches based on generative models to generate pixel-wise anomaly maps using only image-level annotations, i.e., a generative adversarial network and a denoising diffusion model. Both perform image-to-image translation between a set of healthy and a set of diseased subjects. Pixel-wise anomaly maps can be obtained by computing the difference between the original image of the diseased subject and the synthetic image of its healthy representation. In an extension of the diffusion-based anomaly detection method, we present a flexible framework to solve various image-to-image translation tasks. With this method, we managed to change the size of tumors in MR images, and we were able to add realistic pathologies to images of healthy subjects. Finally, we focus on a problem frequently occurring when working with MR images: if not enough data from one MR scanner are available, data from other scanners need to be considered. This multi-scanner setting introduces a bias between the datasets of different scanners, limiting the performance of deep learning models. We present a regularization strategy on the model's latent space to overcome the problems raised by this multi-site setting.
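
    The anomaly-map construction described above can be sketched as follows, with `to_healthy` standing in for the trained image-to-image translation model (GAN or diffusion); this shows the difference-map idea only, not the full method.

```python
# Sketch: translate a diseased image into its healthy-looking counterpart,
# then take the pixel-wise difference as an anomaly map.
import torch

@torch.no_grad()
def anomaly_map(image, to_healthy):
    """image: (1, C, H, W); to_healthy: image-to-image translation model."""
    healthy = to_healthy(image)                 # synthetic healthy version
    return (image - healthy).abs().sum(dim=1)   # per-pixel anomaly score
```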

    Symbolic versus sub-symbolic approaches: a case study on training Deep Networks to play Nine Men’s Morris game

    Artificial neural networks, thanks to new Deep Learning techniques, have completely revolutionized the technological landscape of recent years, proving effective in a variety of Artificial Intelligence tasks and related fields. It would therefore be interesting to analyze how, and to what extent, deep networks can replace symbolic AIs. After the impressive results obtained in the game of Go, the game of Nine Men's Morris, a widespread and widely studied board game, was chosen as a case study. The fully sub-symbolic system Neural Nine Men's Morris was then created, which uses three neural networks to choose the best move. The networks were trained on a dataset of more than 1,500,000 (game state, best move) pairs, built from the choices of a symbolic AI. The system proved to have learned the rules of the game, proposing a valid move in more than 99% of the test cases. It also reached an accuracy of 39% with respect to the dataset and developed its own playing strategy, different from that of the AI that trained it, proving to be a worse or better player depending on the opponent. The results obtained in this case study show that, in this context, the key to success in designing state-of-the-art AI systems seems to be a good balance between symbolic and sub-symbolic techniques, with more weight given to the latter, aiming for a seamless integration of the two technologies.
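
    A hedged sketch of this supervised setup: a policy network maps an encoded board state to a distribution over moves and is trained on the symbolic AI's (state, best move) pairs. The board encoding and move space below are deliberately simplified assumptions, not the system's actual three-network design.

```python
# Simplified sketch: learn to imitate a symbolic AI's move choices from
# (state, best move) pairs via plain supervised classification.
import torch
import torch.nn as nn

NUM_POSITIONS = 24                     # board points in Nine Men's Morris

policy = nn.Sequential(
    nn.Linear(NUM_POSITIONS * 3, 256), nn.ReLU(),  # one-hot: empty/white/black
    nn.Linear(256, NUM_POSITIONS),                 # e.g. placement target point
)

state = torch.randn(32, NUM_POSITIONS * 3)          # batch of encoded states
best_move = torch.randint(0, NUM_POSITIONS, (32,))  # symbolic AI's choices
loss = nn.functional.cross_entropy(policy(state), best_move)
loss.backward()
```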

    Advanced Deep Learning for Medical Image Analysis

    The application of deep learning is evolving, including in expert systems for healthcare, such as disease classification. However, several challenges arise when deep-learning algorithms are applied to disease classification, and this study aims to improve classification performance by addressing them. The thesis proposes a cost-sensitive imbalance training algorithm to handle unequal numbers of training examples per class, a two-stage Bayesian optimisation training algorithm, and a dual-branch network for training a one-class classification scheme, further improving classification performance.
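
    The cost-sensitive idea can be illustrated with standard inverse-frequency class weighting, shown below; this is a common technique, and the thesis's exact cost assignment may differ.

```python
# Sketch of cost-sensitive training for class imbalance: weight each class's
# loss inversely to its frequency so minority classes are not swamped.
import torch
import torch.nn as nn

counts = torch.tensor([900., 100.])              # imbalanced class frequencies
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)                 # minority errors cost more
```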

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.