CELNet: Evidence Localization for Pathology Images using Weakly Supervised Learning
Although deep convolutional neural networks boost the performance of image classification and segmentation in digital pathology analysis, they usually offer limited interpretability for clinical applications or require heavy annotations to achieve object localization. To overcome this problem, we propose a weakly supervised learning-based approach that can effectively learn to localize the discriminative evidence for a diagnostic label from weakly labeled training data. Experimental results show that our proposed method can reliably pinpoint the location of cancerous evidence supporting the decision of interest, while still achieving competitive performance on glimpse-level and slide-level histopathologic cancer detection tasks.
Comment: Accepted for MICCAI 201
Confocal Laser Endomicroscopy Image Analysis with Deep Convolutional Neural Networks
Rapid intraoperative diagnosis of brain tumors is of great importance for planning treatment and guiding the surgeon about the extent of resection. Currently, the standard for preliminary intraoperative tissue analysis is frozen section biopsy, which has major limitations such as tissue freezing and cutting artifacts, sampling errors, lack of immediate interaction between the pathologist and the surgeon, and a time-consuming workflow.
Handheld, portable confocal laser endomicroscopy (CLE) is being explored in neurosurgery for its ability to image histopathological features of tissue at cellular resolution in real time during brain tumor surgery. Over the course of examination of the surgical tumor resection, hundreds to thousands of images may be collected. The high number of images requires significant time and storage load for subsequent reviewing, which motivated several research groups to employ deep convolutional neural networks (DCNNs) to improve its utility during surgery. DCNNs have proven to be useful in natural and medical image analysis tasks such as classification, object detection, and image segmentation.
This thesis proposes using DCNNs for analyzing CLE images of brain tumors. Particularly, it explores the practicality of DCNNs in three main tasks. First, off-the-shelf DCNNs were used to classify images into diagnostic and non-diagnostic. Further experiments showed that both ensemble modeling and transfer learning improved the classifier's accuracy in evaluating the diagnostic quality of new images at the test stage. Second, a weakly supervised learning pipeline was developed for localizing key features of diagnostic CLE images from gliomas. Third, image style transfer was used to improve the diagnostic quality of CLE images from glioma tumors by transforming the histology patterns in CLE images of fluorescein sodium-stained tissue into the ones in conventional hematoxylin and eosin-stained tissue slides.
These studies suggest that DCNNs are well suited to the analysis of CLE images. They may assist surgeons in sorting out the non-diagnostic images, highlighting the key regions, and enhancing their appearance through pattern transformation in real time. With recent advances in deep learning such as generative adversarial networks and semi-supervised learning, new research directions need to be followed to discover further promise of DCNNs in CLE image analysis.
Doctoral Dissertation, Neuroscience, 201
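The ensemble modeling mentioned in the first task is commonly realized by averaging the per-class probabilities of several independently trained classifiers. The following is a minimal sketch of that averaging step, assuming each model already outputs softmax probabilities; the three probability matrices are hypothetical values, not results from the thesis.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities across the ensemble;
    the argmax of the average gives the ensemble label."""
    avg = np.mean(prob_list, axis=0)        # (n_images, n_classes)
    return avg, avg.argmax(axis=1)

# Hypothetical outputs of three classifiers on two images
# (columns: diagnostic, non-diagnostic).
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.6, 0.4]])
avg, labels = ensemble_predict([p1, p2, p3])
```

Averaging tends to smooth out the idiosyncratic errors of individual models, which is one reason ensembles often beat any single member.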
Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
Full acceptance of Deep Learning (DL) models in the clinical field remains rather low relative to the quantity of high-performing solutions reported in the literature. In particular, end users are reluctant to rely on the raw predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential way to temper the raw decisions provided by the DL black box and thus increase the interpretability and acceptability of the result for the final user. In this review, we propose an overview of the existing methods to quantify the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-life clinical routine. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges of uncertainty quantification in the medical field.
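One widely used family of methods surveyed in such reviews is Monte Carlo sampling (e.g. MC dropout or deep ensembles): the model is run several times with stochasticity enabled, and the spread of the predictions is summarized as an uncertainty score. The sketch below illustrates only the aggregation step, assuming the stochastic softmax outputs are already collected; the sample values are invented for illustration.

```python
import numpy as np

def predictive_uncertainty(mc_probs):
    """mc_probs: (T, n_classes) softmax outputs from T stochastic
    forward passes. Returns the mean prediction and its entropy,
    a common scalar uncertainty measure."""
    mean = mc_probs.mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))  # eps avoids log(0)
    return mean, entropy

# Passes that agree -> low entropy; passes that disagree -> high entropy.
confident = np.array([[0.95, 0.05]] * 10)
uncertain = np.array([[0.55, 0.45], [0.45, 0.55]] * 5)
_, h_conf = predictive_uncertainty(confident)
_, h_unc = predictive_uncertainty(uncertain)
```

In a clinical pipeline, cases whose entropy exceeds a calibrated threshold can be deferred to a human reader rather than decided automatically.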
Minimally Interactive Segmentation with Application to Human Placenta in Fetal MR Images
Placenta segmentation from fetal Magnetic Resonance (MR) images is important for fetal surgical planning. However, accurate segmentation results are difficult to achieve with automatic methods, due to sparse acquisition, inter-slice motion, and the widely varying position and shape of the placenta among pregnant women. Interactive methods have been widely used to obtain more accurate and robust results. A good interactive segmentation method should achieve high accuracy, minimize user interactions with low variability among users, and be computationally fast. Exploiting recent advances in machine learning, I explore a family of new interactive methods for placenta segmentation from fetal MR images. I investigate the combination of user interactions with learning from a single image or a large set of images. For learning from a single image, I propose novel Online Random Forests to efficiently leverage user interactions for the segmentation of 2D and 3D fetal MR images. I also investigate co-segmentation of multiple volumes of the same patient with 4D Graph Cuts. For learning from a large set of images, I first propose a deep learning-based framework that combines user interactions with Convolutional Neural Networks (CNN) based on geodesic distance transforms to achieve accurate segmentation and good interactivity. I then propose image-specific fine-tuning to make CNNs adaptive to different individual images and able to segment previously unseen objects. Experimental results show that the proposed algorithms outperform traditional interactive segmentation methods in terms of accuracy and interactivity. Therefore, they might be suitable for segmentation of the placenta in planning systems for fetal and maternal surgery, and for rapid characterization of the placenta from MR images. I also demonstrate that they can be applied to the segmentation of other organs from 2D and 3D images.
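The geodesic distance transform that couples user interactions with the CNN can be understood as a shortest-path distance from the user's seed pixels, where stepping across a strong intensity change is expensive. Below is a minimal Dijkstra-based sketch of such a transform on a 2D image; the 4-connectivity, the cost formula, and the weighting parameter `lam` are illustrative assumptions, not the thesis's exact formulation.

```python
import heapq
import numpy as np

def geodesic_distance(image, seeds, lam=1.0):
    """Intensity-weighted shortest-path distance from user seed pixels.
    Step cost = 1 (spatial) + lam * |intensity difference| (appearance)."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:                    # user clicks / scribble pixels
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + lam * abs(image[nr, nc] - image[r, c])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

# On a uniform image the geodesic distance reduces to Manhattan distance.
dist = geodesic_distance(np.zeros((3, 3)), seeds=[(0, 0)])
```

Feeding such distance maps to a CNN as extra input channels lets the network treat user scribbles as spatially smooth, appearance-aware guidance rather than isolated points.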
Going Deep in Medical Image Analysis: Concepts, Methods, Challenges and Future Directions
Medical Image Analysis is currently experiencing a paradigm shift due to Deep Learning. This technology has recently attracted so much interest from the Medical Imaging community that it led to a specialized conference, `Medical Imaging with Deep Learning', in the year 2018. This article surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying Pattern Recognition tasks, and further sub-categorize it following a taxonomy based on human anatomy. This article does not assume prior knowledge of Deep Learning and makes a significant contribution in explaining the core Deep Learning concepts to non-experts in the Medical community. Unique to this study is the Computer Vision/Machine Learning perspective taken on the advances of Deep Learning in Medical Imaging. This enables us to single out the `lack of appropriately annotated large-scale datasets' as the core challenge (among other challenges) in this research direction. We draw on insights from the sister research fields of Computer Vision, Pattern Recognition and Machine Learning, where the techniques for dealing with such challenges have already matured, to provide promising directions for the Medical Imaging community to fully harness Deep Learning in the future.
Vessel-CAPTCHA: An efficient learning framework for vessel annotation and segmentation
Deep learning techniques for 3D brain vessel image segmentation have not been as successful as in the segmentation of other organs and tissues. This can be explained by two factors. First, deep learning techniques tend to show poor performance at the segmentation of objects that are relatively small compared to the size of the full image. Second, due to the complexity of vascular trees and the small size of vessels, it is challenging to obtain the amount of annotated training data typically needed by deep learning methods. To address these problems, we propose a novel annotation-efficient deep learning vessel segmentation framework. The framework avoids pixel-wise annotations, only requiring weak patch-level labels to discriminate between vessel and non-vessel 2D patches in the training set, in a setup similar to the CAPTCHAs used to differentiate humans from bots in web applications. The user-provided weak annotations are used for two tasks: (1) to synthesize pixel-wise pseudo-labels for vessels and background in each patch, which are used to train a segmentation network, and (2) to train a classifier network. The classifier network makes it possible to generate additional weak patch labels, further reducing the annotation burden, and it acts as a second opinion for poor-quality images. We use this framework for the segmentation of the cerebrovascular tree in Time-of-Flight angiography (TOF) and Susceptibility-Weighted Images (SWI). The results show that the framework achieves state-of-the-art accuracy, while reducing the annotation time by ∼77% w.r.t. learning-based segmentation methods using pixel-wise labels for training.
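The first task above, synthesizing pixel-wise pseudo-labels from patch-level labels, can be illustrated with a deliberately simple rule: inside patches weakly labeled "vessel", treat the brightest pixels as vessel pseudo-labels (vessels are bright in TOF angiography), and label non-vessel patches as pure background. This percentile rule is an illustrative stand-in, not the paper's actual pseudo-labeling scheme.

```python
import numpy as np

def pseudo_labels(patches, patch_labels, q=90):
    """Synthesize pixel-wise pseudo-labels from weak patch labels.
    Vessel patches (label 1): pixels above the q-th intensity
    percentile become vessel; non-vessel patches (label 0): all
    background."""
    out = []
    for patch, lab in zip(patches, patch_labels):
        if lab == 0:
            out.append(np.zeros_like(patch, dtype=bool))
        else:
            thr = np.percentile(patch, q)
            out.append(patch > thr)
    return out

# Toy patches: one with a single bright "vessel" pixel, one background.
vessel_patch = np.zeros((4, 4)); vessel_patch[1, 1] = 1.0
background_patch = np.zeros((4, 4))
masks = pseudo_labels([vessel_patch, background_patch], [1, 0])
```

The resulting masks are noisy, but a segmentation network trained on many such patches can average out the noise, which is the core bet of the annotation-efficient setup.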