Deep Learning with Mixed Supervision for Brain Tumor Segmentation
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. In this paper, we propose to use both types of training data (fully-annotated and weakly-annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks in order to exploit information contained in weakly-annotated images while preventing the network from learning features that are irrelevant to the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in Magnetic Resonance images from the BRATS 2018 challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly-annotated and fully-annotated images available for training.
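The mixed-supervision objective described above can be sketched as a combined loss: fully-annotated images contribute both a pixel-level segmentation term and an image-level classification term, while weakly-annotated images contribute only the classification term. The function name and the `alpha` weighting below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def joint_loss(seg_pred, seg_mask, cls_pred, cls_label, has_mask, alpha=0.5):
    """Joint segmentation + classification loss (illustrative sketch).

    seg_pred : (H, W) predicted tumor probabilities per pixel
    seg_mask : (H, W) binary ground-truth mask (only used if has_mask)
    cls_pred : scalar predicted probability that a tumor is present
    cls_label: 1 if the image contains a tumor, else 0
    has_mask : True for fully-annotated images, False for weakly-annotated
    alpha    : assumed weight of the classification branch (hyperparameter)
    """
    eps = 1e-7
    # Image-level binary cross-entropy (classification branch)
    cls_loss = -(cls_label * np.log(cls_pred + eps)
                 + (1 - cls_label) * np.log(1 - cls_pred + eps))
    if has_mask:
        # Pixel-level binary cross-entropy (segmentation branch)
        seg_loss = -np.mean(seg_mask * np.log(seg_pred + eps)
                            + (1 - seg_mask) * np.log(1 - seg_pred + eps))
        return seg_loss + alpha * cls_loss
    # Weakly-annotated image: only the classification term supervises
    return alpha * cls_loss
```

In a real training loop both branches would share the encoder of the segmentation network, so gradients from the classification term also shape the shared features.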
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, which includes the steps of cancer diagnosis followed by the typical classification methods used by doctors, giving readers a historical idea of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient for obtaining better performance. Moreover, considering all types of audiences, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Previously used methods are considered inefficient, calling for better and smarter methods for cancer diagnosis. Artificial intelligence and cancer diagnosis are gaining attention as a way to define better diagnostic tools. In particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DAEs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with from-scratch knowledge of the state-of-the-art achievements.
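Two of the overlap metrics listed in the abstract above, the Dice coefficient and the Jaccard index, can be computed directly from binary masks. A minimal sketch (function names are illustrative; assumes non-empty masks):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def jaccard_index(pred, target):
    """Jaccard = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union
```

The two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report only one of them.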
A 3D Coarse-to-Fine Framework for Volumetric Medical Image Segmentation
In this paper, we adopt 3D Convolutional Neural Networks to segment volumetric medical images. Although deep neural networks have been proven to be very effective on many 2D vision tasks, it is still challenging to apply them to 3D tasks due to the limited amount of annotated 3D data and limited computational resources. We propose a novel 3D-based coarse-to-fine framework to tackle these challenges effectively and efficiently. The proposed 3D-based framework outperforms its 2D counterpart by a large margin, since it can leverage the rich spatial information along all three axes. We conduct experiments on two datasets, which include healthy and pathological pancreases respectively, and achieve the current state-of-the-art in terms of Dice-Sørensen Coefficient (DSC). On the NIH pancreas segmentation dataset, we outperform the previous best by an average of over 2%, and the worst case is improved by 7% to reach almost 70%, which indicates the reliability of our framework in clinical applications.
Comment: 9 pages, 4 figures, Accepted to 3D
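The coarse-to-fine idea described above can be sketched as a two-stage pipeline: a coarse model segments the full volume to localize the organ, then a fine model re-segments a cropped region of interest around it. The helper below is an illustrative assumption of how such a pipeline could be wired, not the authors' code; `coarse_fn` and `fine_fn` stand in for the two trained networks:

```python
import numpy as np

def coarse_to_fine(volume, coarse_fn, fine_fn, margin=8):
    """Two-stage segmentation sketch: localize coarsely, refine in a crop.

    volume    : 3D array of voxel intensities
    coarse_fn : callable, full volume -> binary coarse mask
    fine_fn   : callable, cropped sub-volume -> binary fine mask
    margin    : voxels of context kept around the coarse bounding box
    """
    # Stage 1: coarse segmentation over the whole volume
    coarse_mask = coarse_fn(volume)
    idx = np.argwhere(coarse_mask)
    if idx.size == 0:
        return coarse_mask  # nothing found; nothing to refine

    # Bounding box of the coarse prediction, padded by the margin
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    roi = tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))

    # Stage 2: fine segmentation restricted to the cropped ROI,
    # pasted back into a full-size mask
    fine_mask = np.zeros_like(coarse_mask)
    fine_mask[roi] = fine_fn(volume[roi])
    return fine_mask
```

The benefit the abstract points to comes from stage 2 operating on a small crop, so the fine network can afford higher resolution or a larger receptive field than a single-pass model on the full volume.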
Compositional Representation Learning for Brain Tumour Segmentation
For brain tumour segmentation, deep learning models can achieve human
expert-level performance given a large amount of data and pixel-level
annotations. However, the expensive exercise of obtaining pixel-level
annotations for large amounts of data is not always feasible, and performance
is often heavily reduced in a low-annotated data regime. To tackle this
challenge, we adapt a mixed supervision framework, vMFNet, to learn robust
compositional representations using unsupervised learning and weak supervision
alongside non-exhaustive pixel-level pathology labels. In particular, we use
the BraTS dataset to simulate a collection of 2-point expert pathology
annotations indicating the top and bottom slice of the tumour (or tumour
sub-regions: peritumoural edema, GD-enhancing tumour, and the necrotic /
non-enhancing tumour) in each MRI volume, from which weak image-level labels
that indicate the presence or absence of the tumour (or the tumour sub-regions)
in the image are constructed. Then, vMFNet models the encoded image features
with von-Mises-Fisher (vMF) distributions, via learnable and compositional vMF
kernels which capture information about structures in the images. We show that
good tumour segmentation performance can be achieved with a large amount of
weakly labelled data but only a small amount of fully-annotated data.
Interestingly, emergent learning of anatomical structures occurs in the
compositional representation even given only supervision relating to pathology
(tumour).
Comment: Accepted by DART workshop, MICCAI 202
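The weak-label construction described above, deriving a per-slice presence/absence label from a 2-point (top and bottom slice) annotation, can be sketched in a few lines. The function name and list representation are illustrative assumptions, not the authors' code:

```python
def weak_labels_from_span(num_slices, top, bottom):
    """Per-slice weak labels from a 2-point tumour annotation.

    num_slices : number of axial slices in the MRI volume
    top        : index of the first annotated slice containing tumour
    bottom     : index of the last annotated slice containing tumour

    Returns a list with 1 for slices inside the annotated span
    (tumour assumed present) and 0 outside it.
    """
    return [1 if top <= z <= bottom else 0 for z in range(num_slices)]
```

The same construction applies per tumour sub-region (edema, GD-enhancing, necrotic/non-enhancing) when each sub-region gets its own top/bottom pair.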
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201