
    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes an expanded discussion section and a reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 201

    Data efficient deep learning for medical image analysis: A survey

    The rapid evolution of deep learning has significantly advanced the field of medical image analysis. However, despite these achievements, further enhancement of deep learning models for medical image analysis faces a significant challenge: the scarcity of large, well-annotated datasets. To address this issue, recent years have witnessed a growing emphasis on the development of data-efficient deep learning methods. This paper conducts a thorough review of data-efficient deep learning methods for medical image analysis. To this end, we categorize these methods by the level of supervision they rely on: no supervision, inexact supervision, incomplete supervision, inaccurate supervision, and only limited supervision. We further divide these categories into finer subcategories. For example, we divide inexact supervision into multiple instance learning and learning with weak annotations, and incomplete supervision into semi-supervised learning, active learning, and domain-adaptive learning, among others. Furthermore, we systematically summarize commonly used datasets for data-efficient deep learning in medical image analysis and investigate future research directions to conclude this survey.
    Comment: Under Review
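As a concrete illustration of the semi-supervised category mentioned in this abstract, the following minimal sketch (not taken from the survey; the `pseudo_label` helper and the confidence threshold are illustrative assumptions) shows confidence-based pseudo-labeling, in which a model's confident predictions on unlabeled data are promoted to training labels:

```python
import numpy as np

def pseudo_label(probs, threshold=0.9):
    """Given per-sample class probabilities for unlabeled data, keep the
    samples whose maximum predicted probability exceeds the confidence
    threshold and return their indices and hard (argmax) labels."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], labels[keep]

# Three unlabeled samples with softmax outputs from some trained classifier
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.08, 0.92]])
idx, labels = pseudo_label(probs)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

The confidently labeled samples would then be merged with the small annotated set and the model retrained, iterating as needed.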

    Accurate segmentation and registration of skin lesion images to evaluate lesion change

    Full text link
    Skin cancer is a major health problem. There are several techniques to help diagnose skin lesions from a captured image. Computer-aided diagnosis (CAD) systems operate on single images of skin lesions, extracting lesion features to further classify them and help the specialists. Accurate feature extraction, which in turn depends on precise lesion segmentation, is key to the performance of these systems. In this paper, we present a skin lesion segmentation algorithm based on a novel adaptation of superpixel techniques and achieve the best reported results for the ISIC 2017 challenge dataset. Additionally, CAD systems have paid little attention to a critical criterion in skin lesion diagnosis: the lesion's evolution. This requires operating on two or more images of the same lesion, captured at different times but with a comparable scale, orientation, and point of view; in other words, an image registration process must first be performed. In this work, we also propose an image registration approach that outperforms top image registration techniques. Combined with the proposed lesion segmentation algorithm, this allows for the accurate extraction of features to assess the evolution of the lesion. We present a case study with the lesion-size feature, paving the way for the development of automatic systems to easily evaluate skin lesion evolution.
    This work was supported in part by the Spanish Government (HAVideo, TEC2014-53176-R) and in part by the TEC department (Universidad Autonoma de Madrid).

    Breast ultrasound lesions recognition: end-to-end deep learning approaches

    Multistage processing of automated breast ultrasound lesion recognition is dependent on the performance of prior stages. To improve the current state of the art, we propose the use of end-to-end deep learning approaches using fully convolutional networks (FCNs), namely FCN-AlexNet, FCN-32s, FCN-16s, and FCN-8s, for semantic segmentation of breast lesions. We use models pretrained on ImageNet and transfer learning to overcome the issue of data deficiency. We evaluate our results on two datasets, which consist of a total of 113 malignant and 356 benign lesions. To assess performance, we conduct fivefold cross validation using the following split: 70% for training, 10% for validation, and 20% for testing. The results showed that our proposed method performed better on benign lesions, with a top mean Dice score of 0.7626 with FCN-16s, than on malignant lesions, with a top mean Dice score of 0.5484 with FCN-8s. When considering the number of images with a Dice score >0.5, 89.6% of the benign lesions were successfully segmented and correctly recognized, whereas 60.6% of the malignant lesions were successfully segmented and correctly recognized. We conclude the paper by addressing the future challenges of the work.
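The mean Dice score used in this abstract measures the overlap between a predicted segmentation mask and the ground-truth mask. A minimal sketch of the per-image computation (not from the paper; the `dice_score` helper and the toy masks are illustrative assumptions) could look like this:

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: prediction and ground truth each cover 8 pixels,
# overlapping on 4, so Dice = 2*4 / (8+8) = 0.5
pred = np.zeros((4, 4), dtype=np.uint8); pred[:2, :] = 1
target = np.zeros((4, 4), dtype=np.uint8); target[1:3, :] = 1
print(dice_score(pred, target))  # 0.5
```

The "mean Dice" reported above would then be this score averaged over all test images in a fold, and the >0.5 criterion counts images whose per-image score exceeds that threshold.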

    Cancer diagnosis using deep learning: A bibliographic review

    In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis followed by the typical classification methods used by doctors, and giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, considering all types of audience, the basic evaluation criteria are also discussed. The criteria include the receiver operating characteristic curve (ROC curve), area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. Since the previously used methods are considered inefficient, better and smarter methods for cancer diagnosis are needed. Artificial intelligence applied to cancer diagnosis is gaining attention as a way to build better diagnostic tools; in particular, deep neural networks can be successfully used for intelligent image analysis. The basic framework of how this machine learning works on medical imaging is provided in this study, i.e., pre-processing, image segmentation, and post-processing. The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders, restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), multi-scale convolutional neural networks (M-CNNs), and multi-instance learning convolutional neural networks (MIL-CNNs). For each technique, we provide Python code, to allow interested readers to experiment with the cited algorithms on their own diagnostic problems.
    The third part of this manuscript compiles the successfully applied deep learning models for different types of cancer. Considering the length of the manuscript, we restrict ourselves to the discussion of breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to apply deep learning and artificial neural networks to cancer diagnosis with a from-scratch overview of state-of-the-art achievements.
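Several of the evaluation criteria this review lists (sensitivity, specificity, precision, F1 score, accuracy) derive directly from a binary confusion matrix. A minimal sketch (not from the review; the `binary_metrics` helper and the example counts are illustrative assumptions) of those definitions:

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute standard classification metrics from confusion-matrix counts:
    true positives, false positives, false negatives, true negatives."""
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "accuracy": accuracy}

# Hypothetical classifier: 40 TP, 10 FP, 10 FN, 40 TN
m = binary_metrics(40, 10, 10, 40)
print(m)  # every metric equals 0.8 for these symmetric counts
```

The Dice coefficient and Jaccard index mentioned alongside these are overlap measures between segmentation masks rather than confusion-matrix scores, and AUC integrates the ROC curve over all decision thresholds.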

    Domain Adaptation for Novel Imaging Modalities with Application to Prostate MRI

    The need for training data can impede the adoption of novel imaging modalities for deep learning-based medical image analysis. Domain adaptation can mitigate this problem by exploiting training samples from an existing, densely-annotated source domain within a novel, sparsely-annotated target domain, bridging the differences between the two domains. In this thesis we present methods for adapting between diffusion-weighted (DW)-MRI data from multiparametric (mp)-MRI acquisitions and VERDICT (Vascular, Extracellular and Restricted Diffusion for Cytometry in Tumors) MRI, a richer DW-MRI technique involving an optimized acquisition protocol for cancer characterization. We also show that the proposed methods are general and their applicability extends beyond medical imaging. First, we propose a semi-supervised domain adaptation method for prostate lesion segmentation on VERDICT MRI. Our approach relies on stochastic generative modelling to translate across the two heterogeneous domains at pixel space and exploits the inherent uncertainty in the cross-domain mapping to generate multiple outputs conditioned on a single input. We then extend this approach to the unsupervised scenario, where there is no labeled data for the target domain, again translating across the two domains at pixel space and introducing two loss functions that promote semantic consistency. Finally, we demonstrate that the proposed approaches extend beyond medical image analysis by focusing on unsupervised domain adaptation for semantic segmentation of urban scenes. We show that relying on stochastic generative modelling allows us to train more accurate target networks and achieve state-of-the-art performance on two challenging semantic segmentation benchmarks.