
    A Multi-task Framework for Skin Lesion Detection and Segmentation

    Early detection and segmentation of skin lesions is crucial for timely diagnosis and treatment, which is necessary to improve the survival rate of patients. However, manual delineation is time-consuming and subject to intra- and inter-observer variations among dermatologists. This underlines the need for an accurate and automatic approach to skin lesion segmentation. To tackle this issue, we propose a multi-task convolutional neural network (CNN) based joint detection and segmentation framework, designed to first localize the lesion and subsequently segment it. A `Faster region-based convolutional neural network' (Faster-RCNN), which comprises a region proposal network (RPN), is used to generate bounding boxes/region proposals for lesion localization in each image. The proposed regions are subsequently refined using a softmax classifier and a bounding-box regressor. The refined bounding boxes are finally cropped and segmented using `SkinNet', a modified version of U-Net. We trained and evaluated the performance of our network using the ISBI 2017 challenge and PH2 datasets, and compared it with the state of the art using the official test data released as part of the former challenge. Our approach outperformed others in terms of Dice coefficient (>0.93), Jaccard index (>0.88), accuracy (>0.96) and sensitivity (>0.95) across five-fold cross-validation experiments. Comment: Accepted in ISIC-MICCAI 2018 Workshop
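
    A minimal sketch of the detect-then-segment wiring described above, in PyTorch. The `detector` and `segmenter` callables here are hypothetical stand-ins for Faster-RCNN and SkinNet, not the authors' implementations.

    import torch
    import torch.nn.functional as F

    def detect_then_segment(image, detector, segmenter, crop_size=256):
        """image: (3, H, W) tensor; returns a full-resolution binary lesion mask.
        `detector` and `segmenter` are hypothetical stand-ins (see note above)."""
        _, H, W = image.shape
        boxes, scores = detector(image.unsqueeze(0))             # boxes: (N, 4) as x1, y1, x2, y2
        x1, y1, x2, y2 = boxes[scores.argmax()].int().tolist()   # keep the most confident proposal
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = F.interpolate(crop, size=(crop_size, crop_size), mode="bilinear", align_corners=False)
        mask = torch.sigmoid(segmenter(crop))                    # (1, 1, crop_size, crop_size)
        mask = F.interpolate(mask, size=(y2 - y1, x2 - x1), mode="bilinear", align_corners=False)
        full = torch.zeros(1, 1, H, W)
        full[:, :, y1:y2, x1:x2] = mask                          # paste the crop mask back in place
        return (full > 0.5).float()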

    Segmentation of Skin Lesions and their Attributes Using Multi-Scale Convolutional Neural Networks and Domain Specific Augmentations

    Computer-aided diagnosis systems for classification of different types of skin lesions have been an active field of research in recent decades. It has been shown that introducing lesion and attribute masks into the lesion classification pipeline can greatly improve performance. In this paper, we propose a framework that incorporates transfer learning for segmenting lesions and their attributes using convolutional neural networks. The proposed framework is based on an encoder-decoder architecture that utilizes a variety of pre-trained networks in the encoding path and generates the prediction map by combining multi-scale information in the decoding path in a pyramid-pooling manner. To address the lack of training data and increase the proposed model's generalization, an extensive set of novel domain-specific augmentation routines is applied to simulate the real variations in dermoscopy images. Finally, through broad experiments on three different data sets obtained from the International Skin Imaging Collaboration archive (the ISIC2016, ISIC2017, and ISIC2018 challenge data sets), we show that the proposed method outperforms other state-of-the-art approaches on the ISIC2016 and ISIC2017 segmentation tasks and achieved first rank on the leaderboard of the ISIC2018 attribute detection task. Comment: 18 pages
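
    The decoder above combines multi-scale information via pyramid pooling. A rough PSPNet-style sketch of such a module, assuming a standard pool-project-upsample-concatenate design rather than the paper's exact block:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidPooling(nn.Module):
        """Pool encoder features at several grid sizes, project each with a 1x1
        convolution, upsample back and concatenate with the input (PSP-style)."""
        def __init__(self, in_ch, bins=(1, 2, 3, 6)):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Sequential(nn.AdaptiveAvgPool2d(b),
                              nn.Conv2d(in_ch, in_ch // len(bins), 1, bias=False),
                              nn.ReLU(inplace=True))
                for b in bins])

        def forward(self, x):
            h, w = x.shape[2:]
            pooled = [F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
                      for branch in self.branches]
            return torch.cat([x] + pooled, dim=1)   # multi-scale context for the decoder

    # e.g. PyramidPooling(512)(torch.randn(1, 512, 32, 32)).shape -> (1, 1024, 32, 32)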

    Detector-SegMentor Network for Skin Lesion Localization and Segmentation

    Melanoma is a life-threatening form of skin cancer if left undiagnosed at an early stage. Although non-melanoma skin cancers are more common, melanoma is more deadly. Early detection is therefore crucial for timely diagnosis and for preventing the spread of melanoma to distant body parts. Segmentation of the skin lesion is a crucial step in classifying melanoma in dermoscopic images. Manual segmentation of dermoscopic skin images is very time-consuming and error-prone, resulting in an urgent need for an intelligent and accurate algorithm. In this study, we propose a simple yet novel network-in-network convolutional neural network (CNN) based approach for skin lesion segmentation. A Faster Region-based CNN (Faster RCNN) is used for preprocessing to predict bounding boxes of the lesions in the whole image, which are subsequently cropped and fed into the segmentation network to obtain the lesion mask. The segmentation network is a combination of the UNet and Hourglass networks. We trained and evaluated our models on the ISIC 2018 dataset and also cross-validated on the PH2 and ISBI 2017 datasets. Our proposed method surpassed the state of the art with a Dice similarity coefficient of 0.915 and accuracy of 0.959 on the ISIC 2018 dataset, and a Dice similarity coefficient of 0.947 and accuracy of 0.971 on the ISBI 2017 dataset. Comment: 9 pages, 7 figures, accepted at NCVPRIPG 201
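
    For reference, the Dice similarity coefficient and accuracy reported above can be computed on binarized masks roughly as follows (a generic sketch, not the challenge's official evaluation code):

    import torch

    def dice_coefficient(pred, target, eps=1e-7):
        """Dice = 2*|A∩B| / (|A| + |B|) on binarized masks."""
        pred, target = pred.float().flatten(), target.float().flatten()
        intersection = (pred * target).sum()
        return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

    def pixel_accuracy(pred, target):
        """Fraction of pixels where the predicted mask matches the ground truth."""
        return (pred.bool() == target.bool()).float().mean()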

    Multi-class Semantic Segmentation of Skin Lesions via Fully Convolutional Networks

    Melanoma is clinically difficult to distinguish from common benign skin lesions, particularly melanocytic naevus and seborrhoeic keratosis. The dermoscopic appearance of these lesions shows huge intra-class variation and high inter-class visual similarity. Most current research focuses on single-class segmentation, irrespective of the class of the skin lesion. In this work, we evaluate the performance of deep learning on multi-class segmentation of the ISIC-2017 challenge dataset, which consists of 2,750 dermoscopic images. We propose an end-to-end solution using fully convolutional networks (FCNs) for multi-class semantic segmentation, to automatically segment melanoma, seborrhoeic keratosis and naevus. To improve the performance of the FCNs, transfer learning and a hybrid loss function are used. We evaluate the performance of the deep learning segmentation methods for multi-class segmentation and lesion diagnosis (with a post-processing method) on the testing set of the ISIC-2017 challenge dataset. The results showed that the two-tier transfer-learning FCN-8s achieved the overall best result, with a Dice score of 78.5% for the naevus category, 65.3% for melanoma, and 55.7% for seborrhoeic keratosis in multi-class segmentation, and an accuracy of 84.62% for recognition of melanoma in lesion diagnosis. Comment: Comp2clinic workshop at Biostec 202
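
    The exact hybrid loss is not spelled out in the abstract; one common combination for multi-class lesion segmentation is cross-entropy plus a soft Dice term, sketched here under that assumption:

    import torch.nn.functional as F

    def hybrid_loss(logits, target, num_classes=4, alpha=0.5):
        """Cross-entropy plus (1 - soft Dice), averaged over classes.
        logits: (B, C, H, W); target: (B, H, W) integer class labels."""
        ce = F.cross_entropy(logits, target)
        probs = F.softmax(logits, dim=1)
        onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
        dims = (0, 2, 3)
        intersection = (probs * onehot).sum(dims)
        dice = (2 * intersection + 1e-7) / (probs.sum(dims) + onehot.sum(dims) + 1e-7)
        return alpha * ce + (1 - alpha) * (1 - dice.mean())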

    A Multi-stage Framework with Context Information Fusion Structure for Skin Lesion Segmentation

    Computer-aided diagnosis (CAD) systems can greatly improve the reliability and efficiency of melanoma recognition. As a crucial step in CAD, skin lesion segmentation suffers from unsatisfactory accuracy in existing methods due to the large variability in lesion appearance and artifacts. In this work, we propose a framework employing multi-stage UNets (MS-UNet) in an auto-context scheme to segment skin lesions accurately end-to-end. We apply two approaches to boost the performance of MS-UNet. First, the UNet is coupled with a context information fusion structure (CIFS) to integrate low-level and context information in the multi-scale feature space. Second, to alleviate the vanishing-gradient problem, we use a deep supervision mechanism, supervising MS-UNet by minimizing a weighted Jaccard distance loss function. Four out of five commonly used performance metrics, including the Jaccard index and Dice coefficient, show that our approach outperforms state-of-the-art deep learning based methods on the ISBI 2016 Skin Lesion Challenge dataset. Comment: 4 pages, 3 figures, 1 table
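
    A small sketch of the weighted Jaccard distance loss with deep supervision over per-stage side outputs, as described above; the stage weights are illustrative, not the paper's values:

    import torch

    def jaccard_distance(pred, target, eps=1e-7):
        """Soft Jaccard distance 1 - |A∩B| / |A∪B| on probability maps."""
        intersection = (pred * target).sum()
        union = pred.sum() + target.sum() - intersection
        return 1 - (intersection + eps) / (union + eps)

    def deeply_supervised_loss(stage_outputs, target, weights=(0.2, 0.3, 0.5)):
        """Weighted sum of Jaccard distances over the per-stage side outputs,
        so earlier stages receive a direct gradient signal."""
        return sum(w * jaccard_distance(torch.sigmoid(o), target)
                   for w, o in zip(weights, stage_outputs))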

    Global and Local Information Based Deep Network for Skin Lesion Segmentation

    With a large influx of dermoscopy images and a growing shortage of dermatologists, automatic dermoscopic image analysis plays an essential role in skin cancer diagnosis. In this paper, a new deep fully convolutional neural network (FCNN) is proposed to automatically segment melanoma from skin images through end-to-end learning, with only pixels and labels as inputs. Our proposed FCNN is capable of using both local and global information to segment melanoma by adopting skip layers. The public benchmark database of the 2017 melanoma detection challenge at the International Symposium on Biomedical Imaging, consisting of 2,000 training images, 150 validation images and 600 test images, is used to test the performance of our algorithm. All large images (for example, 4000×6000 pixels) are reduced to much smaller images of 384×384 pixels (more than 10 times smaller). We obtained and submitted preliminary results to the challenge without any pre- or post-processing. The performance of our proposed method could be further improved by data augmentation and by avoiding image size reduction. Comment: 4 pages, 3 figures. ISIC201
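
    One plausible way to wire the skip layers that fuse local (fine) and global (coarse) information, sketched with generic projection convolutions; the actual FCNN layout is not specified in the abstract:

    import torch.nn as nn
    import torch.nn.functional as F

    class SkipFusion(nn.Module):
        """Project and upsample coarse (global) features, then add them to
        projected fine (local) features from an earlier layer, FCN-style."""
        def __init__(self, fine_ch, coarse_ch, out_ch):
            super().__init__()
            self.proj_fine = nn.Conv2d(fine_ch, out_ch, 1)
            self.proj_coarse = nn.Conv2d(coarse_ch, out_ch, 1)

        def forward(self, fine, coarse):
            coarse = F.interpolate(self.proj_coarse(coarse), size=fine.shape[2:],
                                   mode="bilinear", align_corners=False)
            return self.proj_fine(fine) + coarse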

    Generative Adversarial Network in Medical Imaging: A Review

    Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of generating data without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher-order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue, and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme, with the hope of benefiting researchers interested in this technique. Comment: 24 pages; v4; added missing references from before Jan 1st 2019; accepted to MedI
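
    For readers new to the adversarial training scheme discussed here, the standard (non-saturating) GAN objectives look roughly like this; this is textbook GAN training, not any specific method from the review:

    import torch
    import torch.nn.functional as F

    def discriminator_loss(d_real, d_fake):
        """Binary cross-entropy on discriminator logits: real -> 1, generated -> 0."""
        return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
                F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    def generator_loss(d_fake):
        """Non-saturating generator objective: push generated samples towards 'real'."""
        return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))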

    Region of Interest Detection in Dermoscopic Images for Natural Data-augmentation

    With the rapid growth of medical imaging research, there is great interest in the automated detection of skin lesions with computer algorithms. The state-of-the-art datasets for skin lesions are often accompanied by a very limited amount of ground-truth labeling, as labeling is laborious and expensive. Region of interest (ROI) detection is vital to locating the lesion accurately and must be robust to the subtle features of different skin lesion types. In this work, we propose the use of two object localization meta-architectures for end-to-end ROI skin lesion detection in dermoscopic images. We trained Faster-RCNN-InceptionV2 and SSD-InceptionV2 on the ISBI-2017 training dataset and evaluated their performance on the ISBI-2017 test set and the PH2 and HAM10000 datasets. Since there was no earlier work on ROI detection for skin lesions with CNNs, we compared the performance of the localization methods with a state-of-the-art segmentation method. The localization methods proved superior to the segmentation method in ROI detection on skin lesion datasets. In addition, based on the detected ROI, an automated natural data-augmentation method is proposed and used as pre-processing in the lesion diagnosis and segmentation tasks. To further demonstrate the potential of our work, we developed a real-time smartphone application for automated skin lesion detection. Comment: Natural Augmentation
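
    The exact natural data-augmentation procedure is not given in the abstract; a plausible sketch is to jitter and expand the detected ROI box and harvest crops around it, as below (the box format and jitter parameters are assumptions):

    import random

    def roi_crops(image, box, n_crops=4, jitter=0.15):
        """Generate training crops around a detected ROI box (x1, y1, x2, y2) by
        randomly shifting and expanding it; a rough stand-in for the ROI-based
        augmentation described above, not the authors' exact procedure."""
        _, H, W = image.shape                      # image: (C, H, W) array or tensor
        x1, y1, x2, y2 = box
        bw, bh = x2 - x1, y2 - y1
        crops = []
        for _ in range(n_crops):
            dx = random.uniform(-jitter, jitter) * bw
            dy = random.uniform(-jitter, jitter) * bh
            scale = random.uniform(1.0, 1.0 + 2 * jitter)
            cx, cy = (x1 + x2) / 2 + dx, (y1 + y2) / 2 + dy
            hw, hh = bw * scale / 2, bh * scale / 2
            nx1, ny1 = max(0, int(cx - hw)), max(0, int(cy - hh))
            nx2, ny2 = min(W, int(cx + hw)), min(H, int(cy + hh))
            crops.append(image[:, ny1:ny2, nx1:nx2])
        return crops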

    Multiple Abnormality Detection for Automatic Medical Image Diagnosis Using Bifurcated Convolutional Neural Network

    Automating the classification and segmentation of abnormal regions in different body organs plays a crucial role in most medical imaging applications, such as funduscopy, endoscopy, and dermoscopy. Detecting multiple abnormalities in each type of image is necessary for a better and more accurate diagnostic procedure and for medical decisions. In recent years, portable medical imaging devices such as the capsule endoscope and digital dermatoscope have been introduced, making the diagnostic procedure easier and more efficient. However, these portable devices have constrained power resources and limited computational capability. To address this problem, we propose a bifurcated convolutional neural network structure that performs classification and segmentation of multiple abnormalities simultaneously. The proposed network is first trained on each abnormality separately and then trained on all abnormalities. In order to reduce the computational complexity, the network is redesigned to share features that are common among all abnormalities. These shared features are then used in different settings (directions) to segment and classify the abnormal region of the image. Finally, the results of the classification and segmentation directions are fused to obtain the classified segmentation map. The proposed framework is evaluated on four frequent gastrointestinal abnormalities as well as three dermoscopic lesions, and the results are compared with the corresponding ground-truth maps. Properties of the bifurcated network, such as low complexity and resource sharing, make it suitable for implementation as part of portable medical imaging devices.
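
    A rough sketch of the bifurcated idea: a shared backbone feeding separate classification and segmentation branches whose outputs are fused. The layer sizes and fusion rule are illustrative, not the paper's architecture.

    import torch
    import torch.nn as nn

    class BifurcatedNet(nn.Module):
        """Shared backbone feeding a classification branch and a segmentation
        branch; the two outputs are fused into a classified segmentation map."""
        def __init__(self, n_classes=3, width=32):
            super().__init__()
            self.backbone = nn.Sequential(                        # shared features
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))
            self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                          nn.Linear(width, n_classes))
            self.seg_head = nn.Conv2d(width, n_classes, 1)        # per-class masks

        def forward(self, x):
            feats = self.backbone(x)
            cls_logits = self.cls_head(feats)                     # (B, C)
            seg_logits = self.seg_head(feats)                     # (B, C, H, W)
            # fuse: weight each class mask by the classification confidence
            fused = seg_logits * cls_logits.softmax(1)[:, :, None, None]
            return cls_logits, seg_logits, fused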

    Fully Convolutional Neural Networks to Detect Clinical Dermoscopic Features

    The presence of certain clinical dermoscopic features within a skin lesion may indicate melanoma, and automatically detecting these features may lead to more quantitative and reproducible diagnoses. We reformulate the task of classifying clinical dermoscopic features within superpixels as a segmentation problem, and propose a fully convolutional neural network to detect clinical dermoscopic features from dermoscopy skin lesion images. Our neural network architecture uses interpolated feature maps from several intermediate network layers, and addresses imbalanced labels by minimizing a negative multi-label Dice-F1 score, where the score is computed across the mini-batch for each label. Our approach ranked first place in the 2017 ISIC-ISBI Part 2: Dermoscopic Feature Classification Task challenge over both the provided validation and test datasets, achieving an area under the receiver operating characteristic curve of 0.895. We show how simple baseline models can outrank state-of-the-art approaches when using the official metrics of the challenge, and propose to use a fuzzy Jaccard index that ignores the empty set (i.e., masks devoid of positive pixels) when ranking models. Our results suggest that (i) the classification of clinical dermoscopic features can be effectively approached as a segmentation problem, and (ii) the current metrics used to rank models may not capture the efficacy of the models well. We plan to make our trained model and code publicly available. Comment: Accepted JBHI version
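
    A sketch of the proposed fuzzy Jaccard index that skips samples with empty ground-truth masks, using fuzzy intersection/union (element-wise min/max) over probability maps; details beyond the abstract are assumptions:

    import torch

    def fuzzy_jaccard(pred, target, eps=1e-7):
        """Soft Jaccard over probability maps using fuzzy intersection/union
        (element-wise min/max), skipping samples with empty ground-truth masks."""
        scores = []
        for p, t in zip(pred, target):             # iterate over the mini-batch
            if t.sum() == 0:                       # ignore the empty set
                continue
            intersection = torch.minimum(p, t).sum()
            union = torch.maximum(p, t).sum()
            scores.append((intersection + eps) / (union + eps))
        return torch.stack(scores).mean() if scores else torch.tensor(float("nan"))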