Segmentation of Skin Lesions and their Attributes Using Multi-Scale Convolutional Neural Networks and Domain Specific Augmentations
Computer-aided diagnosis systems for the classification of different types of
skin lesions have been an active field of research in recent decades. It has
been shown that introducing lesion and attribute masks into the lesion
classification pipeline can greatly improve performance. In this paper, we
propose a framework incorporating transfer learning for segmenting lesions
and their attributes based on convolutional neural networks. The proposed
framework is based on the encoder-decoder architecture which utilizes a variety
of pre-trained networks in the encoding path and generates the prediction map
by combining multi-scale information in the decoding path in a pyramid pooling
manner. To address the lack of training data and improve the model's
generalization, an extensive set of novel domain-specific augmentation routines
is applied to simulate the real variations in dermoscopy images.
Finally, through broad experiments on three different data sets obtained
from the International Skin Imaging Collaboration archive (the ISIC2016,
ISIC2017, and ISIC2018 challenge data sets), we show that the proposed method
outperforms other state-of-the-art approaches on the ISIC2016 and ISIC2017
segmentation tasks and achieves first rank on the leaderboard of the ISIC2018
attribute detection task.
Comment: 18 page
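The pyramid-pooling decoding step described above can be sketched in a few lines: pool the encoder's feature map over several grid sizes, upsample each pooled map back to full resolution, and concatenate everything along the channel axis. This is a minimal numpy sketch under assumed bin sizes (1, 2, 4) and average pooling, not the authors' implementation.

```python
import numpy as np

def pyramid_pool(features, bin_sizes=(1, 2, 4)):
    """Pool a C x H x W feature map over several grid sizes, upsample
    each pooled map back to H x W (nearest neighbour), and concatenate
    all maps along the channel axis. Assumes H and W are divisible by
    every bin size; bin sizes and average pooling are assumptions."""
    c, h, w = features.shape
    outputs = [features]
    for b in bin_sizes:
        pooled = np.zeros((c, b, b))
        for i in range(b):
            for j in range(b):
                hs, he = i * h // b, (i + 1) * h // b
                ws, we = j * w // b, (j + 1) * w // b
                pooled[:, i, j] = features[:, hs:he, ws:we].mean(axis=(1, 2))
        # nearest-neighbour upsample back to the input resolution
        up = pooled.repeat(h // b, axis=1).repeat(w // b, axis=2)
        outputs.append(up)
    return np.concatenate(outputs, axis=0)
```

A 1x1x1 prediction head on the concatenated map then combines local detail with global context, which is the usual motivation for pyramid pooling.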
Learning to Detect Blue-white Structures in Dermoscopy Images with Weak Supervision
We propose a novel approach to identify one of the most significant
dermoscopic criteria in the diagnosis of Cutaneous Melanoma: the Blue-whitish
structure. In this paper, we achieve this goal in a Multiple Instance Learning
framework using only image-level labels of whether the feature is present or
not. As output, we predict the image classification label as well as localize
the feature in the image. Experiments are conducted on a challenging dataset,
with results outperforming the state of the art. This study broadens the scope
of modelling for computerized image analysis of skin lesions, in particular by
putting forward a framework for identifying dermoscopic local features from
weakly-labelled data.
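The core of the multiple-instance setup above can be made concrete: each image is a bag of patch-level instances, and only the bag label is known. A common MIL aggregation (max pooling over instance scores, which this sketch assumes; the paper's pooling may differ) yields both the image-level prediction and the localization for free.

```python
import numpy as np

def image_score(instance_scores):
    """Aggregate per-patch (instance) scores into one image-level
    probability via max pooling -- the standard MIL assumption that a
    bag is positive iff at least one instance is positive."""
    return float(np.max(instance_scores))

def localize(instance_scores, positions, threshold=0.5):
    """Return the patch positions scored above threshold, i.e. where
    the dermoscopic feature is predicted to appear."""
    scores = np.asarray(instance_scores)
    return [pos for pos, s in zip(positions, scores) if s > threshold]
```

Training only needs the image-level label against `image_score`, yet `localize` exposes which patches drove the decision.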
Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model
Automatic skin lesion segmentation on dermoscopic images is an essential
component in computer-aided diagnosis of melanoma. Recently, many fully
supervised deep learning based methods have been proposed for automatic skin
lesion segmentation. However, these approaches require massive pixel-wise
annotation from experienced dermatologists, which is very costly and
time-consuming. In this paper, we present a novel semi-supervised method for
skin lesion segmentation by leveraging both labeled and unlabeled data. The
network is optimized by the weighted combination of a common supervised loss
for labeled inputs only and a regularization loss for both labeled and
unlabeled data. To utilize the unlabeled data, our method encourages consistent
predictions of the network-in-training for the same input under different
regularizations. Aiming at the semi-supervised segmentation problem,
we enhance the effect of regularization for pixel-level predictions by
introducing a transformation-consistent scheme (including rotation and
flipping) in our self-ensembling model. With only 300 labeled training samples,
our method sets a new record on the benchmark of the International Skin Imaging
Collaboration (ISIC) 2017 skin lesion segmentation challenge. This result
clearly surpasses fully supervised state-of-the-art methods trained with 2000
labeled samples.
Comment: BMVC 201
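The transformation-consistent regularization above can be sketched directly: rotate the input, run the (perturbed) network, rotate the prediction back, and penalize disagreement with the prediction on the original input. The stand-in model below is a toy elementwise sigmoid with Gaussian perturbation, an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, noise=0.0):
    # Stand-in for the segmentation network: an elementwise sigmoid
    # plus an optional perturbation (dropout/noise in a real model).
    return 1.0 / (1.0 + np.exp(-(x + noise)))

def consistency_loss(x, k=1):
    """Transformation-consistent regularization: rotating the input by
    90*k degrees, predicting, and rotating the prediction back should
    match predicting on the original input. The mean squared error
    between the two serves as the unsupervised loss term."""
    pred = model(np.rot90(x, k), noise=rng.normal(0, 0.01))
    pred_back = np.rot90(pred, -k)
    ref = model(x)
    return float(np.mean((pred_back - ref) ** 2))
```

Because this loss needs no ground-truth mask, it can be applied to every unlabeled image and added, with a weight, to the supervised loss on labeled images.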
Improving Automatic Skin Lesion Segmentation using Adversarial Learning based Data Augmentation
Segmentation of skin lesions is considered an important step in computer-aided
diagnosis (CAD) for automated melanoma detection. In recent years,
segmentation methods based on fully convolutional networks (FCN) have achieved
great success in general images. This success is primarily due to the
leveraging of large labelled datasets to learn features that correspond to the
shallow appearance as well as the deep semantics of the images. However, this
dependence on large datasets does not translate well to medical images. To
improve FCN performance for skin lesion segmentation, researchers
attempted to use specific cost functions or add post-processing algorithms to
refine the coarse boundaries of the FCN results. However, the performance of
these methods is heavily reliant on the tuning of many parameters and
post-processing techniques. In this paper, we leverage the state-of-the-art
image feature learning method of generative adversarial network (GAN) for its
inherent ability to produce consistent and realistic image features by using
deep neural networks and the adversarial learning concept. We improve upon the
GAN such that skin lesion features can be learned at different levels of
complexity, in a controlled manner. The outputs from our method are then added
to the existing FCN training data, thus increasing the overall feature diversity. We
evaluated our method on the ISIC 2018 skin lesion segmentation challenge
dataset and showed that it was more accurate and robust when compared to the
existing skin lesion segmentation methods.
Comment: 6 page
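The two moving parts of this pipeline are the adversarial objective that makes the generated lesions realistic and the augmentation step that folds them into the FCN training set. This sketch uses the standard non-saturating GAN losses on discriminator outputs, an assumption since the abstract does not give the exact objective.

```python
import numpy as np

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses on discriminator outputs in (0, 1):
    the discriminator pushes real scores up and fake scores down,
    while the generator pushes fake scores up."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    g_loss = -np.mean(np.log(d_fake))
    return float(d_loss), float(g_loss)

def augment_training_set(real_images, synthetic_images):
    """Append GAN-generated lesion images to the real FCN training
    set, increasing overall feature diversity as the paper describes."""
    return np.concatenate([real_images, synthetic_images], axis=0)
```

In practice the generator and discriminator are deep networks trained alternately on these losses; only converged, visually plausible samples would be passed to `augment_training_set`.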
Towards Automated Melanoma Screening: Proper Computer Vision & Reliable Results
In this paper we survey, analyze, and criticize the current art on automated
melanoma screening, reimplementing a baseline technique and proposing two
novel ones. Melanoma, although highly curable when detected early, remains one
of the most dangerous types of cancer, due to delayed diagnosis and treatment.
Its incidence is soaring, much faster than the number of trained professionals
able to diagnose it. Automated screening appears as an alternative to make the
most of those professionals, focusing their time on the patients at risk while
safely discharging the other patients. However, the potential of automated
melanoma diagnosis is currently unfulfilled, due to the emphasis of current
literature on outdated computer vision models. Even more problematic is the
irreproducibility of current art. We show how streamlined pipelines based upon
current Computer Vision outperform conventional models: a model based on an
advanced bag of words reaches an AUC of 84.6%, and a model based on deep
neural networks reaches 89.3%, while the baseline (a classical bag of words)
stays at 81.2%. We also initiate a dialog to improve reproducibility in our
community.
Comment: Minor corrections on State of the Art and Conclusio
Bi-directional Dermoscopic Feature Learning and Multi-scale Consistent Decision Fusion for Skin Lesion Segmentation
Accurate segmentation of skin lesion from dermoscopic images is a crucial
part of computer-aided diagnosis of melanoma. It is challenging due to the fact
that dermoscopic images from different patients have non-negligible lesion
variation, which causes difficulties in anatomical structure learning and
consistent skin lesion delineation. In this paper, we propose a novel
bi-directional dermoscopic feature learning (biDFL) framework to model the
complex correlation between skin lesions and their informative context. By
controlling feature information passing through two complementary directions, a
substantially rich and discriminative feature representation is achieved.
Specifically, we place biDFL module on the top of a CNN network to enhance
high-level parsing performance. Furthermore, we propose a multi-scale
consistent decision fusion (mCDF) that is capable of selectively focusing on
the informative decisions generated from multiple classification layers. By
analysis of the consistency of the decision at each position, mCDF
automatically adjusts the reliability of decisions and thus allows a more
insightful skin lesion delineation. The comprehensive experimental results show
the effectiveness of the proposed method on skin lesion segmentation, achieving
state-of-the-art performance consistently on two publicly available dermoscopic
image databases.
Comment: Accepted to TI
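The mCDF idea above, fusing per-pixel decisions from several classification layers while down-weighting positions where the layers disagree, can be sketched minimally. Measuring consistency as one minus the across-layer standard deviation is an assumption for illustration; the paper's exact weighting may differ.

```python
import numpy as np

def consistent_decision_fusion(decision_maps):
    """Fuse per-pixel probability maps from multiple classification
    layers. Positions where the layers agree (low standard deviation
    across layers) keep their averaged score; positions where they
    disagree are suppressed."""
    maps = np.stack(decision_maps)        # L x H x W, values in [0, 1]
    consistency = 1.0 - maps.std(axis=0)  # high where layers agree
    return maps.mean(axis=0) * consistency
```

Thresholding the fused map then yields a delineation that trusts only positions where the multi-scale decisions are mutually consistent.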
Less is More: Sample Selection and Label Conditioning Improve Skin Lesion Segmentation
Segmenting skin lesion images is relevant both in itself and for assisting
in lesion classification, but suffers from the challenge of obtaining annotated
data. In this work, we show that segmentation may improve with less data, by
selecting the training samples with best inter-annotator agreement, and
conditioning the ground-truth masks to remove excessive detail. We perform an
exhaustive experimental design considering several sources of variation,
including three different test sets, two different deep-learning architectures,
and several replications, for a total of 540 experimental runs. We found that
sample selection and detail removal may have impacts corresponding,
respectively, to 12% and 16% of that obtained by picking a better
deep-learning model.
Comment: Accepted to the ISIC Skin Image Analysis Workshop @ CVPR 202
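The sample-selection step above can be made concrete with a hypothetical agreement criterion: keep a training sample only if its annotator masks agree above an IoU threshold. Both the per-pair IoU measure and the 0.8 cutoff are assumptions for illustration; the paper's exact selection rule may differ.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def select_by_agreement(samples, threshold=0.8):
    """Keep only samples whose two annotator masks (keys 'mask_a' and
    'mask_b', hypothetical names) agree above the IoU threshold."""
    return [s for s in samples
            if mask_iou(s["mask_a"], s["mask_b"]) >= threshold]
```

The surviving subset is smaller but less noisy, which is how training on fewer samples can improve segmentation.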
Deep Clustering via Center-Oriented Margin Free-Triplet Loss for Skin Lesion Detection in Highly Imbalanced Datasets
Melanoma is a fatal skin cancer, yet it is curable, with a dramatically
higher survival rate when diagnosed at early stages. Learning-based methods
hold significant promise for the detection of melanoma from dermoscopic images.
However, since melanoma is a rare disease, existing databases of skin lesions
predominantly contain highly imbalanced numbers of benign versus malignant
samples. In turn, this imbalance introduces substantial bias in classification
models due to the statistical dominance of the majority class. To address this
issue, we introduce a deep clustering approach based on the latent-space
embedding of dermoscopic images. Clustering is achieved using a novel
center-oriented margin-free triplet loss (COM-Triplet) enforced on image
embeddings from a convolutional neural network backbone. The proposed method
aims to form maximally-separated cluster centers as opposed to minimizing
classification error, so it is less sensitive to class imbalance. To avoid the
need for labeled data, we further propose to implement COM-Triplet based on
pseudo-labels generated by a Gaussian mixture model. Comprehensive experiments
show that deep clustering with COM-Triplet loss outperforms clustering with
triplet loss as well as competing classifiers in both supervised and
unsupervised settings.
Comment: 12 pages, 4 figure
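One plausible form of the COM-Triplet loss matching the abstract's description (pull an embedding toward its own cluster center, push it from the nearest other center, with no fixed margin) can be sketched as follows. This is an assumed formulation, not the paper's exact loss.

```python
import numpy as np

def com_triplet_loss(embedding, own_center, other_centers):
    """Center-oriented margin-free triplet loss (assumed form): the
    positive anchor is the embedding's own cluster center, the
    negative is the nearest other center, and no margin constant is
    added. Minimizing this drives centers apart rather than
    minimizing classification error, reducing class-imbalance bias."""
    d_pos = np.linalg.norm(embedding - own_center)
    d_neg = min(np.linalg.norm(embedding - c) for c in other_centers)
    return max(d_pos - d_neg, 0.0)
```

With pseudo-labels from a Gaussian mixture model supplying `own_center`, the same loss works without any ground-truth annotations, as the abstract notes.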
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 201
Segmentation of Lesions in Dermoscopy Images Using Saliency Map And Contour Propagation
Melanoma is one of the most dangerous types of skin cancer and causes thousands of deaths worldwide each year. Recently, dermoscopic imaging systems have been widely used as a diagnostic tool for melanoma detection. The first step in the automatic analysis of dermoscopy images is lesion segmentation. In this article, a novel method for skin lesion segmentation that can be applied to a variety of images with different properties and deficiencies is proposed. After a multi-step preprocessing phase (hair removal and illumination correction), a supervised saliency map construction method is used to obtain an initial guess of the lesion location. The construction of the saliency map is based on a random forest regressor that takes a vector of regional image features and returns a saliency score. This regressor is trained in a multi-level manner on the 2000 training images provided in the ISIC2017 melanoma recognition challenge. In addition to providing an initial contour of the lesion, the output saliency map can be used as a speed function alongside the image gradient to drive the initial contour toward the lesion boundary using a propagation model. The proposed algorithm has been tested on the ISIC2017 training, validation, and test datasets, achieving high values on the evaluation metrics.
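The propagation step described above needs a speed map that is high inside the lesion and low at its boundary. A minimal sketch, assuming a simple convex combination of the saliency map and the inverted normalized image gradient (the alpha weighting is an assumption, not the paper's formula):

```python
import numpy as np

def speed_function(saliency, image, alpha=0.5):
    """Speed map for contour propagation: fast where saliency is high
    and the image gradient is low, so the front advances inside the
    lesion and stalls at its boundary. `alpha` balances the two terms
    and is an assumed parameter."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    grad = grad / grad.max() if grad.max() > 0 else grad
    return alpha * saliency + (1.0 - alpha) * (1.0 - grad)
```

A level-set or fast-marching scheme would then evolve the initial contour under this speed map until it settles on the lesion boundary.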