Data Augmentation for Skin Lesion Analysis
Deep learning models show remarkable results in automated skin lesion
analysis. However, these models demand considerable amounts of data, while the
availability of annotated skin lesion images is often limited. Data
augmentation can expand the training dataset by transforming input images. In
this work, we investigate the impact of 13 data augmentation scenarios for
melanoma classification, training three CNN architectures (Inception-v4,
ResNet, and DenseNet). Scenarios include traditional color and geometric transforms, and
more unusual augmentations such as elastic transforms, random erasing and a
novel augmentation that mixes different lesions. We also explore the use of
data augmentation at test-time and the impact of data augmentation on various
dataset sizes. Our results confirm the importance of data augmentation in both
training and testing and show that it can lead to more performance gains than
obtaining new images. The best scenario results in an AUC of 0.882 for melanoma
classification without using external data, outperforming the top-ranked
submission (0.874) for the ISIC Challenge 2017, which was trained with
additional data.
Comment: 8 pages, 3 figures, to be presented at the ISIC Skin Image Analysis
Workshop
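The scenario families the abstract names (geometric and color transforms, random erasing) and test-time augmentation can be illustrated in a deliberately tiny form. This is a minimal NumPy sketch on grayscale arrays, not the paper's pipeline: `horizontal_flip`, `adjust_brightness`, `random_erasing`, and `tta_predict` are hypothetical names, and `model` stands for any callable that scores an image.

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_flip(img):
    # Geometric transform: mirror the image left-right.
    return img[:, ::-1].copy()

def adjust_brightness(img, factor):
    # Color transform: scale intensities, clipped back to [0, 1].
    return np.clip(img * factor, 0.0, 1.0)

def random_erasing(img, max_frac=0.3):
    # Occlude a random rectangle with noise so the model cannot
    # rely on a single region of the lesion.
    out = img.copy()
    h, w = img.shape[:2]
    eh = int(rng.integers(1, max(2, int(h * max_frac) + 1)))
    ew = int(rng.integers(1, max(2, int(w * max_frac) + 1)))
    y = int(rng.integers(0, h - eh + 1))
    x = int(rng.integers(0, w - ew + 1))
    out[y:y + eh, x:x + ew] = rng.random((eh, ew))
    return out

def tta_predict(model, img, n_views=8):
    # Test-time augmentation: average the model's score over
    # several augmented views of the same image.
    views = [img, horizontal_flip(img)]
    for _ in range(n_views - len(views)):
        factor = float(rng.uniform(0.8, 1.2))
        views.append(random_erasing(adjust_brightness(img, factor)))
    return float(np.mean([model(v) for v in views]))
```

At training time the same transforms would be applied on the fly to each batch; at test time `tta_predict` trades extra forward passes for a more stable prediction.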
Knowledge Transfer for Melanoma Screening with Deep Learning
Knowledge transfer impacts the performance of deep learning -- the state of
the art for image classification tasks, including automated melanoma screening.
Deep learning's greed for large amounts of training data poses a challenge for
medical tasks, which we can alleviate by recycling knowledge from models
trained on different tasks, in a scheme called transfer learning. Although much
of the best art on automated melanoma screening employs some form of transfer
learning, a systematic evaluation was missing. Here we investigate the presence
of transfer, from which task the transfer is sourced, and the application of
fine tuning (i.e., retraining of the deep learning model after transfer). We
also test the impact of picking deeper (and more expensive) models. Our results
favor deeper models, pre-trained over ImageNet, with fine-tuning, reaching an
AUC of 80.7% and 84.5% for the two skin-lesion datasets evaluated.
Comment: 4 pages
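The recipe the abstract favors (a model pre-trained on ImageNet, then fine-tuned) can be sketched with a tiny NumPy stand-in. The assumptions are loud: `pretrained_features` is a fixed random projection playing the role of a frozen backbone, not a real CNN, and only a logistic-regression head is retrained, i.e. the simplest "transfer + retrain on the target task" variant of the schemes the paper compares; the blob data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(x, W_pre):
    # Stand-in for a pre-trained backbone: a fixed, non-linear
    # feature map that is NOT updated (the transferred knowledge).
    return np.tanh(x @ W_pre)

def fine_tune_head(X, y, W_pre, lr=0.5, steps=200):
    # Retrain only a logistic-regression head on top of the
    # transferred features, via plain gradient descent.
    F = pretrained_features(X, W_pre)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid scores
        grad_w = F.T @ (p - y) / len(y)         # cross-entropy gradient
        grad_b = float(np.mean(p - y))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: two Gaussian blobs standing in for benign vs. melanoma.
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
W_pre = rng.normal(size=(4, 8))  # the "pre-trained" weights, frozen
w, b = fine_tune_head(X, y, W_pre)
scores = 1.0 / (1.0 + np.exp(-(pretrained_features(X, W_pre) @ w + b)))
acc = float(np.mean((scores > 0.5) == y))
```

Full fine-tuning, as evaluated in the paper, would also update the backbone weights at a small learning rate rather than keeping `W_pre` frozen.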
Assessing the Generalizability of Deep Neural Networks-Based Models for Black Skin Lesions
Melanoma is the most severe type of skin cancer due to its ability to cause
metastasis. In black people it frequently arises in acral regions:
palms, soles, and nails. Deep neural networks have shown tremendous potential
for improving clinical care and skin cancer diagnosis. Nevertheless, prevailing
studies predominantly rely on datasets of white skin tones, neglecting to
report diagnostic outcomes for diverse patient skin tones. In this work, we
evaluate supervised and self-supervised models in skin lesion images extracted
from acral regions commonly observed in black individuals. Also, we carefully
curate a dataset of skin lesions in acral regions and annotate it
according to the Fitzpatrick scale to verify performance on black skin.
Our results expose the poor generalizability of these models, revealing their
favorable performance for lesions on white skin. Neglecting to build diverse
datasets, which in turn forces the development of specialized models, is
unacceptable. Deep neural networks have great potential to improve diagnosis,
particularly for populations with limited access to dermatology. However,
including black skin lesions is necessary to ensure these populations can
access the benefits of inclusive technology.
Comment: 18 pages, 3 figures, 7 tables. Accepted at CIARP 202
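The evaluation the abstract describes, stratifying a metric by skin tone, can be sketched in plain Python. `auc` is the standard Mann-Whitney formulation of ROC AUC; `auc_by_fitzpatrick`, the group labels, and the example records are all hypothetical, not data or results from the paper.

```python
from collections import defaultdict

def auc(scores_pos, scores_neg):
    # Mann-Whitney AUC: probability that a random positive (melanoma)
    # outscores a random negative (benign); ties count as 0.5.
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def auc_by_fitzpatrick(records):
    # records: (fitzpatrick_group, true_label, model_score) triples.
    # Grouping before scoring exposes per-skin-tone performance gaps
    # that a single pooled AUC would hide.
    groups = defaultdict(lambda: ([], []))  # (negatives, positives)
    for fitz, label, score in records:
        groups[fitz][label].append(score)
    return {fitz: auc(pos, neg)
            for fitz, (neg, pos) in groups.items() if neg and pos}

# Made-up scores: perfect ranking on light skin, chance on dark skin.
records = [
    ("I-IV", 1, 0.9), ("I-IV", 1, 0.8), ("I-IV", 0, 0.2), ("I-IV", 0, 0.4),
    ("V-VI", 1, 0.6), ("V-VI", 0, 0.7), ("V-VI", 1, 0.5), ("V-VI", 0, 0.3),
]
per_group = auc_by_fitzpatrick(records)  # {"I-IV": 1.0, "V-VI": 0.5}
```

Reporting the dictionary of per-group AUCs, rather than one pooled number, is the kind of disaggregated result the paper argues should be standard.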