Knowledge Transfer for Melanoma Screening with Deep Learning
Knowledge transfer impacts the performance of deep learning -- the state of
the art for image classification tasks, including automated melanoma screening.
Deep learning's greed for large amounts of training data poses a challenge for
medical tasks, which we can alleviate by recycling knowledge from models
trained on different tasks, in a scheme called transfer learning. Although much
of the best art on automated melanoma screening employs some form of transfer
learning, a systematic evaluation was missing. Here we investigate the presence
of transfer, from which task the transfer is sourced, and the application of
fine tuning (i.e., retraining of the deep learning model after transfer). We
also test the impact of picking deeper (and more expensive) models. Our results
favor deeper models, pre-trained over ImageNet, with fine-tuning, reaching an
AUC of 80.7% and 84.5% for the two skin-lesion datasets evaluated.

Comment: 4 pages
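The transfer-learning scheme the abstract evaluates (reuse a model pre-trained on a source task such as ImageNet, then train a new classifier on the medical target task) can be sketched in miniature. This is an illustrative sketch, not the paper's code: the frozen "backbone" here is a stand-in function for a pretrained feature extractor, and only a small logistic-regression head is trained on its outputs, which corresponds to transfer *without* fine-tuning the backbone.

```python
import math

def backbone(x):
    # Stand-in for a frozen, pretrained feature extractor (e.g. an
    # ImageNet CNN with its classifier removed). Its weights are NOT
    # updated during training -- that is the "transfer" part.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, lr=0.1, epochs=200):
    # Train only the new classification head (logistic regression)
    # on features produced by the frozen backbone.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = backbone(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                     # gradient of the log-loss
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b    -= lr * g
    return w, b

def predict(w, b, x):
    f = backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
```

Fine-tuning, as studied in the abstract, would additionally allow the backbone's own weights to receive gradient updates (usually at a smaller learning rate) instead of keeping them frozen.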
The impact of segmentation on the accuracy and sensitivity of a melanoma classifier based on skin lesion images
Visualizing convolutional neural networks to improve decision support for skin lesion classification
Because of their state-of-the-art performance in computer vision, CNNs are
becoming increasingly popular in a variety of fields, including medicine.
However, as neural networks are black-box function approximators, it is
difficult, if not impossible, for a medical expert to reason about their
output. This could result in the expert distrusting the network
when they disagree with its output. In such a case, explaining why
the CNN makes a certain decision becomes valuable information. In this paper,
we try to open the black box of the CNN by inspecting and visualizing the
learned feature maps, in the field of dermatology. We show that, to some
extent, CNNs focus on features similar to those used by dermatologists to make
a diagnosis. However, more research is required for fully explaining their
output.

Comment: 8 pages, 6 figures, Workshop on Interpretability of Machine Intelligence in Medical Image Computing at MICCAI 201
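The feature-map inspection the abstract describes can be illustrated in a few lines. This is a minimal sketch under assumed inputs, not the authors' pipeline: it slides a single 2x2 convolution filter over a grayscale image, applies a ReLU (as a CNN layer would), and min-max normalizes the resulting activation map so it could be rendered as a heatmap over the lesion image.

```python
def feature_map(image, kernel):
    # Valid (no-padding) 2-D convolution followed by ReLU:
    # one channel of one conv layer's activations.
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(max(acc, 0.0))    # ReLU
        out.append(row)
    return out

def normalize(fmap):
    # Min-max scale activations to [0, 1] so the feature map can be
    # displayed as a heatmap overlaid on the input image.
    flat = [v for row in fmap for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0] * len(row) for row in fmap]
    return [[(v - lo) / (hi - lo) for v in row] for row in fmap]
```

With a vertical-edge filter such as `[[-1, 1], [-1, 1]]`, the normalized map lights up at intensity boundaries, which is the kind of evidence the paper compares against the border-based criteria dermatologists use.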