
    Data Augmentation for Skin Lesion Analysis

    Deep learning models show remarkable results in automated skin lesion analysis. However, these models demand considerable amounts of data, while the availability of annotated skin lesion images is often limited. Data augmentation can expand the training dataset by transforming input images. In this work, we investigate the impact of 13 data augmentation scenarios for melanoma classification, training three CNNs (Inception-v4, ResNet, and DenseNet). Scenarios include traditional color and geometric transforms, as well as more unusual augmentations such as elastic transforms, random erasing, and a novel augmentation that mixes different lesions. We also explore the use of data augmentation at test time and the impact of data augmentation on various dataset sizes. Our results confirm the importance of data augmentation in both training and testing and show that it can yield larger performance gains than acquiring new images. The best scenario results in an AUC of 0.882 for melanoma classification without using external data, outperforming the top-ranked submission (0.874) for the ISIC Challenge 2017, which was trained with additional data.
    Comment: 8 pages, 3 figures, to be presented at the ISIC Skin Image Analysis Workshop
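
    The combination of train-time and test-time augmentation described above can be sketched roughly as follows. This is a minimal illustration in PyTorch/torchvision under assumed names: the transform choices, `model`, and `image` are placeholders, not the paper's thirteen scenarios.

    import torch
    import torchvision.transforms as T

    # Illustrative train-time augmentation (placeholder color/geometric transforms,
    # not the paper's exact scenarios).
    train_transform = T.Compose([
        T.RandomResizedCrop(299, scale=(0.8, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomVerticalFlip(),
        T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.05),
        T.ToTensor(),
    ])

    def predict_with_tta(model, image, n_augmentations=32):
        # Test-time augmentation: average softmax outputs over augmented copies.
        model.eval()
        with torch.no_grad():
            batch = torch.stack([train_transform(image) for _ in range(n_augmentations)])
            probs = torch.softmax(model(batch), dim=1)
        return probs.mean(dim=0)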

    On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors

    Deep learning based medical image classifiers have shown remarkable prowess in application areas such as ophthalmology, dermatology, pathology, and radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD) systems in real clinical setups is severely limited, primarily because their decision-making process remains largely obscure. This work aims at elucidating a deep learning based medical image classifier by verifying that the model learns and utilizes similar disease-related concepts as described and employed by dermatologists. We used a well-trained, high-performing neural network developed by the REasoning for COmplex Data (RECOD) Lab for classification of three skin tumours (Melanocytic Naevi, Melanoma, and Seborrheic Keratosis) and performed a detailed analysis of its latent space. Two well-established and publicly available skin disease datasets, PH2 and derm7pt, are used for experimentation. Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs), introducing a novel training and significance-testing paradigm for CAVs. Our results on an independent evaluation set clearly show that the classifier learns and encodes human-understandable concepts in its latent representation. Additionally, TCAV scores (Testing with CAVs) suggest that the neural network indeed makes use of disease-related concepts in the correct way when making predictions. We anticipate that this work can not only increase the confidence of medical practitioners in CAD systems, but also serve as a stepping stone for further development of CAV-based neural-network interpretation methods.
    Comment: Accepted for the IEEE International Joint Conference on Neural Networks (IJCNN) 2020
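
    As a rough sketch of the CAV/TCAV mechanics referenced above (not the authors' novel training and significance-testing paradigm): a linear classifier is fit to separate layer activations of concept examples from random examples, the CAV is the unit normal to its decision boundary, and the TCAV score is the fraction of examples whose directional derivative along the CAV is positive. Here `concept_acts`, `random_acts`, and `grads` are assumed, precomputed activation and gradient arrays.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def compute_cav(concept_acts, random_acts):
        # Fit a linear classifier separating concept activations from random ones;
        # the CAV is the unit normal to its decision boundary (towards the concept).
        X = np.vstack([concept_acts, random_acts])
        y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        cav = clf.coef_.ravel()
        return cav / np.linalg.norm(cav)

    def tcav_score(grads, cav):
        # Fraction of examples whose class-logit gradient w.r.t. the layer
        # activations has a positive directional derivative along the CAV.
        return float(np.mean(grads @ cav > 0))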

    Aprendizado profundo em triagem de melanoma (Deep learning for melanoma screening)

    Advisors: Eduardo Alves do Valle Junior, Lin Tzy Li. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: Of all skin cancers, melanoma represents just 1% of cases but 75% of deaths. Melanoma's prognosis is good when detected early, but deteriorates fast as the disease progresses. Automated tools may play an essential role in providing timely screening, helping doctors focus on patients or lesions at risk. However, due to the disease's characteristics (rarity, lethality, fast progression, and diagnosis subtlety), automated screening for melanoma is particularly challenging. The objective of this work is to better understand how Deep Learning, more precisely Convolutional Neural Networks, can be used to correctly classify images of skin lesions. The work is divided into two lines of investigation. First, the study focuses on the transferability of features from pretrained CNNs, examining how transferred features behave under different schemas in order to generate better features for the decision layer. Second, the study aims at improving the classification metrics, which is the overall objective. On the transferability of features, we performed experiments analyzing how different transfer schemas affect the Area Under the ROC Curve (AUC): training a CNN from scratch; transferring from a CNN pretrained on general or specific image databases; and performing a double transfer, i.e. a training sequence going from general images, to specific images, and finally to the melanoma images. From those experiments we learned that transfer learning is a good practice, as is fine-tuning. The results also suggest that deeper models lead to better results. We hypothesized that transfer learning from a medically related task (in this case, a retinopathy image dataset) would lead to better outcomes, especially in the double-transfer schema, but the results showed the opposite, suggesting that adaptation from very specific tasks poses its own challenges. On the improvement of metrics, we discuss the winning pipeline of the International Skin Imaging Collaboration (ISIC) Challenge 2017, which reached state-of-the-art melanoma classification with 87.4% AUC. The solution is based on stacking/meta-learning of Inception-v4 and ResNet-101 models, fine-tuning them while performing data augmentation on both the train and test sets. We also compare different segmentation techniques (elementwise multiplication of the skin lesion image by its segmentation mask, and using the segmentation mask as a fourth input channel) with a network trained without segmentation. The network without segmentation performs best (96.0% AUC), against 94.5% AUC for the segmentation mask as a fourth channel. We make available a reproducible reference implementation with all source code developed for the contributions of this dissertation.
    Master's degree in Electrical Engineering (concentration: Engenharia de Computação), grant 133530/2016-7, CNPq
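
    The two segmentation variants compared in the abstract (elementwise multiplication and mask-as-fourth-channel) can be illustrated with a small NumPy sketch; the function name and array conventions below are assumptions for illustration, not the thesis code.

    import numpy as np

    def apply_segmentation(image, mask, mode="fourth_channel"):
        # image: (H, W, 3) float array in [0, 1]; mask: (H, W) float array in [0, 1].
        if mode == "multiply":
            # Elementwise multiplication: zero out pixels outside the lesion.
            return image * mask[..., None]
        if mode == "fourth_channel":
            # Stack the mask as an extra input channel -> (H, W, 4); the network's
            # first convolution must then accept four input channels.
            return np.concatenate([image, mask[..., None]], axis=-1)
        return image  # no segmentation (the best-performing variant reported)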

    Skin Lesion Analysis Towards Melanoma Detection Using Deep Learning Network

    Skin lesions are a severe disease of worldwide extent. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to, for example, the low contrast between lesions and skin and the visual similarity between melanoma and non-melanoma lesions. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. The International Skin Imaging Collaboration (ISIC) challenge focuses on the automatic analysis of skin lesions. In this paper, we propose two deep learning methods to address all three tasks announced in ISIC 2017: lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2), and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and a coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task; to the best of our knowledge, no previous work has been proposed for this task. The proposed deep learning frameworks were evaluated on the ISIC 2017 testing set. Experimental results show the promising performance of our frameworks: 0.718 for task 1, 0.833 for task 2, and 0.823 for task 3.
    Comment: ISIC 2017
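
    The abstract does not spell out the LICU formulation, but one plausible reading of "refining the coarse classification with a distance heat-map" can be sketched as follows. This is an assumption-laden illustration using SciPy's Euclidean distance transform, not the paper's exact method; `coarse_scores` and `lesion_mask` are hypothetical inputs.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def refine_with_distance_heatmap(coarse_scores, lesion_mask):
        # coarse_scores: (H, W, C) pixel-wise class probabilities (assumed input);
        # lesion_mask:   (H, W) binary segmentation mask.
        dist = distance_transform_edt(lesion_mask)       # distance to background
        if dist.max() > 0:
            dist = dist / dist.max()                     # normalize to [0, 1]
        weighted = coarse_scores * dist[..., None]       # emphasize inner-lesion pixels
        return weighted.sum(axis=(0, 1)) / (dist.sum() + 1e-8)  # refined (C,) scores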