
    Robust melanoma screening: in defense of enhanced mid-level descriptors

    Advisors: Eduardo Alves do Valle Junior, Sandra Eliza Fontes de Avila. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Abstract: Melanoma is the type of skin cancer that most often leads to death, even though it is the most curable when detected early. Since the full-time presence of a dermatologist is not economically feasible for many small cities, especially in underserved communities, computer-aided diagnosis for melanoma screening has been an active research topic. Much of the existing art is based on the Bag-of-Visual-Words (BoVW) model, combining color and texture descriptors. However, the BoVW model has kept improving, and several extensions now achieve better classification rates in general image classification tasks. These enhanced models had not yet been explored for melanoma screening, which motivates this work. We present a new approach to melanoma screening based on the state-of-the-art BossaNova descriptors, showing very promising results and reaching an AUC of up to 93.7%. This work also proposes a new spatial pooling strategy designed specifically for melanoma screening. Another contribution of this research is the unprecedented use of BossaNova in melanoma classification, which opens the opportunity to explore these enhanced mid-level descriptors in other medical contexts.
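    As a rough illustration of the BoVW family of mid-level descriptors this abstract builds on, the sketch below constructs a classic BoVW histogram and a toy distance-histogram pooling loosely inspired by BossaNova. It is not the thesis implementation; the descriptor dimensions, codebook size, and pooling bins are illustrative assumptions.

```python
# Minimal Bag-of-Visual-Words baseline (NOT the BossaNova implementation from the
# thesis): learn a codebook over local descriptors and pool each image into a
# fixed-length vector. BossaNova enriches the pooling step with per-codeword
# histograms of descriptor-to-codeword distances; a toy variant is sketched below.
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(descriptors, k=64, seed=0):
    """descriptors: (n, d) array of local features pooled from training images."""
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(descriptors)

def bovw_histogram(image_descriptors, codebook):
    """Classic hard-assignment BoVW pooling: count how many local descriptors
    fall into each visual word, then L1-normalize."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def distance_histogram_pooling(image_descriptors, codebook, bins=4):
    """Toy BossaNova-like pooling (assumed simplification): for each codeword,
    build a small histogram of descriptor-to-codeword distances instead of a
    single count, yielding a k*bins mid-level vector."""
    dists = codebook.transform(image_descriptors)          # (n, k) distances
    edges = np.linspace(dists.min(), dists.max(), bins + 1)
    pooled = np.stack([np.histogram(dists[:, j], bins=edges)[0]
                       for j in range(codebook.n_clusters)]).astype(float)
    return (pooled / max(len(image_descriptors), 1)).ravel()

# Usage sketch with random stand-ins for real color/texture local descriptors.
rng = np.random.default_rng(0)
train_desc = rng.normal(size=(5000, 128))
codebook = learn_codebook(train_desc, k=32)
img_desc = rng.normal(size=(300, 128))
x_bovw = bovw_histogram(img_desc, codebook)            # 32-dim BoVW vector
x_bossa_like = distance_histogram_pooling(img_desc, codebook)  # 32*4-dim vector
```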

    melNET: A Deep Learning Based Model For Melanoma Detection

    Melanoma is the deadliest type of skin cancer, but early-stage detection can improve treatment outcomes. In this research, a deep learning based model named “melNET” has been developed to detect melanoma in both dermoscopic and digital images. melNET uses the Inception-v3 architecture for the deep learning part. The architectural choices of Inception-v3 are guided by the Hebbian principle and by the intuition of multi-scale processing, and the architecture exploits parallel computing across multiple GPUs while employing RMSprop as the optimizer. During training, melNET retrains the Inception-v3 network through back-propagation of the errors from each iteration, fine-tuning the network weights. Once trained, melNET predicts the diagnosis of a mole from the lesion image given as input. On a dermoscopic dataset of 200 images provided by PH2, melNET outperforms a YOLO-v2-based approach, improving sensitivity from 86.35% to 97.50%; specificity and accuracy also improve, from 85.90% to 87.50% and from 86.00% to 89.50%, respectively. melNET has also been evaluated on a digital dataset of 170 images provided by UMCG, showing an accuracy of 84.71%, which outperforms the 81.00% accuracy of the MED-NODE model. In both cases, melNET was treated as a binary classifier and evaluated with five-fold cross-validation. In addition, melNET performs detections in real time by leveraging the end-to-end Inception-v3 architecture.
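    As a rough illustration of the training setup the abstract describes (an ImageNet-pretrained Inception-v3 fine-tuned as a binary classifier with RMSprop), here is a minimal Keras sketch. It is not the authors' melNET code; the classification head, learning rate, and data pipeline are assumptions.

```python
# Hedged sketch (not the authors' melNET code): fine-tuning an ImageNet-pretrained
# Inception-v3 as a binary melanoma classifier with RMSprop, as the abstract describes.
# Image size, head, and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception-v3's native input resolution

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = True   # retrain (fine-tune) the pretrained weights via back-propagation

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # melanoma vs. non-melanoma
])

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"],
)

# train_ds / val_ds would be tf.data pipelines of (image, label) pairs, e.g. built
# with tf.keras.utils.image_dataset_from_directory over the lesion image folders.
# model.fit(train_ds, validation_data=val_ds, epochs=30)
```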

    Fine-tuning pre-trained neural networks for medical image classification in small clinical datasets

    Funding: We acknowledge eurekaSD: Enhancing University Research and Education in Areas Useful for Sustainable Development (grants EK14AC0037 and EK15AC0264). We thank the Araucária Foundation for the Support of the Scientific and Technological Development of Paraná for a Research and Technological Productivity Scholarship for H. D. Lee (grant 028/2019). We also thank the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 142050/2019-9 for A. R. S. Parmezan. The Portuguese team was partially supported by Fundação para a Ciência e a Tecnologia (FCT): R. Fonseca-Pinto was financed by projects UIDB/50008/2020, UIDP/50008/2020, UIDB/05704/2020 and UIDP/05704/2020, and C. V. Nogueira was financed by projects UIDB/00013/2020 and UIDP/00013/2020. The funding agencies did not have any further involvement in this paper.

    Convolutional neural networks have been effective in several applications, emerging as a promising supporting tool in a relevant Dermatology problem: skin cancer diagnosis. However, generalizing well can be difficult when little training data is available. The fine-tuning transfer learning strategy has been employed to properly differentiate malignant from non-malignant lesions in dermoscopic images. Fine-tuning a pre-trained network allows one to classify data in the target domain, occasionally with few images, using knowledge acquired in another domain. This work proposes eight fine-tuning settings based on convolutional networks previously trained on ImageNet, aimed mainly at limited data samples to reduce overfitting risk. The settings differ in the architecture, the learning rate, and the number of unfrozen layer blocks. We evaluated them on two public datasets with 104 and 200 dermoscopic images. By finding competitive configurations in small datasets, this paper illustrates that deep learning can be effective even with only a few dozen malignant and non-malignant lesion images to study and differentiate in Dermatology. The proposal is also flexible and potentially useful for other domains; in fact, it performed satisfactorily in an assessment conducted on a larger dataset with 746 computerized tomography images associated with the coronavirus disease.
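    As an illustration of the kind of fine-tuning setting the paper varies (architecture, learning rate, number of unfrozen layer blocks), the sketch below freezes all but the last layers of an ImageNet-pretrained backbone. The backbone choice, layer boundaries, and learning rates are assumptions, not the paper's eight configurations.

```python
# Hedged sketch of one fine-tuning "setting" in the spirit of the paper: an
# ImageNet-pretrained backbone with only the last layers unfrozen and a small
# learning rate, to reduce overfitting risk on small dermoscopic datasets.
import tensorflow as tf

def build_finetune_model(unfrozen_layers=30, learning_rate=1e-5):
    """Freeze everything except the last `unfrozen_layers` layers of the backbone;
    fewer unfrozen layers means fewer trainable parameters and less overfitting
    risk on datasets the size of those in the abstract (104 and 200 images)."""
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
    base.trainable = True
    for layer in base.layers[:-unfrozen_layers]:
        layer.trainable = False          # keep early, generic filters frozen

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # malignant vs. non-malignant
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

# Two example settings differing in depth of fine-tuning and learning rate:
shallow_setting = build_finetune_model(unfrozen_layers=10, learning_rate=1e-5)
deep_setting = build_finetune_model(unfrozen_layers=50, learning_rate=1e-4)
```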

    Deep learning in melanoma screening

    Advisors: Eduardo Alves do Valle Junior, Lin Tzy Li. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Abstract: Of all skin cancers, melanoma represents just 1% of cases but 75% of deaths. Melanoma's prognosis is good when detected early, but deteriorates fast as the disease progresses. Automated tools may play an essential role in providing timely screening, helping doctors focus on patients or lesions at risk. However, the characteristics of the disease (rarity, lethality, fast progression, and subtle diagnosis) make automated melanoma screening particularly challenging. The objective of this work is to better understand how Deep Learning, more precisely Convolutional Neural Networks, can be used to correctly classify images of skin lesions. The work is divided into two lines of investigation. The first focuses on the transferability of features from pretrained CNNs, studying how transferred features behave under different schemas in order to generate better features for the classifier layer. The second focuses on improving the classification metrics, which is the overall objective. On the transferability of features, we performed experiments to analyze how different transfer schemas impact the Area Under the ROC Curve (AUC): training a CNN from scratch; transferring from a CNN pretrained on general or specific image databases; and performing a double transfer, a training sequence that goes from general images, to specific images, and finally to melanoma images. From those experiments we learned that transfer learning is a good practice, as is fine-tuning. The results also suggest that deeper models lead to better results. We expected that transfer learning from a medically related task (in this case, a retinopathy image dataset) would lead to better outcomes, especially in the double-transfer schema, but the results showed the opposite, suggesting that adapting from very specific tasks poses particular challenges. On the improvement of metrics, we discuss the winning pipeline of the International Skin Imaging Collaboration (ISIC) Challenge 2017, which reached state-of-the-art melanoma classification with 87.4% AUC. The solution is based on stacking/meta-learning over Inception-v4 and ResNet-101 models, fine-tuning them while performing data augmentation on both the train and test sets. We also compared different segmentation techniques (elementwise multiplication of the skin lesion image by its segmentation mask, and feeding the segmentation mask as a fourth input channel) against a network trained without segmentation. The network without segmentation performed best (96.0% AUC), against 94.5% AUC with the segmentation mask as a fourth channel. We also make available a reproducible reference implementation with all the source code developed for the contributions of this dissertation.
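    As a rough illustration of the stacking/meta-learning step mentioned above, the sketch below fits a simple meta-learner on the per-image scores of two base CNNs. It is not the authors' ISIC 2017 pipeline; the choice of logistic regression as meta-learner and the synthetic scores are assumptions.

```python
# Hedged sketch of stacking/meta-learning over two base CNNs (not the authors'
# ISIC 2017 pipeline): each CNN outputs a melanoma probability per image, and a
# meta-learner is fit on those scores. Real pipelines fit the meta-learner on
# out-of-fold predictions and evaluate on a held-out test set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)                          # ground-truth labels
# Stand-ins for out-of-fold melanoma probabilities from two base CNNs
# (e.g. an Inception-v4 model and a ResNet-101 model):
p_inception = np.clip(y * 0.60 + rng.normal(0.20, 0.25, size=500), 0, 1)
p_resnet = np.clip(y * 0.55 + rng.normal(0.25, 0.25, size=500), 0, 1)

X_meta = np.column_stack([p_inception, p_resnet])         # base scores as features
meta = LogisticRegression().fit(X_meta, y)                # stacking meta-learner
p_stacked = meta.predict_proba(X_meta)[:, 1]

print("AUC inception:", roc_auc_score(y, p_inception))
print("AUC resnet:   ", roc_auc_score(y, p_resnet))
print("AUC stacked:  ", roc_auc_score(y, p_stacked))
```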

    Skin Cancer Detection in Dermoscopy Images Using Sub-Region Features

    Abstract. In the medical field, the identification of skin cancer (Malignant Melanoma) in dermoscopy images remains a challenging task for radiologists and researchers. Due to its rapid increase, decision support systems that assist radiologists in detecting it at early stages have become essential. Computer Aided Diagnosis (CAD) systems have significant potential to increase the accuracy of early detection. Typically, CAD systems use various types of features to characterize skin lesions, often concatenated into one vector (early fusion) to represent the image. In this paper, we present a novel method for melanoma detection from images. First, the lesions are segmented by combining Particle Swarm Optimization and Markov Random Field methods. Then, K-means is applied to the segmented lesions to separate them into homogeneous clusters, from which important features are extracted. Finally, an Artificial Neural Network with Radial Basis Function is applied for the detection of melanoma. The method was tested on 200 dermoscopy images. The experimental results show that the proposed method achieved higher accuracy in melanoma detection compared to alternative methods.
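    As an illustration of the sub-region feature idea (K-means over segmented lesion pixels, with features per homogeneous cluster), here is a minimal sketch. The PSO + MRF segmentation and the RBF-network classifier are not reproduced, and the per-cluster statistics are assumed.

```python
# Hedged sketch of the sub-region feature step (assumed details): pixels inside a
# segmented lesion are clustered with K-means into homogeneous sub-regions, and
# simple per-cluster color statistics are concatenated into a feature vector for
# a downstream classifier.
import numpy as np
from sklearn.cluster import KMeans

def subregion_features(image_rgb, lesion_mask, n_clusters=3):
    """image_rgb: (H, W, 3) float array; lesion_mask: (H, W) boolean array."""
    pixels = image_rgb[lesion_mask]                     # (n_pixels, 3) lesion pixels
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    feats = []
    for c in range(n_clusters):
        cluster = pixels[km.labels_ == c]
        # per-sub-region descriptors: mean and std of each color channel + size ratio
        feats.extend(cluster.mean(axis=0))
        feats.extend(cluster.std(axis=0))
        feats.append(len(cluster) / len(pixels))
    return np.array(feats)

# Usage with a synthetic stand-in for a real dermoscopy image and its lesion mask.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(subregion_features(img, mask).shape)   # (n_clusters * 7,) feature vector
```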

    Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold

    The continuous rise in skin cancer cases, especially malignant melanoma, has resulted in a high mortality rate among affected patients due to late detection. Challenges affecting the success of skin cancer detection include small or scarce datasets, noisy data, class imbalance, inconsistent image sizes and resolutions, unavailability of data, and the reliability of labeled data (ground truth). This study presents a novel data augmentation technique based on a covariant Synthetic Minority Oversampling Technique (SMOTE) to address the data scarcity and class imbalance problems. We propose an improved data augmentation model for effective detection of melanoma skin cancer. Our method oversamples data in a nonlinear lower-dimensional embedding manifold to create synthetic melanoma images. The proposed data augmentation technique is used to generate a new skin melanoma dataset from dermoscopic images in the publicly available PH2 dataset, and the augmented images were used to train the SqueezeNet deep learning model. The experimental results in the binary classification scenario show a significant improvement in melanoma detection with respect to accuracy (92.18%), sensitivity (80.77%), specificity (95.1%), and F1-score (80.84%). We also improved the multiclass classification results: 89.2% sensitivity and 96.2% specificity for melanoma detection, 65.4% sensitivity and 72.2% specificity for atypical nevus detection, and 66% sensitivity and 77.2% specificity for common nevus detection. The proposed classification framework outperforms some of the state-of-the-art methods in detecting skin melanoma.
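    As a rough illustration of oversampling in a nonlinear lower-dimensional embedding, the sketch below uses kernel PCA as a stand-in embedding and standard SMOTE. It is not the paper's covariant SMOTE, and all shapes and parameters are illustrative assumptions.

```python
# Hedged sketch of the general idea (not the paper's covariant SMOTE): embed flattened
# lesion images into a nonlinear lower-dimensional manifold, oversample the minority
# (melanoma) class there with SMOTE, and map the synthetic points back to image space.
import numpy as np
from sklearn.decomposition import KernelPCA
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))          # flattened grayscale lesion images (stand-ins)
y = np.array([1] * 40 + [0] * 160)      # 40 melanoma (minority) vs. 160 non-melanoma

# 1) Nonlinear lower-dimensional embedding with a learned (approximate) inverse map.
kpca = KernelPCA(n_components=32, kernel="rbf", fit_inverse_transform=True)
Z = kpca.fit_transform(X)

# 2) SMOTE oversampling in the embedded space balances the two classes.
Z_bal, y_bal = SMOTE(random_state=0).fit_resample(Z, y)

# 3) Map only the newly created synthetic points back to image space.
Z_new = Z_bal[len(Z):]                  # over-samplers append synthetic rows at the end
X_synthetic = kpca.inverse_transform(Z_new)
print(X_synthetic.shape)                # synthetic minority-class images for training
```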