
    MFSNet: A Multi Focus Segmentation Network for Skin Lesion Segmentation

    Segmentation is essential in medical image analysis for identifying and localizing diseases, monitoring morphological changes, and extracting discriminative features for further diagnosis. Skin cancer is one of the most common cancers globally, and its early diagnosis is pivotal for the complete elimination of malignant tumors from the body. This research develops an Artificial Intelligence (AI) framework for supervised skin lesion segmentation using a deep learning approach. The proposed framework, called MFSNet (Multi-Focus Segmentation Network), computes the final segmentation mask from differently scaled feature maps of raw input RGB images of skin lesions. The images are first preprocessed to remove unwanted artifacts and noise. MFSNet employs the Res2Net backbone, a recently proposed convolutional neural network (CNN), to obtain deep features that a Parallel Partial Decoder (PPD) module uses to produce a global map of the segmentation mask. At different stages of the network, convolution features and multi-scale maps feed two boundary attention (BA) modules and two reverse attention (RA) modules to generate the final segmentation output. Evaluated on three publicly available datasets, PH2, ISIC 2017, and HAM10000, MFSNet outperforms state-of-the-art methods, demonstrating the reliability of the framework. The relevant codes for the proposed approach are accessible at https://github.com/Rohit-Kundu/MFSNe
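    The core idea of combining differently scaled score maps into one segmentation mask can be illustrated with a minimal numpy sketch. This is not the authors' implementation (MFSNet fuses features through its PPD and attention modules); the nearest-neighbour upsampling, averaging, and sigmoid below are simplifying assumptions chosen only to show the multi-scale fusion pattern.

    ```python
    import numpy as np

    def upsample_nearest(m, factor):
        # Repeat each element along both spatial axes (nearest-neighbour upsampling).
        return np.repeat(np.repeat(m, factor, axis=0), factor, axis=1)

    def fuse_maps(maps, full_size):
        # maps: square score maps at different scales; fuse by upsampling
        # each to full_size, averaging, and squashing to [0, 1].
        acc = np.zeros((full_size, full_size))
        for m in maps:
            acc += upsample_nearest(m, full_size // m.shape[0])
        acc /= len(maps)
        return 1.0 / (1.0 + np.exp(-acc))  # sigmoid -> probability mask

    # Three hypothetical score maps at 8x8, 16x16, and 32x32 resolution.
    mask = fuse_maps([np.ones((8, 8)), np.zeros((16, 16)), np.full((32, 32), -1.0)], 32)
    ```

    A real network would learn the fusion weights; the uniform average here only stands in for that learned combination.
    
    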

    Graph-Based Intercategory and Intermodality Network for Multilabel Classification and Melanoma Diagnosis of Skin Lesions in Dermoscopy and Clinical Images

    The identification of melanoma involves an integrated analysis of skin lesion images acquired using the clinical and dermoscopy modalities. Dermoscopic images provide a detailed view of the subsurface visual structures that supplement the macroscopic clinical images. Melanoma diagnosis is commonly based on the 7-point visual category checklist (7PC). The 7PC contains intrinsic relationships between categories that can aid classification, such as shared features, correlations, and the contributions of categories towards diagnosis. Manual classification is subjective and prone to intra- and interobserver variability. This presents an opportunity for automated methods to improve diagnosis. Current state-of-the-art methods focus on a single image modality and ignore information from the other, or do not fully leverage the complementary information from both modalities. Further, no existing method exploits the intercategory relationships in the 7PC. In this study, we address these issues by proposing a graph-based intercategory and intermodality network (GIIN) with two modules. A graph-based relational module (GRM) leverages intercategorical relations and intermodal relations, and prioritises the visual structure details from dermoscopy by encoding category representations in a graph network. The category embedding learning module (CELM) captures representations that are specialised for each category and supports the GRM. We show that our modules are effective at enhancing classification performance using a public dataset of dermoscopy-clinical images, and that our method outperforms the state-of-the-art at classifying the 7PC categories and diagnosis.
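    The abstract's idea of encoding category representations in a graph so that related categories inform each other can be sketched with one round of mean-aggregation message passing. The embeddings, the adjacency matrix, and the aggregation rule below are all illustrative assumptions, not the GRM's actual design.

    ```python
    import numpy as np

    # Toy 2-D embeddings (rows) for three of the 7PC categories, and an
    # adjacency matrix encoding hypothesised intercategory relations
    # (self-loops included so a node keeps its own signal).
    emb = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])
    adj = np.array([[1, 1, 0],
                    [1, 1, 1],
                    [0, 1, 1]], dtype=float)

    def propagate(adj, emb):
        # One round of message passing: each category's new representation
        # is the degree-normalised average of its neighbours' embeddings.
        deg = adj.sum(axis=1, keepdims=True)
        return (adj @ emb) / deg

    out = propagate(adj, emb)
    ```

    A learned graph network would additionally apply trainable weight matrices and nonlinearities between propagation rounds; the bare averaging here only shows how relational structure mixes category representations.
    
    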

    Skin Lesion Segmentation in Dermoscopic Images with Noisy Data

    We propose a deep learning approach to segment the skin lesion in dermoscopic images. The proposed network architecture uses a pretrained EfficientNet model in the encoder and squeeze-and-excitation residual structures in the decoder. We applied this approach to the publicly available International Skin Imaging Collaboration (ISIC) 2017 Challenge skin lesion segmentation dataset. This benchmark dataset has been widely used in previous studies. We observed many inaccurate or noisy ground truth labels. To reduce noisy data, we manually sorted all ground truth labels into three categories: good, mildly noisy, and noisy labels. Furthermore, we investigated the effect of such noisy labels in the training and test sets. Our test results show that the proposed method achieved Jaccard scores of 0.807 on the official ISIC 2017 test set and 0.832 on the curated ISIC 2017 test set, exhibiting better performance than previously reported methods. Furthermore, the experimental results showed that noisy labels in the training set did not lower the segmentation performance. However, noisy labels in the test set adversely affected the evaluation scores. We recommend that noisy labels be avoided in the test set in future studies for accurate evaluation of segmentation algorithms.
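    The Jaccard scores reported above are the standard intersection-over-union of predicted and ground-truth masks, which is why noisy ground truth in the test set directly distorts them. A minimal sketch of the metric on binary masks:

    ```python
    import numpy as np

    def jaccard(pred, gt):
        # Jaccard index (IoU) between two binary masks:
        # |pred AND gt| / |pred OR gt|.
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0

    pred = np.array([[1, 1], [0, 0]])
    gt   = np.array([[1, 0], [0, 0]])
    # intersection = 1 pixel, union = 2 pixels -> 0.5
    ```

    If the `gt` mask itself is wrong (noisy), the same prediction yields a different score, which is the effect the study measures on its curated versus official test sets.
    
    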

    Fusing fine-tuned deep features for skin lesion classification

    Malignant melanoma is one of the most aggressive forms of skin cancer. Early detection is important as it significantly improves survival rates. Consequently, accurate discrimination of malignant skin lesions from benign lesions such as seborrheic keratoses or benign nevi is crucial, while accurate computerised classification of skin lesion images is of great interest to support diagnosis. In this paper, we propose a fully automatic computerised method to classify skin lesions from dermoscopic images. Our approach is based on a novel ensemble scheme for convolutional neural networks (CNNs) that combines intra-architecture and inter-architecture network fusion. The proposed method consists of multiple sets of CNNs of different architectures that represent different feature abstraction levels. Each set of CNNs consists of a number of pre-trained networks that have identical architecture but are fine-tuned on dermoscopic skin lesion images with different settings. The deep features of each network were used to train different support vector machine classifiers. Finally, the average prediction probability classification vectors from different sets are fused to provide the final prediction. Evaluated on the 600 test images of the ISIC 2017 skin lesion classification challenge, the proposed algorithm yields an area under the receiver operating characteristic curve of 87.3% for melanoma classification and 95.5% for seborrheic keratosis classification, outperforming the top-ranked methods of the challenge while being simpler than them. The obtained results convincingly demonstrate that our proposed approach represents a reliable and robust method for feature extraction, model fusion, and classification of dermoscopic skin lesion images.
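    The final fusion step, averaging prediction probability vectors from the different classifier sets and taking the arg-max, can be sketched directly. The three probability vectors below are invented placeholders; the paper's actual pipeline feeds SVM outputs from multiple fine-tuned CNN feature sets into this kind of late fusion.

    ```python
    import numpy as np

    def fuse_predictions(prob_sets):
        # Late fusion: average the per-set probability vectors, then
        # pick the class with the highest fused probability.
        fused = np.mean(prob_sets, axis=0)
        return fused, int(np.argmax(fused))

    probs = [np.array([0.6, 0.4]),   # hypothetical set A: [benign, melanoma]
             np.array([0.2, 0.8]),   # hypothetical set B
             np.array([0.3, 0.7])]   # hypothetical set C
    fused, label = fuse_predictions(probs)
    ```

    Averaging probabilities (rather than hard votes) lets a confident set outweigh two lukewarm ones, which is one reason this simple fusion rule is a common and robust ensemble baseline.
    
    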

    Chimeranet: U-Net for Hair Detection in Dermoscopic Skin Lesion Images

    Hair and ruler mark structures in dermoscopic images are an obstacle preventing accurate image segmentation and detection of critical network features. Recognition and removal of hairs from images can be challenging, especially for hairs that are thin, overlapping, faded, of similar color to the skin, or overlaid on a textured lesion. This paper proposes a novel deep learning (DL) technique to detect hair and ruler marks in skin lesion images. Our proposed ChimeraNet is an encoder-decoder architecture that employs a pretrained EfficientNet in the encoder and squeeze-and-excitation residual (SERes) structures in the decoder. We applied this approach at multiple image sizes and evaluated it using the publicly available HAM10000 (ISIC 2018 Task 3) skin lesion dataset. Our test results show that the largest image size (448 × 448) gave the highest accuracy of 98.23% and Jaccard index of 0.65, exhibiting better performance than two well-known deep learning approaches, U-Net and ResUNet-a. We found the Dice loss function to give the best results for all measures. Further evaluated on 25 additional test images, the technique yields state-of-the-art accuracy compared to 8 previously reported classical techniques. We conclude that the proposed ChimeraNet architecture may enable improved detection of fine image structures. Further application of DL techniques to detect dermoscopy structures is warranted.
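    The Dice loss the authors found most effective is a standard formulation, 1 minus twice the overlap divided by the total mask mass, which handles the strong class imbalance of thin hair pixels against background. A minimal sketch on flattened masks (the epsilon smoothing term is a common convention, not necessarily the paper's exact variant):

    ```python
    import numpy as np

    def dice_loss(pred, gt, eps=1e-6):
        # Soft Dice loss between a predicted probability mask and a
        # binary target: 1 - 2*|P.G| / (|P| + |G|), with eps smoothing.
        inter = (pred * gt).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

    pred = np.array([1.0, 1.0, 0.0, 0.0])  # hypothetical hair-mask prediction
    gt   = np.array([1.0, 0.0, 0.0, 0.0])  # hypothetical ground truth
    # overlap = 1, total mass = 3 -> Dice = 2/3, loss = 1/3
    ```

    Unlike per-pixel cross-entropy, this loss is driven by overlap ratio, so the many easy background pixels cannot drown out the few hair pixels.
    
    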

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and its deadliest form is melanoma. Melanoma is brought on by genetic faults or mutations in the skin, caused by unrepaired deoxyribonucleic acid (DNA) damage in skin cells. It is essential to detect skin cancer in its early phase, when it is most curable, because it typically progresses to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and distinguish benign skin cancer from melanoma. An in-depth investigation of deep learning techniques for the early detection of melanoma is provided in this study. It discusses the traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions, and comparative research is conducted to demonstrate the significance of various deep learning-based segmentation and classification approaches.

    Bi-directional Dermoscopic Feature Learning and Multi-scale Consistent Decision Fusion for Skin Lesion Segmentation

    Accurate segmentation of skin lesions from dermoscopic images is a crucial part of computer-aided diagnosis of melanoma. It is challenging because dermoscopic images from different patients have non-negligible lesion variation, which causes difficulties in anatomical structure learning and consistent skin lesion delineation. In this paper, we propose a novel bi-directional dermoscopic feature learning (biDFL) framework to model the complex correlation between skin lesions and their informative context. By controlling feature information passing through two complementary directions, a substantially rich and discriminative feature representation is achieved. Specifically, we place the biDFL module on top of a CNN network to enhance high-level parsing performance. Furthermore, we propose a multi-scale consistent decision fusion (mCDF) that is capable of selectively focusing on the informative decisions generated from multiple classification layers. By analysing the consistency of the decision at each position, mCDF automatically adjusts the reliability of decisions and thus allows more insightful skin lesion delineation. The comprehensive experimental results show the effectiveness of the proposed method on skin lesion segmentation, achieving state-of-the-art performance consistently on two publicly available dermoscopic image databases. Comment: Accepted to TI
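    The idea behind consistency-weighted decision fusion can be sketched numerically: where decisions from multiple classification layers agree, trust their fused score; where they disagree, fall back towards an uncommitted 0.5. The specific agreement measure and blending rule below are illustrative assumptions, not the paper's mCDF formulation.

    ```python
    import numpy as np

    def consistent_fusion(decisions):
        # decisions: (n_layers, H, W) probability maps from different
        # classification layers. Binarise each layer's decision, measure
        # per-position agreement (0 = evenly split, 1 = unanimous), and
        # blend the mean probability with a neutral 0.5 accordingly.
        votes = (decisions > 0.5).astype(float)
        agree = np.abs(votes.mean(axis=0) - 0.5) * 2.0
        return decisions.mean(axis=0) * agree + 0.5 * (1.0 - agree)

    # Three hypothetical layers that all lean "lesion" on a 2x2 patch.
    d = np.stack([np.full((2, 2), 0.9),
                  np.full((2, 2), 0.8),
                  np.full((2, 2), 0.7)])
    fused = consistent_fusion(d)
    ```

    With unanimous layers, the fused map is simply their mean (0.8 here); had the layers split, the output would be pulled towards 0.5, signalling an unreliable position.
    
    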