89 research outputs found

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Get PDF
    Skin cancer ranks among the most dangerous cancers, and its deadliest form is melanoma. Melanoma arises from genetic faults or mutations in skin cells caused by unrepaired deoxyribonucleic acid (DNA) damage. It is essential to detect skin cancer in its infancy, since it is far more curable in its initial phases; left untreated, it typically spreads to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often used to detect skin cancer and to distinguish benign lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma, discusses traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions, and presents comparison-oriented research demonstrating the significance of various deep learning-based segmentation and classification approaches.

    Agreement Between Experts and an Untrained Crowd for Identifying Dermoscopic Features Using a Gamified App: Reader Feasibility Study

    Full text link
    Background Dermoscopy is commonly used for the evaluation of pigmented lesions, but agreement between experts for identification of dermoscopic structures is known to be relatively poor. Expert labeling of medical data is a bottleneck in the development of machine learning (ML) tools, and crowdsourcing has been demonstrated as a cost- and time-efficient method for the annotation of medical images. Objective The aim of this study is to demonstrate that crowdsourcing can be used to label basic dermoscopic structures from images of pigmented lesions with similar reliability to a group of experts. Methods First, we obtained labels of 248 images of melanocytic lesions with 31 dermoscopic “subfeatures” labeled by 20 dermoscopy experts. These were then collapsed into 6 dermoscopic “superfeatures” based on structural similarity, due to low interrater reliability (IRR): dots, globules, lines, network structures, regression structures, and vessels. These images were then used as the gold standard for the crowd study. The commercial platform DiagnosUs was used to obtain annotations from a nonexpert crowd for the presence or absence of the 6 superfeatures in each of the 248 images. We replicated this methodology with a group of 7 dermatologists to allow direct comparison with the nonexpert crowd. The Cohen κ value was used to measure agreement across raters. Results In total, we obtained 139,731 ratings of the 6 dermoscopic superfeatures from the crowd. There was relatively lower agreement for the identification of dots and globules (the median κ values were 0.526 and 0.395, respectively), whereas network structures and vessels showed the highest agreement (the median κ values were 0.581 and 0.798, respectively). This pattern was also seen among the expert raters, who had median κ values of 0.483 and 0.517 for dots and globules, respectively, and 0.758 and 0.790 for network structures and vessels.
The median κ values between nonexperts and thresholded average–expert readers were 0.709 for dots, 0.719 for globules, 0.714 for lines, 0.838 for network structures, 0.818 for regression structures, and 0.728 for vessels. Conclusions This study confirmed that IRR for different dermoscopic features varied among a group of experts; a similar pattern was observed in a nonexpert crowd. There was good or excellent agreement for each of the 6 superfeatures between the crowd and the experts, highlighting the similar reliability of the crowd for labeling dermoscopic images. This confirms the feasibility and dependability of using crowdsourcing as a scalable solution to annotate large sets of dermoscopic images, with several potential clinical and educational applications, including the development of novel, explainable ML tools.
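The chance-corrected agreement statistic used throughout the study can be illustrated directly. Below is a minimal, pure-Python sketch of Cohen's κ for binary presence/absence labels; the rater data is invented for illustration and is not the study's.

```python
def cohen_kappa(a, b):
    """Agreement between two raters on binary labels, corrected for chance."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n             # marginal P(label = 1) per rater
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)       # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Toy labels for one superfeature (1 = present, 0 = absent) -- not the study's data
expert = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
crowd  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohen_kappa(expert, crowd), 3))  # → 0.583
```

By the usual rules of thumb, values around 0.4–0.6 indicate moderate agreement and values above 0.8 near-perfect agreement, which is how the κ ranges quoted above are interpreted.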

    Graph-Ensemble Learning Model for Multi-label Skin Lesion Classification using Dermoscopy and Clinical Images

    Full text link
    Many skin lesion analysis (SLA) methods have recently focused on multi-modal, multi-label classification, for two reasons. First, multi-modal data, i.e., clinical and dermoscopy images, can provide complementary information and thus more accurate results than single-modal data. Second, multi-label classification using the seven-point checklist (SPC) criteria as an auxiliary task not only boosts the diagnostic accuracy of melanoma in the deep learning (DL) pipeline but also provides functions useful to clinicians, since the checklist is commonly used in dermatologists' diagnoses. However, most methods focus on designing a better module for multi-modal data fusion; few explore the label correlation between the SPC criteria and skin disease for performance improvement. This study fills that gap by introducing a Graph Convolution Network (GCN) that encodes the prior co-occurrence between categories as a correlation matrix within the DL model for multi-label classification. However, directly applying the GCN degraded performance in our experiments; we attribute this to the weak generalization ability of GCN when statistical samples of medical data are insufficient. We tackle this issue by proposing a Graph-Ensemble Learning Model (GELN) that views the prediction from the GCN as complementary information to the predictions from the fusion model and adaptively fuses them with a weighted averaging scheme, exploiting the valuable information from the GCN while avoiding its negative influence as much as possible. To evaluate our method, we conduct experiments on public datasets. The results show that GELN consistently improves classification performance across datasets and achieves state-of-the-art performance in SPC and diagnosis classification.
Comment: Submitted to TNNLS on 1 July 202
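The weighted-averaging step at the heart of GELN can be sketched in a few lines. Everything here is illustrative: the function name, the fixed blending weight `alpha`, and the toy probability vectors are assumptions, since the paper adapts the weighting rather than fixing it.

```python
def ensemble_predict(p_fusion, p_gcn, alpha=0.8):
    """Blend class probabilities from the fusion model and the GCN branch.

    An alpha close to 1 trusts the multi-modal fusion model and uses the
    GCN's label-correlation prior only as complementary evidence.
    """
    return [alpha * f + (1 - alpha) * g for f, g in zip(p_fusion, p_gcn)]

p_fusion = [0.7, 0.2, 0.1]   # toy softmax output of the fusion model
p_gcn    = [0.5, 0.4, 0.1]   # toy softmax output of the GCN branch
print(ensemble_predict(p_fusion, p_gcn))
```

Because both inputs are probability vectors, any convex combination of them is again a probability vector, so the blend can be used directly for diagnosis.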

    Towards an Effective Imaging-Based Decision Support System for Skin Cancer

    Get PDF
    The usage of expert systems to aid medical decisions has been employed since the 1980s in distinct applications. With the high demands of medical care and limited human resources, these technologies are needed more than ever. Skin cancer has been one of the pathologies with the highest growth, and it suffers from a lack of dermatology experts in most of the affected geographical areas. Medical imaging modalities provide a permanent record of examination that can be further analyzed, and most of these modalities have also been assessed alongside machine learning classification methods. The aim of this research is to provide background information about skin cancer types, medical imaging modalities, data mining and machine learning methods, and their application to skin cancer imaging, as well as to disclose a proposal for a multi-imaging-modality decision support system for skin cancer diagnosis and treatment assessment based on the most recent available technology. This is expected to be a reference for further implementation of imaging-based clinical support systems.

    Computer-Aided Diagnosis for Melanoma using Ontology and Deep Learning Approaches

    Get PDF
    The emergence of deep-learning algorithms provides great potential to enhance the prediction performance of computer-aided diagnosis support systems. Recent research indicates that well-trained algorithms can reach the accuracy level of experienced senior clinicians in dermatology. However, a lack of interpretability and transparency hinders the algorithms' utility in real life: physicians and patients require a certain level of interpretability before they accept and trust the results. Another limitation of AI algorithms is the lack of consideration of other information related to disease diagnosis, for example, typical dermoscopic features and diagnostic guidelines. Clinical guidelines for skin disease diagnosis are designed around dermoscopic features, yet a structured, standard representation of the relevant knowledge in the skin disease domain is lacking. To address these challenges, this dissertation builds an ontology capable of formally representing the knowledge of dermoscopic features and develops an explainable deep learning model able to diagnose skin diseases and dermoscopic features. Additionally, the trained model can be applied to large-scale, unlabeled datasets to automate the feature-generation process. Computer-vision-aided feature extraction algorithms are combined with the deep learning model to improve overall classification accuracy and save manual annotation effort.

    Towards PACE-CAD Systems

    Get PDF
    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for developing methods that may enrich an ordinary CAD system with PACE, although, depending on the requirements of individual methods, other application areas in ophthalmology and dermatology have also been explored. Pragmatic CAD systems are solutions that perform reliably in the day-to-day clinical setup. This research addresses two, of possibly many, aspects of a pragmatic CAD. First, observing that existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Second, since a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases is successfully performed on one of the largest publicly available datasets of skin lesions. Accuracy is one of the most essential metrics of any CAD system's performance, and domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is industriously utilised to improve it.
For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, marking a new state of the art for both glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained with fine-grained classifiers and grades the disease in four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are examined and their effect on a model's classification performance is studied. Confidence in diagnosing a disease is as important as the diagnosis itself; one of the biggest obstacles to the successful deployment of CAD in the real world is that medical decisions cannot readily be made on an algorithm's output alone. Therefore, a hybrid CNN architecture is proposed with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy, and it also provides an estimate of uncertainty for every prediction. Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation came into effect. This research presents ExAID (Explainable AI for Dermatology), a framework for easy-to-understand textual explanations of skin lesion diagnosis, built on two fundamental modules. The first takes any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable, disease-related concepts to the latent representation learnt by the deep model. The second proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating the regions significant to a learned concept in the latent space of a trained image classifier. This thesis probes many viable solutions to equip a CAD system with PACE.
However, some of these methods require specific attributes in datasets, so not all of them may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
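The confidence component described above, a deterministic feature extractor paired with a stochastic classifier head, can be caricatured with Monte Carlo sampling. This is a hedged sketch, not the thesis's architecture: the fixed logit, the Gaussian weight noise, and the sample count are all invented for illustration.

```python
import math
import random
import statistics

def mc_predict(logit=1.2, noise=0.3, n_samples=200, seed=0):
    """Sample a stochastic classifier head many times; the spread of the
    sampled probabilities serves as a per-prediction uncertainty estimate."""
    rng = random.Random(seed)
    probs = [1.0 / (1.0 + math.exp(-(logit + rng.gauss(0.0, noise))))
             for _ in range(n_samples)]
    return statistics.mean(probs), statistics.stdev(probs)

mean_p, uncertainty = mc_predict()
print(f"p(disease) = {mean_p:.2f} +/- {uncertainty:.2f}")
```

A clinician could then defer predictions whose uncertainty exceeds a chosen threshold rather than act on a bare point estimate, which is the deployment benefit the thesis argues for.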

    Dermatoscopy

    Get PDF
    This book is a collection of chapters on dermatoscopy, a fast, easy-to-learn, low-cost, and non-invasive diagnostic method utilizing the Rayleigh scattering phenomenon to visualize epidermal and subepidermal structures. Dermatoscopy has become increasingly popular for allowing visualization of structures that are impossible to see with the naked eye. Its use provides insight into the biological potential of skin lesions, enabling efficient management and follow-up. The book focuses on the features of some of the most common skin neoplasms, such as combined nevi, as well as those that are more challenging to assess, such as pigmented lesions of the eyelid margins. It also provides novel insights into the role of dermatoscopy in palmoplantar dermatoses and discusses precautions in dermatoscopy during the SARS-CoV-2 pandemic.

    Diagnosis of skin cancer using novel computer vision and deep learning techniques

    Get PDF
    Recent years have seen an increase in the total number of skin cancer cases, and the number is projected to grow exponentially; however, the mortality rate of malignant melanoma can be decreased if it is diagnosed and treated at an early stage. The visual similarity between benign and malignant lesions makes diagnosis difficult even for an expert dermatologist, increasing the chances of false prediction. This dissertation proposes two novel computer-aided diagnosis methods for the classification of malignant lesions. The first method pre-processes the acquired image with the Dull razor method (for digital hair removal) and histogram equalisation. The image is then segmented by the proposed LR-fuzzy-logic method, which achieves an accuracy, sensitivity, and specificity of 96.50%, 97.50%, and 96.25% for the PH2 dataset; 96.16%, 91.88%, and 98.26% for the ISIC 2017 dataset; and 95.91%, 91.62%, and 97.37% for the ISIC 2018 dataset, respectively. The image is then classified by a modified You Only Look Once (YOLO v3) classifier, which yields an accuracy, sensitivity, and specificity of 98.16%, 95.43%, and 99.50%, respectively. The second method enhances the images by removing digital artefacts and applying histogram equalisation. A triangular neutrosophic number (TNN) approach is then used for lesion segmentation, achieving an accuracy, sensitivity, and specificity of 99.00%, 97.50%, and 99.38% for PH2; 98.83%, 98.48%, and 99.01% for ISIC 2017; 98.56%, 98.50%, and 98.58% for ISIC 2018; and 97.86%, 97.56%, and 97.97% for the ISIC 2019 dataset, respectively. Furthermore, data augmentation is performed by adding artefacts and noise to the training dataset and rotating the images by 65°, 135°, and 215°, increasing the training dataset from 30,946 to 92,838 images.
A novel classifier based on inception and residual modules is trained on the augmented dataset and achieves an accuracy, sensitivity, and specificity of 99.50%, 100%, and 99.38% for PH2; 99.33%, 98.48%, and 99.75% for ISIC 2017; 98.56%, 97.61%, and 98.88% for ISIC 2018; and 98.04%, 96.67%, and 98.52% for the ISIC 2019 dataset, respectively. Finally, the proposed methods are deployed in real-time mobile applications, enabling users to diagnose a suspected lesion with ease and accuracy.
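The rotation step of the augmentation pipeline can be sketched with a minimal nearest-neighbour rotation. The tiny 3×3 "image" is invented for illustration, and a production pipeline would use an image library rather than this loop.

```python
import math

def rotate(img, angle_deg, fill=0):
    """Rotate a 2D grayscale grid about its centre (nearest-neighbour)."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = math.radians(angle_deg)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse-map each output pixel back into the source image
            sx = math.cos(t) * (x - cx) + math.sin(t) * (y - cy) + cx
            sy = -math.sin(t) * (x - cx) + math.cos(t) * (y - cy) + cy
            si, sj = round(sy), round(sx)
            if 0 <= si < h and 0 <= sj < w:
                out[y][x] = img[si][sj]
    return out

img = [[0, 1, 0], [0, 1, 0], [0, 1, 1]]  # toy 3x3 "lesion image"
augmented = [img] + [rotate(img, a) for a in (65, 135, 215)]
print(len(augmented))  # → 4 variants per source image
```

Applying three rotations per image, on top of the artefact and noise variants, is how a training set can grow severalfold without any new acquisitions.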

    Segmentation of the melanoma lesion and its border

    Get PDF
    Segmentation of the border of human pigmented lesions has a direct impact on the diagnosis of malignant melanoma. In this work, we examine the performance of (i) morphological segmentation of a pigmented lesion by region growing with an adaptive threshold and by the density-based DBSCAN clustering algorithm, and (ii) morphological segmentation of the pigmented lesion border by region growing of the lesion and the background skin. Research tasks (i) and (ii) are evaluated by a human expert and tested on two data sets, A and B, of different origins, resolution, and image quality. The preprocessing step consists of removing the black frame around the lesion and reducing noise and artifacts. The halo is removed by cutting out the dark circular region and filling it with an average skin color. Noise is reduced by a family of Gaussian filters (3×3 to 7×7) to improve the contrast and smooth out possible distortions; some other filters are also tested. Artifacts such as dark thick hair or ruler/ink markers are removed from the images using DullRazor morphological closing on all RGB channels with a hair-brightness threshold below 25 or, alternatively, by the BTH transform. For the segmentation, the JFIF luminance representation is used. In analysis (i), a lesion segmentation mask is produced from each dermoscopy image. For region growing we obtain a sensitivity of 0.92/0.85, a precision of 0.98/0.91, and a border error of 0.08/0.15 for data sets A/B, respectively. For the density-based DBSCAN algorithm, we obtain a sensitivity of 0.91/0.89, a precision of 0.95/0.93, and a border error of 0.09/0.12 for data sets A/B, respectively. In analysis (ii), a series of lesion, background, and border segmentation images is derived from each dermoscopy image. We obtain a sensitivity of about 0.89, a specificity of 0.94, and an accuracy of 0.91 for data set A, and a sensitivity of about 0.85, a specificity of 0.91, and an accuracy of 0.89 for data set B.
Our analyses show that the improved methods of region growing and density-based clustering, performed after proper preprocessing, may be good tools for computer-aided melanoma diagnosis.
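The overlap metrics quoted above can be computed directly from binary masks. The sketch below uses toy flattened masks and the common XOR definition of border error (area of disagreement divided by the reference lesion area); the mask values are invented.

```python
def mask_metrics(pred, ref):
    """Sensitivity, precision, and XOR border error for binary masks (1 = lesion)."""
    tp = sum(p and r for p, r in zip(pred, ref))        # lesion found in both
    fp = sum(p and not r for p, r in zip(pred, ref))    # over-segmentation
    fn = sum(r and not p for p, r in zip(pred, ref))    # missed lesion pixels
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    border_error = (fp + fn) / sum(ref)                 # disagreement / reference area
    return sensitivity, precision, border_error

pred = [1, 1, 1, 0, 0, 1]   # toy flattened predicted mask
ref  = [1, 1, 0, 0, 1, 1]   # toy flattened expert mask
print(mask_metrics(pred, ref))  # → (0.75, 0.75, 0.5)
```

A low border error thus requires both few missed lesion pixels and little spill-over into the surrounding skin, which is why it complements sensitivity and precision in the results above.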

    Clinical-view versus ELM: an investigation into image types in the context of skin lesion screening

    Get PDF
    Melanoma, the most serious form of skin cancer, is increasing in incidence in countries with predominantly white-skinned populations. Automated tools have been proposed to help detect this most visible of cancers. Current automated systems for detecting melanoma analyse images of skin lesions for relevant image features and classify the images based on those features. Two types of image are available for use in such systems: Clinical-view and Epiluminescent microscopy (ELM) images. ELM images reportedly allow more accurate assessment of skin lesions in the clinical setting, but this finding has not been proven in the context of an automated system. This research has evaluated the question of Clinical-view versus ELM images in an automated screening system. Two methods of implementing a screening system were considered: first, the ‘diagnosis system’, which is based on previous work in this field, and second, a ‘dermatologist assessment system’, which is an original method of implementing an automated screening system. The Clinical-view versus ELM question was considered for both of these systems. Specifically, two automated systems were developed: the first analysed Clinical-view images, while the second processed ELM images. From the analysis, each system attempted to classify lesion images into two groups. For the diagnosis problem, the lesion was either ‘melanoma’ or ‘benign’; for the ‘dermatologist assessment’ problem, the groups were ‘excised’ or ‘not excised’. The results raise doubts over the current emphasis on ELM images in the automated diagnosis case. Similarly, it appears that Clinical-view images are of more use for reproducing ‘dermatologist assessment’. We have also shown that the ‘dermatologist assessment’ approach to screening skin lesions is a viable and potentially useful alternative to the current emphasis on the diagnosis approach.