37 research outputs found

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and melanoma is its deadliest form. Melanoma is brought on by genetic faults or mutations in skin cells caused by unrepaired Deoxyribonucleic Acid (DNA) damage. Detecting skin cancer at an early stage is essential, since it is far more curable before it spreads to other regions of the body. Owing to the disease's rising incidence, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial, and researchers have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma, discusses traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions, and presents a comparative analysis demonstrating the significance of various deep learning-based segmentation and classification approaches.
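    As an illustration of the feature-extraction-based approaches surveyed in this review, the sketch below computes a few ABCD-style descriptors (asymmetry, border irregularity, colour variation, area) from a binary lesion mask and its RGB image. The function name, the mirror-based asymmetry measure, and the choice of descriptors are illustrative assumptions, not taken from any specific paper.

```python
# Minimal sketch (assumed inputs and descriptor choices): handcrafted
# ABCD-style features from a binary lesion mask and its RGB image, as used
# by classical feature-extraction pipelines.
import numpy as np
from skimage import measure

def lesion_features(mask: np.ndarray, rgb: np.ndarray) -> dict:
    """mask: boolean lesion mask (H, W); rgb: (H, W, 3) image in [0, 255]."""
    props = measure.regionprops(mask.astype(int))[0]
    area = props.area
    perimeter = props.perimeter
    # Border irregularity: 1.0 for a perfect circle, larger for ragged borders.
    border_irregularity = perimeter ** 2 / (4 * np.pi * area)
    # Asymmetry (simplified): mismatch between the mask and its left-right
    # mirror about the image centre, normalised by lesion area.
    asymmetry = np.logical_xor(mask, mask[:, ::-1]).sum() / area
    # Colour variegation: per-channel standard deviation inside the lesion.
    colour_std = rgb[mask].std(axis=0)
    return {
        "area": float(area),
        "border_irregularity": float(border_irregularity),
        "asymmetry": float(asymmetry),
        "colour_std_r": float(colour_std[0]),
        "colour_std_g": float(colour_std[1]),
        "colour_std_b": float(colour_std[2]),
    }
```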

    Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images

    Bachelor's thesis in Biomedical Engineering. Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year 2022-2023. Tutors/Directors: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep. Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most advanced technologies, current image acquisition systems do not permit reliable identification of the skin lesion by visual examination because of the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods to assist physicians in delineating the lesion region and to serve as a preliminary step for lesion classification. Accurate and precise segmentation is crucial for rigorous screening and for monitoring disease progression. To address this concern, the present project provides a state-of-the-art review of the most prominent conventional models for skin lesion segmentation, alongside a market analysis. With the rise of automatic segmentation tools, a large number of algorithms are currently in use, but they suffer from many drawbacks when applied to dermatological disorders because of the high prevalence of artefacts in the acquired images. In light of the above, three segmentation techniques were selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an automatic intensity-based algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to their further implementation in clinical training. The proposed methods and the obtained results are based on a publicly available skin lesion image database.
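    A minimal sketch of the second of the three techniques mentioned above, combining k-means colour clustering with OpenCV's GrabCut. The thesis's actual implementation details (cluster selection rule, seeding strategy, iteration counts) are not reproduced here; the choices below are assumptions.

```python
# Illustrative sketch only (not the thesis implementation): a rough lesion
# mask from k-means colour clustering, refined with OpenCV's GrabCut.
import cv2
import numpy as np

def kmeans_grabcut_segmentation(bgr: np.ndarray, iters: int = 5) -> np.ndarray:
    """bgr: 8-bit BGR dermoscopic image; returns a binary lesion mask."""
    h, w = bgr.shape[:2]
    # 1) k-means with k=2 on pixel colours; the darker cluster is assumed
    #    to be the lesion (a simplifying assumption).
    samples = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, 2, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    lesion_cluster = int(np.argmin(centers.sum(axis=1)))
    rough = labels.reshape(h, w) == lesion_cluster

    # 2) Seed GrabCut with the rough mask: probable foreground inside,
    #    probable background outside.
    gc_mask = np.where(rough, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, gc_mask, None, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_MASK)
    return np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```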

    Detection of melanoma from dermoscopic images of naevi acquired under uncontrolled conditions.

    BACKGROUND AND OBJECTIVE: Several systems for the diagnosis of melanoma from images of naevi obtained under controlled conditions have demonstrated efficiency comparable with that of dermatologists. However, their robustness when analysing daily routine images has sometimes been questionable. The purpose of this work is to investigate to what extent automatic melanoma diagnosis can be achieved from the analysis of uncontrolled images of pigmented skin lesions. MATERIALS AND METHODS: Images were acquired during regular practice by two dermatologists using Reflex 24 x 36 cameras combined with Heine Delta 10 dermoscopes, and were then digitised using a scanner. In addition, five senior dermatologists were asked to give the diagnosis and therapeutic decision (excision) for 227 images of naevi, together with an opinion on the presence of malignancy-predictive features. Meanwhile, a learning-by-sample classifier for the diagnosis of melanoma was constructed, combining image-processing and machine-learning techniques. After automatic segmentation, geometric and colorimetric parameters were extracted from the images and selected according to their efficiency in predicting malignancy features; a diagnosis was then provided based on the selected parameters. An extensive comparison of the dermatologists' and the computer's results was subsequently performed. RESULTS AND CONCLUSION: The KL-PLS-based classifier shows performance comparable with that of the dermatologists (sensitivity: 95%, specificity: 60%). The algorithm provides an original insight into the clinical knowledge of pigmented skin lesions.
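    The sketch below illustrates the general pipeline described in this abstract, namely feature selection over extracted parameters followed by a PLS-based decision rule. Scikit-learn's ordinary PLS regression is used here as a stand-in for the paper's KL-PLS classifier, and the feature matrix, threshold, and selection settings are placeholders.

```python
# Hedged sketch of the pipeline described above: feature selection followed
# by a PLS-based decision rule. Ordinary PLS regression stands in for the
# paper's KL-PLS classifier; data and thresholds are placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

# X: one row of geometric/colorimetric parameters per lesion; y: 1 = melanoma.
rng = np.random.default_rng(0)
X = rng.normal(size=(227, 40))           # placeholder feature matrix
y = rng.integers(0, 2, size=227)         # placeholder labels

model = make_pipeline(SelectKBest(f_classif, k=10),
                      PLSRegression(n_components=3))
model.fit(X, y)
scores = model.predict(X).ravel()         # continuous PLS scores
predictions = (scores > 0.5).astype(int)  # decision threshold chosen for illustration
```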

    Skin Lesion Segmentation Ensemble with Diverse Training Strategies

    This paper presents a novel strategy to perform skin lesion segmentation from dermoscopic images. We design an effective segmentation pipeline and explore several pre-training methods to initialize the feature extractor, highlighting how different procedures lead the Convolutional Neural Network (CNN) to focus on different features. An encoder-decoder segmentation CNN is employed to take advantage of each pre-trained feature extractor. Experimental results reveal how multiple initialization strategies can be exploited, by means of an ensemble method, to obtain state-of-the-art skin lesion segmentation accuracy.
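    A compact sketch of the ensembling step described above: per-pixel probabilities from several independently initialised encoder-decoder networks are averaged and thresholded. The model interface and threshold value are assumptions for illustration.

```python
# Minimal sketch (assumed interfaces) of ensembling segmentation networks:
# average the per-pixel probabilities of several independently initialised
# encoder-decoder models and threshold the mean.
import torch

@torch.no_grad()
def ensemble_segmentation(models, image: torch.Tensor, threshold: float = 0.5):
    """models: iterable of nets returning logits of shape (1, 1, H, W);
    image: tensor of shape (1, 3, H, W)."""
    probs = [torch.sigmoid(m(image)) for m in models]
    mean_prob = torch.stack(probs).mean(dim=0)
    return (mean_prob > threshold).float()
```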

    Computational methods for the image segmentation of pigmented skin lesions: a review

    Background and objectives: Because skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed to assist dermatologists in their diagnosis. This paper presents a review of the current methods and outlines a comparative analysis with regard to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation. Methods: Techniques that have been proposed to achieve these tasks were identified and reviewed. For the image segmentation task, the techniques were classified according to their principle. Results: The techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results. Conclusions: The image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies to improve efficiency.
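    As a concrete example of the conventional techniques reviewed in such surveys, the sketch below chains simple pre-processing (grey-scale conversion and smoothing) with Otsu thresholding and morphological clean-up. The parameter values are illustrative, not taken from the paper.

```python
# Hedged example of a classical segmentation pipeline: pre-processing
# (grey-scale conversion, Gaussian smoothing), Otsu thresholding, and
# simple morphological clean-up. Parameters are illustrative.
import numpy as np
from skimage import color, filters, morphology

def otsu_lesion_mask(rgb: np.ndarray) -> np.ndarray:
    grey = color.rgb2gray(rgb)
    smooth = filters.gaussian(grey, sigma=2)
    # Lesions are usually darker than the surrounding skin.
    mask = smooth < filters.threshold_otsu(smooth)
    mask = morphology.binary_opening(mask, morphology.disk(5))
    mask = morphology.remove_small_objects(mask, min_size=500)
    return mask
```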

    Leveraging Computer Vision for Applications in Biomedicine and Geoscience

    Skin cancer is one of the most common types of cancer and is usually classified as either non-melanoma or melanoma skin cancer. Melanoma skin cancer accounts for about half of all skin cancer-related deaths. The 5-year survival rate is 99% when the cancer is detected early but drops to 25% once it becomes metastatic. In other words, the key to preventing death is early detection. Foraminifera are microscopic single-celled organisms that exist in marine environments and are classified as living a benthic or planktic lifestyle. In total, roughly 50,000 species are known to have existed, of which about 9,000 are still living today. Foraminifera are important proxies for reconstructing past ocean and climate conditions and serve as bio-indicators of anthropogenic pollution. Since the 1800s, the identification and counting of foraminifera have been performed manually, and the process is resource-intensive. In this dissertation, we leverage recent advances in computer vision, driven by breakthroughs in deep learning methodologies and scale-space theory, to make progress towards both early detection of melanoma skin cancer and automation of the identification and counting of microscopic foraminifera. First, we investigate the use of hyperspectral images in skin cancer detection by performing a critical review of relevant, peer-reviewed research. Second, we present a novel scale-space methodology for detecting changes in hyperspectral images. Third, we develop a deep learning model for classifying microscopic foraminifera. Finally, we present a deep learning model for instance segmentation of microscopic foraminifera. The works presented in this dissertation are valuable contributions to the fields of biomedicine and geoscience, and more specifically to the challenges of early detection of melanoma skin cancer and automation of the identification, counting, and picking of microscopic foraminifera.
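    A toy sketch of the general scale-space idea referenced above, not the dissertation's method: a per-pixel spectral change map is smoothed at several Gaussian scales and the strongest response across scales is kept. The scales and the Euclidean change measure are assumptions.

```python
# Toy sketch (assumed change measure and scales) of a scale-space change
# map for two co-registered hyperspectral cubes.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_change_map(cube_t0: np.ndarray, cube_t1: np.ndarray,
                          sigmas=(1, 2, 4, 8)) -> np.ndarray:
    """cube_t0, cube_t1: hyperspectral cubes of shape (H, W, bands)."""
    # Per-pixel spectral change as the Euclidean distance between spectra.
    change = np.linalg.norm(cube_t1.astype(float) - cube_t0.astype(float), axis=-1)
    # Smooth at several Gaussian scales and keep the strongest response.
    responses = [gaussian_filter(change, sigma=s) for s in sigmas]
    return np.max(np.stack(responses), axis=0)
```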

    melNET: A Deep Learning Based Model For Melanoma Detection

    Melanoma is the deadliest form of skin cancer; however, early detection can improve treatment outcomes. In this research, a deep learning-based model named “melNET” has been developed to detect melanoma in both dermoscopic and digital images. melNET uses the Inception-v3 architecture for its deep learning component; this architecture was designed following the Hebbian principle and the intuition of multi-scale processing, and the model is trained with the RMSprop optimizer, taking advantage of parallel computing across multiple GPUs. During the training phase, melNET retrains the Inception-v3 network with back-propagation, feeding back the errors from each iteration and thereby fine-tuning the network weights. Once training is complete, melNET can predict the diagnosis of a mole from a lesion image given as input. On a dermoscopic dataset of 200 images provided by PH2, melNET outperforms a YOLO-v2-based approach, improving sensitivity from 86.35% to 97.50%; specificity and accuracy improve from 85.90% to 87.50% and from 86.00% to 89.50%, respectively. melNET has also been evaluated on a digital dataset of 170 images provided by UMCG, achieving an accuracy of 84.71%, which outperforms the 81.00% accuracy of the MED-NODE model. In both cases, melNET was treated as a binary classifier and evaluated with five-fold cross-validation. In addition, melNET performs detections in real time by leveraging the end-to-end Inception-v3 architecture.
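    The sketch below shows, in broad strokes, how an ImageNet-pretrained Inception-v3 can be fine-tuned for binary melanoma classification with the RMSprop optimizer, in the spirit of the melNET description. The directory layout, learning rate, and epoch count are illustrative assumptions rather than melNET's actual settings.

```python
# Hedged sketch: fine-tune an ImageNet-pretrained Inception-v3 for binary
# melanoma classification with RMSprop. Hyperparameters and the directory
# layout (train/{melanoma,benign}/*.jpg) are assumptions for illustration.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(299, 299, 3), pooling="avg")
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", label_mode="binary", image_size=(299, 299), batch_size=16)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))

model.fit(train_ds, epochs=5)
```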

    Towards PACE-CAD Systems

    Despite phenomenal advancements in the availability of medical image datasets and the development of modern classification algorithms, Computer-Aided Diagnosis (CAD) has had limited practical exposure in the real-world clinical workflow. This is primarily because of the inherently demanding and sensitive nature of medical diagnosis, which can have far-reaching and serious repercussions in case of misdiagnosis. In this work, a paradigm called PACE (Pragmatic, Accurate, Confident, & Explainable) is presented as a set of must-have features for any CAD system. Diagnosis of glaucoma using Retinal Fundus Images (RFIs) is taken as the primary use case for the development of various methods that may enrich an ordinary CAD system with PACE. However, depending on the specific requirements of different methods, other application areas in ophthalmology and dermatology have also been explored.

    A pragmatic CAD system is one that performs reliably in the day-to-day clinical setup. In this research, two of the possibly many aspects of a pragmatic CAD are addressed. Firstly, observing that existing medical image datasets are small and not representative of images taken in the real world, a large RFI dataset for glaucoma detection is curated and published. Secondly, recognising that a salient attribute of a reliable and pragmatic CAD is its ability to perform in a range of clinically relevant scenarios, classification of 622 unique cutaneous diseases in one of the largest publicly available datasets of skin lesions is successfully performed.

    Accuracy is one of the most essential metrics of any CAD system's performance. Domain knowledge relevant to three types of diseases, namely glaucoma, Diabetic Retinopathy (DR), and skin lesions, is utilised in an attempt to improve accuracy. For glaucoma, a two-stage framework for automatic Optic Disc (OD) localisation and glaucoma detection is developed, which sets a new state of the art for both glaucoma detection and OD localisation. To identify DR, a model is proposed that combines coarse-grained with fine-grained classifiers and grades the disease into four stages with respect to severity. Lastly, different methods of modelling and incorporating metadata are examined and their effect on a model's classification performance is studied.

    Confidence in a diagnosis is as important as the diagnosis itself. One of the biggest obstacles hampering the successful deployment of CAD in the real world is that a medical diagnosis cannot be readily decided on the basis of an algorithm's output alone. Therefore, a hybrid CNN architecture is proposed, with the convolutional feature extractor trained using point estimates and a dense classifier trained using Bayesian estimates. Evaluation on 13 publicly available datasets shows the superiority of this method in terms of classification accuracy, and it also provides an estimate of uncertainty for every prediction.

    Explainability of AI-driven algorithms has become a legal requirement since Europe's General Data Protection Regulation came into effect. This research presents a framework, called ExAID (Explainable AI for Dermatology), for easy-to-understand textual explanations of skin lesion diagnosis, which relies on two fundamental modules. The first module uses any deep skin lesion classifier and performs a detailed analysis of its latent space to map human-understandable disease-related concepts to the latent representation learnt by the deep model. The second module proposes Concept Localisation Maps, which extend Concept Activation Vectors by locating significant regions corresponding to a learned concept in the latent space of a trained image classifier.

    This thesis probes many viable solutions to equip a CAD system with PACE. However, some of these methods require specific attributes in the datasets and therefore not all methods may be applied to a single dataset. Regardless, this work anticipates that consolidating PACE into a CAD system can not only increase the confidence of medical practitioners in such tools but also serve as a stepping stone for the further development of AI-driven technologies in healthcare.
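    To make the hybrid point-estimate/Bayesian design described above more concrete, the sketch below keeps a frozen deterministic convolutional feature extractor and attaches a small stochastic classifier head, using Monte-Carlo dropout as a simple stand-in for the thesis's Bayesian dense layers. The backbone, layer sizes, and sample count are assumptions.

```python
# Illustrative sketch: deterministic (point-estimate) CNN feature extractor
# plus a classifier head kept stochastic at test time (MC dropout as a
# stand-in for Bayesian dense layers) to obtain a predictive mean and an
# uncertainty estimate per input.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # any CNN feature extractor
backbone.fc = nn.Identity()                # expose 512-d features
backbone.eval()                            # deterministic, point-estimate weights

head = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(512, 128), nn.ReLU(),
                     nn.Dropout(p=0.5), nn.Linear(128, 2))

@torch.no_grad()
def predict_with_uncertainty(x: torch.Tensor, samples: int = 20):
    feats = backbone(x)
    head.train()                           # keep dropout active at test time
    probs = torch.stack([torch.softmax(head(feats), dim=1)
                         for _ in range(samples)])
    return probs.mean(dim=0), probs.std(dim=0)   # predictive mean and spread
```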

    Diagnosis of skin cancer using novel computer vision and deep learning techniques

    Recent years have seen an increase in the total number of skin cancer cases, which is projected to grow exponentially; however, the mortality rate of malignant melanoma can be decreased if it is diagnosed and treated at an early stage. The visual similarity between benign and malignant lesions makes diagnosis difficult even for an expert dermatologist, increasing the chance of false prediction. This dissertation proposes two novel computer-aided diagnosis methods for the classification of malignant lesions. The first method pre-processes the acquired image with the DullRazor method (for digital hair removal) and histogram equalisation. The image is then segmented by the proposed LR-fuzzy-logic method, which achieves an accuracy, sensitivity and specificity of 96.50%, 97.50% and 96.25% on the PH2 dataset; 96.16%, 91.88% and 98.26% on the ISIC 2017 dataset; and 95.91%, 91.62% and 97.37% on the ISIC 2018 dataset, respectively. Furthermore, the image is classified by a modified You Only Look Once (YOLO v3) classifier, which yields an accuracy, sensitivity and specificity of 98.16%, 95.43% and 99.50%, respectively. The second method enhances the images by removing digital artefacts and applying histogram equalisation. Thereafter, a triangular neutrosophic number (TNN) approach is used for lesion segmentation, achieving an accuracy, sensitivity and specificity of 99.00%, 97.50% and 99.38% for PH2; 98.83%, 98.48% and 99.01% for ISIC 2017; 98.56%, 98.50% and 98.58% for ISIC 2018; and 97.86%, 97.56% and 97.97% for the ISIC 2019 dataset, respectively. Furthermore, data augmentation is performed by adding artefacts and noise to the training dataset and rotating the images by 65°, 135° and 215°, increasing the training dataset from 30,946 to 92,838 images. Additionally, a novel classifier based on inception and residual modules is trained on the augmented dataset and achieves an accuracy, sensitivity and specificity of 99.50%, 100% and 99.38% for PH2; 99.33%, 98.48% and 99.75% for ISIC 2017; 98.56%, 97.61% and 98.88% for ISIC 2018; and 98.04%, 96.67% and 98.52% for the ISIC 2019 dataset, respectively. Finally, the proposed methods are deployed in real-time mobile applications, enabling users to diagnose suspected lesions with ease and accuracy.
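    A hedged sketch of the first method's pre-processing stage: DullRazor-style digital hair removal via a morphological black-hat transform and inpainting, followed by histogram equalisation on the luminance channel. The kernel size and threshold value are illustrative, not taken from the dissertation.

```python
# Hedged sketch of DullRazor-style hair removal plus histogram equalisation.
# Kernel size and threshold are illustrative choices.
import cv2
import numpy as np

def preprocess_lesion_image(bgr: np.ndarray) -> np.ndarray:
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Dark, thin hairs respond strongly to the black-hat transform.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(grey, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # Fill the hair pixels by inpainting from the surrounding skin.
    clean = cv2.inpaint(bgr, hair_mask, 3, cv2.INPAINT_TELEA)
    # Histogram equalisation on the luminance channel only.
    ycrcb = cv2.cvtColor(clean, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```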