
    Contributions to the segmentation of dermoscopic images

    Master's dissertation. Master's in Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Computer-Aided Diagnosis for Melanoma using Ontology and Deep Learning Approaches

    The emergence of deep-learning algorithms offers great potential to improve the predictive performance of computer-aided diagnosis systems. Recent research indicates that well-trained algorithms can match the accuracy of experienced senior clinicians in dermatology. However, their lack of interpretability and transparency hinders their utility in real life: physicians and patients require a certain level of interpretability before they will accept and trust the results. Another limitation of these algorithms is that they disregard other information relevant to diagnosis, such as typical dermoscopic features and diagnostic guidelines. Clinical guidelines for skin disease diagnosis are built on dermoscopic features, yet a structured, standard representation of the relevant knowledge in the skin disease domain is lacking. To address these challenges, this dissertation builds an ontology that formally represents knowledge of dermoscopic features and develops an explainable deep learning model that diagnoses skin diseases and identifies dermoscopic features. Additionally, the trained model can be applied to large-scale unlabeled datasets to automate the feature generation process. Computer-vision feature extraction algorithms are combined with the deep learning model to improve overall classification accuracy and reduce manual annotation effort.
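    A minimal sketch of the automated feature generation step the abstract alludes to: applying an already-trained classifier to unlabeled images and keeping only confident predictions. The model, loader, and confidence threshold are assumptions for illustration, not details from the dissertation.

    ```python
    # Hypothetical pseudo-labeling sketch (PyTorch); `model` is any trained
    # classifier and `loader` iterates over batches of unlabeled images.
    import torch

    @torch.no_grad()
    def pseudo_label(model, loader, threshold=0.9):
        """Return predicted labels, with -1 marking low-confidence images."""
        model.eval()
        labels = []
        for images in loader:
            probs = torch.softmax(model(images), dim=1)
            conf, preds = probs.max(dim=1)
            preds[conf < threshold] = -1  # flag for manual annotation instead
            labels.append(preds)
        return torch.cat(labels)
    ```

    Images flagged -1 would fall back to manual annotation, so confident model output replaces most of the hand-labeling effort.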

    ENHANCE (ENriching Health data by ANnotations of Crowd and Experts): A case study for skin lesion classification

    We present ENHANCE, an open dataset with multiple annotations that complements the existing ISIC and PH2 skin lesion classification datasets. The dataset contains annotations of visual ABC (asymmetry, border, colour) features from non-expert sources: undergraduate students, crowd workers from Amazon MTurk, and classic image processing algorithms. In this paper we first analyse the correlations between the annotations and the diagnostic label of the lesion, and study the agreement between different annotation sources. Overall we find weak correlations of non-expert annotations with the diagnostic label, and low agreement between different annotation sources. We then study multi-task learning (MTL) with the annotations as additional labels, and show that non-expert annotations can improve (ensembles of) state-of-the-art convolutional neural networks via MTL. We hope that our dataset can be used in further research into multiple annotations and/or MTL. All data and models are available on GitHub: https://github.com/raumannsr/ENHANCE.
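    A minimal sketch of MTL in the spirit of this abstract: a shared CNN backbone with a diagnosis head plus auxiliary heads for the ABC annotations. The backbone choice, head layout, and loss weighting are assumptions, not the authors' architecture.

    ```python
    # Hypothetical multi-task model: one shared feature extractor, one
    # diagnosis head, and one auxiliary head per non-expert ABC annotation.
    import torch.nn as nn
    from torchvision import models

    class MultiTaskSkinNet(nn.Module):
        def __init__(self, n_diagnoses=2):
            super().__init__()
            backbone = models.resnet18(weights=None)
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()  # expose shared features
            self.backbone = backbone
            self.diagnosis = nn.Linear(feat_dim, n_diagnoses)
            self.asymmetry = nn.Linear(feat_dim, 1)
            self.border = nn.Linear(feat_dim, 1)
            self.colour = nn.Linear(feat_dim, 1)

        def forward(self, x):
            f = self.backbone(x)
            return (self.diagnosis(f), self.asymmetry(f),
                    self.border(f), self.colour(f))
    ```

    Training would sum a diagnosis loss with down-weighted auxiliary losses (e.g. cross-entropy plus 0.3 times the ABC regression terms), so the extra labels regularize the shared features without dominating the main task.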

    Agreement Between Experts and an Untrained Crowd for Identifying Dermoscopic Features Using a Gamified App: Reader Feasibility Study

    Background: Dermoscopy is commonly used for the evaluation of pigmented lesions, but agreement between experts for identification of dermoscopic structures is known to be relatively poor. Expert labeling of medical data is a bottleneck in the development of machine learning (ML) tools, and crowdsourcing has been demonstrated as a cost- and time-efficient method for the annotation of medical images.
    Objective: The aim of this study is to demonstrate that crowdsourcing can be used to label basic dermoscopic structures from images of pigmented lesions with reliability similar to that of a group of experts.
    Methods: First, we obtained labels for 248 images of melanocytic lesions, with 31 dermoscopic "subfeatures" labeled by 20 dermoscopy experts. Because of low interrater reliability (IRR), these were collapsed into 6 dermoscopic "superfeatures" based on structural similarity: dots, globules, lines, network structures, regression structures, and vessels. These images were then used as the gold standard for the crowd study. The commercial platform DiagnosUs was used to obtain annotations from a nonexpert crowd for the presence or absence of the 6 superfeatures in each of the 248 images. We replicated this methodology with a group of 7 dermatologists to allow direct comparison with the nonexpert crowd. The Cohen κ value was used to measure agreement across raters.
    Results: In total, we obtained 139,731 ratings of the 6 dermoscopic superfeatures from the crowd. Agreement was relatively low for the identification of dots and globules (median κ values of 0.526 and 0.395, respectively), whereas network structures and vessels showed the highest agreement (median κ values of 0.581 and 0.798, respectively). The same pattern was seen among the expert raters, who had median κ values of 0.483 and 0.517 for dots and globules, respectively, and 0.758 and 0.790 for network structures and vessels. The median κ values between nonexperts and thresholded average-expert readers were 0.709 for dots, 0.719 for globules, 0.714 for lines, 0.838 for network structures, 0.818 for regression structures, and 0.728 for vessels.
    Conclusions: This study confirmed that IRR for different dermoscopic features varied among a group of experts; a similar pattern was observed in a nonexpert crowd. There was good or excellent agreement between the crowd and the experts for each of the 6 superfeatures, highlighting the comparable reliability of the crowd for labeling dermoscopic images. This confirms the feasibility and dependability of using crowdsourcing as a scalable solution to annotate large sets of dermoscopic images, with several potential clinical and educational applications, including the development of novel, explainable ML tools.
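    A minimal sketch of the agreement statistic used in this study: Cohen's κ between two raters' presence/absence labels for one superfeature. The rating arrays below are made-up examples, not study data.

    ```python
    # Cohen's kappa for two raters on binary (present/absent) labels.
    from sklearn.metrics import cohen_kappa_score

    expert = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = feature present
    crowd  = [1, 0, 1, 0, 0, 1, 0, 1]

    kappa = cohen_kappa_score(expert, crowd)
    print(f"Cohen's kappa: {kappa:.3f}")  # chance-corrected agreement
    ```

    Unlike raw percent agreement, κ corrects for the agreement expected by chance, which is why it is the standard choice for interrater reliability on categorical labels.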