10 research outputs found

    Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task

    No full text
    BACKGROUND: Recent studies have demonstrated deep-learning algorithms for dermatologist-level classification of suspicious lesions, but they relied on extensive proprietary image databases and limited numbers of dermatologists. For the first time, the performance of a deep-learning algorithm trained exclusively on open-source images is compared with that of a large number of dermatologists covering all levels of the clinical hierarchy. METHODS: We used enhanced deep-learning methods to train a convolutional neural network (CNN) with 12,378 open-source dermoscopic images. We used 100 images to compare the performance of the CNN to that of 157 dermatologists from 12 university hospitals in Germany. The CNN and the dermatologists were compared in terms of sensitivity, specificity and receiver operating characteristics. FINDINGS: The mean sensitivity and specificity achieved by the dermatologists with dermoscopic images were 74.1% (range 40.0%-100%) and 60% (range 21.3%-91.3%), respectively. At the dermatologists' mean sensitivity of 74.1%, the CNN exhibited a mean specificity of 86.5% (range 70.8%-91.3%). At their mean specificity of 60%, the CNN achieved a mean sensitivity of 87.5% (range 80%-95%). Among the dermatologists, the chief physicians showed the highest mean specificity, 69.2%, at a mean sensitivity of 73.3%. At the same specificity of 69.2%, the CNN had a mean sensitivity of 84.5%. INTERPRETATION: A CNN trained exclusively on open-source images outperformed 136 of the 157 dermatologists, and all levels of experience from junior to chief physicians, in terms of average specificity and sensitivity.
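
    The comparison described above amounts to reading the CNN's receiver operating characteristic (ROC) curve at the dermatologists' mean operating points. The sketch below illustrates only that calculation; the labels and scores are random placeholders rather than the study's data, and the helper function names are made up for the example.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Placeholder ground-truth labels and CNN melanoma probabilities;
# the study's predictions are not reproduced here.
y_true = np.random.randint(0, 2, 100)
y_score = np.random.rand(100)

fpr, tpr, _ = roc_curve(y_true, y_score)   # tpr = sensitivity, 1 - fpr = specificity
specificity = 1.0 - fpr

def specificity_at_sensitivity(target_sens):
    """Specificity at the first ROC point whose sensitivity reaches the target."""
    idx = int(np.argmax(tpr >= target_sens))
    return specificity[idx]

def sensitivity_at_specificity(target_spec):
    """Highest sensitivity among ROC points that keep at least the target specificity."""
    valid = specificity >= target_spec
    return float(tpr[valid].max()) if valid.any() else 0.0

# Dermatologists' mean operating point from the abstract.
print(specificity_at_sensitivity(0.741))   # the study reports 86.5% at this sensitivity
print(sensitivity_at_specificity(0.600))   # the study reports 87.5% at this specificity
```

    With the study's actual labels and CNN scores, the two calls at the end would be expected to correspond to the reported 86.5% specificity and 87.5% sensitivity.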

    A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task

    No full text
    Background: Recent studies have demonstrated that convolutional neural networks (CNNs) can classify images of melanoma with accuracies comparable to those achieved by board-certified dermatologists. However, the performance of a CNN trained exclusively with dermoscopic images on a clinical image classification task, in direct competition with a large number of dermatologists, has not been measured to date. This study compares a CNN trained exclusively with dermoscopic images against the manual grading of the same clinical photographs by dermatologists for identifying melanoma. Methods: We compared automatic digital melanoma classification with the performance of 145 dermatologists from 12 German university hospitals. We used enhanced deep-learning methods to train a CNN with 12,378 open-source dermoscopic images. We used 100 clinical images to compare the performance of the CNN to that of the dermatologists. Dermatologists were compared with the deep neural network in terms of sensitivity, specificity and receiver operating characteristics. Findings: The mean sensitivity and specificity achieved by the dermatologists with clinical images were 89.4% (range: 55.0%-100%) and 64.4% (range: 22.5%-92.5%), respectively. At the same sensitivity, the CNN exhibited a mean specificity of 68.2% (range: 47.5%-86.25%). Among the dermatologists, the attendings showed the highest mean sensitivity, 92.8%, at a mean specificity of 57.7%. At the same sensitivity of 92.8%, the CNN had a mean specificity of 61.1%. Interpretation: For the first time, dermatologist-level image classification was achieved on a clinical image classification task without training on clinical images. The CNN showed a smaller variance of results, indicating that computer vision is more robust than human assessment for dermatologic image classification tasks. (C) 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
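
    The abstract does not specify the network architecture or training setup. The following is therefore only a generic illustration of how a binary melanoma classifier might be trained on open-source dermoscopic images via transfer learning; the ResNet-50 backbone, folder layout, and hyperparameters are assumptions for the sketch, not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Conventional ImageNet preprocessing; input size and statistics are standard choices.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: dermoscopy/train/{melanoma,nevus}/*.jpg
train_set = datasets.ImageFolder("dermoscopy/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ImageNet-pretrained backbone with a new two-class head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # a single pass, for illustration only
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

    Starting from ImageNet weights rather than random initialization is a common way to make a dataset of roughly 12,000 images sufficient for a binary classification task of this kind.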

    Harnessing the complexity of DNA-damage response pathways to improve cancer treatment outcomes

    No full text
    The DNA-damage response (DDR) pathways consist of interconnected components that respond to DNA damage to allow repair and promote cell survival. The DNA repair pathways and downstream cellular responses have diverged in cancer cells compared with normal cells because of genetic alterations that underlie drug resistance, disabled repair and resistance to apoptosis. Consequently, abrogating DDR pathways represents an important mechanism for enhancing the therapeutic index of DNA-damaging anticancer agents. In this review, we discuss the DDR pathways that determine antitumor effects of DNA-damaging agents, with a specific focus on treatment outcomes in tumors carrying a defective p53 pathway. Finely tuned survival and death pathways govern the cellular responses downstream of the cytotoxic insults inherent in anticancer treatment. The significance and relative contributions of cellular responses including apoptosis, mitotic catastrophe and senescence are discussed in relation to the web of molecular interactions that affect such outcomes. We propose that promising combinations of DNA-damaging anticancer treatments with DDR-pathway inhibition would be further enhanced by activating downstream apoptotic pathways. The proposed rationale ensures that actual cell death is the preferred outcome of cancer treatment, rather than other responses such as reversible cell cycle arrest, autophagy or senescence. Finally, to better measure the contribution of different cellular responses to anticancer treatments, multiplex in vivo assessments of therapy-induced response pathways such as cell death, senescence and mitotic catastrophe are desirable, rather than the current reliance on measuring a single response pathway such as apoptosis. © 2010 Macmillan Publishers Limited. All rights reserved

    Micronucleus Assays

    No full text