Early Exanthema Upon Vemurafenib Plus Cobimetinib Is Associated With a Favorable Treatment Outcome in Metastatic Melanoma: A Retrospective Multicenter DeCOG Study
Background
The combination of BRAF and MEK inhibitors has become the standard of care in the treatment of metastatic BRAF V600-mutated melanoma. Clinical factors that allow early prediction of tumor response are rare. The present study investigated the association between the development of an early exanthema induced by vemurafenib or vemurafenib plus cobimetinib and therapy outcome.
Methods
This multicenter retrospective study included patients with BRAF V600-mutated, unresectable AJCC-v8 stage IIIC/D to IV metastatic melanoma who received treatment with vemurafenib (VEM) or vemurafenib plus cobimetinib (COBIVEM). The development of an early exanthema within six weeks after therapy start, graded according to CTCAEv4.0 criteria, was correlated with therapy outcome in terms of best overall response, progression-free survival (PFS), and overall survival (OS).
Results
A total of 422 patients from 16 centers were included (VEM, n=299; COBIVEM, n=123). An early exanthema developed in 20.4% of VEM and 43.1% of COBIVEM patients. In the VEM cohort, objective responders (CR/PR) presented with an early exanthema more frequently than non-responders (SD/PD) (59.0% versus 38.7%; p=0.0027). However, median PFS and OS did not differ between VEM patients with or without an early exanthema (PFS, 6.9 versus 6.0 months, p=0.65; OS, 11.0 versus 12.4 months, p=0.69). In the COBIVEM cohort, 66.0% of objective responders had an early exanthema compared to 54.3% of non-responders (p=0.031). Median survival times were significantly longer for patients who developed an early exanthema than for patients who did not (PFS, 9.7 versus 5.6 months, p=0.013; OS, not reached versus 11.6 months, p=0.0061). COBIVEM patients with a mild early exanthema (CTCAEv4.0 grade 1-2) had a superior survival outcome compared with COBIVEM patients with a severe (CTCAEv4.0 grade 3-4) or no early exanthema (p=0.047). This may be explained by the fact that 23.6% of patients with a severe exanthema underwent a dose reduction or discontinuation of COBIVEM, compared to only 8.9% of patients with a mild exanthema.
Conclusions
The development of an early exanthema within 6 weeks after treatment start indicates a favorable therapy outcome upon vemurafenib plus cobimetinib. Patients presenting with an early exanthema should therefore be treated with adequate supportive measures to ensure that they can stay on treatment.
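The association reported above between early exanthema and objective response is a 2x2 comparison (exanthema yes/no by responder yes/no), the kind typically assessed with a chi-square or Fisher's exact test. As an illustrative sketch only (the study does not publish its raw counts, so the numbers below are invented, not the study's data), a two-sided Fisher's exact test can be computed with the standard library:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one under the hypergeometric null.
    """
    n = a + b + c + d
    r1, c1 = a + b, a + c              # row-1 and column-1 margins
    def prob(x):                        # P(top-left cell = x | fixed margins)
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: exanthema among responders vs. non-responders
# (illustrative only -- not taken from the study)
p = fisher_exact_two_sided(23, 16, 24, 38)
```

A p-value below 0.05 in such a table would indicate that an early exanthema occurs disproportionately among responders.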
Tolerability of Narrow-band Ultraviolet-B Phototherapy for Different Dermatological Diseases in Relation to Co-medications
Phototherapy is an effective treatment for a variety of skin diseases. Various drugs can cause photosensitivity and affect the tolerability of phototherapy. This study investigated the tolerability of narrowband ultraviolet-B 311 nm therapy in relation to the underlying disease and long-term co-medication. A total of 534 narrowband ultraviolet-B therapy courses were examined. Compared with psoriasis, adverse events were observed more frequently in eczematous diseases and, in some cases, in other indications. About two-thirds of all courses were carried out in patients taking at least one photosensitising drug according to the summaries of product characteristics. Phototherapy was more frequently associated with adverse events when such medication was taken concomitantly. However, when the tolerability of phototherapy was analysed in relation to individual substances or drug classes, no statistically significant association remained after adjustment.
Superior skin cancer classification by the combination of human and artificial intelligence
Background: In recent studies, convolutional neural networks (CNNs) outperformed dermatologists in distinguishing dermoscopic images of melanoma and nevi. In these studies, dermatologists and artificial intelligence were considered as opponents. However, the combination of classifiers frequently yields superior results, both in machine learning and among humans. In this study, we investigated the potential benefit of combining human and artificial intelligence for skin cancer classification. Methods: Using 11,444 dermoscopic images, which were divided into five diagnostic categories, novel deep learning techniques were used to train a single CNN. Then, both 112 dermatologists from 13 German university hospitals and the trained CNN independently classified a set of 300 biopsy-verified skin lesions into those five classes. Taking into account the certainty of the decisions, the two independently determined diagnoses were combined into a new classifier with the help of a gradient boosting method. The primary end-point of the study was the correct classification of the images into the five designated categories, whereas the secondary end-point was the correct classification of lesions as either benign or malignant (binary classification). Findings: Regarding the multiclass task, the combination of human and machine achieved an accuracy of 82.95%. This was 1.36% higher than the best of the two individual classifiers (81.59%, achieved by the CNN). Owing to the class imbalance in the binary problem, sensitivity, but not accuracy, was examined and demonstrated to be superior (89%) to the best individual classifier (CNN with 86.1%). The specificity in the combined classifier decreased from 89.2% to 84%. However, at an equal sensitivity of 89%, the CNN achieved a specificity of only 81.5%. Interpretation: Our findings indicate that the combination of human and artificial intelligence achieves superior results over the independent results of both of these systems. (C) 2019 The Author(s). 
Published by Elsevier Ltd
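The fusion step can be pictured as follows. The study combined each dermatologist's diagnosis and the CNN's diagnosis, together with their stated certainty, using a gradient boosting classifier. As a dependency-free stand-in for that combiner (a deliberately simpler method than the gradient boosting actually used, shown only to illustrate the idea of certainty-aware fusion; all names and numbers are mine, not the paper's):

```python
def fuse_predictions(human_probs, cnn_probs, human_certainty, cnn_certainty):
    """Certainty-weighted average of two class-probability vectors.

    Simplified stand-in for the paper's gradient-boosting combiner: each
    classifier's vote counts in proportion to its self-reported certainty.
    """
    w_human = human_certainty / (human_certainty + cnn_certainty)
    w_cnn = 1.0 - w_human
    return [w_human * h + w_cnn * c for h, c in zip(human_probs, cnn_probs)]

# A confident CNN overrides a less certain human rater (toy two-class case):
fused = fuse_predictions([0.6, 0.4], [0.1, 0.9],
                         human_certainty=0.6, cnn_certainty=0.9)
predicted_class = max(range(len(fused)), key=fused.__getitem__)
```

The design point this illustrates is that the combined classifier can defer to whichever input is more certain, which is how a fusion can beat both of its inputs on average.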
Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks
Background: Recently, convolutional neural networks (CNNs) systematically outperformed dermatologists in distinguishing dermoscopic melanoma and nevi images. However, such a binary classification does not reflect the clinical reality of skin cancer screenings, in which multiple diagnoses need to be taken into account. Methods: Using 11,444 dermoscopic images, which covered the dermatologic diagnoses comprising the majority of pigmented skin lesions commonly encountered in skin cancer screenings, a CNN was trained through novel deep learning techniques. A test set of 300 biopsy-verified images was used to compare the classifier's performance with that of 112 dermatologists from 13 German university hospitals. The primary end-point was the correct classification of the different lesions into benign and malignant. The secondary end-point was the correct classification of the images into one of the five diagnostic categories. Findings: Sensitivity and specificity of dermatologists for the primary end-point were 74.4% (95% confidence interval [CI]: 67.0-81.8%) and 59.8% (95% CI: 49.8-69.8%), respectively. At equal sensitivity, the algorithm achieved a specificity of 91.3% (95% CI: 85.5-97.1%). For the secondary end-point, the mean sensitivity and specificity of the dermatologists were 56.5% (95% CI: 42.8-70.2%) and 89.2% (95% CI: 85.0-93.3%), respectively. At equal sensitivity, the algorithm achieved a specificity of 98.8%. Two-sided McNemar tests revealed significance for the primary end-point (p < 0.001). For the secondary end-point, outperformance (p < 0.001) was achieved for all categories except basal cell carcinoma (on-par performance). Interpretation: Our findings show that automated classification of dermoscopic melanoma and nevi images is extendable to a multiclass classification problem, thus better reflecting clinical differential diagnoses, while still significantly outperforming dermatologists (p < 0.001).
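The McNemar test used for these end-points operates on paired decisions: for each lesion, whether the dermatologist and the CNN each classified it correctly. Only the discordant pairs enter the statistic. A minimal exact version, standard library only (the variable names are mine, and the counts in any call would be the analyst's, not the paper's):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test for paired classifier comparisons.

    b: lesions the dermatologist got right but the CNN got wrong
    c: lesions the CNN got right but the dermatologist got wrong
    Under H0 (equal performance), each discordant pair is a fair coin flip,
    so the smaller count follows a Binomial(b + c, 0.5) tail.
    """
    n, k = b + c, min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

With heavily lopsided discordant counts (e.g. the CNN correcting many of the dermatologist's errors but rarely the reverse), the p-value becomes small, which is what "significance for the primary end-point" reports.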
A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task
Background: Recent studies have demonstrated the use of convolutional neural networks (CNNs) to classify images of melanoma with accuracies comparable to those achieved by board-certified dermatologists. However, the performance of a CNN exclusively trained with dermoscopic images in a clinical image classification task, in direct competition with a large number of dermatologists, has not been measured to date. This study compares the performance of a convolutional neural network trained exclusively with dermoscopic images for identifying melanoma in clinical photographs with the manual grading of the same images by dermatologists. Methods: We compared automatic digital melanoma classification with the performance of 145 dermatologists from 12 German university hospitals. We used methods from enhanced deep learning to train a CNN with 12,378 open-source dermoscopic images. We used 100 clinical images to compare the performance of the CNN to that of the dermatologists. Dermatologists were compared with the deep neural network in terms of sensitivity, specificity and receiver operating characteristics. Findings: The mean sensitivity and specificity achieved by the dermatologists with clinical images were 89.4% (range: 55.0%-100%) and 64.4% (range: 22.5%-92.5%), respectively. At the same sensitivity, the CNN exhibited a mean specificity of 68.2% (range: 47.5%-86.25%). Among the dermatologists, the attendings showed the highest mean sensitivity of 92.8% at a mean specificity of 57.7%. With the same high sensitivity of 92.8%, the CNN had a mean specificity of 61.1%. Interpretation: For the first time, dermatologist-level image classification was achieved on a clinical image classification task without training on clinical images. The CNN had a smaller variance of results, indicating a higher robustness of computer vision compared with human assessment for dermatologic image classification tasks.
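The "at the same sensitivity" comparisons in these findings come from the CNN's ROC curve: the decision threshold on the CNN's malignancy scores is moved until its sensitivity matches the dermatologists' mean, and the specificity is read off at that operating point. A sketch of that procedure on toy scores (standard library only; the data below is invented for illustration):

```python
def specificity_at_sensitivity(scores, labels, target_sensitivity):
    """Best specificity among thresholds whose sensitivity reaches the target.

    scores: per-lesion malignancy scores (higher = more suspicious)
    labels: 1 for melanoma, 0 for benign
    """
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    best_spec = 0.0
    for t in sorted(set(scores)):
        sensitivity = sum(s >= t for s in positives) / len(positives)
        specificity = sum(s < t for s in negatives) / len(negatives)
        if sensitivity >= target_sensitivity:
            best_spec = max(best_spec, specificity)
    return best_spec

# Toy example: three melanomas, three nevi
spec = specificity_at_sensitivity(
    scores=[0.9, 0.8, 0.4, 0.3, 0.2, 0.6],
    labels=[1, 1, 1, 0, 0, 0],
    target_sensitivity=1.0,
)
```

Because the threshold is continuous, the CNN can be evaluated at any reader's operating point, which is what makes the head-to-head comparison fair.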
Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task
Background: Recent studies have successfully demonstrated dermatologist-level classification of suspicious lesions by deep-learning algorithms trained on extensive proprietary image databases and evaluated against limited numbers of dermatologists. For the first time, the performance of a deep-learning algorithm trained exclusively on open-source images is compared to a large number of dermatologists covering all levels of the clinical hierarchy. Methods: We used methods from enhanced deep learning to train a convolutional neural network (CNN) with 12,378 open-source dermoscopic images. We used 100 images to compare the performance of the CNN to that of 157 dermatologists from 12 university hospitals in Germany. Outperformance of dermatologists by the deep neural network was measured in terms of sensitivity, specificity and receiver operating characteristics. Findings: The mean sensitivity and specificity achieved by the dermatologists with dermoscopic images were 74.1% (range: 40.0%-100%) and 60% (range: 21.3%-91.3%), respectively. At a mean sensitivity of 74.1%, the CNN exhibited a mean specificity of 86.5% (range: 70.8%-91.3%). At a mean specificity of 60%, a mean sensitivity of 87.5% (range: 80%-95%) was achieved by our algorithm. Among the dermatologists, the chief physicians showed the highest mean specificity of 69.2% at a mean sensitivity of 73.3%. With the same high specificity of 69.2%, the CNN had a mean sensitivity of 84.5%. Interpretation: A CNN trained exclusively on open-source images outperformed 136 of the 157 dermatologists, and dermatologists at all levels of experience (from junior to chief physicians), in terms of average specificity and sensitivity.
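A result like "outperformed 136 of 157" can be read off the ROC curve: a dermatologist is counted as outperformed when some CNN operating point matches or beats both their sensitivity and their specificity. A small sketch of that dominance count (the example points are invented for illustration, not the study's data):

```python
def count_outperformed(reader_points, roc_points):
    """Count readers dominated by at least one CNN operating point.

    Each point is (sensitivity, specificity). A reader is outperformed if
    some CNN point is at least as good on both axes and strictly better
    on at least one.
    """
    def beats(cnn, reader):
        return cnn[0] >= reader[0] and cnn[1] >= reader[1] and cnn != reader
    return sum(any(beats(p, r) for p in roc_points) for r in reader_points)

# Invented example: the first reader sits below the CNN's ROC curve,
# the second sits above it
n = count_outperformed(
    reader_points=[(0.70, 0.60), (0.95, 0.90)],
    roc_points=[(0.90, 0.80), (0.74, 0.87)],
)
```

Readers above the curve, like the second point here, are the minority (21 of 157 in the study) that the CNN did not outperform.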