5 research outputs found

    Linear patterns of the skin and their dermatoses

    Knowledge of the linear patterns of the skin is a key competence of dermatologists. Four major groups of linear patterns can be distinguished: Langer lines, dermatomes, Blaschko lines and exogenous patterns. Langer lines run in the direction of the underlying collagen fibers (the direction of least skin tension) and play an important diagnostic role in some exanthematous skin diseases. In the thoracodorsal region, the distribution of the Langer lines gives rise to what is referred to as a 'Christmas tree pattern'. A dermatome is an area of skin supplied by a single spinal nerve; disorders of neuronal origin follow this pattern of distribution. The lines of Blaschko delineate the migration paths of epidermal cells during embryogenesis. Exogenous linear patterns are caused by external factors. The present CME article highlights important skin disorders that primarily present in one of the aforementioned patterns. In addition, we address skin conditions that may secondarily follow these patterns (or distinctly spare them) as a result of various mechanisms such as the Koebner phenomenon, the reverse Koebner phenomenon, and Wolf's isotopic response.

    Treatment Motivations and Expectations in Patients with Actinic Keratosis: A German-Wide Multicenter, Cross-Sectional Trial

    Patient-centered motives and expectations regarding the treatment of actinic keratoses (AK) have received little attention to date. We therefore aimed to profile and cluster treatment motivations and expectations among patients with AK in a nationwide multicenter, cross-sectional study including patients from 14 German skin cancer centers. Patients were asked to complete a self-administered questionnaire. Treatment motives and expectations towards AK management were measured on a visual analogue scale (VAS) from 1 to 10. Specific patient profiles were investigated with subgroup and correlation analyses. Overall, 403 patients were included. The highest motivation values were obtained for the items “avoid transition to invasive squamous cell carcinoma” (mean ± standard deviation; 8.98 ± 1.46), “AK are considered precancerous lesions” (8.72 ± 1.34) and “treating physician recommends treatment” (8.10 ± 2.37; p < 0.0001). The highest expectation values were observed for the items “effective lesion clearance” (8.36 ± 1.99), “safety” (8.20 ± 2.03) and “treatment-related costs are covered by health insurance” (8.00 ± 2.41; p < 0.0001). Patients aged ≥77 years and those with ≥7 lesions were identified as being at high risk of not undergoing any treatment owing to intrinsic and extrinsic motivation deficits. Heat mapping of the correlation analysis revealed four clusters with distinct motivation and expectation profiles. This study provides a patient-based heuristic tool for personalized treatment decisions in patients with AK.
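
    The four-cluster finding comes from a heat-mapped correlation analysis of the VAS items. As a rough illustration of this kind of analysis (a sketch, not the authors' code), the example below clusters hypothetical item ratings by their pairwise correlations; the item names, the choice of Spearman correlation and the simulated data are assumptions.

        # Hedged sketch: clustering VAS items (1-10) by their pairwise correlations,
        # roughly mirroring the heat-mapped correlation analysis described above.
        # Column names, correlation method and cluster count are illustrative.
        import numpy as np
        import pandas as pd
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        # Hypothetical questionnaire data: one row per patient, one column per VAS item.
        rng = np.random.default_rng(0)
        items = ["avoid_invasive_scc", "ak_precancerous", "physician_recommends",
                 "effective_clearance", "safety", "costs_covered"]
        vas = pd.DataFrame(rng.integers(1, 11, size=(403, len(items))), columns=items)

        # Spearman correlation is a common choice for ordinal VAS ratings (assumption).
        corr = vas.corr(method="spearman")

        # Convert correlations to distances and cluster the items hierarchically.
        dist = 1.0 - corr                        # similar item profiles -> small distance
        condensed = squareform(dist.values, checks=False)
        tree = linkage(condensed, method="average")

        # Cut the dendrogram into four clusters, as the abstract reports four profiles.
        labels = fcluster(tree, t=4, criterion="maxclust")
        for item, cluster in zip(items, labels):
            print(f"cluster {cluster}: {item}")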

    Accuracy of the T and N descriptors of 18F-FDG PET/CT compared with pathological staging of stage I to III lung cancer

    In the treatment of non-small cell lung cancer, curative resection is the primary therapeutic goal in stages I to III. Provided the patient is clinically operable, treatment success depends on exact tumor staging. The introduction of integrated 18F-FDG PET/CT has been a decisive advance in this regard, and its use in preoperative staging is recommended by current guidelines. In this retrospective study, the accuracy of PET/CT in the thoracic staging of lung cancer was investigated and compared with pathological staging. For discordant findings, possible causes and therapeutic relevance were assessed, with misclassification within stages T1-3 and N0/1 regarded as essentially irrelevant to operability. The accuracy of T staging was 57%; the T stage was underestimated in 22% and overestimated in 21% of patients. Underestimations were attributable to limited spatial resolution in 46% of cases and to bronchioloalveolar carcinomas and carcinoids in 20%. Overestimations were likewise attributable to limited spatial resolution in 38% and to inflammatory changes in 32%. In principle, 20% of cases were therapeutically relevant, and in 6% of cases surgery was performed unnecessarily. The accuracy of N staging was 54%; the N stage was underestimated in 23% and overestimated in 24% of patients. Underestimations were attributable to limited spatial resolution in 47% of cases and to bronchioloalveolar carcinomas and carcinoids in 8%, while 66% of overestimations were due to inflammatory changes. In principle, 29% of cases were therapeutically relevant. In 4% of patients, a curative operation would have been withheld without further diagnostic work-up, because PET/CT misclassified them into an inoperable stage. The limits of PET/CT lie in its anatomical resolution, which can be improved by optimizing the CT component. Discrepancies with pathological staging can arise from occult lesions, the time lag between PET/CT and surgery, slowly proliferating malignancies, and superimposed inflammatory or granulomatous diseases. Lesions with lower tracer uptake do not necessarily correlate with the extent of involvement. Since PET-negative findings (in this study, the underestimations) cannot reliably exclude malignancy and PET-positive findings (in this study, the overestimations) cannot reliably confirm it, we recommend additional histological/cytological confirmation of unclear PET/CT findings. Owing to the high proportion of dedifferentiated malignancies among the underestimations and the high proportion of inflammatory reactions among the overestimations, the diagnostic value of PET/CT is limited in the presence of inflammatory reactions and dedifferentiated malignancies.
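
    As an illustration of how accuracy and under-/overstaging rates of this kind are derived from paired PET/CT and pathological descriptors, the following sketch (not from the thesis; the stage ordering and the example data are assumptions) tallies concordant, understaged and overstaged cases.

        # Hedged sketch: computing staging accuracy and the rates of under- and
        # overstaging from paired PET/CT and pathological T descriptors.
        T_ORDER = {"T1": 1, "T2": 2, "T3": 3, "T4": 4}

        # Hypothetical paired findings: (PET/CT T stage, pathological T stage) per patient.
        pairs = [("T1", "T1"), ("T1", "T2"), ("T3", "T2"), ("T2", "T2"), ("T2", "T4")]

        n = len(pairs)
        correct = sum(1 for pet, path in pairs if T_ORDER[pet] == T_ORDER[path])
        under   = sum(1 for pet, path in pairs if T_ORDER[pet] <  T_ORDER[path])  # understaged by PET/CT
        over    = sum(1 for pet, path in pairs if T_ORDER[pet] >  T_ORDER[path])  # overstaged by PET/CT

        print(f"accuracy     : {correct / n:.0%}")
        print(f"understaging : {under / n:.0%}")
        print(f"overstaging  : {over / n:.0%}")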

    A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task

    Background: Recent studies have demonstrated that convolutional neural networks (CNNs) can classify images of melanoma with accuracies comparable to those achieved by board-certified dermatologists. However, the performance of a CNN trained exclusively with dermoscopic images on a clinical image classification task, in direct competition with a large number of dermatologists, had not been measured to date. This study compares a convolutional neural network trained exclusively with dermoscopic images for identifying melanoma in clinical photographs with the manual grading of the same images by dermatologists. Methods: We compared automated digital melanoma classification with the performance of 145 dermatologists from 12 German university hospitals. We used methods from enhanced deep learning to train a CNN with 12,378 open-source dermoscopic images, and 100 clinical images to compare the performance of the CNN with that of the dermatologists. Dermatologists were compared with the deep neural network in terms of sensitivity, specificity and receiver operating characteristics. Findings: The mean sensitivity and specificity achieved by the dermatologists with clinical images were 89.4% (range: 55.0%-100%) and 64.4% (range: 22.5%-92.5%), respectively. At the same sensitivity, the CNN exhibited a mean specificity of 68.2% (range: 47.5%-86.25%). Among the dermatologists, the attendings showed the highest mean sensitivity of 92.8% at a mean specificity of 57.7%. With the same high sensitivity of 92.8%, the CNN had a mean specificity of 61.1%. Interpretation: For the first time, dermatologist-level image classification was achieved on a clinical image classification task without training on clinical images. The CNN had a smaller variance of results, indicating a higher robustness of computer vision compared with human assessment for dermatologic image classification tasks. (C) 2019 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
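
    The matched-operating-point comparison reported here (reading off the CNN's specificity at the dermatologists' mean sensitivity) can be sketched as below; this is not the study's code, and the labels and scores are simulated stand-ins for the CNN's melanoma probabilities on the 100 test images.

        # Hedged sketch: pick the ROC operating point whose sensitivity matches the
        # dermatologists' mean sensitivity and report the CNN's specificity there.
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(1)
        y_true = rng.integers(0, 2, size=100)                                # 1 = melanoma, 0 = benign
        y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, 100), 0, 1)   # toy CNN scores

        fpr, tpr, thresholds = roc_curve(y_true, y_score)

        target_sensitivity = 0.894                     # dermatologists' mean sensitivity
        idx = np.argmax(tpr >= target_sensitivity)     # first operating point reaching the target
        print(f"threshold   : {thresholds[idx]:.3f}")
        print(f"sensitivity : {tpr[idx]:.1%}")
        print(f"specificity : {1 - fpr[idx]:.1%}")     # compared with the dermatologists' 64.4%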

    Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task

    Background: Recent studies have successfully demonstrated the use of deep-learning algorithms for dermatologist-level classification of suspicious lesions, but relied on extensive proprietary image databases and limited numbers of dermatologists. For the first time, the performance of a deep-learning algorithm trained exclusively on open-source images is compared with that of a large number of dermatologists covering all levels of the clinical hierarchy. Methods: We used methods from enhanced deep learning to train a convolutional neural network (CNN) with 12,378 open-source dermoscopic images. We used 100 images to compare the performance of the CNN with that of 157 dermatologists from 12 university hospitals in Germany. Outperformance of the dermatologists by the deep neural network was measured in terms of sensitivity, specificity and receiver operating characteristics. Findings: The mean sensitivity and specificity achieved by the dermatologists with dermoscopic images were 74.1% (range: 40.0%-100%) and 60% (range: 21.3%-91.3%), respectively. At a mean sensitivity of 74.1%, the CNN exhibited a mean specificity of 86.5% (range: 70.8%-91.3%). At a mean specificity of 60%, our algorithm achieved a mean sensitivity of 87.5% (range: 80%-95%). Among the dermatologists, the chief physicians showed the highest mean specificity of 69.2% at a mean sensitivity of 73.3%. With the same high specificity of 69.2%, the CNN had a mean sensitivity of 84.5%. Interpretation: A CNN trained exclusively on open-source images outperformed 136 of the 157 dermatologists, across all levels of experience (from junior to chief physicians), in terms of average specificity and sensitivity. (C) 2019 The Author(s). Published by Elsevier Ltd.
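
    One plausible reading of "outperformed 136 of the 157 dermatologists" is that a dermatologist counts as outperformed when their sensitivity/specificity operating point lies below the CNN's ROC curve; the sketch below illustrates that criterion with simulated inputs and is not necessarily the study's exact definition.

        # Hedged sketch: count raters whose operating point lies below the CNN's ROC
        # curve, i.e. the CNN offers higher sensitivity at the same specificity.
        # All inputs are simulated placeholders.
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(2)
        y_true = rng.integers(0, 2, size=100)
        y_score = np.clip(y_true * 0.35 + rng.normal(0.45, 0.25, 100), 0, 1)
        fpr, tpr, _ = roc_curve(y_true, y_score)

        # Hypothetical dermatologist operating points: (sensitivity, specificity).
        raters = rng.uniform(0.4, 1.0, size=(157, 2))

        def cnn_sensitivity_at(specificity: float) -> float:
            """Best CNN sensitivity among ROC points with at least the given specificity."""
            ok = (1 - fpr) >= specificity
            return tpr[ok].max() if ok.any() else 0.0

        outperformed = sum(1 for sens, spec in raters if cnn_sensitivity_at(spec) > sens)
        print(f"CNN outperformed {outperformed} of {len(raters)} simulated raters")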