The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions
Training of neural networks for automated diagnosis of pigmented skin lesions
is hampered by the small size and lack of diversity of available datasets of
dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human
Against Machine with 10000 training images") dataset. We collected
dermatoscopic images from different populations acquired and stored by
different modalities. Given this diversity we had to apply different
acquisition and cleaning methods and developed semi-automatic workflows
utilizing specifically trained neural networks. The final dataset consists of
10015 dermatoscopic images which are released as a training set for academic
machine learning purposes and are publicly available through the ISIC archive.
This benchmark dataset can be used for machine learning and for comparisons
with human experts. Cases include a representative collection of all important
diagnostic categories in the realm of pigmented lesions. More than 50% of
lesions have been confirmed by pathology, while the ground truth for the rest
of the cases was either follow-up, expert consensus, or confirmation by in-vivo
confocal microscopy.
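The mix of ground-truth sources described above can be tallied directly from dataset metadata. A minimal sketch, assuming a HAM10000-style record list with a `dx_type` field; the field name and the sample rows here are illustrative, not taken from the real metadata file:

```python
from collections import Counter

# Hypothetical excerpt of HAM10000-style metadata: each record carries the
# ground-truth confirmation type named in the dataset description
# (histopathology, follow-up, expert consensus, or confocal microscopy).
records = [
    {"image_id": "ISIC_0001", "dx_type": "histo"},
    {"image_id": "ISIC_0002", "dx_type": "follow_up"},
    {"image_id": "ISIC_0003", "dx_type": "histo"},
    {"image_id": "ISIC_0004", "dx_type": "consensus"},
    {"image_id": "ISIC_0005", "dx_type": "confocal"},
    {"image_id": "ISIC_0006", "dx_type": "histo"},
]

# Count confirmation types and compute the pathology-confirmed share.
counts = Counter(r["dx_type"] for r in records)
histo_share = counts["histo"] / len(records)
print(counts, f"pathology-confirmed share: {histo_share:.0%}")
```

In the released dataset this share exceeds 50%, per the abstract; the toy sample above is only meant to show the bookkeeping.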
Accuracy of mobile digital teledermoscopy for skin self-examinations in adults at high risk of skin cancer: an open-label, randomised controlled trial
Background: Skin self-examinations supplemented with mobile teledermoscopy might improve early detection of skin cancers compared with naked-eye skin self-examinations. We aimed to assess whether mobile teledermoscopy-enhanced skin self-examination can improve sensitivity and specificity of self-detection of skin cancers when compared with naked-eye skin self-examination.
Methods: This randomised, controlled trial was done in Brisbane (QLD, Australia). Eligible participants (aged ≥18 years) had at least two skin cancer risk factors as self-reported in the eligibility survey and had to own or have access to an iPhone compatible with a dermatoscope attachment (iPhone versions 5–8). Participants were randomly assigned (1:1), via a computer-generated randomisation procedure, to the intervention group (mobile teledermoscopy-enhanced skin self-examination) or the control group (naked-eye skin self-examination). Control group and intervention group participants received web-based instructions on how to complete a whole-body skin self-examination. All participants completed skin examinations at baseline, 1 month, and 2 months; intervention group participants submitted photographs of suspicious lesions to a dermatologist for telediagnosis after each skin examination, and control group participants noted lesions on a body chart that was sent to the research team after each skin examination. All participants had an in-person whole-body clinical skin examination within 3 months of their last skin self-examination. Primary outcomes were sensitivity and specificity of skin self-examination, patient selection of clinically atypical lesions suspicious for melanoma or keratinocyte skin cancers (body sites examined, number of lesions photographed, types of lesions, and lesions missed), and diagnostic concordance of telediagnosis versus in-person whole-body clinical skin examination diagnosis.
All primary outcomes were analysed in the modified intention-to-treat population, which included all patients who had a clinical skin examination within 3 months of their last skin self-examination. This trial was registered with the Australian and New Zealand Clinical Trials Registry, ACTRN12616000989448.
Findings: Between March 6, 2017, and June 7, 2018, 234 participants consented to enrol in the study, of whom 116 (50%) were assigned to the intervention group and 118 (50%) were assigned to the control group. 199 participants (98 participants in the intervention group and 101 participants in the control group) attended the clinical skin examination and thus were eligible for analyses. Participants in the intervention group submitted 615 lesions (median 6·0 per person; range 1–24) for telediagnosis and participants in the control group identified and recorded 673 lesions (median 6·0 per person; range 1–16). At the lesion level, sensitivity for lesions clinically suspicious for skin cancer was 75% (95% CI 63–84) in the intervention group and 88% (95% CI 80–91) in the control group (p=0·04). Specificity was 87% (95% CI 85–90) in the intervention group and 89% (95% CI 87–91) in the control group (p=0·42). At the individual level, the intervention group had a sensitivity of 87% (95% CI 76–99) compared with 97% (95% CI 91–100) in the control group (p=0·26), and a specificity of 95% (95% CI 90–100) compared with 96% (95% CI 91–100) in the control group. The overall diagnostic concordance between the telediagnosis and in-person clinical skin examination was 88%.
Interpretation: The use of mobile teledermoscopy did not increase sensitivity for the detection of skin cancers compared with naked-eye skin self-examination; thus, further evidence is necessary for inclusion of skin self-examination technology for public health benefit.
Funding: National Health and Medical Research Council (Australia)
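The lesion-level metrics reported in the findings reduce to standard confusion-matrix arithmetic. A small sketch; the counts below are illustrative, chosen only to yield percentages of the same order as the trial's, and are not the trial's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    as applied per lesion in trials like the one above."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative example: 75 of 100 clinically suspicious lesions correctly
# selected, 435 of 500 non-suspicious lesions correctly left unselected.
sens, spec = sensitivity_specificity(tp=75, fn=25, tn=435, fp=65)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")
```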
Indications for Digital Monitoring of Patients With Multiple Nevi: Recommendations from the International Dermoscopy Society
Introduction: In patients with multiple nevi, sequential imaging using total body skin photography (TBSP) coupled with digital dermoscopy (DD) documentation reduces unnecessary excisions and improves the early detection of melanoma. Correct patient selection is essential for optimizing the efficacy of this diagnostic approach.
Objectives: The purpose of the study was to identify, via expert consensus, the best indications for TBSP and DD follow-up.
Methods: This study was performed on behalf of the International Dermoscopy Society (IDS). We attained consensus by using an e-Delphi methodology. The panel of participants included international experts in dermoscopy. In each Delphi round, experts were asked to select from a list of indications for TBSP and DD.
Results: Expert consensus was attained after 3 rounds of Delphi. Participants considered a total nevus count of 60 or more nevi, or the presence of a CDKN2A mutation, sufficient to refer the patient for digital monitoring. A count of more than 40 nevi was considered an indication only in the case of a personal history of melanoma, red hair and/or an MC1R mutation, or a history of organ transplantation.
Conclusions: Our recommendations support clinicians in choosing appropriate follow-up regimens for patients with multiple nevi and in applying the time-consuming procedure of sequential imaging more efficiently. Further studies and real-life data are needed to confirm the usefulness of this list of indications in clinical practice.
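The headline indications can be expressed as a simple decision rule. This is a sketch of the consensus criteria as summarised in the abstract; the function name and flag names are hypothetical, and the full IDS recommendations contain more nuance than a single boolean can capture:

```python
def digital_monitoring_indicated(nevus_count,
                                 cdkn2a_mutation=False,
                                 personal_melanoma_history=False,
                                 red_hair_or_mc1r=False,
                                 organ_transplant=False):
    """Simplified sketch of the consensus indications for TBSP + DD
    follow-up: >= 60 nevi or a CDKN2A mutation suffices on its own;
    > 40 nevi counts only alongside an additional risk factor."""
    if nevus_count >= 60 or cdkn2a_mutation:
        return True
    if nevus_count > 40 and (personal_melanoma_history
                             or red_hair_or_mc1r
                             or organ_transplant):
        return True
    return False

print(digital_monitoring_indicated(65))                        # sufficient alone
print(digital_monitoring_indicated(45))                        # not sufficient
print(digital_monitoring_indicated(45, red_hair_or_mc1r=True)) # with risk factor
```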
Skin Lesion Analyser: An Efficient Seven-Way Multi-Class Skin Cancer Classification Using MobileNet
Skin cancer, a major form of cancer, is a critical public health problem with
123,000 newly diagnosed melanoma cases and between 2 and 3 million non-melanoma
cases worldwide each year. The leading cause of skin cancer is high exposure of
skin cells to UV radiation, which can damage the DNA inside skin cells leading
to uncontrolled growth of skin cells. Skin cancer is primarily diagnosed
visually employing clinical screening, a biopsy, dermoscopic analysis, and
histopathological examination. Dermoscopic analysis in the hands of
inexperienced dermatologists has been shown to reduce diagnostic accuracy.
Early detection and screening of skin cancer have the potential to reduce
mortality and morbidity. Previous studies have shown that deep learning can
outperform human experts in several visual recognition tasks. In this paper,
we propose an efficient seven-way automated
multi-class skin cancer classification system having performance comparable
with expert dermatologists. We fine-tuned a pretrained MobileNet model on the
HAM10000 dataset using transfer learning. The model classifies skin lesion
images with a categorical accuracy of 83.1 percent, top-2 accuracy of 91.36
percent, and top-3 accuracy of 95.34 percent. The weighted averages of
precision, recall, and F1-score were 0.89, 0.83, and 0.83, respectively. The
model has been deployed as a web application for public use at
(https://saketchaturvedi.github.io). This fast, extensible method holds the
potential for substantial clinical impact, including broadening the scope of
primary care practice and augmenting clinical decision-making for dermatology
specialists.

Comment: This is a pre-copyedited version of a contribution by Chaturvedi S.S.,
Gupta K., and Prasad P.S., published in Advances in Intelligent Systems and
Computing, Hassanien A., Bhatnagar R., Darwish A. (eds). The definitive
authenticated version is available online via
https://doi.org/10.1007/978-981-15-3383-9_1
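The top-2 and top-3 figures reported above are instances of top-k accuracy: a prediction counts as correct if the true class appears among the k highest-probability classes. A self-contained sketch; the probability vectors are toy values, not actual MobileNet output:

```python
def top_k_accuracy(probs, labels, k):
    """Fraction of samples whose true class index is among the k
    highest-probability predicted classes."""
    hits = 0
    for p, y in zip(probs, labels):
        # Indices of the k largest probabilities, highest first.
        topk = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:k]
        if y in topk:
            hits += 1
    return hits / len(labels)

# Toy 7-class probability vectors (illustrative only).
probs = [
    [0.1, 0.5, 0.2, 0.05, 0.05, 0.05, 0.05],  # true class 2: top-1 miss, top-2 hit
    [0.6, 0.1, 0.1, 0.05, 0.05, 0.05, 0.05],  # true class 0: top-1 hit
]
labels = [2, 0]
print(top_k_accuracy(probs, labels, 1))  # 0.5
print(top_k_accuracy(probs, labels, 2))  # 1.0
```

Libraries such as scikit-learn provide the same metric ready-made (`top_k_accuracy_score`); the loop above just makes the definition explicit.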
Human-computer collaboration for skin cancer recognition
The rapid increase in telemedicine coupled with recent advances in diagnostic artificial intelligence (AI) create the imperative to consider the opportunities and risks of inserting AI-based support into new paradigms of care. Here we build on recent achievements in the accuracy of image-based AI for skin cancer diagnosis to address the effects of varied representations of AI-based support across different levels of clinical expertise and multiple clinical workflows. We find that good quality AI-based support of clinical decision-making improves diagnostic accuracy over that of either AI or physicians alone, and that the least experienced clinicians gain the most from AI-based support. We further find that AI-based multiclass probabilities outperformed content-based image retrieval (CBIR) representations of AI in the mobile technology environment, and AI-based support had utility in simulations of second opinions and of telemedicine triage. In addition to demonstrating the potential benefits associated with good quality AI in the hands of non-expert clinicians, we find that faulty AI can mislead the entire spectrum of clinicians, including experts. Lastly, we show that insights derived from AI class-activation maps can inform improvements in human diagnosis. Together, our approach and findings offer a framework for future studies across the spectrum of image-based diagnostics to improve human-computer collaboration in clinical practice.
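One generic way multiclass-probability support of this kind can be combined with a clinician's own assessment is a linear opinion pool over class probabilities. To be clear, this is an illustrative fusion sketch under assumed inputs, not the interaction protocol studied in the paper:

```python
def fuse(clinician_probs, ai_probs, weight=0.5):
    """Linear opinion pool: a weighted average of two class-probability
    vectors, renormalised to sum to 1. `weight` is the trust placed in
    the AI output (hypothetical parameterisation)."""
    fused = [(1 - weight) * c + weight * a
             for c, a in zip(clinician_probs, ai_probs)]
    total = sum(fused)
    return [f / total for f in fused]

# Hypothetical three-class assessments (e.g. nevus / melanoma / other).
clinician = [0.6, 0.3, 0.1]
ai = [0.2, 0.7, 0.1]
print(fuse(clinician, ai))
```

With equal weight, the pooled distribution sits midway between the two opinions; the study above instead measured how clinicians revise their own decisions when shown the AI output directly.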