8 research outputs found
Robust T-Loss for Medical Image Segmentation
This paper presents a new robust loss function, the T-Loss, for medical image
segmentation. The proposed loss is based on the negative log-likelihood of the
Student-t distribution and can effectively handle outliers in the data by
controlling its sensitivity with a single parameter. This parameter is updated
during the backpropagation process, eliminating the need for additional
computation or prior information about the level and spread of noisy labels.
Our experiments show that the T-Loss outperforms traditional loss functions in
terms of Dice scores on two public medical datasets for skin lesion and lung
segmentation. We also demonstrate the ability of T-Loss to handle different
types of simulated label noise, resembling human error. Our results provide
strong evidence that the T-Loss is a promising alternative for medical image
segmentation where high levels of noise or outliers in the dataset are a
typical phenomenon in practice. The project website can be found at
https://robust-tloss.github.io
Comment: Early accepted to MICCAI 202
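The core idea can be illustrated with a minimal pure-Python sketch of the univariate Student-t negative log-likelihood used as a per-sample loss. This is not the authors' implementation: the paper's T-Loss treats the degrees-of-freedom parameter ν as learnable and updates it during backpropagation, whereas here `nu` is a fixed argument for illustration.

```python
import math

def student_t_nll(pred, target, nu=1.0):
    """Mean negative log-likelihood of a univariate Student-t
    distribution with nu degrees of freedom (illustrative sketch;
    in the T-Loss, nu is learned jointly with the network)."""
    nll = 0.0
    for p, t in zip(pred, target):
        r2 = (p - t) ** 2  # squared residual
        nll += (0.5 * (nu + 1.0) * math.log1p(r2 / nu)
                + 0.5 * math.log(nu * math.pi)
                + math.lgamma(0.5 * nu)
                - math.lgamma(0.5 * (nu + 1.0)))
    return nll / len(pred)

def mse(pred, target):
    """Mean squared error, for comparison."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
```

The robustness comes from the log: a large outlier residual contributes only logarithmically to the t-loss, while its squared error grows quadratically, so noisy labels dominate the gradient far less. Smaller ν means heavier tails and more tolerance to outliers.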
SelfClean: A Self-Supervised Data Cleaning Strategy
Most benchmark datasets for computer vision contain irrelevant images, near
duplicates, and label errors. Consequently, model performance on these
benchmarks may not be an accurate estimate of generalization capabilities. This
is a particularly acute concern in computer vision for medicine where datasets
are typically small, stakes are high, and annotation processes are expensive
and error-prone. In this paper we propose SelfClean, a general procedure to
clean up image datasets exploiting a latent space learned with
self-supervision. By relying on self-supervised learning, our approach focuses
on intrinsic properties of the data and avoids annotation biases. We formulate
dataset cleaning as either a set of ranking problems, which significantly
reduce human annotation effort, or a set of scoring problems, which enable
fully automated decisions based on score distributions. We demonstrate that
SelfClean achieves state-of-the-art performance in detecting irrelevant images,
near duplicates, and label errors within popular computer vision benchmarks,
retrieving both injected synthetic noise and natural contamination. In
addition, we apply our method to multiple image datasets and confirm an
improvement in evaluation reliability.
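The ranking formulation can be sketched as follows: rank all image pairs by distance in the self-supervised latent space, so the closest pairs surface first as near-duplicate candidates. The function names and toy embeddings below are hypothetical illustrations, not the SelfClean API.

```python
import math
from itertools import combinations

def cosine_dist(u, v):
    """1 minus cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def rank_near_duplicates(embeddings):
    """Rank all image pairs by latent-space distance; the closest
    pairs are the strongest near-duplicate candidates."""
    pairs = combinations(range(len(embeddings)), 2)
    return sorted(pairs,
                  key=lambda ij: cosine_dist(embeddings[ij[0]],
                                             embeddings[ij[1]]))

# toy embeddings: items 0 and 2 point in nearly the same direction
emb = [[1.0, 0.0], [0.0, 1.0], [0.99, 0.01]]
ranking = rank_near_duplicates(emb)  # ranking[0] is the pair (0, 2)
```

A human annotator then only needs to confirm candidates from the top of the list, which is what makes the ranking view far cheaper than exhaustive labeling.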
Towards Reliable Dermatology Evaluation Benchmarks
Benchmark datasets for digital dermatology unwittingly contain inaccuracies
that reduce trust in model performance estimates. We propose a
resource-efficient data cleaning protocol to identify issues that escaped
previous curation. The protocol leverages an existing algorithmic cleaning
strategy and is followed by a confirmation process terminated by an intuitive
stopping criterion. Based on confirmation by multiple dermatologists, we remove
irrelevant samples and near duplicates and estimate the percentage of label
errors in six dermatology image datasets for model evaluation promoted by the
International Skin Imaging Collaboration. Along with this paper, we publish
revised file lists for each dataset which should be used for model evaluation.
Our work paves the way for more trustworthy performance assessment in digital
dermatology.
Comment: Link to the revised file lists:
https://github.com/Digital-Dermatology/SelfClean-Revised-Benchmark
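The abstract does not specify the stopping criterion, so the confirmation loop below is a purely hypothetical illustration of the general pattern: candidates are reviewed in ranked order, and review stops after a run of consecutive "clean" judgments suggests the remaining candidates are unlikely to be issues.

```python
def confirm_ranked_candidates(candidates, is_issue, patience=3):
    """Walk a ranked candidate list top-down, asking a reviewer
    (here a callback) to confirm each one; stop once `patience`
    consecutive candidates are judged clean.  This stopping rule
    is a hypothetical sketch, not the paper's criterion."""
    confirmed, clean_streak = [], 0
    for c in candidates:
        if is_issue(c):
            confirmed.append(c)
            clean_streak = 0
        else:
            clean_streak += 1
            if clean_streak >= patience:
                break  # issues have thinned out; stop reviewing
    return confirmed

# toy ranking: true issues concentrate at the top of the list
ranked = [True, True, False, True, False, False, False, True]
found = confirm_ranked_candidates(ranked, lambda c: c, patience=3)
```

Because algorithmic ranking pushes likely issues to the front, such a rule lets expert review terminate long before the whole dataset is inspected, which is what makes the protocol resource-efficient.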
Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other early-phase developments of similar AI tools.
Quantification of Efflorescences in Pustular Psoriasis Using Deep Learning
Objectives: Pustular psoriasis (PP) is one of the most severe and chronic skin conditions. Its treatment is difficult, and measurements of its severity are highly dependent on clinicians’ experience. Pustules and brown spots are the main efflorescences of the disease and directly correlate with its activity. We propose an automated deep learning model (DLM) to quantify lesions in terms of count and surface percentage from patient photographs.
Methods: In this retrospective study, two dermatologists and a student labeled 151 photographs of PP patients for pustules and brown spots. The DLM was trained and validated with 121 photographs, keeping 30 photographs as a test set to assess the DLM performance on unseen data. We also evaluated our DLM on 213 unstandardized, out-of-distribution photographs of various pustular disorders (referred to as the pustular set), which were ranked from 0 (no disease) to 4 (very severe) by one dermatologist for disease severity. The agreement between the DLM predictions and experts’ labels was evaluated with the intraclass correlation coefficient (ICC) for the test set and the Spearman correlation (SC) coefficient for the pustular set.
Results: On the test set, the DLM achieved an ICC of 0.97 (95% confidence interval [CI], 0.97–0.98) for count and 0.93 (95% CI, 0.92–0.94) for surface percentage. On the pustular set, the DLM reached an SC coefficient of 0.66 (95% CI, 0.60–0.74) for count and 0.80 (95% CI, 0.75–0.83) for surface percentage.
Conclusions: The proposed method quantifies efflorescences from PP photographs reliably and automatically, enabling a precise and objective evaluation of disease activity.
ISSN:2093-3681 ISSN:2093-369
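The Spearman correlation used on the pustular set is simply the Pearson correlation of ranks, which can be sketched in plain Python. This is a generic implementation for illustration, not the study's analysis code.

```python
def ranks(xs):
    """Average ranks, 1-based; ties share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

Because it operates on ranks rather than raw values, Spearman correlation is the natural choice when, as here, one variable is an ordinal severity grade (0 to 4) rather than a continuous measurement.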
Improved diagnosis by automated macro- and micro-anatomical region mapping of skin photographs
Background
The exact location of skin lesions is key in clinical dermatology. On the one hand, it supports differential diagnosis (DD), since most skin conditions have specific predilection sites. On the other hand, location matters for dermatosurgical interventions. In practice, lesion evaluation is not well standardized, and anatomical descriptions vary or are missing altogether. Automated determination of the anatomical location could benefit both situations.
Objective
Establish an automated method to determine anatomical regions in clinical patient pictures and evaluate the gain in DD performance of a deep learning model (DLM) when trained with lesion locations and images.
Methods
Retrospective study based on three datasets: macro-anatomy for the main body regions with 6000 patient pictures partially labelled by a student, micro-anatomy for the ear region with 182 pictures labelled by a student and DD with 3347 pictures of 16 diseases determined by dermatologists in clinical settings. For each dataset, a DLM was trained and evaluated on an independent test set. The primary outcome measures were the precision and sensitivity with 95% CI. For DD, we compared the performance of a DLM trained with lesion pictures only with a DLM trained with both pictures and locations.
Results
The average precision and sensitivity were 85% (CI 84–86), 84% (CI 83–85) for macro-anatomy, 81% (CI 80–83), 80% (CI 77–83) for micro-anatomy and 82% (CI 78–85), 81% (CI 77–84) for DD. We observed an improvement in DD performance of 6% (McNemar test P-value 0.0009) for both average precision and sensitivity when training with both lesion pictures and locations.
Conclusion
Including location can be beneficial for DD DLM performance. The proposed method can generate body region maps from patient pictures and even reach surgery-relevant anatomical precision, e.g. the ear region. Our method enables automated search of large clinical databases and makes targeted anatomical image retrieval possible.
ISSN:0926-9959 ISSN:1468-308
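The McNemar test reported for the picture-only versus picture-plus-location comparison applies to paired predictions from two models on the same test samples. The sketch below shows how the statistic is computed in general (chi-square form with continuity correction); it is an illustration, not the study's code.

```python
def paired_counts(truth, pred_a, pred_b):
    """Count discordant cases between two models on the same samples:
    b = only model A correct, c = only model B correct."""
    b = sum(1 for t, a, p in zip(truth, pred_a, pred_b)
            if a == t and p != t)
    c = sum(1 for t, a, p in zip(truth, pred_a, pred_b)
            if a != t and p == t)
    return b, c

def mcnemar_chi2(b, c):
    """McNemar chi-square with continuity correction; only the
    discordant counts enter the statistic."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

The test ignores samples both models classify identically, which makes it well suited to asking whether adding location labels changed the outcomes on specific cases rather than just shifting an aggregate score.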