Data efficient deep learning for medical image analysis: A survey
The rapid evolution of deep learning has significantly advanced the field of
medical image analysis. However, despite these achievements, the further
enhancement of deep learning models for medical image analysis faces a
significant challenge due to the scarcity of large, well-annotated datasets. To
address this issue, recent years have witnessed a growing emphasis on the
development of data-efficient deep learning methods. This paper conducts a
thorough review of data-efficient deep learning methods for medical image
analysis. To this end, we categorize these methods based on the level of
supervision they rely on, encompassing categories such as no supervision,
inexact supervision, incomplete supervision, inaccurate supervision, and only
limited supervision. We further divide these categories into finer
subcategories. For example, we categorize inexact supervision into multiple
instance learning and learning with weak annotations. Similarly, we categorize
incomplete supervision into semi-supervised learning, active learning,
domain-adaptive learning, and so on. Furthermore, we systematically summarize
commonly used datasets for data-efficient deep learning in medical image
analysis and investigate future research directions to conclude this survey.
Comment: Under Review
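As an illustration of the inexact-supervision setting mentioned above, the following minimal Python sketch shows the classic multiple instance learning assumption: per-instance scores are pooled (here by max) into a single bag-level prediction so that only a bag label is needed for supervision. The scores and the pooling choice are illustrative and not taken from any surveyed method.

```python
import numpy as np

def mil_bag_prediction(instance_scores):
    """Aggregate per-instance abnormality scores into one bag-level score.

    Max pooling encodes the classic MIL assumption: a bag (e.g. a whole scan)
    is positive if at least one instance (patch) is positive, so only the bag
    label is required for supervision.
    """
    return float(np.max(instance_scores))

# Toy example: eight patch scores from a hypothetical patch classifier.
patch_scores = np.array([0.05, 0.10, 0.92, 0.08, 0.03, 0.11, 0.07, 0.20])
print(f"bag-level prediction: {mil_bag_prediction(patch_scores):.2f}")
```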
Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives
Deep learning has demonstrated remarkable performance across various tasks in
medical imaging. However, these approaches primarily focus on supervised
learning, assuming that the training and testing data are drawn from the same
distribution. Unfortunately, this assumption may not always hold true in
practice. To address these issues, unsupervised domain adaptation (UDA)
techniques have been developed to transfer knowledge from a labeled domain to a
related but unlabeled domain. In recent years, significant advancements have
been made in UDA, resulting in a wide range of methodologies, including feature
alignment, image translation, self-supervision, and disentangled representation
methods, among others. In this paper, we provide a comprehensive literature
review of recent deep UDA approaches in medical imaging from a technical
perspective. Specifically, we categorize current UDA research in medical
imaging into six groups and further divide them into finer subcategories based
on the different tasks they perform. We also discuss the respective datasets
used in the studies to assess the divergence between the different domains.
Finally, we discuss emerging areas and provide insights and discussions on
future research directions to conclude this survey.
Comment: Under Review
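To make the feature-alignment family of UDA methods mentioned above concrete, here is a minimal Python sketch of a linear-kernel maximum mean discrepancy penalty between source and target feature batches; a UDA model would minimize such a term alongside its supervised loss. The feature dimensions and batch sizes are arbitrary placeholders, not values from any reviewed method.

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Squared distance between the mean source and mean target features.

    Minimizing this term during training pulls the two feature distributions
    together, a simple instance of feature alignment for UDA.
    """
    delta = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(loc=0.0, scale=1.0, size=(32, 128))   # labeled-domain features
tgt = rng.normal(loc=0.5, scale=1.0, size=(32, 128))   # unlabeled-domain features
print(f"alignment penalty: {linear_mmd(src, tgt):.3f}")
```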
Self-training with dual uncertainty for semi-supervised medical image segmentation
In the field of semi-supervised medical image segmentation, the shortage of
labeled data is the fundamental problem. How to effectively learn image
features from unlabeled images to improve segmentation accuracy is the main
research direction in this field. Traditional self-training methods can
partially solve the problem of insufficient labeled data by generating pseudo
labels for iterative training. However, noise generated due to the model's
uncertainty during training directly affects the segmentation results.
Therefore, we added sample-level and pixel-level uncertainty on top of the
self-training framework to stabilize the training process. Specifically, we saved
several moments of the model during pre-training, and used the difference
between their predictions on unlabeled samples as the sample-level uncertainty
estimate for that sample. Then, we gradually added unlabeled samples from easy to
hard during training. At the same time, we added a decoder with different
upsampling methods to the segmentation network and used the difference between
the outputs of the two decoders as pixel-level uncertainty. In short, we
selectively retrained unlabeled samples and assigned pixel-level uncertainty to
pseudo labels to optimize the self-training process. We compared the
segmentation results of our model with five semi-supervised approaches on the
public 2017 ACDC dataset and 2018 Prostate dataset. Our proposed method
achieves better segmentation performance on both datasets under the same
settings, demonstrating its effectiveness, robustness, and potential
transferability to other medical image segmentation tasks.
Keywords: Medical image segmentation, semi-supervised learning, self-training, uncertainty estimation
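The following Python sketch illustrates the two uncertainty signals described in this abstract, assuming the checkpoint and decoder outputs are available as per-pixel probability maps: sample-level uncertainty as disagreement between pre-training checkpoints, and pixel-level uncertainty as the difference between two decoder outputs used to weight pseudo-labels. Array shapes and names are illustrative, not the authors' implementation.

```python
import numpy as np

def sample_uncertainty(checkpoint_probs):
    """Sample-level uncertainty: disagreement between predictions made by
    model checkpoints saved at different moments of pre-training.
    Lower values -> 'easier' unlabeled samples, added to training first."""
    stacked = np.stack(checkpoint_probs)            # (n_checkpoints, H, W)
    return float(stacked.std(axis=0).mean())

def pixel_uncertainty(dec_a, dec_b):
    """Pixel-level uncertainty: absolute difference between the outputs of
    two decoders with different upsampling, used to weight pseudo-labels."""
    return np.abs(dec_a - dec_b)

rng = np.random.default_rng(1)
ckpts = [np.clip(rng.normal(0.6, 0.1, (64, 64)), 0, 1) for _ in range(3)]
print(f"sample-level uncertainty: {sample_uncertainty(ckpts):.4f}")

dec_a, dec_b = ckpts[0], ckpts[1]
weights = 1.0 - pixel_uncertainty(dec_a, dec_b)     # down-weight uncertain pixels
pseudo_label = (dec_a > 0.5).astype(float)
print(f"mean pseudo-label weight: {weights.mean():.3f}, foreground fraction: {pseudo_label.mean():.3f}")
```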
A Chebyshev Confidence Guided Source-Free Domain Adaptation Framework for Medical Image Segmentation
Source-free domain adaptation (SFDA) aims to adapt models trained on a
labeled source domain to an unlabeled target domain without access to the
source data. In medical imaging scenarios, the practical significance of SFDA
methods has been emphasized due to privacy concerns. Recent state-of-the-art
SFDA methods primarily rely on self-training based on pseudo-labels (PLs).
Unfortunately, PLs suffer from accuracy deterioration caused by domain shift,
and thus limit the effectiveness of the adaptation process. To address this
issue, we propose a Chebyshev confidence guided SFDA framework to accurately
assess the reliability of PLs and generate self-improving PLs for
self-training. The Chebyshev confidence is estimated by calculating a probability
lower bound of the PL confidence, given the prediction and the corresponding
uncertainty. Leveraging the Chebyshev confidence, we introduce two
confidence-guided denoising methods: direct denoising and prototypical
denoising. Additionally, we propose a novel teacher-student joint training
scheme (TJTS) that incorporates a confidence weighting module to improve PLs
iteratively. The TJTS, in collaboration with the denoising methods, effectively
prevents the propagation of noise and enhances the accuracy of PLs. Extensive
experiments in diverse domain scenarios validate the effectiveness of our
proposed framework and establish its superiority over state-of-the-art SFDA
methods. Our paper contributes to the field of SFDA by providing a novel
approach for precisely estimating the reliability of pseudo-labels and a
framework for obtaining high-quality PLs, resulting in improved adaptation
performance.
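The paper's exact formulation is not reproduced here, but the following Python sketch shows a Chebyshev-style (Cantelli) lower bound on pseudo-label confidence of the kind the abstract describes, assuming a per-pixel mean prediction and an uncertainty estimate (e.g. from Monte-Carlo dropout) are available.

```python
import numpy as np

def chebyshev_confidence(mu, sigma, tau=0.5):
    """Cantelli-style lower bound on P(true probability > tau) given the mean
    prediction mu and an uncertainty estimate sigma per pixel.

    Pixels whose mean prediction is below the threshold get confidence 0;
    otherwise the one-sided Chebyshev (Cantelli) bound
        P(X > tau) >= (mu - tau)^2 / (sigma^2 + (mu - tau)^2)
    is used as a reliability score for the pseudo-label.
    """
    gap = np.maximum(mu - tau, 0.0)
    return gap**2 / (sigma**2 + gap**2 + 1e-8)

rng = np.random.default_rng(2)
mu = np.clip(rng.normal(0.7, 0.15, (4, 4)), 0, 1)    # mean foreground probability
sigma = np.abs(rng.normal(0.0, 0.1, (4, 4)))         # e.g. MC-dropout standard deviation
conf = chebyshev_confidence(mu, sigma)
reliable = conf > 0.8                                 # keep only trustworthy pseudo-labels
print(f"reliable pixels: {int(reliable.sum())} / {reliable.size}")
```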
Dynamic Data Augmentation via MCTS for Prostate MRI Segmentation
Medical image data are often limited due to the expensive acquisition and
annotation process. Hence, training a deep-learning model with only raw data
can easily lead to overfitting. One solution to this problem is to augment the
raw data with various transformations, improving the model's ability to
generalize to new data. However, manually configuring a generic augmentation
combination and parameters for different datasets is non-trivial due to
inconsistent acquisition approaches and data distributions. Therefore,
automatic data augmentation has been proposed to learn favorable augmentation
strategies for different datasets, but it incurs a large GPU overhead. To this
end, we present a novel method, called Dynamic Data Augmentation (DDAug), which
is efficient and has negligible computation cost. Our DDAug develops a
hierarchical tree structure to represent various augmentations and utilizes an
efficient Monte-Carlo tree searching algorithm to update, prune, and sample the
tree. As a result, the augmentation pipeline can be optimized for each dataset
automatically. Experiments on multiple Prostate MRI datasets show that our
method outperforms the current state-of-the-art data augmentation strategies.
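As a rough illustration of searching over augmentation strategies, the following Python sketch runs a UCB-style bandit over a flat set of augmentation operations rather than the paper's hierarchical Monte-Carlo tree; the reward function is a stand-in for the validation score a real run would measure, and the operation names are placeholders.

```python
import math
import random

# Candidate augmentation operations (names only; a real pipeline would apply them).
OPERATIONS = ["flip", "rotate", "elastic", "gamma", "noise"]

def fake_validation_reward(op):
    """Stand-in for the validation Dice obtained after training with `op`.
    In a real setting this would come from a short training/evaluation run."""
    base = {"flip": 0.80, "rotate": 0.82, "elastic": 0.86, "gamma": 0.78, "noise": 0.75}[op]
    return base + random.gauss(0.0, 0.02)

def ucb_search(n_rounds=200, c=0.3):
    counts = {op: 0 for op in OPERATIONS}
    totals = {op: 0.0 for op in OPERATIONS}
    for t in range(1, n_rounds + 1):
        # UCB1 selection: exploit well-performing operations, explore rare ones.
        def ucb(op):
            if counts[op] == 0:
                return float("inf")
            return totals[op] / counts[op] + c * math.sqrt(math.log(t) / counts[op])
        op = max(OPERATIONS, key=ucb)
        counts[op] += 1
        totals[op] += fake_validation_reward(op)
    return max(OPERATIONS, key=lambda o: totals[o] / max(counts[o], 1))

random.seed(0)
print("selected augmentation:", ucb_search())
```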
Probabilistic 3D surface reconstruction from sparse MRI information
Surface reconstruction from magnetic resonance (MR) imaging data is
indispensable in medical image analysis and clinical research. A reliable and
effective reconstruction tool should quickly predict accurate, well-localised,
high-resolution models, evaluate prediction uncertainty, and work with as little
input data as possible. Current deep learning state-of-the-art (SOTA) 3D
reconstruction methods, however, often only produce shapes of limited
variability placed in a canonical pose, or lack uncertainty evaluation.
In this paper, we present a novel probabilistic deep learning approach for
concurrent 3D surface reconstruction from sparse 2D MR image data and aleatoric
uncertainty prediction. Our method is capable of reconstructing large surface
meshes from three quasi-orthogonal MR imaging slices from limited training sets
whilst modelling the location of each mesh vertex through a Gaussian
distribution. Prior shape information is encoded using a built-in linear
principal component analysis (PCA) model. Extensive experiments on cardiac MR
data show that our probabilistic approach successfully assesses prediction
uncertainty while at the same time qualitatively and quantitatively outperforms
SOTA methods in shape prediction. Compared to SOTA, we are capable of properly
localising and orientating the prediction via the use of a spatially aware
neural network.
Comment: MICCAI 202
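The following Python sketch illustrates the two building blocks named in this abstract: a linear PCA shape prior that decodes coefficients into a mesh, and a per-vertex Gaussian whose standard deviation expresses aleatoric uncertainty. The dimensions and the randomly generated basis are placeholders rather than the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy PCA shape prior: mean shape plus a few modes of variation.
n_vertices, n_modes = 500, 8
mean_shape = rng.normal(0.0, 1.0, (n_vertices * 3,))       # flattened (x, y, z) per vertex
modes = rng.normal(0.0, 0.1, (n_vertices * 3, n_modes))    # PCA basis (one mode per column)

def decode_mesh(coeffs):
    """Linear PCA decoding: vertices = mean_shape + basis @ coefficients."""
    return (mean_shape + modes @ coeffs).reshape(n_vertices, 3)

# A network would predict both the mode coefficients and a per-vertex std;
# here they are sampled purely as placeholders.
pred_coeffs = rng.normal(0.0, 1.0, (n_modes,))
pred_sigma = np.abs(rng.normal(0.0, 0.5, (n_vertices, 3)))  # aleatoric uncertainty per vertex

mesh_mean = decode_mesh(pred_coeffs)
mesh_sample = mesh_mean + pred_sigma * rng.standard_normal(mesh_mean.shape)
print("mean vertex uncertainty:", float(pred_sigma.mean()))
```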
DisAsymNet: Disentanglement of Asymmetrical Abnormality on Bilateral Mammograms using Self-adversarial Learning
Asymmetry is a crucial characteristic of bilateral mammograms (Bi-MG) when
abnormalities are developing. It is widely utilized by radiologists for
diagnosis. The question of 'what the symmetrical Bi-MG would look like when the
asymmetrical abnormalities have been removed?' has not yet received strong
attention in the development of algorithms on mammograms. Addressing this
question could provide valuable insights into mammographic anatomy and aid in
diagnostic interpretation. Hence, we propose a novel framework, DisAsymNet,
which utilizes asymmetrical abnormality transformer guided self-adversarial
learning for disentangling abnormalities and symmetric Bi-MG. At the same time,
our proposed method is partially guided by randomly synthesized abnormalities.
We conduct experiments on three public and one in-house dataset, and
demonstrate that our method outperforms existing methods in abnormality
classification, segmentation, and localization tasks. Additionally,
reconstructed normal mammograms can provide insights toward better
interpretable visual cues for clinical diagnosis. The code will be accessible
to the public.
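The abstract notes that training is partially guided by randomly synthesized abnormalities. One simple way to synthesize such guidance, shown in the Python sketch below and not necessarily the paper's procedure, is to paste a smooth bright blob at a random location so that a known asymmetry and its mask are obtained for free.

```python
import numpy as np

def add_synthetic_abnormality(image, radius=12, intensity=0.6, rng=None):
    """Paste a smooth bright blob at a random position and return (image, mask).
    The mask gives free supervision for the synthesized asymmetry."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    cy, cx = rng.integers(radius, h - radius), rng.integers(radius, w - radius)
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    blob = intensity * np.exp(-dist2 / (2 * (radius / 2) ** 2))
    mask = (dist2 <= radius**2).astype(np.float32)
    return np.clip(image + blob, 0.0, 1.0), mask

rng = np.random.default_rng(4)
left_view = rng.uniform(0.1, 0.4, (256, 256))   # stand-in for one view of a bilateral pair
left_abnormal, abnormality_mask = add_synthetic_abnormality(left_view, rng=rng)
print("synthetic lesion pixels:", int(abnormality_mask.sum()))
```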
Test-time augmentation-based active learning and self-training for label-efficient segmentation
Deep learning techniques depend on large datasets whose annotation is
time-consuming. To reduce annotation burden, the self-training (ST) and
active-learning (AL) methods have been developed as well as methods that
combine them in an iterative fashion. However, it remains unclear when each
method is the most useful, and when it is advantageous to combine them. In this
paper, we propose a new method that combines ST with AL using Test-Time
Augmentations (TTA). First, TTA is performed on an initial teacher network.
Then, cases for annotation are selected based on the lowest estimated Dice
score. Cases with high estimated scores are used as soft pseudo-labels for ST.
The selected cases, once annotated, are used for training together with the existing
annotated cases and the ST cases with border-slice annotations. We demonstrate the method on MRI fetal
body and placenta segmentation tasks with different data variability
characteristics. Our results indicate that ST is highly effective for both
tasks, boosting performance for in-distribution (ID) and out-of-distribution
(OOD) data. However, while self-training improved the performance of
single-sequence fetal body segmentation when combined with AL, it slightly
deteriorated performance of multi-sequence placenta segmentation on ID data. AL
was helpful for the high variability placenta data, but did not improve upon
random selection for the single-sequence body data. For fetal body segmentation
sequence transfer, combining AL with ST following ST iteration yielded a Dice
of 0.961 with only 6 original scans and 2 new sequence scans. Results using
only 15 high-variability placenta cases were similar to those using 50 cases.
Code is available at: https://github.com/Bella31/TTA-quality-estimation-ST-AL
Comment: Accepted to MICCAI MILLanD workshop 202
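The following Python sketch illustrates the selection logic described above, assuming test-time augmentation yields several masks per case: cases with the lowest mean pairwise Dice between TTA predictions are sent for annotation (AL), while cases with high agreement are kept as pseudo-labels (ST). Thresholds, case names, and the random masks are illustrative only.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def tta_quality(tta_masks):
    """Estimated quality of a case: mean pairwise Dice between TTA predictions.
    Low agreement -> likely poor segmentation -> candidate for annotation (AL).
    High agreement -> prediction trusted as a soft pseudo-label (ST)."""
    pairs = combinations(tta_masks, 2)
    return float(np.mean([dice(a, b) for a, b in pairs]))

rng = np.random.default_rng(5)
cases = {f"case_{i}": [rng.random((64, 64)) > 0.5 - 0.1 * i for _ in range(4)] for i in range(6)}
scores = {name: tta_quality(masks) for name, masks in cases.items()}

annotate = sorted(scores, key=scores.get)[:2]                              # lowest estimated Dice -> AL
pseudo = [n for n, s in scores.items() if s > 0.9 and n not in annotate]   # high estimate -> ST
print("send to annotator:", annotate)
print("use as pseudo-labels:", pseudo)
```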
A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond
Over the past decade, deep learning technologies have greatly advanced the
field of medical image registration. The initial developments, such as
ResNet-based and U-Net-based networks, laid the groundwork for deep
learning-driven image registration. Subsequent progress has been made in
various aspects of deep learning-based registration, including similarity
measures, deformation regularizations, and uncertainty estimation. These
advancements have not only enriched the field of deformable image registration
but have also facilitated its application in a wide range of tasks, including
atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D
registration. In this paper, we present a comprehensive overview of the most
recent advancements in deep learning-based image registration. We begin with a
concise introduction to the core concepts of deep learning-based image
registration. Then, we delve into innovative network architectures, loss
functions specific to registration, and methods for estimating registration
uncertainty. Additionally, this paper explores appropriate evaluation metrics
for assessing the performance of deep learning models in registration tasks.
Finally, we highlight the practical applications of these novel techniques in
medical imaging and discuss the future prospects of deep learning-based image
registration.
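Most unsupervised deep registration methods covered by such surveys share a loss of the form "similarity plus a weighted smoothness regularizer on the displacement field". The following Python sketch uses plain mean squared error as the similarity term (rather than NCC or mutual information) and a finite-difference gradient penalty, purely as an illustration of that structure.

```python
import numpy as np

def registration_loss(fixed, warped_moving, displacement, lam=0.02):
    """Similarity + lambda * smoothness, the typical unsupervised registration objective.

    fixed, warped_moving: 2D images; displacement: (H, W, 2) displacement field."""
    similarity = float(np.mean((fixed - warped_moving) ** 2))    # image dissimilarity (MSE here)
    dy = np.diff(displacement, axis=0)                           # spatial gradients of the field
    dx = np.diff(displacement, axis=1)
    smoothness = float((dy**2).mean() + (dx**2).mean())          # penalize non-smooth deformations
    return similarity + lam * smoothness

rng = np.random.default_rng(6)
fixed = rng.random((64, 64))
warped = fixed + rng.normal(0.0, 0.05, fixed.shape)
disp = rng.normal(0.0, 0.5, (64, 64, 2))
print(f"loss: {registration_loss(fixed, warped, disp):.4f}")
```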