HistoPerm: A Permutation-Based View Generation Approach for Improving Histopathologic Feature Representation Learning
Deep learning has been effective for histology image analysis in digital
pathology. However, many current deep learning approaches require large,
strongly- or weakly-labeled images and regions of interest, which can be
time-consuming and resource-intensive to obtain. To address this challenge, we
present HistoPerm, a view generation method for representation learning using
joint embedding architectures that enhances representation learning for
histology images. HistoPerm permutes augmented views of patches extracted from
whole-slide histology images to improve classification performance. We
evaluated the effectiveness of HistoPerm on two histology image datasets for
Celiac disease and Renal Cell Carcinoma, using three widely used joint
embedding architecture-based representation learning methods: BYOL, SimCLR, and
VICReg. Our results show that HistoPerm consistently improves patch- and
slide-level classification performance in terms of accuracy, F1-score, and AUC.
Specifically, for patch-level classification accuracy on the Celiac disease
dataset, HistoPerm boosts BYOL and VICReg by 8% and SimCLR by 3%. On the Renal
Cell Carcinoma dataset, patch-level classification accuracy is increased by 2%
for BYOL and VICReg, and by 1% for SimCLR. In addition, on the Celiac disease
dataset, models with HistoPerm outperform the fully-supervised baseline model
by 6%, 5%, and 2% for BYOL, SimCLR, and VICReg, respectively. For the Renal
Cell Carcinoma dataset, HistoPerm narrows the models' classification accuracy
gap relative to the fully-supervised baseline by up to 10%. These findings
suggest that HistoPerm can be a valuable tool for improving representation
learning of histopathology features when access to labeled data is limited and
can lead to whole-slide classification results that are comparable to or
superior to fully-supervised methods.
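The key operation the abstract describes, permuting augmented views across the batch before they reach the joint-embedding branches, can be sketched as follows. This is a minimal illustration assuming view embeddings arrive as a numpy array; the function name and the `perm_fraction` parameter are illustrative, not the authors' API.

```python
import numpy as np

def permute_views(views_b, perm_fraction=0.5, rng=None):
    """Permute a fraction of one branch's augmented views across the batch.

    views_b: (N, D) array of augmented views for branch B.
    Returns a copy in which roughly `perm_fraction` of the rows have been
    shuffled among themselves, so the corresponding branch-A views get
    paired with views generated from *different* patches.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(views_b)
    k = int(n * perm_fraction)
    idx = rng.choice(n, size=k, replace=False)   # rows selected for permutation
    shuffled = rng.permutation(idx)              # new order for those rows
    out = views_b.copy()
    out[idx] = views_b[shuffled]
    return out
```

The permuted batch then feeds the second branch of BYOL, SimCLR, or VICReg unchanged; only the view pairing is altered, so the method is architecture-agnostic.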
Self-supervised learning methods and applications in medical imaging analysis: A survey
The scarcity of high-quality annotated medical imaging datasets is a major
problem that hampers machine learning applications in medical imaging analysis
and impedes the field's advancement. Self-supervised learning is a recent
training paradigm that enables learning robust representations without human
annotation, making it an effective remedy for the scarcity of annotated medical
data. This article reviews state-of-the-art research directions in
self-supervised learning approaches for image data, with a focus on their
applications in medical imaging analysis. It covers the most recent
self-supervised learning methods from the computer vision field that are
applicable to medical imaging analysis and categorizes them as predictive,
generative, and contrastive approaches. Moreover, the article covers 40 of the
most recent research papers on self-supervised learning in medical imaging
analysis, aiming to shed light on recent innovations in the field. Finally, the
article concludes with possible future research directions in the field.
Data efficient deep learning for medical image analysis: A survey
The rapid evolution of deep learning has significantly advanced the field of
medical image analysis. However, despite these achievements, the further
enhancement of deep learning models for medical image analysis faces a
significant challenge due to the scarcity of large, well-annotated datasets. To
address this issue, recent years have witnessed a growing emphasis on the
development of data-efficient deep learning methods. This paper conducts a
thorough review of data-efficient deep learning methods for medical image
analysis. To this end, we categorize these methods based on the level of
supervision they rely on, encompassing categories such as no supervision,
inexact supervision, incomplete supervision, inaccurate supervision, and only
limited supervision. We further divide these categories into finer
subcategories. For example, we categorize inexact supervision into multiple
instance learning and learning with weak annotations. Similarly, we categorize
incomplete supervision into semi-supervised learning, active learning,
domain-adaptive learning, and so on. Furthermore, we systematically summarize
commonly used datasets for data-efficient deep learning in medical image
analysis and investigate future research directions to conclude this survey.
Comment: Under Review
SLPD: Slide-level Prototypical Distillation for WSIs
Improving feature representation ability is the foundation of many whole-slide
pathological image (WSI) tasks. Recent works have achieved great success in
pathology-specific self-supervised learning (SSL). However, most of these works
focus only on learning patch-level representations, so a gap remains between
pretext tasks and slide-level downstream tasks, e.g., subtyping, grading, and
staging. Aiming at slide-level representations, we propose Slide-Level
Prototypical Distillation (SLPD) to explore intra- and inter-slide semantic
structures for context modeling on WSIs. Specifically, we iteratively perform
intra-slide clustering for the regions (4096x4096 patches) within each WSI to
yield the prototypes and encourage the region representations to be closer to
the assigned prototypes. By representing each slide with its prototypes, we
further select similar slides by the set distance of prototypes and assign the
regions by cross-slide prototypes for distillation. SLPD achieves
state-of-the-art results on multiple slide-level benchmarks and demonstrates
that representation learning of semantic structures of slides can make a
suitable proxy task for WSI analysis. Code will be available at
https://github.com/Carboxy/SLPD.
Comment: International Conference on Medical Image Computing and Computer
Assisted Intervention (MICCAI)
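The intra-slide clustering and the prototype set distance described in the abstract can be sketched with plain k-means in numpy. This is an illustrative sketch under stated assumptions (region embeddings as row vectors, a Chamfer-style symmetric set distance), not the released implementation; all names are assumptions.

```python
import numpy as np

def slide_prototypes(region_embs, k=4, iters=10, rng=None):
    """Cluster one slide's region embeddings into k prototypes (plain k-means)."""
    rng = np.random.default_rng() if rng is None else rng
    protos = region_embs[rng.choice(len(region_embs), k, replace=False)]
    for _ in range(iters):
        # assign every region to its nearest prototype
        d = np.linalg.norm(region_embs[:, None] - protos[None], axis=-1)
        assign = d.argmin(axis=1)
        # move each prototype to the mean of its assigned regions
        for j in range(k):
            members = region_embs[assign == j]
            if len(members):
                protos[j] = members.mean(axis=0)
    return protos

def set_distance(protos_a, protos_b):
    """Symmetric Chamfer-style distance between two slides' prototype sets."""
    d = np.linalg.norm(protos_a[:, None] - protos_b[None], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Representing each slide by its prototype set and ranking other slides by `set_distance` yields the cross-slide neighbors used for distillation; region representations are then pulled toward their assigned prototypes.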
MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis
There is no doubt that advanced artificial intelligence models and
high-quality data are the keys to success in developing computational pathology
tools. Although the overall volume of pathology data keeps increasing, a lack
of quality data for a specific task is a common issue, for several reasons
including privacy and ethical concerns with patient data. In this
work, we propose to exploit knowledge distillation, i.e., utilize the existing
model to learn a new, target model, to overcome such issues in computational
pathology. Specifically, we employ a student-teacher framework to learn a
target model from a pre-trained, teacher model without direct access to source
data and distill relevant knowledge via momentum contrastive learning with
multi-head attention mechanism, which provides consistent and context-aware
feature representations. This enables the target model to assimilate
informative representations of the teacher model while seamlessly adapting to
the unique nuances of the target data. The proposed method is rigorously
evaluated across different scenarios in which the teacher model was trained on
the same, a relevant, or an irrelevant classification task relative to the
target model.
Experimental results demonstrate the accuracy and robustness of our approach in
transferring knowledge to different domains and tasks, outperforming other
related methods. Moreover, the results provide a guideline on the learning
strategy for different types of tasks and scenarios in computational pathology.
Code is available at: \url{https://github.com/trinhvg/MoMA}.
Comment: Preprint
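The momentum contrastive component rests on two standard pieces: an exponential moving average (EMA) update that derives the teacher from the student, and an InfoNCE-style loss over a queue of negative keys. Below is a minimal numpy sketch of those two pieces only; the multi-head attention distillation is omitted, and all names are illustrative rather than the authors' API.

```python
import numpy as np

def momentum_update(teacher_params, student_params, m=0.999):
    """EMA update used in momentum contrastive learning:
    teacher <- m * teacher + (1 - m) * student."""
    return [m * t + (1.0 - m) * s
            for t, s in zip(teacher_params, student_params)]

def info_nce(q, k_pos, queue, tau=0.07):
    """InfoNCE loss for one L2-normalized query `q` against its positive key
    `k_pos` and a (Q, D) queue of negative keys."""
    logits = np.concatenate([[q @ k_pos], queue @ q]) / tau
    logits -= logits.max()                      # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                        # positive sits at index 0
```

In the student-teacher setting the abstract describes, the teacher's keys fill the queue, the student produces the queries, and the EMA update keeps the teacher's representations slowly varying and consistent across iterations.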
Label-Efficient Deep Learning in Medical Image Analysis: Challenges and Future Directions
Deep learning has seen rapid growth in recent years and achieved
state-of-the-art performance in a wide range of applications. However, training
models typically requires expensive and time-consuming collection of large
quantities of labeled data. This is particularly true within the scope of
medical imaging analysis (MIA), where data are limited and labels are expensive
to acquire. Thus, label-efficient deep learning methods have been developed to
make comprehensive use of the labeled data as well as the abundance of
unlabeled and weakly-labeled data. In this survey, we extensively investigated
over 300 recent papers to provide a comprehensive overview of recent progress
on label-efficient learning strategies in MIA. We first present the background
of label-efficient learning and categorize the approaches into different
schemes. Next, we examine the current state-of-the-art methods in detail
through each scheme. Specifically, we provide an in-depth investigation,
covering not only canonical semi-supervised, self-supervised, and
multi-instance learning schemes, but also recently emerged active and
annotation-efficient learning strategies. Moreover, as a comprehensive
contribution to the field, this survey not only elucidates the commonalities
and unique features of the surveyed methods but also presents a detailed
analysis of the current challenges in the field and suggests potential avenues
for future research.
Comment: Update Few-shot Methods