90 research outputs found
FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection
The newly discovered coronavirus (COVID-19) pneumonia poses major research challenges in diagnosis and disease quantification. Deep-learning (DL) techniques allow extremely precise image segmentation, yet they require huge volumes of manually labeled data for supervised training. Few-Shot Learning (FSL) paradigms tackle this issue by learning a novel category from a small number of annotated instances. We present an innovative semi-supervised few-shot segmentation (FSS) approach for efficient segmentation of 2019-nCov infection (FSS-2019-nCov) from only a small number of annotated lung CT scans. The key challenge of this study is to provide accurate segmentation of COVID-19 infection from a limited number of annotated instances. For that purpose, we propose a novel dual-path deep-learning architecture for FSS. Each path contains an encoder–decoder (E-D) architecture that extracts high-level information while maintaining the channel information of COVID-19 CT slices. The E-D architecture consists of three main modules: a feature encoder module, a context enrichment (CE) module, and a feature decoder module. We utilize a pre-trained ResNet34 as the encoder backbone for feature extraction. The CE module consists of a newly proposed Smoothed Atrous Convolution (SAC) block and a Multi-scale Pyramid Pooling (MPP) block. The conditioner path takes pairs of CT images and their labels as input and produces a relevant knowledge representation that is transferred to the segmentation path and used to segment new images. To enable effective collaboration between the two paths, we propose an adaptive recombination and recalibration (RR) module that permits intensive knowledge exchange between paths with only a trivial increase in computational complexity. The model is extended to multi-class labeling for various types of lung infections.
This contribution overcomes the limitation posed by the lack of large numbers of COVID-19 CT scans. It also provides a general framework for lung disease diagnosis in limited-data situations.
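As one illustration of the blocks named above, a Multi-scale Pyramid Pooling module pools the feature map at several grid scales, upsamples each pooled map, and concatenates the results with the input along the channel axis. A minimal NumPy sketch of that general mechanism (function names and scales are illustrative, not the paper's implementation):

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool an (H, W, C) feature map with non-overlapping k x k windows."""
    h, w, c = x.shape
    x = x[: h - h % k, : w - w % k]  # trim so the map divides evenly
    return x.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def nearest_upsample(x, out_h, out_w):
    """Nearest-neighbour upsampling back to the original spatial size."""
    rows = np.arange(out_h) * x.shape[0] // out_h
    cols = np.arange(out_w) * x.shape[1] // out_w
    return x[rows][:, cols]

def multi_scale_pyramid_pooling(feat, scales=(1, 2, 4)):
    """Pool the feature map at several scales, upsample each pooled map,
    and concatenate all branches with the input along channels."""
    h, w, _ = feat.shape
    branches = [feat] + [
        nearest_upsample(avg_pool2d(feat, s), h, w) for s in scales
    ]
    return np.concatenate(branches, axis=-1)
```

The coarser branches summarize progressively larger context windows, which is the "context enrichment" intuition behind pyramid pooling.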
Adversarially Robust Prototypical Few-shot Segmentation with Neural-ODEs
Few-shot Learning (FSL) methods are being adopted in settings where data is
not abundantly available. This is especially seen in medical domains where the
annotations are expensive to obtain. Deep Neural Networks have been shown to be
vulnerable to adversarial attacks. This is even more severe in the case of FSL
due to the lack of a large number of training examples. In this paper, we
provide a framework to make few-shot segmentation models adversarially robust
in the medical domain where such attacks can severely impact the decisions made
by clinicians who use them. We propose a novel robust few-shot segmentation
framework, Prototypical Neural Ordinary Differential Equation (PNODE), that
provides defense against gradient-based adversarial attacks. We show that our
framework is more robust compared to traditional adversarial defense mechanisms
such as adversarial training. Adversarial training involves increased training
time and shows robustness to limited types of attacks depending on the type of
adversarial examples seen during training. Our proposed framework generalises
well to common adversarial attacks like FGSM, PGD and SMIA while having the
model parameters comparable to the existing few-shot segmentation models. We
show the effectiveness of our proposed approach on three publicly available
multi-organ segmentation datasets in both in-domain and cross-domain settings
by attacking the support and query sets without the need for ad-hoc adversarial
training. Comment: MICCAI 2022. arXiv admin note: substantial text overlap
with arXiv:2208.1242
Cross-Reference Transformer for Few-shot Medical Image Segmentation
Medical image processing faces a contradiction: medical images are applied
ever more widely, yet they are difficult to label. As a result, few-shot
learning has begun to receive more attention in the field of medical image
processing. This paper proposes a Cross-Reference Transformer for medical
image segmentation, which addresses the lack of interaction between the
support image and the query image in existing cross-reference methods. It can
better mine and enhance the similar parts of support features and query
features in high-dimensional channels. Experimental results show that the
proposed model achieves good results on both a CT dataset and an MRI dataset.
Comment: 6 pages, 4 figures
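The support–query interaction described above can be pictured as cross-attention: each query position re-expresses itself as a weighted mixture of support positions. A minimal single-head NumPy sketch of that general mechanism (illustrative names, not the paper's code):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feat, support_feat):
    """Single-head cross-attention: each of the Nq query positions attends
    over the Ns support positions, so query features are re-expressed as
    convex mixtures of support features."""
    d = query_feat.shape[-1]
    scores = query_feat @ support_feat.T / np.sqrt(d)  # (Nq, Ns)
    weights = softmax(scores, axis=-1)                 # each row sums to 1
    return weights @ support_feat                      # (Nq, d)
```

A full transformer block would add learned query/key/value projections and a residual connection around this mixing step.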
Meta-learning with implicit gradients in a few-shot setting for medical image segmentation
Widely used traditional supervised deep learning methods require a large number of training samples but often fail to generalize to unseen datasets. Therefore, a more general application of any trained model is quite limited in medical imaging for clinical practice. Training separate models for each unique lesion category or each unique patient population would require sufficiently large curated datasets, which is not practical in a real-world clinical set-up. Few-shot learning approaches can not only minimize the need for an enormous number of reliable ground-truth labels, which are labour-intensive and expensive to obtain, but can also be used to model a dataset coming from a new population. To this end, we propose to exploit an optimization-based implicit model-agnostic meta-learning (iMAML) algorithm in a few-shot setting for medical image segmentation. Our approach can leverage the learned weights from diverse but small training samples to perform analysis on unseen datasets with high accuracy. We show that, unlike classical few-shot learning approaches, our method improves generalization capability. To our knowledge, this is the first work that exploits iMAML for medical image segmentation and explores the strength of the model in scenarios such as meta-training on unique and mixed instances of lesion datasets. Our quantitative results on publicly available skin and polyp datasets show that the proposed method outperforms the naive supervised baseline model and two recent few-shot segmentation approaches by large margins. In addition, our iMAML approach shows an improvement of 2%–4% in Dice score over its counterpart MAML in most experiments.
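The implicit-gradient step that distinguishes iMAML from MAML can be illustrated on a toy problem where the task Hessian is known explicitly: instead of back-propagating through the unrolled inner loop, iMAML obtains the meta-gradient by solving a regularized linear system at the adapted parameters. A minimal NumPy sketch (the real algorithm approximates this solve with conjugate gradient and Hessian-vector products; names and the regularization strength `lam` here are illustrative):

```python
import numpy as np

def imaml_meta_gradient(outer_grad, hessian, lam):
    """iMAML implicit meta-gradient: solve (I + H / lam) g = outer_grad,
    where H is the Hessian of the inner task loss at the adapted
    parameters and lam is the proximal regularization strength."""
    n = hessian.shape[0]
    return np.linalg.solve(np.eye(n) + hessian / lam, outer_grad)
```

With a zero Hessian the meta-gradient reduces to the plain outer gradient; a curved inner loss shrinks it, reflecting how strongly adaptation would absorb the change.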
Self-supervised learning for few-shot medical image segmentation
Fully-supervised deep learning segmentation models are inflexible when encountering new, unseen semantic classes, and fine-tuning them often requires significant amounts of annotated data. Few-shot semantic segmentation (FSS) aims to solve this inflexibility by learning to segment an arbitrary unseen, semantically meaningful class from only a few labeled examples, without fine-tuning. State-of-the-art FSS methods are typically designed for segmenting natural images and rely on abundant annotated data of training classes to learn image representations that generalize well to unseen testing classes. However, such a training mechanism is impractical in annotation-scarce medical imaging scenarios. To address this challenge, we propose a novel self-supervised FSS framework for medical images, named SSL-ALPNet, which bypasses the requirement for annotations during training. The proposed method exploits superpixel-based pseudo-labels to provide supervision signals. In addition, we propose a simple yet effective adaptive local prototype pooling module, plugged into the prototypical network to further boost segmentation accuracy. We demonstrate the general applicability of the proposed approach on three different tasks: organ segmentation of abdominal CT and MRI images, respectively, and cardiac segmentation of MRI images. In our experiments, the proposed method yields higher Dice scores than conventional FSS methods that require manual annotations for training.
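The prototype idea underlying this family of methods can be sketched in a few lines: a class prototype is the masked average of features under a (pseudo-)label, and segmentation scores every spatial position against it. This is a minimal NumPy sketch of the generic prototypical mechanism, not SSL-ALPNet's adaptive local pooling; all names are illustrative:

```python
import numpy as np

def masked_average_prototype(feat, mask):
    """Masked average pooling: the class prototype is the mean feature
    vector over the (pseudo-)labelled region of an (H, W, C) map."""
    return feat[mask.astype(bool)].mean(axis=0)

def cosine_similarity_map(feat, prototype):
    """Score every spatial position against the prototype; thresholding
    this map yields the predicted segmentation."""
    f = feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return f @ p
```

With superpixel pseudo-labels standing in for manual masks, this loop needs no annotations at training time, which is the paper's central point.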
Robust Prototypical Few-Shot Organ Segmentation with Regularized Neural-ODEs
Despite the tremendous progress made by deep learning models in image
semantic segmentation, they typically require large numbers of annotated
examples, and increasing attention is therefore turning to problem settings
like Few-Shot Learning (FSL), where only a small amount of annotation is needed for
generalisation to novel classes. This is especially seen in medical domains
where dense pixel-level annotations are expensive to obtain. In this paper, we
propose Regularized Prototypical Neural Ordinary Differential Equation
(R-PNODE), a method that leverages intrinsic properties of Neural-ODEs,
assisted and enhanced by additional cluster and consistency losses to perform
Few-Shot Segmentation (FSS) of organs. R-PNODE constrains support and query
features from the same classes to lie closer in the representation space
thereby improving the performance over the existing Convolutional Neural
Network (CNN) based FSS methods. We further demonstrate that while many
existing Deep CNN based methods tend to be extremely vulnerable to adversarial
attacks, R-PNODE exhibits increased adversarial robustness for a wide array of
these attacks. We experiment with three publicly available multi-organ
segmentation datasets in both in-domain and cross-domain FSS settings to
demonstrate the efficacy of our method. In addition, we perform experiments
with seven commonly used adversarial attacks in various settings to demonstrate
R-PNODE's robustness. R-PNODE outperforms the baselines for FSS by significant
margins and also shows superior performance for a wide array of attacks varying
in intensity and design.
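One intuition behind the robustness claims above: Neural-ODE features are produced by integrating a smooth flow, so a small input perturbation can only move the final features a correspondingly small amount when the dynamics are well-behaved. A minimal NumPy sketch of Euler-integrated feature dynamics (the `dynamics` callable stands in for a learned network; this is not R-PNODE's implementation):

```python
import numpy as np

def neural_ode_features(x, dynamics, t0=0.0, t1=1.0, steps=100):
    """Evolve features by Euler-integrating dz/dt = dynamics(z, t)
    from t0 to t1, starting at the input features x."""
    z, dt = x.copy(), (t1 - t0) / steps
    for i in range(steps):
        z = z + dt * dynamics(z, t0 + i * dt)
    return z

# Contracting toy dynamics dz/dt = -z: z(1) is approximately exp(-1) * x,
# and two nearby inputs end up even closer together after the flow.
out = neural_ode_features(np.array([1.0]), lambda z, t: -z, steps=1000)
```

Contracting dynamics like this shrink input perturbations over the integration interval, which is one way the representation space can damp adversarial noise.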
- …