GAN Augmentation: Augmenting Training Data using Generative Adversarial Networks
One of the biggest issues facing the use of machine learning in medical
imaging is the lack of availability of large, labelled datasets. The annotation
of medical images is not only expensive and time-consuming but also highly
dependent on the availability of expert observers. The limited amount of
training data can inhibit the performance of supervised machine learning
algorithms which often need very large quantities of data on which to train to
avoid overfitting. So far, much effort has been directed at extracting as much
information as possible from what data is available. Generative Adversarial
Networks (GANs) offer a novel way to unlock additional information from a
dataset by generating synthetic samples with the appearance of real images.
This paper demonstrates the feasibility of introducing GAN derived synthetic
data to the training datasets in two brain segmentation tasks, leading to
improvements in Dice Similarity Coefficient (DSC) of between 1 and 5 percentage
points under different conditions, with the strongest effects seen when fewer
than ten training image stacks are available.
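The Dice Similarity Coefficient (DSC) reported above can be sketched in a few lines; this is a minimal pure-Python illustration over flattened binary masks (the function name and example masks are assumptions, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks.

    DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). Two empty masks are treated as a perfect match.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Example: flattened binary segmentation masks agreeing at 2 of 3 foreground voxels each
pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A gain of 1 to 5 percentage points on this metric corresponds to an absolute increase of 0.01 to 0.05 in the score.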
Informative sample generation using class aware generative adversarial networks for classification of chest X-rays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to limited images covering different disease types
and severity. The problem is especially acute where there is severe class
imbalance. We propose an active learning (AL) framework to select the most
informative samples for training our model using a Bayesian neural network.
Informative samples are then used within a novel class aware generative
adversarial network (CAGAN) to generate realistic chest X-ray images for data
augmentation by transferring characteristics from one class label to another.
Experiments show our proposed AL framework achieves state-of-the-art
performance using only a fraction of the full dataset, thus saving significant
time and effort over conventional methods.
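A common way to score informativeness with a Bayesian neural network is the predictive entropy of the mean class distribution over Monte Carlo dropout passes; this is a hypothetical sketch of that idea, not the paper's exact criterion (all names and the toy probabilities are assumptions):

```python
import math

def predictive_entropy(mc_probs):
    """Entropy of the mean class distribution over stochastic forward passes.

    mc_probs: list of per-pass probability vectors for one sample.
    Higher entropy means the model is more uncertain, so the sample is
    more informative to label.
    """
    n_classes = len(mc_probs[0])
    mean = [sum(p[c] for p in mc_probs) / len(mc_probs) for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def select_most_informative(samples_mc_probs, k):
    """Return indices of the k highest-entropy samples."""
    ranked = sorted(range(len(samples_mc_probs)),
                    key=lambda i: predictive_entropy(samples_mc_probs[i]),
                    reverse=True)
    return ranked[:k]

# Two samples, two MC passes each: the first is confidently class 0,
# the second is ambiguous, so it is selected for labeling.
mc = [
    [[0.9, 0.1], [0.95, 0.05]],
    [[0.6, 0.4], [0.4, 0.6]],
]
print(select_most_informative(mc, 1))  # [1]
```

The selected samples would then feed the CAGAN-based augmentation step rather than being labeled in isolation.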
Generalized Zero Shot Learning For Medical Image Classification
In many real world medical image classification settings we do not have
access to samples of all possible disease classes, while a robust system is
expected to give high performance in recognizing novel test data. We propose a
generalized zero shot learning (GZSL) method that uses self supervised learning
(SSL) for: 1) selecting anchor vectors of different disease classes; and 2)
training a feature generator. Our approach does not require class attribute
vectors which are available for natural images but not for medical images. SSL
ensures that the anchor vectors are representative of each class. SSL is also
used to generate synthetic features of unseen classes. Using a simpler
architecture, our method matches a state of the art SSL based GZSL method for
natural images and outperforms all methods for medical images. Our method is
adaptable enough to accommodate class attribute vectors when they are available
for natural images.
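Once anchor vectors exist for each disease class, classifying a test feature (seen or unseen) can reduce to a nearest-anchor lookup; a minimal sketch assuming cosine similarity and hypothetical class names (the paper's actual matching rule may differ):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(feature, anchors):
    """Assign the class whose anchor vector is most similar to the feature.

    anchors: dict mapping class name -> anchor vector. Unseen classes are
    handled the same way as seen ones, provided they have an anchor
    (e.g. synthesized from SSL-generated features).
    """
    return max(anchors, key=lambda c: cosine(feature, anchors[c]))

# Hypothetical 2-D anchors for one seen and one unseen class.
anchors = {"seen_A": [1.0, 0.0], "unseen_B": [0.0, 1.0]}
print(classify([0.2, 0.9], anchors))  # unseen_B
```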
Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation
Existing image segmentation networks mainly leverage large-scale labeled
datasets to attain high accuracy. However, labeling medical images is very
expensive since it requires sophisticated expert knowledge. Thus, it is more
desirable to employ only a few labeled data in pursuing high segmentation
performance. In this paper, we develop a data augmentation method for one-shot
brain magnetic resonance imaging (MRI) segmentation which exploits only
one labeled MRI image (named atlas) and a few unlabeled images. In particular,
we propose to learn the probability distributions of deformations (including
shapes and intensities) of different unlabeled MRI images with respect to the
atlas via 3D variational autoencoders (VAEs). In this manner, our method is
able to exploit the learned distributions of image deformations to generate new
authentic brain MRI images, and the number of generated samples will be
sufficient to train a deep segmentation network. Furthermore, we introduce a
new standard segmentation benchmark to evaluate the generalization performance
of a segmentation network through a cross-dataset setting (collected from
different sources). Extensive experiments demonstrate that our method
outperforms the state-of-the-art one-shot medical segmentation methods. Our
code has been released at
https://github.com/dyh127/Modeling-the-Probabilistic-Distribution-of-Unlabeled-Data
Comment: AAAI 202
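At the heart of sampling new deformations from a trained VAE is the reparameterization trick, z = mu + sigma * eps with eps ~ N(0, 1); a minimal sketch under assumed names (a real decoder would map each z to a 3D deformation field applied to the atlas):

```python
import math
import random

def sample_latent(mu, log_var, rng=random):
    """Reparameterization trick: z = mu + exp(log_var / 2) * eps, eps ~ N(0, 1).

    mu, log_var: the encoder's Gaussian parameters for one deformation.
    Each call draws a fresh latent code, so a decoder mapping z to a
    deformation (shape or intensity) can synthesize many distinct
    training images from a single labeled atlas.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

random.seed(0)
# With near-zero variance the sample collapses to the mean.
z = sample_latent([1.0, -1.0], [-100.0, -100.0])
print([round(v, 3) for v in z])  # [1.0, -1.0]
```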