682 research outputs found
Neural Style Transfer Improves 3D Cardiovascular MR Image Segmentation on Inconsistent Data
Three-dimensional medical image segmentation is one of the most important
problems in medical image analysis and plays a key role in downstream diagnosis
and treatment. In recent years, deep neural networks have achieved
groundbreaking success in medical image segmentation. However, due to the high
variance in instrumental parameters, experimental protocols, and subject
appearances, the generalization of deep learning models is often hindered by
the inconsistency in medical images generated by different machines and
hospitals. In this work, we present StyleSegor, an efficient and easy-to-use
strategy to alleviate this inconsistency issue. Specifically, a neural style
transfer algorithm is applied to unlabeled data in order to minimize
differences in image properties, including brightness, contrast, and texture,
between the labeled and unlabeled data. We also apply probabilistic adjustment
on the network output and integrate multiple predictions through ensemble
learning. On a publicly available whole heart segmentation benchmarking dataset
from the MICCAI HVSMR 2016 challenge, we demonstrate a Dice accuracy
surpassing the current state-of-the-art method and, notably, an improvement
of the total score by 29.91%. StyleSegor is thus corroborated to be an
accurate tool for 3D whole-heart segmentation, especially on highly
inconsistent data, and is available at
https://github.com/horsepurve/StyleSegor.
Comment: 22nd International Conference on Medical Image Computing and Computer
Assisted Intervention (MICCAI 2019) early accept
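The abstract does not spell out the algorithmic details. As a rough illustration only, style transfer for scanner harmonization can be approximated by matching first-order intensity statistics, and the ensemble step by averaging per-model probability maps; the function names and this simplification are assumptions, not StyleSegor's actual method:

```python
import numpy as np

def match_statistics(unlabeled, labeled_ref):
    # Shift/scale the unlabeled volume so its intensity mean and std
    # match the labeled reference -- a crude, first-order stand-in for
    # full neural style transfer.
    u_mean, u_std = unlabeled.mean(), unlabeled.std()
    r_mean, r_std = labeled_ref.mean(), labeled_ref.std()
    return (unlabeled - u_mean) / (u_std + 1e-8) * r_std + r_mean

def ensemble_predict(prob_maps):
    # Average per-voxel class probabilities from several models,
    # then take the argmax (simple ensemble learning).
    return np.mean(prob_maps, axis=0).argmax(axis=-1)
```

In practice the statistics would be computed per modality or per slice; the averaging step is the standard soft-voting ensemble.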
XCAT-GAN for Synthesizing 3D Consistent Labeled Cardiac MR Images on Anatomically Variable XCAT Phantoms
Generative adversarial networks (GANs) have provided promising data
enrichment solutions by synthesizing high-fidelity images. However, generating
large sets of labeled images with new anatomical variations remains unexplored.
We propose a novel method for synthesizing cardiac magnetic resonance (CMR)
images on a population of virtual subjects with a large anatomical variation,
introduced using the 4D eXtended Cardiac and Torso (XCAT) computerized human
phantom. We investigate two conditional image synthesis approaches grounded on
a semantically-consistent mask-guided image generation technique: 4-class and
8-class XCAT-GANs. The 4-class technique relies only on the annotations of the
heart, while the 8-class technique employs a predicted multi-tissue label map
of the heart-surrounding organs and provides better guidance for our
conditional image synthesis. For both techniques, we train our conditional
XCAT-GAN with real images paired with corresponding labels; at inference time,
we substitute the labels with the XCAT-derived ones.
Therefore, the trained network accurately transfers the tissue-specific
textures to the new label maps. By creating 33 virtual subjects of synthetic
CMR images at the end-diastolic and end-systolic phases, we evaluate the
usefulness of such data in the downstream cardiac cavity segmentation task
under different augmentation strategies. Results demonstrate that even with
only 20% of real images (40 volumes) seen during training, segmentation
performance is retained with the addition of synthetic CMR images. Moreover,
the benefit of augmenting the real data with synthetic images is evident in a
reduction of the Hausdorff distance by up to 28% and an increase in the Dice
score of up to 5%, indicating a higher similarity to the ground truth in all
dimensions.
Comment: Accepted for MICCAI 202
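Both reported metrics are standard segmentation measures. A minimal NumPy sketch of the Dice score and the symmetric Hausdorff distance on binary masks (brute-force pairwise distances, so suitable only for small volumes):

```python
import numpy as np

def dice(a, b):
    # Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|).
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    # Symmetric Hausdorff distance between the foreground voxel sets
    # of two binary masks (Euclidean, brute force).
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Production pipelines typically use optimized implementations (e.g. SciPy's directed Hausdorff) rather than the O(|A||B|) loop above.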
Improving the domain generalization and robustness of neural networks for medical imaging
Deep neural networks are powerful tools to process medical images, with great potential to accelerate clinical workflows and facilitate large-scale studies. However, in order to achieve satisfactory performance at deployment, these networks generally require massive labeled data collected from various domains (e.g., hospitals, scanners), which is rarely available in practice. The main goal of this work is to improve the domain generalization and robustness of neural networks for medical imaging when labeled data is limited.
First, we develop multi-task learning methods to exploit auxiliary data to enhance networks. We first present a multi-task U-net that performs image classification and MR atrial segmentation simultaneously. We then present a shape-aware multi-view autoencoder together with a multi-view U-net, which enables extracting useful shape priors from complementary long-axis views and short-axis views in order to assist the left ventricular myocardium segmentation task on the short-axis MR images. Experimental results show that the proposed networks successfully leverage complementary information from auxiliary tasks to improve model generalization on the main segmentation task.
Second, we consider utilizing unlabeled data. We first present an adversarial data augmentation method with bias fields to improve semi-supervised learning for general medical image segmentation tasks. We further explore a more challenging setting where the source and the target images are from different data distributions. We demonstrate that an unsupervised image style transfer method can bridge the domain gap, successfully transferring the knowledge learned from labeled balanced Steady-State Free Precession (bSSFP) images to unlabeled Late Gadolinium Enhancement (LGE) images, achieving state-of-the-art performance on a public multi-sequence cardiac MR segmentation challenge.
For scenarios with limited training data from a single domain, we first propose a general training and testing pipeline to improve cardiac image segmentation across various unseen domains. We then present a latent space data augmentation method with a cooperative training framework to further enhance model robustness against unseen domains and imaging artifacts.
Open Access
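The bias-field augmentation mentioned above can be sketched as a smooth multiplicative intensity field applied to the image; the low-order polynomial parameterization below is an illustrative assumption, not the thesis's exact (adversarial) formulation:

```python
import numpy as np

def random_bias_field(shape, strength=0.3, rng=None):
    # Smooth multiplicative field built from low-order polynomials of
    # the normalized voxel coordinates (hypothetical parameterization).
    if rng is None:
        rng = np.random.default_rng()
    coords = [np.linspace(-1.0, 1.0, n) for n in shape]
    grids = np.meshgrid(*coords, indexing="ij")
    field = np.zeros(shape)
    for g in grids:
        a, b = rng.uniform(-strength, strength, size=2)
        field += a * g + b * g ** 2
    return np.exp(field)  # strictly positive, close to 1.0

def augment(image, rng=None):
    # Simulate scanner intensity inhomogeneity for data augmentation.
    return image * random_bias_field(image.shape, rng=rng)
```

In the adversarial variant described in the thesis, the field parameters would be optimized to maximally confuse the segmenter rather than drawn at random.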
Test-time Unsupervised Domain Adaptation
Convolutional neural networks trained on publicly available medical imaging
datasets (source domain) rarely generalise to different scanners or acquisition
protocols (target domain). This motivates the active field of domain
adaptation. While some approaches to the problem require labeled data from the
target domain, others adopt an unsupervised approach to domain adaptation
(UDA). Evaluating UDA methods consists of measuring the model's ability to
generalise to unseen data in the target domain. In this work, we argue that
this is not as useful as adapting to the test set directly. We therefore
propose an evaluation framework where we perform test-time UDA on each subject
separately. We show that models adapted to a specific target subject from the
target domain outperform a domain adaptation method which has seen more data of
the target domain but not this specific target subject. This result supports
the thesis that unsupervised domain adaptation should be used at test-time,
even if only using a single target-domain subject.
Comment: Accepted at MICCAI 202
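The abstract leaves the adaptation mechanism unspecified. As a toy illustration of the idea, a label-free test-time step could tune a single model parameter per subject (here a softmax temperature) by minimizing prediction entropy; all names and the entropy criterion are assumptions for this sketch:

```python
import numpy as np

def softmax(logits, temperature):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(probs):
    # Mean per-voxel entropy of a probability map.
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean()

def adapt_temperature(logits, candidates=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # For one test subject, pick the temperature whose predictions are
    # most confident (lowest entropy) -- no target labels required.
    return min(candidates, key=lambda t: entropy(softmax(logits, t)))
```

Real test-time UDA methods adapt far richer parameters (e.g. normalization statistics or encoder weights), but the per-subject, unsupervised flavor is the same.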
Automatic cerebral hemisphere segmentation in rat MRI with lesions via attention-based convolutional neural networks
We present MedicDeepLabv3+, a convolutional neural network that is the first
completely automatic method to segment cerebral hemispheres in magnetic
resonance (MR) volumes of rats with lesions. MedicDeepLabv3+ improves the
state-of-the-art DeepLabv3+ with an advanced decoder, incorporating spatial
attention layers and additional skip connections that, as we show in our
experiments, lead to more precise segmentations. MedicDeepLabv3+ requires no MR
image preprocessing, such as bias-field correction or registration to a
template, produces segmentations in less than a second, and its GPU memory
requirements can be adjusted based on the available resources. We optimized
MedicDeepLabv3+ and six other state-of-the-art convolutional neural networks
(DeepLabv3+, UNet, HighRes3DNet, V-Net, VoxResNet, Demon) on a heterogeneous
training set composed of MR volumes from 11 cohorts acquired at different
lesion stages. Then, we evaluated the trained models and two approaches
specifically designed for rodent MRI skull stripping (RATS and RBET) on a large
dataset of 655 MR rat brain volumes. In our experiments, MedicDeepLabv3+
outperformed the other methods, yielding average Dice coefficients of 0.952
and 0.944 in the brain and contralateral hemisphere regions, respectively.
Additionally, we
show that, even when limiting the GPU memory and the training data, our
MedicDeepLabv3+ still provided satisfactory segmentations. In conclusion, our
method, publicly available at https://github.com/jmlipman/MedicDeepLabv3Plus,
yielded excellent results in multiple scenarios, demonstrating its capability
to reduce human workload in rat neuroimaging studies.
Comment: Published in Neuroinformatics
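As a schematic of the spatial-attention idea credited with the more precise segmentations (not MedicDeepLabv3+'s exact layer), a per-position attention map can gate all feature channels at once:

```python
import numpy as np

def spatial_attention(features, w, b=0.0):
    # features: (C, H, W) feature map; w: (C,) weights of a 1x1 projection.
    # Collapse channels to one score per position, squash to (0, 1),
    # and reweight every channel by the resulting attention map.
    score = np.tensordot(w, features, axes=([0], [0])) + b  # (H, W)
    attn = 1.0 / (1.0 + np.exp(-score))                     # sigmoid
    return features * attn[None, :, :]
```

In a trained network, w and b would be learned so that the gate suppresses background positions and passes lesion or hemisphere features through.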
Image Quality Assessment for Population Cardiac MRI: From Detection to Synthesis
Cardiac magnetic resonance (CMR) images play a growing role in diagnostic imaging of cardiovascular diseases. Left ventricular (LV) cardiac anatomy and function are widely used for diagnosis and for monitoring disease progression in cardiology, and to assess the patient's response to cardiac surgery and interventional procedures. For population imaging studies, CMR is arguably the most comprehensive modality for non-invasive and non-ionising imaging of the heart and great vessels and is hence best suited for population imaging cohorts. Due to insufficient radiographer experience in planning a scan, natural cardiac muscle contraction, breathing motion, and imperfect triggering, CMR can display incomplete LV coverage, which hampers quantitative LV characterization and diagnostic accuracy.
To tackle this limitation and enhance the accuracy and robustness of the automated cardiac volume and functional assessment, this thesis focuses on the development and application of state-of-the-art deep learning (DL) techniques in cardiac imaging. Specifically, we propose new image feature representation types that are learnt with DL models and aimed at highlighting the CMR image quality cross-dataset. These representations are also intended to estimate the CMR image quality for better interpretation and analysis. Moreover, we investigate how quantitative analysis can benefit when these learnt image representations are used in image synthesis.
Specifically, a 3D Fisher discriminative representation is introduced to identify CMR image quality in the UK Biobank cardiac data. Additionally, a novel adversarial learning (AL) framework is introduced for cross-dataset CMR image quality assessment, and we show that the common representations learnt by AL can be useful and informative for cross-dataset CMR image analysis. Moreover, we utilize the dataset-invariance (DI) representations for CMR volume interpolation by introducing a novel generative adversarial network (GAN)-based image synthesis framework, which enhances CMR image quality across datasets.
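The "Fisher discriminative representation" presumably builds on the classical Fisher criterion; as a one-dimensional toy version for binary quality labels (an illustrative assumption, not the thesis's 3D formulation):

```python
import numpy as np

def fisher_score(features, labels):
    # Ratio of between-class to within-class variance: large values
    # mean the feature separates good- from poor-quality images well.
    x0 = features[labels == 0]
    x1 = features[labels == 1]
    between = (x0.mean() - x1.mean()) ** 2
    within = x0.var() + x1.var()
    return between / (within + 1e-12)
```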