A Conditional Flow Variational Autoencoder for Controllable Synthesis of Virtual Populations of Anatomy
The generation of virtual populations (VPs) of anatomy is essential for
conducting in silico trials of medical devices. Typically, the generated VP
should capture sufficient variability while remaining plausible and should
reflect the specific characteristics and demographics of the patients observed
in real populations. In several applications, it is desirable to synthesise
virtual populations in a \textit{controlled} manner, where relevant covariates
are used to conditionally synthesise virtual populations that fit a specific
target population/characteristics. We propose to equip a conditional
variational autoencoder (cVAE) with normalising flows to increase the
flexibility and complexity of the approximate posterior learnt, thereby
enabling more controllable synthesis of VPs of anatomical structures. We
demonstrate the performance of our conditional flow VAE using a data set of
cardiac left ventricles acquired from 2360 patients, with associated
demographic information and clinical measurements (used as
covariates/conditional information). The results obtained indicate the
superiority of the proposed method for conditional synthesis of virtual
populations of cardiac left ventricles relative to a cVAE. Conditional
synthesis performance was evaluated in terms of generalisation and specificity
errors and in terms of the ability to preserve clinically relevant biomarkers
in synthesised VPs, that is, the left ventricular blood pool and myocardial
volume, relative to the real observed population.
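As a rough illustration of the core idea, the approximate posterior of a cVAE can be enriched by passing the sampled latent code through a stack of planar normalising flows before decoding conditioned on the covariates. The sketch below is a minimal, assumption-laden version of that construction (layer sizes, the choice of planar flows, and all names are illustrative, not the authors' implementation):

    # Minimal sketch: conditional VAE whose sampled posterior is transformed
    # by planar normalising flows (illustrative shapes/names only).
    import torch
    import torch.nn as nn

    class PlanarFlow(nn.Module):
        """Single planar flow: z' = z + u * tanh(w^T z + b)."""
        def __init__(self, dim):
            super().__init__()
            self.u = nn.Parameter(torch.randn(dim) * 0.01)
            self.w = nn.Parameter(torch.randn(dim) * 0.01)
            self.b = nn.Parameter(torch.zeros(1))

        def forward(self, z):
            lin = z @ self.w + self.b                              # (B,)
            z_new = z + self.u * torch.tanh(lin).unsqueeze(-1)
            psi = (1 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
            log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)
            return z_new, log_det

    class ConditionalFlowVAE(nn.Module):
        def __init__(self, x_dim, c_dim, z_dim=16, n_flows=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 128), nn.ReLU())
            self.fc_mu = nn.Linear(128, z_dim)
            self.fc_logvar = nn.Linear(128, z_dim)
            self.flows = nn.ModuleList([PlanarFlow(z_dim) for _ in range(n_flows)])
            self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
                                     nn.Linear(128, x_dim))

        def forward(self, x, c):
            h = self.enc(torch.cat([x, c], dim=-1))
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
            sum_log_det = 0.0
            for flow in self.flows:                    # enrich the posterior
                z, log_det = flow(z)
                sum_log_det = sum_log_det + log_det
            x_hat = self.dec(torch.cat([z, c], dim=-1))  # decode conditioned on c
            return x_hat, mu, logvar, sum_log_det

The summed log-determinants would enter the ELBO so that the flow-enriched posterior is trained consistently with the reconstruction and prior terms.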
Partially Conditioned Generative Adversarial Networks
Generative models are undoubtedly a hot topic in Artificial Intelligence,
among which the most common type is Generative Adversarial Networks (GANs).
These architectures let one synthesise artificial datasets by implicitly
modelling the underlying probability distribution of a real-world training
dataset. With the introduction of Conditional GANs and their variants, these
methods were extended to generating samples conditioned on ancillary
information available for each sample within the dataset. From a practical
standpoint, however, one might desire to generate data conditioned on partial
information. That is, only a subset of the ancillary conditioning variables
might be of interest when synthesising data. In this work, we argue that
standard Conditional GANs are not suitable for such a task and propose a new
Adversarial Network architecture and training strategy to deal with the ensuing
problems. Experiments illustrating the value of the proposed approach in digit
and face image synthesis under partial conditioning information are presented,
showing that the proposed method can effectively outperform the standard
approach under these circumstances.
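One simple way to picture partial conditioning is to feed the generator both the conditioning vector and a binary mask marking which of its entries are actually specified, so that "unspecified" is distinguishable from "zero". The sketch below only illustrates that idea under assumed dimensions and names; the paper's architecture and training strategy differ in detail:

    # Illustrative sketch: a generator that accepts a conditioning vector plus
    # a binary mask marking which conditioning variables are specified
    # (hypothetical design, not the architecture from the paper).
    import torch
    import torch.nn as nn

    class PartiallyConditionedGenerator(nn.Module):
        def __init__(self, z_dim=64, c_dim=10, out_dim=784):
            super().__init__()
            # the mask is fed alongside the masked conditions so the network
            # can tell "value is zero" apart from "value is unspecified"
            self.net = nn.Sequential(
                nn.Linear(z_dim + 2 * c_dim, 256), nn.ReLU(),
                nn.Linear(256, out_dim), nn.Tanh(),
            )

        def forward(self, z, c, mask):
            c_masked = c * mask                  # hide unspecified covariates
            return self.net(torch.cat([z, c_masked, mask], dim=-1))

    # usage: condition only on the first of 10 conditioning variables
    g = PartiallyConditionedGenerator()
    z = torch.randn(8, 64)
    c = torch.zeros(8, 10); c[:, 0] = 1.0
    mask = torch.zeros(8, 10); mask[:, 0] = 1.0
    fake = g(z, c, mask)                         # (8, 784)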
Double Diffusion Encoding Prevents Degeneracy in Parameter Estimation of Biophysical Models in Diffusion MRI
Purpose: Biophysical tissue models are increasingly used in the
interpretation of diffusion MRI (dMRI) data, with the potential to provide
specific biomarkers of brain microstructural changes. However, it has
recently been shown for the general Standard Model that parameter estimation
from dMRI data is ill-posed unless very strong magnetic gradients are used. We
analyse
this issue for the Neurite Orientation Dispersion and Density Imaging with
Diffusivity Assessment (NODDIDA) model and demonstrate that its extension from
Single Diffusion Encoding (SDE) to Double Diffusion Encoding (DDE) solves the
ill-posedness and increases the accuracy of the parameter estimation. Methods:
We analyse theoretically the cumulant expansion up to fourth order in b of SDE
and DDE signals. Additionally, we perform in silico experiments to compare SDE
and DDE capabilities under similar noise conditions. Results: We prove
analytically that DDE provides invariant information non-accessible from SDE,
which makes the NODDIDA parameter estimation injective. The in silico
experiments show that DDE reduces the bias and mean square error of the
estimation along the whole feasible region of 5D model parameter space.
Conclusions: DDE adds additional information for estimating the model
parameters, unexplored by SDE, which is enough to solve the degeneracy in the
NODDIDA model parameter estimation.
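For context, the per-direction cumulant expansion of an SDE signal up to second order in b takes the standard diffusion-kurtosis form below; the NODDIDA-specific coefficients and the additional DDE terms are the quantities analysed in the paper:

    % Standard per-direction cumulant expansion of the SDE signal (textbook
    % diffusion-kurtosis form, stated here only for orientation):
    \begin{equation}
      \ln\frac{S_{\mathrm{SDE}}(b,\hat{\mathbf{n}})}{S(0)}
        = -\,b\,D(\hat{\mathbf{n}})
          + \frac{b^{2}}{6}\,D(\hat{\mathbf{n}})^{2}\,K(\hat{\mathbf{n}})
          + \mathcal{O}(b^{3})
    \end{equation}
    % Schematically, splitting the same total b over two encoding directions
    % \hat{n}_{1}, \hat{n}_{2} in DDE introduces, at the same order b^{2},
    % a cross term of the form
    %   b^{2}\,\hat{n}_{1,i}\hat{n}_{1,j}\hat{n}_{2,k}\hat{n}_{2,l}\,Z_{ijkl},
    % with Z a fourth-order tensor; this is the invariant information that is
    % not accessible from SDE alone.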
Multi-stage Biomarker Models for Progression Estimation in Alzheimer’s Disease
The estimation of disease progression in Alzheimer’s disease
(AD) based on a vector of quantitative biomarkers is of high interest
to clinicians, patients, and biomedical researchers alike. In this work,
quantile regression is employed to learn statistical models describing the
evolution of such biomarkers. Two separate models are constructed using
(1) subjects that progress from a cognitively normal (CN) stage to mild
cognitive impairment (MCI) and (2) subjects that progress from MCI
to AD during the observation window of a longitudinal study. These
models are then automatically combined to develop a multi-stage disease
progression model for the whole disease course. A probabilistic approach
is derived to estimate the current disease progress (DP) and the disease
progression rate (DPR) of a given individual by fitting any acquired
biomarkers to these models. A particular strength of this method is that
it is applicable even if individual biomarker measurements are missing
for the subject. Employing cognitive scores and image-based biomarkers,
the presented method is used to estimate DP and DPR for subjects from
the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Further, the
potential use of these values as features for different classification tasks
is demonstrated. For example, an accuracy of 64% is reached for CN vs.
MCI vs. AD classification.
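As a toy illustration of the first modelling step, quantile regression can be used to fit quantile curves of a single biomarker against disease progress. The snippet below uses synthetic data and assumed names, and omits the stage-combination and probabilistic fitting machinery described above:

    # Toy sketch: fit quantile curves of one biomarker against disease
    # progress (synthetic data; the paper's models are more involved).
    import numpy as np
    from sklearn.linear_model import QuantileRegressor

    rng = np.random.default_rng(0)
    progress = rng.uniform(0, 10, 500)                        # hypothetical disease progress
    biomarker = 2.0 + 0.8 * progress + rng.normal(0, 1.0, 500)  # hypothetical biomarker

    X = progress.reshape(-1, 1)
    curves = {}
    for q in (0.25, 0.5, 0.75):                               # lower/median/upper quantiles
        curves[q] = QuantileRegressor(quantile=q, alpha=0.0).fit(X, biomarker)

    # the fitted quantile curves describe the biomarker distribution at each
    # progress value; an individual's measurements can then be matched against
    # them to estimate that individual's most likely disease progress
    print({q: (m.intercept_, m.coef_[0]) for q, m in curves.items()})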
A Generative Shape Compositional Framework to Synthesise Populations of Virtual Chimaeras
Generating virtual populations of anatomy that capture sufficient variability while remaining plausible is essential for conducting in-silico trials of medical devices. However, not all anatomical shapes of interest are always available for each individual in a population. Hence, missing/partially-overlapping anatomical information is often available across individuals in a population. We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets. The proposed generative model can synthesise complete, complex shape assemblies, coined virtual chimaeras, as opposed to natural human chimaeras. We applied this framework to build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures. Specifically, we propose a generative shape compositional framework which comprises two components: a part-aware generative shape model which captures the variability in shape observed for each structure of interest in the training population; and a spatial composition network which assembles/composes the structures synthesised by the former into multi-part shape assemblies (viz. virtual chimaeras). We also propose a novel self-supervised learning scheme that enables the spatial composition network to be trained with partially overlapping data and weak labels. We trained and validated our approach using shapes of cardiac structures derived from cardiac magnetic resonance images available in the UK Biobank. Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity. This demonstrates the superiority of the proposed approach, as the synthesised cardiac virtual populations are more plausible and capture a greater degree of variability in shape than those generated by the PCA-based shape model.
CAR-Net: Unsupervised Co-Attention Guided Registration Network for Joint Registration and Structure Learning
Image registration is a fundamental building block for various applications in medical image analysis. To better explore the correlation between the fixed and moving images and improve registration performance, we propose a novel deep learning network, Co-Attention guided Registration Network (CAR-Net). CAR-Net employs a co-attention block to learn a new representation of the inputs, which drives the registration of the fixed and moving images. Experiments on UK Biobank cardiac cine-magnetic resonance image data demonstrate that CAR-Net obtains higher registration accuracy and smoother deformation fields than state-of-the-art unsupervised registration methods, while achieving comparable or better registration performance than corresponding weakly-supervised variants. In addition, our approach can provide critical structural information of the input fixed and moving images simultaneously in a completely unsupervised manner.
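A co-attention block of the kind described can be sketched as a pair of cross-attention operations between fixed and moving feature maps. The version below is illustrative only (channel sizes, the bilinear affinity, and the concatenation scheme are assumptions, not the CAR-Net implementation):

    # Rough sketch of a co-attention block between fixed and moving feature
    # maps (illustrative shapes only).
    import torch
    import torch.nn as nn

    class CoAttentionBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.weight = nn.Parameter(torch.eye(channels))  # learnable affinity weight

        def forward(self, feat_fixed, feat_moving):
            b, c, h, w = feat_fixed.shape
            f = feat_fixed.flatten(2)                        # (B, C, HW)
            m = feat_moving.flatten(2)                       # (B, C, HW)
            affinity = torch.einsum("bci,cd,bdj->bij", f, self.weight, m)  # (B, HW, HW)
            att_f = torch.softmax(affinity, dim=-1) @ m.transpose(1, 2)    # moving -> fixed
            att_m = torch.softmax(affinity.transpose(1, 2), dim=-1) @ f.transpose(1, 2)
            att_f = att_f.transpose(1, 2).reshape(b, c, h, w)
            att_m = att_m.transpose(1, 2).reshape(b, c, h, w)
            # the attended features are typically concatenated with the
            # originals before being passed to the registration decoder
            return torch.cat([feat_fixed, att_f], 1), torch.cat([feat_moving, att_m], 1)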
A Generative Shape Compositional Framework: Towards Representative Populations of Virtual Heart Chimaeras
Generating virtual populations of anatomy that capture sufficient variability
while remaining plausible is essential for conducting in-silico trials of
medical devices. However, not all anatomical shapes of interest are always
available for each individual in a population. Hence,
missing/partially-overlapping anatomical information is often available across
individuals in a population. We introduce a generative shape model for complex
anatomical structures, learnable from unpaired datasets. The proposed
generative model can synthesise complete, complex shape assemblies, coined
virtual chimaeras, as opposed to natural human chimaeras. We
applied this framework to build virtual chimaeras from databases of whole-heart
shape assemblies that each contribute samples for heart substructures.
Specifically, we propose a generative shape compositional framework which
comprises two components: a part-aware generative shape model which captures
the variability in shape observed for each structure of interest in the
training population; and a spatial composition network which assembles/composes
the structures synthesised by the former into multi-part shape assemblies (viz.
virtual chimaeras). We also propose a novel self-supervised learning scheme
that enables the spatial composition network to be trained with partially
overlapping data and weak labels. We trained and validated our approach using
shapes of cardiac structures derived from cardiac magnetic resonance images
available in the UK Biobank. Our approach significantly outperforms a PCA-based
shape model (trained with complete data) in terms of generalisability and
specificity. This demonstrates the superiority of the proposed approach, as
the synthesised cardiac virtual populations are more plausible and capture a
greater degree of variability in shape than those generated by the PCA-based
shape model.
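A skeletal view of the two-component design is: one generative decoder per substructure, plus a composition network that predicts a spatial transform placing each synthesised part into a common assembly space. The sketch below makes that structure concrete under assumed point-cloud representations, dimensionalities and an affine parameterisation; it is not the authors' implementation:

    # Skeleton of the part-aware generation + spatial composition idea
    # (all names, shapes and the affine parameterisation are assumptions).
    import torch
    import torch.nn as nn

    class PartGenerator(nn.Module):
        """Decodes a latent code into one substructure as an N x 3 point cloud."""
        def __init__(self, z_dim=32, n_points=1024):
            super().__init__()
            self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_points * 3))
            self.n_points = n_points

        def forward(self, z):
            return self.dec(z).view(-1, self.n_points, 3)

    class SpatialComposer(nn.Module):
        """Predicts a per-part affine transform from the concatenated latents."""
        def __init__(self, n_parts, z_dim=32):
            super().__init__()
            self.net = nn.Linear(n_parts * z_dim, n_parts * 12)  # 3x4 affine per part
            self.n_parts = n_parts

        def forward(self, latents, parts):
            theta = self.net(torch.cat(latents, -1)).view(-1, self.n_parts, 3, 4)
            placed = []
            for k, pts in enumerate(parts):
                A, t = theta[:, k, :, :3], theta[:, k, :, 3]
                placed.append(pts @ A.transpose(1, 2) + t.unsqueeze(1))
            return placed   # substructures placed into a common assembly space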
Joint segmentation and discontinuity-preserving deformable registration: Application to cardiac cine-MR images
Medical image registration is a challenging task involving the estimation of
spatial transformations to establish anatomical correspondence between pairs or
groups of images. Recently, deep learning-based image registration methods have
been widely explored, and demonstrated to enable fast and accurate image
registration in a variety of applications. However, most deep learning-based
registration methods assume that the deformation fields are smooth and
continuous everywhere in the image domain, which is not always true, especially
when registering images whose fields of view contain discontinuities at
tissue/organ boundaries. In such scenarios, enforcing smooth, globally
continuous deformation fields leads to incorrect/implausible registration
results. We propose a novel discontinuity-preserving image registration method
to tackle this challenge, which ensures globally discontinuous and locally
smooth deformation fields, leading to more accurate and realistic registration
results. The proposed method leverages the complementary nature of image
segmentation and registration and enables joint segmentation and pair-wise
registration of images. A co-attention block is proposed in the segmentation
component of the network to learn the structural correlations in the input
images, while a discontinuity-preserving registration strategy is employed in
the registration component of the network to ensure plausibility in the
estimated deformation fields at tissue/organ interfaces. We evaluate our method
on the task of intra-subject spatio-temporal image registration using
large-scale cine cardiac magnetic resonance (cine-MR) image sequences, and
demonstrate that our method achieves significant improvements over the
state-of-the-art for medical image registration, and produces high-quality
segmentation masks for the regions of interest.
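One common way to obtain the locally smooth, globally discontinuous behaviour described above is to down-weight the deformation-field smoothness penalty across predicted organ boundaries so that the field is allowed to jump there. The function below sketches such a masked penalty under assumed tensor shapes; it is a simplification for illustration rather than the paper's exact strategy:

    # Illustrative masked smoothness penalty: deformation-field gradients are
    # penalised only away from predicted organ boundaries.
    import torch

    def masked_smoothness_loss(flow, seg_prob):
        """flow: (B, 2, H, W) displacement field; seg_prob: (B, 1, H, W) soft mask."""
        du_dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
        du_dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
        # boundary indicator from the segmentation: large where the mask changes
        bx = (seg_prob[:, :, :, 1:] - seg_prob[:, :, :, :-1]).abs()
        by = (seg_prob[:, :, 1:, :] - seg_prob[:, :, :-1, :]).abs()
        # down-weight the penalty across boundaries so the field may be discontinuous there
        loss_x = ((1.0 - bx) * du_dx.pow(2)).mean()
        loss_y = ((1.0 - by) * du_dy.pow(2)).mean()
        return loss_x + loss_y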