A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far, training of ConvNets
for registration has been supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for unsupervised affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNet designs for affine image registration and for
deformable image registration. By stacking multiple of these ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show for registration of cardiac cine MRI and registration of chest CT that
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted: Medical Image Analysis - Elsevier
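The central idea of the DLIR framework, training by image similarity rather than by predefined example registrations, can be illustrated with a normalized cross-correlation (NCC) loss. Below is a minimal NumPy sketch assuming a global (unwindowed) NCC; DLIR itself uses a windowed variant inside a ConvNet training loop, so this is only a caricature of the objective:

```python
import numpy as np

def ncc_loss(fixed, moving):
    """Negative normalized cross-correlation between two images.

    Minimising this value drives a registration network to warp the
    moving image toward the fixed image without any ground-truth
    deformations -- the unsupervised principle behind DLIR-style
    training. Illustrative sketch only.
    """
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    denom = np.sqrt((f ** 2).sum() * (m ** 2).sum()) + 1e-8
    return -(f * m).sum() / denom

img = np.random.rand(64, 64)
loss = ncc_loss(img, img)  # identical images give a loss close to -1
```

In a full pipeline this loss would be computed on the warped moving image produced by the network, and its gradient backpropagated through the spatial transformation.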
Development of registration methods for cardiovascular anatomy and function using advanced 3T MRI, 320-slice CT and PET imaging
Different medical imaging modalities provide complementary anatomical and
functional information. One increasingly important use of such information is in
the clinical management of cardiovascular disease. Multi-modality data is helping
improve diagnostic accuracy and individualize treatment. The Clinical Research
Imaging Centre at the University of Edinburgh has been involved in a number
of cardiovascular clinical trials using longitudinal computed tomography (CT) and
multi-parametric magnetic resonance (MR) imaging. The critical image processing
technique that combines the information from all these different datasets is known
as image registration, which is the topic of this thesis. Image registration, especially
multi-modality and multi-parametric registration, remains a challenging field in
medical image analysis. The new registration methods described in this work were
all developed in response to genuine challenges in on-going clinical studies. These
methods have been evaluated using data from these studies.
In order to gain an insight into the building blocks of image registration methods,
the thesis begins with a comprehensive literature review of state-of-the-art algorithms.
This is followed by a description of the first registration method I developed to help
track inflammation in aortic abdominal aneurysms. It registers multi-modality and
multi-parametric images, with new contrast agents. The registration framework uses a
semi-automatically generated region of interest around the aorta. The aorta is aligned
based on a combination of the centres of the regions of interest and intensity matching.
The method achieved sub-voxel accuracy.
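The ROI-centre part of that alignment reduces to a centroid difference between the two regions of interest; a minimal sketch of this initial translation (the intensity-matching stage described above would then refine it; the function name is an assumption):

```python
import numpy as np

def centroid_shift(fixed_mask, moving_mask):
    """Initial translation from the centres of two regions of interest.

    Both arguments are binary masks of the same dimensionality; the
    returned vector is the shift to apply to the moving image so its
    ROI centre coincides with the fixed one. Illustrative sketch of
    the alignment-by-ROI-centre idea only.
    """
    cf = np.array(np.nonzero(fixed_mask)).mean(axis=1)
    cm = np.array(np.nonzero(moving_mask)).mean(axis=1)
    return cf - cm

fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 1
moving = np.zeros((32, 32)); moving[12:22, 15:25] = 1
shift = centroid_shift(fixed, moving)  # translation for the moving image
```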
The second clinical study involved cardiac data. The first framework failed to
register many of these datasets, because the cardiac data suffers from a common
artefact of magnetic resonance images, namely intensity inhomogeneity. Thus I
developed a new preprocessing technique that is able to correct the artefacts in the
functional data using data from the anatomical scans. The registration framework,
with this preprocessing step and new particle swarm optimizer, achieved significantly
improved registration results on the cardiac data, and was validated quantitatively
using neuroimages from a clinical study of neonates. Although on average
the new framework achieved accurate results, when processing data corrupted
by severe artefacts and noise, premature convergence of the optimizer is still a
common problem. To overcome this, I invented a new optimization method that
achieves more robust convergence by encoding prior knowledge of registration. The
registration results from this new registration-oriented optimizer are more accurate
than other general-purpose particle swarm optimization methods commonly applied
to registration problems.
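For background, the general-purpose particle swarm optimization that the thesis builds on can be sketched in a few lines. This is a generic PSO minimizer with standard inertia and acceleration constants, not the registration-oriented variant with encoded prior knowledge described above:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimiser (generic sketch).

    Each particle remembers its personal best position; the swarm
    shares a global best. Velocities blend an inertia term with
    cognitive (personal-best) and social (global-best) attraction.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

best, val = pso_minimize(lambda x: float(np.sum(x ** 2)), dim=2)
```

In registration, `cost` would evaluate an image-similarity measure for a candidate set of transformation parameters; the premature-convergence problem the thesis addresses arises when all particles collapse onto an early local optimum of that cost.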
In summary, this thesis describes a series of novel developments to an image
registration framework, aimed to improve accuracy, robustness and speed. The
resulting registration framework was applied to, and validated by, different types of
images taken from several ongoing clinical trials. In the future, this framework could
be extended to include more diverse transformation models, aided by new machine
learning techniques. It may also be applied to the registration of other types and
modalities of imaging data.
Informative sample generation using class aware generative adversarial networks for classification of chest Xrays
Training robust deep learning (DL) systems for disease detection from medical
images is challenging due to limited images covering different disease types
and severity. The problem is especially acute where there is severe class
imbalance. We propose an active learning (AL) framework to select the most
informative samples for training our model using a Bayesian neural network.
Informative samples are then used within a novel class-aware generative
adversarial network (CAGAN) to generate realistic chest X-ray images for data
augmentation by transferring characteristics from one class label to another.
Experiments show our proposed AL framework is able to achieve state-of-the-art
performance using only a fraction of the full dataset, thus saving significant
time and effort over conventional methods.
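One common way to realise Bayesian "most informative sample" selection is to rank candidates by predictive entropy over Monte-Carlo forward passes of the network. The NumPy sketch below is a hedged illustration of that pattern; the acquisition function, array shapes, and function name are assumptions, not the paper's exact criterion:

```python
import numpy as np

def select_informative(mc_probs, k):
    """Return indices of the k most uncertain samples.

    mc_probs: array of shape (T, N, C) holding softmax outputs from
    T stochastic forward passes over N samples with C classes.
    Uncertainty is measured as the entropy of the mean prediction,
    a standard Bayesian acquisition proxy (illustrative sketch).
    """
    mean_p = mc_probs.mean(axis=0)                        # (N, C)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(1)   # (N,)
    return np.argsort(entropy)[::-1][:k]

# toy example: sample 0 is confidently classified, sample 1 is not
mc = np.array([[[0.95, 0.05], [0.55, 0.45]],
               [[0.90, 0.10], [0.45, 0.55]]])
picked = select_informative(mc, k=1)
```

Samples ranked this way would then be labelled and fed to the generative augmentation stage.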
Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach
Deep learning approaches have achieved state-of-the-art performance in
cardiac magnetic resonance (CMR) image segmentation. However, most approaches
have focused on learning image intensity features for segmentation, whereas the
incorporation of anatomical shape priors has received less attention. In this
paper, we combine a multi-task deep learning approach with atlas propagation to
develop a shape-constrained bi-ventricular segmentation pipeline for short-axis
CMR volumetric images. The pipeline first employs a fully convolutional network
(FCN) that learns segmentation and landmark localisation tasks simultaneously.
The architecture of the proposed FCN uses a 2.5D representation, thus combining
the computational advantage of 2D FCNs and the capability of
addressing 3D spatial consistency without compromising segmentation accuracy.
Moreover, the refinement step is designed to explicitly enforce a shape
constraint and improve segmentation quality. This step is effective for
overcoming image artefacts (e.g. due to different breath-hold positions and
large slice thickness), which preclude the creation of anatomically meaningful
3D cardiac shapes. The proposed pipeline is fully automated, due to the network's
ability to infer landmarks, which are then used downstream in the pipeline to
initialise atlas propagation. We validate the pipeline on 1831 healthy subjects
and 649 subjects with pulmonary hypertension. Extensive numerical experiments
on the two datasets demonstrate that our proposed method is robust and capable
of producing accurate, high-resolution and anatomically smooth bi-ventricular
3D models, despite the artefacts in input CMR volumes.
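The simultaneous segmentation-and-landmark training can be pictured as a weighted sum of two per-task losses. The toy NumPy sketch below uses a cross-entropy head for segmentation and a mean-squared-error head for landmarks; the specific pairing and the weight alpha are assumptions, not the paper's exact formulation:

```python
import numpy as np

def multitask_loss(seg_logits, seg_target, lm_pred, lm_target, alpha=0.5):
    """Joint objective for a network with two heads (illustrative).

    seg_logits: (N, C) per-pixel class scores; seg_target: (N,) labels.
    lm_pred, lm_target: (L, 2) landmark coordinates.
    """
    # numerically stable log-softmax cross-entropy (segmentation head)
    z = seg_logits - seg_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(seg_target)), seg_target].mean()
    # mean-squared error (landmark head)
    mse = np.mean((lm_pred - lm_target) ** 2)
    return ce + alpha * mse

logits = np.array([[10.0, 0.0], [0.0, 10.0]])
labels = np.array([0, 1])
lm = np.array([[1.0, 2.0]])
loss = multitask_loss(logits, labels, lm, lm)  # near zero for perfect heads
```

Sharing a backbone under such a summed loss is what lets the inferred landmarks initialise the downstream atlas propagation for free.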
GridNet with automatic shape prior registration for automatic MRI cardiac segmentation
In this paper, we propose a fully automatic MRI cardiac segmentation method
based on a novel deep convolutional neural network (CNN) designed for the 2017
ACDC MICCAI challenge. The novelty of our network comes with its embedded shape
prior and its loss function tailored to the cardiac anatomy. Our model includes
a cardiac center-of-mass regression module which allows for an automatic shape
prior registration. Also, since our method processes raw MR images without any
manual preprocessing and/or image cropping, our CNN learns both high-level
features (useful to distinguish the heart from other organs with a similar
shape) and low-level features (useful to get accurate segmentation results).
Those features are learned with a multi-resolution conv-deconv "grid"
architecture which can be seen as an extension of the U-Net. Experimental
results reveal that our method can segment the left and right ventricles as
well as the myocardium from a 3D MRI cardiac volume in 0.4 seconds with an
average Dice coefficient of 0.90 and an average Hausdorff distance of 10.4 mm.
Comment: 8 pages, 1 table, 2 figures
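For reference, the Dice coefficient reported by segmentation papers such as this one measures the volume overlap between a predicted and a ground-truth binary mask (1.0 means perfect overlap). A minimal NumPy version:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

m = np.zeros((8, 8), dtype=bool)
m[2:6, 2:6] = True
score = dice(m, m)  # identical masks give a score of 1.0
```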
Segmentation of the left ventricle of the heart in 3-D+t MRI data using an optimized nonrigid temporal model
Modern medical imaging modalities provide large amounts of information in both the spatial and temporal domains, and the incorporation of this information in a coherent algorithmic framework is a significant challenge. In this paper, we present a novel and intuitive approach to combine 3-D spatial and temporal (3-D + time) magnetic resonance imaging (MRI) data in an integrated segmentation algorithm to extract the myocardium of the left ventricle. A novel level-set segmentation process is developed that simultaneously delineates and tracks the boundaries of the left ventricle muscle. By encoding prior knowledge about cardiac temporal evolution in a parametric framework, an expectation-maximization algorithm optimally tracks the myocardial deformation over the cardiac cycle. The expectation step deforms the level-set function while the maximization step updates the prior temporal model parameters to perform the segmentation in a nonrigid sense.
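The expectation-maximization alternation described in the abstract can be caricatured on a toy problem: model the ventricle boundary as a circle whose radius follows a cosine over the cardiac cycle, then alternate between re-estimating per-frame radii given the model (the E-step analogue) and refitting the model parameters by least squares (the M-step analogue). This is purely illustrative; the paper evolves a full level-set surface, not a circle, and the cosine model is an assumption:

```python
import numpy as np

def fit_temporal_model(radii, phases, iters=20):
    """Toy EM-style alternation for a periodic temporal model.

    Assumes the boundary radius follows r(t) = a + b*cos(t) over the
    cycle. E-step analogue: blend noisy observations with the current
    model prediction. M-step analogue: least-squares refit of (a, b).
    """
    a, b = radii.mean(), 0.0
    for _ in range(iters):
        model = a + b * np.cos(phases)
        est = 0.5 * radii + 0.5 * model          # E-step analogue
        X = np.column_stack([np.ones_like(phases), np.cos(phases)])
        a, b = np.linalg.lstsq(X, est, rcond=None)[0]  # M-step analogue
    return a, b

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 50)
obs = 10 + 2 * np.cos(t) + rng.normal(0, 0.1, t.size)
a, b = fit_temporal_model(obs, t)  # recovers a near 10, b near 2
```

The point of the alternation, here as in the paper, is that the temporal prior regularises each frame's boundary estimate while the boundary estimates in turn refine the prior's parameters.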