A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far training of ConvNets
for registration was supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for \textit{unsupervised} affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNet designs for affine image registration and for
deformable image registration. By stacking multiple such ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show for registration of cardiac cine MRI and registration of chest CT that
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted at Medical Image Analysis (Elsevier)
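The DLIR framework trains ConvNets by exploiting image similarity rather than predefined example registrations. As a minimal illustration of such an intensity-based measure, here is a pure-Python sketch of normalized cross-correlation (NCC); the excerpt does not name the specific metric DLIR optimises, so NCC is used here only as a representative stand-in:

```python
import math

def ncc(fixed, moving):
    """Normalized cross-correlation between two equally sized images,
    given as flat lists of intensities. Returns a value in [-1, 1];
    1 means the intensities are perfectly linearly related."""
    n = len(fixed)
    mf = sum(fixed) / n
    mm = sum(moving) / n
    num = sum((f - mf) * (m - mm) for f, m in zip(fixed, moving))
    den = math.sqrt(sum((f - mf) ** 2 for f in fixed) *
                    sum((m - mm) ** 2 for m in moving))
    return num / den if den else 0.0

# A perfectly aligned pair scores 1; an intensity-scaled copy also scores 1,
# which is why correlation-style measures tolerate global intensity changes
# better than a plain sum-of-squared-differences.
fixed = [0.0, 1.0, 2.0, 3.0]
print(round(ncc(fixed, fixed), 6))                  # 1.0
print(round(ncc(fixed, [0.0, 2.0, 4.0, 6.0]), 6))   # 1.0
```

In an unsupervised setting such a similarity term (negated) serves directly as the training loss, which is what lets the framework dispense with example registrations.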
Deep learning-based affine and deformable 3D medical image registration
In medical image registration, medical scans are transformed to align their image content. Traditionally, image registration is performed manually by clinicians or using optimization-based algorithms, but in the past few years, deep learning has been successfully applied to the problem. In this work, deep learning image registration (DLIR) methods were compared on the task of aligning inter- and intra-patient male pelvic full field-of-view 3D Computed Tomography (CT) scans. The multistage registration pipeline used consisted of a cascade of an affine (global) registration and a deformable (local) registration.
For the affine registration step, a 3D ResNet model was used. The two deformable methods investigated were VoxelMorph, the most commonly used DLIR framework, and LapIRN, a recent multi-resolution DLIR method. The two registration steps were trained separately: for the affine step, both supervised and unsupervised learning methods were employed; for the deformable step, unsupervised learning and weakly supervised learning using masks of regions of interest (ROIs) were used. Training was done on synthetically augmented CT scans. The results were compared to those obtained with two top-performing iterative image registration frameworks. The evaluation was based on ROI similarity of the registered scans, as well as on diffeomorphic properties and runtime of the registration.
Overall, the DLIR methods were not able to outperform the baseline iterative methods. The affine step followed by deformable registration with LapIRN performed similarly to or slightly worse than the baselines, outperforming them on 7 out of 12 ROIs on the intra-patient scans. The inter-patient registration task proved challenging, with none of the methods performing consistently well. For both tasks, however, the DLIR methods achieved a very significant speedup compared to the baseline methods.
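The cascade described above, a global affine stage whose output is refined by a local dense displacement field, can be sketched in one dimension. The composition below is a hypothetical illustration of the general idea, not the paper's implementation:

```python
def affine_1d(x, scale, shift):
    # Global (affine) stage: maps a fixed-image coordinate into moving space.
    return scale * x + shift

def warp_1d(moving, scale, shift, ddf):
    """Resample a 1-D 'moving' signal at coordinates produced by an affine
    transform composed with a dense displacement field (one offset per
    output sample). Linear interpolation with edge clamping."""
    out = []
    for i in range(len(ddf)):
        # Local (deformable) refinement added on top of the affine mapping.
        x = affine_1d(i, scale, shift) + ddf[i]
        x = max(0.0, min(len(moving) - 1.0, x))
        lo = int(x)
        hi = min(lo + 1, len(moving) - 1)
        t = x - lo
        out.append((1 - t) * moving[lo] + t * moving[hi])
    return out

moving = [0.0, 10.0, 20.0, 30.0]
# Identity affine plus a constant +1-sample displacement shifts the signal.
print(warp_1d(moving, 1.0, 0.0, [1.0, 1.0, 1.0, 1.0]))  # [10.0, 20.0, 30.0, 30.0]
```

Running the stages in a fixed order, global first, local second, is what makes the pipeline coarse-to-fine: the deformable network only needs to model residual motion left over after the affine stage.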
Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration
We propose an unsupervised deep learning method for atlas based registration
to achieve segmentation and spatial alignment of the embryonic brain in a
single framework. Our approach consists of two sequential networks with a
specifically designed loss function to address the challenges in 3D first
trimester ultrasound. The first part learns the affine transformation and the
second part learns the voxelwise nonrigid deformation between the target image
and the atlas. We trained this network end-to-end and validated it against a
ground truth on synthetic datasets designed to resemble the challenges present
in 3D first trimester ultrasound. The method was tested on a dataset of human
embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed
alignment of the brain in some cases and gave insight into the open challenges of
the proposed method. We conclude that our method is a promising approach
towards fully automated spatial alignment and segmentation of embryonic brains
in 3D ultrasound.
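The abstract mentions a specifically designed loss function but the excerpt gives no details. A common ingredient in unsupervised objectives for the voxelwise nonrigid stage is a smoothness penalty on the displacement field; the sketch below shows that assumed, representative form in pure Python, not the authors' actual loss:

```python
def smoothness_penalty(ddf):
    """Mean squared finite difference of a 1-D displacement field: a
    diffusion-style regulariser that discourages rough, physically
    implausible voxelwise deformations."""
    diffs = [(ddf[i + 1] - ddf[i]) ** 2 for i in range(len(ddf) - 1)]
    return sum(diffs) / len(diffs)

def registration_loss(similarity, ddf, weight=0.1):
    # Unsupervised objective: maximise image similarity while keeping the
    # deformation smooth; 'weight' trades one off against the other.
    return -similarity + weight * smoothness_penalty(ddf)

print(smoothness_penalty([1.0, 1.0, 1.0]))  # 0.0 -- constant field costs nothing
print(smoothness_penalty([0.0, 2.0, 0.0]))  # 4.0 -- oscillation is penalised
```

The affine stage typically needs no such regulariser, since a global transform cannot fold or tear the image; that asymmetry is one reason the two networks are trained with different loss terms.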
Label-driven weakly-supervised learning for multimodal deformable image registration
Spatially aligning medical images from different modalities remains a
challenging task, especially for intraoperative applications that require fast
and robust algorithms. We propose a weakly-supervised, label-driven formulation
for learning 3D voxel correspondence from higher-level label correspondence,
thereby bypassing classical intensity-based image similarity measures. During
training, a convolutional neural network is optimised by outputting a dense
displacement field (DDF) that warps a set of available anatomical labels from
the moving image to match their corresponding counterparts in the fixed image.
These label pairs, including solid organs, ducts, vessels, point landmarks and
other ad hoc structures, are only required at training time and can be
spatially aligned by minimising a cross-entropy function of the warped moving
label and the fixed label. During inference, the trained network takes a new
image pair to predict an optimal DDF, resulting in a fully-automatic,
label-free, real-time and deformable registration. For interventional
applications where large global transformations prevail, we also propose a
neural network architecture to jointly optimise the global and local
displacements. Experimental results are presented based on cross-validated
registrations of 111 pairs of T2-weighted magnetic resonance images and 3D
transrectal ultrasound images from prostate cancer patients with a total of
over 4000 anatomical labels, yielding a median target registration error of 4.2
mm on landmark centroids and a median Dice of 0.88 on prostate glands.
Comment: Accepted to ISBI 201
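The two figures quoted above, a median target registration error (TRE) of 4.2 mm on landmark centroids and a median Dice of 0.88 on prostate glands, correspond to standard evaluation metrics. The helpers below are an illustrative sketch of those metrics, not the paper's evaluation code:

```python
import math

def dice(labels_a, labels_b):
    """Dice overlap of two binary label masks, each given as a set of
    voxel indices. 1.0 means identical masks, 0.0 means no overlap."""
    inter = len(labels_a & labels_b)
    return 2.0 * inter / (len(labels_a) + len(labels_b))

def tre(centroid_a, centroid_b):
    """Target registration error: Euclidean distance between corresponding
    landmark centroids after registration (same units as the input, e.g. mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(centroid_a, centroid_b)))

a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(round(dice(a, b), 3))                     # 0.75
print(tre((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))    # 5.0
```

Dice measures region overlap while TRE measures point-wise accuracy, which is why multimodal registration studies such as this one typically report both.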