A Deep Learning Framework for Unsupervised Affine and Deformable Image Registration
Image registration, the process of aligning two or more images, is the core
technique of many (semi-)automatic medical image analysis tasks. Recent studies
have shown that deep learning methods, notably convolutional neural networks
(ConvNets), can be used for image registration. Thus far training of ConvNets
for registration was supervised using predefined example registrations.
However, obtaining example registrations is not trivial. To circumvent the need
for predefined examples, and thereby to increase convenience of training
ConvNets for image registration, we propose the Deep Learning Image
Registration (DLIR) framework for \textit{unsupervised} affine and deformable
image registration. In the DLIR framework ConvNets are trained for image
registration by exploiting image similarity analogous to conventional
intensity-based image registration. After a ConvNet has been trained with the
DLIR framework, it can be used to register pairs of unseen images in one shot.
We propose flexible ConvNets designs for affine image registration and for
deformable image registration. By stacking multiple of these ConvNets into a
larger architecture, we are able to perform coarse-to-fine image registration.
We show for registration of cardiac cine MRI and registration of chest CT that
performance of the DLIR framework is comparable to conventional image
registration while being several orders of magnitude faster.
Comment: Accepted: Medical Image Analysis - Elsevier
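A minimal sketch of the unsupervised, similarity-driven training idea described above, assuming a toy 2-D setting: a small ConvNet predicts a displacement field, the moving image is warped with it, and an intensity-similarity loss plus a smoothness penalty is minimized. The network size, loss weights, and synthetic data are illustrative assumptions, not the DLIR architecture.

```python
# Illustrative sketch of unsupervised, intensity-driven registration training
# (toy network and loss weights; not the DLIR paper's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallRegNet(nn.Module):
    """Tiny ConvNet that predicts a dense 2-D displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),   # 2 channels: (dx, dy)
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(moving, disp):
    """Warp `moving` with displacement field `disp` via grid_sample."""
    b, _, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Displacements are assumed to be in normalized coordinates here.
    grid = grid + disp.permute(0, 2, 3, 1)
    return F.grid_sample(moving, grid, align_corners=True)

def smoothness(disp):
    """Penalty on the spatial gradients of the displacement field."""
    dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]
    return (dx ** 2).mean() + (dy ** 2).mean()

model = SmallRegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fixed = torch.rand(4, 1, 64, 64)    # stand-in for real image pairs
moving = torch.rand(4, 1, 64, 64)

for step in range(100):
    disp = model(fixed, moving)
    warped = warp(moving, disp)
    # Image-similarity term (MSE here; NCC or MI are common alternatives)
    # plus a regularizer, optimized without any example registrations.
    loss = F.mse_loss(warped, fixed) + 0.1 * smoothness(disp)
    opt.zero_grad()
    loss.backward()
    opt.step()
```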
Validating Dose Uncertainty Estimates Produced by AUTODIRECT: An Automated Program to Evaluate Deformable Image Registration Accuracy.
Deformable image registration (DIR) is a powerful tool for mapping information, such as radiation therapy dose calculations, from one computed tomography image to another. However, DIR is susceptible to mapping errors. Recently, an automated DIR evaluation of confidence tool (AUTODIRECT) was proposed to predict voxel-specific DIR dose mapping errors on a patient-by-patient basis. The purpose of this work is to conduct an extensive analysis of AUTODIRECT to show its effectiveness in estimating dose mapping errors. The proposed AUTODIRECT workflow uses 4 simulated patient deformations (3 B-spline-based deformations and 1 rigid transformation) to predict the uncertainty in a DIR algorithm's performance. This workflow is validated for 2 DIR algorithms (B-spline multipass from Velocity and Plastimatch) with 1 physical and 11 virtual phantoms, which have known ground-truth deformations, and with 3 pairs of real patient lung images, which have several hundred identified landmarks. The true dose mapping error distributions closely followed the Student t distributions predicted by AUTODIRECT in the validation tests: on average, the AUTODIRECT-produced confidence levels of 50%, 68%, and 95% contained 48.8%, 66.3%, and 93.8% and 50.1%, 67.6%, and 93.8% of the actual errors from Velocity and Plastimatch, respectively. Despite the sparsity of landmark points, the observed error distribution from the 3 lung patient data sets also followed the expected error distribution. The dose error distributions from AUTODIRECT also demonstrated good resemblance to the true dose error distributions. AUTODIRECT was also found to produce accurate confidence intervals for the dose-volume histograms of the deformed dose.
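The validation criterion described above (whether the stated confidence levels contain the corresponding fractions of true dose mapping errors) can be sketched as follows; the per-voxel error scales, degrees of freedom, and simulated errors are illustrative assumptions, not AUTODIRECT's actual outputs.

```python
# Sketch of the coverage check described above: for each nominal confidence
# level, count how often the true dose-mapping error falls inside the
# predicted Student-t interval. Per-voxel scale/dof values are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 100_000
pred_scale = rng.uniform(0.5, 2.0, n_voxels)   # predicted error scale (Gy)
pred_dof = 5.0                                  # assumed degrees of freedom
true_error = stats.t.rvs(df=pred_dof, scale=pred_scale, random_state=rng)

for level in (0.50, 0.68, 0.95):
    # Symmetric interval half-width for this confidence level.
    half_width = stats.t.ppf(0.5 + level / 2, df=pred_dof) * pred_scale
    coverage = np.mean(np.abs(true_error) <= half_width)
    print(f"nominal {level:.0%} interval contains {coverage:.1%} of errors")
```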
Image Registration Techniques: A Survey
Image Registration is the process of aligning two or more images of the same
scene with reference to a particular image. The images are captured from
various sensors, at different times, and from multiple viewpoints. Image
registration is therefore important for obtaining a clear picture of how a
scene or object changes over a considerable period of time. It finds
application in the medical sciences, remote sensing, and computer vision.
This paper presents a detailed review of several registration approaches,
classified accordingly, along with their contributions and drawbacks. The
main steps of an image registration procedure are also discussed, and
different performance measures that determine registration quality and
accuracy are presented. The scope for future research is presented as well.
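As one example of the performance measures such a survey covers, the sketch below computes the mean target registration error over corresponding landmarks; the landmark coordinates are made up for illustration.

```python
# Minimal sketch of one common registration quality measure: mean target
# registration error (TRE) over corresponding landmark pairs.
import numpy as np

def mean_tre(landmarks_fixed, landmarks_warped):
    """Mean Euclidean distance between corresponding landmark pairs."""
    diffs = np.asarray(landmarks_fixed) - np.asarray(landmarks_warped)
    return float(np.linalg.norm(diffs, axis=1).mean())

# Made-up landmark positions before/after registration (pixels).
fixed_pts = np.array([[10.0, 20.0], [35.0, 40.0], [60.0, 15.0]])
warped_pts = np.array([[10.5, 19.0], [34.0, 41.5], [61.0, 15.5]])
print(f"mean TRE: {mean_tre(fixed_pts, warped_pts):.2f} pixels")
```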
Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images.
Background: Liver alignment between series/exams is challenged by dynamic morphology or variability in patient positioning or motion. Image registration can improve image interpretation and lesion co-localization. We assessed the performance of a convolutional neural network algorithm to register cross-sectional liver imaging series and compared its performance to manual image registration.
Methods: Three hundred fourteen patients, including internal and external datasets, who underwent gadoxetate disodium-enhanced magnetic resonance imaging for clinical care from 2011 to 2018, were retrospectively selected. Automated registration was applied to all 2,663 within-patient series pairs derived from these datasets. Additionally, 100 within-patient series pairs from the internal dataset were independently manually registered by expert readers. Liver overlap, image correlation, and intra-observation distances for manual versus automated registrations were compared using paired t tests. The influence of patient demographics, imaging characteristics, and liver uptake function was evaluated using univariate and multivariate mixed models.
Results: Compared to manual registration, automated registration produced significantly lower intra-observation distance (p < 0.001) and higher liver overlap and image correlation (p < 0.001). Intra-exam automated registration achieved 0.88 mean liver overlap and 0.44 mean image correlation for the internal dataset and 0.91 and 0.41, respectively, for the external dataset. For inter-exam registration, mean overlap was 0.81 and image correlation 0.41. Older age, female sex, greater inter-series time interval, differing uptake, and greater voxel size differences independently reduced automated registration performance (p ≤ 0.020).
Conclusion: A fully automated algorithm accurately registered the liver within and between examinations, yielding better liver and focal observation co-localization compared to manual registration.
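A minimal sketch of the two evaluation measures named above, liver overlap and image correlation, assuming binary liver masks and co-registered intensity volumes; the synthetic arrays and the Dice-style overlap definition are assumptions, not the study's exact implementation.

```python
# Sketch of the evaluation measures named above: liver overlap (Dice-style)
# between segmentation masks and Pearson correlation between registered
# images. Array contents are synthetic placeholders.
import numpy as np

def liver_overlap(mask_a, mask_b):
    """Dice coefficient between two binary liver masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def image_correlation(img_a, img_b):
    """Pearson correlation between voxel intensities of two volumes."""
    return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])

rng = np.random.default_rng(1)
mask_ref = rng.random((64, 64, 32)) > 0.5   # reference liver mask
mask_reg = rng.random((64, 64, 32)) > 0.5   # registered liver mask
img_ref = rng.random((64, 64, 32))
img_reg = 0.7 * img_ref + 0.3 * rng.random((64, 64, 32))

print(f"liver overlap: {liver_overlap(mask_ref, mask_reg):.2f}")
print(f"image correlation: {image_correlation(img_ref, img_reg):.2f}")
```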
The Influence of Intensity Standardization on Medical Image Registration
Acquisition-to-acquisition signal intensity variations (non-standardness) are
inherent in MR images. Standardization is a post-processing method for
correcting inter-subject intensity variations through transforming all images
from the given image gray scale into a standard gray scale wherein similar
intensities achieve similar tissue meanings. The lack of a standard image
intensity scale in MRI leads to many difficulties in tissue characterizability,
image display, and analysis, including image segmentation. This phenomenon has
been documented well; however, effects of standardization on medical image
registration have not been studied yet. In this paper, we investigate the
influence of intensity standardization in registration tasks with systematic
and analytic evaluations involving clinical MR images. We conducted nearly
20,000 clinical MR image registration experiments and evaluated the quality of
registrations both quantitatively and qualitatively. The evaluations show that
intensity variations between images degrade registration accuracy. The
results imply that the accuracy of image registration not only
depends on spatial and geometric similarity but also on the similarity of the
intensity values for the same tissues in different images.
Comment: SPIE Medical Imaging 2010 conference paper; the complete version of
this paper was published in Elsevier Pattern Recognition Letters, volume 31, 2010.
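Intensity standardization of the kind described above is often implemented as a percentile-based piecewise-linear mapping onto a standard scale (the Nyúl-Udupa scheme is a well-known example). The sketch below is a simplified two-landmark version under that assumption, not necessarily the method used in this paper.

```python
# Simplified sketch of percentile-based intensity standardization: map each
# image's low/high percentile landmarks onto a fixed standard scale via
# linear rescaling. A two-landmark simplification for illustration only.
import numpy as np

def standardize(image, p_low=2.0, p_high=98.0, s_low=0.0, s_high=4095.0):
    """Map the [p_low, p_high] percentiles of `image` onto [s_low, s_high]."""
    lo, hi = np.percentile(image, [p_low, p_high])
    scaled = (image - lo) / (hi - lo) * (s_high - s_low) + s_low
    return np.clip(scaled, s_low, s_high)

# Two acquisitions of the "same" anatomy with different intensity ranges.
rng = np.random.default_rng(2)
tissue = rng.normal(1.0, 0.1, (128, 128))
scan_a = 300.0 * tissue + 50.0
scan_b = 900.0 * tissue - 20.0

std_a, std_b = standardize(scan_a), standardize(scan_b)
# After standardization, the same tissue lands near the same intensity.
print(np.mean(np.abs(std_a - std_b)) < np.mean(np.abs(scan_a - scan_b)))
```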
