
    An Unsupervised Learning Model for Deformable Medical Image Registration

    We present a fast learning-based algorithm for deformable, pairwise 3D medical image registration. Current registration methods optimize an objective function independently for each pair of images, which can be time-consuming for large data. We define registration as a parametric function, and optimize its parameters given a set of images from a collection of interest. Given a new pair of scans, we can quickly compute a registration field by directly evaluating the function using the learned parameters. We model this function using a convolutional neural network (CNN), and use a spatial transform layer to reconstruct one image from another while imposing smoothness constraints on the registration field. The proposed method does not require supervised information such as ground truth registration fields or anatomical landmarks. We demonstrate registration accuracy comparable to state-of-the-art 3D image registration, while operating orders of magnitude faster in practice. Our method promises to significantly speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is available at https://github.com/balakg/voxelmorph. (Comment: 9 pages, in CVPR 2018.)
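    The training objective described above (a differentiable spatial-transform warp of the moving image, an image-similarity term, and a smoothness penalty on the predicted registration field) can be sketched roughly as follows. This is a minimal 2D PyTorch illustration, not the authors' released implementation (see the linked repository for that): the MSE similarity term, the weight `lam`, and the helper names are assumptions chosen for brevity, and the paper itself works with 3D volumes.

```python
import torch
import torch.nn.functional as F

def spatial_transform(moving, flow):
    """Warp `moving` (N, C, H, W) with a dense displacement field `flow` (N, 2, H, W)
    via differentiable resampling (the spatial transform layer in the abstract)."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32, device=moving.device),
        torch.arange(w, dtype=torch.float32, device=moving.device),
        indexing="ij",
    )
    # Displaced sampling locations, normalized to [-1, 1] as grid_sample expects.
    nx = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    ny = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((nx, ny), dim=-1)  # (N, H, W, 2), order (x, y)
    return F.grid_sample(moving, grid, align_corners=True)

def smoothness(flow):
    """Gradient penalty that discourages non-smooth registration fields."""
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    return dy.pow(2).mean() + dx.pow(2).mean()

def unsupervised_loss(fixed, moving, flow, lam=0.01):
    """Image similarity (MSE as a stand-in) plus smoothness regularization.
    `flow` would be predicted by the registration CNN from (fixed, moving);
    no ground-truth registration fields are needed anywhere in this loss."""
    warped = spatial_transform(moving, flow)
    return F.mse_loss(warped, fixed) + lam * smoothness(flow)
```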

    Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since the auto-encoder is trained in an unsupervised manner, no ground-truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed framework, image registration experiments were also conducted on 7.0-tesla brain MR images. In all experiments, the new image registration framework consistently produced more accurate registration results than the state-of-the-art methods.
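    To make the feature-learning step concrete, below is a rough sketch of a convolutional auto-encoder trained to reconstruct unlabeled 3D image patches, whose bottleneck code would serve as the learned patch descriptor for correspondence detection. The patch size (16³), layer widths, and end-to-end training (rather than the layer-wise training of a true stacked auto-encoder) are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Small convolutional auto-encoder for 16x16x16 patches; the encoder output
    is the compact learned feature, the decoder exists only to drive training."""
    def __init__(self, channels=1, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 16, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),        # 8 -> 4
            nn.ReLU(inplace=True),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 32 * 4 * 4 * 4),
            nn.Unflatten(1, (32, 4, 4, 4)),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),        # 4 -> 8
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, channels, kernel_size=4, stride=2, padding=1),  # 8 -> 16
        )

    def forward(self, patch):
        code = self.encoder(patch)   # compact descriptor used for correspondence matching
        recon = self.decoder(code)   # reconstruction target for unsupervised training
        return code, recon

def train_step(model, patches, optimizer):
    """One unsupervised step: minimize reconstruction error on unlabeled patches
    of shape (N, 1, 16, 16, 16) sampled from the images to be registered."""
    code, recon = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```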

    Registration and Segmentation of Brain MR Images from Elderly Individuals

    Quantitative analysis of MRI structural and functional images is a fundamental component in the assessment of brain anatomical abnormalities, in mapping functional activation onto human anatomy, in longitudinal evaluation of disease progression, and in computer-assisted neurosurgery or surgical planning. Image registration and segmentation are central to analyzing structural and functional MR brain images. However, due to increased variability in brain morphology and age-related atrophy, traditional methods for image registration and segmentation are not suitable for analyzing MR brain images from elderly individuals. The overall goal of this dissertation is to develop algorithms that improve registration and segmentation accuracy in the geriatric population. The specific aims of this work include 1) to implement a fully deformable registration pipeline that allows a higher degree of spatial deformation and produces a more accurate deformation field, 2) to propose and validate an optimum template selection method for atlas-based segmentation, 3) to propose and validate a multi-template strategy for image normalization, which characterizes brain structural variations in the elderly, 4) to develop an automated segmentation and localization method to assess white matter hyperintensities (WMH) in the elderly population, and finally 5) to study default-mode network (DMN) connectivity and white matter hyperintensity in late-life depression (LLD) with the developed registration and segmentation methods. Through a series of experiments, we have shown that the deformable registration pipeline and the template selection strategies lead to improved accuracy in brain MR image registration and segmentation, and that the automated WMH segmentation and localization method provides more specific and more accurate information about the volume and spatial distribution of WMH than traditional visual grading methods. Using the developed methods, our clinical study provides evidence for altered DMN connectivity in LLD. The correlation between WMH volume and DMN connectivity emphasizes the role of vascular changes in the etiopathogenesis of LLD.
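    The abstract does not spell out the criterion behind the optimum template selection of aim 2, but one common formulation is to pick, from a library of candidate templates, the one most similar to the subject after an initial affine alignment. The NumPy sketch below uses global normalized cross-correlation purely as an illustrative similarity measure; the dissertation's actual criterion and alignment step may differ.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Global NCC between two intensity volumes on a common grid."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def select_template(subject, templates):
    """Return the index of the candidate template most similar to the subject.
    `subject` and each entry of `templates` are NumPy arrays already brought onto
    a common grid (e.g. by an affine pre-alignment, not shown here)."""
    scores = [normalized_cross_correlation(subject, t) for t in templates]
    return int(np.argmax(scores))
```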