
    Parallel Computation of Nonrigid Image Registration

    Automatic intensity-based nonrigid image registration has significant impact in medical applications such as multimodality image fusion, serial comparison for monitoring disease progression or regression, and minimally invasive image-guided interventions. However, due to the memory- and compute-intensive nature of the operations, intensity-based image registration has remained too slow to be practical for clinical adoption, with its use limited primarily to use as a pre-operative tool. Efficient registration methods can open new possibilities for improved, interactive intraoperative tools and capabilities. In this thesis, we propose an efficient parallel implementation of intensity-based three-dimensional nonrigid image registration on a commodity graphics processing unit. Optimization techniques are developed to accelerate the compute-intensive mutual information computation. The study is performed on the hierarchical volume subdivision-based algorithm, which is inherently faster than other nonrigid registration algorithms and structurally well suited to data-parallel computation platforms. The proposed implementation achieves more than a 50-fold runtime improvement over a standard CPU implementation. The execution time of nonrigid image registration is reduced from hours to minutes while retaining the same level of registration accuracy.
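
    As background to the metric being accelerated, the sketch below shows a plain NumPy (CPU) computation of histogram-based mutual information between two volumes; the bin count and the function itself are illustrative choices, not the thesis's GPU implementation.

```python
import numpy as np

def mutual_information(fixed, moving, bins=64):
    """Histogram-based mutual information between two image volumes.

    A CPU reference sketch in plain NumPy; the GPU version parallelizes
    the joint-histogram accumulation and the entropy sums.
    """
    # Joint histogram of corresponding voxel intensities.
    joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_xy = joint_hist / joint_hist.sum()   # joint probability
    p_x = p_xy.sum(axis=1)                 # marginal of the fixed image
    p_y = p_xy.sum(axis=0)                 # marginal of the moving image

    # Sum only over non-zero joint entries to avoid log(0).
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] /
                                          (p_x[:, None] * p_y[None, :])[nz])))
```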

    Development of registration methods for cardiovascular anatomy and function using advanced 3T MRI, 320-slice CT and PET imaging

    Different medical imaging modalities provide complementary anatomical and functional information. One increasingly important use of such information is in the clinical management of cardiovascular disease. Multi-modality data are helping to improve diagnostic accuracy and individualize treatment. The Clinical Research Imaging Centre at the University of Edinburgh has been involved in a number of cardiovascular clinical trials using longitudinal computed tomography (CT) and multi-parametric magnetic resonance (MR) imaging. The critical image processing technique that combines the information from all these different datasets is image registration, which is the topic of this thesis. Image registration, especially multi-modality and multi-parametric registration, remains a challenging field in medical image analysis. The new registration methods described in this work were all developed in response to genuine challenges in ongoing clinical studies and have been evaluated using data from those studies. To gain insight into the building blocks of image registration methods, the thesis begins with a comprehensive literature review of state-of-the-art algorithms. This is followed by a description of the first registration method I developed, which helps track inflammation in abdominal aortic aneurysms. It registers multi-modality and multi-parametric images acquired with new contrast agents. The registration framework uses a semi-automatically generated region of interest around the aorta, which is aligned using a combination of the centres of the regions of interest and intensity matching. The method achieved sub-voxel accuracy. The second clinical study involved cardiac data. The first framework failed to register many of these datasets because the cardiac data suffer from a common artefact of magnetic resonance images, namely intensity inhomogeneity. I therefore developed a new preprocessing technique that corrects the artefacts in the functional data using data from the anatomical scans. The registration framework, with this preprocessing step and a new particle swarm optimizer, achieved significantly improved registration results on the cardiac data and was validated quantitatively using neuroimages from a clinical study of neonates. Although the new framework achieved accurate results on average, premature convergence of the optimizer remains a common problem when processing data corrupted by severe artefacts and noise. To overcome this, I invented a new optimization method that achieves more robust convergence by encoding prior knowledge of registration. The registration results from this registration-oriented optimizer are more accurate than those of the general-purpose particle swarm optimization methods commonly applied to registration problems. In summary, this thesis describes a series of novel developments to an image registration framework, aimed at improving accuracy, robustness and speed. The resulting framework was applied to, and validated on, different types of images taken from several ongoing clinical trials. In the future, this framework could be extended to include more diverse transformation models, aided by new machine learning techniques, and could also be applied to the registration of other types and modalities of imaging data.
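
    For readers unfamiliar with particle swarm optimization, the sketch below is a minimal, generic PSO loop over box-bounded transform parameters; the cost function, bounds and hyperparameters are hypothetical placeholders, and this is not the registration-oriented optimizer developed in the thesis.

```python
import numpy as np

def pso_minimize(cost, lower, upper, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box-bounded parameters.

    `cost` maps a parameter vector (e.g. transform parameters) to a
    registration cost such as negative mutual information.
    """
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = len(lower)
    pos = rng.uniform(lower, upper, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()
    best_val = np.array([cost(p) for p in pos])
    g_idx = best_val.argmin()
    g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_pos - pos)
        pos = np.clip(pos + vel, lower, upper)
        vals = np.array([cost(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        if best_val.min() < g_val:
            g_idx = best_val.argmin()
            g_pos, g_val = best_pos[g_idx].copy(), best_val[g_idx]
    return g_pos, g_val
```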

    Advances in Biomedical Applications and Assessment of Ultrasound Nonrigid Image Registration.

    Image volume based registration (IVBaR) is the process of determining a one-to-one transformation between points in two images that quantitatively relates the information in one image to that in the other. IVBaR is done primarily to spatially align the two images in the same coordinate system in order to allow better comparison and visualization of changes. The potential use of IVBaR has been explored in three different contexts. In a preliminary study on identification of a biometric from internal finger structure, a semi-automated IVBaR-based approach provided a sensitivity and specificity of 0.93 and 1.00, respectively. Visual matching of all image pairs by four readers yielded a 96% successful match rate. IVBaR could potentially be useful for routine breast cancer screening and diagnosis. Nearly whole-breast ultrasound (US) scanning with mammographic-style compression and successful IVBaR were achieved. The image volumes were registered off-line with a mutual information cost function and global interpolation based on non-rigid thin-plate spline deformation. This Institutional Review Board-approved study was conducted on 10 patients undergoing chemotherapy and 14 patients with a suspicious/unknown mass scheduled to undergo biopsy. IVBaR was successful, with a mean registration error (MRE) of 5.2±2 mm, in 12 of 17 ABU image pairs collected before, during or after 115±14 days of chemotherapy. Semi-automated tumor volume estimation was performed on registered image volumes, giving 86±8% mean accuracy compared with radiologist hand-segmented tumor volumes in 7 cases, with a correlation coefficient of 0.99 (p<0.001). In a reader study in which 3 radiologists were asked to mark the tumor boundary, IVBaR yielded a significant reduction in the time taken (p<0.03) in 6 cases. Three new methods were developed for independent validation of IVBaR based on Doppler US signals. Non-rigid registration tools were also applied to interventional guidance of medical tools used in minimally invasive surgery. The mean positional error in a CT scanner environment improved from 3.9±1.5 mm to 1.0±0.3 mm (p<0.0002). These results show that 3D image volumes and data can be spatially aligned using non-rigid registration for comparison as well as quantification of changes. Ph.D. Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64802/1/gnarayan_1.pd
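
    The thin-plate spline deformation used for the global interpolation can be illustrated with SciPy's RBF interpolator: a smooth displacement field is interpolated from a sparse set of matched control points. The control points below are hypothetical and the example is 2D, whereas the study registers 3D volumes driven by a mutual information cost.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched control points (source -> target) in 2D, in (x, y).
src = np.array([[10., 10.], [10., 90.], [90., 10.], [90., 90.], [50., 50.]])
dst = np.array([[12., 11.], [9., 88.], [91., 12.], [88., 92.], [55., 47.]])

# Thin-plate spline interpolation of the displacement field.
tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')

# Evaluate the dense deformation on a 100x100 image grid.
yy, xx = np.mgrid[0:100, 0:100]
grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
displacement = tps(grid).reshape(100, 100, 2)

# Warped sampling coordinates; after reordering to (row, col) these can be
# passed to scipy.ndimage.map_coordinates to resample the moving image.
warped_coords = grid.reshape(100, 100, 2) + displacement
```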

    A Heterogeneous and Multi-Range Soft-Tissue Deformation Model for Applications in Adaptive Radiotherapy

    During fractionated radiotherapy, anatomical changes result in uncertainties in the applied dose distribution. With increasing steepness of the applied dose gradients, the relevance of patient deformations increases. Especially in proton therapy, small anatomical changes on the order of millimeters can result in large range uncertainties and therefore in substantial deviations from the planned dose. To quantify the anatomical changes, deformation models are required. With upcoming MR guidance, soft-tissue deformations gain visibility, but so far only a few soft-tissue models meet the requirements of high-precision radiotherapy. Most state-of-the-art models either lack anatomical detail or exhibit long computation times. In this work, a fast soft-tissue deformation model is developed which is capable of considering the tissue properties of heterogeneous tissue. The model is based on the chainmail (CM) concept, which is improved by three basic features. For the first time, rotational degrees of freedom are introduced into the CM concept to improve the characteristic deformation behavior. A novel concept for handling multiple deformation initiators is developed to cope with global deformation input. Finally, a concept for handling various shapes of deformation input is proposed to provide high flexibility in the design of deformation input. To demonstrate the model's flexibility, it was coupled to a kinematic skeleton model of the head and neck region, which provides anatomically correct deformation input for the bones. For exemplary patient CTs, the combined model was shown to be capable of generating artificially deformed CT images with realistic appearance. This was achieved for small-range deformations on the order of interfractional deformations, as well as for large-range deformations such as an arms-up to arms-down deformation, as can occur between images of different modalities. The deformation results showed a strong improvement in biofidelity compared with the original chainmail concept, as well as with clinically used image-based deformation methods. The computation times for the model are on the order of 30 min for single-threaded calculations; with simple code parallelization, times on the order of 1 min can be achieved. Applications that require realistic forward deformations of CT images will benefit from the improved biofidelity of the developed model. Envisioned applications are the generation of plan libraries and virtual phantoms, as well as data augmentation for deep learning approaches. Due to the low computation times, the model is also well suited for image registration applications. In this context, it will contribute to an improved calculation of accumulated dose, as required in high-precision adaptive radiotherapy.
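
    The underlying chainmail idea propagates a displacement from element to element only as far as needed to keep inter-element gaps within tissue-dependent compression and stretch limits. The 1D sketch below illustrates that propagation rule only; it omits the rotational degrees of freedom, the multi-initiator handling and the input-shape handling introduced in this work, and its gap limits are hypothetical.

```python
from collections import deque

def chainmail_1d(positions, moved_idx, new_pos, min_gap, max_gap):
    """Propagate a single displacement through a 1D chain of elements.

    The gap between elements k-1 and k must stay between min_gap[k]
    (maximum compression) and max_gap[k] (maximum stretch); per-link
    limits model heterogeneous tissue.
    """
    pos = list(positions)
    pos[moved_idx] = new_pos
    queue = deque([moved_idx])
    while queue:
        i = queue.popleft()
        # Right neighbour: enforce gap limits against element i.
        if i + 1 < len(pos):
            gap = pos[i + 1] - pos[i]
            if gap < min_gap[i + 1]:
                pos[i + 1] = pos[i] + min_gap[i + 1]
                queue.append(i + 1)
            elif gap > max_gap[i + 1]:
                pos[i + 1] = pos[i] + max_gap[i + 1]
                queue.append(i + 1)
        # Left neighbour: same limits mirrored.
        if i - 1 >= 0:
            gap = pos[i] - pos[i - 1]
            if gap < min_gap[i]:
                pos[i - 1] = pos[i] - min_gap[i]
                queue.append(i - 1)
            elif gap > max_gap[i]:
                pos[i - 1] = pos[i] - max_gap[i]
                queue.append(i - 1)
    return pos

# Stiff links (narrow gap range) transmit the displacement further than
# soft ones; here the motion of element 0 dies out in the softer region.
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
min_gap = [0.0, 0.8, 0.8, 0.5, 0.5]
max_gap = [0.0, 1.2, 1.2, 1.5, 1.5]
print(chainmail_1d(positions, 0, 0.6, min_gap, max_gap))
```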

    Registration of histology and magnetic resonance imaging of the brain

    Combining histology and non-invasive imaging has long attracted the attention of the medical imaging community, due to its potential to correlate macroscopic information with the underlying microscopic properties of tissues. Histology is an invasive procedure that disrupts the spatial arrangement of the tissue components but enables visualisation and characterisation at a cellular level. In contrast, macroscopic imaging allows non-invasive acquisition of volumetric information but does not provide any microscopic detail. Through the establishment of spatial correspondences obtained via image registration, it is possible to compare micro- and macroscopic information and to recover the original histological arrangement in three dimensions. In this thesis, I present: (i) a survey of the literature on methods for histology reconstruction with and without the help of 3D medical imaging; (ii) a graph-theoretic method for histology volume reconstruction from sets of 2D sections, without external information; and (iii) a method for multimodal 2D linear registration between histology and MRI based on partial matching of shape-informative boundaries.
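
    One way a graph-theoretic formulation can help such a reconstruction is by choosing, for every section, the most similar already-aligned neighbour to register to, rather than strictly consecutive slices. The sketch below builds a dissimilarity graph over the sections and roots its minimum spanning tree at a reference section; it is purely illustrative and not the specific graph method of the thesis, and the similarity measure is an assumed choice.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def ncc(a, b):
    """Normalized cross-correlation between two equally sized sections."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def registration_order(sections, reference=0):
    """Decide which neighbour each section should be registered to.

    Builds a fully connected graph weighted by dissimilarity (1 - NCC),
    takes its minimum spanning tree, and returns a traversal order plus
    the parent of each section in the tree rooted at `reference`.
    """
    n = len(sections)
    dissim = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dissim[i, j] = dissim[j, i] = 1.0 - ncc(sections[i], sections[j])
    tree = minimum_spanning_tree(dissim)
    order, parents = breadth_first_order(tree, reference, directed=False)
    return order, parents
```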

    Landmark Localization, Feature Matching and Biomarker Discovery from Magnetic Resonance Images

    The work presented in this thesis proposes several methods that can be roughly divided into three categories: I) landmark localization in medical images, II) feature matching for image registration, and III) biomarker discovery in neuroimaging. The first part deals with the identification of anatomical landmarks. The motivation stems from the fact that manual identification and labeling of these landmarks is very time consuming and prone to observer error, especially when large datasets must be analyzed. In this thesis we present three methods to tackle this challenge: a landmark descriptor based on local self-similarities (SS), a subspace-building framework based on manifold learning, and a sparse-coding landmark descriptor based on a data-specific learned dictionary basis. The second part of this thesis deals with finding matching features between a pair of images. These matches can be used to perform a registration between them. Registration is a powerful tool that allows mapping images into a common space to aid their analysis. Accurate registration can be challenging to achieve using intensity-based registration algorithms. Here, a framework is proposed for learning correspondences in pairs of images by matching SS features, and random sample consensus (RANSAC) is employed as a robust model estimator to learn a deformation model from the feature matches. Finally, the third part of the thesis deals with biomarker discovery using machine learning. In this section a framework is proposed for feature extraction from learned low-dimensional subspaces that represent inter-subject variability. The manifold subspace is built using data-driven regions of interest (ROIs), which are learned via sparse regression with stability selection. Probabilistic distribution models for different stages of the disease trajectory are also estimated for different class populations in the low-dimensional manifold and used to construct a probabilistic scoring function.
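
    RANSAC, used here as the robust model estimator, repeatedly fits a transformation to minimal random subsets of the feature matches and keeps the model supported by the most inliers. The sketch below does this for a 2D affine model, assuming the matched point sets come from the self-similarity matching step; the tolerance and iteration count are hypothetical, and the thesis's actual deformation model may differ.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst."""
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params                                   # (3, 2)

def ransac_affine(src, dst, n_iters=500, inlier_tol=3.0, seed=0):
    """Estimate an affine transform from noisy feature matches."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        # Fit a candidate model to a minimal random sample of 3 matches.
        idx = rng.choice(len(src), size=3, replace=False)
        params = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ params
        residuals = np.linalg.norm(pred - dst, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the largest consensus set found.
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```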

    3D tooth surface reconstruction

    Master's thesis (Master of Engineering).

    Image Registration Workshop Proceedings

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data being, and continuing to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high-quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

    Statistical models in medical image analysis

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaves 149-156). Computational tools for medical image analysis help clinicians diagnose, treat, monitor changes, and plan and execute procedures more safely and effectively. Two fundamental problems in analyzing medical imagery are registration, which brings two or more datasets into correspondence, and segmentation, which localizes the anatomical structures in an image. The noise and artifacts present in the scans, combined with the complexity and variability of patient anatomy, limit the effectiveness of simple image processing routines. Statistical models provide application-specific context to the problem by incorporating information derived from a training set consisting of instances of the problem along with their solutions. In this thesis, we explore the benefits of statistical models for medical image registration and segmentation. We present a technique for computing the rigid registration of pairs of medical images of the same patient. The method models the expected joint intensity distribution of two images when correctly aligned. The registration of a novel set of images is performed by maximizing the log likelihood of the transformation, given the joint intensity model. Results aligning SPGR and dual-echo magnetic resonance scans demonstrate sub-voxel accuracy and a large region of convergence. A novel segmentation method is presented that incorporates prior statistical models of intensity, local curvature, and global shape to direct the segmentation toward a likely outcome. Existing segmentation algorithms generally fit into one of three categories: boundary localization, voxel classification, and atlas matching, each with different strengths and weaknesses. Our algorithm unifies these approaches. A higher-dimensional surface is evolved based on local and global priors such that the zero level set converges on the object boundary. Results segmenting images of the corpus callosum, knee, and spine illustrate the strength and diversity of this approach. By Michael Emmanuel Leventon. Ph.D.
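
    The registration idea can be made concrete as follows: a joint intensity distribution is learned from a correctly aligned training pair, and candidate alignments of new images are scored by their log likelihood under that model. The sketch below is a simplified histogram version of that idea, not the thesis's exact implementation; the bin count and smoothing are assumed choices.

```python
import numpy as np

def train_joint_model(fixed, aligned_moving, bins=32):
    """Estimate the joint intensity distribution from a correctly aligned
    training pair, as a smoothed joint histogram."""
    hist, x_edges, y_edges = np.histogram2d(
        fixed.ravel(), aligned_moving.ravel(), bins=bins)
    prob = (hist + 1.0) / (hist.sum() + hist.size)   # additive smoothing
    return np.log(prob), x_edges, y_edges

def log_likelihood(fixed, warped_moving, model):
    """Log likelihood of a candidate alignment under the trained model.

    `warped_moving` is the moving image resampled with the candidate
    transformation; registration maximizes this score over transforms.
    """
    log_prob, x_edges, y_edges = model
    # Map each voxel's intensity pair to its histogram bin.
    xi = np.clip(np.digitize(fixed.ravel(), x_edges[1:-1]),
                 0, log_prob.shape[0] - 1)
    yi = np.clip(np.digitize(warped_moving.ravel(), y_edges[1:-1]),
                 0, log_prob.shape[1] - 1)
    return float(log_prob[xi, yi].sum())
```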