
    Multi Modal Medical Image Registration: A New Data Driven Approach

    Image registration is a challenging task in building computer-based diagnostic systems. A single image modality cannot provide all the information needed for an accurate diagnosis, so data from multiple sources/image modalities should be combined. In this work, a canonical correlation analysis (CCA) based image registration approach is proposed. CCA provides a framework for integrating information from multiple sources, and the information contained in both images is used for the registration task. Multimodal registration is performed on T1-weighted, T2-weighted and FLAIR MRI images, where the algorithm provided better results than a mutual information based registration approach. The work also covers 3D rigid registration of CT and MRI images, carried out on public datasets, with performance evaluated against results previously reported by other researchers; here too our algorithm outperforms mutual information based registration. Medical image registration of multimodal combinations such as MRI, MRI-CT, and MRI-CT-PET is considered. For MRI-CT registration, the CT image is used as the fixed image and the MRI image as the moving image, and the results are compared with benchmark methods from the literature such as the correlation coefficient, correlation ratio, mutual information and normalized mutual information.
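
    Since the abstract above contrasts a CCA-based similarity with mutual information but gives no implementation detail, the following is a minimal sketch of one way a CCA similarity over corresponding patches could be computed with scikit-learn. The patch size, the helpers extract_patches and cca_similarity, and the use of the sum of canonical correlations as the score are all assumptions for illustration, not the authors' method.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def extract_patches(img, size=8, stride=8):
        """Flatten non-overlapping size x size patches of a 2-D image into row vectors."""
        h, w = img.shape
        rows = []
        for i in range(0, h - size + 1, stride):
            for j in range(0, w - size + 1, stride):
                rows.append(img[i:i + size, j:j + size].ravel())
        return np.asarray(rows, dtype=float)

    def cca_similarity(fixed, moving, n_components=2):
        """Sum of canonical correlations between corresponding patches (hypothetical score)."""
        X = extract_patches(fixed)
        Y = extract_patches(moving)
        Xc, Yc = CCA(n_components=n_components).fit_transform(X, Y)
        corrs = [np.corrcoef(Xc[:, k], Yc[:, k])[0, 1] for k in range(n_components)]
        return float(np.sum(corrs))

    Such a score could then be maximized over rigid transform parameters in the same way mutual information is maximized in the comparisons above.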

    Medical image registration using Edgeworth-based approximation of Mutual Information

    We propose a new similarity measure for iconic medical image registration, an Edgeworth-based third order approximation of Mutual Information (MI), named 3-EMI. Contrary to classical Edgeworth-based MI approximations, such as those proposed for independent component analysis, the 3-EMI measure is able to deal with potentially correlated variables. The performance of 3-EMI is evaluated and compared with the Gaussian and B-Spline kernel-based estimates of MI, and the validation is conducted in three steps. First, we compare the intrinsic behavior of the measures as a function of the number of samples and the variance of an additive Gaussian noise. Then, they are evaluated in the context of multimodal rigid registration, using the RIRE data. We finally validate the use of our measure in the context of thoracic monomodal non-rigid registration, using the database proposed during the MICCAI EMPIRE10 challenge. The results show the wide range of clinical applications for which our measure can perform, including non-rigid registration, which remains a challenging problem. They also demonstrate that 3-EMI outperforms classical estimates of MI for a low number of samples or a strong additive Gaussian noise. More generally, our measure gives competitive registration results, with a much lower numerical complexity compared to classical estimators such as the reference B-Spline kernel estimator, which makes 3-EMI a good candidate for fast and accurate registration tasks.
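
    To make the cumulant machinery behind such approximations concrete, here is a small sketch of the classical building blocks only, not the paper's 3-EMI estimator: MI decomposes as MI = MI_gauss - J(X) - J(Y) + J(X,Y), where J is negentropy, and each marginal negentropy is approximated from skewness and excess kurtosis as J ~ k3^2/12 + k4^2/48. The joint term J(X,Y), which 3-EMI approximates with third-order cross-cumulants, is deliberately omitted here.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def marginal_negentropy(z):
        """Classical Edgeworth/Comon approximation of negentropy for a standardized sample."""
        z = (z - z.mean()) / z.std()
        k3, k4 = skew(z), kurtosis(z)  # kurtosis() returns the excess kurtosis
        return k3 ** 2 / 12.0 + k4 ** 2 / 48.0

    def mi_cumulant_sketch(x, y):
        """Gaussian MI minus the marginal negentropy corrections (joint negentropy omitted)."""
        rho = np.clip(np.corrcoef(x, y)[0, 1], -0.999, 0.999)
        mi_gauss = -0.5 * np.log(1.0 - rho ** 2)
        return mi_gauss - marginal_negentropy(x) - marginal_negentropy(y)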

    Image Registration Using Mutual Information

    Almost all imaging systems require some form of registration. A few examples are aligning medical images for diagnosis, matching stereo images to recover shape, and comparing facial images in a database to recognize people. Given the difficulty of registering images taken at different times, using different sensors, from different positions, registration algorithms come in different shapes and sizes. Recently, a new type of solution to the registration problem has emerged, based on information theory. In particular, the mutual information similarity metric has been used to register multi-modal medical images. Mutual information measures the statistical dependence between the two images. Unlike many other registration techniques, mutual information makes few a priori assumptions about the surface properties of the object or the imaging process, making it adaptable to changes in lighting and changes between sensors. The method can be applied to higher-dimensional registration and many other imaging situations. In this report, we compare two approaches taken towards the implementation of rigid 2D mutual information image registration. We look further at algorithm speedup and noise reduction efforts. A full background is provided.
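
    As a point of reference for the discussion above, a minimal histogram-based estimate of the mutual information between two aligned 2-D images can be written in a few lines of NumPy; the bin count of 32 is an arbitrary illustrative choice.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Estimate MI from a joint intensity histogram of two equally sized images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
        py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    A registration loop then searches the transform parameters (e.g. a 2D rotation and translation) that maximize this value.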

    Mutual Information Based Image Registration for Medical Imaging

    Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. Image registration geometrically aligns two images (the reference and test images). This thesis discusses medical image registration using correlation with mutual information. It also addresses a problematic issue that arises during the feature-matching stage of mutual information based medical image registration. The matching results show that by changing the window size of the image (decreasing or increasing it, depending on the image) more matched points can be obtained. This thesis also focuses on the registration of two images that are rotationally misaligned and gives a procedure to measure this angle of rotation using mutual information. Typical registration applications include remote sensing, image mosaicing, weather forecasting, super-resolution image creation, and medicine (combining computerized tomography (CT) and magnetic resonance imaging (MRI) to obtain more complete information about the patient, monitoring tumor growth, etc.). This thesis also describes image mosaicing using mutual information, for both medical and satellite images. The thesis concludes with the results obtained by applying the above-mentioned techniques and with possible changes that could be made in the future to obtain better results.
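
    The rotation-angle measurement mentioned above can be illustrated with a simple exhaustive search: rotate the test image over a range of candidate angles and keep the angle that maximizes mutual information against the reference. The angle range, step and bin count below are hypothetical, and the MI helper is redefined so the snippet stands alone; this is an illustrative sketch, not the thesis procedure.

    import numpy as np
    from scipy.ndimage import rotate

    def _mi(a, b, bins=32):
        pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    def estimate_rotation(reference, test, angles=np.arange(-30.0, 30.5, 0.5)):
        """Return the candidate angle (degrees) whose rotated test image best matches the reference."""
        scores = [_mi(reference, rotate(test, a, reshape=False)) for a in angles]
        return float(angles[int(np.argmax(scores))])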

    Feature Neighbourhood Mutual Information for multi-modal image registration: An application to eye fundus imaging

    Multi-modal image registration is becoming an increasingly powerful tool for medical diagnosis and treatment. The combination of different image modalities facilitates much greater understanding of the underlying condition, resulting in improved patient care. Mutual Information (MI) is a popular image similarity measure for performing multi-modal image registration. However, it is recognised that there are limitations with the technique that can compromise the accuracy of the registration, such as the lack of spatial information accounted for by the similarity measure. In this paper, we present a two-stage non-rigid registration process using a novel similarity measure, Feature Neighbourhood Mutual Information. The similarity measure efficiently incorporates both spatial and structural image properties that are not traditionally considered by MI. By incorporating such features, we find that this method is capable of achieving much greater registration accuracy when compared to existing methods, whilst also achieving efficient computational runtime. To demonstrate our method, we use a challenging medical image data set consisting of paired retinal fundus photographs and confocal scanning laser ophthalmoscope images. Accurate registration of these image pairs facilitates improved clinical diagnosis, and can be used for the early detection and prevention of glaucoma.
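
    The idea of augmenting the intensity statistics with structural information can be sketched, in a much simplified form, by building the joint distribution over intensity plus a local gradient-magnitude channel for each image and computing MI in that higher-dimensional space. This only illustrates feature-augmented MI in general; it is not the paper's Feature Neighbourhood Mutual Information formulation, and the choice of gradient magnitude and 16 bins is an assumption.

    import numpy as np

    def feature_mi(img_a, img_b, bins=16):
        """MI between (intensity, gradient-magnitude) feature pairs of two aligned images."""
        ga = np.hypot(*np.gradient(img_a.astype(float)))
        gb = np.hypot(*np.gradient(img_b.astype(float)))
        feats_a = np.stack([img_a.ravel(), ga.ravel()], axis=1)
        feats_b = np.stack([img_b.ravel(), gb.ravel()], axis=1)
        joint, _ = np.histogramdd(np.hstack([feats_a, feats_b]), bins=bins)
        pxy = joint / joint.sum()
        pa = pxy.sum(axis=(2, 3))          # marginal over image A features
        pb = pxy.sum(axis=(0, 1))          # marginal over image B features
        nz = pxy > 0
        outer = pa[:, :, None, None] * pb[None, None, :, :]
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / outer[nz])))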

    Intrasubject multimodal groupwise registration with the conditional template entropy

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments show equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information.
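
    The main ingredient of such a metric, the sum over images of the conditional entropy H(I_k | T) = H(I_k, T) - H(T) against a representative template T, can be sketched with joint histograms as below. For simplicity the template here is the voxel-wise group mean, whereas the paper constructs it iteratively with principal component analysis; the function names and the bin count are illustrative assumptions.

    import numpy as np

    def _entropy(p):
        p = p[p > 0]
        return -float(np.sum(p * np.log(p)))

    def conditional_template_entropy(images, bins=32):
        """Sum over the group of H(I_k | T), with T taken as the group mean for this sketch."""
        stack = np.stack([im.astype(float).ravel() for im in images])  # shape (K, N)
        template = stack.mean(axis=0)
        total = 0.0
        for row in stack:
            joint, _, _ = np.histogram2d(row, template, bins=bins)
            pxy = joint / joint.sum()
            total += _entropy(pxy) - _entropy(pxy.sum(axis=0))  # H(I_k, T) - H(T)
        return total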

    Mesh-to-raster based non-rigid registration of multi-modal images

    Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh while the patient data is provided from CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
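
    The structure of such an objective, a signed-distance data term evaluated at the displaced curve points plus a smoothness penalty on the displacement, can be sketched as follows in 2D. The second-difference penalty stands in for the higher-order elastic regularizer, and the names m2r_energy and alpha are hypothetical; this is not the authors' formulation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def m2r_energy(curve_pts, displacement, sdf, alpha=1.0):
        """curve_pts, displacement: (N, 2) arrays in (row, col); sdf: signed distance map of the reference ROI."""
        warped = curve_pts + displacement
        # Data term: squared signed distance of each warped curve point to the ROI boundary
        d = map_coordinates(sdf, warped.T, order=1, mode='nearest')
        data = np.sum(d ** 2)
        # Regularizer: squared second differences along the curve (a discrete bending energy)
        bend = np.sum(np.diff(displacement, n=2, axis=0) ** 2)
        return float(data + alpha * bend)

    Minimizing this energy over the displacement (e.g. with scipy.optimize.minimize on its flattened entries) pulls the curve onto the zero level set of the signed distance function while keeping the deformation smooth.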

    Data registration and fusion for cardiac applications

    The registration and fusion of information from multiple cardiac image modalities such as magnetic resonance imaging (MRI), X-ray computed tomography (CT), positron emission tomography (PET) and single photon emission computed tomography (SPECT) have been of increasing interest to the medical community as tools for furthering physiological understanding and for the diagnosis of ischemic heart diseases. Ischemic heart diseases and their consequence, myocardial infarction, are the leading cause of mortality in industrialized countries. In cardiac image registration and data fusion, the combination of structural information from MR images and functional information from PET and SPECT is of special interest for the estimation of myocardial function and viability. Cardiac image registration is a more complex problem than brain image registration: the non-rigid motion of the heart and the thorax structures introduces additional difficulties. The goal of this thesis was to develop methods for cardiac data registration and fusion. A rigid registration method was developed to register cardiac MR and PET images, based on the registration of thorax structures segmented from MR and PET transmission images using deformable models. A registration of MR short-axis images with the PET emission image was also derived. The rigid registration method was evaluated using simulated images and clinical MR and PET images from ten patients with multivessel coronary artery disease. An elastic registration method was also developed to register intra-patient cardiac MR and PET images and inter-patient head MR images; it uses a combination of mutual information, gradient information and smoothness of the transformation to guide the deformation of one image towards another. An approach for the creation of 3-D functional maps of the heart was also developed. An individualized anatomical heart model was extracted from the MR images; a rigid registration of anatomical MR images and PET metabolic images was carried out using surface-based registration, and the MR images were registered with magnetocardiography (MCG) data using external markers. The method resulted in a 3-D anatomical and functional model of the heart that included structural information from the MRI and functional information from the PET and MCG. Different error sources in the registration of the MR images and MCG data were also evaluated. The results of the rigid MR-PET registration method were also used in a comparison of multimodality MR imaging methods to PET.
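
    The combined elastic-registration criterion described above (mutual information plus gradient information plus transformation smoothness) can be illustrated with the toy objective below. The weights w_mi, w_grad and w_reg, the cosine-style gradient-agreement term and the second-difference smoothness penalty are assumptions for illustration, not the weighting used in the thesis.

    import numpy as np

    def _mi(a, b, bins=32):
        pxy, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def combined_objective(fixed, warped, displacement, w_mi=1.0, w_grad=0.5, w_reg=0.1):
        """Score to maximize: MI + gradient agreement - roughness of the displacement field."""
        gfx, gfy = np.gradient(fixed.astype(float))
        gwx, gwy = np.gradient(warped.astype(float))
        dot = gfx * gwx + gfy * gwy
        norms = np.hypot(gfx, gfy) * np.hypot(gwx, gwy) + 1e-8
        grad_term = float(np.mean(np.abs(dot) / norms))
        # displacement: pair of 2-D arrays (row and column components of the field)
        rough = sum(float(np.sum(np.diff(d, n=2, axis=ax) ** 2))
                    for d in displacement for ax in (0, 1))
        return w_mi * _mi(fixed, warped) + w_grad * grad_term - w_reg * rough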