Hessian-based Similarity Metric for Multimodal Medical Image Registration
One of the fundamental elements of both traditional and certain deep learning
medical image registration algorithms is measuring the similarity/dissimilarity
between two images. In this work, we propose an analytical solution for
measuring similarity between two different medical image modalities based on
the Hessian of their intensities. First, assuming a functional dependence
between the intensities of two perfectly corresponding patches, we investigate
how their Hessians relate to each other. Secondly, we suggest a closed-form
expression to quantify the deviation from this relationship, given arbitrary
pairs of image patches. We propose a geometrical interpretation of the new
similarity metric and an efficient implementation for registration. We
demonstrate the robustness of the metric to intensity nonuniformities using
synthetic bias fields. By integrating the new metric in an affine registration
framework, we evaluate its performance for MRI and ultrasound registration in
the context of image-guided neurosurgery using target registration error and
computation time.
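The key observation in this abstract can be illustrated with a small sketch: if two perfectly corresponding patches satisfy J = f(I) with f approximately linear over the patch, their per-pixel Hessians are proportional, so the mean absolute cosine between Hessians is a crude proxy for a deviation-from-proportionality score. Everything below (finite-difference Hessians, the cosine score, the function names) is an illustrative assumption, not the authors' actual closed-form expression.

```python
import numpy as np

def hessian_2d(img):
    """Per-pixel 2x2 Hessian via repeated finite differences (illustrative)."""
    gy, gx = np.gradient(img.astype(float))
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # Stack per-pixel Hessian entries into shape (H, W, 2, 2).
    return np.stack([np.stack([gxx, gxy], -1),
                     np.stack([gyx, gyy], -1)], -2)

def hessian_similarity(a, b, eps=1e-8):
    """Mean absolute cosine between per-pixel Hessians of two patches.
    Close to 1.0 when the Hessians are proportional everywhere, i.e.
    when a locally linear intensity mapping relates the patches
    (a hypothetical proxy for the paper's closed-form metric)."""
    ha, hb = hessian_2d(a), hessian_2d(b)
    dot = np.sum(ha * hb, axis=(-2, -1))
    na = np.sqrt(np.sum(ha ** 2, axis=(-2, -1)))
    nb = np.sqrt(np.sum(hb ** 2, axis=(-2, -1)))
    return float(np.mean(np.abs(dot) / (na * nb + eps)))
```

Under a globally linear mapping b = 2a + 3 the score is essentially 1, while an unrelated image scores much lower; a multiplicative bias field changes the proportionality factor only smoothly, which is consistent with the robustness to intensity nonuniformities claimed above.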
Multimodal retinal image registration using a fast principal component analysis hybrid-based similarity measure
Multimodal retinal images (RI) are extensively used for analysing various eye diseases and conditions such as myopia and diabetic retinopathy. The incorporation of two or more RI modalities provides complementary structure information in the presence of non-uniform illumination and low-contrast homogeneous regions. It also presents significant challenges for retinal image registration (RIR). This paper investigates how the Expectation Maximization for Principal Component Analysis with Mutual Information (EMPCA-MI) algorithm can effectively achieve multimodal RIR. This iterative hybrid-based similarity measure combines spatial features with mutual information to provide enhanced registration without recourse to either segmentation or feature extraction. Experimental results for clinical multimodal RI datasets comprising colour fundus and scanning laser ophthalmoscope images confirm EMPCA-MI is able to consistently afford superior numerical and qualitative registration performance compared with existing RIR techniques, such as the bifurcation structures method.
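The hybrid idea described here, spatial features combined with mutual information, can be sketched in a few lines: project each pixel's neighbourhood onto its first principal component to get a feature image per modality, then score the pair by histogram-based MI. This is a minimal sketch only; it uses a plain SVD in place of the paper's iterative EMPCA, and the neighbourhood radius and bin count are assumptions.

```python
import numpy as np

def pc1_feature(img, r=1):
    """Project each (2r+1)^2 neighbourhood onto its first principal
    component (plain SVD stand-in for the iterative EMPCA step)."""
    h, w = img.shape
    pad = np.pad(img.astype(float), r, mode='edge')
    patches = np.stack([pad[dy:dy + h, dx:dx + w].ravel()
                        for dy in range(2 * r + 1)
                        for dx in range(2 * r + 1)], axis=1)
    patches -= patches.mean(axis=0)          # centre each offset feature
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    return (patches @ vt[0]).reshape(h, w)   # projection onto first PC

def mutual_information(x, y, bins=32):
    """Histogram-based MI between two feature images."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def empca_mi(a, b):
    """Hybrid similarity: MI between PCA spatial-feature images."""
    return mutual_information(pc1_feature(a), pc1_feature(b))
```

Because the feature images summarise local structure rather than raw intensity, the score tolerates the non-uniform illumination mentioned above: a nonlinearly remapped copy of an image still scores well above an unrelated one.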
A non-rigid registration approach for quantifying myocardial contraction in tagged MRI using generalized information measures.
We address the problem of quantitatively assessing myocardial function from tagged MRI sequences. We develop a two-step method comprising (i) a motion estimation step using a novel variational non-rigid registration technique based on generalized information measures, and (ii) a measurement step, yielding local and segmental deformation parameters over the whole myocardium. Experiments on healthy and pathological data demonstrate that this method delivers, within a reasonable computation time and in a fully unsupervised way, reliable measurements for normal subjects and quantitative pathology-specific information. Beyond cardiac MRI, this work redefines the foundations of variational non-rigid registration for information-theoretic similarity criteria with potential interest in multimodal medical imaging.
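"Generalized information measures" denotes a family of criteria that contains Shannon mutual information as a special case. One common member, used here purely as an illustration and not claimed to be this paper's exact criterion, is the I-alpha f-information computed from a joint intensity histogram; it reduces to Shannon MI as alpha approaches 1.

```python
import numpy as np

def f_information(x, y, alpha=1.5, bins=32):
    """I_alpha f-information between two images from their joint
    histogram: (sum_ij p_ij^a (p_i p_j)^(1-a) - 1) / (a (a - 1)).
    One illustrative member of the generalized-information family;
    reduces to Shannon MI in the limit alpha -> 1."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    prod = px @ py                         # product of marginals
    nz = (p > 0) & (prod > 0)
    s = np.sum(p[nz] ** alpha * prod[nz] ** (1.0 - alpha))
    return float((s - 1.0) / (alpha * (alpha - 1.0)))
```

The free parameter alpha tunes how heavily the measure weights strong intensity co-occurrences, which is one motivation for moving beyond plain MI in a variational registration energy.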
Higher-Order Momentum Distributions and Locally Affine LDDMM Registration
To achieve sparse parametrizations that allow intuitive analysis, we aim to
represent deformation with a basis containing interpretable elements, and we
wish to use elements that have the description capacity to represent the
deformation compactly. To accomplish this, we introduce in this paper
higher-order momentum distributions in the LDDMM registration framework. While
the zeroth-order momenta previously used in LDDMM only describe local
displacement, the first-order momenta that are proposed here represent a basis
that allows local description of affine transformations and subsequent compact
description of non-translational movement in a globally non-rigid deformation.
The resulting representation contains directly interpretable information from
both mathematical and modeling perspectives. We develop the mathematical
construction of the registration framework with higher-order momenta, we show
the implications for sparse image registration and deformation description, and
we provide examples of how the parametrization enables registration with a very
low number of parameters. The capacity and interpretability of the
parametrization using higher-order momenta lead to natural modeling of
articulated movement, and the method promises to be useful for quantifying
ventricle expansion and progressing atrophy during Alzheimer's disease.
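The "locally affine, globally non-rigid" behaviour described above can be sketched as a kernel-weighted sum of control-point contributions, where each point carries a translation (zeroth-order) part and an affine matrix standing in for the first-order momentum. This is a sketch in the spirit of the parametrization only; the actual LDDMM construction couples momenta to kernel derivatives, and the Gaussian kernel and all names below are assumptions.

```python
import numpy as np

def velocity(x, centers, a, A, sigma=1.0):
    """Velocity at query points x (N,2) from control points carrying
    zeroth-order momenta a (K,2) and hypothetical first-order (affine)
    momenta A (K,2,2), blended by a Gaussian kernel.  Near each centre
    the field is approximately affine: a_i + A_i (x - x_i)."""
    v = np.zeros_like(x, dtype=float)
    for c, ai, Ai in zip(centers, a, A):
        d = x - c                                           # offsets (N,2)
        w = np.exp(-np.sum(d ** 2, axis=1) / (2 * sigma ** 2))
        v += w[:, None] * (ai + d @ Ai.T)                   # translation + affine
    return v
```

With a handful of control points this gives a deformation described by very few parameters, which is the compactness argument the abstract makes; a rotation generator for A_i, for instance, encodes a local rotation that no number of pure-translation momenta expresses as compactly.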
Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery
In this paper we discuss the potential and challenges regarding SAR-optical
stereogrammetry for urban areas, using very-high-resolution (VHR) remote
sensing imagery. Since we do this mainly from a geometrical point of view, we
first analyze the height reconstruction accuracy to be expected for different
stereogrammetric configurations. Then, we propose a strategy for simultaneous
tie point matching and 3D reconstruction, which exploits an epipolar-like
search window constraint. To drive the matching and ensure some robustness, we
combine different established handcrafted similarity measures. For the
experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and
MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR
imagery is generally feasible with 3D positioning accuracies in the
meter-domain, although the matching of these strongly heterogeneous
multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar
(SAR), optical images, remote sensing, data fusion, stereogrammetry.
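The matching strategy sketched in this abstract, an epipolar-like search window scored by a combination of established similarity measures, can be illustrated with a toy 1-D search that mixes normalized cross-correlation and mutual information. The equal weights, bin count, and strip geometry are assumptions for illustration; the paper's actual measure combination is not specified here.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mi(a, b, bins=16):
    """Histogram-based mutual information of two equal-size patches."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def match_along_row(template, strip, w_ncc=0.5, w_mi=0.5):
    """Slide the template along a 1-D (epipolar-like) strip and return
    the offset maximising a weighted mix of NCC and MI (hypothetical
    weights standing in for the paper's combined handcrafted measures)."""
    h, w = template.shape
    scores = [w_ncc * ncc(template, strip[:, u:u + w])
              + w_mi * mi(template, strip[:, u:u + w])
              for u in range(strip.shape[1] - w + 1)]
    return int(np.argmax(scores))
```

Restricting the search to the epipolar-like strip is what keeps the combinatorics manageable; combining measures hedges against the failure modes any single measure has on SAR-optical pairs.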
Automated registration of multimodal optic disc images: clinical assessment of alignment accuracy
Purpose: To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography.
Materials and Methods: Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques were used to align these image pairs: Regional Mutual Information, rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks; alignment of each composite picture was assessed on a 5-point grading scale ranging from “Fail” (no alignment of vessels, no vessel contact) through “Weak” (vessels have slight contact) and “Good” (vessels with 50% contact) to “Excellent” (complete alignment). The composites were graded independently by 3 clinically experienced observers.
Results: A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of “Good” or better in >95% of the image sets. NRFNMI had the highest percentage of “Excellent” (mean: 99.6%; range, 95.2% to 99.6%), followed by Regional Mutual Information (mean: 81.6%; range, 78.5% to 86.3%) and FNMI (mean: 73.1%; range, 54.4% to 85.2%).
Conclusions: Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.
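The composite pictures used for grading, two aligned modalities interleaved as alternating 5×5-pixel blocks, take only a few lines to build. A minimal sketch, assuming both images are already registered and share a shape:

```python
import numpy as np

def interleave_blocks(a, b, block=5):
    """Checkerboard mosaic of two aligned images as alternating
    block x block tiles, as used for the graders' composite pictures."""
    assert a.shape == b.shape
    h, w = a.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Even tiles take pixels from a, odd tiles from b.
    mask = ((yy // block + xx // block) % 2).astype(bool)
    return np.where(mask, b, a)
```

Misregistration then shows up directly as vessel discontinuities at tile borders, which is what makes the 5-point visual grading of vessel contact possible.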
Fast Image and LiDAR alignment based on 3D rendering in sensor topology
Mobile Mapping Systems are now commonly used in large urban acquisition campaigns. They are often equipped with LiDAR sensors and optical cameras, providing very large multimodal datasets. The fusion of both modalities serves different purposes such as point cloud colorization, geometry enhancement or object detection. However, this fusion task cannot be done directly as both modalities are only coarsely registered. This paper presents a fully automatic approach for LiDAR projection and optical image registration refinement based on LiDAR point cloud 3D renderings. First, a coarse 3D mesh is generated from the LiDAR point cloud using the sensor topology. Then, the mesh is rendered in the image domain. After that, a variational approach is used to align the rendering with the optical image. This method achieves high quality results while performing in very low computational time. Results on real data demonstrate the efficiency of the model for aligning LiDAR projections and optical images.
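The coarse registration this pipeline starts from, and which the variational step refines, comes from projecting LiDAR geometry into the image with the system's calibration. A minimal pinhole-projection sketch, where the intrinsics K and pose (R, t) are hypothetical calibration parameters and the mesh rendering itself is omitted:

```python
import numpy as np

def project_points(pts, K, R, t):
    """Project LiDAR points (N,3) into the image plane with a pinhole
    model: x_cam = R x_world + t, then perspective divide through K.
    K, R, t stand in for the (unspecified) system calibration."""
    cam = R @ pts.T + t[:, None]      # world -> camera frame, shape (3,N)
    uv = K @ cam                      # apply intrinsics
    return (uv[:2] / uv[2]).T         # perspective divide -> pixel coords (N,2)
```

Small calibration or synchronization errors shift these projected pixels by a few units, which is exactly the residual misalignment the rendering-based variational alignment is designed to absorb.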