
    Image Similarity Metrics in Image Registration

    Measures of image similarity that inspect the intensity probability distribution of the images have proved extremely popular in image registration applications. The joint entropy of the intensity distributions and the marginal entropies of the individual images are combined to produce properties such as resistance to loss of information in one image and invariance to changes in image overlap during registration. However, information-theoretic cost functions are largely used empirically. This work attempts to describe image similarity measures within a formal mathematical metric framework. Redefining mutual information as a metric is shown to lead naturally to the standardised variant, normalised mutual information.
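
    The normalised mutual information mentioned above is typically estimated from a joint intensity histogram. The sketch below implements the Studholme-style formulation NMI = (H(A) + H(B)) / H(A, B); the NumPy implementation, bin count and function names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def normalised_mutual_information(img_a, img_b, bins=32):
    """Studholme-style NMI: (H(A) + H(B)) / H(A, B), from a joint histogram."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()      # joint probability distribution
    p_a = p_ab.sum(axis=1)                    # marginal of image A
    p_b = p_ab.sum(axis=0)                    # marginal of image B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

# Identical images give the maximum value of 2 for this estimator;
# unrelated images score close to 1.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(normalised_mutual_information(img, img))                    # ~2.0
print(normalised_mutual_information(img, rng.random((64, 64))))   # ~1.0
```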

    Intrasubject multimodal groupwise registration with the conditional template entropy

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information.
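
    As a rough illustration, the sketch below sums a histogram-based conditional entropy H(image | template) over a group of images. The voxel-wise mean of the group is used here as a stand-in for the iteratively constructed PCA-based template described in the abstract; that simplification, along with the bin count, is an assumption for illustration only.

```python
import numpy as np

def conditional_entropy(x, y, bins=32):
    """H(X | Y) = H(X, Y) - H(Y), estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_y = p_xy.sum(axis=0)                    # marginal of the template
    entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return entropy(p_xy) - entropy(p_y)

def conditional_template_entropy(images, bins=32):
    """Sum over the group of H(image_i | template). The template here is the
    voxel-wise mean of the group, standing in for the iteratively updated
    PCA-based template described in the abstract."""
    template = np.mean(np.stack(images), axis=0)
    return sum(conditional_entropy(im, template, bins=bins) for im in images)
```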

    Image similarity metrics suitable for infrared video stabilization during active wildfire monitoring: a comparative analysis

    Aerial Thermal Infrared (TIR) imagery has demonstrated tremendous potential to monitor active forest fires and acquire detailed information about fire behavior. However, aerial video is usually unstable and requires inter-frame registration before further processing. Measurement of image misalignment is an essential operation for video stabilization. Misalignment can usually be estimated through image similarity, although image similarity metrics are also sensitive to other factors such as changes in the scene and lighting conditions. Therefore, this article presents a thorough analysis of image similarity measurement techniques useful for inter-frame registration in wildfire thermal video. The image similarity metrics most commonly and successfully employed in other fields were surveyed, adapted, benchmarked and compared. We investigated their response to different camera movement components as well as to recording frequency and natural variations in fire, background and ambient conditions. The study was conducted on real video from six experimental fire scenarios, ranging from laboratory tests to large-scale controlled burns. Both Global and Local Sensitivity Analyses (GSA and LSA, respectively) were performed using state-of-the-art techniques. Based on the obtained results, two different similarity metrics are proposed to satisfy two different needs. A normalized version of Mutual Information is recommended as the cost function during registration, whereas 2D correlation performed best as a quality control metric after registration. These results provide a sound basis for image alignment measurement and open the door to further developments in image registration, motion estimation and video stabilization for aerial monitoring of active wildland fires.
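
    Of the two recommended metrics, the post-registration quality check is the simpler one to sketch: a 2D correlation coefficient between a pair of frames. Below is one plausible reading of that metric, a Pearson-style correlation computed over the whole frame; the exact normalisation used in the article is an assumption here.

```python
import numpy as np

def corr2d(frame_a, frame_b):
    """Pearson-style 2D correlation coefficient between two frames,
    usable as a simple post-registration alignment quality score."""
    a = frame_a.astype(float) - frame_a.mean()
    b = frame_b.astype(float) - frame_b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Well-aligned frames score near 1; misaligned frames or changed scenes score lower.
```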

    Evaluation of Image Registration Accuracy for Tumor and Organs at Risk in the Thorax for Compliance With TG 132 Recommendations

    Purpose: To evaluate accuracy for 2 deformable image registration methods (in-house B-spline and MIM freeform) using image pairs exhibiting changes in patient orientation and lung volume and to assess the appropriateness of registration accuracy tolerances proposed by the American Association of Physicists in Medicine Task Group 132 under such challenging conditions via assessment by expert observers. Methods and Materials: Four-dimensional computed tomography scans for 12 patients with lung cancer were acquired with patients in prone and supine positions. Tumor and organs at risk were delineated by a physician on all data sets: supine inhale (SI), supine exhale, prone inhale, and prone exhale. The SI image was registered to the other images using both registration methods. All SI contours were propagated using the resulting transformations and compared with physician delineations using Dice similarity coefficient, mean distance to agreement, and Hausdorff distance. Additionally, propagated contours were anonymized along with ground-truth contours and rated for quality by physician-observers. Results: Averaged across all patients, the accuracy metrics investigated remained within tolerances recommended by Task Group 132 (Dice similarity coefficient >0.8, mean distance to agreement <3 mm). MIM performed better with both complex (vertebrae) and low-contrast (esophagus) structures, whereas the in-house method performed better with lungs (whole and individual lobes). Accuracy metrics worsened but remained within tolerances when propagating from supine to prone; however, the Jacobian determinant contained regions with negative values, indicating localized nonphysiologic deformations. For MIM and in-house registrations, 50% and 43.8%, respectively, of propagated contours were rated acceptable as is and 8.2% and 11.0% as clinically unacceptable. Conclusions: The deformable image registration methods performed reliably and met recommended tolerances despite anatomically challenging cases exceeding typical interfraction variability. However, additional quality assurance measures are necessary for complex applications (e.g., dose propagation). Human review rather than unsupervised implementation should always be part of the clinical registration workflow.
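
    The accuracy criteria quoted above (Dice similarity coefficient, mean distance to agreement, Hausdorff distance) can be computed from binary contour masks along the following lines. This is a generic sketch using standard definitions, not the study's own code; the erosion-based surface extraction, the symmetrisation, and the assumption of 3D masks with a given voxel spacing are all illustrative choices.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)           # boundary voxels of a
    surf_b = b & ~binary_erosion(b)           # boundary voxels of b
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def mean_distance_to_agreement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetrised mean surface distance (one common MDA definition) in mm,
    assuming 3D masks and voxel spacing given in mm."""
    return 0.5 * (_surface_distances(a, b, spacing).mean()
                  + _surface_distances(b, a, spacing).mean())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Maximum surface-to-surface distance, taken in both directions."""
    return max(_surface_distances(a, b, spacing).max(),
               _surface_distances(b, a, spacing).max())
```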

    Primitive Simultaneous Optimization of Similarity Metrics for Image Registration

    Even though simultaneous optimization of similarity metrics represents a standard procedure in the field of semantic segmentation, surprisingly, this does not hold true for image registration. To close this unexpected gap in the literature, we investigate in a complex multi-modal 3D setting whether simultaneous optimization of registration metrics, here implemented by means of primitive summation, can benefit image registration. We evaluate two challenging datasets containing collections of pre- to post-operative and pre- to intra-operative Magnetic Resonance Imaging (MRI) of glioma. Employing the proposed optimization, we demonstrate improved registration accuracy in terms of Target Registration Error (TRE) on expert neuroradiologists' landmark annotations.
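
    "Primitive summation" suggests a combined cost that is simply the sum of individual dissimilarity terms. The sketch below adds a negative normalised cross-correlation and a negative mutual information term; the particular metrics, their orientation (lower is better) and the absence of weighting are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def neg_ncc(fixed, moving):
    """Negative normalised cross-correlation (lower is better)."""
    f = (fixed - fixed.mean()) / (fixed.std() + 1e-8)
    m = (moving - moving.mean()) / (moving.std() + 1e-8)
    return -float(np.mean(f * m))

def neg_mi(fixed, moving, bins=32):
    """Negative mutual information from a joint histogram (lower is better)."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p = joint / joint.sum()
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return -(h(p.sum(axis=1)) + h(p.sum(axis=0)) - h(p))

def summed_loss(fixed, moving):
    """Primitive summation: the combined registration cost is the
    unweighted sum of the individual dissimilarity terms."""
    return neg_ncc(fixed, moving) + neg_mi(fixed, moving)
```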

    Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics

    People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here, by a metric we mean any image similarity or distance measure; in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
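
    A meta-algorithm of this kind can be sketched as: score every candidate registration result with a battery of metrics, then mine the resulting table to pick a winner. In the toy version below, the "mining" step is a simple consensus by average rank; that choice, and the example metrics, are assumptions for illustration and may differ from the paper's actual data-mining procedure.

```python
import numpy as np

def mine_metrics(fixed, candidates, metrics):
    """Score every candidate result with every metric, then pick the candidate
    with the best average rank. `metrics` maps a name to a callable
    (fixed, candidate) -> score, where lower scores are assumed better."""
    table = {name: [metric(fixed, cand) for cand in candidates]
             for name, metric in metrics.items()}
    ranks = np.array([np.argsort(np.argsort(scores)) for scores in table.values()])
    best = int(np.argmin(ranks.mean(axis=0)))   # consensus pick across all metrics
    return table, best

# Hypothetical usage with two simple dissimilarity metrics
metrics = {
    "ssd": lambda f, m: float(np.mean((f - m) ** 2)),
    "neg_corr": lambda f, m: -float(np.corrcoef(f.ravel(), m.ravel())[0, 1]),
}
rng = np.random.default_rng(0)
fixed = rng.random((32, 32))
candidates = [fixed + 0.05 * rng.random((32, 32)) for _ in range(3)]
table, best = mine_metrics(fixed, candidates, metrics)
print(best, {k: [round(v, 4) for v in vals] for k, vals in table.items()})
```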

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
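
    For readers unfamiliar with the loss functions mentioned here, a typical unsupervised deformable-registration objective combines an intensity similarity term with a smoothness penalty on the predicted displacement field. The sketch below uses mean squared error and an L2 gradient penalty as illustrative choices; the weighting, the specific terms, and the NumPy formulation (real pipelines use automatic differentiation) are assumptions.

```python
import numpy as np

def registration_loss(fixed, warped_moving, displacement, lam=0.01):
    """Unsupervised deformable-registration objective: intensity similarity
    between the fixed image and the warped moving image, plus a smoothness
    penalty on the displacement field. `displacement` has shape (..., ndim),
    with the last axis holding the displacement components."""
    similarity = float(np.mean((fixed - warped_moving) ** 2))     # MSE term
    # L2 penalty on finite-difference gradients of each displacement component
    grad_terms = [np.mean(g ** 2)
                  for c in range(displacement.shape[-1])
                  for g in np.gradient(displacement[..., c])]
    smoothness = float(np.mean(grad_terms))
    return similarity + lam * smoothness
```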
