5 research outputs found

    Perceptual Image Fusion Using Wavelets


    Three-Dimensional Medical Image Fusion with Deformable Cross-Attention

    Multimodal medical image fusion plays an instrumental role in several areas of medical image processing, particularly in disease recognition and tumor detection. Traditional fusion methods tend to process each modality independently before combining the features and reconstructing the fused image. However, this approach often neglects the fundamental commonalities and disparities between multimodal information. Furthermore, the prevailing methods are largely confined to fusing two-dimensional (2D) medical image slices, which deprives the fused images of contextual supervision and, consequently, gives physicians less information than three-dimensional (3D) images would. In this study, we introduce an unsupervised feature mutual learning fusion network designed to address these limitations. Our approach incorporates a Deformable Cross Feature Blend (DCFB) module that helps the two modalities discern their respective similarities and differences. We applied our model to the fusion of 3D MRI and PET images from 660 patients in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Using the DCFB module, our network generates high-quality MRI-PET fusion images. Experimental results demonstrate that our method surpasses traditional 2D image fusion methods on performance metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). Importantly, the ability of our method to fuse 3D images increases the information available to physicians and researchers, marking a significant step forward in the field. The code will soon be available online.
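
    The abstract describes the DCFB module only at a high level. As a rough illustration of cross-modality feature blending between two 3D feature volumes (e.g. MRI and PET encoder outputs), here is a minimal PyTorch sketch; the class name, shapes, and the omission of the deformable sampling step are illustrative assumptions, not the authors' implementation.

    # Minimal sketch, not the authors' code: two modality feature volumes
    # attend to each other and are blended with a 1x1x1 convolution.
    import torch
    import torch.nn as nn

    class CrossFeatureBlend3D(nn.Module):
        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            self.norm_a = nn.LayerNorm(channels)
            self.norm_b = nn.LayerNorm(channels)
            self.attn_ab = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.attn_ba = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

        def forward(self, feat_a, feat_b):
            # feat_a, feat_b: (B, C, D, H, W) feature volumes from the two modalities.
            b, c, d, h, w = feat_a.shape
            tok_a = self.norm_a(feat_a.flatten(2).transpose(1, 2))  # (B, D*H*W, C)
            tok_b = self.norm_b(feat_b.flatten(2).transpose(1, 2))
            blend_a, _ = self.attn_ab(tok_a, tok_b, tok_b)  # modality A queries modality B
            blend_b, _ = self.attn_ba(tok_b, tok_a, tok_a)  # modality B queries modality A
            vol_a = blend_a.transpose(1, 2).reshape(b, c, d, h, w)
            vol_b = blend_b.transpose(1, 2).reshape(b, c, d, h, w)
            return self.fuse(torch.cat([vol_a, vol_b], dim=1))

    # Toy usage with small random feature volumes standing in for MRI and PET features.
    mri = torch.randn(1, 32, 8, 16, 16)
    pet = torch.randn(1, 32, 8, 16, 16)
    print(CrossFeatureBlend3D(channels=32)(mri, pet).shape)  # torch.Size([1, 32, 8, 16, 16])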

    Multi-focus image fusion based on non-negative sparse representation and patch-level consistency rectification

    Most existing sparse representation (SR) based fusion methods consider the local information of each image patch independently during fusion, so spatial artifacts are easily introduced into the fused image. A sliding-window technique is often employed by these methods to overcome this issue, but at the cost of high computational complexity. Alternatively, we propose a novel multi-focus image fusion method that takes full account of the strong correlations among spatially adjacent image patches with no need for a sliding window. To this end, a non-negative SR model with a local consistency constraint (CNNSR) on the representation coefficients is first constructed to encode each image patch. Then a patch-level consistency rectification strategy is presented to merge the input image patches, by which the spatial artifacts in the fused images are greatly reduced. In addition, a compact non-negative dictionary is constructed for the CNNSR model. Experimental results demonstrate that the proposed fusion method outperforms several state-of-the-art methods. Moreover, the proposed method is computationally efficient, which facilitates real-world applications.
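
    The abstract outlines the CNNSR model without implementation detail. As a basic illustration, the Python/NumPy sketch below encodes patches with a non-negative sparse coding solver (projected ISTA) and fuses two sources patch by patch by keeping the patch whose code has the larger activity. The dictionary, solver, and fusion rule are illustrative assumptions; the local consistency constraint and the patch-level rectification strategy of the paper are not reproduced here.

    # Minimal sketch, not the authors' CNNSR model.
    import numpy as np

    def nonneg_sparse_code(D, y, lam=0.1, n_iter=200):
        """Minimize 0.5*||D x - y||^2 + lam*||x||_1 subject to x >= 0 (projected ISTA)."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1 / Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ x - y)
            x = np.maximum(0.0, x - step * (grad + lam))  # gradient step + non-negative soft-threshold
        return x

    def fuse_patches(patches_a, patches_b, D, lam=0.1):
        """Keep, for each patch position, the source patch whose code has higher l1 activity."""
        fused = np.empty_like(patches_a)
        for i, (pa, pb) in enumerate(zip(patches_a, patches_b)):
            act_a = nonneg_sparse_code(D, pa, lam).sum()
            act_b = nonneg_sparse_code(D, pb, lam).sum()
            fused[i] = pa if act_a >= act_b else pb
        return fused

    # Toy usage: a random non-negative dictionary for 8x8 patches and two patch sets.
    rng = np.random.default_rng(0)
    D = np.abs(rng.standard_normal((64, 128)))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    a = np.abs(rng.standard_normal((10, 64)))
    b = np.abs(rng.standard_normal((10, 64)))
    print(fuse_patches(a, b, D).shape)  # (10, 64)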
