    FUSION OF LANDSAT-8 THERMAL INFRARED AND VISIBLE BANDS WITH MULTI-RESOLUTION ANALYSIS CONTOURLET METHODS

    The land surface temperature image is an important product in many lithosphere and atmosphere applications. This image is retrieved from the thermal infrared bands, which have lower spatial resolution than the visible and near-infrared data; therefore, the details of temperature variation cannot be clearly identified in land surface temperature images. The aim of this study is to enhance the spatial information of the thermal infrared bands. Image fusion is one of the efficient methods employed to enhance the spatial resolution of the thermal bands by fusing them with high-spatial-resolution visible bands. Multi-resolution analysis is an effective pixel-level image fusion approach. In this paper, we use the contourlet, non-subsampled contourlet and sharp frequency localization contourlet transforms in fusion because of their high directionality and anisotropy. The absolute average difference and RMSE values show that, with small distortion of the thermal content, the spatial information of the thermal infrared and land surface temperature images is enhanced.
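    As an illustration of the pixel-level multi-resolution fusion described above, the sketch below fuses a co-registered thermal band with a visible band. A wavelet decomposition from pywt stands in for the contourlet/NSCT transforms, and the keep-thermal-approximation / max-absolute-detail rules are illustrative assumptions rather than the authors' exact scheme.

```python
# Minimal sketch (assumed fusion rules; wavelet stand-in for contourlet transforms).
import numpy as np
import pywt

def mra_fuse(thermal, visible, wavelet="db2", levels=3):
    """Fuse two co-registered, equal-size images by multi-resolution analysis."""
    ct = pywt.wavedec2(thermal, wavelet, level=levels)
    cv = pywt.wavedec2(visible, wavelet, level=levels)

    # Approximation (low frequency): keep the thermal content to limit
    # distortion of the temperature signal.
    fused = [ct[0]]

    # Details (high frequency): take the larger-magnitude coefficient,
    # injecting spatial detail from the visible band.
    for dt, dv in zip(ct[1:], cv[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dt, dv)))
    return pywt.waverec2(fused, wavelet)

# Usage with synthetic data; real use would first resample the thermal band
# onto the visible band's grid (e.g. 100 m -> 30 m for Landsat-8).
thermal = np.random.rand(256, 256)
visible = np.random.rand(256, 256)
fused = mra_fuse(thermal, visible)
```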

    An improved approach for medical image fusion using sparse representation and Siamese convolutional neural network

    Multimodal image fusion is a contemporary branch of medical imaging that aims to increase the accuracy of clinical diagnosis of disease stage development. Fusing different image modalities is a viable medical imaging approach: it combines their best features to produce a composite image of higher quality than its inputs and can significantly improve medical diagnosis. Recently, sparse representation (SR) and Siamese convolutional neural network (SCNN) methods have been introduced independently for image fusion. However, some results from these approaches exhibit defects such as edge blur, reduced visibility, and blocking artifacts. To remedy these deficiencies, this paper introduces a blending approach based on a combination of SR and SCNN, which comprises three steps. Firstly, the entire source images are fed into the classical orthogonal matching pursuit (OMP), where the SR-fused image is obtained using a max-rule that aims to improve pixel localization. Secondly, a novel scheme of SCNN-based K-SVD dictionary learning is applied to each source image; it shows good non-linear behavior, increases the sparsity of the fused output, and better extracts and transfers image details to the fused image. Lastly, the fusion rule step takes a linear combination of the outputs of steps 1 and 2 to obtain the final fused image. The results show that the proposed method is advantageous compared with previous methods, notably by suppressing the artifacts produced by the traditional SR and SCNN models.
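    A minimal sketch of the SR branch described above: each patch pair is coded with OMP over a fixed overcomplete DCT dictionary and the more active code is kept. The patch size, dictionary size and max-L1 activity rule are illustrative assumptions, and the SCNN/K-SVD branch and final linear blend are only indicated in the usage note.

```python
# Sketch of sparse-representation fusion for one patch pair (assumed rules).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dct_dictionary(patch=8, atoms=16):
    """Overcomplete separable 2-D DCT dictionary, shape (patch*patch, atoms*atoms)."""
    D = np.zeros((patch, atoms))
    for k in range(atoms):
        v = np.cos(np.arange(patch) * k * np.pi / atoms)
        if k > 0:
            v -= v.mean()
        D[:, k] = v / np.linalg.norm(v)
    return np.kron(D, D)

def sr_fuse_patch(p1, p2, D, n_nonzero=8):
    """Code each flattened patch with OMP and keep the code with larger L1 activity."""
    a1 = orthogonal_mp(D, p1, n_nonzero_coefs=n_nonzero)
    a2 = orthogonal_mp(D, p2, n_nonzero_coefs=n_nonzero)
    a = a1 if np.abs(a1).sum() >= np.abs(a2).sum() else a2   # max-activity rule
    return D @ a

# Toy usage on one 8x8 patch pair; the full pipeline would slide over all patches,
# compute the SCNN-fused image separately, and blend: fused = w * sr + (1 - w) * scnn.
D = dct_dictionary()
p1, p2 = np.random.rand(64), np.random.rand(64)
fused_patch = sr_fuse_patch(p1, p2, D).reshape(8, 8)
```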

    CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion

    Infrared and visible image fusion aims to provide an informative image by combining complementary information from different sensors. Existing learning-based fusion approaches construct various loss functions to preserve complementary features from both modalities while neglecting the inter-relationship between the two modalities, which leads to redundant or even invalid information in the fusion results. To alleviate these issues, we propose a coupled contrastive learning network, dubbed CoCoNet, that realizes infrared and visible image fusion in an end-to-end manner. Concretely, to simultaneously retain typical features from both modalities and remove unwanted information from the fused result, we develop a coupled contrastive constraint in our loss function. In a fused image, the foreground target / background detail part is pulled close to the infrared / visible source and pushed far away from the visible / infrared source in the representation space. We further exploit image characteristics to provide data-sensitive weights, which allows our loss function to build a more reliable relationship with the source images. Furthermore, to learn rich hierarchical feature representations and comprehensively transfer features during fusion, a multi-level attention module is established. In addition, we apply the proposed CoCoNet to medical image fusion of different types, e.g., magnetic resonance and positron emission tomography images, and magnetic resonance and single photon emission computed tomography images. Extensive experiments demonstrate that our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation, especially in preserving prominent targets and recovering vital textural details.
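    The coupled contrastive constraint can be sketched as below, assuming a shared feature extractor has already produced foreground/background features of the fused image and features of both sources. The L1 distance ratio form, the function names, and the omission of the data-sensitive weights are simplifications for illustration, not the paper's exact loss.

```python
# Hypothetical sketch of a coupled contrastive loss term (assumed form).
import torch
import torch.nn.functional as F

def contrastive_term(anchor, positive, negative, eps=1e-6):
    """Ratio of anchor-positive to anchor-negative feature distance (smaller is better)."""
    d_pos = F.l1_loss(anchor, positive)
    d_neg = F.l1_loss(anchor, negative)
    return d_pos / (d_neg + eps)

def coupled_contrastive_loss(feat_fused_fg, feat_fused_bg, feat_ir, feat_vis):
    # Foreground targets are pulled toward the infrared features and pushed from
    # the visible ones; background details the other way around.
    return (contrastive_term(feat_fused_fg, feat_ir, feat_vis) +
            contrastive_term(feat_fused_bg, feat_vis, feat_ir))

# Toy usage with random "features"; in practice these come from a shared encoder
# applied to the fused, infrared and visible images (or their masked regions).
f = torch.rand(1, 64, 32, 32)
loss = coupled_contrastive_loss(f, f.clone(), torch.rand_like(f), torch.rand_like(f))
```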

    A multimodal fusion method for Alzheimer’s disease based on DCT convolutional sparse representation

    Introduction: The medical information contained in magnetic resonance imaging (MRI) and positron emission tomography (PET) has driven the development of intelligent diagnosis of Alzheimer's disease (AD) and of multimodal medical imaging. To address the severe energy loss, low contrast of fused images and spatial inconsistency of traditional sparse-representation-based multimodal medical image fusion methods, a multimodal fusion algorithm for Alzheimer's disease based on discrete cosine transform (DCT) convolutional sparse representation is proposed. Methods: The algorithm first performs a multi-scale DCT decomposition of the source medical images and uses the sub-images at different scales as training images. Sparse coefficients are obtained by optimally solving the sub-dictionaries at each scale with the alternating direction method of multipliers (ADMM). The coefficients of the high-frequency and low-frequency sub-images are then fused using an improved L1-norm rule combined with an improved spatial frequency, the novel sum-modified SF (NMSF), and an inverse DCT yields the final fused images. Results and discussion: Extensive experimental results show that the proposed method performs well in contrast enhancement and in retaining texture and contour information.
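    A simplified frequency-domain sketch of the idea above, using a single global 2-D DCT in place of the paper's multi-scale DCT and ADMM-based convolutional sparse coding; the low-frequency block size and the L1-weighted low-frequency rule are illustrative assumptions.

```python
# Hypothetical DCT-domain fusion sketch (assumed rules, single-scale stand-in).
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(mri, pet, low_frac=0.125):
    """Fuse two co-registered single-channel images in the DCT domain."""
    C1, C2 = dctn(mri, norm="ortho"), dctn(pet, norm="ortho")
    h, w = C1.shape
    lh, lw = int(h * low_frac), int(w * low_frac)

    # High-frequency coefficients: max-absolute rule to keep salient detail.
    fused = np.where(np.abs(C1) >= np.abs(C2), C1, C2)

    # Low-frequency block: weight each source by its L1 energy to preserve
    # overall intensity and contrast rather than picking a single winner.
    w1 = np.abs(C1[:lh, :lw]).sum()
    w2 = np.abs(C2[:lh, :lw]).sum()
    fused[:lh, :lw] = (w1 * C1[:lh, :lw] + w2 * C2[:lh, :lw]) / (w1 + w2)

    return idctn(fused, norm="ortho")

# Toy usage with synthetic, co-registered single-channel images:
mri, pet = np.random.rand(128, 128), np.random.rand(128, 128)
fused = dct_fuse(mri, pet)
```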

    Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain

    The nonsubsampled contourlet transform (NSCT) provides flexible multi-resolution, anisotropic and directional expansion for images. Compared with the original contourlet transform, it is shift-invariant and can overcome the pseudo-Gibbs phenomena around singularities. The Pulse Coupled Neural Network (PCNN) is a visual-cortex-inspired neural network characterized by the global coupling and pulse synchronization of neurons; it has proven suitable for image processing and has been successfully employed in image fusion. In this paper, NSCT is combined with PCNN for image fusion to make full use of the characteristics of both. The spatial frequency in the NSCT domain is used as input to motivate the PCNN, and the NSCT-domain coefficients with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed algorithm outperforms typical wavelet-based, contourlet-based, PCNN-based and contourlet-PCNN-based fusion algorithms in terms of objective criteria and visual appearance. Supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).
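    A minimal sketch of the selection rule described above: the local spatial frequency of each subband drives a simplified PCNN, and the coefficient whose neuron fires more often is kept. The NSCT decomposition itself is not implemented here, and the PCNN constants (beta, alpha, V) and window size are illustrative assumptions.

```python
# Hypothetical spatial-frequency-motivated PCNN fusion of one subband pair.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import convolve2d

def spatial_frequency(x, size=3):
    """Local spatial frequency map from row/column gradient energy in a window."""
    rf = uniform_filter(np.diff(x, axis=0, prepend=x[:1]) ** 2, size)
    cf = uniform_filter(np.diff(x, axis=1, prepend=x[:, :1]) ** 2, size)
    return np.sqrt(rf + cf)

def pcnn_firing(stimulus, iters=50, beta=0.2, alpha=0.7, V=20.0):
    """Count how often each neuron fires when driven by the stimulus map."""
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    Y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fires = np.zeros_like(stimulus)
    for _ in range(iters):
        L = convolve2d(Y, kernel, mode="same")        # linking from firing neighbours
        U = stimulus * (1.0 + beta * L)               # internal activity
        Y = (U > theta).astype(float)                 # pulse output
        theta = theta * np.exp(-alpha) + V * Y        # dynamic threshold
        fires += Y
    return fires

def fuse_subband(c1, c2):
    f1 = pcnn_firing(spatial_frequency(c1))
    f2 = pcnn_firing(spatial_frequency(c2))
    return np.where(f1 >= f2, c1, c2)                 # keep coefficient with more firings

# Toy usage on one pair of high-frequency subbands:
band_a, band_b = np.random.rand(64, 64), np.random.rand(64, 64)
fused_band = fuse_subband(band_a, band_b)
```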