    Infrared and visible image fusion using two-layer generative adversarial network

    Infrared (IR) images can distinguish targets from their backgrounds based on differences in thermal radiation, whereas visible images provide texture details with high spatial resolution. Fusing IR and visible images has many advantages and can be applied to tasks such as target detection and recognition. This paper proposes a two-layer generative adversarial network (GAN) to fuse these two types of images. In the first layer, the network generates fused images using two GANs: one takes the IR image as input with the visible image as ground truth, and the other takes the visible image as input with the IR image as ground truth. In the second layer, one of the two fused images generated in the first layer is passed to a GAN as input, with the other serving as ground truth, to generate the final fused image. We verify our method on the TNO and INO data sets, comparing eight objective evaluation metrics against ten other methods. The results demonstrate that our method outperforms the state of the art in preserving both texture details and thermal information.
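
    A minimal PyTorch sketch of the two-layer arrangement described above follows. The generator architecture, channel widths, and image sizes are illustrative assumptions rather than the authors' design, the choice of which first-layer output feeds the second layer is arbitrary, and the discriminators and adversarial training are omitted entirely.

        # A minimal sketch, not the authors' architecture: layer counts,
        # channel widths, and image sizes are illustrative assumptions,
        # and the discriminators / adversarial training are omitted.
        import torch
        import torch.nn as nn

        class TinyGenerator(nn.Module):
            """Stand-in generator mapping a 1-channel image to a 1-channel image."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
                )

            def forward(self, x):
                return self.net(x)

        # First layer: two GANs with swapped input / ground-truth roles.
        g_ir = TinyGenerator()    # input: IR image,      ground truth: visible image
        g_vis = TinyGenerator()   # input: visible image, ground truth: IR image

        # Second layer: one GAN fuses the two intermediate results.
        g_fuse = TinyGenerator()  # input: one intermediate, ground truth: the other

        ir = torch.rand(1, 1, 64, 64)    # dummy IR image
        vis = torch.rand(1, 1, 64, 64)   # dummy visible image

        f1 = g_ir(ir)     # first-layer fused image from the IR-driven GAN
        f2 = g_vis(vis)   # first-layer fused image from the visible-driven GAN

        # Feed one intermediate as input; the other would serve as the
        # adversarial ground truth during training (f1 chosen arbitrarily).
        fused = g_fuse(f1)
        print(fused.shape)   # torch.Size([1, 1, 64, 64])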

    Multi-focus Image Fusion with Sparse Feature Based Pulse Coupled Neural Network

    To better extract the focused regions and effectively improve the quality of the fused image, a novel multi-focus image fusion scheme with a sparse-feature-based pulse coupled neural network (PCNN) is proposed. The registered source images are decomposed into principal matrices and sparse matrices by robust principal component analysis (RPCA). The salient features of the sparse matrices construct the sparse feature space of the source images, and these sparse features are used to motivate the PCNN neurons. The focused regions of the source images are detected from the output of the PCNN and integrated to construct the final fused image. Experimental results show that the proposed scheme extracts the focused regions and improves fusion quality better than other existing fusion methods in both the spatial and transform domains.
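
    The sketch below illustrates the decision flow only, under stated assumptions: a local-background residual stands in for the RPCA sparse component, and a minimal unit-linking PCNN with made-up parameters (linking strength beta, threshold decay alpha_t, reset magnitude v_t) converts the sparse features into per-pixel firing counts that select the focused source.

        # A rough sketch under stated assumptions: the RPCA step is replaced
        # by a local-background residual, and the PCNN parameters are made up.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def sparse_feature(img, win=3):
            """Crude stand-in for the RPCA sparse component: deviation of each
            pixel from a smoothed (low-rank-like) background."""
            return np.abs(img - uniform_filter(img, size=win))

        def pcnn_fire_counts(stimulus, iters=50, alpha_t=0.2, v_t=20.0, beta=0.1):
            """Minimal unit-linking PCNN; returns how often each neuron fires."""
            theta = np.full_like(stimulus, v_t)   # dynamic threshold
            y = np.zeros_like(stimulus)           # firing output
            counts = np.zeros_like(stimulus)
            for _ in range(iters):
                link = uniform_filter(y, size=3)            # linking input from neighbours
                u = stimulus * (1.0 + beta * link)          # internal activity
                y = (u > theta).astype(float)
                counts += y
                theta = theta * np.exp(-alpha_t) + v_t * y  # decay, reset on firing
            return counts

        a = np.random.rand(128, 128)   # dummy registered source image A
        b = np.random.rand(128, 128)   # dummy registered source image B

        fa = pcnn_fire_counts(sparse_feature(a))
        fb = pcnn_fire_counts(sparse_feature(b))
        focused_in_a = fa >= fb                 # per-pixel focus decision map
        fused = np.where(focused_in_a, a, b)    # take the more focused source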

    Infrared and Visible Image Fusion using a Deep Learning Framework

    In recent years, deep learning has become a very active research direction applied in many image processing fields. In this paper, we propose an effective image fusion method that uses a deep learning framework to generate a single image containing all the features of the infrared and visible images. First, the source images are decomposed into base parts and detail content. The base parts are then fused by weighted averaging. For the detail content, we use a deep learning network to extract multi-layer features, from which an l_1-norm and a weighted-average strategy generate several candidates of the fused detail content. A max selection strategy over these candidates yields the final fused detail content. Finally, the fused image is reconstructed by combining the fused base part and detail content. The experimental results demonstrate that our proposed method achieves state-of-the-art performance in both objective assessment and visual quality. The code of our fusion method is available at https://github.com/hli1221/imagefusion_deeplearning. Comment: 6 pages, 6 figures, 2 tables, accepted at ICPR 2018.
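
    The following sketch mirrors the described pipeline on dummy data: a box filter splits each source into base and detail, the bases are fused by weighted averaging, and a small random conv stack (a stand-in for the paper's pretrained feature extractor, not its actual network) supplies multi-layer features whose channel-wise l_1-norms weight per-layer detail candidates, with element-wise max selection producing the final detail. All filter sizes and weights are illustrative assumptions.

        # A sketch on dummy data; the box-filter decomposition and the random
        # conv stack standing in for the pretrained network are assumptions.
        import numpy as np
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def decompose(img, k=31):
            """Split an image into a smooth base part and a detail residual."""
            t = torch.from_numpy(img).float()[None, None]
            base = F.avg_pool2d(t, k, stride=1, padding=k // 2)
            return base[0, 0].numpy(), (t - base)[0, 0].numpy()

        # Stand-in multi-layer feature extractor (random weights, two stages).
        stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(8, 8, 3, padding=1), nn.ReLU()),
        ])

        def multilayer_l1_maps(detail):
            """Channel-wise l_1-norm of the features at every stage."""
            x = torch.from_numpy(detail).float()[None, None]
            maps = []
            with torch.no_grad():
                for stage in stages:
                    x = stage(x)
                    maps.append(x.abs().sum(dim=1)[0].numpy())
            return maps

        ir = np.random.rand(128, 128)    # dummy infrared image
        vis = np.random.rand(128, 128)   # dummy visible image

        b1, d1 = decompose(ir)
        b2, d2 = decompose(vis)
        fused_base = 0.5 * b1 + 0.5 * b2           # weighted-average base fusion

        candidates = []
        for w1, w2 in zip(multilayer_l1_maps(d1), multilayer_l1_maps(d2)):
            s = w1 + w2 + 1e-12
            candidates.append((w1 / s) * d1 + (w2 / s) * d2)   # one candidate per layer
        fused_detail = np.maximum.reduce(candidates)           # max selection

        fused = fused_base + fused_detail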

    Infrared and Visible Image Fusion Based on Oversampled Graph Filter Banks

    Infrared image (IR) and visible image (VI) fusion merges complementary information from infrared and visible imaging sensors to provide an effective way to understand a scene. The graph filter bank-based graph wavelet transform combines the advantages of the classic wavelet filter bank with the graph representation of a signal. We therefore propose an IR and VI fusion method based on oversampled graph filter banks. Specifically, we treat the source images as signals on a regular graph and decompose them into multiscale representations with M-channel oversampled graph filter banks. The fusion rule for the low-frequency subband is constructed using the modified local coefficient of variation and the bilateral filter, while the fusion maps of the detail subbands are formed using standard-deviation-based local properties. Finally, the fused image is obtained by applying the inverse transform to the fused subband coefficients. Experimental results on benchmark images show the potential of the proposed method for image fusion applications.
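
    Implementing the M-channel oversampled graph filter bank itself is beyond a short sketch, so in the code below a plain low-pass/residual split stands in for the graph wavelet analysis and synthesis; the sketch covers only the two fusion rules, using the local coefficient of variation for the low-frequency subband (the paper's bilateral-filter refinement is omitted) and local standard deviation for the detail subband. All window sizes are illustrative assumptions.

        # A sketch under stated assumptions: a low-pass/residual split
        # replaces the oversampled graph filter bank, window sizes are
        # arbitrary, and the bilateral-filter refinement is omitted.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_stats(x, win=7):
            """Local mean and standard deviation over a square window."""
            mean = uniform_filter(x, win)
            var = uniform_filter(x * x, win) - mean * mean
            return mean, np.sqrt(np.maximum(var, 0.0))

        def coeff_of_variation(x, win=7, eps=1e-6):
            """Local coefficient of variation: std relative to local mean."""
            mean, std = local_stats(x, win)
            return std / (np.abs(mean) + eps)

        ir = np.random.rand(128, 128)    # dummy infrared image
        vis = np.random.rand(128, 128)   # dummy visible image

        # Stand-in analysis: low-frequency approximation plus detail residual.
        low_ir, low_vis = uniform_filter(ir, 9), uniform_filter(vis, 9)
        det_ir, det_vis = ir - low_ir, vis - low_vis

        # Low-frequency rule: keep the source with the larger local
        # coefficient of variation at each pixel.
        take_ir = coeff_of_variation(low_ir) >= coeff_of_variation(low_vis)
        fused_low = np.where(take_ir, low_ir, low_vis)

        # Detail rule: standard-deviation-based local activity.
        std_ir = local_stats(det_ir)[1]
        std_vis = local_stats(det_vis)[1]
        fused_det = np.where(std_ir >= std_vis, det_ir, det_vis)

        # Stand-in synthesis: recombine the fused subbands.
        fused = fused_low + fused_det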