139 research outputs found

    Multifocus Images Fusion Based On Homogenity and Edges Measures

    Image fusion is one of the most important techniques in digital image processing: it involves developing software that integrates multiple data sets of the same scene, and it is one of the newer approaches to digital-image problems, producing high-quality images that carry more information for interpretation, classification, segmentation, compression, and similar purposes. This research addresses problems faced by digital images such as multi-focus images, simulated by capturing images with a camera, and fuses the various digital images using previously adopted fusion techniques: arithmetic techniques (BT, CNT and MLT), statistical techniques (LMM, RVS and WT), and spatial techniques (HPFA, HFA and HFM). These techniques were developed and implemented as programs in MATLAB (2010b). In this work, a homogeneity criterion is proposed for evaluating the quality of fused digital images, especially their fine details. It is a correlation-based criterion that estimates homogeneity in different regions of the image by taking blocks of different sizes from different regions and correlating each block with the same block shifted by one pixel. Traditional statistical criteria (mean, standard deviation, signal-to-noise ratio, mutual information, and spatial frequency) were also computed and compared with the proposed criterion. The results show that the evaluation was effective because it takes the quality of homogeneous regions into account.
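    As a rough illustration of the evaluation criteria described above, the sketch below computes the spatial-frequency measure and a block-shifted correlation score in Python (the original work was implemented in MATLAB). The block size, step, and one-pixel shift are assumptions for illustration, not the paper's exact definitions.

    import numpy as np

    def spatial_frequency(img):
        # Spatial frequency: RMS of row and column first differences, combined.
        img = np.asarray(img, dtype=float)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
        return np.sqrt(rf ** 2 + cf ** 2)

    def block_homogeneity(img, block=32, step=32):
        # Correlate each block with the same block shifted by one pixel;
        # high correlation suggests a homogeneous region in the fused image.
        img = np.asarray(img, dtype=float)
        scores = []
        h, w = img.shape
        for r in range(0, h - block - 1, step):
            for c in range(0, w - block - 1, step):
                a = img[r:r + block, c:c + block].ravel()
                b = img[r + 1:r + 1 + block, c + 1:c + 1 + block].ravel()
                if a.std() > 0 and b.std() > 0:
                    scores.append(np.corrcoef(a, b)[0, 1])
        return float(np.mean(scores)) if scores else 1.0

    # Traditional criteria for a fused image F can then be reported alongside the
    # proposed score, e.g. (F.mean(), F.std(), spatial_frequency(F), block_homogeneity(F)).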

    New applications of Spectral Edge image fusion

    In this paper, we present new applications of the Spectral Edge image fusion method. The Spectral Edge algorithm combines details from any number of multispectral input images with natural color information from a visible-spectrum image. It is a derivative-based technique: the output fused image has gradients that are an ideal combination of those of the multispectral input images and the input visible color image, producing both maximum detail and natural colors. We present two new applications of Spectral Edge image fusion. First, we fuse RGB-NIR information from a sensor with a modified Bayer pattern, which captures visible and near-infrared image information on a single CCD. We also present an example of RGB-thermal image fusion, using a thermal camera attached to a smartphone, which captures both visible and low-resolution thermal images. These new results may be useful for computational photography and surveillance applications. © 2016 Society of Photo-Optical Instrumentation Engineers (SPIE).
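    Spectral Edge itself derives an ideal gradient field from all input channels; the sketch below only illustrates the general derivative-based idea on grayscale inputs, keeping the strongest per-pixel gradient among the sources and reintegrating it with a simple Jacobi solver for the Poisson equation. This is an assumed, simplified stand-in rather than the authors' algorithm, and the function name and iteration count are made up for the example.

    import numpy as np

    def fuse_gradient_domain(images, n_iter=400):
        # images: list of equally sized 2-D float arrays (e.g. luminance and NIR).
        gx = np.stack([np.gradient(im, axis=1) for im in images])
        gy = np.stack([np.gradient(im, axis=0) for im in images])
        idx = np.argmax(np.hypot(gx, gy), axis=0)        # strongest gradient per pixel
        rows, cols = np.indices(idx.shape)
        fx, fy = gx[idx, rows, cols], gy[idx, rows, cols]
        div = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
        u = np.mean(images, axis=0)                      # initial guess
        for _ in range(n_iter):                          # Jacobi iterations for lap(u) = div
            p = np.pad(u, 1, mode='edge')
            u = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - div) / 4.0
        return u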

    Infrared and Visible Image Fusion using a Deep Learning Framework

    In recent years, deep learning has become a very active research tool used in many image-processing fields. In this paper, we propose an effective image fusion method that uses a deep learning framework to generate a single image containing all the features of the infrared and visible source images. First, the source images are decomposed into base parts and detail content. The base parts are then fused by weighted averaging. For the detail content, a deep learning network extracts multi-layer features; from these features, an l1-norm and weighted-average strategy generates several candidates of the fused detail content, and a max-selection strategy yields the final fused detail content. Finally, the fused image is reconstructed by combining the fused base part and the fused detail content. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in both objective assessment and visual quality. The code of our fusion method is available at https://github.com/hli1221/imagefusion_deeplearning. Comment: 6 pages, 6 figures, 2 tables, ICPR 2018 (accepted).
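    The base/detail pipeline described above can be sketched without the network: the snippet below decomposes each source with a low-pass filter, averages the base parts, and weights the detail content by a local l1 activity map. The deep-feature extraction step is deliberately replaced by this pixel-level activity measure, so this is an assumed simplification for illustration, not the authors' released code; the function name, filter sizes, and weighting scheme are choices made for the example.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_ir_vis(ir, vis, base_size=31):
        ir, vis = np.asarray(ir, dtype=float), np.asarray(vis, dtype=float)
        # 1. decompose each source into a base part (low-pass) and detail content
        base_ir, base_vis = uniform_filter(ir, base_size), uniform_filter(vis, base_size)
        det_ir, det_vis = ir - base_ir, vis - base_vis
        # 2. fuse the base parts by (equal) weighted averaging
        fused_base = 0.5 * base_ir + 0.5 * base_vis
        # 3. weight the detail content by local l1 activity (stand-in for deep features)
        act_ir = uniform_filter(np.abs(det_ir), 3)
        act_vis = uniform_filter(np.abs(det_vis), 3)
        w_ir = act_ir / (act_ir + act_vis + 1e-12)
        fused_detail = w_ir * det_ir + (1.0 - w_ir) * det_vis
        # 4. reconstruct the fused image
        return fused_base + fused_detail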