
    Novel Approaches for Regional Multifocus Image Fusion

    Image fusion is a research topic about combining information from multiple images into one fused image. Although a large number of methods have been proposed, obtaining sharper, higher-quality fused images remains challenging. This chapter addresses the multifocus image fusion problem of extending the depth of field by fusing several images of the same scene taken with different focus settings. Existing research on multifocus image fusion tends to emphasize pixel-level fusion using transform-domain methods; region-level fusion methods, especially those using new coding techniques, are still limited. In this chapter, we provide an overview of regional multifocus image fusion, and two different orthogonal matching pursuit-based sparse representation methods are adopted for regional multifocus image fusion. Experimental results show that regional image fusion using sparse representation can achieve comparable or even better performance on multifocus image fusion problems.
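A minimal sketch of the core ingredient the abstract names, orthogonal matching pursuit (OMP): greedily pick dictionary atoms, re-fit by least squares, and select between two candidate patches by the activity (l1 norm) of their sparse codes. The function names, the identity dictionary in the test, and the activity-level fusion rule are illustrative assumptions, not the chapter's actual method.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with k atoms of D."""
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Re-fit the coefficients on the chosen support by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D[:, support] @ sol
    return coeffs

def fuse_patches(D, patch_a, patch_b, k=4):
    """Keep the patch whose sparse code has more activity (larger l1 norm),
    a common proxy for being in focus."""
    ca = omp(D, patch_a.ravel(), k)
    cb = omp(D, patch_b.ravel(), k)
    return patch_a if np.abs(ca).sum() >= np.abs(cb).sum() else patch_b
```

A full regional method would run this per region (rather than per fixed patch) with a learned dictionary; the sketch only shows the selection mechanism.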

    Texture Based Multifocus Image Fusion Using Interval Type 2 Fuzzy Logic

    Multifocus image fusion is the process of fusing two or more images in which the region of focus differs from image to image. The objective is to obtain one image that contains the clear, in-focus regions of each source image. Extracting the focused region in each image is a challenging task, and various techniques for it are available in the literature. Texture is one feature that discriminates between focused and out-of-focus regions. Our approach uses texture-based image fusion in combination with interval type 2 fuzzy logic and discrete wavelet transforms. The Gray Level Co-occurrence Matrix (GLCM) method is used to extract texture, and type 2 Sugeno fuzzy logic is used to combine the images. The fused image is compared with the reference image when one is available; it is also compared with the original images, and the resulting performance metrics, computed and presented in this paper, are better than those of other existing techniques. Keywords: Discrete Wavelet Transform, Gray Level Co-occurrence Matrix, Image Fusion, Multifocus Image, Type 2 Fuzzy Logic, Mamdani FLS, Sugeno FLS
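The texture cue the abstract relies on can be illustrated with a tiny GLCM: count co-occurring gray-level pairs at a fixed offset, normalise, and derive a contrast feature that is high in textured (focused) regions and low in smooth (defocused) ones. This is a generic GLCM sketch, not the paper's pipeline; the fuzzy-logic combination step is omitted.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for offset (dy, dx).
    img: integer array with values in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """GLCM contrast: sum of (i - j)^2 weighted by pair probability."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())
```

In a fusion setting, the per-block contrast values of the source images would feed the fuzzy inference system as focus evidence.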

    Multifocus image fusion algorithm using iterative segmentation based on edge information and adaptive threshold

    This paper presents an algorithm for multifocus image fusion in the spatial domain based on iterative segmentation and the edge information of the source images. The basic idea is to divide the images into smaller blocks, gather edge information for each block, and select the regions with greater edge information to construct the resultant 'all-in-focus' fused image. To further improve fusion quality, an iterative approach is proposed: each iteration selects the in-focus regions with the help of an adaptive threshold, leaving the remaining regions for analysis in the next iteration. The technique is further enhanced by making the number and size of blocks adaptive in each iteration. Pixels that remain unselected after the last iteration are taken from the source images by comparing the edge activity in the corresponding segments. The performance of the method has been extensively tested on several pairs of multifocus images and compared quantitatively with existing methods. Experimental results show that the proposed method improves fusion quality, reducing loss of information by almost 50% and noise by more than 99%.
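One pass of the block-wise selection step might look like the following: score each block by a simple edge-energy measure and keep the clearly sharper block, with a ratio threshold standing in for the paper's adaptive threshold. Averaging the ambiguous blocks here is a placeholder for the paper's next-iteration analysis; names and the exact edge measure are assumptions.

```python
import numpy as np

def edge_energy(block):
    """Sum of absolute horizontal and vertical intensity differences."""
    b = block.astype(float)
    return np.abs(np.diff(b, axis=1)).sum() + np.abs(np.diff(b, axis=0)).sum()

def fuse_blocks(img_a, img_b, bs, ratio=1.2):
    """One selection pass over bs-by-bs blocks of two registered images."""
    fused = np.empty_like(img_a, dtype=float)
    for y in range(0, img_a.shape[0], bs):
        for x in range(0, img_a.shape[1], bs):
            a = img_a[y:y+bs, x:x+bs]
            b = img_b[y:y+bs, x:x+bs]
            ea, eb = edge_energy(a), edge_energy(b)
            if ea > ratio * eb:          # a clearly sharper
                fused[y:y+bs, x:x+bs] = a
            elif eb > ratio * ea:        # b clearly sharper
                fused[y:y+bs, x:x+bs] = b
            else:                        # ambiguous: defer (here: average)
                fused[y:y+bs, x:x+bs] = (a.astype(float) + b) / 2
    return fused
```

The full algorithm would re-run this with smaller, adaptively sized blocks on the deferred regions instead of averaging them.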

    Blending of Images Using Discrete Wavelet Transform

    The project presents multifocus image fusion using the discrete wavelet transform with local directional pattern (LDP) and spatial frequency analysis. Multifocus image fusion in wireless visual sensor networks is a process of blending two or more images to get a new one that describes the scene more accurately than any individual source image. The proposed model uses the multi-scale decomposition of the discrete wavelet transform to fuse the images in the frequency domain, decomposing an image into two kinds of components, structural and textural. Because the transform used here does not down-sample the image, edge and texture details are preserved when the image is reconstructed from the frequency domain, reducing the blocking and ringing artifacts that occur with DCT-based and decimated DWT approaches. The low-frequency sub-band coefficients are fused by selecting the coefficients with maximum spatial frequency, which indicates the overall activity level of an image. The high-frequency sub-band coefficients are fused by selecting the coefficients with maximum LDP code value; LDP computes the edge response values in all eight directions at each pixel position and generates a code from the relative strength of their magnitudes. Finally, the two fused frequency sub-bands are inverse transformed to reconstruct the fused image. System performance is evaluated using parameters such as peak signal-to-noise ratio, correlation, and entropy.
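The spatial-frequency measure used for the low-frequency sub-band rule is standard and easy to sketch: the root of the mean squared first differences along rows and columns. This generic version approximates the textbook normalisation (it averages over the difference array rather than the full image size) and omits the LDP half of the scheme.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), where RF/CF are the mean squared first
    differences along rows (horizontal) and columns (vertical)."""
    b = block.astype(float)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)  # row frequency
    cf2 = np.mean(np.diff(b, axis=0) ** 2)  # column frequency
    return float(np.sqrt(rf2 + cf2))
```

In the fusion rule, the low-frequency coefficient block with the larger SF would be carried into the fused decomposition.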

    A Novel Region based Image Fusion Method using Highboost Filtering and Fuzzy Logic

    This paper proposes a novel region-based image fusion scheme built on the high-boost filtering concept using the discrete wavelet transform. In the recent literature, region-based image fusion methods show better performance than pixel-based ones. The proposed method uses the high-boost filtering concept to obtain an accurate segmentation via the discrete wavelet transform; the regions extracted from the registered input source images are then fused under different fusion rules. A fusion rule based on spatial frequency and standard deviation is also proposed to fuse multimodality images. The different fusion rules are applied to various categories of input source images to generate the fused image. The proposed method is applied to registered multifocus and multimodality images, and the results are compared using standard reference-based and non-reference-based image fusion metrics. Simulation results show that the proposed algorithm is consistent and preserves more information than earlier reported pixel-based and region-based methods.
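High-boost filtering itself is the classical unsharp-masking variant: add back an amplified copy of the detail (original minus a smoothed version). A 3x3 box blur stands in here for the paper's unspecified smoothing filter, and the boost factor `k` is an arbitrary choice; the wavelet-domain segmentation built on top of this is not shown.

```python
import numpy as np

def high_boost(img, k=1.5):
    """High-boost filtering: img + k * (img - blur(img)).
    Uses a 3x3 box blur with edge-replicated borders as the smoother."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode='edge')
    # Sum the nine shifted views of the padded image -> 3x3 box blur.
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + k * (img - blur)
```

Flat areas pass through unchanged while intensity transitions are exaggerated, which is what makes the filtered image a useful basis for segmenting regions by activity.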