
    Region based Multimodality Image Fusion Method

    This paper proposes a novel region-based image fusion scheme built on the high-boost filtering concept using the discrete wavelet transform. In the recent literature, region-based image fusion methods show better performance than pixel-based methods. The graph-based normalized cuts algorithm is used for image segmentation. The novelty of the proposed method is its use of high-boost filtering in the discrete wavelet transform domain to obtain an accurate segmentation. The regions extracted from the registered input source images are then fused under different fusion rules. A new MMS fusion rule is also proposed for fusing multimodality images. The different fusion rules are applied to various categories of input source images to generate the fused image. The proposed method is applied to a large number of registered multifocus and multimodality images of various categories, and the results are compared using standard reference-based and non-reference-based image fusion metrics. Simulation results show that the proposed algorithm is consistent and preserves more information than previously reported pixel-based and region-based methods.
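    A minimal sketch of the high-boost filtering step in the wavelet domain may make the idea concrete. It is not the authors' implementation: the boost factor, the db2 wavelet, and the file name are assumptions, and the normalized-cuts segmentation that follows this step is omitted.

```python
# Hypothetical sketch: high-boost filtering via amplified DWT detail
# sub-bands, as a pre-processing step before segmentation.
import numpy as np
import pywt
import cv2

def high_boost_dwt(image, boost=1.5, wavelet="db2"):
    """Sharpen an image by scaling up its DWT detail coefficients.

    Classic high-boost filtering adds a weighted high-frequency component
    back to the image; here the detail sub-bands play that role. The
    boost factor and wavelet are assumptions, not the paper's values.
    """
    approx, (horiz, vert, diag) = pywt.dwt2(image.astype(np.float64), wavelet)
    boosted = (approx, (boost * horiz, boost * vert, boost * diag))
    out = pywt.idwt2(boosted, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage on one registered source image (hypothetical file name):
src = cv2.imread("source_a.png", cv2.IMREAD_GRAYSCALE)
sharpened = high_boost_dwt(src)  # sharper edges aid the segmentation step
```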

    A Novel Region based Image Fusion Method using Highboost Filtering and Fuzzy Logic

    This paper proposes a novel region-based image fusion scheme built on the high-boost filtering concept using the discrete wavelet transform. In the recent literature, region-based image fusion methods show better performance than pixel-based methods. The novelty of the proposed method is its use of high-boost filtering in the discrete wavelet transform domain to obtain an accurate segmentation. The regions extracted from the registered input source images are then fused under different fusion rules. A fusion rule based on spatial frequency and standard deviation is also proposed for fusing multimodality images. The different fusion rules are applied to various categories of input source images to generate the fused image. The proposed method is applied to registered multifocus and multimodality images, and the results are compared using standard reference-based and non-reference-based image fusion metrics. Simulation results show that the proposed algorithm is consistent and preserves more information than previously reported pixel-based and region-based methods.
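    Spatial frequency and standard deviation are standard clarity measures, so the flavor of such a rule can be sketched as below. The block-wise selection, the block size, and the equal weighting of the two measures are assumptions; the paper applies its rule to segmented regions rather than fixed blocks.

```python
# Sketch of a clarity-driven fusion rule combining spatial frequency and
# standard deviation. Block-wise winner-take-all selection is an assumption.
import numpy as np

def spatial_frequency(block):
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def activity(block):
    # Equal weighting of the two clarity measures is an assumption.
    return spatial_frequency(block) + block.std()

def fuse_blockwise(img_a, img_b, size=16):
    # Assumes registered, equally sized inputs whose dimensions are
    # multiples of the block size.
    fused = np.empty_like(img_a)
    for i in range(0, img_a.shape[0], size):
        for j in range(0, img_a.shape[1], size):
            a = img_a[i:i + size, j:j + size]
            b = img_b[i:i + size, j:j + size]
            fused[i:i + size, j:j + size] = a if activity(a) >= activity(b) else b
    return fused
```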

    Novel Approaches for Regional Multifocus Image Fusion

    Image fusion is concerned with combining information from multiple images into one fused image. Although a large number of methods have been proposed, many challenges remain in obtaining clearer resulting images of higher quality. This chapter addresses the multifocus image fusion problem of extending the depth of field by fusing several images of the same scene taken with different focus settings. Existing research in multifocus image fusion tends to emphasize pixel-level fusion using transform-domain methods; region-level methods, especially those using new coding techniques, are still limited. In this chapter, we provide an overview of regional multifocus image fusion, and two different orthogonal matching pursuit-based sparse representation methods are adopted for regional multifocus image fusion. Experimental results show that regional image fusion using sparse representation can achieve comparable or even better performance on multifocus image fusion problems.
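    To illustrate the orthogonal-matching-pursuit side of the approach, here is a loose per-patch sketch using scikit-learn's OMP solver, an overcomplete 2-D DCT dictionary, and a max-L1 selection rule. These are common choices in sparse-representation fusion, assumed here rather than taken from the chapter's regional methods.

```python
# Hypothetical per-patch sparse-representation fusion via OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def dct_dictionary(n=8, k=12):
    # Overcomplete 1-D DCT basis (n x k), lifted to 2-D patches via a
    # Kronecker product; columns normalized so OMP behaves well.
    d = np.cos(np.outer(np.arange(n), np.arange(k)) * np.pi / k)
    d /= np.linalg.norm(d, axis=0)
    return np.kron(d, d)  # shape (n*n, k*k)

def fuse_patch(patch_a, patch_b, dictionary, n_nonzero=6):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    code_a = omp.fit(dictionary, patch_a.ravel()).coef_
    code_b = omp.fit(dictionary, patch_b.ravel()).coef_
    # Max-L1 rule: the patch with the more active sparse code is taken
    # to be better focused (an assumption, not the chapter's exact rule).
    code = code_a if np.abs(code_a).sum() >= np.abs(code_b).sum() else code_b
    return (dictionary @ code).reshape(patch_a.shape)
```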

    Multiexposure and multifocus image fusion with multidimensional camera shake compensation

    Multiexposure image fusion algorithms are used for enhancing the perceptual quality of an image captured by sensors of limited dynamic range. This is achieved by rendering a single scene based on multiple images captured at different exposure times. Similarly, multifocus image fusion is used when the limited depth of focus at a selected focus setting of a camera results in parts of an image being out of focus. The solution adopted is to fuse together a number of multifocus images to create an image that is focused throughout. A single algorithm that can perform both multifocus and multiexposure image fusion is proposed. This algorithm is a new approach in which a set of unregistered multiexposure or multifocus images is first registered before being fused, to compensate for the possible presence of camera shake. The registration of images is done by identifying matching key-points in the constituent images using the scale-invariant feature transform (SIFT). The random sample consensus algorithm is used to identify inlier SIFT key-points, removing outliers that can cause errors in the registration process. Finally, the coherent point drift algorithm is used to register the images, preparing them to be fused in the subsequent fusion stage. For the fusion of images, a new approach based on an improved version of a wavelet-based contourlet transform is used. The experimental results and the detailed analysis presented prove that the proposed algorithm is capable of producing high-dynamic-range (HDR) or multifocus images by registering and fusing a set of multiexposure or multifocus images taken in the presence of camera shake. Further, a comparison of the performance of the proposed algorithm with a number of state-of-the-art algorithms and commercial software packages is provided. In particular, our literature review has revealed that this is one of the first attempts in which the compensation of camera shake, a very likely practical problem in HDR image capture using handheld devices, has been addressed as part of a multifocus and multiexposure image enhancement system. © 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
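    The registration front end described here (SIFT matching pruned by RANSAC) can be sketched with OpenCV in a few lines. The homography warp at the end is a stand-in for the paper's coherent point drift step, and the file names are placeholders.

```python
# Sketch of the SIFT + RANSAC registration front end using OpenCV. The
# final homography warp substitutes for the coherent point drift step.
import cv2
import numpy as np

ref = cv2.imread("exposure_0.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
mov = cv2.imread("exposure_1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_mov, des_mov = sift.detectAndCompute(mov, None)

# Brute-force descriptor matching; cross-checking discards asymmetric
# matches early.
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_mov, des_ref)
src = np.float32([kp_mov[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC keeps only inlier correspondences, as in the paper.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
```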

    Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs

    Multifocus image fusion is an effective way to overcome the limitations of optical lenses. Many existing methods obtain fused results by generating decision maps. However, such methods often assume that the focused areas of the two source images are complementary, making it impossible to fuse multiple images simultaneously. Additionally, existing methods ignore the impact of hard pixels on fusion performance, limiting the visual quality of the fused image. To address these issues, a model combining generation and recombination, termed GRFusion, is proposed. In GRFusion, focus property detection for each source image is implemented independently, enabling simultaneous fusion of multiple source images and avoiding the information loss caused by alternating fusion. This frees GRFusion from any constraint on the number of inputs. To distinguish hard pixels in the source images, we identify them from inconsistencies among the focus-area detection results of the source images. Furthermore, a multi-directional gradient embedding method for generating full-focus images is proposed. Subsequently, a hard-pixel-guided recombination mechanism for constructing the fused result is devised, effectively integrating the complementary advantages of feature-reconstruction-based and focused-pixel-recombination-based methods. Extensive experimental results demonstrate the effectiveness and superiority of the proposed method. The source code will be released at https://github.com/xxx/xxx
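    A toy sketch of the two ideas the abstract leans on, per-source focus detection that is independent of the other inputs, and a hard-pixel mask where those detections disagree, is given below. The Laplacian-energy detector, the 5% dominance threshold, and the mean-blend fallback are placeholders for GRFusion's learned components.

```python
# Toy illustration of independent focus detection with a hard-pixel mask.
# The detector and thresholds are placeholders, not GRFusion's networks.
import numpy as np
import cv2

def focus_map(img, ksize=7):
    # Local Laplacian energy as a simple focus measure.
    lap = cv2.Laplacian(img, cv2.CV_32F)
    return cv2.boxFilter(np.abs(lap), -1, (ksize, ksize))

def fuse(images):
    # Any number of registered grayscale inputs; detection per source is
    # independent, so no pairwise complementarity is assumed.
    imgs = [im.astype(np.float32) for im in images]
    maps = np.stack([focus_map(im) for im in imgs])
    winner = maps.argmax(axis=0)              # best-focused source per pixel
    ranked = np.sort(maps, axis=0)
    hard = (ranked[-1] - ranked[-2]) < 0.05 * ranked[-1]  # detectors disagree
    fused = np.choose(winner, imgs)
    fused[hard] = np.mean(imgs, axis=0)[hard]  # fallback blend on hard pixels
    return fused
```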

    An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood

    In this paper, we present a novel fusion rule that can efficiently fuse multifocus images in the wavelet domain by taking a weighted average of pixels. The weights are decided adaptively from the statistical properties of the neighborhood. The main idea is that the eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of edges in the block, and thus makes a good choice of weight for the pixel, giving more weight to pixels with sharper neighborhoods. The performance of the proposed method has been extensively tested on several pairs of multifocus images and compared quantitatively with various existing methods using well-known metrics, including the Petrovic and Xydeas image fusion metric. Experimental results show that performance evaluation based on entropy, gradient, contrast, or deviation, the criteria widely used for fusion analysis, may not be enough. This work demonstrates that in some cases these evaluation criteria are not consistent with the ground truth. It also demonstrates that the Petrovic and Xydeas image fusion metric is a more appropriate criterion, as it correlates with the ground truth as well as with visual quality in all the tested fused images. The proposed fusion rule significantly improves contrast information while preserving edge information. The major achievement of the work is that it significantly increases the quality of the fused image, both visually and in terms of quantitative parameters, especially sharpness, with minimal fusion artifacts.
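    The core weighting idea, using the largest eigenvalue of a neighborhood's covariance matrix as a proxy for edge strength, can be sketched as follows. The window size is an assumption, and the paper applies the rule to wavelet-domain coefficients rather than raw pixels.

```python
# Sketch of the eigenvalue-based adaptive weight (window size assumed;
# the paper works on wavelet-domain coefficients).
import numpy as np

def eigen_weight(img, i, j, w=3):
    # Unbiased covariance of the (2w+1)-wide neighborhood; its largest
    # eigenvalue grows with the strength of edges in the block.
    block = img[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
    cov = np.cov(block.astype(np.float64))  # unbiased by default
    return np.linalg.eigvalsh(cov).max()

def fuse_pixel(img_a, img_b, i, j):
    # Weighted average favoring the pixel with the sharper neighborhood.
    wa = eigen_weight(img_a, i, j)
    wb = eigen_weight(img_b, i, j)
    return (wa * img_a[i, j] + wb * img_b[i, j]) / (wa + wb + 1e-12)
```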