
    Multi-focus image fusion using maximum symmetric surround saliency detection

    In digital photography, two or more objects in a scene cannot be brought into focus at the same time: focusing on one object loses detail in the others, and vice versa. Multi-focus image fusion is the process of generating a single all-in-focus image from several out-of-focus source images. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and maximum-symmetric-surround saliency detection. The saliency map used in this method highlights the salient information in the source images with well-defined boundaries. We develop a weight-map construction method based on this saliency information; the weight map identifies the focused and defocused regions of each image very well, and the fusion algorithm built on it integrates only focused-region information into the fused image. Unlike multi-scale fusion methods, this method requires only a two-scale decomposition, so it is computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results show that the proposed method gives stable and promising performance compared with existing methods.
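    As a rough illustration of the pipeline this abstract describes, the sketch below combines a box-filter two-scale decomposition with a grayscale maximum-symmetric-surround saliency map and a saliency-comparison weight map. The grayscale simplification and all parameter choices (filter sizes, the Gaussian smoothing of the base-layer weights) are assumptions for illustration, not the paper's exact settings.

```python
# Hedged sketch: two-scale fusion driven by a maximum-symmetric-surround
# (MSSS-style) saliency map. Grayscale inputs and all filter sizes are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
import cv2

def msss_saliency(img):
    """Per-pixel saliency: squared distance between the mean of the largest
    symmetric surround window and the Gaussian-smoothed pixel value."""
    img = img.astype(np.float64)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    ii = cv2.integral(img)                      # (H+1, W+1) summed-area table
    H, W = img.shape
    sal = np.empty((H, W))
    for y in range(H):
        dy = min(y, H - 1 - y)                  # symmetric extent in y
        y1, y2 = y - dy, y + dy
        for x in range(W):
            dx = min(x, W - 1 - x)              # symmetric extent in x
            x1, x2 = x - dx, x + dx
            area = (y2 - y1 + 1) * (x2 - x1 + 1)
            total = (ii[y2 + 1, x2 + 1] - ii[y1, x2 + 1]
                     - ii[y2 + 1, x1] + ii[y1, x1])
            sal[y, x] = (total / area - blurred[y, x]) ** 2
    return sal

def fuse_two_scale(a, b, radius=15):
    """Two-scale fusion: base layers from a box filter, detail layers as
    residuals, and a saliency-comparison weight map deciding each pixel."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    base_a = cv2.blur(a, (radius, radius))
    base_b = cv2.blur(b, (radius, radius))
    det_a, det_b = a - base_a, b - base_b
    w = (msss_saliency(a) > msss_saliency(b)).astype(np.float64)
    w_base = cv2.GaussianBlur(w, (31, 31), 0)   # soften seams in the base layer
    fused = (w_base * base_a + (1 - w_base) * base_b
             + w * det_a + (1 - w) * det_b)
    return np.clip(fused, 0, 255).astype(np.uint8)
```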

    Dictionary Pair Learning on Grassmann Manifolds for Image Denoising

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse-coding-driven image denoising methods convert two-dimensional image patches into one-dimensional vectors for further processing, and thus inevitably break the inherent two-dimensional geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a two-dimensional image denoising model, namely the Dictionary Pair Learning (DPL) model, and we design a corresponding algorithm called the Dictionary Pair Learning on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein th…
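    The core idea this abstract sets up, coding a 2-D patch against a left/right dictionary pair without vectorizing it, can be sketched as below. The hard-thresholding step and the SVD initialization are illustrative stand-ins for the paper's DPLG optimization and its Grassmann-manifold subspace partition.

```python
# Hedged sketch of the two-sided ("dictionary pair") reconstruction idea:
# a 2-D patch P is coded as P ~ L @ A @ R.T, keeping its matrix structure.
# The thresholding denoiser is an illustrative stand-in, not the authors'
# DPLG optimization.
import numpy as np

def denoise_patch(P, L, R, tau):
    """Code patch P against left/right orthonormal dictionaries, hard-threshold
    the 2-D coefficient matrix, and reconstruct."""
    A = L.T @ P @ R              # 2-D coefficients (analysis step)
    A[np.abs(A) < tau] = 0.0     # enforce sparsity by hard thresholding
    return L @ A @ R.T           # synthesis: denoised patch

# Toy usage with SVD-initialized dictionaries (a stand-in for the paper's
# Grassmann-manifold subspace partition initialization):
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, 3, 8)), np.cos(np.linspace(0, 3, 8)))
noisy = clean + 0.1 * rng.standard_normal((8, 8))
U, _, Vt = np.linalg.svd(noisy)
print(denoise_patch(noisy, U, Vt.T, tau=0.2).round(2))
```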

    Fusion of Images and Videos using Multi-scale Transforms

    This thesis deals with methods for the fusion of images and videos using multi-scale transforms. First, a novel image fusion algorithm based on an improved multi-scale coefficient decomposition framework is proposed. The framework uses a combination of non-subsampled contourlet and wavelet transforms for the initial multi-scale decompositions; the decomposed multi-scale coefficients are then fused twice using various local activity measures. Experimental results show that the proposed approach performs on par with or better than existing state-of-the-art image fusion algorithms in terms of quantitative and qualitative performance, and that it can produce high-quality fused images even with a computationally inexpensive two-scale decomposition. Finally, we extend the framework to a novel video fusion algorithm for camouflaged target detection from infrared and visible sensor inputs. This framework includes a target identification method based on the conventional thresholding techniques of Otsu and of Kapur et al., which are further extended to formulate novel region-based fusion rules using local statistical measures. The proposed video fusion algorithm, when used in target-highlighting mode, further enhances the hidden camouflaged target, making it much easier to localize. Experimental results show that the proposed video fusion algorithm performs much better than its counterparts in quantitative and qualitative terms as well as in time complexity; its relatively low complexity makes it an ideal candidate for real-time video surveillance applications. A sketch of the two ingredients follows.
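    In the sketch below, Otsu's method (via OpenCV) stands in for the thesis's combined Otsu/Kapur rules, the averaging and max-absolute coefficient rules are generic local activity measures rather than the thesis's exact ones, and registered 8-bit grayscale frames are assumed.

```python
# Hedged sketch of both stages: (1) wavelet-domain fusion with generic
# coefficient rules, (2) Otsu-thresholded IR target highlighting. The NSCT
# stage and the Kapur entropy rule of the thesis are not reproduced here.
import numpy as np
import cv2
import pywt

def wavelet_fuse(a, b, wavelet="db2", level=2):
    """Fuse two registered grayscale images: average the approximation band,
    keep the larger-magnitude coefficient in each detail band."""
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    return np.clip(pywt.waverec2(fused, wavelet), 0, 255).astype(np.uint8)

def fuse_highlight(ir, vis):
    """Target-highlighting mode: segment the hot region of the 8-bit IR frame
    with Otsu's threshold and inject those pixels into the visible frame."""
    _, mask = cv2.threshold(ir, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))  # pad the target blob
    fused = vis.copy()
    fused[mask > 0] = ir[mask > 0]
    return fused
```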