
    A Novel Multimodal Image Fusion Method Using Hybrid Wavelet-based Contourlet Transform

    Various image fusion techniques have been studied to meet the requirements of applications such as concealed weapon detection, remote sensing, urban mapping, surveillance and medical imaging. Combining two or more images of the same scene or object produces a single image that is more informative for the target application. The conventional wavelet transform (WT) has been widely used in image fusion because of its multi-scale framework and its ability to isolate discontinuities at object edges. More recently, the contourlet transform (CT) has been adopted for image fusion to overcome the drawbacks of the WT. The experimental studies in this dissertation show that the contourlet transform is better suited than the conventional wavelet transform for image fusion. The contourlet transform, however, also has major drawbacks. First, its framework provides neither shift-invariance nor the structural information of the source images that is needed to enhance fusion performance. Second, unwanted artifacts are produced during image decomposition in the contourlet framework, caused by setting some transform coefficients to zero for nonlinear approximation. This dissertation proposes a novel fusion method using a hybrid wavelet-based contourlet transform (HWCT) to overcome the drawbacks of both the conventional wavelet and contourlet transforms and to enhance fusion performance. In the proposed method, the Daubechies Complex Wavelet Transform (DCxWT) is employed to provide both shift-invariance and structural information, and a Hybrid Directional Filter Bank (HDFB) is used to obtain fewer artifacts and more directional information. The shift-invariance of the DCxWT is desirable during fusion because it avoids mis-registration: without it, the source images become mis-registered and misaligned, and the fusion results degrade significantly. The DCxWT also conveys structural information through the imaginary part of its wavelet coefficients, so more relevant information is preserved during fusion and the fused image is better represented. The HDFB is applied to the fusion framework, where the source images are decomposed to provide rich directional information with lower complexity and reduced artifacts. The proposed method is applied to five categories of multimodal image fusion, and an experimental study evaluates its performance in each category using suitable quality metrics. Different datasets, fusion algorithms, pre-processing techniques and quality metrics are used for each fusion category. In every experiment and analysis, the proposed method produced better fusion results than the conventional wavelet and contourlet transforms, validating its usefulness as a fusion method and verifying its high performance.
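    The HWCT itself is not available as an off-the-shelf library, but the generic multiscale fusion pipeline the abstract builds on can be sketched. The sketch below assumes a standard wavelet decomposition (PyWavelets' wavedec2/waverec2) as a stand-in for the proposed transform and a simple absolute-maximum rule for the detail coefficients; the fusion rules in the dissertation are more elaborate.

```python
# Minimal sketch of a generic multiscale image fusion pipeline. A standard
# wavelet decomposition (PyWavelets) stands in for the proposed HWCT, and the
# detail-band fusion rule is a simple absolute-maximum selection.
import numpy as np
import pywt

def fuse_multiscale(img_a, img_b, wavelet="db4", levels=3):
    """Fuse two registered grayscale images of the same size."""
    coeffs_a = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=levels)
    coeffs_b = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=levels)

    # Approximation band: average the two sources.
    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

    # Detail bands: keep the coefficient with the larger magnitude.
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(
            np.where(np.abs(ca) >= np.abs(cb), ca, cb)
            for ca, cb in zip(da, db)
        ))

    return pywt.waverec2(fused, wavelet)
```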

    Multifocus Image Fusion Method in the Sharp Frequency Localized Contourlet Domain Based on an Improved Sum-Modified-Laplacian

    In order to suppress the pseudo-Gibbs phenomena around the singularities of fused images and to reduce the significant aliasing components located far away from the desired supports when the original Contourlet transform is used for image fusion, a multifocus image fusion method in the Sharp Frequency Localized Contourlet Transform (SFLCT) domain based on the sum-modified-Laplacian is proposed. The SFLCT, instead of the original Contourlet transform, is used as the multiscale transform to decompose the original multifocus images into subbands. Typical measures for multifocus image fusion in the spatial domain are then carried over to the Contourlet domain: the sum-modified-Laplacian (SML), as a criterion for distinguishing SFLCT coefficients belonging to the clear parts of the images from those belonging to the blurred parts, is applied in the SFLCT subbands to select the transform coefficients. Finally, the inverse SFLCT is used to reconstruct the fused image. A cycle-spinning scheme is also applied to compensate for the lack of translation invariance and to suppress the pseudo-Gibbs phenomena in the fused images. Experimental results demonstrate that, with the proposed fusion method, mutual information improves by 5.87% and the transferred edge information QAB/F by 2.70% compared with the cycle-spinning wavelet method, and by 1.77% and 1.29% compared with the cycle-spinning Contourlet method. The proposed method also offers better visual quality than the block-based spatial SML method and the shift-invariant wavelet method.
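    For reference, a minimal sketch of the sum-modified-Laplacian focus measure described above, computed per pixel and summed over a local window. The window size, the squaring of the modified Laplacian, and the use of plain image arrays instead of SFLCT subbands are assumptions for illustration, as published SML variants differ in these details.

```python
# Sketch of the sum-modified-Laplacian (SML) focus measure and the resulting
# decision map; applied here to plain arrays rather than SFLCT subbands.
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, step=1, window=3):
    img = img.astype(np.float64)
    pad = np.pad(img, step, mode="edge")
    center = pad[step:-step, step:-step]
    # Modified Laplacian: absolute second differences along rows and columns.
    ml = (np.abs(2 * center - pad[:-2*step, step:-step] - pad[2*step:, step:-step]) +
          np.abs(2 * center - pad[step:-step, :-2*step] - pad[step:-step, 2*step:]))
    # SML: windowed sum of the squared modified Laplacian.
    return uniform_filter(ml ** 2, size=window) * window * window

def focus_decision_map(img_a, img_b):
    """1 where img_a is judged sharper, 0 where img_b is."""
    return (sum_modified_laplacian(img_a) >=
            sum_modified_laplacian(img_b)).astype(np.uint8)
```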

    Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain

    Nonsubsampled contourlet transform (NSCT) provides flexible multiresolution, anisotropy and directional expansion for images. Compared with the original contourlet transform, it is shift-invariant and can overcome the pseudo-Gibbs phenomena around singularities. The Pulse Coupled Neural Network (PCNN) is a visual-cortex-inspired neural network characterized by global coupling and pulse synchronization of neurons. It has been proven suitable for image processing and has been successfully employed in image fusion. In this paper, NSCT is combined with PCNN for image fusion to make full use of the characteristics of both. The spatial frequency in the NSCT domain is used as the input that motivates the PCNN, and NSCT-domain coefficients with large firing times are selected as coefficients of the fused image. Experimental results demonstrate that the proposed algorithm outperforms typical wavelet-based, contourlet-based, PCNN-based and contourlet-PCNN-based fusion algorithms in terms of objective criteria and visual appearance. Supported by the Navigation Science Foundation of P. R. China (05F07001) and the National Natural Science Foundation of P. R. China (60472081).
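    A hedged sketch of the two ingredients this abstract combines follows: the local spatial frequency used as the PCNN stimulus, and a simplified PCNN whose accumulated firing times drive coefficient selection. The linking kernel, decay constant, iteration count and the use of plain NumPy arrays in place of NSCT subbands are illustrative assumptions, not the paper's settings.

```python
# Spatial-frequency-motivated PCNN coefficient selection, in simplified form.
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def local_spatial_frequency(band, window=3):
    band = band.astype(np.float64)
    rf2 = np.zeros_like(band)
    cf2 = np.zeros_like(band)
    rf2[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2   # row (horizontal) differences
    cf2[1:, :] = (band[1:, :] - band[:-1, :]) ** 2   # column (vertical) differences
    return np.sqrt(uniform_filter(rf2, window) + uniform_filter(cf2, window))

def pcnn_firing_times(stimulus, iterations=200, beta=0.2,
                      alpha_theta=0.2, v_theta=20.0):
    """Simplified PCNN: returns how many times each neuron fired."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    y = np.zeros_like(stimulus)
    theta = np.ones_like(stimulus)
    fire_counts = np.zeros_like(stimulus)
    for _ in range(iterations):
        link = convolve(y, kernel, mode="constant")          # linking input
        u = stimulus * (1.0 + beta * link)                   # internal activity
        y = (u > theta).astype(np.float64)                   # pulse output
        theta = np.exp(-alpha_theta) * theta + v_theta * y   # dynamic threshold
        fire_counts += y
    return fire_counts

def fuse_band(band_a, band_b):
    """Keep the coefficient whose neuron fired more often."""
    ta = pcnn_firing_times(local_spatial_frequency(band_a))
    tb = pcnn_firing_times(local_spatial_frequency(band_b))
    return np.where(ta >= tb, band_a, band_b)
```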

    Multi-Sensor Image Fusion Based on Moment Calculation

    An image fusion method based on salient features is proposed in this paper. The work concentrates on the salient features of the images so that fusion preserves all relevant information contained in the input images, enhances the contrast of the fused image, and suppresses noise as far as possible. First, a mask is applied to the two input images to conserve the high-frequency information along with some low-frequency information and to suppress noise. Then, to identify salient features in the source images, a local moment is computed in the neighborhood of each coefficient. Finally, a decision map is generated from the local moments to obtain the fused image. The proposed algorithm is tested on 120 sensor image pairs collected from the Manchester University UK database. The experimental results show that the proposed method provides superior fused images in terms of several quantitative fusion evaluation indices. Comment: 5 pages, International Conference
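    A minimal sketch of a moment-driven decision map in the spirit of this abstract: a high-pass mask emphasizes detail, a local second-order moment (variance) scores salience, and each fused pixel is taken from whichever source scores higher. The specific mask, window size and moment order are assumptions for illustration, not the paper's exact choices.

```python
# Salience-by-local-moment fusion sketch: high-pass mask, local variance,
# then a per-pixel decision map.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=np.float64)

def local_second_moment(img, window=5):
    img = img.astype(np.float64)
    detail = convolve(img, HIGH_PASS, mode="nearest")   # emphasize high frequencies
    mean = uniform_filter(detail, window)
    mean_sq = uniform_filter(detail ** 2, window)
    return mean_sq - mean ** 2                          # local variance of the detail

def fuse_by_moment(img_a, img_b, window=5):
    decision = local_second_moment(img_a, window) >= local_second_moment(img_b, window)
    return np.where(decision, img_a, img_b)
```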

    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the features, we design a binary hash table structure that allows efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. The error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available. Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
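    As an illustration of the kind of binary-code matching such an encoding scheme relies on, the sketch below packs per-pixel binary codes into bytes and compares them by Hamming distance. It is a generic example under those assumptions, not the authors' Contour Code or their hash-table layout.

```python
# Generic binary-code matching: pack 0/1 feature codes into bytes and rank
# gallery entries by Hamming distance to the probe.
import numpy as np

def pack_code(binary_code):
    """Flat 0/1 array -> packed uint8 vector."""
    return np.packbits(binary_code.astype(np.uint8))

def hamming_distance(packed_a, packed_b):
    """Number of differing bits between two packed codes."""
    xor = np.bitwise_xor(packed_a, packed_b)
    return int(np.unpackbits(xor).sum())

def match_probe(probe_packed, gallery_packed):
    """Return the index of the closest gallery code and its distance."""
    dists = [hamming_distance(probe_packed, g) for g in gallery_packed]
    best = int(np.argmin(dists))
    return best, dists[best]
```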

    Enhancement of Single and Composite Images Based on Contourlet Transform Approach

    Image enhancement is an imperative step in almost every image processing algorithm. Numerous image enhancement algorithms have been developed for grayscale images, even though grayscale images are rarely used in many applications nowadays. This thesis proposes new enhancement techniques for 8-bit single and composite digital color images. It has recently become evident that wavelet transforms are not necessarily best suited for images; the proposed enhancement approaches are therefore based on a 'true' two-dimensional transform, the contourlet transform. The enhancement techniques discussed in this thesis are developed based on an understanding of the working mechanisms of the multiresolution property of the contourlet transform. This research also investigates the effect of different color space representations on color image enhancement, and based on this investigation an optimal color space is selected for both the single-image and the composite-image enhancement approaches. The objective evaluation shows that the new enhancement method is superior not only to commonly used transform-based methods (e.g. the wavelet transform) but also to various spatial-domain models (e.g. histogram equalization). The results are encouraging, and the enhancement algorithms prove to be more robust and reliable.
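    A hedged sketch of transform-domain detail boosting in the spirit of the thesis: a multiscale decomposition (an ordinary wavelet transform standing in for the contourlet transform), a gain on the detail coefficients, and reconstruction. The gain value, wavelet choice and 8-bit clipping range are illustrative assumptions.

```python
# Detail-gain enhancement sketch; a wavelet decomposition stands in for the
# contourlet transform used in the thesis.
import numpy as np
import pywt

def enhance_details(img, wavelet="db2", levels=2, gain=1.5):
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    enhanced = [coeffs[0]]                                       # keep approximation
    for detail_level in coeffs[1:]:
        enhanced.append(tuple(gain * d for d in detail_level))   # amplify details
    out = pywt.waverec2(enhanced, wavelet)
    return np.clip(out, 0, 255)                                  # assume 8-bit range
```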