
    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectra not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets: one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. The error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available.
    Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
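    The abstract's point about binary encodings and hash tables is that matching reduces to cheap bitwise operations. The sketch below is a minimal Python illustration of that idea, not the authors' Contour Code or their MATLAB implementation; the encoding scheme and all function names here are hypothetical.

```python
# Minimal sketch of binary palmprint-code matching via Hamming distance.
# NOT the paper's Contour Code; it only illustrates why a binary feature
# encoding enables compact storage and fast matching.
import numpy as np

def encode_orientation(responses):
    """Binarize per-pixel filter responses of shape (K, H, W): one bit
    plane per orientation, set where that orientation is dominant."""
    dominant = np.argmax(responses, axis=0)            # (H, W) indices
    bits = np.zeros(responses.shape, dtype=np.uint8)   # (K, H, W) bit planes
    for k in range(responses.shape[0]):
        bits[k] = (dominant == k)
    return np.packbits(bits.reshape(-1))               # compact binary code

def hamming_score(code_a, code_b):
    """Normalized Hamming distance between two packed binary codes."""
    xor = np.bitwise_xor(code_a, code_b)
    return np.unpackbits(xor).sum() / (8 * code_a.size)

def identify(probe, gallery):
    """Identification: return the gallery identity whose enrolled code
    is closest to the probe (smaller distance = better match)."""
    return min(gallery, key=lambda gid: hamming_score(probe, gallery[gid]))
```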

    Fast filtering image fusion

    © 2017 SPIE and IS&T. Image fusion aims at exploiting complementary information in multimodal images to create a single composite image with extended information content. An image fusion framework is proposed for different types of multimodal images with fast filtering in the spatial domain. First, the image gradient magnitude is used to detect contrast and image sharpness. Second, a fast morphological closing operation is performed on the image gradient magnitude to bridge gaps and fill holes. Third, a weight map is obtained from the multimodal image gradient magnitudes and is filtered by a fast structure-preserving filter. Finally, the fused image is composed using a weighted-sum rule. Experimental results on several groups of images show that the proposed fast fusion method performs better than the state-of-the-art methods, running up to four times faster than the fastest baseline algorithm.
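    A rough Python/OpenCV sketch of the four-step pipeline described above, assuming two registered grayscale sources of equal size. A bilateral filter stands in for the paper's fast structure-preserving filter, and all kernel sizes and filter parameters are illustrative assumptions, not the paper's values.

```python
# Sketch of gradient -> closing -> filtered weight map -> weighted sum.
import cv2
import numpy as np

def fuse(a, b):
    a = a.astype(np.float32)
    b = b.astype(np.float32)

    # 1) Gradient magnitude as a contrast/sharpness measure.
    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        return cv2.magnitude(gx, gy)
    ga, gb = grad_mag(a), grad_mag(b)

    # 2) Morphological closing bridges gaps and fills holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    ga = cv2.morphologyEx(ga, cv2.MORPH_CLOSE, kernel)
    gb = cv2.morphologyEx(gb, cv2.MORPH_CLOSE, kernel)

    # 3) Binary weight map from the larger gradient, smoothed by an
    #    edge-preserving filter so weights follow image structure
    #    (bilateral filter used here as a stand-in).
    w = (ga >= gb).astype(np.float32)
    w = cv2.bilateralFilter(w, 9, 0.1, 7.0)

    # 4) Weighted-sum composition of the two sources.
    return w * a + (1.0 - w) * b
```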

    MULTIMODAL MEDICAL IMAGE FUSION IN NON-SUBSAMPLED CONTOURLET TRANSFORM DOMAIN

    Multimodal medical image fusion not only helps in diagnosing diseases, it also cuts down the storage cost by storing a single fused image instead of multiple source images. So far, extensive work has been done on image fusion techniques, with several techniques devoted to multimodal medical image fusion. The primary motivation is to capture the best information from the source images in a single output, which plays a vital role in medical diagnosis. In this paper, a novel fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform (NSCT). Multimodal medical image fusion, as a powerful tool for clinical applications, has been advancing with the introduction of various imaging approaches in medical imaging. The source medical images are first transformed by NSCT, followed by combining their low- and high-frequency components. Two different fusion rules, based on phase congruency and directive contrast, are proposed and used to fuse the low- and high-frequency coefficients. Finally, the fused image is constructed by the inverse NSCT with all composite coefficients. The effectiveness of the proposed framework is demonstrated on three clinical examples of patients suffering from Alzheimer's disease, subacute stroke and recurrent tumor. Experimental results and a comparative study show that the proposed fusion framework provides an effective way to enable better analysis of multimodality images.
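    As a hedged illustration of the transform-fuse-invert flow described above, the sketch below substitutes PyWavelets' stationary wavelet transform for NSCT (which has no widely used Python implementation) and a simple absolute-maximum rule for the paper's phase-congruency and directive-contrast rules; it shows only the overall structure, not the paper's method.

```python
# Decompose -> fuse coefficients per band -> reconstruct.
import numpy as np
import pywt

def fuse_medical(a, b, wavelet="db2", levels=2):
    # Note: swt2 requires image sides divisible by 2**levels.
    ca = pywt.swt2(a.astype(float), wavelet, level=levels)
    cb = pywt.swt2(b.astype(float), wavelet, level=levels)
    fused = []
    for (la, ha), (lb, hb) in zip(ca, cb):
        low = 0.5 * (la + lb)                      # average low-pass bands
        high = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                     for x, y in zip(ha, hb))      # keep stronger detail coeffs
        fused.append((low, high))
    return pywt.iswt2(fused, wavelet)              # reconstruct fused image
```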

    A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition

    The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images that have improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also improved face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance. This thesis presents the design and performance evaluation of a novel camera system which is capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show not only that the fused output from our camera system outperforms the single-modality images for face recognition, but also that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting and in the presence of eyeglasses.
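    As a hedged sketch of how per-modality features might be combined for multi-class SVM classification of the kind the experiments describe (not the thesis code; feature extraction per modality is assumed to happen elsewhere, and the modality names and parameters are illustrative):

```python
# Feature-level fusion across modalities feeding a multi-class SVM.
import numpy as np
from sklearn.svm import SVC

def train_fused_classifier(features_by_modality, labels):
    """features_by_modality: dict mapping a modality name (e.g. 'visible',
    'lwir', 'depth') to an (n_samples, n_features) array; labels holds
    one identity per sample."""
    # Concatenate modality features in a fixed order per sample.
    fused = np.hstack([features_by_modality[m]
                       for m in sorted(features_by_modality)])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale",
              decision_function_shape="ovr")   # one-vs-rest multi-class
    clf.fit(fused, labels)
    return clf
```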

    A Novel Fusion Framework Based on Adaptive PCNN in NSCT Domain for Whole-Body PET and CT Images

    Fused PET and CT images, combining anatomical and functional information, have important clinical meaning. This paper proposes a novel fusion framework based on adaptive pulse-coupled neural networks (PCNNs) in the nonsubsampled contourlet transform (NSCT) domain for fusing whole-body PET and CT images. Firstly, the gradient average of each pixel is chosen as the linking strength of the PCNN model to implement self-adaptability. Secondly, to improve fusion performance, the novel sum-modified Laplacian (NSML) and energy of edge (EOE) are extracted as the external inputs of the PCNN models for the low- and high-pass subbands, respectively. Lastly, the rule of maximum region energy is adopted as the fusion rule, and different energy templates are employed in the low- and high-pass subbands. The experimental results on whole-body PET and CT data (239 slices in each modality) show that the proposed framework outperforms the other six methods in terms of the seven commonly used fusion performance metrics.
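    Of the measures named above, the sum-modified Laplacian is concrete enough to sketch. The snippet below is an illustrative implementation of a windowed sum-modified Laplacian focus measure, assuming a 3x3 window; the adaptive PCNN itself and the paper's exact NSML definition are not reproduced here.

```python
# Sum-modified Laplacian: a local focus/detail measure often used as
# an activity level in fusion rules.
import numpy as np
from scipy.ndimage import uniform_filter

def sml(img, window=3):
    img = img.astype(float)
    # Modified Laplacian: absolute second differences along rows and
    # columns (np.roll wraps at borders; a simplification).
    ml = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)) \
       + np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    # Sum over a local window (uniform_filter is the mean, so rescale
    # by the window area to get the sum).
    return uniform_filter(ml, size=window) * window * window
```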

    A parallel fusion method of remote sensing image based on NSCT

    Remote sensing image fusion is very important for exploiting the advantages of different kinds of remote sensing data. However, remote sensing image fusion is computationally demanding and time consuming. In this paper, in order to fuse remote sensing images accurately and quickly, a parallel fusion algorithm for remote sensing images based on NSCT (nonsubsampled contourlet transform) is proposed. The method operates on two important kinds of remote sensing image, multispectral and panchromatic, and combines the advantages of parallel computing in high-performance computing with the advantages of NSCT in information processing. The computationally heavy steps, including the IHS (Intensity, Hue, Saturation) transform, NSCT, inverse NSCT and inverse IHS transform, are performed in parallel. To realize the method, the multispectral image is processed with the IHS transform to obtain the three components I, H, and S. The component I and the panchromatic image are decomposed with NSCT. The resulting low-frequency components are fused with a rule based on neighborhood-energy feature matching, and the high-frequency components are fused with a rule based on subregion variance. The fused low- and high-frequency components are then processed with the inverse NSCT to obtain the fused component. Finally, the fused component and the components H and S are processed with the inverse IHS transform to obtain the fusion image. The experimental results show that the proposed method achieves better fusion results and faster computing speed for multispectral and panchromatic images.
    The work was supported in part by (1) the Fund Project of the National Natural Science Foundation of China (U1204402), (2) the Foundation Project (21AT-2016-13) supported by the Twenty-First Century Aerospace Technology Co., Ltd., China, and (3) the Natural Science Research Program Project (18A520001) supported by the Department of Education in Henan Province, China.
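    As a hedged sketch of the IHS step of the pipeline above, the snippet below uses the fast additive IHS pansharpening identity; the NSCT-domain fusion of the intensity and panchromatic bands, and the parallel decomposition, are omitted for brevity, with the panchromatic band standing in directly for the fused intensity.

```python
# Fast IHS component substitution: inject the pan/intensity difference
# into every multispectral band.
import numpy as np

def ihs_pansharpen(ms, pan):
    """ms: (H, W, 3) multispectral image; pan: (H, W) panchromatic band.
    Both are float arrays resampled onto the same grid and scale."""
    intensity = ms.mean(axis=2)            # I component of the IHS transform
    # In the full method, fused_i would come from NSCT-domain fusion of
    # `intensity` and `pan`; here `pan` stands in directly.
    fused_i = pan
    # The inverse IHS reduces to adding the intensity change to each band.
    return ms + (fused_i - intensity)[..., None]
```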