    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. Comment: Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160

    AN IMPROVED INFRARED AND VISIBLE IMAGE FUSION ALGORITHM BASED ON CURVELET TRANSFORM

    The fusion of infrared and visible images combines complementary information into a single image, so a scene can be described better; this is helpful for tasks such as target detection, target localization, and environment recognition. In this paper, we use the Second Generation Curvelet Transform (SGCT) to decompose infrared images and grayscale visible images, and propose a new image fusion algorithm. The algorithm applies different fusion rules at the different levels of the multi-resolution decomposition. Simulation results show that, compared with existing algorithms, this algorithm improves to some extent on the evaluation metrics of the fused images.
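    The decompose-fuse-reconstruct pattern described above can be sketched as follows. Since the SGCT needs a dedicated curvelet library, this illustrative Python sketch substitutes a one-level 2D Haar transform; the fusion rules used here (averaging the approximation band, max-absolute selection for the detail bands) are common choices for this family of algorithms, not necessarily the paper's own.

```python
# Illustrative decompose-fuse-reconstruct fusion. A one-level 2D Haar
# transform stands in for the curvelet transform; both follow the same
# pattern: transform each source, merge coefficients, invert.

def haar2d(img):
    """One-level 2D Haar decomposition of an even-sized grayscale image."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]  # approximation
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]  # horizontal detail
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]  # vertical detail
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]  # diagonal detail
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0
            LH[i // 2][j // 2] = (a - b + c - d) / 4.0
            HL[i // 2][j // 2] = (a + b - c - d) / 4.0
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = 2 * len(LL), 2 * len(LL[0])
    img = [[0.0] * w for _ in range(h)]
    for i in range(len(LL)):
        for j in range(len(LL[0])):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            img[2 * i][2 * j] = ll + lh + hl + hh
            img[2 * i][2 * j + 1] = ll - lh + hl - hh
            img[2 * i + 1][2 * j] = ll + lh - hl - hh
            img[2 * i + 1][2 * j + 1] = ll - lh - hl + hh
    return img

def fuse(imgA, imgB):
    """Fuse two registered grayscale images: average the approximation
    band, keep the larger-magnitude coefficient in each detail band."""
    bandsA, bandsB = haar2d(imgA), haar2d(imgB)
    fused = []
    for k, (bA, bB) in enumerate(zip(bandsA, bandsB)):
        if k == 0:  # approximation band: average
            fused.append([[(x + y) / 2.0 for x, y in zip(rA, rB)]
                          for rA, rB in zip(bA, bB)])
        else:       # detail bands: max-absolute rule preserves edges
            fused.append([[x if abs(x) >= abs(y) else y
                           for x, y in zip(rA, rB)]
                          for rA, rB in zip(bA, bB)])
    return ihaar2d(*fused)
```

    The max-absolute rule lets whichever source has the stronger edge at each location dominate the fused detail coefficients, which is why fusing a flat image with an edge-bearing one recovers the edges.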

    Bispectral image fusion using multi-resolution transform for enhanced target detection in low ambient light conditions

    Performing a target detection/identification task using only visible spectrum information becomes extremely difficult during low ambient light conditions. Visible spectrum information lies in the 400-700 nm wavelength range, whereas the infrared spectrum carries information beyond 800 nm. To overcome the difficulty of target detection by a human operator during surveillance, fusion of visible and infrared spectral image information has been proposed. The image fusion has been performed using a multi-resolution curvelet transform technique, chosen for its high directional sensitivity and reconstruction quality. The curvelet transform decomposes the source images into coefficients at coarse, intermediate, and fine scales. These coefficients are fused per decomposition level, followed by reconstruction of the fused image using the inverse curvelet transform. The bispectral fused image inherits scene information from the visible spectrum image and target information from the infrared spectrum image. The output images of the proposed fusion method are compared visually and statistically with the outputs of other fusion methods. The fused image obtained using the proposed method shows clearer background details, higher target distinctiveness, better reconstruction, and less clutter.

    Bounded PCA based Multi Sensor Image Fusion Employing Curvelet Transform Coefficients

    The fusion of thermal and visible images is an important tool for target detection. Wavelet-based image fusion improves the spectral content of the fused image; however, compared to PCA-based fusion, most wavelet-based methods provide results with lower spatial resolution. Combining the two approaches improves the outcome, but it can still be refined. Compared to wavelets, the curvelet transform more accurately depicts edges in the image. Enhancing edges is an effective way to improve spatial resolution, and edges are crucial for interpreting the images. A curvelet-based fusion technique can therefore provide additional data in the spectral and spatial domains concurrently. In this paper, we employ a combined Curvelet Transform and Bounded PCA (CTBPCA) method to fuse thermal and visible images. To demonstrate the improved efficiency of the proposed technique, multiple evaluation metrics and comparisons with existing image fusion methods are employed. Our approach outperforms the others in both qualitative and quantitative analysis, except for runtime performance. Future work will use the fused image for target recognition and will focus on this method's continued improvement and optimization for real-time video processing.
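    As background for the PCA half of the method, the classical PCA fusion step can be sketched as follows: the pixel values of the two registered images are treated as two random variables, and the components of the dominant eigenvector of their 2x2 covariance matrix, normalized to sum to one, serve as the fusion weights. This is a hedged illustration of standard PCA fusion only; the paper's bounded-PCA refinement and curvelet stage are not reproduced.

```python
# Classical PCA image fusion: weight each source image by the matching
# component of the leading eigenvector of the 2x2 pixel covariance.
import math

def pca_fusion_weights(imgA, imgB):
    """Return (wA, wB): leading-eigenvector components of the covariance
    of the two images' pixel values, normalized to sum to 1."""
    xs = [p for row in imgA for p in row]
    ys = [p for row in imgB for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Leading eigenvalue of [[cxx, cxy], [cxy, cyy]] in closed form.
    lam = 0.5 * (cxx + cyy + math.sqrt((cxx - cyy) ** 2 + 4 * cxy ** 2))
    if cxy != 0:
        # Eigenvector direction (cxy, lam - cxx); absolute values keep
        # the weights nonnegative, as common PCA-fusion code does.
        vx, vy = abs(cxy), abs(lam - cxx)
    else:
        # Uncorrelated sources: weight the higher-variance image.
        vx, vy = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    s = vx + vy
    return (vx / s, vy / s) if s else (0.5, 0.5)

def pca_fuse(imgA, imgB):
    """Pixel-wise weighted average using the PCA weights."""
    wA, wB = pca_fusion_weights(imgA, imgB)
    return [[wA * a + wB * b for a, b in zip(rA, rB)]
            for rA, rB in zip(imgA, imgB)]
```

    Because the weights come from the data itself, the higher-contrast source automatically receives the larger weight, which is the property the wavelet/curvelet hybrids build on.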

    Survey on wavelet based image fusion techniques

    Image fusion is the process of combining multiple images into a single image without distortion or loss of information. Image fusion techniques are broadly classified into spatial-domain and transform-domain methods. Among the latter, wavelet-based fusion techniques are widely used in domains such as medicine, space, and the military for the fusion of multimodal or multi-focus images. In this paper, an overview of different wavelet-transform-based methods and their applications for image fusion is presented and analysed.

    Multispectral Palmprint Encoding and Recognition

    Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins, making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. The error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of the palmprint as a reliable and promising biometric. All source code is publicly available. Comment: A preliminary version of this manuscript was published in ICCV 2011: Z. Khan, A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral Palmprint Encoding for Human Recognition", International Conference on Computer Vision, 2011. MATLAB code available: https://sites.google.com/site/zohaibnet/Home/code
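    The binary-code matching idea behind such schemes can be illustrated with a toy Hamming-distance matcher. The bit strings, labels, and function names below are hypothetical and do not reproduce the paper's Contour Code or its hash-table layout; they only show why binary encodings make matching in large databases cheap.

```python
# Toy binary-code biometric matcher: each enrolled palm is a bit string,
# and matching is the normalized Hamming distance between codes.

def hamming_distance(codeA, codeB):
    """Normalized Hamming distance between two equal-length bit strings."""
    assert len(codeA) == len(codeB)
    return sum(a != b for a, b in zip(codeA, codeB)) / len(codeA)

def identify(probe, gallery):
    """Return the gallery label whose stored code is nearest the probe."""
    return min(gallery, key=lambda label: hamming_distance(probe, gallery[label]))
```

    In a real system the per-bit comparison would be replaced by word-wide XOR and popcount, which is what makes exhaustive search over large galleries practical.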

    An efficient adaptive fusion scheme for multifocus images in wavelet domain using statistical properties of neighborhood

    In this paper we present a novel fusion rule which can efficiently fuse multifocus images in the wavelet domain by taking a weighted average of pixels. The weights are decided adaptively using the statistical properties of the neighborhood. The main idea is that the eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of edges in the block, and thus makes a good choice for the weight given to a pixel, assigning more weight to pixels with sharper neighborhoods. The performance of the proposed method has been extensively tested on several pairs of multifocus images and compared quantitatively with various existing methods using well-known parameters, including the Petrovic and Xydeas image fusion metric. Experimental results show that performance evaluation based on entropy, gradient, contrast, or deviation (the criteria widely used for fusion analysis) may not be enough. This work demonstrates that in some cases these evaluation criteria are not consistent with the ground truth. It also demonstrates that the Petrovic and Xydeas image fusion metric is a more appropriate criterion, as it correlates with the ground truth as well as with visual quality in all the tested fused images. The proposed fusion rule significantly improves contrast information while preserving edge information. The major achievement of the work is that it significantly increases the quality of the fused image, both visually and in terms of quantitative parameters, especially sharpness, with minimal fusion artifacts.
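    The weighting idea can be sketched as follows, under stated assumptions: the weight of each source pixel is taken as the largest eigenvalue of the unbiased covariance estimate of its neighborhood block, with the rows of the block treated as observations of the column variables. Power iteration stands in for a library eigensolver, and block extraction and border handling are omitted, so this is an illustration of the principle rather than the paper's exact rule.

```python
# Fused pixel = weighted average of the two source pixels, weighting by
# the dominant eigenvalue of each pixel's neighborhood covariance: a
# sharper (edgier) neighborhood yields a larger eigenvalue.

def block_covariance(block):
    """Unbiased covariance of a block: rows are observations of the
    column variables."""
    n, m = len(block), len(block[0])
    means = [sum(row[j] for row in block) / n for j in range(m)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j])
                 for row in block) / (n - 1)
             for j in range(m)] for i in range(m)]

def largest_eigenvalue(mat, iters=50):
    """Dominant eigenvalue of a symmetric PSD matrix by power iteration."""
    # Non-uniform start vector, to avoid starting orthogonal to the
    # dominant eigenvector in this sketch (not guaranteed in general).
    v = [float(i + 1) for i in range(len(mat))]
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0  # zero matrix: flat neighborhood, no edges
        v = [x / lam for x in w]
    return lam

def fuse_pixel(blockA, blockB, pixelA, pixelB):
    """Weighted average favoring the pixel with the sharper neighborhood."""
    wA = largest_eigenvalue(block_covariance(blockA))
    wB = largest_eigenvalue(block_covariance(blockB))
    total = wA + wB
    return (wA * pixelA + wB * pixelB) / total if total else (pixelA + pixelB) / 2.0
```

    A strongly focused block (large intensity variation) dominates the average, while a defocused, nearly flat block contributes almost nothing, which is exactly the behavior a multifocus fusion rule needs.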

    Comparative study of Image Fusion Methods: A Review

    As the size and cost of sensors decrease, sensor networks are increasingly becoming an attractive method to collect information in a given area. However, a single sensor is not capable of providing all the required information, either because of its design or because of observational constraints. One possible solution to get all the required information about a particular scene or subject is data fusion. The small number of metrics proposed so far provide only a rough, numerical estimate of fusion performance, with limited understanding of the relative merits of different fusion schemes. This paper proposes a method for comprehensive, objective image fusion performance characterization, using a fusion evaluation framework based on gradient information representation. We give the framework of the overall system and explain its usage. The system has many functions: image denoising, image enhancement, image registration, image segmentation, image fusion, and fusion evaluation. This paper also presents a literature review of image fusion techniques such as Laplacian pyramid based fusion, Discrete Wavelet Transform based fusion, and Principal Component Analysis (PCA) based fusion. A comparison of all these techniques can guide future research.
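    The gradient-based evaluation idea can be sketched with a much-simplified edge-preservation score in the spirit of the Xydeas-Petrovic metric: measure how much of each source's edge strength survives in the fused image. The full metric's sigmoid weighting and orientation-preservation term are omitted here, so this is an illustration of the principle, not the framework itself.

```python
# Simplified gradient-information fusion score: the fraction of the
# source image's total edge strength that reappears in the fused image.

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a grayscale image."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w - 1):
            gx = img[i][j + 1] - img[i][j]  # horizontal difference
            gy = img[i + 1][j] - img[i][j]  # vertical difference
            g[i][j] = (gx * gx + gy * gy) ** 0.5
    return g

def edge_preservation(source, fused):
    """Score in [0, 1]; 1 means every source edge appears at full
    strength in the fused image, 0 means none survive."""
    gs, gf = gradient_magnitude(source), gradient_magnitude(fused)
    num = den = 0.0
    for rs, rf in zip(gs, gf):
        for s, f in zip(rs, rf):
            num += min(s, f)  # credit only up to the source's strength
            den += s
    return num / den if den else 1.0
```

    Averaging this score over both source images gives a single number comparable across fusion schemes, which is the role the gradient-information framework plays in the paper.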