
    Features for Cross Spectral Image Matching: A Survey

    In recent years, cross-spectral matching has been gaining attention in various biometric systems for identification and verification purposes. Cross-spectral matching allows images taken under different electromagnetic spectra to be matched against each other. One of the keys to successful matching is the set of features used to represent an image, so the feature extraction step is an essential task, and researchers have improved matching accuracy by developing robust features. This paper presents the features most commonly used in cross-spectral matching. The survey covers basic concepts of cross-spectral matching, visible and thermal feature extraction, and state-of-the-art descriptors. Finally, the paper describes better feature selection methods for cross-spectral matching.
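    To make the feature-extraction step concrete, here is a minimal sketch of one texture descriptor commonly cited in this literature, the Local Binary Pattern (LBP). The 3×3 neighbourhood layout and 256-bin histogram below are illustrative choices for a basic LBP, not the exact variant of any surveyed paper.

    ```python
    import numpy as np

    def lbp_codes(img):
        """Basic 3x3 Local Binary Pattern: threshold each pixel's 8
        neighbours against the centre pixel and pack the bits into a code."""
        img = np.asarray(img, dtype=float)
        center = img[1:-1, 1:-1]
        # clockwise neighbour offsets starting at the top-left pixel
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros_like(center, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                            1 + dx:img.shape[1] - 1 + dx]
            codes |= (neighbour >= center).astype(np.uint8) << bit
        return codes

    def lbp_histogram(img, bins=256):
        """Normalised histogram of LBP codes, used as the image descriptor."""
        h, _ = np.histogram(lbp_codes(img), bins=bins, range=(0, bins))
        return h / h.sum()
    ```

    The histogram of codes, rather than the code image itself, is what gets compared across spectra, since it is less sensitive to pixel-level misalignment.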

    An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements

    Spectral imaging has recently gained traction for face recognition in biometric systems. We investigate the merits of spectral imaging for face recognition and the current challenges that hamper the widespread deployment of spectral sensors. The reliability of conventional face recognition systems operating in the visible range is compromised by illumination changes, pose variations, and spoof attacks. Recent works have reaped the benefits of spectral imaging to counter these limitations in surveillance activities (defence, airport security checks, etc.). However, the implementation of this technology in biometrics is still in its infancy for multiple reasons. We present an overview of existing work in the domain of spectral imaging for face recognition, the different types of modalities and their assessment, the availability of public databases for the sake of reproducible research and algorithm evaluation, and recent advancements in the field, such as the use of deep learning-based methods for recognizing faces from spectral images.

    Illumination tolerance in facial recognition

    In this research work, five different preprocessing techniques were tested with two different classifiers to find the best preprocessor + classifier combination for building an illumination-tolerant face recognition system. A face recognition system is proposed based on illumination normalization techniques and linear subspace models, using two distance metrics on three challenging yet interesting databases: the CAS-PEAL database, the Extended Yale B database, and the AT&T database. The research takes the form of experimentation and analysis in which five illumination normalization techniques were compared using two different distance metrics, with the performance and execution time of each technique recorded to measure accuracy and efficiency. The illumination normalization techniques were Gamma Intensity Correction (GIC), Discrete Cosine Transform (DCT), Histogram Remapping using the Normal distribution (HRN), Histogram Remapping using the Log-normal distribution (HRL), and Anisotropic Smoothing (AS). The linear subspace models were Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), and the two distance metrics were Euclidean and cosine distance. The results showed that for databases with both illumination (shadow) and lighting (over-exposure) variations, like CAS-PEAL, histogram remapping with the normal distribution produced excellent results when cosine distance was used as the classifier, with a 65% recognition rate at 15.8 ms/img. Alternatively, for databases with pure illumination variation, like the Extended Yale B database, Gamma Intensity Correction (GIC) combined with the Euclidean distance metric gave the most accurate result, with 95.4% recognition accuracy at 1 ms/img.
    It was further observed from the experiments that cosine distance produces more accurate results than the Euclidean distance metric; however, Euclidean distance was faster than cosine distance in all the experiments conducted.
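    The two pieces of the winning pipeline for Extended Yale B (GIC preprocessing, then nearest-neighbour matching under a distance metric) can be sketched as follows. The gamma value and the min-max normalisation below are illustrative assumptions; the paper's exact GIC parameters are not given in the abstract.

    ```python
    import numpy as np

    def gamma_intensity_correction(img, gamma=0.4):
        """Gamma Intensity Correction: rescale intensities to [0, 1] and
        apply I -> I**gamma, lifting shadows and compressing highlights.
        gamma=0.4 is an illustrative choice."""
        img = np.asarray(img, dtype=float)
        norm = (img - img.min()) / (img.max() - img.min() + 1e-12)
        return norm ** gamma

    def euclidean_match(probe, gallery):
        """Index of the gallery row closest to the probe in Euclidean distance."""
        d = np.linalg.norm(gallery - probe.ravel(), axis=1)
        return int(np.argmin(d))

    def cosine_match(probe, gallery):
        """Index of the gallery row with the highest cosine similarity."""
        p = probe.ravel()
        sims = gallery @ p / (np.linalg.norm(gallery, axis=1)
                              * np.linalg.norm(p) + 1e-12)
        return int(np.argmax(sims))
    ```

    In a full system the probe and gallery rows would be subspace projections (PCA or LDA) of the normalized images, not raw pixels.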

    Efficient Algorithms For Correlation Pattern Recognition

    The mathematical operation of correlation is a very simple concept, yet it has a rich history of application across engineering fields. It is essentially a technique for measuring whether, and to what degree, two signals match each other. Since this is a basic and universal task in fields such as signal processing, communications, and computer vision, correlation has been an important tool. The field of pattern recognition often deals with analyzing signals, or useful information extracted from signals, and classifying them into classes. Very often these classes are predetermined and examples (templates) are available for comparison, a task that naturally lends itself to correlation. Thus the field of Correlation Pattern Recognition has developed over the past few decades as an important area of research. From the signal processing point of view, correlation is simply a filtering operation, so a great deal of work has used concepts from filter theory to develop correlation filters for pattern recognition. While considerable work has been done over the years to develop linear correlation filters, especially in the field of Automatic Target Recognition, much recent attention has been paid to Quadratic Correlation Filters (QCFs). QCFs offer the advantages of linear filters while optimizing a bank of them simultaneously for much improved performance. This dissertation develops efficient QCFs that offer significant savings in storage requirements and computational complexity over existing designs. First, an adaptive algorithm is presented that can modify the QCF coefficients as new data is observed. Second, a transform-domain implementation of the QCF is presented that lowers computational complexity and requirements while retaining excellent recognition accuracy.
    Finally, a two-dimensional QCF is presented that holds the potential for further savings in storage and computation. The techniques are developed based on the recently proposed Rayleigh Quotient Quadratic Correlation Filter (RQQCF), and simulation results are provided on synthetic and real datasets.
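    The core idea the abstract builds on, correlation as template matching, can be sketched directly. This is plain mean-subtracted cross-correlation with a peak detector, not the QCF or RQQCF designs the dissertation develops; it only illustrates the "slide a template, look for the correlation peak" baseline those filters improve upon.

    ```python
    import numpy as np

    def correlate2d_valid(image, template):
        """Direct 2-D cross-correlation in 'valid' mode: slide the template
        over the image and sum elementwise products at each shift."""
        ih, iw = image.shape
        th, tw = template.shape
        out = np.empty((ih - th + 1, iw - tw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(image[y:y + th, x:x + tw] * template)
        return out

    def best_match(image, template):
        """Location of the correlation peak, i.e. the most likely
        template position; means are subtracted so flat regions score zero."""
        c = correlate2d_valid(image - image.mean(), template - template.mean())
        return np.unravel_index(np.argmax(c), c.shape)
    ```

    A practical implementation would do this in the frequency domain (correlation becomes pointwise multiplication under the FFT), which is exactly the filtering view the abstract mentions.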

    Study of Different Algorithms for Face Recognition

    The importance of utilising biometrics to establish personal authenticity and to detect impostors is growing in the present scenario of global security concern. Developing a biometric system for personal identification that fulfils the requirements of access control for secured areas and other applications, such as identity validation for social welfare, crime detection, ATM access, and computer security, is felt to be the need of the day [2]. Face recognition has been evolving as a convenient biometric mode for human authentication for more than two decades. It plays an important role in applications such as video surveillance, human-computer interfaces, and face image database management [1]. Many techniques have been applied for different applications, and robustness and reliability are becoming more and more important, especially in security systems. Face recognition is the process by which a person is identified from his or her facial image; with this technique, a person's facial image can be used to authenticate him or her into a secure system. Face recognition approaches for still images can be broadly categorized into holistic methods and feature-based methods: holistic methods use the entire raw face image as input, whereas feature-based methods extract local facial features and use their geometric and appearance properties. This work studies different approaches for a face recognition system, including PCA, DCT, and several types of wavelets, with both Euclidean distance and a neural network as classifiers.
    The results have been compared on two databases: AMP, which contains 975 images of 13 individuals (75 images per person) under various facial expressions and lighting conditions, with each image cropped and resized to 64×64 pixels for the simulation; and ORL (Olivetti Research Lab), which contains 400 images (each 112×92 pixels) of 40 persons in 10 poses each, including both male and female subjects. The ORL images were resized to 128×128 pixels.
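    One of the holistic pipelines studied here, PCA projection followed by a Euclidean nearest-neighbour classifier, can be sketched in a few lines. The SVD-based fit and the number of retained components are generic illustrative choices, not the study's exact configuration.

    ```python
    import numpy as np

    def pca_fit(X, k):
        """Fit PCA on row-vector images X (n_samples x n_pixels):
        centre on the mean face, then keep the top-k components via SVD."""
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:k]          # mean face and k "eigenfaces"

    def pca_project(X, mean, components):
        """Project images into the k-dimensional PCA subspace."""
        return (X - mean) @ components.T

    def nearest_neighbour(probe, gallery_feats, labels):
        """Classify a probe by Euclidean distance in the PCA subspace."""
        d = np.linalg.norm(gallery_feats - probe, axis=1)
        return labels[int(np.argmin(d))]
    ```

    Swapping `nearest_neighbour` for a trained neural network on the same projected features gives the study's second classifier variant.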

    Verification of emotion recognition from facial expression

    Analysis of facial expressions is an active topic of research with many potential applications, since the human face plays a significant role in conveying a person's mental state. Due to its practical value, scientists and researchers from fields such as psychology, finance, marketing, and engineering have developed significant interest in this area. Hence, there is more need than ever for intelligent tools in the emotional Human-Computer Interface (HCI) that analyze facial expressions as a better alternative to traditional devices such as the keyboard and mouse. The face is a window into the human mind, and the examination of mental states explores a person's internal cognitive state. A facial emotion recognition system has the potential to read people's minds and interpret their emotional thoughts to the world. Existing efforts have achieved high recognition accuracy for facial emotions on benchmark databases containing posed facial emotions; however, such systems are not qualified to interpret a person's true feelings, even when those emotions are recognized. The difference between posed and spontaneous facial emotions has been identified and studied in the literature, and one of the most interesting challenges in HCI is making computers more human-like for more intelligent user interfaces. In this dissertation, a Regional Hidden Markov Model (RHMM) based facial emotion recognition system is proposed. In this system, facial features are extracted from three face regions that convey relevant information about facial emotions: the eyebrows, eyes, and mouth. As a marked departure from prior work, RHMMs are trained for the states of these three distinct face regions, rather than for the entire face, for each facial emotion type. In the recognition step, regional features are extracted from test video sequences.
    These features are processed by the corresponding RHMMs to learn the probabilities of the states of the three face regions, and the combination of states is used to identify the estimated emotion type of a given frame in a video sequence. An experimental framework is established to validate the results of such a system. RHMM, as a new classifier, emphasizes the states of the three facial regions rather than the entire face. The dissertation proposes a method of forming observation sequences that represent the changes of state of the facial regions for RHMM training and recognition; the method is applicable to various forms of video clips, including real-time video. The proposed system shows a human-like capability to infer people's mental states from the moderate levels of spontaneous facial emotion conveyed in daily life, in contrast to posed facial emotions. Moreover, the research associated with the proposed facial emotion recognition system is extended into the domains of finance and biomedical engineering. A CEO's fearful facial emotion was found to be a strong, positive predictor of the firm's stock price in the market. In addition, the experimental results demonstrated the similarity between spontaneous facial reactions to stimuli and the inner affective states reflected in brain activity, revealing the effectiveness of facial features combined with features extracted from brain-activity signals for multi-signal correlation analysis and affective state classification.