
    A Survey on Ear Biometrics

    Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Although current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art not only for those working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available to researchers.

    The fundamentals of unimodal palmprint authentication based on a biometric system: A review

    A biometric system can be defined as an automated method of identifying or authenticating the identity of a living person based on physiological or behavioral traits. Palmprint biometric-based authentication has gained considerable attention in recent years. Globally, enterprises have been exploring biometric authorization for some time for security, payment processing, law-enforcement CCTV systems, and even access to offices, buildings, and gyms via entry doors. Palmprint biometric systems can be divided into unimodal and multimodal systems. This paper investigates the biometric system and provides a detailed overview of palmprint technology and existing recognition approaches. Finally, we present a review of previous work on unimodal palmprint systems using different databases.

    Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition

    This paper presents a novel quadratic projection based feature extraction framework, where a set of quadratic matrices is learned to distinguish each class from all other classes. We formulate quadratic matrix learning (QML) as a standard semidefinite programming (SDP) problem. However, conventional interior-point SDP solvers do not scale well to the problem of QML for high-dimensional data. To address the scalability of QML, we develop an efficient algorithm, termed DualQML, based on Lagrange duality theory, to extract nonlinear features. To evaluate the feasibility and effectiveness of the proposed framework, we conduct extensive experiments on biometric recognition. Experimental results on three representative biometric recognition tasks, including face, palmprint, and ear recognition, demonstrate the superiority of the DualQML-based feature extraction algorithm compared to current state-of-the-art algorithms.
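
    For intuition, the sketch below computes per-class quadratic-form features of the kind such a framework is built on, using a simple per-class mean and regularized inverse covariance as a stand-in for the learned quadratic matrices; it does not reproduce the paper's DualQML/SDP learning procedure, and all names and parameter values are illustrative assumptions.

    import numpy as np

    def fit_class_quadratics(X, y, reg=1e-3):
        # Per-class mean and regularized inverse covariance, used here as a
        # stand-in for the learned quadratic matrices M_c (the paper instead
        # learns them by solving an SDP via Lagrange duality).
        models = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            S = np.cov(Xc, rowvar=False) + reg * np.eye(X.shape[1])
            models[c] = (mu, np.linalg.inv(S))
        return models

    def quadratic_features(x, models):
        # One quadratic-form value per class: (x - mu_c)^T M_c (x - mu_c).
        # A small value means x is close to class c under its quadratic model.
        return {c: float((x - mu) @ M @ (x - mu)) for c, (mu, M) in models.items()}

    # Tiny synthetic check with two well-separated 4-D classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(4, 1, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    models = fit_class_quadratics(X, y)
    print(quadratic_features(X[0], models))  # the class-0 value should be smaller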

    Multi-modal association learning using spike-timing dependent plasticity (STDP)

    We propose an associative learning model that can integrate facial images with speech signals to target a subject in a reinforcement learning (RL) paradigm. Through this approach, the rules of learning involve associating paired stimuli (stimulus–stimulus, i.e., face–speech), also known as predictor–choice pairs. Prior to a learning simulation, we extract the features of the biometrics used in the study. For facial features, we experiment with two approaches: principal component analysis (PCA)-based Eigenfaces and singular value decomposition (SVD). For speech features, we use wavelet packet decomposition (WPD). The experiments show that the PCA-based Eigenfaces feature extraction approach produces better results than SVD. We implement the proposed learning model using the Spike-Timing-Dependent Plasticity (STDP) algorithm, which depends on the timing and rate of pre- and post-synaptic spikes. The key contribution of our study is the implementation of learning rules via STDP and firing rate in spatiotemporal neural networks based on the Izhikevich spiking model. We implement learning for response-group association following reward-modulated STDP in the RL setting, wherein the firing rate of the response groups determines the reward that is given. We perform a number of experiments that use existing face samples from the Olivetti Research Laboratory (ORL) dataset and speech samples from TIDigits. After several experiments and simulations performed to recognize a subject, the results show that the proposed learning model can associate the predictor (face) with the choice (speech) at optimum performance rates of 77.26% and 82.66% for training and testing, respectively. We also perform learning using real data; that is, an experiment is conducted on a sample of face–speech data collected in a manner similar to that of the initial data. The performance results are 79.11% and 77.33% for training and testing, respectively. Based on these results, the proposed learning model can produce high learning performance in terms of combining heterogeneous data (face–speech). This finding opens possibilities to expand RL in the field of biometric authentication.
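
    As background, the sketch below shows the textbook pair-based STDP weight update (pre-before-post potentiates, post-before-pre depresses); the paper additionally uses Izhikevich spiking neurons and reward modulation, which are not reproduced here, and the amplitude and time-constant values are assumed for illustration.

    import numpy as np

    # Pair-based STDP update (textbook form). Amplitudes and time constants
    # below are assumed illustrative values, not the paper's settings.
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds

    def stdp_dw(t_pre, t_post):
        # Weight change for a single pre/post spike pair.
        # dt > 0:  pre fired before post -> potentiation (LTP)
        # dt <= 0: post fired before pre -> depression (LTD)
        dt = t_post - t_pre
        if dt > 0:
            return A_PLUS * np.exp(-dt / TAU_PLUS)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

    # Example: a face-driven presynaptic spike at 10 ms followed by a
    # speech-driven postsynaptic spike at 15 ms strengthens that synapse.
    w = 0.5
    w += stdp_dw(t_pre=10.0, t_post=15.0)
    print(round(w, 4))  # slightly above 0.5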

    Deep multimodal biometric recognition using contourlet derivative weighted rank fusion with human face, fingerprint and iris images

    The goal of a multimodal biometric recognition system is to make a decision by identifying individuals through their physiological and behavioural traits. Nevertheless, the decision-making process of a biometric recognition system can be extremely complex due to high-dimensional unimodal features in the temporal domain. This paper presents a deep multimodal biometric system for human recognition using three traits: face, fingerprint, and iris. With the objective of reducing the feature-vector dimension in the temporal domain, pre-processing is first performed using a Contourlet Transform model. Next, a Local Derivative Ternary Pattern model is applied to the pre-processed features, where discriminative power is improved by selecting the coefficients with maximum variation across the pre-processed multimodal features, thereby improving recognition accuracy. Weighted Rank-Level Fusion is then applied to the extracted multimodal features, efficiently combining the biometric matching scores from the individual modalities (i.e., face, fingerprint, and iris). Finally, a deep learning framework is presented for improving the recognition rate of the multimodal biometric system in the temporal domain. The results of the proposed multimodal biometric recognition framework were compared with other multimodal methods; in these comparisons, the fusion of face, fingerprint, and iris offers significant improvements in the recognition rate of the proposed multimodal biometric system.
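
    To make the fusion step concrete, the sketch below implements a generic weighted rank-level (Borda-style) fusion of per-modality rank lists; the identities, ranks, and weights are hypothetical, and the exact contourlet-derivative weighting used in the paper is not reproduced.

    def weighted_rank_fusion(rank_lists, weights):
        # rank_lists: modality -> {identity: rank}, where rank 1 is the best match.
        # weights:    modality -> weight reflecting that modality's reliability.
        # Returns identities sorted by fused score (lower is better).
        fused = {}
        for modality, ranks in rank_lists.items():
            w = weights[modality]
            for identity, r in ranks.items():
                fused[identity] = fused.get(identity, 0.0) + w * r
        return sorted(fused.items(), key=lambda kv: kv[1])

    # Hypothetical ranks from three matchers over three enrolled identities.
    ranks = {
        "face":        {"alice": 1, "bob": 2, "carol": 3},
        "fingerprint": {"alice": 2, "bob": 1, "carol": 3},
        "iris":        {"alice": 1, "bob": 3, "carol": 2},
    }
    weights = {"face": 0.4, "fingerprint": 0.35, "iris": 0.25}
    print(weighted_rank_fusion(ranks, weights))  # "alice" fuses to the best (lowest) score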