15 research outputs found

    Research on Face Recognition Based on Embedded System

    Get PDF
    Because face recognition requires storing large amounts of image feature data and executing complex calculations, it has traditionally been performed only on high-performance PCs. In this paper, the OpenCV Haar-like features were used to identify the face region; Principal Component Analysis (PCA) was employed for fast extraction of face features, and the Euclidean distance was adopted for face matching. In this way, the data volume and computational complexity of face recognition are reduced effectively, and face recognition can be carried out on an embedded platform. Finally, an embedded face recognition system was constructed on the Tiny6410 embedded platform. The test results showed that the system operates stably and achieves a high recognition rate, so it can be used in portable and mobile identification and authentication
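The PCA-plus-Euclidean-distance pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the snapshot eigendecomposition, the number of components, and all function names are choices made here for clarity.

```python
import numpy as np

def train_pca(faces, n_components=10):
    """Learn a PCA subspace from a stack of flattened face images.

    faces: (n_samples, n_pixels) array; n_components is illustrative.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Snapshot method: eigendecompose the small (n_samples x n_samples) matrix.
    cov = centered @ centered.T
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Map back to pixel space and normalise columns to get the eigenfaces.
    components = centered.T @ eigvecs[:, order]
    components /= np.linalg.norm(components, axis=0)
    return mean, components

def project(face, mean, components):
    """Project one flattened face onto the PCA subspace."""
    return (face - mean) @ components

def match(probe, gallery_features):
    """Return the index of the gallery entry with the smallest Euclidean distance."""
    dists = np.linalg.norm(gallery_features - probe, axis=1)
    return int(np.argmin(dists))
```

On an embedded target, the `match` step reduces to a handful of vector subtractions and norms per gallery entry, which is what keeps the data volume and computational cost low.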

    Robust Face Recognition in Rotated Eigen Space

    Get PDF
    Face recognition is a very complex classification problem due to nuisance variations under different conditions. Most face recognition approaches assume either constant lighting conditions or standard facial expressions, and thus cannot deal with both kinds of variation simultaneously. Principal Component Analysis (PCA) cannot handle complex pattern variations such as illumination and expression. Adaptive PCA rotates the eigenspace to extract more representative features, thus improving performance. In this paper, we present a way to extract various sets of features by different eigenspace rotations and propose a method to fuse these features to generate nonorthogonal mappings for face recognition. The proposed method is tested on the Asian Face Database, with 856 images from 107 subjects under 5 lighting conditions and with 4 expressions. We register only one normally lit neutral face image per subject and test on the remaining face images with variations. Experiments show a 95% classification accuracy and a 20% reduction in error rate. This illustrates that the fused features can provide significantly improved pattern classification
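A toy sketch of the fusion idea above: project onto several rotated copies of an eigenspace and concatenate the results. The plane-rotation helper and the fixed list of angles are illustrative assumptions; the paper's adaptive criterion for choosing rotations is not reproduced here.

```python
import numpy as np

def rotate_basis(components, angle, i=0, j=1):
    """Rotate the eigenspace in the plane spanned by components i and j.

    A stand-in for Adaptive PCA's rotation; the actual rotation
    criterion from the paper is not reproduced.
    """
    R = np.eye(components.shape[1])
    c, s = np.cos(angle), np.sin(angle)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return components @ R

def fused_features(x, mean, components, angles):
    """Concatenate projections of x onto several rotated eigenspaces."""
    feats = [(x - mean) @ rotate_basis(components, a) for a in angles]
    return np.concatenate(feats)
```

With k components and m rotation angles, the fused feature vector has length m * k, which is the nonorthogonal mapping the classifier then operates on.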

    Robust Face Recognition for Data Mining

    Get PDF
    While the technology for mining text documents in large databases could be said to be relatively mature, the same cannot be said for mining other important data types such as speech, music, images and video. Yet these forms of multimedia data are becoming increasingly prevalent on the internet and intranets as bandwidth rapidly increases due to continuing advances in computing hardware and consumer demand. A major emerging problem is the lack of accurate and efficient tools to query these multimedia data directly, so we are usually forced to rely on available metadata such as manual labeling. Currently the most effective way to label data to allow searching of multimedia archives is for humans to physically review the material. This is already uneconomical or, in an increasing number of application areas, quite impossible, because these data are being collected much faster than any group of humans could meaningfully label them, and the pace is accelerating, forming a veritable explosion of non-text data. Some driver applications are emerging from heightened security demands in the 21st century, postproduction of digital interactive television, and the recent deployment of a planetary sensor network overlaid on the internet backbone

    State Preserving Extreme Learning Machine for Face Recognition

    Get PDF
    Extreme Learning Machine (ELM) has been introduced as a new algorithm for training single hidden layer feed-forward neural networks (SLFNs) in place of the classical gradient-based algorithms. Based on the consistency property of data, which enforces that similar samples share similar properties, ELM is a biologically inspired learning algorithm for SLFNs that learns much faster, generalizes well, and performs well in classification applications. However, the random generation of the weight matrix in current ELM-based techniques can lead to unstable outputs in the learning and testing phases. Therefore, we present a novel approach for computing the weight matrix in ELM, which forms a State Preserving Extreme Learning Machine (SPELM). SPELM stabilizes ELM training and testing outputs while monotonically increasing its accuracy by preserving state variables. Furthermore, three popular feature extraction techniques, namely Gabor, Pyramid Histogram of Oriented Gradients (PHOG) and Local Binary Pattern (LBP), are incorporated with SPELM for performance evaluation. Experimental results show that our proposed algorithm yields the best performance on widely used face datasets such as Yale, CMU and ORL compared to state-of-the-art ELM-based classifiers
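The basic ELM training step that the abstract builds on (random hidden weights, closed-form least-squares output weights) can be sketched as follows. The state-preserving weight selection of SPELM is not reproduced; fixing the random seed merely illustrates the kind of run-to-run randomness the paper aims to control.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Train a basic ELM: random hidden layer, least-squares output weights.

    X: (n_samples, n_features); y: integer class labels starting at 0.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # One-hot targets, then solve H @ beta = T in the least-squares sense.
    T = np.eye(y.max() + 1)[y]
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Classify by the largest output-node response."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because the only trained quantity is `beta`, obtained in closed form via the pseudoinverse, training is much faster than gradient-based backpropagation, as the abstract notes.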

    Biometric authentication via keystroke sound

    Full text link
    Unlike conventional “one shot” biometric authentication schemes, continuous authentication has a number of advantages, such as longer time for sensing, ability to rectify authentication decisions, and persistent verification of a user’s identity, which are critical in applications demanding enhanced security. However, traditional modalities such as face, fingerprint and keystroke dynamics have various drawbacks in continuous authentication scenarios. In light of this, this paper proposes a novel non-intrusive and privacy-aware biometric modality that utilizes keystroke sound. Given the keystroke sound recorded by a low-cost microphone, our system extracts discriminative features and performs matching between a gallery and a probe sound stream. Motivated by the concept of digraphs used in modeling keystroke dynamics, we learn a virtual alphabet from keystroke sound segments, from which the digraph latency within pairs of virtual letters as well as other statistical features are used to generate match scores. The resultant multiple scores are indicative of the similarities between two sound streams, and are fused to make a final authentication decision. We collect a first-of-its-kind keystroke sound database of 45 subjects typing on a keyboard. Experiments on static text-based authentication demonstrate the potential as well as limitations of this biometric modality.
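As a small illustration of the digraph idea, the sketch below collects latencies between consecutive keystroke segments, keyed by their pair of virtual letters. The `labels`/`onsets` representation is an assumption about how such a pipeline might expose its intermediate data, not the paper's actual interface.

```python
import numpy as np

def digraph_latencies(labels, onsets):
    """Group latencies between consecutive keystrokes by virtual-letter pair.

    labels: cluster index of each keystroke segment (its 'virtual letter');
    onsets: keystroke onset times in seconds, same length as labels.
    Returns {(letter_a, letter_b): mean latency}.
    """
    table = {}
    for a, b, t0, t1 in zip(labels, labels[1:], onsets, onsets[1:]):
        table.setdefault((a, b), []).append(t1 - t0)
    return {pair: float(np.mean(v)) for pair, v in table.items()}
```

Statistics of these per-pair latency distributions (means, variances) are the kind of features that could then be compared between a gallery and a probe stream to produce match scores.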

    An experimental evaluation of the incidence of fitness-function/search-algorithm combinations on the classification performance of myoelectric control systems with iPCA tuning

    Get PDF
    BACKGROUND: The information in electromyographic signals can be used by Myoelectric Control Systems (MCSs) to actuate prostheses. These devices allow the performance of movements that cannot be carried out by persons with amputated limbs. The state of the art in the development of MCSs is based on the use of individual principal component analysis (iPCA) as a pre-processing stage for the classifiers. The iPCA pre-processing implies an optimization stage which has not yet been deeply explored. METHODS: The present study considers two factors in the iPCA stage: A (the fitness function) and B (the search algorithm). Factor A comprises two levels, A(1) (the classification error) and A(2) (the correlation factor), while factor B has four levels, B(1) (Sequential Forward Selection, SFS), B(2) (Sequential Floating Forward Selection, SFFS), B(3) (Artificial Bee Colony, ABC), and B(4) (Particle Swarm Optimization, PSO). This work evaluates the effect of each of the eight possible combinations of the A and B factors on the classification error of the MCS. RESULTS: A two-factor ANOVA was performed on the computed classification errors and determined that: (1) the interaction effects on the classification error are not significant (F(0.01,3,72) = 4.0659 > f( AB ) = 0.09), (2) the levels of factor A have significant effects on the classification error (F(0.02,1,72) = 5.0162 < f( A ) = 6.56), and (3) the effects of the levels of factor B on the classification error are not significant (F(0.01,3,72) = 4.0659 > f( B ) = 0.08). CONCLUSIONS: Considering classification performance, we found that factor A(2) in combination with any of the levels of factor B is superior. With respect to time performance, the analysis suggests that the PSO algorithm is at least 14 percent faster than its best competitor.
This behavior has been observed for a particular configuration of parameters in the search algorithms. Future work will investigate the effect of these parameters on classification performance, such as the length of the reduced-size vector, the number of particles and bees used during the optimal search, the cognitive parameters in the PSO algorithm, and the limit of cycles to improve a solution in the ABC algorithm
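The two-factor ANOVA with replication used in the results can be sketched directly from its sums of squares. The function below is a generic textbook computation, not the authors' code; note that a 2 x 4 design with 10 replicates per cell yields the error degrees of freedom (72) quoted above.

```python
import numpy as np

def two_way_anova(data):
    """Two-factor ANOVA with replication.

    data: array of shape (a, b, n) with a levels of factor A, b levels of
    factor B, and n replicates per cell. Returns the F statistics for the
    A effect, B effect, and A x B interaction.
    """
    a, b, n = data.shape
    grand = data.mean()
    mean_A = data.mean(axis=(1, 2))      # marginal means per level of A
    mean_B = data.mean(axis=(0, 2))      # marginal means per level of B
    mean_cell = data.mean(axis=2)        # cell means
    ss_A = b * n * np.sum((mean_A - grand) ** 2)
    ss_B = a * n * np.sum((mean_B - grand) ** 2)
    ss_cells = n * np.sum((mean_cell - grand) ** 2)
    ss_AB = ss_cells - ss_A - ss_B       # interaction sum of squares
    ss_err = np.sum((data - mean_cell[:, :, None]) ** 2)
    ms_err = ss_err / (a * b * (n - 1))  # error df = a*b*(n-1), e.g. 72
    f_A = (ss_A / (a - 1)) / ms_err
    f_B = (ss_B / (b - 1)) / ms_err
    f_AB = (ss_AB / ((a - 1) * (b - 1))) / ms_err
    return f_A, f_B, f_AB
```

Each F statistic is then compared against the critical value F(alpha, effect df, error df), exactly as in the inequalities reported in the results section.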

    A Subspace Projection Methodology for Nonlinear Manifold Based Face Recognition

    Get PDF
    A novel feature extraction method that utilizes nonlinear mapping from the original data space to the feature space is presented in this dissertation. Feature extraction methods aim to find compact representations of data that are easy to classify. Measurements with similar values are grouped into the same category, while those with differing values are deemed to be of separate categories. For most practical systems, the meaningful features of a pattern class lie in a low-dimensional nonlinear constraint region (manifold) within the high-dimensional data space. A learning algorithm to model this nonlinear region and to project patterns onto this feature space is developed. A least squares estimation approach that utilizes the interdependency between points in the training patterns is used to form the nonlinear region. The proposed feature extraction strategy is employed to improve face recognition accuracy under varying illumination conditions and facial expressions. Though the face features show variations under these conditions, the features of one individual tend to cluster together and can be considered as a neighborhood. Low-dimensional representations of face patterns in the feature space may lie in a nonlinear constraint region, which when modeled leads to efficient pattern classification. A feature space encompassing multiple pattern classes can be trained by modeling a separate constraint region for each pattern class and obtaining a mean constraint region by averaging all the individual regions. Unlike most other nonlinear techniques, the proposed method provides an easy, intuitive way to place new points onto a nonlinear region in the feature space. The proposed feature extraction and classification method results in improved accuracy when compared to classical linear representations. Face recognition accuracy is further improved by introducing the concepts of modularity, discriminant analysis and phase congruency into the proposed method. 
In the modular approach, feature components are extracted from different sub-modules of the images and concatenated into a single vector to represent a face region. By doing this we are able to extract features that are more representative of the local features of the face. When projected onto an arbitrary line, samples from well-formed clusters can produce a confused mixture of samples from all the classes, leading to poor recognition. Discriminant analysis aims to find an optimal line orientation for which the data classes are well separated. Experiments performed on various databases to evaluate the performance of the proposed face recognition technique have shown improvement in recognition accuracy, especially under varying illumination conditions and facial expressions. This shows that the integration of multiple subspaces, each representing a part of a higher-order nonlinear function, can represent a pattern with variability. Research is ongoing to investigate the effectiveness of the subspace projection methodology for building manifolds with other nonlinear functions and to identify the optimum nonlinear function from an object classification perspective
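The modular approach described above (extracting features from sub-modules of the image and concatenating them) can be sketched as follows. The block grid and the default per-block extractor are illustrative choices, not the dissertation's actual settings.

```python
import numpy as np

def modular_features(image, blocks=(2, 2), extractor=None):
    """Split an image into sub-modules, extract features from each block,
    and concatenate them into a single vector for the face region.

    image: 2-D array; blocks: grid of sub-modules; extractor: per-block
    feature function (block mean by default, purely for illustration).
    """
    if extractor is None:
        extractor = lambda patch: np.array([patch.mean()])
    h, w = image.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            patch = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            feats.append(np.ravel(extractor(patch)))
    return np.concatenate(feats)
```

Swapping in a richer `extractor` (e.g. a per-block subspace projection) gives local features while the concatenation step preserves a single fixed-length vector per face.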

    Emotion recognition from human face

    Get PDF

    A study of eigenvector based face verification in static images

    Get PDF
    As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past few years. There are at least two reasons for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. The problem of machine recognition of human faces continues to attract researchers from disciplines such as image processing, pattern recognition, neural networks, computer vision, computer graphics, and psychology. The strong need for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers is obvious. Although very reliable methods of biometric personal identification exist, for example, fingerprint analysis and retinal or iris scans, these methods depend on the cooperation of the participants, whereas a personal identification system based on analysis of frontal or profile images of the face is often effective without the participant’s cooperation or knowledge. The three categories of face recognition are face detection, face identification and face verification. Face detection means extracting the face from the total image of the person. Face identification means the input to the system is an unknown face, and the system reports back the determined identity from a database of known individuals. Face verification means the system needs to confirm or reject the claimed identity of the input. My thesis addresses face verification in static images, that is, images which are not in motion. The eigenvector-based face verification algorithm gives results on face verification in static images based on eigenvectors and the neural network backpropagation algorithm. Eigenvectors are used to give geometrical information about the faces. 
First we take 10 images of each person at the same angle with different expressions and apply principal component analysis. We consider an image dimension of 48 x 48, from which we obtain 48 eigenvalues, and keep only the eigenvectors corresponding to the 10 largest eigenvalues. These eigenvectors are given as input to the neural network for training, using the backpropagation algorithm. After training is complete, we present an image taken at a different angle for testing. We measure the verification rate (the rate at which legitimate users are granted access) and the false acceptance rate (the rate at which impostors are granted access). The neural network takes considerable time for training. The proposed algorithm gives results on face verification in static images based on eigenvectors and a modified backpropagation algorithm, in which a momentum term is added to decrease the training time. With the modified backpropagation algorithm, the verification rate also increased slightly and the false acceptance rate decreased slightly
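The momentum term mentioned above can be illustrated in isolation. This is the generic modified-backpropagation update rule, with the learning rate and momentum coefficient chosen arbitrarily for the sketch; the thesis's actual network and parameters are not reproduced.

```python
def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One modified-backpropagation update.

    The momentum term reuses a fraction of the previous weight change,
    which smooths the trajectory and typically decreases training time.
    """
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```

Applied repeatedly to a simple quadratic error surface, the update converges toward the minimum faster than plain gradient descent would with the same learning rate, which is the training-time benefit claimed above.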