
    Familiarization through Ambient Images Alone

    The term “ambient images” has begun to appear throughout the current literature on facial recognition. Ambient images are naturally occurring views of a face that capture the idiosyncratic ways in which a target face may vary (Ritchie & Burton, 2017). Much of this literature has concluded that exposing people to ambient images of a target face can improve facial recognition for that face. Some studies have even suggested that familiarity is the result of increased exposure to ambient images of a target face (Burton, Kramer, Ritchie, & Jenkins, 2016). The current study extended this literature. Using the face sorting paradigm from Jenkins, White, Van Montfort, and Burton (2011), it served three purposes. First, it tested whether there was an incremental benefit to showing ambient images; specifically, we observed whether performance improved as participants were shown a low, medium, or high number of ambient images. Second, it attempted to provide a manipulation strong enough that participants would perform the face sorting task perfectly after being exposed to a high number (45 total) of ambient images. Lastly, it introduced time data as a measure of face familiarity. The results supported one of these aims fully and another in part. Time data proved an effective quantitative measure of familiarity, and there was some evidence of an incremental benefit of ambient images, but that benefit disappeared after viewing around 15 unique exemplar presentations of a novel identity’s face. Exposing participants to 45 ambient images alone, however, did not bring them to perfect performance. The paper concludes with a discussion of the need to move beyond ambient images to understand how best to mimic natural familiarity in a lab setting.
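
    The face sorting task lends itself to simple quantitative scoring. As a minimal sketch in Python (hypothetical data shapes and helper names, not the authors' materials), a perfect sort produces exactly one pure pile per identity, and sorting time serves as the familiarity measure:

    from statistics import mean

    def score_sort(piles, true_identity):
        """piles: list of lists of photo ids; true_identity: photo id -> identity."""
        n_piles = len(piles)  # a perfect sort has one pile per real identity
        # A pile is "pure" if every photo in it belongs to a single identity.
        pure = sum(1 for p in piles if len({true_identity[x] for x in p}) == 1)
        return {"piles": n_piles, "pure_piles": pure}

    def mean_sort_time(times_sec):
        # Time data as a familiarity proxy: faster sorting ~ greater familiarity.
        return mean(times_sec)

    # Hypothetical example: two identities, A and B, with three photos each.
    ident = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
    print(score_sort([[1, 2, 3], [4, 5, 6]], ident))  # {'piles': 2, 'pure_piles': 2}
    print(mean_sort_time([92.0, 71.5, 64.2]))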

    Improving Human Face Recognition Using Deep Learning Based Image Registration And Multi-Classifier Approaches

    Face detection, registration, and recognition have become a fascinating field for researchers, and the enormous interest in the topic is motivated by the need to improve the accuracy of many real-time applications. Countless methodologies have been presented in past years. The visual complexity of the human face, and the significant changes it undergoes under different conditions, make it challenging to design and implement a powerful computational system for object recognition in general and human face recognition in particular. Supervised learning often requires extensive training, which results in high execution times, and strong preprocessing such as face registration is an essential step toward a high recognition accuracy rate. Although approaches exist that perform both detection and recognition, we believe the absence of a complete end-to-end system capable of performing recognition from an arbitrary scene is in large part due to the difficulty of alignment. Often, face registration is ignored, on the assumption that the detector will perform a rough alignment, leading to suboptimal recognition performance. In this research, we present an enhanced approach to human face recognition using a back-propagation neural network (BPNN) and feature extraction based on the correlation between training images. A key contribution of this paper is the generation of a new set, called the T-Dataset, from the original training data set, which is used to train the BPNN. We generate the T-Dataset using the correlation between training images, without relying on the common technique of image density. The correlated T-Dataset provides a high distinction between training images, which helps the BPNN converge faster and achieve better accuracy. Data and feature reduction are essential in face recognition, and researchers have recently focused on modern neural networks; we therefore also used classical Principal Component Analysis (PCA) and Local Binary Patterns (LBP) to show that there is potential improvement even with traditional methods. We applied five distance measurement algorithms and combined them to obtain the T-Dataset, which we fed into the BPNN. Using reduced image features, we achieved higher face recognition accuracy with less computational cost than current approaches. We tested the proposed framework on two small data sets, YALE and AT&T, as ground truth, achieving very high accuracy, and evaluated it on a state-of-the-art benchmark data set, Labeled Faces in the Wild (LFW), where it produced competitive face recognition performance. In addition, we present an enhanced framework that improves face registration using a deep learning model. We used deep architectures such as VGG16 and VGG19 to train our method to learn the transformation parameters (rotation, scaling, and shifting); by learning these parameters, we are able to transform the image back to the frontal domain. We evaluated this method on the LFW data set and achieved high accuracy.
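
    The abstract does not spell out how the correlated T-Dataset is built, so the following is a minimal sketch of one plausible reading (the names, shapes, and the scikit-learn MLPClassifier standing in for the BPNN are all assumptions, not the authors' code): represent each training image by its vector of Pearson correlations with every training image, then train a small back-propagation network on those vectors.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def correlation_features(images):
        # images: (n, h*w) flattened grayscale faces -> (n, n) correlation matrix.
        X = images - images.mean(axis=1, keepdims=True)       # center each image
        X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-12  # unit-normalize
        return X @ X.T   # entry (i, j) = Pearson correlation of images i and j

    rng = np.random.default_rng(0)
    imgs = rng.random((40, 32 * 32))        # stand-in for aligned face crops
    labels = np.repeat(np.arange(4), 10)    # 4 identities, 10 images each
    T = correlation_features(imgs)          # hypothetical "T-Dataset"
    bpnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(T, labels)
    print(bpnn.score(T, labels))

    The same hedging applies to the registration part: a VGG16 or VGG19 backbone with a small regression head predicting rotation, scale, and shift would be one straightforward way to learn the transformation parameters described above.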

    Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression

    We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from those related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking.
    Comment: to appear in Image and Vision Computing Journal (IMAVIS).
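
    The factorization idea can be pictured with a minimal numpy sketch (random stand-in bases and assumed sizes; the paper learns these probabilistically from identity-labeled data): the point distribution model is split into separate identity and expression bases, so the expression code can be read off in an identity-normalized way.

    import numpy as np

    rng = np.random.default_rng(0)
    n_points, k_id, k_exp = 68, 10, 5       # 68 2-D landmarks (assumed sizes)
    mean = rng.standard_normal(2 * n_points)
    A_id = rng.standard_normal((2 * n_points, k_id))    # identity basis (stand-in)
    B_exp = rng.standard_normal((2 * n_points, k_exp))  # expression basis (stand-in)

    def synthesize(z_id, z_exp):
        # Shape = mean shape + identity component + expression component.
        return mean + A_id @ z_id + B_exp @ z_exp

    def infer_codes(shape):
        # Joint least-squares inversion recovers both codes from one shape.
        C = np.hstack([A_id, B_exp])
        z, *_ = np.linalg.lstsq(C, shape - mean, rcond=None)
        return z[:k_id], z[k_id:]

    z_id, z_exp = rng.standard_normal(k_id), rng.standard_normal(k_exp)
    _, z_exp_hat = infer_codes(synthesize(z_id, z_exp))
    print(np.allclose(z_exp_hat, z_exp))  # expression recovered regardless of identity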

    Illumination and Expression Invariant Face Recognition With One Sample Image

    Get PDF
    Most face recognition approaches assume either constant lighting conditions or standard facial expressions, and thus cannot deal with both kinds of variation simultaneously. This problem becomes more serious in applications where only one sample image per class is available. In this paper, we present a linear pattern classification algorithm, Adaptive Principal Component Analysis (APCA), which first applies PCA to construct a subspace for image representation, then warps the subspace according to the within-class and between-class covariance of samples to improve class separability. This technique performs well under variations in lighting conditions. To produce insensitivity to expression, we rotate the subspace before warping in order to enhance the representativeness of features. The method is evaluated on the Asian Face Image Database. Experiments show that APCA outperforms PCA and other methods in terms of accuracy, robustness, and generalization ability.
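
    The warping step can be illustrated with a generic sketch (a Fisher-style per-dimension rescaling in the PCA subspace; this is illustrative, not the published APCA derivation, and it omits the expression-handling rotation):

    import numpy as np

    def fit_apca_like(X, y, k):
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)  # PCA via SVD
        P = Vt[:k].T                       # (d, k) subspace basis
        Z = (X - mu) @ P
        within, between = np.zeros(k), np.zeros(k)
        zbar = Z.mean(axis=0)
        for c in np.unique(y):
            Zc = Z[y == c]
            within += ((Zc - Zc.mean(axis=0)) ** 2).sum(axis=0)
            between += len(Zc) * (Zc.mean(axis=0) - zbar) ** 2
        w = np.sqrt(between / (within + 1e-12))  # "warp": stretch discriminative axes
        return mu, P, w

    def project(x, mu, P, w):
        # Matching is then nearest neighbor in the warped subspace.
        return ((x - mu) @ P) * w

    # Hypothetical toy data: 8 classes, 2 images each, 50-D features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((16, 50))
    y = np.repeat(np.arange(8), 2)
    print(project(X[0], *fit_apca_like(X, y, k=10)).shape)  # (10,)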