Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
A smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall face smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved classification accuracy. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either 'spontaneous' or 'posed' categories using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
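The HOG-features-plus-SVM pipeline the abstract describes can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the face crops and posed/spontaneous labels below are synthetic stand-ins for the UvA-NEMO data, and the HOG parameters are common defaults rather than the paper's settings.

```python
# Hypothetical sketch: HOG descriptors from face crops -> SVM classifier
# labelling smiles as posed (0) vs spontaneous (1). Synthetic data only.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)
faces = rng.random((20, 64, 64))           # 20 fake 64x64 grayscale face crops
labels = rng.integers(0, 2, size=20)       # fake posed/spontaneous labels

# One HOG descriptor per crop (9 orientation bins, 8x8 cells, 2x2 blocks)
features = np.array([
    hog(f, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for f in faces
])

clf = SVC(kernel="linear")
clf.fit(features, labels)
preds = clf.predict(features)              # one posed/spontaneous label per face
print(preds.shape)
```

In the paper's actual setting, the same SVM stage would be fed CNN, LPQ, or optical-flow features in place of the HOG descriptors.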
Face blindness
Facial recognition is a complex task, often done immediately and readily, involving discrimination of subtle differences in facial structure across changes in facial expression, ageing, perspective and lighting. Facial recognition requires fast identification of stimuli, which are then correlated against reservoirs of faces accumulated throughout life (Barton and Corrow, 2016). The facial recognition system is extremely complex and, if impaired, cannot be fully remedied by other areas of the brain. When such injury occurs early in life, juvenile brain plasticity has been shown to be potentially inadequate to restore facial recognition functions, suggesting that such an impairment can have severe, permanent implications, even at an early age (Barton et al., 2003). Damage to any part of the facial recognition mechanism may result in face blindness. Such dysfunction produces selective face-recognition and visual learning deficits, a condition called prosopagnosia. Prosopagnosia can be either acquired or congenital. The acquired form is considered a rare consequence of occipital or temporal lobe damage, possibly due to stroke or lesions occurring in adulthood. Congenital prosopagnosia, on the other hand, is usually not associated with any gross abnormalities, and no clear underlying cause has been linked to the development of the condition (Grüter et al., 2008). Nevertheless, face blindness in children may also be associated with inherited or acquired brain lesions, and may not be exclusively of congenital/hereditary aetiology. Moreover, prosopagnosia can also occur in association with other disorders, which may be psychiatric, developmental or associated with multiple types of visual impairment (Watson et al., 2016).
Flash-lag chimeras: the role of perceived alignment in the composite face effect
Spatial alignment of different face halves results in a configuration that mars recognition of the identity of either half. What would happen to recognition performance for face halves that were aligned on the retina but perceived as misaligned, or misaligned on the retina but perceived as aligned? We used the 'flash-lag' effect to address these questions. We created chimeras consisting of a stationary top half-face initially aligned with a moving bottom half-face. Flash-lag chimeras were better recognized than their stationary counterparts. However, when flashed face halves were presented physically ahead of the moving halves, thereby nulling the flash-lag effect, recognition was impaired. This counters the notion that relative movement between the two face halves per se is sufficient to explain the better recognition of flash-lag chimeras. Thus, perceived spatial alignment of face halves (despite retinal misalignment) impairs recognition, while perceived misalignment (despite retinal alignment) does not.
Recommended from our members
The Man Who Mistook His Neuropsychologist For a Popstar: When Configural Processing Fails in Acquired Prosopagnosia
We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to be the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills, together with apparently intact visual imagery for faces, allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed.
I know you are beautiful even without looking at you: discrimination of facial beauty in peripheral vision
Prior research suggests that facial attractiveness may capture attention at the parafovea. However, little is known about how well facial beauty can be detected in parafoveal and peripheral vision. Participants in this study judged the relative attractiveness of a face pair presented simultaneously at several eccentricities from central fixation. The results show that beauty is detectable not only at the parafovea but also in the periphery. Discrimination performance at the parafovea was indistinguishable from performance around the fovea. Moreover, performance was well above chance even in the periphery. The results show that the visual system is able to use low spatial frequency information to appraise attractiveness. These findings not only provide an explanation for why a beautiful face could capture attention when central vision is already engaged elsewhere, but also reveal the potential means by which a crowd of faces is quickly scanned for attractiveness.
Time-Efficient Hybrid Approach for Facial Expression Recognition
Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in social communication, commercial enterprise, law enforcement, and other computer interactions. In this paper, we propose a time-efficient hybrid design for facial expression recognition, combining image pre-processing steps with different Convolutional Neural Network (CNN) structures, providing better accuracy and greatly improved training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise and neutral. The model performs well on challenging facial expression recognition cases where the expressed emotion could be one of several due to quite similar facial characteristics, such as anger, disgust, and sadness. The experiments to test the model were conducted across multiple databases and different facial orientations; to the best of our knowledge, the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these different scenarios. Performance evaluation was done by cross-validation to avoid bias towards a specific set of images from a database.
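A CNN of the general kind this hybrid design builds on can be sketched as below. This is an illustrative, hypothetical architecture only: the layer sizes, the 48x48 grayscale input, and the random batch are assumptions, not the paper's actual network.

```python
# Minimal illustrative CNN mapping 48x48 grayscale face crops to the
# seven basic emotion classes named in the abstract. Layer sizes are
# illustrative only, not the paper's architecture.
import torch
import torch.nn as nn

EMOTIONS = ["sadness", "happiness", "disgust", "anger",
            "fear", "surprise", "neutral"]

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 48x48 -> 24x24
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 24x24 -> 12x12
    nn.Flatten(),
    nn.Linear(32 * 12 * 12, len(EMOTIONS)),     # one logit per emotion
)

batch = torch.randn(4, 1, 48, 48)               # four fake face crops
logits = model(batch)                           # shape: (4, 7)
print(logits.shape)
```

In practice, such a network would be trained with cross-entropy loss on the labelled datasets the abstract mentions (KDEF, JAFFE, SFEW), after the pre-processing steps the paper combines it with.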
Encapsulated social perception of emotional expressions
In this paper I argue that the detection of emotional expressions is, in its early stages, informationally encapsulated. I clarify and defend this view by appeal to data from social perception on the visual processing of faces, bodies, and facial and bodily expressions. Encapsulated social perception might exist alongside processes that are cognitively penetrated and that have to do with recognition and categorization, and it plays a central evolutionary role in preparing early and rapid responses to emotional stimuli.
Training methods for facial image comparison: a literature review
This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative. The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face matching task two physical representations of a face (live, photographs, movies) are compared, and so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks where a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998). Is there enough overlap between face recognition and matching that it is useful to look at the recognition literature?
No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), these kinds of studies provide no evidence of qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill. The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies which require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.