    Status is fine for the in-group but out-group members watch out: Examining an optimal model of face processing using eye-tracking

    Face recognition is an important factor in everyday social interaction. Bruce and Young's (1986) model of face processing has been widely accepted, but it fails to account for differential processing based on race. MacLin and MacLin (in press) propose a cognitive gating mechanism (CGM) under which different processing strategies are used for in-group and out-group members. To date, the model has been examined only with novel stimuli. The present research examined the model using famous and nonfamous African-American and Caucasian faces to determine whether the CGM adequately explains the recognition of familiar faces. Reaction times and eye movements were recorded while participants completed a racial categorization task or a famousness classification task. Results indicate that familiarity with a face indeed plays a role in the processing of own- and other-race faces. Reaction times and eye movements differed as a function of race, fame, and task type. Implications for a modified version of the CGM and other existing face models are discussed.

    Visual and Lingual Emotion Recognition using Deep Learning Techniques

    Emotion recognition has been an integral part of many applications, including video games, cognitive computing, and human-computer interaction. Emotion can be recognized from many sources, including speech, facial expressions, hand gestures, and textual attributes. We have developed a prototype emotion recognition system using computer vision and natural language processing techniques. Our hybrid system uses mobile camera frames and features extracted from speech, namely Mel-Frequency Cepstral Coefficients (MFCCs), to recognize a person's emotion. To recognize emotions from facial expressions, we have developed a Convolutional Neural Network (CNN) model with an accuracy of 68%. To recognize emotions from speech MFCCs, we have developed a sequential model with an accuracy of 69%. Our Android application can access the front and back cameras simultaneously, which allows it to predict the emotion of the overall conversation between the people facing each camera. The application can also record the audio of that conversation. The two predicted emotions (face and speech) are merged into a single emotion using a fusion algorithm. Our models are converted to TensorFlow Lite models to reduce model size and accommodate the limited processing power of mobile devices. Our system classifies emotions into seven classes: neutral, surprise, happy, fear, sad, disgust, and angry.
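
    The abstract does not specify the network architectures or the fusion rule. A minimal sketch of such a two-stream setup, written in Python with TensorFlow/Keras, might look as follows; input shapes, layer sizes, and the averaging-based fusion are assumptions, not the authors' implementation.

```python
# A minimal sketch of the two-stream design described above. Input shapes,
# layer sizes, and the averaging-based fusion rule are assumptions; the
# paper's exact models and fusion algorithm are not given in the abstract.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7  # neutral, surprise, happy, fear, sad, disgust, angry

# Face stream: a small CNN over 48x48 grayscale frames (a common choice
# for facial-expression datasets such as FER2013; assumed here).
face_model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Speech stream: a dense sequential model over a 40-coefficient MFCC
# vector (e.g. per-utterance mean MFCCs; the feature layout is assumed).
speech_model = tf.keras.Sequential([
    tf.keras.Input(shape=(40,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

def fuse(face_probs: np.ndarray, speech_probs: np.ndarray, w: float = 0.5) -> int:
    """Late fusion: weighted average of the two softmax outputs, then argmax."""
    return int(np.argmax(w * face_probs + (1.0 - w) * speech_probs))

# On-device deployment: convert each trained Keras model to TensorFlow Lite.
tflite_face = tf.lite.TFLiteConverter.from_keras_model(face_model).convert()
tflite_speech = tf.lite.TFLiteConverter.from_keras_model(speech_model).convert()
```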

    Who is that? Brain networks and mechanisms for identifying individuals

    Social animals can identify conspecifics from many forms of sensory input. However, whether the neuronal computations that support this ability to identify individuals rely on modality-independent convergence or involve ongoing synergistic interactions along the multiple sensory streams remains controversial. Direct neuronal measurements at relevant brain sites could address such questions, but this requires better bridging of the work in humans and animal models. Here, we review recent studies in nonhuman primates on voice and face identity-sensitive pathways and evaluate their correspondence to relevant findings in humans. This synthesis provides insights into converging sensory streams in the primate anterior temporal lobe (ATL) for identity processing. Furthermore, we advance a model and suggest how alternative neuronal mechanisms could be tested.

    Autistic trait interactions underlie sex-dependent facial recognition abilities in the normal population

    Autistic face processing difficulties have been attributed either to a uniquely social deficit or to a piecemeal cognitive "style". The comorbidity of social deficits and piecemeal cognition in autism makes these accounts difficult to tease apart. Because these traits vary in the general population, where they are more separable, they offer another way to compare the accounts. Participants completed the Autism Quotient (AQ) survey of autistic traits and one of three face recognition tests: full-face, eyes-only, or mouth-only. Social traits predicted performance in the full-face condition in both sexes. In the eyes-only condition, males' performance was predicted by a social × cognitive trait interaction: attention to detail boosted face recognition in males with few social traits but hindered performance in those reporting many social traits. This suggests that social and non-social Autism Spectrum Conditions (ASC) traits interact at the behavioral level. In the presence of few ASC-like difficulties in social reciprocity, an ASC-like attention to detail may confer advantages on typical males' face recognition skills. On the other hand, when attention to detail co-occurs with difficulties in social reciprocity, a detailed focus may exacerbate such already present social difficulties, as is thought to occur in autism.
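
    For illustration, a social × cognitive interaction of this kind is typically tested with a moderated regression. The sketch below fits one on synthetic data; the column names, effect sizes, and data are hypothetical stand-ins, not the authors' analysis.

```python
# Illustrative only, not the authors' analysis: a moderated regression
# testing a social x attention-to-detail interaction on face recognition.
# Column names, effect sizes, and the data itself are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "aq_social": rng.normal(size=n),  # AQ social-traits score (standardized)
    "aq_detail": rng.normal(size=n),  # AQ attention-to-detail score
})
# Simulate the reported crossover: detail helps when social difficulties
# are few and hurts when they are many (a negative interaction term).
df["recognition"] = (0.3 * df["aq_detail"]
                     - 0.4 * df["aq_social"]
                     - 0.5 * df["aq_social"] * df["aq_detail"]
                     + rng.normal(scale=1.0, size=n))

# "a * b" in a formula expands to both main effects plus their interaction.
model = smf.ols("recognition ~ aq_social * aq_detail", data=df).fit()
print(model.summary())
```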

    EMPATH: A Neural Network that Categorizes Facial Expressions

    There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the tasks' implementations in the brain.
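
    As a rough illustration of how categorical perception is probed in classifier models of this kind, the sketch below measures discrimination along a morph continuum as the distance between a model's outputs for adjacent morphs. The linear readout and prototype feature vectors are synthetic stand-ins, not EMPATH's actual code.

```python
# A rough sketch of a categorical-perception probe for a classifier model
# of expression recognition (not EMPATH itself): measure discrimination
# along a morph continuum as the distance between the model's outputs for
# adjacent morphs. The readout weights and prototypes below are synthetic.
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 64))   # stand-in readout: 6 emotions x 64 features

happy = rng.normal(size=64)    # synthetic "happy" prototype features
fear = rng.normal(size=64)     # synthetic "fear" prototype features

# An 11-step morph continuum from "happy" to "fear".
alphas = np.linspace(0.0, 1.0, 11)
outputs = np.array([softmax(W @ ((1 - a) * happy + a * fear)) for a in alphas])

# Discriminability of adjacent morph pairs = distance between model outputs.
# Categorical perception predicts a peak near the category boundary.
dists = np.linalg.norm(np.diff(outputs, axis=0), axis=1)
for a, d in zip(alphas[1:], dists):
    print(f"morph step ending at {a:.1f}: output distance {d:.3f}")
```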

    Atypical eye contact in autism: Models, mechanisms and development

    An atypical pattern of eye contact behaviour is one of the most significant symptoms of Autism Spectrum Disorder (ASD). Recent empirical advances have revealed the developmental, cognitive and neural basis of atypical eye contact behaviour in ASD. We review different models and advance a new 'fast-track modulator model'. Specifically, we propose that atypical eye contact processing in ASD originates in the lack of influence from a subcortical face and eye contact detection route, which is hypothesized to modulate eye contact processing and guide its emergent specialization during development.