16 research outputs found

    I Undervalue You but I Need You: The Dissociation of Attitude and Memory Toward In-Group Members

    In the present study, in-group bias and in-group derogation among mainland Chinese participants were investigated through a rating task and a recognition test. In two experiments, participants from two universities of similar rank rated novel faces or names and then took a recognition test. Half of the faces or names were labeled as belonging to the participants' own university and the other half to the counterpart university. For both faces and names, rating scores for out-group members were consistently higher than those for in-group members, whereas recognition accuracy showed the opposite pattern. These results indicate that attitude and memory for group-relevant information may be dissociated among mainland Chinese

    A Study on the Emotional Analysis of Abandoned Surrogacy Events Based on Text Mining

    No full text
    In late January 2021, news that actress Zheng Shuang had had children via surrogacy abroad and had wanted to give them up sparked a public outcry. This paper takes Weibo comments on Zheng Shuang's surrogacy and abandonment as its research object. Web crawler technology is first used to collect and mine the comment text, and the ROSTCM software is then used to analyze the text data, exploring the topics discussed by Weibo users after the abandonment event and their emotional tendencies toward it
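The pipeline described above (crawl comments, then score their emotional tendency) can be sketched in miniature. ROSTCM is GUI software with built-in Chinese sentiment lexicons; the tiny English lexicons and the `sentiment` function below are hypothetical stand-ins used purely to illustrate the lexicon-counting idea, not the study's actual tooling.

```python
# Minimal lexicon-based sentiment sketch. The word lists are hypothetical
# placeholders; the study itself used ROSTCM's Chinese lexicons.
POSITIVE = {"support", "love", "great"}
NEGATIVE = {"angry", "shame", "boycott"}

def sentiment(comment: str) -> str:
    """Label a comment by counting positive vs. negative lexicon hits."""
    tokens = comment.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

comments = ["this is a shame , boycott her", "we support the children"]
print([sentiment(c) for c in comments])  # ['negative', 'positive']
```

Aggregating these per-comment labels over the crawled corpus gives the kind of emotional-tendency distribution the paper reports; real Chinese text would additionally need word segmentation (e.g. with a tokenizer) before lexicon matching.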

    Eye-movement analysis of training effectiveness for microexpression recognition

    No full text

    Do different emotional valences have same effects on spatial attention?

    No full text
    Emotional stimuli are processed with priority relative to neutral stimuli. However, it remains unclear whether different emotions influence attention similarly or distinctly. We conducted three experiments using three emotional valences: positive, negative, and neutral. Pictures of money, a snake, a lamp, and the letter x were used as stimuli in Experiment 1. In Experiment 2A, schematic emotional faces (angry, smiling, and neutral) were used to control stimulus complexity. In Experiment 2B, the stimuli were three line drawings selected from the Chinese Version of the Abbreviated PAD Emotion Scales, corresponding to anger, joy, and neutral emotion. We employed the inhibition-of-return paradigm (IOR, an effect on spatial attention whereby people are slower to react to stimuli appearing at recently attended locations, cf. Posner & Cohen, 1984), which used exogenous cues and included 20% catch trials. Seventy-four university students participated. We found that participants needed more time to process negative emotional pictures (Exp 1, 2A & 2B), and that IOR could occur at an interstimulus interval (ISI) as short as 50 ms (Exp 1). At the 50 ms ISI, IOR occurred only when the schematic face was angry, and RTs for angry schematic faces were significantly longer than for the other two faces (Exp 2A). We further found that expectancy may play a role in explaining these results (Exp 3). Across all three experiments, there was a consistent U-shaped relationship between RT and ISI, irrespective of cue validity and emotional valence. These results show that different emotional valences influence attention differently: positive and neutral stimuli are processed more rapidly than negative stimuli

    Effects of the duration of expressions on the recognition of microexpressions

    No full text
    Objective: To investigate the effects of expression duration on the recognition of microexpressions, which are closely related to deception. Methods: In two experiments, participants were briefly (20 to 300 ms) shown one of six basic expressions and then asked to identify it. Results: Participants' recognition of microexpressions improved with expression duration, reaching a turning point at 200 ms before levelling off. The results also indicated that practice improved performance. Conclusions: These results suggest that a proper upper limit for the duration of microexpressions is around 200 ms (1/5 of a second), and confirm that the ability to recognize microexpressions can be enhanced with practice

    The Machine Knows What You Are Hiding: An Automatic Micro-expression Recognition System

    No full text
    Micro-expressions are among the most important behavioral cues for detecting lies and dangerous demeanor. However, they are difficult for humans to detect. In this paper, a new approach for automatic micro-expression recognition is presented. The system is fully automatic and operates frame by frame: it locates the face, extracts features using Gabor filters, and then employs GentleSVM to identify micro-expressions. For spotting, the system achieved 95.83% accuracy; for recognition, it achieved 85.42% accuracy, higher than the performance of trained human subjects. To further improve performance, future research should focus on a more representative training set, a more sophisticated testing bed, and an accurate image-alignment method
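The feature-extraction stage of the pipeline above (per-frame Gabor filtering of a face crop) can be sketched as follows. This is a minimal numpy-only illustration of the idea: the filter parameters, the four orientations, and the mean-magnitude pooling are assumptions for the example, not the paper's actual filter bank, and the GentleSVM classification stage is omitted.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=5.0, theta=0.0, sigma=3.0, gamma=0.5):
    """Real part of a 2-D Gabor filter (illustrative parameter choices)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean response magnitude per orientation: a crude texture descriptor."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        h, w = img.shape
        kh, kw = k.shape
        # valid-mode 2-D correlation via sliding windows (small images only)
        resp = np.array([
            [np.sum(img[i:i + kh, j:j + kw] * k) for j in range(w - kw + 1)]
            for i in range(h - kh + 1)
        ])
        feats.append(np.abs(resp).mean())
    return np.array(feats)

frame = np.random.default_rng(0).random((32, 32))  # stand-in for a face crop
features = gabor_features(frame)
print(features.shape)  # (4,) - one feature per orientation
```

In a full system, such per-frame feature vectors would be fed to the classifier (GentleSVM in the paper) to spot and label micro-expressions; production code would use an optimized filtering routine rather than the explicit sliding-window loop shown here.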

    Recognizing Fleeting Facial Expressions with Different Viewpoints

    No full text
    Most research on facial expression recognition has used static, front-view, long-lasting expression stimuli; little work concerns the recognition of fleeting expressions from different viewpoints. To investigate how duration and viewpoint jointly influence expression recognition, we presented expressions from two viewpoints (three-quarter and profile views) transiently, with durations of 20, 40, 80, 120, 160, 200, 240, or 280 ms. Experiment 1 used static facial expressions; Experiment 2 added dynamic information by presenting a neutral expression before and after the emotional expression. The results showed an interaction between viewpoint and duration on expression recognition. Furthermore, happiness was the easiest expression to recognize, even under fleeting presentation and side views. These findings can inform automatic expression recognition under conditions of short duration and varied viewpoints.