12 research outputs found

    “You Should Have Seen the Look on Your Face…”: Self-awareness of Facial Expressions

    The awareness of facial expressions allows one to better understand, predict, and regulate one's states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities: a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and/or the duration of the expressions. The intensity thresholds at which individuals became aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend our understanding of human facial expression self-awareness in the two modalities.
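
    The logistic-regression threshold mentioned above can be illustrated with a minimal sketch: fit awareness (0/1) against expression intensity and read off the intensity at which the predicted probability crosses 0.5, i.e. where b0 + b1*x = 0. The data, learning rate, and single-predictor model below are hypothetical; the study's actual measures and covariates are not reproduced here.

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit y ~ sigmoid(b0 + b1*x) by plain gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

def awareness_threshold(b0, b1):
    """Intensity at which P(aware) = 0.5, i.e. b0 + b1*x = 0."""
    return -b0 / b1

# Hypothetical data: expression intensity (arbitrary units) and whether
# the participant reported awareness (1) or not (0).
intensity = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
aware     = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]

b0, b1 = fit_logistic(intensity, aware)
print(round(awareness_threshold(b0, b1), 2))
```

    The same 50% crossing point is what a packaged fit (e.g. statsmodels or scikit-learn) would yield; the hand-rolled descent just keeps the sketch dependency-free.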

    Postural Effects on the Mental Rotation of Body-Related Pictures: An fMRI Study

    This study investigated the embodied effects involved in the mental rotation of pictures of body parts (hands and feet). Blood oxygen level-dependent (BOLD) signals were collected from 18 healthy volunteers who performed mental rotation tasks on rotated drawings of hands under different arm postures. Drawings of hands congruent with left-hand posture evoked stronger activation in the left supplementary motor area (SMA), left precentral gyrus, and left superior parietal lobule (SPL) than did incongruent drawings. Drawings of hands congruent with right-hand posture evoked significant activation in the left inferior parietal lobule (IPL), right SMA, bilateral middle frontal gyrus (MFG), left inferior frontal gyrus (IFG), and bilateral superior frontal gyrus (SFG) compared with incongruent drawings. A similar methodology was applied to drawings of feet; however, no significant differences in brain activation were observed between congruent and incongruent drawings of feet. This finding suggests that body posture influences body part-related mental rotation in an effector-specific manner. A direct comparison between medially and laterally rotated drawings revealed activation in the right IPL, left precentral gyrus, bilateral IFG, and bilateral SFG. These results suggest that biomechanical constraints affect the cognitive process of mental rotation.

    Human-like Decision Making for Autonomous Vehicles at the Intersection Using Inverse Reinforcement Learning

    With the rapid development of autonomous driving technology, self-driven and human-driven vehicles will share roads in the future, requiring complex information exchange among vehicles. Autonomous vehicles therefore need to behave as similarly to human drivers as possible, so that their behavior can be effectively understood by the drivers of other vehicles and better matches human expectations of driving behavior. To this end, this paper studies the evaluation function of human drivers using inverse reinforcement learning, aiming for the learned behavior to better imitate that of human drivers. In addition, this paper proposes a semi-Markov model to extract the intentions of surrounding vehicles and classify them as defensive or cooperative, enabling the vehicle to adopt a reasonable response in different types of driving scenarios.
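
    A common inverse-reinforcement-learning recipe of the kind such work builds on is to parameterize the reward as a linear function of driving features and adjust the weights until the learner's discounted feature expectations match the human demonstrations. The sketch below shows one projection-style weight update; the feature names, trajectories, and fixed candidate policy are hypothetical, not taken from the paper.

```python
def feature_expectations(trajectories, phi, gamma=0.9):
    """Discounted empirical feature expectations mu = E[sum_t gamma^t * phi(s_t)]."""
    dim = len(phi(trajectories[0][0]))
    mu = [0.0] * dim
    for traj in trajectories:
        for t, s in enumerate(traj):
            f = phi(s)
            for i in range(dim):
                mu[i] += (gamma ** t) * f[i]
    return [v / len(trajectories) for v in mu]

def update_weights(w, mu_expert, mu_policy, lr=0.5):
    """One gradient-style step moving reward weights toward expert behaviour."""
    return [wi + lr * (e - p) for wi, e, p in zip(w, mu_expert, mu_policy)]

# Hypothetical states: (speed_margin, gap_to_crossing_vehicle), both in [0, 1].
phi = lambda s: [s[0], s[1]]

expert_trajs = [[(0.4, 0.9), (0.5, 0.8), (0.6, 0.8)]]  # human keeps large gaps
policy_trajs = [[(0.9, 0.2), (0.9, 0.1), (0.8, 0.2)]]  # current learner is aggressive

w = update_weights([0.0, 0.0],
                   feature_expectations(expert_trajs, phi),
                   feature_expectations(policy_trajs, phi))
print(w)
```

    After the update, the gap feature's weight becomes positive and the speed feature's negative, so a planner maximizing this reward would keep larger gaps at the intersection, like the demonstrator. A full method would re-solve for the optimal policy under the new reward and iterate.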

    Neural activity associated with attention orienting triggered by implied action cues

    Spatial attention can be directed by the actions of others. We used the event-related potential (ERP) method to investigate the neural underpinnings of attention orienting induced by implied body action. Participants performed a standard non-predictive cuing task in which a directional implied-action (throwing or running) or non-action (standing) cue was randomly presented and then followed by a target to the left or right of the central cue, regardless of cue direction. The cue-triggered ERP results demonstrated that implied-action cues, but not the non-action cue, shifted observers' spatial attention, as shown by robust anterior directing attention negativity (ADAN) effects for throwing and running cues. Furthermore, earlier N1 (100-170 ms) and P2 (170-260 ms) waveform differences occurred between implied-action and non-action cues over posterior electrodes. The P2 component might reflect perception of the implied motion signal in implied-action cues, and this implied motion perception might play an important role in facilitating the attentional shifts induced by such cues. Target-triggered ERP data (mainly the P3a component) indicated that implied-action cues (throwing and running) speeded and enhanced responses to valid targets compared with invalid targets. The P3a results further suggest that implied-action orienting may share mechanisms with voluntary attention, especially at the decision level of novel-stimulus processing. These results support previous behavioral findings that implied body actions direct spatial attention and extend our understanding of the attentional shifts elicited by implied-action cues. (C) 2016 Elsevier B.V. All rights reserved.

    Brain Activation in Contrasts of Microexpression Following Emotional Contexts

    The recognition of microexpressions may be influenced by emotional contexts: a microexpression is recognized more poorly when it follows a negative context than when it follows a neutral context. Based on this behavioral evidence, we predicted that the effect of emotional contexts might depend on neural activity. Using a synthesized-microexpression task modified from the Micro-Expression Training Tool (METT), we performed a functional MRI (fMRI) study to compare brain responses to the same targets following different contexts. Behaviorally, we observed that accuracies for target microexpressions following neutral contexts were significantly higher than those following negative or positive contexts. At the neural level, we found increased brain activation in contrasts of the same targets following different contexts, reflecting the discrepancy in the processing of emotional contexts. The increased activation implies that different emotional contexts may differently influence the processing of subsequent target microexpressions, and further suggests interactions between the processing of emotional contexts and that of microexpressions.

    Effects of task-irrelevant emotional information on deception

    Deception has been reported to be influenced by task-relevant emotional information from an external stimulus. However, it remains unclear how task-irrelevant emotional information influences deception. In the present study, facial expressions of different valence and emotional intensity were presented to participants, who were asked to make either truthful or deceptive gender judgments according to preceding cues. We observed an influence of facial expression intensity on individuals' cognitive cost of deceiving (the mean difference between truthful and deceptive response times): a larger cost was observed for high-intensity faces than for low-intensity faces. These results provide insight into how the automatic attraction of attention by task-irrelevant emotional information in facial expressions influences the cognitive cost of deceiving.

    CAS(ME)(2): A Database for Spontaneous Macro-Expression and Micro-Expression Spotting and Recognition

    Deception is a very common phenomenon, and its detection can be beneficial in daily life. Compared with other deception cues, the micro-expression has shown great potential as a promising cue for deception detection. Spotting and recognizing micro-expressions in long videos may significantly aid both law enforcement officers and researchers. However, no database containing both micro-expressions and macro-expressions in long videos has been publicly available. To facilitate development in this field, we present a new database, Chinese Academy of Sciences Macro-Expressions and Micro-Expressions (CAS(ME)(2)), which provides both macro-expressions and micro-expressions in two parts (A and B). Part A contains 87 long videos with spontaneous macro-expressions and micro-expressions. Part B includes 300 cropped spontaneous macro-expression samples and 57 micro-expression samples. The emotion labels are based on a combination of action units (AUs), self-reported emotion for every facial movement, and the emotion types of the emotion-evoking videos. Local Binary Pattern (LBP) features were employed for the spotting and recognition of macro-expressions and micro-expressions, and the results are reported as a baseline evaluation. The CAS(ME)(2) database offers both long videos and cropped expression samples, which may aid researchers in developing efficient algorithms for the spotting and recognition of macro-expressions and micro-expressions.
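
    As a rough illustration of the LBP baseline, each pixel is encoded by thresholding its eight neighbours against the centre value, and the per-frame histogram of codes serves as the texture descriptor compared across frames to spot expression onsets. The sketch below is a minimal pure-Python version of the basic operator; the toy frame is hypothetical, not CAS(ME)(2) data, and the actual baseline's block division and distance measure are omitted.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code for pixel (r, c)."""
    center = img[r][c]
    # Neighbours visited clockwise from top-left; each contributes one bit
    # set when the neighbour is at least as bright as the centre.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over interior pixels: the frame's texture
    descriptor, whose change over time can signal a facial movement."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Hypothetical 4x4 grayscale patch with a bright central region.
frame = [
    [10, 10, 10, 10],
    [10, 50, 50, 10],
    [10, 50, 50, 10],
    [10, 10, 10, 10],
]
print(lbp_code(frame, 1, 1))  # bits set only toward the bright neighbours
```

    A spotting pipeline would compute such histograms per frame (usually per face region) and flag frames whose histogram distance from a baseline frame exceeds a threshold.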

    Voluntary action and tactile sensory feedback in the intentional binding effect

    The intentional binding effect refers to a subjective compression of the temporal interval between a start point initiated by a voluntary action and an endpoint signaled by external sensory (visual or auditory) feedback. The present study explored the influence of tactile sensory feedback on this binding effect by comparing voluntary key-press actions with voluntary key-release actions. In Experiment 1, each participant was instructed to report the perceived interval (in ms) between an action and the subsequent visual feedback. In this task, the action (key-press or key-release) was either voluntarily performed by the participant or a kinematically identical movement was passively applied to the participant's left index finger. In Experiment 2, we explored whether the difference in time perception was affected by the direction of the action. In Experiment 3, we developed an apparatus in which two parallel laser beams were generated by a laser emission unit and detected by a laser receiver unit, allowing the movement of the left index finger to be detected without it touching a keyboard (i.e., without any tactile sensory feedback). Convergent results across the experiments showed that the temporal binding effect occurred only when the action was both voluntary and involved physical contact with the key, suggesting that the combination of intention and tactile sensory feedback, as a form of top-down processing, likely distracted attention from temporal events and caused the different binding effects.