
    Detecting social signals from the face

    This thesis investigates our sensitivity to social signals from the face, both in health and disease, and explores some of the methodologies employed to measure them. The first set of experiments used forced choice and free naming paradigms to investigate the interpretation of a set of facial expressions by Western and Japanese participants. Performance in the forced choice task exceeded that measured in the free naming task for both cultures, but the Japanese participants were found to be particularly poor at labelling expressions of fear and disgust. The difficulties experienced with translation and interpretation in these tasks led to the development of a psychophysical paradigm which was used to measure the signalling strength of facial expressions without the need for participants to interpret what they saw. Psychophysical tasks were also used to measure sensitivity to eye gaze direction. A 'live' and a screen-based task produced comparable thresholds and revealed that our sensitivity to these ocular signals is at least as good as Snellen acuity. Manipulations of the facial surround in the screen-based task revealed that the detection of gaze direction was facilitated by the presence of the facial surround, and as such it can be assumed that gaze discriminations are likely to be made in conjunction with other face processing analyses. The tasks developed in these chapters were used to test two patients with bilateral amygdala damage. Patients with this brain injury have been reported to experience difficulties in the interpretation of facial and auditory signals of fear. In this thesis, their performance was found to depend on the task used to measure it. However, neither patient was found to be impaired in their ability to label fearful expressions compared to control participants. Instead, patient SE demonstrated a consistently poor performance in his ability to interpret expressions of disgust.
Experiments 2, 3, 4 and 5 of Chapter 3 have also been reported in Perception, 1995, Vol. 24, Supplement, p. 14: The Face as a long distance transmitter. Jenkins, J., Craven, B., & Bruce, V. Experiments 1, 2, 3 and 4 of Chapter 3 were also reported in the Technical Report of the Institute of Electronics, Information and Communication Engineers, HIP 96-39 (1997-03): Methods for detecting social signals from the face. Jenkins, J., Craven, B., Bruce, V., & Akamatsu, S. Experiments 2 and 5 of Chapter 3, and a selection of the patient studies from Chapter 6, were reported at the Experimental Psychology Society Bristol meeting, 1996, and at the Applied Vision Association Annual Meeting, April 1996: Sensitivity to Expressive Signals from the Human Face: Psychophysical and Neuropsychological Investigations. Jenkins, J., Bruce, V., Calder, A., & Craven, B.

    Observers’ Pupillary Responses in Recognising Real and Posed Smiles: A Preliminary Study

    Pupillary responses (PR) change differently for different types of stimuli. This study aims to check whether observers’ PR can distinguish real from posed smiles in a set of smile images and videos. We showed the smile image and smile video stimuli to observers and recorded their pupillary responses in four different situations, namely paired videos, paired images, single videos, and single images. When observers viewed the same smiler in both real and posed smile forms, we refer to the stimuli as “paired”; otherwise we use the term “single”. The primary analysis of the pupil data revealed that the differences in pupillary response between real and posed smiles are most pronounced for paired videos. This result was obtained from timeline analysis, a KS test, and an ANOVA. Overall, our model can recognise real and posed smiles from observers’ pupillary responses rather than from the smilers’ responses. Our research will be applicable in affective computing and computer-human interaction for measuring emotional authenticity
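The statistical comparison the abstract describes can be sketched in a few lines. This is a minimal illustration assuming SciPy, with synthetic stand-in values for mean pupil diameter per trial, not the study's actual data or pipeline:

```python
# Hypothetical sketch: comparing pupil-diameter samples recorded while
# observers viewed real vs. posed smiles, using a two-sample KS test and a
# one-way ANOVA as the study reports. The values below are synthetic.
import numpy as np
from scipy.stats import ks_2samp, f_oneway

rng = np.random.default_rng(0)
# Stand-ins for mean pupil diameter (mm) per trial; real values would come
# from an eye tracker, time-locked to stimulus presentation.
real_smiles = rng.normal(loc=3.6, scale=0.2, size=40)
posed_smiles = rng.normal(loc=3.4, scale=0.2, size=40)

ks_stat, ks_p = ks_2samp(real_smiles, posed_smiles)
f_stat, anova_p = f_oneway(real_smiles, posed_smiles)
print(f"KS: D={ks_stat:.3f}, p={ks_p:.4g}")
print(f"ANOVA: F={f_stat:.2f}, p={anova_p:.4g}")
```

A small p-value from both tests would indicate that the two pupil-response distributions differ, which is the pattern the study reports for the paired-video condition.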

    Sensorimotor cortex as a critical component of an 'extended' mirror neuron system: Does it solve the development, correspondence, and control problems in mirroring?

    A core assumption of how humans understand and infer the intentions and beliefs of others is the existence of a functional self-other distinction. At least two neural systems have been proposed to manage such a critical distinction. One system, part of the classic motor system, is specialized for the preparation and execution of motor actions that are self-realized and voluntary, while the other appears primarily involved in capturing and understanding the actions of non-self or others. The latter system, of which the mirror neuron system is part, is the canonical action 'resonance' system in the brain that has evolved to share many of the same circuits involved in motor control. Mirroring or 'shared circuit' systems are assumed to be involved in resonating, imitating, and/or simulating the actions of others. A number of researchers have proposed that shared representations of motor actions may form a foundational cornerstone for higher order social processes, such as motor learning, action understanding, imitation, perspective taking, understanding facial emotions, and empathy. However, mirroring systems that evolve from the classic motor system present at least three problems: a development, a correspondence, and a control problem. Developmentally, the question is how does a mirroring system arise? How do humans acquire the ability to simulate through mapping observed onto executed actions? Are mirror neurons innate and therefore genetically programmed? To what extent is learning necessary? In terms of the correspondence problem, the question is how does the observer agent know what the observed agent's resonance activation pattern is? How does the matching of motor activation patterns occur? Finally, in terms of the control problem, the issue is how to efficiently control a mirroring system that is turned on automatically through observation. Or, as others have stated the problem more succinctly: "Why don't we imitate all the time?"
In this review, we argue from anatomical, physiological, modeling, and functional perspectives that a critical component of the human mirror neuron system is sensorimotor cortex. Not only are sensorimotor transformations necessary for computing the patterns of muscle activation and kinematics during action observation, but they also provide potential answers to the development, correspondence, and control problems

    Emotional contagion and prosocial behavior in rodents

    Empathy is critical to adjusting our behavior to the state of others. The past decade dramatically deepened our understanding of the biological origin of this capacity. We now understand that rodents robustly show emotional contagion for the distress of others via neural structures homologous to those involved in human empathy. Their propensity to approach others in distress strengthens this effect. Although rodents can also learn to favor behaviors that benefit others via structures overlapping with those of emotional contagion, they do so less reliably and more selectively. Together, this suggests evolution selected mechanisms for emotional contagion to prepare animals for dangers by using others as sentinels. Such shared emotions additionally can, under certain circumstances, promote prosocial behavior

    The perception and cognition of emotion from motion

    Emotional expression has been intensively researched in the past; however, this research has typically been conducted on facial expressions and only seldom on dynamic stimuli. We have been interested in better understanding the perception and cognition of emotion from human motion. To this end, 11 experiments were conducted that spanned the perception and representation of emotion, the role spatial and temporal cues play in the perception of emotion, and finally high-level cognitive features in the categorisation of emotion. The stimuli we employed were point-light displays of human arm movements recorded as actors portrayed ordinary actions with emotion. To create them we used motion capture technology and computer animation techniques. Results from the first two experiments showed basic human competence in the recognition of emotion and that the representation of emotions lies along two dimensions. These dimensions resembled arousal and valence, and the psychological space resembled that found for both facial expression and experienced affect. In a search for possible stimulus properties that would act as correlates for the dimensions, it emerged that arousal could be accounted for by movement speed, while valence was related to phase relations between joints in the displays. In the third experiment we manipulated the dimension of arousal and showed that, through a modulation of duration, the perception of angry, sad and neutral movements could be modulated. In experiments 4-7 the contribution of spatial cues to the perception of emotion was explored, and in the final set of experiments (8-11) the perception of emotion was examined from a cognitive perspective. Through the course of the research a number of interesting findings emerged that suggested three primary directions for future research: the possible relationship between attributions of animacy and emotion to animate and inanimate non-humans; the phase or timing relationships between elements in a display as a categorical cue to valence; and finally the unexplored relationship between cues to emotion from movements and faces
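The reported link between movement speed and the arousal dimension lends itself to a simple computation: average frame-to-frame joint displacement in a point-light recording. The following is an illustrative sketch only; the array layout, frame rate, and toy circular trajectories are assumptions for demonstration, not the thesis's actual analysis pipeline:

```python
# Hypothetical sketch: mean joint speed as a correlate of perceived arousal
# in point-light motion-capture data. Shapes and frame rate are assumptions.
import numpy as np

def mean_joint_speed(positions: np.ndarray, fps: float = 60.0) -> float:
    """positions: (frames, joints, 3) array of 3-D joint coordinates."""
    step = np.diff(positions, axis=0)      # frame-to-frame displacement
    dist = np.linalg.norm(step, axis=-1)   # per-joint step length
    return float(dist.mean() * fps)        # distance units per second

def circular_arm(omega: float, frames: int = 120, fps: float = 60.0) -> np.ndarray:
    # A single "joint" tracing a circle of radius 1 at angular speed omega (rad/s),
    # standing in for a captured arm movement.
    t = np.arange(frames) / fps
    x, y = np.cos(omega * t), np.sin(omega * t)
    return np.stack([x, y, np.zeros_like(t)], axis=-1)[:, None, :]

fast = circular_arm(8 * np.pi)  # stand-in for a high-arousal (e.g. angry) movement
slow = circular_arm(2 * np.pi)  # stand-in for a low-arousal (e.g. sad) movement
assert mean_joint_speed(fast) > mean_joint_speed(slow)
```

Under this toy measure, the faster trajectory scores higher, mirroring the finding that speeding up or slowing down a movement shifts it along the arousal dimension.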

    Empathy and Evaluative Inquiry


    The Complementary Brain: A Unifying View of Brain Specialization and Modularity

    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)

    The Complementary Brain: From Brain Dynamics To Conscious Experiences

    How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor, which holds that brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized. Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)