
    Social re-orientation and brain development: An expanded and updated view.

    Social development has been the focus of a great deal of neuroscience-based research over the past decade. In this review, we focus on providing a framework for understanding how changes in facets of social development may correspond with changes in brain function. We argue that (1) distinct phases of social behavior emerge based on whether the organizing social force is the mother, peer play, peer integration, or romantic intimacy; (2) each phase is marked by a high degree of affect-driven motivation that elicits a distinct response in subcortical structures; (3) activity generated by these structures interacts with circuits in the prefrontal cortex that guide executive functions, and with occipital and temporal lobe circuits, which generate specific sensory and perceptual social representations. We propose that the direction, magnitude and duration of interaction among these affective, executive, and perceptual systems may relate to distinct sensitive periods across development that contribute to establishing long-term patterns of brain function and behavior.

    Machine Analysis of Facial Expressions

    No abstract

    The role of facial movements in emotion recognition

    Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into how emotions are perceived in the face, they necessarily leave out any role of dynamic information. In this Review, we synthesize evidence from vision science, affective science and neuroscience to ask when, how and why dynamic information contributes to emotion recognition, beyond the information conveyed in static images. Dynamic displays offer distinctive temporal information such as the direction, quality and speed of movement, which recruits higher-level cognitive processes and supports social and emotional inferences that enhance judgements of facial affect. The positive influence of dynamic information on emotion recognition is most evident in suboptimal conditions, when observers are impaired and/or facial expressions are degraded or subtle. Dynamic displays further recruit early attentional and motivational resources in the perceiver, facilitating the prompt detection and prediction of others’ emotional states, with benefits for social interaction. Finally, because emotions can be expressed in various modalities, we examine the multimodal integration of dynamic and static cues across different channels, and conclude with suggestions for future research.

    Diagnostic information use to understand brain mechanisms of facial expression categorization

    Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and the subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1.

    In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing reviews functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool by which to understand the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms.

    Chapter 2 describes an eye-movement study showing that inputs to the visual system, via fixations, reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces, and how we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded the eye movements of observers who categorized the six basic categories of facial expressions, and we use a measure of task-relevant information (diagnosticity) to characterise oculomotor behaviour, with a focus on two findings. Firstly, fixated regions reveal expression differences. Secondly, across a sequence of fixations, the intersection of fixations with diagnostic information increases. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations.

    Chapter 3 combines psychophysical studies of visual recognition with the EEG (electroencephalogram) signal to infer the dynamics of feature extraction and use during the recognition of facial expressions. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes.

    In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas; such sensitivity would suggest top-down mechanisms, as V1 traditionally exhibits only simple response properties. Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas; within the hierarchical visual system, however, it is not known whether an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (the eyes and mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system.

    The visual cognition of facial expression processing concerns the interactive processing of bottom-up, sensory-driven information and top-down mechanisms that relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain, leading to proficient facial expression categorization.
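    For illustration only: the short Python sketch below shows one rough way the fixation-by-fixation trend mentioned in the abstract above could be quantified, by computing how much of a diagnostic-information map falls inside a window around each fixation so that the values can be compared across a sequence. The map, window size, coordinates and function name are assumptions made for this sketch, not the measure or data actually used in the thesis.

```python
import numpy as np

def diagnostic_overlap_by_fixation(fixations, diagnostic_map, radius=25):
    """For each fixation in a temporally ordered sequence, estimate the share
    of the image's diagnostic information falling within a square window
    around the fixation point (all names and parameters are illustrative).

    fixations      : list of (x, y) pixel coordinates, in order of occurrence
    diagnostic_map : 2D array the size of the face image; higher values mark
                     regions assumed to be more useful for the task
    radius         : half-width of the window around each fixation, in pixels
    """
    h, w = diagnostic_map.shape
    total = diagnostic_map.sum()
    overlaps = []
    for x, y in fixations:
        x0, x1 = max(0, int(x) - radius), min(w, int(x) + radius)
        y0, y1 = max(0, int(y) - radius), min(h, int(y) + radius)
        window = diagnostic_map[y0:y1, x0:x1]
        overlaps.append(window.sum() / total)
    return overlaps

# Toy example: a diagnostic map concentrated around a hypothetical eye region,
# and a three-fixation sequence that moves toward it; the printed values rise
# across the sequence, mirroring the kind of increase described above.
rng = np.random.default_rng(0)
face_map = rng.random((256, 256)) * 0.1
face_map[60:100, 70:190] += 1.0        # assumed diagnostic (eye) region
sequence = [(128, 200), (130, 120), (120, 80)]
print(diagnostic_overlap_by_fixation(sequence, face_map))
```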

    Temporal precedence of emotion over attention modulations in the lateral amygdala: Intracranial ERP evidence from a patient with temporal lobe epilepsy

    Previous fMRI studies have reported mixed evidence for the influence of selective attention on amygdala responses to emotional stimuli, with some studies showing "automatic" emotional effects to threat-related stimuli without attention (or even without awareness), but other studies showing a gating of amygdala activity by selective attention with no response to unattended stimuli. We recorded intracranial local field potentials from the intact left lateral amygdala in a human patient prior to surgery for epilepsy and tested, with millisecond time resolution, for neural responses to fearful faces appearing at either task-relevant or task-irrelevant locations. Our results revealed an early emotional effect in the amygdala arising prior to, and independently of, attentional modulation. However, at a later latency, we found a significant modulation of the differential emotional response when attention was directed toward or away from fearful faces. These results suggest separate influences of emotion and attention on amygdala activation and may help reconcile previous discrepancies concerning the relative responsiveness of the human amygdala to emotional and attentional factors.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Spatiotemporal dipole source localization of face processing ERPs in adolescents: a preliminary study

    Background: Despite extensive past investigation of the neural systems for face perception and emotion recognition in adults and young children, the precise temporal activation of brain sources specific to the processing of emotional facial expressions in older children and adolescents is not well known. This preliminary study aims to trace the spatiotemporal dynamics of facial emotion processing during adolescence and provide a basis for future developmental studies and comparisons with patient populations that have social-emotional deficits, such as autism.
    Methods: We presented pictures showing happy, angry, fearful, or neutral facial expressions to healthy adolescents (aged 10–16 years) and recorded 128-channel event-related potentials (ERPs) while they performed an emotion discrimination task. ERP components were analyzed for effects of age and emotion on amplitude and latency. The underlying cortical sources of scalp ERP activity were modeled as multiple equivalent current dipoles using Brain Electrical Source Analysis (BESA).
    Results: Initial global/holistic processing of faces (P1) took place in the visual association cortex (lingual gyrus) around 120 ms post-stimulus. Next, structural encoding of facial features (N170) occurred between 160 and 200 ms in the inferior temporal/fusiform region, and perhaps early emotion processing (Vertex Positive Potential, or VPP) in the amygdala and orbitofrontal cortex. Finally, cognitive analysis of facial expressions (P2) in the prefrontal cortex and emotional reactions in somatosensory areas were observed from about 230 ms onwards. The temporal sequence of cortical source activation in response to facial emotion processing was occipital, prefrontal, fusiform, and parietal for young adolescents, and occipital, limbic, inferior temporal, and prefrontal for older adolescents.
    Conclusion: This is a first report of high-density ERP dipole source analysis in healthy adolescents that traces the sequence of neural activity within the first 500 ms of categorizing emotion from faces. Our spatiotemporal brain source models showed the presence of adult-like cortical networks for face processing in adolescents, whose functional specificity to different emotions appears to be not yet fully mature. Age-related differences in brain activation patterns illustrate the continued development and maturation of distinct neural systems for processing facial expressions during adolescence, and possible changes in emotion perception, experience, and reaction with age.