
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was found between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
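
    To make the spatial manipulation concrete, the following is a minimal Python sketch (not from the study itself) of how rectangle centres might be displaced by ±1 degree along imaginary spokes from fixation. The ring layout and baseline eccentricity are illustrative assumptions; the abstract does not specify them.

        import numpy as np

        def shifted_positions(n_items=8, eccentricity_deg=4.0, shift_deg=1.0, rng=None):
            """Place n_items rectangle centres on a ring around fixation, then
            displace each by +/- shift_deg of visual angle along its own spoke
            (the line from fixation through the item), as in the modified task.
            The 4-deg baseline eccentricity is an assumption for illustration."""
            rng = np.random.default_rng() if rng is None else rng
            angles = np.linspace(0.0, 2.0 * np.pi, n_items, endpoint=False)
            # Each item independently moves shift_deg inward or outward along its spoke.
            radii = eccentricity_deg + rng.choice([-shift_deg, shift_deg], size=n_items)
            x = radii * np.cos(angles)
            y = radii * np.sin(angles)
            return np.column_stack([x, y])  # (x, y) offsets from fixation, in degrees

        print(shifted_positions())  # eight displaced rectangle centres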

    (e)motion: The interplay between emotional processing and the sensorimotor system

    This thesis aimed to explore the relationship between emotional processing and the sensorimotor system, focusing mainly on one information source: emotional body language (EBL). We investigated this relationship in four experiments and through several methodologies ranging from behavioral to neurophysiological techniques, namely transcranial magnetic stimulation (TMS) and high-density electroencephalography (hdEEG), in healthy subjects (experiments 1, 2 and 3) and in patients affected by Parkinson’s Disease (PD) (experiment 4). In the first experiment, whose aims were to explore the ability to process, discriminate and recognize emotional information carried by body language and to test motor responses through response times (RTs) to emotional stimuli (i.e., EBL and IAPS), we found that fearful EBL is rapidly recognized and processed, probably because of a rapid and instinctual activation of several brain structures involved in defensive reactions. In the second experiment we investigated the effects of emotion processing (i.e., Fear, Happy and Neutral) on the sensorimotor system through a TMS protocol assessing short-latency afferent inhibition (SAI) at two timepoints (i.e., 120 and 300 ms). Our results showed that sensorimotor inhibition in the first 120 ms after stimulus onset is increased during processing of fearful emotional stimuli, reflecting the fact that automatic processing of threatening information can modulate attentional resources and cholinergic activity. In the third experiment, where a protocol involving hdEEG and a source-localization workflow was implemented to study event-related potentials (ERPs) and mu-alpha and beta-band rhythms during EBL processing, we confirmed what was observed in the second experiment by showing that, during processing of fearful body expressions, there was increased activity in the β frequency band in the somatosensory cortex, which in turn may be one of the factors responsible for reducing the activation of motor-related areas and, hence, increasing sensorimotor inhibition. Lastly, in the fourth experiment we partly replicated the experimental design of the first experiment, but in patients with Parkinson’s disease and using not only emotional body language stimuli and emotional scenes but also emotional facial expressions. Our results showed that motor responses in PD patients are speeded when observing a potential threat, for both embodied sets of stimuli (EBL and facial expressions). We discussed this finding in relation to the “kinesia paradoxa” phenomenon, defined as “the sudden transient ability of a patient with PD to perform a task he or she was previously unable to perform”.
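
    The beta-band finding in the third experiment rests on standard spectral band-power estimation. Below is a minimal sketch, using Welch's method from SciPy, of how beta-band (13–30 Hz) power could be estimated for a single EEG channel; it is not the authors' hdEEG source-localization workflow, and the sampling rate and band edges are assumptions.

        import numpy as np
        from scipy.signal import welch

        def beta_band_power(epoch, fs=1000.0, band=(13.0, 30.0)):
            """Estimate band power for one EEG channel and trial.

            epoch : 1-D array of voltage samples.
            fs    : sampling rate in Hz (assumed; match the recording).
            band  : frequency band in Hz; (13, 30) is a conventional beta range.
            """
            freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 512))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            # Integrate the power spectral density over the band.
            return psd[mask].sum() * (freqs[1] - freqs[0])

        # Toy usage with simulated one-second epochs for two conditions.
        rng = np.random.default_rng(0)
        fear, neutral = rng.standard_normal(1000), rng.standard_normal(1000)
        print(beta_band_power(fear), beta_band_power(neutral))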

    On the neural networks of empathy: A principal component analysis of an fMRI study

    © 2008 Nomi et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License.

    Diagnostic information use to understand brain mechanisms of facial expression categorization

    Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. The brain areas involved exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1. In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing is reviewed, with its functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool for understanding the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms. Chapter 2 describes an eye-movement study revealing that inputs to the visual system, delivered via fixations, reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces. How we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded eye movements of observers who categorized the six basic categories of facial expressions. We use a measure of task-relevant information (diagnosticity) to discuss oculomotor behaviour, with focus on two findings. Firstly, fixated regions reveal expression differences. Secondly, the intersection of fixations with diagnostic information increases across the fixation sequence (a toy illustration of this measure follows this abstract). This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations. In Chapter 3, a combination of psychophysical studies of visual recognition and the EEG (electroencephalogram) signal is used to infer the dynamics of feature extraction and use during the recognition of facial expressions. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. 
The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes. In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas; because V1 traditionally exhibits only simple response properties, such activation suggests top-down mechanisms. Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas. Within the hierarchical visual system, however, it is not known whether an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (eyes and mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing is concerned with the interactive processing of bottom-up, sensory-driven information and top-down mechanisms to relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain, leading to proficient facial expression categorization.
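
    As an illustration of the fixation-diagnosticity measure from Chapter 2, here is a toy Python sketch that samples a 2-D diagnosticity map at successive fixation locations so that early and late fixations can be compared. The map, coordinates and Gaussian “mouth” hotspot are hypothetical, not the thesis's actual stimuli or pipeline.

        import numpy as np

        def diagnosticity_at_fixations(diag_map, fixations):
            """Sample a diagnosticity map at each fixation.

            diag_map  : 2-D array (pixels); higher = more task-relevant information.
            fixations : array of (row, col) pixel coordinates in temporal order.
            Returns the map value at each fixation, so one can test whether later
            fixations intersect more diagnostic regions than earlier ones."""
            rows = np.clip(fixations[:, 0].astype(int), 0, diag_map.shape[0] - 1)
            cols = np.clip(fixations[:, 1].astype(int), 0, diag_map.shape[1] - 1)
            return diag_map[rows, cols]

        # Toy usage: a map peaking at the "mouth" region of a 256x256 face image.
        yy, xx = np.mgrid[0:256, 0:256]
        diag_map = np.exp(-((yy - 190.0) ** 2 + (xx - 128.0) ** 2) / (2 * 30.0 ** 2))
        fixations = np.array([[100, 128], [150, 130], [185, 127]])  # drifting downward
        print(diagnosticity_at_fixations(diag_map, fixations))  # rises across the sequence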