228 research outputs found

    Effects of Social and Non-Social Interpretations of Complex Images on Human Eye Movement and Brain Activation

    Communicating and interacting with others is an essential part of our daily routines as humans. Performing these actions appropriately requires the ability to identify, extract, and process salient social cues from the environment. The subsequent application of such knowledge is important for inferring and predicting the behavior of other people. The eyes and brain must work together to fixate and process only the most critical social signals within a scene while passing over or completely ignoring other aspects of the scene. While brain activation to isolated presentations of objects and people has been characterized, information about the brain's activation patterns to more comprehensive scenes containing multiple categories of information is limited. Furthermore, little is known about how different interpretations of a scene might alter how that scene is viewed or how the brain responds to it. Therefore, the studies presented herein used a combination of infrared eye tracking and functional magnetic resonance imaging to investigate the eye movement and brain activation patterns evoked by socially and non-socially relevant interpretations of the same set of complex stimuli. Eye tracking data showed that each gaze pattern was consistent with viewing and attending to only one category of information (people or objects), despite both categories being present in all images. Functional magnetic resonance imaging revealed that a region of the right superior temporal sulcus, an area known for its role in social tasks, was selectively activated by the social condition compared to the non-social condition. Brain activation in response to the non-social condition was located in many of the same regions associated with the recognition and processing of visual objects presented in isolation.
Taken together, these results demonstrate that in healthy adults, eye movement and brain activation patterns to identical scenes change markedly as a function of attentional focus and interpretive intent. Using realistic, complex stimuli to study the eye gaze and neural activation patterns associated with processing social versus non-social information in the healthy brain is an important step toward understanding the deficits present in individuals with social cognition disorders such as autism and schizophrenia.

    Cortical network responses map onto data-driven features that capture visual semantics of movie fragments

    Research on how the human brain extracts meaning from sensory input relies largely on methodological reductionism. In the present study, we adopt a more holistic approach by modeling the cortical responses to semantic information extracted from the visual stream of a feature film, employing artificial neural network models. Advances in both computer vision and natural language processing were utilized to extract the semantic representations from the film by combining perceptual and linguistic information. We tested whether these representations were useful for studying human brain data. To this end, we collected electrocorticography responses to a short movie from 37 subjects and fitted their cortical patterns across multiple regions using the semantic components extracted from film frames. We found that individual semantic components reflected fundamental semantic distinctions in the visual input, such as the presence or absence of people, human movement, landscape scenes, and human faces. Moreover, each semantic component mapped onto a distinct functional cortical network involving high-level cognitive regions in occipitotemporal, frontal and parietal cortices. The present work demonstrates the potential of data-driven methods from information processing fields to explain patterns of cortical responses, and contributes to the overall discussion about the encoding of high-level perceptual information in the human brain.
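Fitting cortical patterns with semantic components, as described above, is essentially a regularized linear encoding model: semantic features per film frame predict the response of each recording channel. The sketch below illustrates the idea with ridge regression on synthetic data; all shapes, variable names, and the regularization value are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 1000 movie frames, 50 semantic components,
# 64 electrocorticography (ECoG) channels.
n_frames, n_components, n_channels = 1000, 50, 64
X = rng.standard_normal((n_frames, n_components))   # semantic features per frame
Y = rng.standard_normal((n_frames, n_channels))     # cortical response per channel

# Ridge (L2-regularized) encoding model: Y ~ X @ W
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_components), X.T @ Y)

# Per-channel fit quality: correlation between predicted and observed responses
Y_hat = X @ W
r = [np.corrcoef(Y_hat[:, c], Y[:, c])[0, 1] for c in range(n_channels)]
print(W.shape)  # one weight map per channel
```

Ridge rather than ordinary least squares is the usual choice here because the feature matrix for naturalistic stimuli tends to be correlated across components.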

    Representation of faces in perirhinal cortex

    The prevailing view of medial temporal lobe (MTL) functioning holds that its structures are dedicated to long-term declarative memory. Recent evidence challenges this view, suggesting that perirhinal cortex (PrC), which interfaces the MTL with the ventral visual pathway, supports highly integrated object representations that contribute to both recognition memory and perceptual discrimination. Here, I used functional magnetic resonance imaging to examine PrC activity, as well as its broader functional connectivity, during perceptual and mnemonic tasks involving faces, a stimulus class proposed to rely on integrated representations for discrimination. In Chapter 2, I revealed that PrC involvement was related to task demands that emphasized face individuation. Discrimination under these conditions is proposed to benefit from the uniqueness afforded by highly-integrated stimulus representations. Multivariate partial least squares analyses revealed that PrC, the fusiform face area (FFA), and the amygdala were part of a pattern of regions exhibiting preferential activity for tasks emphasizing stimulus individuation. In Chapter 3, I provided evidence of resting-state connectivity between face-selective aspects of PrC, the FFA, and amygdala. These findings point to a privileged functional relationship between these regions, consistent with task-related co-recruitment revealed in Chapter 2. In addition, the strength of resting-state connectivity was related to behavioral performance on a face discrimination task. These results suggest a mechanism by which PrC may participate in the representation of faces. In Chapter 4, I examined PrC connectivity during task contexts. I provided evidence that distinctions between tasks emphasizing recognition memory and perceptual discrimination demands are better reflected in the connectivity of PrC with other regions in the brain, rather than in the presence or absence of PrC activity.
Further, this functional connectivity was related to behavioral performance for the memory task. Together, these findings indicate that mnemonic demands are not the sole arbiter of PrC involvement, counter to the prevailing view of MTL functioning. Instead, they highlight the importance of connectivity-based approaches in elucidating the contributions of PrC, and point to a role of PrC in the representation of faces in a manner that can support memory and perception, and that may apply to other object categories more broadly.
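Resting-state connectivity of the kind described above is, at its core, a correlation between regional time series, usually Fisher z-transformed before relating it to behavior across subjects. A minimal sketch, with entirely synthetic signals standing in for the PrC and FFA time series (the coupling strength and series length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative resting-state time series (200 volumes) for a
# perirhinal cortex (PrC) seed and a fusiform face area (FFA) region.
n_vols = 200
prc = rng.standard_normal(n_vols)
ffa = 0.5 * prc + rng.standard_normal(n_vols)  # coupling built in for illustration

# Seed-based functional connectivity: correlation of the two series
r = np.corrcoef(prc, ffa)[0, 1]

# Fisher z-transform, the usual scale for comparing connectivity
# strength across subjects (e.g., against behavioral scores)
z = np.arctanh(r)
print(f"PrC-FFA connectivity: r={r:.2f}, z={z:.2f}")
```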

    Decoding the neural mechanisms of human tool use.

    Sophisticated tool use is a defining characteristic of primates, but how is it supported by the brain, particularly the human brain? Here we show, using functional MRI and pattern classification methods, that tool use is subserved by multiple distributed action-centred neural representations that are both shared with and distinct from those of the hand. In areas of frontoparietal cortex we found a common representation for planned hand- and tool-related actions. In contrast, in parietal and occipitotemporal regions implicated in hand actions and body perception, coding remained selectively linked to upcoming actions of the hand, whereas in parietal and occipitotemporal regions implicated in tool-related processing, coding remained selectively linked to upcoming actions of the tool. The highly specialized and hierarchical nature of this coding suggests that hand- and tool-related actions are represented separately at earlier levels of sensorimotor processing before becoming integrated in frontoparietal cortex. DOI: http://dx.doi.org/10.7554/eLife.00425.001
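The logic of testing for a "shared" representation with pattern classification is commonly implemented as cross-condition decoding: a classifier trained on voxel patterns from one effector is tested on the other, and above-chance transfer implies a common code. A minimal sketch with synthetic voxel patterns; the trial counts, action labels, and classifier choice are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Illustrative data: voxel patterns for two planned actions (grasp vs. reach),
# executed with the hand (training set) and with the tool (test set).
n_trials, n_voxels = 40, 200
X_hand = rng.standard_normal((n_trials, n_voxels))
y_hand = np.repeat([0, 1], n_trials // 2)          # 0 = grasp, 1 = reach
X_tool = rng.standard_normal((n_trials, n_voxels))
y_tool = np.repeat([0, 1], n_trials // 2)

# Cross-effector decoding: if a region carries a shared action
# representation, a classifier trained on hand trials should
# transfer above chance to tool trials.
clf = LinearSVC(dual=False).fit(X_hand, y_hand)
transfer_acc = clf.score(X_tool, y_tool)
print(f"hand-to-tool transfer accuracy: {transfer_acc:.2f}")
```

With the random data used here, transfer accuracy hovers around chance (0.5); real shared coding would push it reliably above that under cross-validation and permutation testing.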

    Diagnostic information use to understand brain mechanisms of facial expression categorization

    Proficient categorization of facial expressions is crucial for normal social interaction. Neurophysiological, behavioural, event-related potential, lesion and functional neuroimaging techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless process, and the associated arrangement of bilateral networks. These brain areas exhibit consistent and replicable activation patterns, and can be broadly defined to include visual (occipital and temporal), limbic (amygdala) and prefrontal (orbitofrontal) regions. Together, these areas support early perceptual processing, the formation of detailed representations and the subsequent recognition of expressive faces. Despite the critical role of facial expressions in social communication and extensive work in this area, it is still not known how the brain decodes nonverbal signals in terms of expression-specific features. For these reasons, this thesis investigates the role of these so-called diagnostic facial features at three significant stages in expression recognition: the spatiotemporal inputs to the visual system, the dynamic integration of features in higher visual (occipitotemporal) areas, and early sensitivity to features in V1. In Chapter 1, the basic emotion categories are presented, along with the brain regions that are activated by these expressions. In line with this, the current cognitive theory of face processing is reviewed, covering functional and anatomical dissociations within the distributed neural “face network”. Chapter 1 also introduces the way in which we measure and use diagnostic information to derive brain sensitivity to specific facial features, and how this is a useful tool for understanding the spatial and temporal organisation of expression recognition in the brain. In relation to this, hierarchical, bottom-up neural processing is discussed along with high-level, top-down facilitatory mechanisms.
Chapter 2 describes an eye-movement study revealing that the inputs to the visual system provided by fixations reflect diagnostic information use. Inputs to the visual system dictate the information distributed to cognitive systems during the seamless and rapid categorization of expressive faces. How we perform eye movements during this task informs how task-driven and stimulus-driven mechanisms interact to guide the extraction of information supporting recognition. We recorded eye movements of observers who categorized the six basic categories of facial expressions. We use a measure of task-relevant information (diagnosticity) to discuss oculomotor behaviour, with a focus on two findings. Firstly, fixated regions reveal expression differences. Secondly, the intersection of fixations with diagnostic information increases over the course of a fixation sequence. This suggests a top-down drive to acquire task-relevant information, with different functional roles for first and final fixations. A combination of psychophysical studies of visual recognition together with the EEG (electroencephalogram) signal is used to infer the dynamics of feature extraction and use during the recognition of facial expressions in Chapter 3. The results reveal a process that integrates visual information over about 50 milliseconds prior to the face-sensitive N170 event-related potential, starting at the eye region and proceeding gradually towards lower regions. The finding that informative features for recognition are not processed simultaneously but in an orderly progression over a short time period is instructive for understanding the processes involved in visual recognition, and in particular the integration of bottom-up and top-down processes. In Chapter 4 we use fMRI to investigate task-dependent activation to diagnostic features in early visual areas, implicating top-down mechanisms, since V1 traditionally exhibits only simple response properties.
Chapter 3 revealed that diagnostic features modulate the temporal dynamics of brain signals in higher visual areas. Within the hierarchical visual system, however, it is not known if an early (V1/V2/V3) sensitivity to diagnostic information contributes to categorical facial judgements, conceivably driven by top-down signals triggered in visual processing. Using retinotopic mapping, we reveal task-dependent information extraction within the earliest cortical representation (V1) of two features known to be differentially necessary for face recognition tasks (eyes and mouth). This strategic encoding of face images is beyond typical V1 properties and suggests a top-down influence of task extending down to the earliest retinotopic stages of visual processing. The significance of these data is discussed in the context of the cortical face network and bidirectional processing in the visual system. The visual cognition of facial expression processing is concerned with the interactive processing of bottom-up sensory-driven information and top-down mechanisms to relate visual input to categorical judgements. The three experiments presented in this thesis are summarized in Chapter 5 in relation to how diagnostic features can be used to explore such processing in the human brain leading to proficient facial expression categorization.
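The fixation-diagnosticity intersection measure from the eye-movement study can be pictured as an overlap between two normalized spatial maps: one marking where diagnostic information lies in the face, one built from fixation locations. The maps, fixation coordinates, and the specific overlap statistic below are illustrative assumptions, not the thesis's exact computation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 64x64 maps: a diagnostic-information map for one
# expression and a fixation map built from Gaussian-weighted
# hypothetical fixation locations.
size = 64
diag = rng.random((size, size))
fix = np.zeros((size, size))
yy, xx = np.mgrid[0:size, 0:size]
for x, y in [(20, 30), (40, 25), (32, 45)]:        # hypothetical fixations
    fix += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * 5.0 ** 2))

# Normalize both maps to sum to 1 and take their pointwise minimum
# as the overlap between fixated and diagnostic information
diag /= diag.sum()
fix /= fix.sum()
overlap = np.minimum(diag, fix).sum()              # in [0, 1]
print(f"fixation-diagnosticity overlap: {overlap:.3f}")
```

Computing this overlap separately for the first, second, and final fixation of each trial is one way to show the increase across the fixation sequence that the abstract describes.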

    Functional Connectivity of the Rodent Striatum

    The striatum serves as the major input nucleus of the basal ganglia circuitry, important for its varied roles in cognition, motivation, and sensorimotor function. Despite decades of study, fundamental features of the striatum’s functional organization and broader role(s) within the basal ganglia circuitry remain contentious and/or poorly defined. Given the diverse and critical roles of striatal activity in normal brain function and in a multitude of disease states (including neurodegenerative and psychiatric disorders), a better understanding of this nucleus’ functional organization is imperative. The electrophysiological tools that predominate in the field allow for in-depth characterizations of discrete, pre-selected brain regions, but are not appropriate for delineating functional neural circuit interactions on large spatial scales in an unbiased manner. A complementary approach is functional magnetic resonance imaging (fMRI), which provides global, unbiased measures of functional neural circuit and network connectivity. In the first two studies described herein (Chapters 2 and 3), we used fMRI to map the functional response patterns to electrical deep brain stimulation (DBS) of the rat nucleus accumbens (NAc; ventral striatum), as well as the dual striatal outputs: the external globus pallidus (GPe) and the substantia nigra pars reticulata. Notable findings included the presence of negative fMRI signals in striatum during stimulation of each nucleus, robust prefrontal cortical modulation by NAc- and GPe-DBS, and marked functional connectivity changes under high-frequency DBS. We next used optogenetic tools to more selectively map the brain-wide responses to stimulation of GPe neurons in healthy and Parkinson’s disease model rats (Chapter 4), as well as of dorsal striatal neurons and their motor cortical inputs (Chapter 5).
Optogenetic stimulation of each nucleus elicited an intriguing dorsal striatal negative fMRI signal, observed during direct striatal stimulation as well as during putative recruitment of both excitatory and inhibitory striatal inputs, and thus suggestive of neurovascular uncoupling. Additionally, results from our GPe experiments revealed that this signal may be compromised in certain neurological disease states (e.g., Parkinson’s disease). Collectively, the studies described in this dissertation have exploited fMRI tools to reveal novel features of striatal connectivity, which may shed light on striatal function in health and disease.

    The Developmental Trajectory of Contour Integration in Autism Spectrum Disorders

    Sensory input is inherently ambiguous and complex, so perception is believed to be achieved by combining incoming sensory information with prior knowledge. One model envisions the grouping of sensory features (the local dimensions of stimuli) as the outcome of a predictive process relying on prior experience (the global dimension of stimuli) to disambiguate the possible configurations those elements could take. Contour integration, the linking of aligned but separate visual elements, is one example of perceptual grouping. Kanizsa-type illusory contour (IC) stimuli have been widely used to explore contour integration processing. These stimuli consist of two conditions that differ only in the alignment of their inducing elements: one induces the experience of a shape apparently defined by a contour, and the other does not. This contour has no counterpart in actual visual space; it is the visual system that fills in the gap between inducing elements. A well-tested electrophysiological index associated with this process (the IC-effect) provided us with a metric of the visual system’s contribution to contour integration. Using visually evoked potentials (VEPs), we began by probing the sensitivity of this metric to three manipulations of contour parameters previously shown to impact the subjective experience of illusion strength. Next we detailed the developmental trajectory of contour integration processes over childhood and adolescence. Finally, because persons with autism spectrum disorders (ASDs) have demonstrated an altered balance of global and local processing, we hypothesized that contour integration may be atypical in this population. We compared typical development to development in persons with ASDs to reveal possible mechanisms underlying this processing difference. Our manipulations resulted in no differences in the strength of the IC-effect in adults or children in either group.
However, the timing of the IC-effect was delayed in two instances: 1) peak latency was delayed by increasing the extent of contour to be filled in relative to overall IC size, and 2) onset latency was delayed in participants with ASDs relative to their neurotypical counterparts.
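Peak-latency measures like the IC-effect delay reported above are typically extracted by locating the component extremum within a predefined time window of the evoked response. A minimal sketch on a synthetic VEP epoch; the sampling rate, search window, and component shape are assumptions for illustration only.

```python
import numpy as np

# Illustrative visually evoked potential (VEP) epoch: 1 s at 500 Hz,
# with a negative deflection injected near 170 ms to mimic an
# IC-effect-like component riding on noise.
fs = 500
t = np.arange(0, 1.0, 1 / fs)
vep = np.random.default_rng(4).standard_normal(t.size) * 0.1
vep -= 3.0 * np.exp(-((t - 0.170) ** 2) / (2 * 0.01 ** 2))

# Peak latency = time of the most negative sample in a 100-250 ms window
win = (t >= 0.100) & (t <= 0.250)
peak_latency_ms = t[win][np.argmin(vep[win])] * 1000
print(f"peak latency: {peak_latency_ms:.0f} ms")
```

Group differences in onset latency (as opposed to peak latency) usually require a separate criterion, such as the first sample where the component exceeds a noise threshold.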