
    Beliefs about the Minds of Others Influence How We Process Sensory Information

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from the operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. Behavioral and ERP results showed that participants' attention was guided by gaze only when the gaze was believed to be controlled by a human. Specifically, the P1 was enhanced for validly relative to invalidly cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience as well as philosophy, in order to provide a framework for understanding how humans' beliefs about the observed scene influence sensory processing.

    Multisensory causal inference in the brain

    At any given moment, our brain processes multiple inputs from its different sensory modalities (vision, hearing, touch, etc.). In deciphering this array of sensory information, the brain has to solve two problems: (1) which of the inputs originate from the same object and should be integrated, and (2) for the sensations originating from the same object, how best to integrate them. Recent behavioural studies suggest that the human brain solves these problems using optimal probabilistic inference, known as Bayesian causal inference. However, how and where the underlying computations are carried out in the brain has remained unknown. By combining neuroimaging-based decoding techniques and computational modelling of behavioural data, a new study now sheds light on how multisensory causal inference maps onto specific brain areas. The results suggest that the complexity of neural computations increases along the visual hierarchy and link specific components of the causal inference process with specific visual and parietal regions.
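
    The Bayesian causal inference model referenced above can be made concrete with a small worked example. The sketch below is illustrative only and does not reproduce the study's actual model or data; the parameter values, noise levels, and function name are assumptions chosen for demonstration. It computes the posterior probability that a visual and an auditory measurement share a common cause, and returns the model-averaged location estimate.

```python
import numpy as np

def causal_inference_estimate(x_v, x_a, sigma_v=1.0, sigma_a=2.0,
                              sigma_p=10.0, p_common=0.5):
    """Model-averaged visual location estimate under Bayesian causal inference.

    x_v, x_a          -- noisy visual and auditory measurements of position
    sigma_v, sigma_a  -- sensory noise (standard deviation) of each modality
    sigma_p           -- width of a zero-centred spatial prior
    p_common          -- prior probability that both signals share one cause
    (All parameter values here are illustrative assumptions.)
    """
    var_v, var_a, var_p = sigma_v ** 2, sigma_a ** 2, sigma_p ** 2

    # Likelihood of the pair of measurements given one common cause (C = 1).
    var_c1 = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-((x_v - x_a) ** 2 * var_p
                       + x_v ** 2 * var_a + x_a ** 2 * var_v) / (2 * var_c1)) \
        / (2 * np.pi * np.sqrt(var_c1))

    # Likelihood given two independent causes (C = 2).
    like_c2 = np.exp(-0.5 * (x_v ** 2 / (var_v + var_p)
                             + x_a ** 2 / (var_a + var_p))) \
        / (2 * np.pi * np.sqrt((var_v + var_p) * (var_a + var_p)))

    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Optimal estimates under each causal structure:
    # reliability-weighted fusion (C = 1) vs. vision-plus-prior only (C = 2).
    s_fused = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_vision = (x_v / var_v) / (1 / var_v + 1 / var_p)

    # Model averaging: weight each estimate by its posterior probability.
    return post_c1 * s_fused + (1 - post_c1) * s_vision

print(causal_inference_estimate(x_v=2.0, x_a=3.0))   # close signals: mostly fused
print(causal_inference_estimate(x_v=2.0, x_a=15.0))  # discrepant signals: mostly segregated
```

    With nearly coincident measurements the estimate is dominated by reliability-weighted fusion; with widely discrepant measurements the posterior favours separate causes and the estimate falls back towards the single-modality value.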

    Anatomical connectivity patterns predict face selectivity in the fusiform gyrus

    A fundamental assumption in neuroscience is that brain structure determines function. Accordingly, functionally distinct regions of cortex should be structurally distinct in their connections to other areas. We tested this hypothesis in relation to face selectivity in the fusiform gyrus. By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus. These predictions outperformed two control models and a standard group-average benchmark. The structure–function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli. This approach can thus reliably estimate activation in participants who cannot perform functional imaging tasks and is an alternative to group-activation maps. Additionally, we identified cortical regions whose connectivity was highly influential in predicting face selectivity within the fusiform, suggesting a possible mechanistic architecture underlying face processing in humans. Funding: United States Public Health Service (DA023427); National Institute of Mental Health (U.S.) (F32 MH084488); National Eye Institute (T32 EY013935); Poitras Foundation; Simons Foundation; Ellison Medical Foundation.
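
    The structure-to-function prediction described above can be illustrated, in spirit, as a voxelwise regression: each fusiform voxel's connectivity fingerprint (tract strengths to a set of target regions) is mapped onto its face selectivity, with the mapping fit on one portion of the data and evaluated on held-out data. The sketch below uses randomly generated stand-in data and scikit-learn's ridge regression; the array names, dimensions, and the specific regression model are assumptions for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Hypothetical data: for each fusiform voxel, a "connectivity fingerprint"
# (tract strength to a set of target regions) and a face-selectivity value
# (e.g. a faces > objects contrast) measured with fMRI.
rng = np.random.default_rng(0)
n_voxels, n_targets = 2000, 85
connectivity = rng.random((n_voxels, n_targets))  # diffusion-based fingerprints
face_selectivity = connectivity @ rng.normal(size=n_targets) + rng.normal(size=n_voxels)

# Fit the structure-to-function mapping on one subset of voxels...
train, test = slice(0, 1500), slice(1500, None)
model = RidgeCV(alphas=np.logspace(-3, 3, 13))
model.fit(connectivity[train], face_selectivity[train])

# ...and predict face selectivity in held-out voxels from connectivity alone.
predicted = model.predict(connectivity[test])
r = np.corrcoef(predicted, face_selectivity[test])[0, 1]
print(f"predicted vs. observed selectivity: r = {r:.2f}")
```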

    Visual adaptation alters the apparent speed of real-world actions

    The apparent physical speed of an object in the field of view remains constant despite variations in retinal velocity due to viewing conditions (velocity constancy). For example, people and cars appear to move across the field of view at the same objective speed regardless of distance. In this study, a series of experiments investigated the visual processes underpinning judgements of objective speed using an adaptation paradigm and video recordings of natural human locomotion. Viewing a video played in slow motion for 30 seconds caused participants to perceive subsequently viewed clips played at standard speed as too fast, so playback had to be slowed down in order for it to appear natural; conversely, after viewing fast-forward videos for 30 seconds, playback had to be speeded up in order to appear natural. The perceived speed of locomotion shifted towards the speed depicted in the adapting video (‘re-normalisation’). Results were qualitatively different from those obtained in previously reported studies of retinal velocity adaptation. Adapting videos that were scrambled to remove recognizable human figures or coherent motion caused significant, though smaller, shifts in apparent locomotion speed, indicating that both low-level and high-level visual properties of the adapting stimulus contributed to the changes in apparent speed.

    Neural correlates of enhanced visual short-term memory for angry faces: An fMRI study

    Background: Fluid and effective social communication requires that both face identity and emotional expression information are encoded and maintained in visual short-term memory (VSTM) to enable a coherent, ongoing picture of the world and its players. This appears to be of particular evolutionary importance when confronted with potentially threatening displays of emotion; previous research has shown better VSTM for angry versus happy or neutral face identities. Methodology/Principal Findings: Using functional magnetic resonance imaging, here we investigated the neural correlates of this angry face benefit in VSTM. Participants were shown between one and four to-be-remembered angry, happy, or neutral faces, and after a short retention delay they stated whether a single probe face had been present or not in the previous display. All faces in any one display expressed the same emotion, and the task required memory for face identity. We find enhanced VSTM for angry face identities and describe the right hemisphere brain network underpinning this effect, which involves the globus pallidus, superior temporal sulcus, and frontal lobe. Increased activity in the globus pallidus was significantly correlated with the angry benefit in VSTM. Areas modulated by emotion were distinct from those modulated by memory load. Conclusions/Significance: Our results provide evidence for a key role of the basal ganglia as an interface between emotion and cognition, supported by a frontal, temporal, and occipital network. The authors were supported by a Wellcome Trust grant (grant number 077185/Z/05/Z) and by BBSRC (UK) grant BBS/B/16178.

    The Effect of Visual Experience on the Development of Functional Architecture in hMT+

    We investigated whether the visual hMT+ cortex plays a role in supramodal representation of sensory flow, not mediated by visual mental imagery. We used functional magnetic resonance imaging to measure neural activity in sighted and congenitally blind individuals during passive perception of optic and tactile flows. Visual motion-responsive cortex, including hMT+, was identified in the lateral occipital and inferior temporal cortices of the sighted subjects by response to optic flow. Tactile flow perception in sighted subjects activated the more anterior part of these cortical regions but deactivated the more posterior part. By contrast, perception of tactile flow in blind subjects activated the full extent, including the more posterior part. These results demonstrate that activation of hMT+ and surrounding cortex by tactile flow is not mediated by visual mental imagery and that the functional organization of hMT+ can develop to subserve tactile flow perception in the absence of any visual experience. Moreover, visual experience leads to a segregation of the motion-responsive occipitotemporal cortex into an anterior subregion involved in the representation of both optic and tactile flows and a posterior subregion that processes optic flow only.

    Differences in selectivity to natural images in early visual areas (V1–V3)

    High-level regions of the ventral visual pathway respond more to intact objects compared to scrambled objects. The aim of this study was to determine if this selectivity for objects emerges at an earlier stage of processing. Visual areas (V1–V3) were defined for each participant using retinotopic mapping. Participants then viewed intact and scrambled images from different object categories (bottle, chair, face, house, shoe) while neural responses were measured using fMRI. Our rationale for using scrambled images is that they contain the same low-level properties as the intact objects, but lack the higher-order combinations of features that are characteristic of natural images. Neural responses were higher for scrambled than intact images in all regions. However, the difference between intact and scrambled images was smaller in V3 compared to V1 and V2. Next, we measured the spatial patterns of response to intact and scrambled images from different object categories. We found higher within-category compared to between-category correlations for both intact and scrambled images, demonstrating distinct patterns of response. Spatial patterns of response were more distinct for intact compared to scrambled images in V3, but not in V1 or V2. These findings demonstrate the emergence of selectivity to natural images in V3.
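
    The pattern analysis described above, comparing within-category and between-category correlations of spatial response patterns, can be sketched as follows. The toy data, category names, and noise levels are assumptions chosen for illustration; in the actual study the patterns would be voxelwise fMRI responses estimated from independent halves of the data.

```python
import numpy as np

def pattern_correlations(responses_a, responses_b):
    """Correlate spatial response patterns across two independent data halves.

    responses_a, responses_b : dicts mapping category name -> 1-D array of
    voxel responses (one pattern per category, e.g. from odd and even runs).
    Returns the mean within-category and between-category correlations.
    """
    cats = list(responses_a)
    corr = np.array([[np.corrcoef(responses_a[i], responses_b[j])[0, 1]
                      for j in cats] for i in cats])
    within = np.mean(np.diag(corr))                       # same category
    between = np.mean(corr[~np.eye(len(cats), dtype=bool)])  # different categories
    return within, between

# Toy example with hypothetical patterns for five categories in one visual area.
rng = np.random.default_rng(1)
categories = ["bottle", "chair", "face", "house", "shoe"]
true_patterns = {c: rng.normal(size=500) for c in categories}
half_a = {c: p + rng.normal(scale=1.0, size=500) for c, p in true_patterns.items()}
half_b = {c: p + rng.normal(scale=1.0, size=500) for c, p in true_patterns.items()}

within, between = pattern_correlations(half_a, half_b)
print(f"within-category r = {within:.2f}, between-category r = {between:.2f}")
```

    Reliably higher within-category than between-category correlations indicate that the spatial response patterns are distinct across categories, which is the criterion the study applies to V1–V3.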

    Tracing the Flow of Perceptual Features in an Algorithmic Brain Network

    The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level, i.e. as the information flow that network nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition.

    The Faces in Infant-Perspective Scenes Change over the First Year of Life

    Mature face perception has its origins in the face experiences of infants. However, little is known about the basic statistics of faces in early visual environments. We used head cameras to capture and analyze over 72,000 infant-perspective scenes from 22 infants aged 1-11 months as they engaged in daily activities. The frequency of faces in these scenes declined markedly with age: for the youngest infants, faces were present for 15 minutes of every waking hour, but for the oldest infants only 5 minutes. In general, the available faces were well characterized by three properties: (1) they belonged to relatively few individuals; (2) they were close and visually large; and (3) they presented views showing both eyes. These three properties most strongly characterized the face corpora of our youngest infants and constitute environmental constraints on the early development of the visual system.

    Connectivity precedes function in the development of the visual word form area

    What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development. Funding: National Institutes of Health (U.S.) (Grants F32HD079169 and R01HD067312); Eunice Kennedy Shriver National Institute of Child Health and Human Development (U.S.) (Grants F32HD079169 and R01HD067312).