Neural representation of geometry and surface properties in object and scene perception
Multiple cortical regions are crucial for perceiving the visual world, yet the processes shaping representations in these regions are unclear. To address this issue, we must elucidate how perceptual features shape representations of the environment. Here, we explore how the weighting of different visual features affects neural representations of objects and scenes, focusing on the scene-selective parahippocampal place area (PPA), but additionally including the retrosplenial complex (RSC), occipital place area (OPA), lateral occipital (LO) area, fusiform face area (FFA) and occipital face area (OFA). Across three experiments, we examined functional magnetic resonance imaging (fMRI) activity while human observers viewed scenes and objects that varied in geometry (shape/layout) and surface properties (texture/material). Interestingly, we found equal sensitivity in the PPA for these properties within a scene, revealing that spatial selectivity alone does not drive activation within this cortical region. We also observed sensitivity to object texture in PPA, but not to the same degree as scene texture, and representations in PPA varied when objects were placed within scenes. We conclude that PPA may process surface properties in a domain-specific manner, and that the processing of scene texture and geometry is equally weighted in PPA and may be mediated by similar underlying neuronal mechanisms.
Differential neural dynamics underlying pragmatic and semantic affordance processing in macaque ventral premotor cortex
Premotor neurons play a fundamental role in transforming physical properties of observed objects, such as size and shape, into motor plans for grasping them, hence contributing to "pragmatic" affordance processing. Premotor neurons can also contribute to "semantic" affordance processing, as they can discharge differently even to pragmatically identical objects depending on their behavioural relevance for the observer (i.e. edible or inedible objects). Here, we compared the responses of monkey ventral premotor area F5 neurons tested during pragmatic (PT) or semantic (ST) visuomotor tasks. Object presentation responses in ST showed shorter latency and lower object selectivity than in PT. Furthermore, we found a difference between a transient representation of semantic affordances and a sustained representation of pragmatic affordances at both the single-neuron and population level. Indeed, responses in ST returned to baseline within 0.5 s, whereas in PT they showed the typical sustained visual-to-motor activity during Go trials. In contrast, during No-go trials, the time course of pragmatic and semantic information processing was similar. These findings suggest that premotor cortex generates different dynamics depending on the pragmatic and semantic information provided by the context in which the to-be-grasped object is presented.
Anterior Intraparietal Area: a Hub in the Observed Manipulative Action Network.
Current knowledge regarding the processing of observed manipulative actions (OMAs) (e.g., grasping, dragging, or dropping) is limited to grasping, and the underlying neural circuitry remains controversial. Here, we addressed these issues by combining chronic neuronal recordings along the anteroposterior extent of monkeys' anterior intraparietal (AIP) area with tracer injections into the recorded sites. We found robust neural selectivity for 7 distinct OMAs, particularly in the posterior part of AIP (pAIP), where it was associated with motor coding of grip type and own-hand visual feedback. This cluster of functional properties appears to be specifically grounded in stronger direct connections of pAIP with the temporal regions of the ventral visual stream and the prefrontal cortex, as connections with skeletomotor-related areas and regions of the dorsal visual stream exhibited opposite or no rostrocaudal gradients. Temporal and prefrontal areas may provide visual and contextual information relevant for manipulative action processing. These results revise existing models of the action observation network, suggesting that pAIP constitutes a parietal hub for routing information about OMA identity to the other nodes of the network.
The hippocampus and entorhinal cortex map events across space and time
The medial temporal lobe supports the encoding of new facts and experiences, and organizes them so that we can infer relationships and make unique associations during new encounters. Evidence from studies on humans and animals suggests that the hippocampus is specifically required for our ability to form these internal representations of the world. The mechanism by which the hippocampus performs this function remains unclear, but electrophysiological recordings in the hippocampus support a general model. One component of this model suggests that the cortex represents places, times, and events separately, and that the hippocampus then generates conjunctive representations that connect the three. According to this hypothesis, the hippocampus binds places and events to an existing relational structure. This dissertation explores how item and place associations develop within cortex, and then examines the relational structure that organizes these events within the hippocampus. The first study suggests that, contrary to previous models, events and places are bound together outside of the hippocampus, in the entorhinal cortex and perirhinal cortex. The second study shows that this relational scaffold may be embodied by a continually changing code that permits both the association and separation of information across the continuum of time. The final study suggests that the hippocampus and entorhinal cortex contain qualitatively different time codes that may act in a complementary fashion to bind events and places and relate them across time. Overall, these studies support a theory wherein time is encoded in a range of brain regions that also contain conjunctive item and position information. In these regions, conjunctive representations of items, places, and times are organized not only by their perceptual similarity but also by their temporal proximity.
Making sense of real-world scenes
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.
Seeing it all: Convolutional network layers map the function of the human visual system
Convolutional networks used for computer vision represent candidate models for the computations performed in mammalian visual systems. We use them as a detailed model of human brain activity during the viewing of natural images by constructing predictive models based on their different layers and BOLD fMRI activations. Analyzing the predictive performance across layers yields characteristic fingerprints for each visual brain region: early visual areas are better described by lower-level convolutional net layers and later visual areas by higher-level net layers, exhibiting a progression across ventral and dorsal streams. Our predictive model generalizes beyond brain responses to natural images. We illustrate this on two experiments, namely retinotopy and face-place oppositions, by synthesizing brain activity and performing classical brain mapping upon it. The synthesis recovers the activations observed in the corresponding fMRI studies, showing that this deep encoding model captures representations of brain function that are universal across experimental paradigms.
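The layer-wise analysis this abstract describes can be sketched in miniature: fit a separate linear (ridge) encoding model from each network layer's features to a voxel's responses, and take the best-predicting layer as that voxel's "fingerprint". This is a hedged illustration on simulated data, not the study's code; the layer names, dimensions, and the closed-form ridge solver are all assumptions made for the sketch.

```python
# Minimal sketch of a layer-wise encoding-model comparison (assumed setup):
# each "layer" is a feature matrix over images; we ask which layer best
# predicts a simulated voxel's responses on held-out images.
import numpy as np

rng = np.random.default_rng(0)

def ridge_fit_predict(X_train, y_train, X_test, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X_train.shape[1]
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d),
                        X_train.T @ y_train)
    return X_test @ w

def layer_score(features, voxel, n_train=80):
    """Correlation between predicted and held-out voxel responses."""
    pred = ridge_fit_predict(features[:n_train], voxel[:n_train],
                             features[n_train:])
    return np.corrcoef(pred.ravel(), voxel[n_train:].ravel())[0, 1]

# Simulated data: 100 images, two hypothetical network layers, one voxel
# driven by "layer2" features (standing in for a later visual area).
layer1 = rng.standard_normal((100, 20))
layer2 = rng.standard_normal((100, 20))
voxel = layer2 @ rng.standard_normal((20, 1)) + 0.1 * rng.standard_normal((100, 1))

scores = {name: layer_score(f, voxel)
          for name, f in [("layer1", layer1), ("layer2", layer2)]}
best = max(scores, key=scores.get)  # the voxel's best-predicting layer
```

With this construction the voxel is, by design, well predicted by `layer2` and not by `layer1`, mirroring how a higher visual area would be fingerprinted by a higher network layer.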
Modulating human brain responses via optimal natural image selection and synthetic image generation
Understanding how human brains interpret and process information is important. Here, we investigated the selectivity of, and inter-individual differences in, human brain responses to images via functional MRI. In our first experiment, we found that images predicted to achieve maximal activations using a group-level encoding model evoke higher responses than images predicted to achieve average activations, and that the activation gain is positively associated with the encoding model's accuracy. Furthermore, aTLfaces and FBA1 showed higher activation in response to maximal synthetic images than to maximal natural images. In our second experiment, we found that synthetic images derived using a personalized encoding model elicited higher responses than synthetic images from group-level or other subjects' encoding models. The finding that aTLfaces favors synthetic over natural images was also replicated. Our results indicate the possibility of using data-driven and generative approaches to modulate macro-scale brain region responses and to probe inter-individual differences in, and functional specialization of, the human visual system.
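The first experiment's selection step can be illustrated with a toy sketch: given an encoding model's predicted responses over a pool of candidate images, pick the images with maximal predicted activation and a control set whose predictions sit near the pool average. This is an assumed simplification on random numbers, not the study's pipeline; the pool size and selection rule are illustrative choices.

```python
# Hedged sketch of encoding-model-based image selection (assumed setup):
# rank candidate images by a model's predicted response for a target region,
# then contrast a "maximal" set against an "average" set.
import numpy as np

rng = np.random.default_rng(1)

def select_images(predicted, k=5):
    """Return indices of k maximal-predicted and k average-predicted images."""
    order = np.argsort(predicted)
    maximal = order[-k:]                                   # highest predictions
    average = np.argsort(np.abs(predicted - predicted.mean()))[:k]  # near mean
    return maximal, average

# Hypothetical predicted responses for 1000 candidate images.
predicted = rng.standard_normal(1000)
max_idx, avg_idx = select_images(predicted)

# The maximal set should carry a higher mean predicted response; the study's
# claim is that this predicted gain translates into higher measured activation.
gain = predicted[max_idx].mean() - predicted[avg_idx].mean()
```

In the actual experiment the contrast is of course tested on measured fMRI responses, with the selection made before scanning; the sketch only shows the ranking logic.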
Cortico-spinal modularity in the parieto-frontal system: a new perspective on action control
Classical neurophysiology suggests that the motor cortex (MI) has a unique role in action control. In contrast, this review presents evidence for multiple parieto-frontal spinal command modules that can bypass MI. Five observations support this modular perspective: (i) the statistics of cortical connectivity demonstrate functionally related clusters of cortical areas, defining functional modules in the premotor, cingulate, and parietal cortices; (ii) different corticospinal pathways originate from the above areas, each with a distinct range of conduction velocities; (iii) the activation time of each module varies depending on task, and different modules can be activated simultaneously; (iv) a modular architecture with direct motor output is faster and less metabolically expensive than an architecture that relies on MI, given the slow connections between MI and other cortical areas; (v) lesions of the areas composing parieto-frontal modules have different effects from lesions of MI. Here we provide examples of six cortico-spinal modules and the functions they subserve: module 1) arm reaching, tool use and object construction; module 2) spatial navigation and locomotion; module 3) grasping and observation of hand and mouth actions; module 4) action initiation, motor sequences, time encoding; module 5) conditional motor association and learning, action plan switching and action inhibition; module 6) planning defensive actions. These modules can serve as a library of tools to be recombined when faced with novel tasks, and MI might serve as a recombinatory hub. In conclusion, the availability of locally stored information and multiple outflow paths supports the physiological plausibility of the proposed modular perspective.