23 research outputs found

    Fast Recurrent Processing via Ventrolateral Prefrontal Cortex Is Needed by the Primate Ventral Stream for Robust Core Visual Object Recognition

    No full text
    © 2020 The Author(s). Distributed neural population spiking patterns in macaque inferior temporal (IT) cortex that support core object recognition require additional time to develop for specific, “late-solved” images. This suggests that recurrent processing is necessary for these computations. Which brain circuits compute and transmit these putative recurrent signals to IT? To test whether the ventrolateral prefrontal cortex (vlPFC) is a critical recurrent node in this system, we pharmacologically inactivated parts of vlPFC and simultaneously measured IT activity while monkeys performed object discrimination tasks. vlPFC inactivation degraded the quality of the late-phase (>150 ms from image onset) IT population code and produced commensurate behavioral deficits for late-solved images. Finally, silencing vlPFC caused the monkeys’ IT activity and behavior to become more like those produced by feedforward-only ventral stream models. Together with prior work, these results implicate fast recurrent processing through vlPFC as critical to producing behaviorally sufficient object representations in IT.

    Neural population control via deep image synthesis

    No full text
    Particular deep artificial neural networks (ANNs) are today’s most accurate models of the primate brain’s ventral visual stream. Using an ANN-driven image synthesis method, we found that luminous power patterns (i.e., images) can be applied to primate retinae to predictably push the spiking activity of targeted V4 neural sites beyond naturally occurring levels. This method, although not yet perfect, achieves unprecedented independent control of the activity state of entire populations of V4 neural sites, even those with overlapping receptive fields. These results show how the knowledge embedded in today’s ANN models might be used to noninvasively set desired internal brain states at neuron-level resolution, and suggest that more accurate ANN models would produce even more accurate control. Funding: National Eye Institute (Grant R01-EY014970); Office of Naval Research (Grant MURI-114407).
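The "ANN-driven image synthesis" described above is, at its core, gradient ascent on pixels to maximize a model-predicted neural response. The sketch below illustrates that idea with a toy stand-in model: the network (`W1`, `w2`), the pixel count, and the step size are all hypothetical placeholders, not the actual model or parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable stand-in for a model of one V4 neural site:
# response = w2 . relu(W1 @ image). Random weights for illustration only.
D, H = 64, 32                      # pixels, hidden units (arbitrary sizes)
W1 = rng.normal(size=(H, D)) / np.sqrt(D)
w2 = rng.normal(size=H) / np.sqrt(H)

def site_response(img):
    """Model-predicted firing of the targeted site for an image."""
    return w2 @ np.maximum(W1 @ img, 0.0)

def response_grad(img):
    """Hand-derived gradient of the response w.r.t. pixels (ReLU backprop)."""
    h = W1 @ img
    return W1.T @ (w2 * (h > 0))

# Gradient-ascent synthesis of a "controller" image.
img = rng.normal(size=D) * 0.01    # start from near-blank noise
for _ in range(200):
    img += 0.1 * response_grad(img)
    img = np.clip(img, -1.0, 1.0)  # keep pixels in a valid luminance range

print(site_response(img))
```

The same loop extends to population-level control by replacing the scalar response with an objective over many sites (e.g., drive one site while holding the others down).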

    Task-driven convolutional recurrent models of the visual system

    No full text
    Feed-forward convolutional neural networks (CNNs) are currently state-of-the-art for object classification tasks such as ImageNet. Further, they are quantitatively accurate models of temporally-averaged responses of neurons in the primate brain's visual system. However, biological visual systems have two ubiquitous architectural features not shared with typical CNNs: local recurrence within cortical areas, and long-range feedback from downstream areas to upstream areas. Here we explored the role of recurrence in improving classification performance. We found that standard forms of recurrence (vanilla RNNs and LSTMs) do not perform well within deep CNNs on the ImageNet task. In contrast, novel cells that incorporated two structural features, bypassing and gating, were able to boost task accuracy substantially. We extended these design principles in an automated search over thousands of model architectures, which identified novel local recurrent cells and long-range feedback connections useful for object recognition. Moreover, these task-optimized ConvRNNs matched the dynamics of neural activity in the primate visual system better than feedforward networks, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors. Funding: Simons Foundation (Grant 325500/542965); European Union Horizon 2020 Research and Innovation Programme (Grant 705498); National Institutes of Health (U.S.), National Eye Institute (Grant R01-EY014970); United States Office of Naval Research, Multidisciplinary University Research Initiative (Grant MURI-114407).
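The two structural motifs named above, bypassing and gating, can be sketched in a minimal recurrent cell: the feedforward input skips around the recurrent transform (bypass), and a learned per-channel gate mixes the old state with a new candidate. This is a simplified, hypothetical illustration (vectors instead of convolutional feature maps, random weights), not the cells found by the paper's architecture search.

```python
import numpy as np

rng = np.random.default_rng(1)
C = 16  # channels (spatial dimensions omitted for brevity)

# Hypothetical weights; a real ConvRNN cell would use convolutions.
Wx = rng.normal(size=(C, C)) * 0.1       # input -> candidate state
Wh = rng.normal(size=(C, C)) * 0.1       # state -> candidate state
Wg = rng.normal(size=(C, 2 * C)) * 0.1   # [input, state] -> gate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cell_step(x, h):
    cand = np.tanh(Wx @ x + Wh @ h)           # recurrent candidate state
    g = sigmoid(Wg @ np.concatenate([x, h]))  # per-channel gate in (0, 1)
    h_new = g * h + (1.0 - g) * cand          # gating: mix old and new state
    return h_new + x                          # bypass: input skips the recurrence

x = rng.normal(size=C)   # feedforward drive (held fixed over time here)
h = np.zeros(C)
for t in range(5):       # unroll a few time steps
    h = cell_step(x, h)
print(h.shape)
```

The bypass term means that at the first time step the cell already passes the feedforward signal through largely unchanged, so adding recurrence does not hurt early responses; the gate lets later time steps refine the state.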

    Multimodal investigations of human face perception in neurotypical and autistic adults

    No full text
    Faces are among the most important visual stimuli that we perceive in everyday life. Although there is a plethora of literature studying many aspects of face perception, the vast majority of it focuses on a single aspect using unimodal approaches. In this review, we advocate for studying face perception using multimodal cognitive neuroscience approaches. We highlight two case studies: the first investigates ambiguity in facial expressions of emotion, and the second investigates social trait judgment. In the first set of studies, we revealed an event-related potential that signals emotion ambiguity, and we found convergent responses to emotion ambiguity using functional neuroimaging and single-neuron recordings. In the second set of studies, we discussed recent findings about the neural substrates underlying comprehensive social evaluation, and the relationship between personality factors and social trait judgments. Notably, in both sets of studies, we provided an in-depth discussion of altered face perception in people with autism spectrum disorder (ASD) and offered a computational account for the behavioral and neural markers of atypical face processing in ASD. Finally, we suggest new perspectives for studying face perception. All data discussed in the case studies of this review are publicly available.

    Using an animal learning model of the hippocampus to simulate human fMRI data

    No full text
    Recent human fMRI studies have shown that the hippocampal region is essential for probabilistic category learning, memory formation and retrieval, and context-based performance. We present an artificial neural network model that can qualitatively simulate the BOLD signal for these tasks. The model offers ideas on the functional architecture and the relationship between the hippocampus and other brain structures. We also show that symptoms of neurobiological diseases such as Parkinson's disease (PD) and schizophrenia can be simulated and studied using the model.

    The inferior temporal cortex is a potential cortical precursor of orthographic processing in untrained monkeys

    No full text
    The ability to recognize written letter strings is foundational to human reading, but the underlying neuronal mechanisms remain largely unknown. Recent behavioral research in baboons suggests that non-human primates may provide an opportunity to investigate this question. We recorded the activity of hundreds of neurons in V4 and the inferior temporal cortex (IT) while naïve macaque monkeys passively viewed images of letters, English words and non-word strings, and tested the capacity of those neuronal representations to support a battery of orthographic processing tasks. We found that simple linear read-outs of IT (but not V4) population responses achieved high performance on all tested tasks, even matching the performance and error patterns of baboons on word classification. These results show that the IT cortex of untrained primates can serve as a precursor of orthographic processing, suggesting that the acquisition of reading in humans relies on the recycling of a brain network evolved for other visual functions.
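The "simple linear read-out" of population responses described above amounts to fitting a linear classifier on a matrix of neural responses (images × neurons). The sketch below shows that decoding step on synthetic data; the response statistics, class means, and regularization strength are all invented stand-ins, not the recorded IT data or the paper's exact decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for IT population responses: n_images x n_neurons.
# In the study these would be recorded firing rates; here they are
# simulated with a small class-dependent mean shift.
n_per_class, n_neurons = 100, 50
words = rng.normal(0.3, 1.0, size=(n_per_class, n_neurons))      # label +1
nonwords = rng.normal(-0.3, 1.0, size=(n_per_class, n_neurons))  # label -1
X = np.vstack([words, nonwords])
y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])

# Ridge-regularized least-squares linear read-out: w = (X'X + lam I)^-1 X'y.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ y)

# Decode word vs. non-word from the population response.
acc = np.mean(np.sign(X @ w) == y)
print(f"training accuracy: {acc:.2f}")
```

In practice such read-outs are evaluated with held-out images (cross-validation) rather than training accuracy; the point of the linearity constraint is that high performance implies the information is explicitly available in the population code.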