
    Stimulus predictability reduces responses in primary visual cortex

    In this functional magnetic resonance imaging study, we tested whether the predictability of stimuli affects responses in primary visual cortex (V1). The results indicate that visual stimuli evoke smaller responses in V1 when their onset or motion direction can be predicted from the dynamics of surrounding illusory motion. We conclude that the human brain anticipates forthcoming sensory input, allowing predictable visual stimuli to be processed with less neural activation at early stages of cortical processing.

    Untangling Perceptual Memory: Hysteresis and Adaptation Map into Separate Cortical Networks

    Perception is an active inferential process in which prior knowledge is combined with sensory input, the result of which determines the contents of awareness. Accordingly, previous experience is known to help the brain “decide” what to perceive. However, a critical aspect that has not been addressed is that previous experience can exert 2 opposing effects on perception: an attractive effect, sensitizing the brain to perceive the same again (hysteresis), or a repulsive effect, making it more likely to perceive something else (adaptation). We used functional magnetic resonance imaging and modeling to elucidate how the brain entertains these 2 opposing processes, and what determines the direction of such experience-dependent perceptual effects. We found that although they affect our perception concurrently, hysteresis and adaptation map into distinct cortical networks: a widespread network of higher-order visual and fronto-parietal areas was involved in perceptual stabilization, while adaptation was confined to early visual areas. This areal and hierarchical segregation may explain how the brain maintains the balance between exploiting redundancies and staying sensitive to new information. We provide a Bayesian model that accounts for the coexistence of hysteresis and adaptation by separating their causes into 2 distinct terms: hysteresis alters the prior, whereas adaptation changes the sensory evidence (the likelihood function).
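    The two-term structure of such a model can be sketched as follows (a minimal illustration assuming a two-alternative percept and arbitrary gain parameters; the function and parameter names are not from the paper): hysteresis biases the prior toward the previous percept, while adaptation down-weights the likelihood of the interpretation that was just perceived.

```python
import numpy as np

def perceive(log_lik, prev_percept, hysteresis=0.6, adaptation=0.4):
    """Toy two-alternative Bayesian observer (interpretations 0 and 1).

    log_lik      : length-2 array, log-likelihood of the ambiguous input
                   under each interpretation.
    prev_percept : 0 or 1, what was perceived on the previous trial.
    hysteresis   : shifts the *prior* toward the previous percept
                   (attractive effect).
    adaptation   : attenuates the *likelihood* of the previously perceived
                   interpretation (repulsive effect).
    """
    # Hysteresis term: the prior favours repeating the previous percept.
    prior = np.full(2, 0.5)
    prior[prev_percept] += hysteresis / 2
    prior[1 - prev_percept] -= hysteresis / 2

    # Adaptation term: evidence for the previous percept is down-weighted.
    lik = np.exp(log_lik)
    lik[prev_percept] *= (1 - adaptation)

    post = prior * lik
    return post / post.sum()  # posterior probability of each interpretation

# Perfectly ambiguous input; the previous percept was interpretation 1.
# Whether attraction or repulsion wins depends on the two gain parameters.
print(perceive(np.log([0.5, 0.5]), prev_percept=1))
```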

    An Open Resource for Non-human Primate Imaging.

    Non-human primate neuroimaging is a rapidly growing area of research that promises to transform and scale translational and cross-species comparative neuroscience. Unfortunately, the technological and methodological advances of the past two decades have outpaced the accrual of data, which is particularly challenging given the relatively few centers that have the necessary facilities and capabilities. The PRIMatE Data Exchange (PRIME-DE) addresses this challenge by aggregating independently acquired non-human primate magnetic resonance imaging (MRI) datasets and openly sharing them via the International Neuroimaging Data-sharing Initiative (INDI). Here, we present the rationale, design, and procedures for the PRIME-DE consortium, as well as the initial release, consisting of 25 independent data collections aggregated across 22 sites (total = 217 non-human primates). We also outline the unique pitfalls and challenges that should be considered in the analysis of non-human primate MRI datasets, including the provision of automated quality assessment of the contributed datasets.
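    INDI collections are typically distributed through a public Amazon S3 bucket; the bucket name and key prefix below are assumptions for illustration only, and the PRIME-DE documentation should be consulted for the actual paths of each release. A minimal sketch of anonymously listing a shared collection with boto3:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client -- the data are openly shared, no credentials needed.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Bucket and prefix are illustrative assumptions; check the PRIME-DE site
# for the actual location of each contributed collection.
BUCKET = "fcp-indi"
PREFIX = "data/Projects/INDI/PRIME/"

resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=20)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```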

    Visual perceptual learning of feature conjunctions leverages non-linear mixed selectivity

    Visual objects are often defined by multiple features. Therefore, learning novel objects entails learning feature conjunctions. Visual cortex is organized into distinct anatomical compartments, each of which is devoted to processing a single feature. Prime examples are neurons selective purely for color or purely for orientation. However, neurons that jointly encode multiple features (mixed selectivity) also exist across the brain and play critical roles in a multitude of tasks. Here, we sought to uncover the optimal policy that the brain adopts to achieve conjunction learning using these available resources. Fifty-nine human subjects practiced orientation-color conjunction learning in four psychophysical experiments designed to nudge the visual system towards using one or the other resource. We find that conjunction learning is possible by linear mixing of pure color and orientation information, but that learning is greater and faster when both pure and mixed selectivity representations are involved. We also find that learning with mixed selectivity confers advantages in performing an untrained “exclusive or” (XOR) task several months after learning the original conjunction task. This study sheds light on possible mechanisms underlying conjunction learning and highlights the importance of learning by mixed selectivity.
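    The computational intuition behind the XOR advantage can be illustrated with a toy readout (a sketch under simple binary-feature assumptions, not the authors' analysis): a linear readout of two pure feature channels cannot solve XOR, whereas adding a single nonlinearly mixed channel, the product of the two features, makes the problem linearly separable.

```python
import numpy as np

# Four stimuli defined by two binary features (e.g., color x orientation)
# and an XOR label: "target" only when exactly one feature is present.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def linear_readout(features, labels):
    """Least-squares linear readout with a bias term."""
    A = np.column_stack([np.ones(len(features)), features])
    w, *_ = np.linalg.lstsq(A, labels, rcond=None)
    return A @ w

# Pure selectivity only: the best linear readout predicts 0.5 everywhere,
# so no threshold can separate the two XOR classes.
print("pure channels:  ", np.round(linear_readout(X, y), 2))

# Add one nonlinearly mixed channel (product of the two features):
# the same linear readout now reproduces the XOR labels exactly.
X_mixed = np.column_stack([X, X[:, 0] * X[:, 1]])
print("with mixed unit:", np.round(linear_readout(X_mixed, y), 2))
```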

    Specialized Networks for Social Cognition in the Primate Brain

    Primates have evolved diverse cognitive capabilities to navigate their complex social world. To understand how the brain implements critical social cognitive abilities, we describe functional specialization in the domains of face processing, social interaction understanding, and mental state attribution. Systems for face processing are specialized from the level of single cells to populations of neurons within brain regions to hierarchically organized networks that extract and represent abstract social information. Such functional specialization is not confined to the sensorimotor periphery but appears to be a pervasive theme of primate brain organization all the way to the apex regions of cortical hierarchies. Circuits processing social information are juxtaposed with parallel systems involved in processing nonsocial information, suggesting common computations applied to different domains. The emerging picture of the neural basis of social cognition is a set of distinct but interacting subnetworks involved in component processes such as face perception and social reasoning, traversing large parts of the primate brain.

    Previous motor actions outweigh sensory information in sensorimotor statistical learning

    Humans can use their previous experience, in the form of statistical priors, to improve decisions. It is, however, unclear how such priors are learned and represented. Importantly, it has remained elusive whether prior learning is independent of the sensorimotor system involved in the learning process, as both modality-specific and modality-general learning have been reported in the past. Here, we used a saccadic eye movement task to probe the learning and representation of a spatial prior across a few trials. In this task, learning occurs in an unsupervised manner, through encountering trial-by-trial visual hints drawn from a distribution centered on the target location. Using a model-comparison approach, we found that participants’ prior knowledge is largely represented in the form of their previous motor actions, with minimal influence from the previously seen visual hints. By using two different motor contexts for the response (looking either at the estimated target location or exactly opposite to it), we could further test whether prior experience obtained in one motor context transfers to the other. Although learning curves were highly similar, and participants seemed to use the same strategy for both response types, they could not fully transfer their knowledge between contexts, as performance and confidence ratings dropped after a switch of the required response. Together, our results suggest that humans preferentially use internal representations of their previous motor actions, rather than past incoming sensory information, to form statistical sensorimotor priors on the timescale of a few trials.
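    The distinction being tested can be pictured with a toy simulation (illustrative only, with made-up parameters rather than the authors' fitted models): on each trial the spatial prior is updated either from the visual hint that was shown or from the motor response that was actually executed.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 10.0          # hidden target location (arbitrary units)
hint_sd = 3.0          # spread of the visual hints around the target
motor_noise_sd = 1.0   # noise added when executing the saccade
learning_rate = 0.5    # how strongly each trial updates the prior

def run_trials(n_trials=20, update_from="motor"):
    """Toy trial loop: the prior mean is updated either from the visual
    hint (sensory updating) or from the executed response (motor updating)."""
    prior_mean = 0.0
    responses = []
    for _ in range(n_trials):
        hint = target + rng.normal(0, hint_sd)
        # The response combines the current prior with the hint, plus motor noise.
        response = 0.5 * prior_mean + 0.5 * hint + rng.normal(0, motor_noise_sd)
        source = response if update_from == "motor" else hint
        prior_mean += learning_rate * (source - prior_mean)
        responses.append(response)
    return np.array(responses)

print("motor-based prior, last responses:  ", np.round(run_trials(update_from="motor")[-3:], 1))
print("sensory-based prior, last responses:", np.round(run_trials(update_from="sensory")[-3:], 1))
```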

    Face and Mooney face stimuli.

    (A) This greyscale image is easily perceived as a face although most visual information is covered by shadows. (B) A typical “Mooney” face. (C) An extremely easy “Mooney” face, devoid of cast shadows.