
    A New Perceptual Bias Reveals Suboptimal Population Decoding of Sensory Responses

    Several studies have reported optimal population decoding of sensory responses in two-alternative visual discrimination tasks. Such decoding involves integrating noisy neural responses into a more reliable representation of the likelihood that the stimuli under consideration evoked the observed responses. Importantly, an ideal observer must be able to evaluate likelihood with high precision and only consider the likelihood of the two relevant stimuli involved in the discrimination task. We report a new perceptual bias suggesting that observers read out the likelihood representation with remarkably low precision when discriminating grating spatial frequencies. Using spectrally filtered noise, we induced an asymmetry in the likelihood function of spatial frequency. This manipulation mainly affects the likelihood of spatial frequencies that are irrelevant to the task at hand. Nevertheless, we find a significant shift in perceived grating frequency, indicating that observers evaluate likelihoods of a broad range of irrelevant frequencies and discard prior knowledge of stimulus alternatives when performing two-alternative discrimination.
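    The decoding scheme discussed in this abstract can be illustrated with a minimal sketch (assumed log-Gaussian tuning curves and independent Poisson noise, not the authors' stimuli or analysis code): an ideal two-alternative observer compares the likelihoods of only the two candidate spatial frequencies, whereas a low-precision readout that integrates the likelihood over a broad range of irrelevant frequencies can be biased by any asymmetry in that function.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative population: log-Gaussian tuning curves over spatial frequency (cycles/deg).
        pref = np.logspace(np.log10(0.5), np.log10(8.0), 40)   # preferred frequencies (assumed)
        gain, width = 20.0, 0.35                                # peak count and tuning width (assumed)

        def rates(freq):
            """Mean spike counts of the population for a grating of the given spatial frequency."""
            return gain * np.exp(-0.5 * ((np.log(freq) - np.log(pref)) / width) ** 2) + 0.5

        # One simulated trial: a 2 cycles/deg grating with independent Poisson spike counts.
        counts = rng.poisson(rates(2.0))

        # Log likelihood of each candidate frequency given the observed counts.
        candidates = np.logspace(np.log10(0.5), np.log10(8.0), 200)
        loglik = np.array([np.sum(counts * np.log(rates(f)) - rates(f)) for f in candidates])

        # Ideal two-alternative readout: compare the likelihoods of the two relevant frequencies only.
        f_low, f_high = 1.8, 2.2
        ll_low, ll_high = (np.sum(counts * np.log(rates(f)) - rates(f)) for f in (f_low, f_high))
        choice_ideal = "high" if ll_high > ll_low else "low"

        # Low-precision readout: estimate frequency from the full likelihood over all frequencies,
        # relevant or not, then compare the estimate with the decision boundary.
        post = np.exp(loglik - loglik.max())
        f_hat = np.sum(candidates * post) / np.sum(post)
        choice_broad = "high" if f_hat > np.sqrt(f_low * f_high) else "low"

        print(choice_ideal, choice_broad, round(float(f_hat), 2))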

    Population decoding in rat barrel cortex: optimizing the linear readout of correlated population responses

    Sensory information is encoded in the response of neuronal populations. How might this information be decoded by downstream neurons? Here we analyzed the responses of simultaneously recorded barrel cortex neurons to sinusoidal vibrations of varying amplitudes preceded by three adapting stimuli of 0, 6 and 12 µm in amplitude. Using the framework of signal detection theory, we quantified the performance of a linear decoder which sums the responses of neurons after applying an optimum set of weights. Optimum weights were found by the analytical solution that maximized the average signal-to-noise ratio based on Fisher linear discriminant analysis. This provided a biologically plausible decoder that took into account the neuronal variability, covariability, and signal correlations. The optimal decoder achieved consistent improvement in discrimination performance over simple pooling. Decorrelating neuronal responses by trial shuffling revealed that, unlike pooling, the performance of the optimal decoder was minimally affected by noise correlation. In the non-adapted state, noise correlation enhanced the performance of the optimal decoder for some populations. Under adaptation, however, noise correlation always degraded the performance of the optimal decoder. Nonetheless, sensory adaptation improved the performance of the optimal decoder mainly by increasing signal correlation more than noise correlation. Adaptation induced little systematic change in the relative direction of signal and noise. Thus, a decoder which was optimized under the non-adapted state generalized well across states of adaptation.
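    As a rough sketch of the readout described above (synthetic correlated responses standing in for the recorded barrel-cortex data; all parameters are assumptions), Fisher-linear-discriminant weights w = C^-1 (mu2 - mu1) can be compared with simple pooling, with performance quantified as the signal-to-noise ratio (d') of the weighted population sum:

        import numpy as np

        rng = np.random.default_rng(1)
        n_neurons, n_trials = 30, 2000

        # Assumed synthetic population responses to two vibration amplitudes,
        # with a shared (correlated) noise source plus private noise.
        mu1 = rng.uniform(5, 15, n_neurons)            # mean responses, stimulus 1
        mu2 = mu1 + rng.uniform(0.5, 2.0, n_neurons)   # stimulus 2 evokes slightly larger responses
        shared = rng.normal(size=(n_trials, 1))        # common noise -> positive noise correlations
        r1 = mu1 + 2.0 * shared + rng.normal(scale=2.0, size=(n_trials, n_neurons))
        r2 = mu2 + 2.0 * shared + rng.normal(scale=2.0, size=(n_trials, n_neurons))

        # Fisher-linear-discriminant weights: w = C^-1 (mu2 - mu1), C = average noise covariance.
        C = 0.5 * (np.cov(r1.T) + np.cov(r2.T))
        w_opt = np.linalg.solve(C, r2.mean(0) - r1.mean(0))
        w_pool = np.ones(n_neurons)                    # simple pooling: equal weights

        def dprime(w):
            """Signal-to-noise ratio of the weighted population sum."""
            s1, s2 = r1 @ w, r2 @ w
            return (s2.mean() - s1.mean()) / np.sqrt(0.5 * (s1.var() + s2.var()))

        print(f"optimal readout d' = {dprime(w_opt):.2f}, pooling d' = {dprime(w_pool):.2f}")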

    Causal inference in multisensory perception and the brain

    To build coherent and veridical multisensory representations of the environment, human observers consider the causal structure of multisensory signals: If they infer a common source of the signals, observers integrate them weighted by their reliability. Otherwise, they segregate the signals. Generally, observers infer a common source if the signals correspond structurally and spatiotemporally. In six projects, the current PhD thesis investigated this causal inference model with the help of audiovisual spatial signals presented to human observers in a ventriloquist paradigm. A first psychophysical study showed that sensory reliability determines causal inference via two mechanisms: Sensory reliability modulates how observers infer the causal structure from spatial signal disparity. Further, sensory reliability determines the weight of audiovisual signals if observers integrate the signals under the assumption of a common source. Using multivariate decoding of fMRI signals, three PhD projects revealed that auditory and visual cortical hierarchies jointly implement causal inference. Specific regions of the hierarchies represented constituent spatial estimates of the causal inference model. In line with this model, anterior regions of intraparietal sulcus (IPS) represented audiovisual signals dependent on visual reliability, task-relevance, and spatial disparity of the signals. However, even in the case of small signal discrepancies suggesting a common source, reliability-weighting in IPS was suboptimal as compared to a Maximum Likelihood Estimation model. By temporally manipulating visual reliability, the fifth PhD project demonstrated that human observers learn sensory reliability from current and past signals in order to weight audiovisual signals, consistent with a Bayesian learner. Finally, the sixth project showed that if visual flashes were suppressed from awareness by continuous flash suppression, the visual bias of the perceived auditory location was strongly reduced but still significant. The reduced ventriloquist effect was presumably mediated by the drop of visual reliability accompanying perceptual unawareness. In conclusion, the PhD thesis suggests that human observers integrate multisensory signals according to their causal structure and temporal regularity: They integrate the signals if a common source is likely, weighting them in proportion to the reliability that they learnt from the signals’ history. Crucially, specific regions of cortical hierarchies jointly implement these multisensory processes.
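    The causal inference model referred to above follows a standard form (in the spirit of Körding et al., 2007); the sketch below is not the thesis code, and the noise and prior parameters are illustrative assumptions. It computes the posterior probability of a common cause from one pair of audiovisual measurements, the reliability-weighted fused estimate, the segregated auditory estimate, and their model-averaged combination:

        import numpy as np

        def causal_inference(x_a, x_v, sigma_a=4.0, sigma_v=1.0, sigma_p=15.0, p_common=0.5):
            """Model-averaged auditory location estimate for one audiovisual trial.

            x_a, x_v: internal auditory/visual measurements (deg). The sigmas (sensory noise SDs,
            spatial-prior SD) and p_common (prior probability of a common cause) are illustrative.
            """
            va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

            # Likelihood of the measurements under a common cause (C = 1) vs. independent causes (C = 2),
            # with a zero-mean Gaussian spatial prior integrated out.
            like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                             / (va * vv + va * vp + vv * vp)) \
                      / (2 * np.pi * np.sqrt(va * vv + va * vp + vv * vp))
            like_c2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
                      / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))

            post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

            # Reliability-weighted fusion (C = 1) and segregated auditory estimate (C = 2).
            s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
            s_segregated = (x_a / va) / (1 / va + 1 / vp)

            # Model averaging: weight the two estimates by the posterior over causal structures.
            return post_c1 * s_fused + (1 - post_c1) * s_segregated, post_c1

        est, p_c1 = causal_inference(x_a=10.0, x_v=2.0)
        print(f"auditory estimate {est:.1f} deg, P(common cause) = {p_c1:.2f}")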

    Confirmation bias without rhyme or reason

    Having a confirmation bias sometimes leads us to hold inaccurate beliefs. So, the puzzle goes: why do we have it? According to the influential argumentative theory of reasoning, confirmation bias emerges because the primary function of reason is not to form accurate beliefs, but to convince others that we’re right. A crucial prediction of the theory, then, is that confirmation bias should be found only in the reasoning domain. In this article, we argue that there is evidence that confirmation bias does exist outside the reasoning domain. This undermines the main evidential basis for the argumentative theory of reasoning. In presenting the relevant evidence, we explore why having such a confirmation bias may not be maladaptive.

    Correlated Variability and Adaptation in Orbitofrontal Cortex during Economic Choice

    Economic decision-making requires the computation and comparison of subjective values. Several lines of evidence suggest that these processes are mediated by circuits in orbitofrontal cortex (OFC). Neurons in OFC encode the subjective values of choice options and outcomes, and damage to this area leads to selective deficits in value-guided behavior. To understand the nature of choice more thoroughly, it is useful to consider the features of OFC circuits that can limit or enhance information processing. In this document, I present work examining two factors that influence encoding in OFC: noise correlation and value adaptation. In the first study, I show that noise correlations in OFC are small but non-negligible, and that the structure of these correlations constrains the resolution of value representation in OFC. I go on to show that correlation structure predicts a weak relationship between single-neuron variability and decision outcomes in the context of a uniform linear model of decision making. These findings are consistent with empirical data and support the hypothesis that OFC mediates value-based decision-making. In the second study, I investigate how neurons in OFC adapt to changes in the value distribution. I show that neurons adapt to both maximum and minimum available values, but that the dynamic range does not completely remap across conditions. While intermediate adaptation is sub-optimal, it indicates that OFC neurons can partially compensate for changes in the scale of decisions, allowing increased resolution of value encoding in high-magnitude conditions. In summary, decision-making may be limited by correlated noise, but the effect of this constraint is relatively small. Moreover, variability introduced by noise correlation may be partially ameliorated by adaptation to the value range.
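    The partial range adaptation described above can be caricatured with a simple firing-rate model (an illustration under assumed parameters, not the recorded OFC responses): a neuron's response mixes a fixed value tuning with a tuning rescaled to the current value range, and a mixing parameter between 0 and 1 captures the incomplete remapping of the dynamic range:

        import numpy as np

        def adapted_rate(value, v_min, v_max, r_max=40.0, alpha=0.6):
            """Firing rate of a value-coding neuron under partial range adaptation.

            alpha = 1 would be full adaptation (the dynamic range remaps completely onto
            [v_min, v_max]); alpha = 0 would be no adaptation (fixed tuning over an assumed
            global range of 0-20). r_max and alpha are illustrative assumptions.
            """
            fixed = value / 20.0                              # fixed tuning over the global range
            rescaled = (value - v_min) / (v_max - v_min)      # tuning rescaled to the current range
            return r_max * np.clip((1 - alpha) * fixed + alpha * rescaled, 0.0, 1.0)

        # Discriminability of two nearby high values improves when the current range is narrow and
        # high (14-18) rather than wide (0-20), because partial adaptation spreads the current range
        # over more of the neuron's dynamic range.
        wide = adapted_rate(17.0, 0.0, 20.0) - adapted_rate(16.0, 0.0, 20.0)
        narrow = adapted_rate(17.0, 14.0, 18.0) - adapted_rate(16.0, 14.0, 18.0)
        print(f"rate difference per unit value: wide range {wide:.1f} Hz, narrow range {narrow:.1f} Hz")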

    Semi-orthogonal subspaces for value mediate a tradeoff between binding and generalization

    When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
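    One way to quantify the "semi-orthogonal subspaces" idea is sketched below on synthetic data (not the recorded macaque populations; the overlap between value axes is an assumption built into the simulation): each offer's value-coding axis is estimated by regression, and the cosine of the angle between the two axes measures how far the left- and right-offer subspaces are from orthogonal:

        import numpy as np

        rng = np.random.default_rng(2)
        n_neurons, n_trials = 80, 600

        # Assumed population: each neuron mixes the values of the left and right offers
        # through partly distinct readout directions (semi-orthogonal by construction).
        axis_left = rng.normal(size=n_neurons)
        axis_right = 0.4 * axis_left + rng.normal(size=n_neurons)   # partial overlap
        v_left, v_right = rng.uniform(0, 1, (2, n_trials))
        rates = (np.outer(v_left, axis_left) + np.outer(v_right, axis_right)
                 + 0.5 * rng.normal(size=(n_trials, n_neurons)))

        # Estimate each offer's value-coding axis by least-squares regression of rates on value.
        X = np.column_stack([v_left, v_right, np.ones(n_trials)])
        coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
        b_left, b_right = coef[0], coef[1]

        # Alignment of the two estimated axes: 1 = identical subspaces, 0 = fully orthogonal.
        alignment = abs(b_left @ b_right) / (np.linalg.norm(b_left) * np.linalg.norm(b_right))
        print(f"cosine of the angle between left- and right-value axes: {alignment:.2f}")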

    Interpreting Encoding and Decoding Models

    Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data. However, the interpretation of their results requires care. Decoding models can help reveal whether particular information is present in a brain region in a format the decoder can exploit. Encoding models make comprehensive predictions about representational spaces. In the context of sensory systems, encoding models enable us to test and compare brain-computational models, and thus directly constrain computational theory. Encoding and decoding models typically include fitted linear-model components. Sometimes the weights of the fitted linear combinations are interpreted as reflecting, in an encoding model, the contribution of different sensory features to the representation or, in a decoding model, the contribution of different measured brain responses to a decoded feature. Such interpretations can be problematic when the predictor variables or their noise components are correlated and when priors (or penalties) are used to regularize the fit. Encoding and decoding models are evaluated in terms of their generalization performance. The correct interpretation depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population). Significant decoding or encoding performance of a single model (at whatever level of generality) does not provide strong constraints for theory. Many models must be tested and inferentially compared for analyses to drive theoretical progress.
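    A minimal sketch of the kind of regularized linear encoding model and generalization test discussed above (synthetic stimuli and responses, illustrative ridge penalty): the model is fit on one set of stimuli and evaluated on held-out stimuli from the same population, and the fitted weights turn out to be an unreliable guide to the true drivers when predictors are correlated and the fit is penalized:

        import numpy as np

        rng = np.random.default_rng(3)
        n_stim, n_feat = 200, 20

        # Assumed stimulus features with strong correlations between predictors.
        latent = rng.normal(size=(n_stim, 5))
        features = latent @ rng.normal(size=(5, n_feat)) + 0.1 * rng.normal(size=(n_stim, n_feat))
        true_w = np.zeros(n_feat)
        true_w[:3] = [1.0, -0.5, 0.8]                 # only 3 features actually drive the response
        response = features @ true_w + rng.normal(scale=1.0, size=n_stim)

        def ridge(X, y, lam):
            """Closed-form ridge regression: w = (X'X + lam I)^-1 X'y."""
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

        # Fit on half the stimuli, evaluate generalization to new stimuli from the same population.
        train, test = slice(0, 100), slice(100, 200)
        w = ridge(features[train], response[train], lam=10.0)
        pred = features[test] @ w
        r = np.corrcoef(pred, response[test])[0, 1]

        print(f"held-out prediction r = {r:.2f}")
        # The largest fitted weights need not fall on the 3 true driver features,
        # because the predictors are correlated and the fit is penalized.
        print("features with largest |weight|:", np.argsort(-np.abs(w))[:5])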

    Scaling of sensory information in large neural populations shows signatures of information-limiting correlations

    How is information distributed across large neuronal populations within a given brain area? Information may be distributed roughly evenly across neuronal populations, so that total information scales linearly with the number of recorded neurons. Alternatively, the neural code might be highly redundant, meaning that total information saturates. Here we investigate how sensory information about the direction of a moving visual stimulus is distributed across hundreds of simultaneously recorded neurons in mouse primary visual cortex. We show that information scales sublinearly due to correlated noise in these populations. We compartmentalize noise correlations into information-limiting and nonlimiting components, then extrapolate to predict how information grows with even larger neural populations. We predict that tens of thousands of neurons encode 95% of the information about visual stimulus direction, far fewer than the number of neurons in primary visual cortex. These findings suggest that the brain uses a widely distributed but nonetheless redundant code that supports recovering most sensory information from smaller subpopulations. We would like to thank Alexandre Pouget, Peter Latham, and members of the HMS Neurobiology Department for useful discussions and feedback on the work, and Rachel Wilson and Richard Born for comments on early versions of the manuscript. The work was supported by a scholar award from the James S. McDonnell Foundation (grant #220020462 to J.D.), grants from the NIH (R01MH115554 to J.D.; R01MH107620 to C.D.H.; R01NS089521 to C.D.H.; R01NS108410 to C.D.H.; F31EY031562 to A.W.J.), the NSF’s NeuroNex program (DBI-1707398 to R.N.), MINECO (Spain; BFU2017-85936-P to R.M.-B.), the Howard Hughes Medical Institute (HHMI, ref 55008742 to R.M.-B.), the ICREA Academia (2016 to R.M.-B.), the Government of Aragon (Spain; ISAAC lab, cod T33 17D to I.A.-R.), the Spanish Ministry of Economy and Competitiveness (TIN2016-80347-R to I.A.-R.), the Gatsby Charitable Foundation (to R.N.), and an NSF Graduate Research Fellowship (to A.W.J.).
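    The saturating scaling implied by information-limiting correlations can be written as I(N) = I0·N / (1 + I0·N / I_inf), in the spirit of Moreno-Bote et al. (2014). The sketch below fits that curve to assumed measurements (illustrative numbers, not the paper's data) and extrapolates the population size at which 95% of the asymptotic information is reached:

        import numpy as np
        from scipy.optimize import curve_fit

        def info_scaling(n, i0, i_inf):
            """Fisher information of n neurons with information-limiting correlations:
            grows as i0 * n for small n and saturates at i_inf."""
            return i0 * n / (1.0 + i0 * n / i_inf)

        # Assumed measurements: information estimated from recorded subpopulations of increasing size.
        n_obs = np.array([25, 50, 100, 200, 400, 800])
        i_obs = np.array([4.9, 9.7, 18.8, 35.3, 63.0, 104.0])   # illustrative values only

        (i0, i_inf), _ = curve_fit(info_scaling, n_obs, i_obs, p0=[0.2, 300.0])

        # Extrapolate: info_scaling(n) = 0.95 * i_inf  =>  n = 0.95 * i_inf / (0.05 * i0) = 19 * i_inf / i0.
        n_95 = 19.0 * i_inf / i0
        print(f"i0 = {i0:.2f} per neuron, asymptotic information = {i_inf:.0f}, "
              f"~{n_95:,.0f} neurons reach 95% of it")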