    Human noise blindness drives suboptimal cognitive inference

    Humans typically make near-optimal sensorimotor judgements but show systematic biases when making more cognitive judgements. Here we test the hypothesis that, while humans are sensitive to the noise present during early sensory encoding, the “optimality gap” arises because they are blind to noise introduced by later cognitive integration of variable or discordant pieces of information. In six psychophysical experiments, human observers judged the average orientation of an array of contrast gratings. We varied the stimulus contrast (encoding noise) and orientation variability (integration noise) of the array. Participants adapted near-optimally to changes in encoding noise, but, under increased integration noise, displayed a range of suboptimal behaviours: they ignored stimulus base rates, reported excessive confidence in their choices, and refrained from opting out of objectively difficult trials. These overconfident behaviours were captured by a Bayesian model blind to integration noise. Our study provides a computationally grounded explanation of human suboptimal cognitive inference.
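
    To make the modelling idea concrete, here is a minimal sketch (in Python, not the authors' code) of an averaging observer whose confidence tracks encoding noise but ignores integration noise; all function names, parameter names, and values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_trial(mean_ori, ori_sd, sigma_enc, n_items=8):
    """One trial: n_items grating orientations drawn around mean_ori with
    spread ori_sd (integration noise source), each encoded with Gaussian
    sensory noise sigma_enc (encoding noise source)."""
    orientations = rng.normal(mean_ori, ori_sd, n_items)
    percepts = orientations + rng.normal(0.0, sigma_enc, n_items)
    estimate = percepts.mean()
    # Calibrated uncertainty about the mean reflects both noise sources:
    calibrated_sd = np.sqrt((sigma_enc**2 + ori_sd**2) / n_items)
    # A noise-blind observer attributes uncertainty to encoding noise only:
    blind_sd = np.sqrt(sigma_enc**2 / n_items)
    return estimate, calibrated_sd, blind_sd

def confidence_clockwise(estimate, sd):
    """Posterior probability that the average tilt is clockwise (> 0),
    given a Gaussian posterior over the mean orientation."""
    return norm.cdf(estimate / sd)

# Raising ori_sd leaves the blind observer's confidence formula untouched,
# so it reports the same (excessive) confidence on objectively harder trials.
est, cal_sd, blind_sd = simulate_trial(mean_ori=2.0, ori_sd=10.0, sigma_enc=3.0)
print(confidence_clockwise(est, cal_sd), confidence_clockwise(est, blind_sd))
```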

    Valence-Specific Modulation in the Accumulation of Perceptual Evidence Prior to Visual Scene Recognition

    Get PDF
    Visual scene recognition is a dynamic process through which incoming sensory information is iteratively compared with predictions regarding the most likely identity of the input stimulus. In this study, we used a novel progressive unfolding task to characterize the accumulation of perceptual evidence prior to scene recognition, and its potential modulation by the emotional valence of these scenes. Our results show that emotional (pleasant and unpleasant) scenes led to slower accumulation of evidence compared to neutral scenes. In addition, when controlling for the potential contribution of non-emotional factors (i.e., familiarity and complexity of the pictures), our results confirm a reliable shift in the accumulation of evidence for pleasant relative to neutral and unpleasant scenes, suggesting a valence-specific effect. These findings indicate that proactive iterations between sensory processing and top-down predictions during scene recognition are reliably influenced by the rapidly extracted (positive) emotional valence of the visual stimuli. We interpret these findings in accordance with the notion of a genuine positivity offset during emotional scene recognition.
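
    As a rough illustration of the accumulation idea (a sketch under assumed parameters, not the authors' model), slower evidence accumulation for emotional scenes can be captured by lowering the drift of a simple accumulate-to-threshold process:

```python
import numpy as np

rng = np.random.default_rng(1)

def frames_to_recognition(drift, threshold=1.0, noise=0.1):
    """Accumulate noisy evidence over successive unfolding frames until
    the recognition threshold is crossed; return the number of frames."""
    evidence, frames = 0.0, 0
    while evidence < threshold:
        evidence += rng.normal(drift, noise)  # evidence from one frame
        frames += 1
    return frames

# A lower drift rate (slower accumulation) for emotional scenes yields
# recognition at later frames, the pattern reported for emotional vs.
# neutral stimuli in the progressive unfolding task.
neutral = np.mean([frames_to_recognition(drift=0.10) for _ in range(2000)])
emotional = np.mean([frames_to_recognition(drift=0.07) for _ in range(2000)])
print(neutral, emotional)
```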

    Expectation (and attention) in visual cognition

    Visual cognition is limited by computational capacity, because the brain can process only a fraction of the visual sensorium in detail, and by the inherent ambiguity of the information entering the visual system. Two mechanisms mitigate these burdens: attention prioritizes stimulus processing on the basis of motivational relevance, and expectations constrain visual interpretation on the basis of prior likelihood. Of the two, attention has been extensively investigated while expectation has been relatively neglected. Here, we review recent work that has begun to delineate a neurobiology of visual expectation, and contrast the findings with those of the attention literature, to explore how these two central influences on visual perception overlap, differ and interact.

    Grounding predictive coding models in empirical neuroscience research

    Clark makes a convincing case for the merits of conceptualizing brains as hierarchical prediction machines. This perspective has the potential to provide an elegant and powerful general theory of brain function, but it will ultimately stand or fall with evidence from basic neuroscience research. Here, we characterize the status quo of that evidence and highlight important avenues for future investigations.

    Feature-based attention and feature-based expectation

    Foreknowledge of target stimulus features improves visual search performance as a result of 'feature-based attention' (FBA). Recent studies have reported that 'feature-based expectation' (FBE) also heightens decision sensitivity. Superficially, it appears that the latter work has simply rediscovered (and relabeled) the effects of FBA. However, this is not the case. Here we explain why.

    Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected vs. unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations, like those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human functional magnetic resonance imaging (fMRI) with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multi-feature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neuro-computational principles of multi-feature expectations and indicate that objects are the unit of selection for predictive vision.
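
    The winning “mixing” account can be caricatured in a few lines (an illustrative sketch; the mixing weight and function name are assumptions, not the paper's implementation):

```python
def feature_surprise(pe_color, pe_motion, mixing=0.5):
    """Per-feature surprise under rival accounts of multi-feature
    expectations: mixing = 0 gives independent feature processing (1);
    0 < mixing <= 0.5 spreads surprise between features (2). A
    segmentation account (3) would instead decouple the two features."""
    s_color = (1 - mixing) * pe_color + mixing * pe_motion
    s_motion = (1 - mixing) * pe_motion + mixing * pe_color
    return s_color, s_motion

# An unexpected color paired with fully expected motion:
print(feature_surprise(pe_color=1.0, pe_motion=0.0, mixing=0.5))
# -> (0.5, 0.5): surprise spreads, so the whole object reads as unexpected.
```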

    Expectation and surprise determine neural population responses in the ventral visual stream

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, "predictive coding" models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction ("face expectation") and prediction error ("face surprise"), rather than a homogeneous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs. houses) and subjects' perceptual expectations regarding those features (low vs. medium vs. high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se.
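
    A toy version of the predictive coding account (illustrative only; the weights are hand-picked assumptions chosen to reproduce the reported interaction, not the paper's fitted model) treats FFA activity as a weighted sum of face representation and face surprise:

```python
def ffa_response(is_face, p_face, w_rep=0.5, w_err=1.0):
    """Toy predictive-coding readout of FFA population activity:
    a representational term driven by the face input plus an error term
    encoding the unsigned mismatch between the face prediction (p_face)
    and the input."""
    face_evidence = 1.0 if is_face else 0.0
    surprise = abs(face_evidence - p_face)      # face prediction error
    return w_rep * face_evidence + w_err * surprise

# Face and house responses converge under high face expectation
# (0.75 vs. 0.75) and are maximally differentiated under low face
# expectation (1.25 vs. 0.25), mirroring the reported interaction.
for p_face in (0.25, 0.50, 0.75):
    print(p_face, ffa_response(True, p_face), ffa_response(False, p_face))
```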

    Attention sharpens the distinction between expected and unexpected percepts in the visual brain

    Attention, the prioritization of goal-relevant stimuli, and expectation, the modulation of stimulus processing by probabilistic context, represent the two main endogenous determinants of visual cognition. Neural selectivity in visual cortex is enhanced for both attended and expected stimuli, but the functional relationship between these mechanisms is poorly understood. Here, we adjudicated between two current hypotheses of how attention relates to predictive processing, namely, that attention either enhances or filters out perceptual prediction errors (PEs), the PE-promotion model versus the PE-suppression model. We acquired fMRI data from category-selective visual regions while human subjects viewed expected and unexpected stimuli that were either attended or unattended. Then, we trained multivariate neural pattern classifiers to discriminate expected from unexpected stimuli, depending on whether these stimuli had been attended or unattended. If attention promotes PEs, then this should increase the disparity of neural patterns associated with expected and unexpected stimuli, thus enhancing the classifier's ability to distinguish between the two. In contrast, if attention suppresses PEs, then this should reduce the disparity between neural signals for expected and unexpected percepts, thus impairing classifier performance. We demonstrate that attention greatly enhances a neural pattern classifier's ability to discriminate between expected and unexpected stimuli in a region- and stimulus category-specific fashion. These findings are incompatible with the PE-suppression model, but they strongly support the PE-promotion model, whereby attention increases the precision of prediction errors. Our results clarify the relationship between attention and expectation, casting attention as a mechanism for accelerating online error correction in predicting task-relevant visual inputs.
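
    The decoding logic can be sketched with synthetic data (a hypothetical illustration using scikit-learn, not the study's analysis pipeline): if attention amplifies prediction-error signals, a classifier separating expected from unexpected trials should perform better on attended data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def voxel_patterns(n_trials=200, n_voxels=50, pe_gain=1.0):
    """Synthetic voxel patterns: unexpected trials add a fixed
    prediction-error topography whose amplitude is scaled by pe_gain
    (standing in for attentional gain on prediction errors)."""
    labels = rng.integers(0, 2, n_trials)       # 0 expected, 1 unexpected
    pe_topography = rng.normal(0.0, 1.0, n_voxels)
    X = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    X += np.outer(labels * pe_gain, pe_topography) * 0.2
    return X, labels

# Under the PE-promotion model, attended stimuli carry amplified
# prediction errors, so expected-vs-unexpected decoding improves:
for condition, gain in (("unattended", 0.5), ("attended", 2.0)):
    X, y = voxel_patterns(pe_gain=gain)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(condition, round(float(acc), 3))
```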
