30 research outputs found

    Judgments of effort exerted by others are influenced by received rewards

    Estimating invested effort is a core dimension for evaluating one's own and others' actions, and views on the relationship between effort and reward are deeply ingrained in various societal attitudes. Internal representations of effort, however, are inherently noisy, e.g. due to the variability of sensorimotor and visceral responses to physical exertion. The uncertainty in effort judgments is further aggravated when there is no direct access to the internal representations of exertion, such as when estimating the effort of another person. Bayesian cue integration suggests that this uncertainty can be resolved by incorporating additional cues that are predictive of effort, e.g. received rewards. We hypothesized that judgments about the effort spent on a task would be influenced by the magnitude of received rewards. Additionally, we surmised that such influence might further depend on individual beliefs regarding the relationship between hard work and prosperity, as exemplified by a conservative work ethic. To test these predictions, participants performed an effortful task interleaved with a partner and were informed about the obtained reward before rating either their own or the partner's effort. We show that higher rewards led to higher estimations of exerted effort in self-judgments, and this effect was even more pronounced for other-judgments. In both types of judgment, computational modelling revealed that reward information and sensorimotor markers of exertion were combined in a Bayes-optimal manner to reduce uncertainty. Remarkably, the extent to which rewards influenced effort judgments was associated with conservative worldviews, indicating links between this phenomenon and general beliefs about the relationship between effort and earnings in society.
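
    To make the Bayes-optimal combination concrete, the following is a minimal sketch of precision-weighted fusion of two Gaussian cues, the standard form such models take; the function name and the numbers are illustrative assumptions, not values from the paper.

        def combine_cues(mu_sens, var_sens, mu_reward, var_reward):
            """Bayes-optimal (precision-weighted) fusion of two Gaussian cues.
            mu_sens/var_sens: noisy sensorimotor estimate of exertion.
            mu_reward/var_reward: effort level predicted from the reward cue.
            (Hypothetical names; illustrates the modelling approach only.)
            """
            precision_sens = 1.0 / var_sens
            precision_reward = 1.0 / var_reward
            w_sens = precision_sens / (precision_sens + precision_reward)
            mu_post = w_sens * mu_sens + (1.0 - w_sens) * mu_reward
            var_post = 1.0 / (precision_sens + precision_reward)
            return mu_post, var_post

        # Other-judgments lack direct access to exertion (larger var_sens),
        # so the reward cue pulls the estimate more strongly:
        print(combine_cues(mu_sens=50, var_sens=100, mu_reward=70, var_reward=25))
        # -> (66.0, 20.0): the estimate shifts toward the reward cue, with reduced variance

    Note that the combined variance is always smaller than either cue's variance, which is why integrating the reward cue reduces the uncertainty of the effort judgment.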

    Supplementary Material

    The PDF file below contains the Supplementary Text and Figures for the manuscript entitled "Differential effects of intra-modal and cross-modal reward value on perception: ERP evidence", by Vakhrushev et al.

    Interaction of spatial attention and the associated reward value of audiovisual stimuli

    Reward value and selective attention both enhance the representation of sensory stimuli at the earliest stages of processing. It is still debated whether and how reward-driven and attentional mechanisms interact to influence perception. Here we ask whether the interaction between reward value and selective attention depends on the sensory modality through which the reward information is conveyed. Human participants first learned the reward value of uni-modal visual and auditory stimuli during a conditioning phase. Subsequently, they performed a target detection task on bimodal stimuli containing a previously rewarded stimulus in one, both, or neither of the modalities. Additionally, participants were required to focus their attention on one side and report targets only on the attended side. Our results showed a strong modulation of visual and auditory event-related potentials (ERPs) by spatial attention. We found no main effect of reward value but, importantly, an interaction: the strength of the attentional modulation of the ERPs was significantly affected by reward value. When reward effects were inspected separately for each modality, auditory value-driven modulation of attention dominated the ERP effects, whereas visual reward value on its own led to no effect, likely due to its interference with target processing. These results inspire a two-stage model in which the salience of a highly rewarded stimulus is first enhanced on a local priority map specific to each sensory modality, and at a second stage reward value and top-down attentional mechanisms are integrated across sensory modalities to affect perception.

    Learned value modulates the access to visual awareness during continuous flash suppression

    Monetary value enhances visual perception and attention and boosts activity in the primary visual cortex; however, it is still unclear whether monetary value can modulate conscious access to rewarding stimuli. Here we investigate this issue by employing a breaking continuous flash suppression (b-CFS) paradigm. We measured suppression durations of sinusoidal gratings with orthogonal orientations under CFS in adult volunteers before and after a short session of Pavlovian associative learning in which each orientation was arbitrarily associated with either high or low monetary reward. We found that monetary value accelerated access to visual awareness during CFS. Specifically, after the associative learning, suppression durations of the visual stimulus associated with high monetary value were shorter than those of the visual stimulus associated with low monetary value. Critically, the effect was replicated in a second experiment using a detection task for b-CFS that was orthogonal to the reward associative learning. These results indicate that monetary reward facilitates access to awareness of the associated visual stimuli, probably by boosting their representation at early stages of visual processing in the brain.

    The effect of monetary reward on visual awareness


    Modulation of perception by visual, auditory, and audiovisual reward predicting cues

    Rewards influence information processing in primary sensory areas specialized to process stimuli from a specific sensory modality. In real-life situations, however, we receive sensory inputs not just from a single modality; stimuli are often multisensory. It is not known whether the reward-driven modulation of perception follows the same principles when reward is cued through a single or through multiple sensory modalities. We previously showed that task-irrelevant reward cues modulate perception both intra- and cross-modally, likely through a putative enhancement of the integration of stimulus parts into a coherent object. In this study, we explicitly test this possibility by assessing whether reward enhances the integration of the unisensory components of a multisensory object in accordance with the supra-additive principle of multisensory integration. To this aim, we designed a simple detection task using reward-predicting cues that were either unisensory (auditory or visual, both above the detection threshold) or multisensory (audiovisual). We conducted two experiments: behavioral (experiment 1) and simultaneous behavioral and neuroimaging testing (experiment 2). We expected that reward would speed up reaction times in response to all stimulus configurations and, additionally, that the reward effects for multisensory cues would fulfill the supra-additive principle of multisensory integration. We observed that reward decreased response times in both experiments, with the strongest effect found for the multisensory stimuli in experiment 1. However, this behavioral effect did not fulfill the supra-additive principle. Neuroimaging results demonstrated sensory supra-additivity in the classical areas involved in multisensory integration, such as the superior temporal areas (STS), while reward modulation was found in the midbrain and fronto-parietal areas, reflecting the typical areas that receive dopaminergic projections. However, reward did not enhance supra-additivity in the STS compared to a no-reward condition. Instead, we observed that some of the reward-related areas showed a sub-additive modulation by rewards, and that areas exhibiting a weaker supra-additive response to audiovisual stimuli, namely the fusiform gyrus, were modulated by rewards of audiovisual stimuli, as revealed by a conjunction analysis. Overall, our results indicate that reward does not enhance multisensory integration through a supra-additive rule. These findings inspire a model in which reward and sensory integration are regulated by two independent mechanisms: sensory information is integrated at an early stage in a supra-additive manner, while reward modulates perception at a later stage sub-additively. Moreover, an associative area in the fusiform gyrus exhibits a convergence of both reward and multisensory integration signals, indicating that it may be a hub that integrates different types of signals, including rewards, to disambiguate the information from different sensory modalities.
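
    For concreteness, the supra-additive criterion tested here (the response to an audiovisual stimulus exceeds the sum of the unisensory responses, AV > A + V) can be sketched as follows; the beta values are hypothetical, invented to illustrate the sign convention.

        def supra_additivity(beta_av, beta_a, beta_v):
            """Supra-additivity criterion for multisensory integration:
            positive values mean the audiovisual response exceeds the sum
            of the unisensory responses (AV > A + V); negative values
            indicate a sub-additive response."""
            return beta_av - (beta_a + beta_v)

        # Hypothetical GLM beta estimates for two regions:
        print(supra_additivity(beta_av=1.8, beta_a=0.7, beta_v=0.9))  #  0.2 > 0: supra-additive
        print(supra_additivity(beta_av=1.2, beta_a=0.7, beta_v=0.9))  # -0.4 < 0: sub-additive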

    Pre-saccadic attention spreads to stimuli forming a perceptual group with the saccade target

    The pre-saccadic attention shift (a rapid increase in visual sensitivity at the target) is an inevitable precursor of saccadic eye movements. Saccade targets are often parts of objects that are of interest to the active observer. Although the link between saccades and covert attention shifts is well established, it remains unclear whether pre-saccadic attention selects only the location of the eye-movement target or rather the entire object that occupies this location. Indeed, several neurophysiological studies suggest that attentional modulation of neural activity in visual cortex spreads across parts of objects (e.g., elements grouped by Gestalt principles) that contain the target location of a saccade. To understand the nature of pre-saccadic attentional selection, we examined how visual sensitivity, measured in a challenging orientation discrimination task, changes during saccade preparation at locations that are perceptually grouped with the saccade target. In Experiment 1, using grouping by color in a delayed-saccade task, we found no consistent spread of attention to locations that formed a perceptual group with the saccade target. However, performance depended on the side of the stimulus arrangement relative to the saccade target location, an effect we discuss with respect to attentional momentum. In Experiment 2, employing stronger perceptual grouping cues (color and motion) and an immediate-saccade task, we obtained a reliable grouping effect: attention spread to locations that were perceptually grouped with the saccade target while saccade preparation was underway. We also replicated the side effect observed in Experiment 1. These results provide evidence that pre-saccadic attention spreads beyond the target location along the saccade direction and selects scene elements that, based on Gestalt criteria, are likely to belong to the same object as the saccade target.

    The role of temporal and spatial attention in size adaptation

    One of the most important tasks for the visual system is to construct an internal representation of the spatial properties of objects, including their size. Size perception combines bottom-up (retinal inputs) and top-down (e.g., expectations) information, which makes estimates of object size malleable and susceptible to numerous contextual cues. For example, size perception is prone to adaptation: brief prior presentations of larger or smaller adapting stimuli at the same region of space change the perceived size of a subsequent test stimulus. Large adapting stimuli cause the test to appear smaller than its veridical size, and vice versa. Here, we investigated whether size adaptation is susceptible to attentional modulation. First, we measured the magnitude of adaptation aftereffects in a size discrimination task. Then, we compared these aftereffects (on average 15–20%) with those measured while participants were engaged, during the adaptation phase, in one of two highly demanding central visual tasks: Multiple Object Tracking (MOT) or Rapid Serial Visual Presentation (RSVP). Our results indicate that deploying visual attention away from the adapters did not significantly affect the distortions of perceived size induced by adaptation, with accuracy and precision in the discrimination task being almost identical in all experimental conditions. Taken together, these results suggest that visual attention does not play a key role in size adaptation, in line with the idea that this phenomenon can be accounted for by local gain-control mechanisms within area V1.
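
    As an illustration of how such aftereffects are commonly quantified in size discrimination tasks, the sketch below fits a cumulative-Gaussian psychometric function and expresses the aftereffect as the shift of the point of subjective equality (PSE) relative to the reference size. The data points and the resulting shift are invented for illustration only (the study reports 15–20% on average); the exact fitting procedure used in the paper may differ.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(test_size, pse, sigma):
            # Probability of judging the test as larger than the reference.
            return norm.cdf(test_size, loc=pse, scale=sigma)

        def fit_pse(test_sizes, p_larger):
            popt, _ = curve_fit(psychometric, test_sizes, p_larger,
                                p0=[np.mean(test_sizes), 1.0])
            return popt[0]  # point of subjective equality

        sizes = np.array([8.0, 9.0, 10.0, 11.0, 12.0])        # test sizes (deg), hypothetical
        p_baseline = np.array([0.05, 0.20, 0.50, 0.80, 0.95])  # no adapter
        p_adapted = np.array([0.01, 0.06, 0.22, 0.55, 0.85])   # after a large adapter,
                                                               # the test looks smaller

        reference = 10.0  # reference size (deg), hypothetical
        shift = fit_pse(sizes, p_adapted) - fit_pse(sizes, p_baseline)
        print(f"aftereffect: {100 * shift / reference:.1f}% of reference size")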

    Automatic pre-saccadic selection of stimuli perceptually grouped with saccade targets


    Multineuron representations of visual attention

    Recently, techniques have become available that allow simultaneous recordings from multiple neurons in awake, behaving higher primates. These recordings can be analyzed with multivariate statistical methods, such as Fisher's linear discriminant or support vector machines, to determine how much information is represented in the activity of a population of neurons. We have applied these techniques to recordings from groups of neurons in primary visual cortex (area V1). Neurons in this area are not only tuned to basic stimulus features but also reflect whether image elements are attended or not. These attentional signals are weaker than the feature-selective responses, and one might suspect that the reliability of attentional signals in area V1 is limited by the noisiness of neuronal responses as well as by the tuning of the neurons to low-level features. Our surprising finding is that the locus of attention can be decoded on a single trial from the activity of a small population of neurons in area V1. One critical factor that determines how well information from multiple neurons can be combined is the correlation of response variability, or noise correlation, across neurons. It has been suggested that correlations between the activities of neurons in a population limit the information gain. We find that the correlations indeed reduce the benefit of pooling neuronal responses evoked by the same object, but they also enhance the advantage of pooling responses evoked by different objects. At the population level these opposing effects cancel each other, so that the net effect of the noise correlations is negligible and attention can be decoded reliably. We next investigated whether it is possible to decode attention when large variations in luminance contrast are introduced, because luminance contrast has a strong effect on the activity of V1 neurons and may therefore disrupt the coding of attention. However, we find that some neurons in area V1 are modulated strongly by attention and others only by luminance contrast, so that attention and contrast are represented by separable codes. These results demonstrate the advantages of multineuron representations of visual attention.
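
    A minimal simulation of this kind of decoding analysis is sketched below: single-trial classification of the locus of attention from a population with correlated variability, using Fisher's linear discriminant (here via scikit-learn's LDA). The population size, modulation strength, and noise-correlation structure are invented for illustration and are not the recorded data.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_neurons = 200, 20

        # Mean responses under attention to object A vs object B; the
        # attentional modulation is weak relative to the trial-to-trial noise.
        mu_a = rng.uniform(5.0, 15.0, n_neurons)
        mu_b = mu_a + rng.normal(0.0, 1.0, n_neurons)

        def simulate(mu, noise_corr=0.3, noise_sd=2.0):
            # A shared noise source induces positive noise correlations
            # across the population; the rest of the noise is private.
            shared = rng.normal(0.0, 1.0, (n_trials, 1))
            private = rng.normal(0.0, 1.0, (n_trials, n_neurons))
            noise = np.sqrt(noise_corr) * shared + np.sqrt(1.0 - noise_corr) * private
            return mu + noise_sd * noise

        X = np.vstack([simulate(mu_a), simulate(mu_b)])  # trials x neurons
        y = np.repeat([0, 1], n_trials)                  # attended object per trial

        # Cross-validated single-trial decoding of the locus of attention:
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        print(f"decoding accuracy: {acc:.2f}")

    Varying the noise_corr parameter in this toy model is one way to probe how correlated variability affects the decoder, echoing the pooling argument made in the abstract.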