85 research outputs found
Physiological arousal underlies preferential access to visual awareness of fear-conditioned (and possibly disgust-conditioned) stimuli
Fear and disgust have been associated with opposite influences on visual processing, even though both constitute negative emotions that motivate avoidance behavior and entail increased arousal. In the current study, we hypothesized that (a) homeostatic relevance modulates early stages of visual processing, (b) through widespread physiological responses, and that (c) the direction of these modulations depends on whether an emotion calls for immediate regulatory behavior or not. Specifically, we expected that increased arousal should facilitate the detection of fear-related stimuli and inhibit the detection of disgust-related stimuli. These hypotheses were tested in two preregistered experiments (data collected in 2022, total N = 120, ethnically homogeneous Polish sample). Using a novel, response-bias-free version of the breaking continuous flash suppression paradigm, we examined localization and discrimination of fear- and disgust-conditioned stimuli at individually determined perceptual thresholds. Our first hypothesis was confirmed: fear-conditioned stimuli were detected and discriminated better than neutral stimuli, and the magnitude of conditioning-related perceptual preference was related to arousal during conditioning acquisition. In contrast to our second hypothesis, perceptual access to disgust-conditioned stimuli was not diminished. Exploratory analyses suggest that discrimination of disgust-conditioned stimuli was also enhanced, although these effects appeared weaker than those evoked by fear conditioning. The current study strengthens previous evidence for facilitated perception of threatening objects and shows for the first time that stimuli evoking disgust might also gain preferential access to awareness. The results imply that homeostatically relevant stimuli are prioritized by the visual system and that this preference is grounded in the underlying arousal levels.
Scene context automatically drives predictions of object transformations
As our viewpoint changes, the whole scene around us rotates coherently. This allows us to predict how one part of a scene (e.g., an object) will change by observing other parts (e.g., the scene background). While human object perception is known to be strongly context-dependent, previous research has largely focused on how scene context can disambiguate fixed object properties, such as identity (e.g., a car is easier to recognize on a road than on a beach). It remains an open question whether object representations are updated dynamically based on the surrounding scene context, for example across changes in viewpoint. Here, we tested whether human observers dynamically and automatically predict the appearance of objects based on the orientation of the background scene. In three behavioral experiments (N = 152), we temporarily occluded objects within scenes that rotated. Upon the objects' reappearance, participants had to perform a perceptual discrimination task, which did not require taking the scene rotation into account. Performance on this orthogonal task strongly depended on whether objects reappeared rotated coherently with the surrounding scene or not. This effect persisted even when a majority of trials violated this real-world contingency between scene and object, showcasing the automaticity of these scene-based predictions. These findings indicate that contextual information plays an important role in predicting object transformations in structured real-world environments.
Memory reports are biased by all relevant contents of working memory
Sensory input is inherently noisy while the world is inherently predictable. When multiple observations of the same object are available, integration of the available information necessarily increases the reliability of a world estimate. Optimal integration of multiple instances of sensory evidence has already been demonstrated during multisensory perception but could benefit unimodal perception as well. In the present study, 330 participants observed a sequence of four orientations and were cued to report one of them. Reports were biased by all simultaneously memorized items that were similar and relevant to the target item, weighted by their reliability (signal-to-noise ratio). Orientations presented both before and after the target biased reports, demonstrating that the bias emerges in memory and not (exclusively) during perception or encoding. Only attended, task-relevant items biased reports. We suggest that these results reflect how the visual system integrates information that is sampled from the same object at consecutive timepoints to promote perceptual stability and behavioural effectiveness in a dynamic world. We further suggest that similar response biases, such as serial dependence, might be instances of a more general mechanism of working memory averaging. Data are available at https://osf.io/embcf/
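The reliability-weighted integration this abstract describes follows the standard precision-weighted averaging scheme from cue-combination models. A minimal sketch (not the authors' analysis code), assuming Gaussian noise where each item's reliability is its inverse variance:

```python
import numpy as np

def precision_weighted_average(estimates, sds):
    """Combine noisy estimates of one quantity, weighting each by its
    precision (1 / variance); more reliable samples pull the average harder."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(sds, dtype=float) ** 2
    weights = precisions / precisions.sum()
    combined = float(np.sum(weights * estimates))
    # The combined estimate is more precise than any single observation:
    combined_sd = float(np.sqrt(1.0 / precisions.sum()))
    return combined, combined_sd

# Two orientation samples (degrees): a precise one and a noisy one.
# The result lands close to the precise sample (40), not the midpoint (45).
est, sd = precision_weighted_average([40.0, 50.0], [2.0, 4.0])
```

Note that orientation is a circular variable; an actual analysis would use circular statistics, but the weighting logic is the same.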
Searching near and far: the attentional template incorporates viewing distance
According to theories of visual search, observers generate a visual representation of the search target (the "attentional template") that guides spatial attention towards target-like visual input. In real-world vision, however, objects produce vastly different visual input depending on their location: your car produces a retinal image that is ten times smaller when it's parked fifty compared to five meters away. Across four experiments, we investigated whether the attentional template incorporates viewing distance when observers search for familiar object categories. On each trial, participants were pre-cued to search for a car or person in the near or far plane of an outdoor scene. In "search trials", the scene reappeared and participants had to indicate whether the search target was present or absent. In intermixed "catch trials", two silhouettes were briefly presented on either side of fixation (matching the shape and/or predicted size of the search target), one of which was followed by a probe stimulus. We found that participants were more accurate at reporting the location (Exps. 1 and 2) and orientation (Exp. 3) of probe stimuli when they were presented at the location of size-matching silhouettes. Thus, attentional templates incorporate the predicted size of an object based on the current viewing distance. This was only the case, however, when silhouettes also matched the shape of the search target (Exp. 2). We conclude that attentional templates for finding objects in scenes are shaped by a combination of category-specific attributes (shape) and context-dependent expectations about the likely appearance (size) of these objects at the current viewing location.
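The roughly tenfold size difference mentioned above follows directly from the geometry of visual angle. A small illustrative sketch with assumed values (not stimuli from the study):

```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Visual angle (degrees) subtended by an object at a viewing distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

near = visual_angle_deg(1.8, 5.0)    # a ~1.8 m tall person, 5 m away
far = visual_angle_deg(1.8, 50.0)    # the same person, 50 m away
# In the small-angle regime, retinal size scales as 1/distance,
# so a tenfold distance increase shrinks the image about tenfold.
ratio = near / far
```

The ratio is slightly below 10 because the tangent is not perfectly linear at near distances; for distant objects the 1/distance approximation is essentially exact.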
A nasal visual field advantage in interocular competition
When our eyes are confronted with discrepant images (yielding incompatible retinal inputs), interocular competition (IOC) is instigated. During IOC, one image temporarily dominates perception, while the other is suppressed. Many factors affecting IOC have been extensively examined. One factor that has received surprisingly little attention, however, is the stimulus' visual hemifield (VHF) of origin. This is remarkable, as the VHF location of stimuli is known to affect visual performance in various contexts. Prompted by exploratory analyses, we examined five independent datasets of breaking continuous flash suppression experiments to establish the VHF's role in IOC. We found that targets presented in nasal VHF locations broke through suppression much faster than targets in temporal VHF locations. Furthermore, we found that the magnitude of this nasal advantage depended on how strongly the targets were suppressed: the nasal advantage was larger for the recessive eye than for the dominant eye, and was larger in observers with a greater dominance imbalance between the eyes. Our findings suggest that the nasal advantage reported here originates in processing stages where IOC is resolved. Finally, we propose that a nasal advantage in IOC serves an adaptive role in human vision, as it can aid perception of partially occluded objects.
Statistical Learning Facilitates Access to Awareness
Statistical learning (SL) allows us to quickly extract regularities from sensory inputs. Although many studies have established that SL serves a wide range of cognitive functions, it remains unknown whether SL impacts conscious access. We addressed this question, seeking converging evidence from multiple paradigms across four experiments (total N = 153): two reaction-time-based breaking continuous flash suppression (b-CFS) experiments showed that objects at probable locations and with probable features are released from suppression faster than improbable objects. In a visual masking experiment, we observed higher sensitivity to probable (versus improbable) objects, independent of conscious access to the stimulus dimension carrying the regularities. Finally, a pre-registered accuracy-based b-CFS experiment showed higher localization accuracy for interocularly suppressed probable (versus improbable) objects given identical presentation durations, thereby excluding processing differences emerging after conscious access (e.g., criterion shifts). Together, these findings demonstrate that SL prioritizes conscious access to probable over improbable visual input.
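The "higher sensitivity" reported in the masking experiment, distinguished from criterion shifts, points to signal-detection analysis. A generic sketch of the standard sensitivity measure d' (not the authors' analysis code; hit and false-alarm rates below are made up for illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms).
    Separates true detection ability from response bias (criterion)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: identical false-alarm rates (same criterion),
# but a higher hit rate, i.e. genuinely better detection, for probable objects.
probable = d_prime(0.80, 0.20)
improbable = d_prime(0.65, 0.20)
```

Because a pure criterion shift moves hits and false alarms together, d' stays constant under such shifts, which is why it isolates perceptual sensitivity.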
Mountains of memory in a sea of uncertainty: Sampling the external world despite useful information in visual working memory
A large part of research on visual working memory (VWM) has traditionally focused on estimating its maximum capacity. Yet, humans rarely need to load up their VWM maximally during natural behavior, since visual information often remains accessible in the external world. Recent work, using paradigms that take into account the accessibility of information in the outside world, has indeed shown that observers utilize only one or two items in VWM before sampling from the external world again. One straightforward interpretation of this finding is that, in daily behavior, far fewer items are memorized than the typically reported capacity limits. Here, we first investigate whether this lower reliance on VWM when information is externally accessible might instead reflect resampling before VWM is actually depleted. To this end, we devised an online task in which participants copied a model (six items in a 4x4 grid; always accessible) in an adjacent empty 4x4 grid. A key aspect of our paradigm is that we (unpredictably) interrupted participants just before inspection of the model with a two-alternative forced-choice (2-AFC) question probing their VWM content. Critically, we observed above-chance performance on probes appearing just before model inspection. This finding shows that the external world was resampled despite VWM still containing relevant information. We then asked whether increasing the cost of sampling causes participants to load up more information in VWM or, alternatively, to squeeze out more information from VWM (at the cost of making more errors). To manipulate the cost of resampling, we made it more difficult (specifically, more time-consuming) to access the model. We show that with increased cost of accessing the model (which led to fewer, but longer, model inspections), participants could place more items correctly immediately after sampling, and they kept attempting to place items for longer after their first error.
These findings demonstrate that participants both encoded more information in VWM and made attempts to squeeze out more information from VWM when sampling became more costly. We argue that human observers constantly evaluate how certain they are of their VWM contents, and only use that VWM content of which their certainty exceeds a context-dependent "action threshold". This threshold, in turn, depends on the trade-off between the cost of resampling and the benefits of making an action. We argue that considering the interplay between the available VWM contents and a context-dependent action threshold is key for reconciling the traditional VWM literature with VWM use in our day-to-day behavior.
A matter of availability: sharper tuning for memorized than for perceived stimulus features
Our visual environment is relatively stable over time. An optimized visual system could capitalize on this by devoting less representational resources to objects that are physically present. The vividness of subjective experience, however, suggests that externally available (perceived) information is more strongly represented in neural signals than memorized information. To distinguish between these opposing predictions, we use EEG multivariate pattern analysis to quantify the representational strength of task-relevant features in anticipation of a change-detection task. Perceptual availability was manipulated between experimental blocks by either keeping the stimulus available on the screen during a 2-s delay period (perception) or removing it shortly after its initial presentation (memory). We find that task-relevant (attended) memorized features are more strongly represented than irrelevant (unattended) features. More importantly, we find that task-relevant features evoke significantly weaker representations when they are perceptually available compared with when they are unavailable. These findings demonstrate that, contrary to what subjective experience suggests, vividly perceived stimuli elicit weaker neural representations (in terms of detectable multivariate information) than the same stimuli maintained in visual working memory. We hypothesize that an efficient visual system spends little of its limited resources on the internal representation of information that is externally available anyway.
Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities
Drawing on our experiences conducting replications, we describe the lessons we learned about replication studies and formulate recommendations for researchers, policy makers, and funders about the role of replication in science and how it should be supported and funded. We first identify a variety of benefits of doing replication studies. Second, we argue that it is often necessary to improve aspects of the original study, even if that means deviating from the original protocol. Third, we argue that replication studies highlight the importance of and need for more transparency of the research process, but also make clear how difficult that is. Fourth, we underline that it is worth trying out replication in the humanities. We finish by formulating recommendations regarding reproduction and replication research, aimed specifically at funders, editors and publishers, and universities and other research institutes.
Correction: Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context
[This corrects the article DOI: 10.1371/journal.pone.0260952.]
- …