
    Spatial remapping of tactile events: Assessing the effects of frequent posture changes

    During the apparently mindless act of localizing a tactile sensation, our brain must realign its initial spatial representation on the skin (somatotopically arranged) according to current body posture (arising from proprioception, vision and even audition). We have recently illustrated [4] the temporal course of this recoding of tactile space from somatotopic to external coordinates using a crossmodal cueing psychophysical paradigm [5,6] in which behavioural reactions to visual targets are evaluated as a function of the location of irrelevant tactile cues. We found that tactile events are initially represented in a fleeting, non-conscious, but nevertheless behaviourally consequential somatotopic format, which is quickly replaced by representations referred to the external spatial locations that prevail in our everyday experience. In this addendum, we test the intuition that frequent changes in body posture make it harder to update the spatial remapping system and thus produce stronger psychophysical correlates of the initial somatotopically based spatial representations. Contrary to this expectation, however, we found no evidence for such a modulation when preventing adaptation to a body posture.

    Not so fast: orienting to crossmodal semantic congruence depends on task relevance and perceptual load

    Research shows that crossmodal semantic congruency plays a role in the orienting of spatial attention and visual search. However, the extent to which crossmodal semantic relationships summon attention automatically or require top-down modulation is still not entirely clear. To date, researchers have used varied methodologies and their outcomes have been inconsistent. Variations in the task relevance of the crossmodal stimuli (from explicitly needed to entirely task-irrelevant), the amount of perceptual load, and the response modality may account for the mixed results of previous experiments. In the present study, we address the effects of audiovisual semantic congruence on spatial attention across variations in task relevance and perceptual load. Participants had to search for visual target images amongst distractor images of common objects, paired with sounds that were characteristic of those objects (e.g., a guitar image and a chord sound). Under conditions of relatively low perceptual load, crossmodal semantic congruence sped up visual search times regardless of the task relevance of the crossmodal congruence. However, when perceptual load was higher, audiovisual semantic congruence expedited visual search latencies only when the audiovisual object was task-relevant. These results support the conclusion that semantically based crossmodal congruence does not attract attention fully automatically, but instead draws on top-down processes.

    Vision affects how fast we hear sounds move

    There is a growing body of knowledge about the behavioral and neural correlates of cross-modal interactions in the perception of motion direction, as well as about the computations that underlie unimodal visual speed processing. Yet the multisensory contributions to the perception of motion speed remain largely uncharted. Here we show that visual motion information exerts a profound influence on the perception of auditory speed. Moreover, our results suggest that this influence is specifically caused by visual velocity rather than by earlier, more local, frequency-based components of visual motion. The way in which visual speed information affects how fast we hear a sound move can be well described by a weighted-average model that takes the visual speed signal into account in the computation of auditory speed.
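
    A minimal numerical sketch of such a weighted-average account (not the authors' fitted model; the speeds and weights below are purely illustrative):

        def combined_speed_estimate(auditory_speed, visual_speed, w_visual):
            """Weighted average of the auditory and visual speed signals.

            w_visual is the relative weight of the visual cue
            (0 = purely auditory estimate, 1 = fully captured by vision).
            """
            return (1.0 - w_visual) * auditory_speed + w_visual * visual_speed

        # Illustrative values: a 10 deg/s sound paired with 20 deg/s visual motion.
        for w in (0.0, 0.3, 0.6):
            est = combined_speed_estimate(10.0, 20.0, w)
            print(f"visual weight {w:.1f} -> perceived auditory speed {est:.1f} deg/s")

    As the visual weight grows, the perceived auditory speed is pulled toward the visual speed, which is the signature the study reports.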

    Cinematographic continuity edits across shot scales and camera angles: An ERP analysis

    Film editing has attracted great theoretical and practical interest since the beginnings of cinematography. In recent times, the neural correlates of visual transitions at edit cuts have been a focus of attention in neurocinematics. Many event-related potential (ERP) studies have reported the consequences of cuts involving narrative discontinuities and violations of standard montage rules. However, less is known about edits that are meant to induce continuity. Here, we addressed the neural correlates of continuity editing involving scale and angle variations across the cut within the same scene, two of the most popular devices used for continuity editing. We recorded the electroencephalographic signal from 20 viewers as they watched four different cinematographic excerpts, and extracted ERPs at the edit points. First, we were able to reproduce the general time course and scalp distribution of the typical ERPs to filmic cuts reported in prior studies. Second, we found significant ERP modulations triggered by scale changes (scaling out, scaling in, or maintaining the same scale). Edits involving an increase in scale (scale out) amplified the ERP deflection, and scale reductions (scale in) decreased it, compared to edits that kept scale constant across the cut. These modulations coincide with the time window of the N300 and N400 components, whose amplitude has, in previous findings, been associated with the likelihood of consciously detecting the edit. Third, we did not detect comparable modulations as a function of angle variations across the cut. Based on these findings, we suggest that cuts involving a reduction in scale are more likely to go unnoticed than ones that scale out. This relationship between scaling in/out and edit visibility is documented in film editing manuals: specifically, to achieve fluidity within a scene, the editing is designed to progress from the widest shots to the tightest ones.
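
    As a rough sketch of this kind of analysis, the epoching and averaging step could be done with the MNE-Python library as below; the file name, trigger channel, and event codes are placeholders, not the study's actual pipeline:

        import mne

        # Load a preprocessed EEG recording (placeholder file name).
        raw = mne.io.read_raw_fif("viewer_eeg_raw.fif", preload=True)

        # Hypothetical triggers: one event per edit point, coded by cut type.
        events = mne.find_events(raw, stim_channel="STI 014")
        event_id = {"scale_out": 1, "scale_in": 2, "same_scale": 3}

        # Epoch around each cut and baseline-correct against the pre-cut interval.
        epochs = mne.Epochs(raw, events, event_id=event_id,
                            tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0),
                            preload=True)

        # Per-condition averages give the ERPs whose deflections in the
        # N300/N400 range are compared across cut types.
        evokeds = {cond: epochs[cond].average() for cond in event_id}
        mne.viz.plot_compare_evokeds(evokeds, picks="Cz")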

    Long-Range α-Synchronization as Control Signal for BCI: A Feasibility Study

    First published February 7, 2023. Shifts in spatial attention are associated with variations in α-band (8–14 Hz) activity, specifically in interhemispheric imbalance. The underlying mechanism is attributed to local α-synchronization, which regulates local inhibition of neural excitability, and to frontoparietal synchronization reflecting long-range communication. The direction-specific nature of this neural correlate brings forward its potential as a control signal in brain-computer interfaces (BCIs). In the present study, we explored whether long-range α-synchronization presents lateralized patterns dependent on voluntary attention orienting, and whether these neural patterns can be picked up at the single-trial level to provide a control signal for active BCI. We collected electroencephalography (EEG) data from a cohort of healthy adults (n = 10) performing a covert visuospatial attention (CVSA) task. The data show a lateralized pattern of α-band phase coupling between frontal and parieto-occipital regions after target presentation, replicating previous findings. This pattern, however, was not evident during the cue-to-target orienting interval, the ideal time window for BCI. Furthermore, decoding the direction of attention trial-by-trial from cue-locked synchronization with support vector machines (SVMs) was at chance level. The present findings suggest that EEG may not be capable of detecting long-range α-synchronization in attentional orienting on a single-trial basis and thus highlight the limitations of this metric as a reliable signal for BCI control. This research was supported by the Agència de Gestió d’Ajuts Universitaris i de Recerca, Generalitat de Catalunya, Grant 2017 SGR 1545. This project has been co-funded at 50% by the European Regional Development Fund under the framework of the FEDER Operative Programme for Catalunya 2014-2020, Ministerio de Ciencia e Innovación (Ref: PID2019-108531GB-I00 AEI/FEDER).
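
    A condensed sketch of the trial-wise pipeline described above: estimate α-band phase coupling between a frontal and a parieto-occipital channel within each trial (via the phase-locking value) and feed it to an SVM. The sampling rate, array shapes, and channel assignment are assumptions for illustration; real data would replace the random placeholders:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        FS = 500  # sampling rate in Hz (assumed)

        def alpha_phase(x):
            """Band-pass to the alpha band (8-14 Hz) and return the instantaneous phase."""
            b, a = butter(4, [8, 14], btype="bandpass", fs=FS)
            return np.angle(hilbert(filtfilt(b, a, x)))

        def trial_plv(frontal, parietal):
            """Phase-locking value between two channels within one trial."""
            dphi = alpha_phase(frontal) - alpha_phase(parietal)
            return np.abs(np.mean(np.exp(1j * dphi)))

        # Assumed layout: trials x channels x samples, with cue labels
        # (0 = attend left, 1 = attend right).
        eeg = np.random.randn(200, 2, FS)      # placeholder: 1 s per trial
        labels = np.random.randint(0, 2, 200)  # placeholder attention labels

        # One cue-locked PLV feature per trial.
        features = np.array([[trial_plv(tr[0], tr[1])] for tr in eeg])

        # Cross-validated accuracy near 0.5 would mirror the chance-level
        # single-trial decoding reported above.
        scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
        print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")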

    Conflict monitoring and attentional adjustment during binocular rivalry

    First published: 06 December 2021. To make sense of ambiguous and, at times, fragmentary sensory input, the brain must rely on a process of active interpretation. At any given moment, only one of several possible perceptual representations prevails in our conscious experience. Our hypothesis is that the competition between alternative representations induces a pattern of neural activation resembling cognitive conflict, eventually leading to fluctuations between different perceptual outcomes when the competition is steep. To test this hypothesis, we probed changes in perceptual awareness between competing images using binocular rivalry. We drew our predictions from the conflict monitoring theory, which holds that cognitive control is invoked by the detection of conflict during information processing. Our results show that fronto-medial theta oscillations (5–7 Hz), an established electroencephalography (EEG) marker of conflict, increase right before perceptual alternations and decrease thereafter, suggesting that conflict monitoring occurs during perceptual competition. Furthermore, to investigate conflict resolution via attentional engagement, we looked for a neural marker of perceptual switches as indexed by parieto-occipital alpha oscillations (8–12 Hz). The power of parieto-occipital alpha displayed a pattern inverse to that of fronto-medial theta, reflecting periods of high interocular inhibition during stable perception and low inhibition around moments of perceptual change. Our findings help elucidate the relationship between conflict monitoring mechanisms and perceptual awareness. H2020 Marie Skłodowska-Curie Actions, Grant/Award Number: 794649; Universitat Pompeu Fabra; FEDER Operative Programme for Catalunya 2014–2020; IkerBasque Research Fellowships; Ramón y Cajal, Grant/Award Number: RYC2019-027538-I; AGAUR Generalitat de Catalunya, Grant/Award Numbers: 2017 SGR 1545, FI-DGR 2019; Ministerio de Ciencia e Innovación, Grant/Award Number: PID2019-108531GB-I00 AEI/FEDER.
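
    The switch-locked power analysis can be sketched along the following lines; the sampling rate, epoch window, and placeholder arrays are assumptions, and the random data merely stand in for switch-locked EEG epochs:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        FS = 500  # sampling rate in Hz (assumed)

        def band_power(x, lo, hi):
            """Instantaneous power of a signal within a frequency band."""
            b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
            return np.abs(hilbert(filtfilt(b, a, x))) ** 2

        # Placeholder epochs (trials x samples), -1 s to +1 s around each
        # reported perceptual alternation.
        fm_epochs = np.random.randn(100, 2 * FS)  # fronto-medial channel
        po_epochs = np.random.randn(100, 2 * FS)  # parieto-occipital channel

        # The reported pattern: fronto-medial theta (5-7 Hz) rises before the
        # switch, while parieto-occipital alpha (8-12 Hz) shows the inverse course.
        theta = np.mean([band_power(tr, 5, 7) for tr in fm_epochs], axis=0)
        alpha = np.mean([band_power(tr, 8, 12) for tr in po_epochs], axis=0)

        t = np.linspace(-1.0, 1.0, 2 * FS)
        print(f"theta pre vs post switch: {theta[t < 0].mean():.2f} vs {theta[t >= 0].mean():.2f}")
        print(f"alpha pre vs post switch: {alpha[t < 0].mean():.2f} vs {alpha[t >= 0].mean():.2f}")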

    Visual limitations shape audio-visual integration

    Pérez-Bellido A, Ernst MO, Soto-Faraco S, López-Moliner J. Visual limitations shape audio-visual integration. Journal of Vision. 2015;15(14):5. Recent studies have proposed that some cross-modal illusions might be expressed in what were previously thought of as sensory-specific brain areas. Therefore, one interesting question is whether auditory-driven visual illusory percepts respond to manipulations of low-level visual attributes (such as luminance or chromatic contrast) in the same way as their non-illusory analogs. Here we addressed this question using the double flash illusion (DFI), whereby one brief flash can be perceived as two when combined with two beeps presented in rapid succession. Our results showed that the perception of two illusory flashes depended on luminance contrast, just as the perception of two real flashes did. Specifically, we found that the higher the flash luminance contrast, the stronger the DFI. Such a pattern seems to contradict what would be predicted from a maximum likelihood estimation perspective, and can be explained by considering that low-level visual stimulus attributes similarly modulate the perception of sound-induced visual phenomena and “real” visual percepts. This finding provides psychophysical support for the involvement of sensory-specific brain areas in the expression of the DFI. On the other hand, the addition of chromatic contrast failed to produce a change in the strength of the DFI despite improved visual sensitivity to real flashes. The null impact of chromaticity on the cross-modal illusion might suggest a weaker interaction of the parvocellular visual pathway with the auditory system for cross-modal illusions.
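
    The maximum likelihood prediction the abstract contrasts with can be made concrete: under MLE cue combination, each cue is weighted by its reliability (inverse variance), so raising luminance contrast (lowering visual noise) should shrink the auditory weight and weaken the illusion, the opposite of what was observed. A minimal sketch with illustrative noise values:

        def mle_weights(sigma_v, sigma_a):
            """Reliability (inverse-variance) weights for the visual and auditory cues."""
            rel_v, rel_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2
            w_v = rel_v / (rel_v + rel_a)
            return w_v, 1.0 - w_v

        sigma_a = 1.0  # auditory noise, arbitrary units (assumed)
        for sigma_v, label in [(2.0, "low contrast"), (0.5, "high contrast")]:
            w_v, w_a = mle_weights(sigma_v, sigma_a)
            print(f"{label}: visual weight {w_v:.2f}, auditory weight {w_a:.2f}")

    Under this scheme the auditory weight drops from 0.80 at low contrast to 0.20 at high contrast, so the MLE account predicts a weaker, not stronger, double flash illusion as contrast increases.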