    Spatial remapping of Tactile Events: Assessing the effects of frequent posture changes

    During the apparently mindless act of localizing a tactile sensation, our brain must realign its initial spatial representation on the skin (somatotopically arranged) according to current body posture (arising from proprioception, vision and even audition). We have recently illustrated [4] the temporal course of this recoding of tactile space from somatotopic to external coordinates using a crossmodal cueing psychophysical paradigm [5,6] in which behavioural reactions to visual targets are evaluated as a function of the location of irrelevant tactile cues. We found that tactile events are initially represented in a fleeting, non-conscious but nevertheless behaviourally consequential somatotopic format, which is quickly replaced by the representations referring to external spatial locations that prevail in our everyday experience. In this addendum, we test the intuition that frequent changes in body posture make it harder to update the spatial remapping system and thus produce stronger psychophysical correlates of the initial somatotopically based spatial representations. Contrary to this expectation, however, we found no evidence of such a modulation when adaptation to a body posture was prevented.
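
    As an illustration of how such a cueing paradigm can be scored, the sketch below computes cueing effects separately under somatotopic and external coordinate mappings for a crossed-hands posture, as a function of cue-target interval. All variable names and the toy data are hypothetical assumptions for illustration, not the authors' analysis code.

        import statistics

        # Hypothetical trials: cue-target interval (ms), the hand cued
        # (anatomical side), the external side that hand occupied (a crossed
        # posture swaps them), the visual target side, and reaction time (ms).
        trials = [
            {"soa": 60,  "cued_hand": "left", "cued_external": "right", "target": "right", "rt": 412},
            {"soa": 60,  "cued_hand": "left", "cued_external": "right", "target": "left",  "rt": 398},
            {"soa": 200, "cued_hand": "left", "cued_external": "right", "target": "right", "rt": 385},
            {"soa": 200, "cued_hand": "left", "cued_external": "right", "target": "left",  "rt": 405},
        ]

        def cueing_effect(trials, frame):
            """RT(uncued) - RT(cued) under a given coordinate mapping.

            frame = "somatotopic" matches the target against the anatomical
            side of the cued hand; frame = "external" matches it against the
            external location that hand occupied. Positive = facilitation.
            """
            key = "cued_hand" if frame == "somatotopic" else "cued_external"
            cued = [t["rt"] for t in trials if t["target"] == t[key]]
            uncued = [t["rt"] for t in trials if t["target"] != t[key]]
            return statistics.mean(uncued) - statistics.mean(cued)

        for soa in sorted({t["soa"] for t in trials}):
            subset = [t for t in trials if t["soa"] == soa]
            print(soa, "ms:",
                  "somatotopic", cueing_effect(subset, "somatotopic"),
                  "external", cueing_effect(subset, "external"))

    In this toy data the somatotopic effect dominates at the short interval and the external effect at the long one, mirroring the temporal course described in the abstract.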

    Vision affects how fast we hear sounds move

    There is a growing body of knowledge about the behavioral and neural correlates of cross-modal interactions in the perception of motion direction, as well as about the computations that underlie unimodal visual speed processing. Yet, the multisensory contributions to the perception of motion speed remain largely uncharted. Here we show that visual motion information exerts a profound influence on the perception of auditory speed. Moreover, our results suggest that this influence is specifically caused by visual velocity rather than by earlier, more local, frequency-based components of visual motion. The way in which visual speed information affects how fast we hear a sound move can be well described by a weighted average model that takes into account the visual speed signal in the computation of auditory speed.
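
    The weighted-average account reduces to a one-line combination rule. The sketch below uses reliability-based (inverse-variance) weights, one common way of setting the weights in cue-combination models; the weighting scheme and all numbers are illustrative assumptions, not the fitted model from the study.

        def perceived_auditory_speed(v_aud, v_vis, sigma_aud, sigma_vis):
            """Weighted average of auditory and visual speed estimates.

            Weights are set by inverse variance (cue reliability), so the
            more reliable signal dominates:
            w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2).
            """
            r_aud, r_vis = 1 / sigma_aud**2, 1 / sigma_vis**2
            w_vis = r_vis / (r_aud + r_vis)
            return (1 - w_vis) * v_aud + w_vis * v_vis

        # A fast visual motion signal pulls the perceived speed of a slower
        # sound upward, the more so the noisier the auditory estimate is.
        print(perceived_auditory_speed(v_aud=10.0, v_vis=20.0,
                                       sigma_aud=4.0, sigma_vis=2.0))  # -> 18.0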

    Repetition effects in the encoding of auditory stimuli

    Identifying an auditory event, for example, is a skill that requires categorizing the sensations received and transmitted through the sensory organs. The flow of information that humans receive from the surrounding world is categorized and organized into discrete elements (bounded in time and space), which can be stored and/or guide behavior. This work explores how redundancy in auditory stimulation affects the perceptual system's ability to identify stimuli when they are presented under high temporal pressure. Specifically, it shows that under conditions of rapid presentation and spatial uncertainty, identification failures (misses or confusions) accumulate on repeated elements more often than on non-repeated ones.
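
    A minimal way to quantify the reported effect is to score identification errors separately for repeated and non-repeated items in each rapid sequence. The bookkeeping sketch below is a hypothetical illustration of that scoring, not the study's analysis.

        from collections import Counter

        def score_errors(presented, reported):
            """Count identification failures per item, split by whether the
            item was repeated within the sequence. Misses and confusions both
            surface here as tokens missing from the report."""
            errors = {"repeated": 0, "unrepeated": 0}
            need = Counter(presented)
            have = Counter(reported)
            for item, n in need.items():
                missed = max(0, n - have[item])
                errors["repeated" if n > 1 else "unrepeated"] += missed
            return errors

        # A repeated tone ("A") is reported only once: the failure lands on
        # the repeated item rather than on the unrepeated ones.
        print(score_errors(presented=["A", "B", "A", "C"],
                           reported=["A", "B", "C"]))
        # -> {'repeated': 1, 'unrepeated': 0}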

    Alpha stimulation of the human parietal cortex attunes tactile perception to external space

    An intriguing question in neuroscience concerns how somatosensory events on the skin are represented in the human brain. Since Head and Holmes’ [1] neuropsychological dissociation between localizing touch on the skin and localizing body parts in external space, touch has been considered to operate in a variety of spatial reference frames [2]. At least two representations of space are in competition during orienting to touch: a somatotopic one, reflecting the organization of the somatosensory cortex (S1) [3], and a more abstract, external reference frame that factors in postural changes in relation to body parts and/or external space [4, 5]. Previous transcranial magnetic stimulation (TMS) studies suggest that the posterior parietal cortex (PPC) plays a key role in supporting representations, as well as orienting attention, in an external reference frame [4, 6]. Here, we capitalized on the TMS entrainment approach [7, 8], targeting the intraparietal sulcus (IPS). We found that frequency-specific (10 Hz) tuning of the PPC induced a spatially specific enhancement of tactile detection that was expressed in an external reference frame. This finding establishes a tight causal link between a concrete form of brain activity (10 Hz oscillation) and a specific type of spatial representation, revealing a fundamental property of how the parietal cortex encodes information. This work was supported by grants from the European Research Council (StG-2010 263145), the Spanish Ministry of Science and Innovation (PSI2010-15426), and the Comissionat per a Universitats i Recerca del DIUE-Generalitat de Catalunya (SGR2009-092) to S.S.-F.
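
    The key analytic move in such designs is deciding whether a perceptual enhancement follows the hand (somatotopic frame) or the side of external space (external frame), using crossed and uncrossed postures to pull the two apart. The toy classifier below makes only that logic explicit; all numbers and names are hypothetical.

        def reference_frame(benefit_uncrossed, benefit_crossed):
            """Classify which frame an enhancement follows across a posture
            change. Each argument maps the external side where the benefit
            was measured ("left"/"right") to its size. If the benefit stays
            on the same external side when the hands are crossed, it is coded
            externally; if it swaps sides with the hand, somatotopically."""
            side_u = max(benefit_uncrossed, key=benefit_uncrossed.get)
            side_c = max(benefit_crossed, key=benefit_crossed.get)
            return "external" if side_u == side_c else "somatotopic"

        # Hypothetical detection benefits (hit-rate difference vs. sham)
        # after 10 Hz stimulation: the benefit stays in the right external
        # hemifield even when the hands are crossed, an externally coded effect.
        print(reference_frame(benefit_uncrossed={"left": 0.01, "right": 0.09},
                              benefit_crossed={"left": 0.02, "right": 0.08}))
        # -> external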

    The speakers’ accent shapes the listeners’ phonological predictions during speech perception

    This study investigates the specificity of predictive coding in spoken word comprehension using event-related potentials (ERPs). We measured word-evoked ERPs in Catalan speakers listening to semantically constraining sentences produced in their native regional accent (Experiment 1) or in a non-native accent (Experiment 2). Semantically anomalous words produced a long-lasting negative shift (N400) starting as early as 250 ms, thus reflecting phonological as well as semantic mismatch. Semantically expected but phonologically unexpected words (non-native forms embedded in a native context) produced only an early (∼250 ms) negative difference. In contrast, this phonological expectancy effect was absent for native, albeit phonologically unexpected, target words embedded in a non-native context. These results suggest phonologically precise expectations when operating over native input, but phonologically less specified expectations in a non-native context. Our findings shed light on contextual influences during word recognition, suggesting that word-form prediction based on context is sensitive and adaptive to phonological variability. This research was supported by the Spanish Ministry of Science and Innovation (PSI2010-15426 and Consolider INGENIO CSD2007-00012), the Comissionat per a Universitats i Recerca del DIUE-Generalitat de Catalunya (SGR2009-092), and the European Research Council (StG-2010 263145).
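
    In ERP terms, the two effects reduce to mean-amplitude differences between conditions in different latency windows. The sketch below computes condition differences in an early (200-300 ms) and an N400 (300-600 ms) window; the window bounds, sampling rate, and simulated waveforms are assumptions for illustration only.

        import numpy as np

        FS = 500  # sampling rate in Hz (assumed)

        def window_mean(erp, start_ms, end_ms):
            """Mean amplitude of an ERP (1-D array, time-locked to word onset
            at sample 0) inside a latency window given in milliseconds."""
            a, b = int(start_ms * FS / 1000), int(end_ms * FS / 1000)
            return erp[a:b].mean()

        rng = np.random.default_rng(0)
        t = np.arange(0, 0.8, 1 / FS)            # 800 ms epoch
        expected = rng.normal(0, 0.2, t.size)    # hypothetical grand average (µV)
        # Add an N400-like negative deflection centered around 400 ms.
        anomalous = expected - 2.0 * np.exp(-((t - 0.4) ** 2) / 0.01)

        for name, (start, end) in {"early": (200, 300), "N400": (300, 600)}.items():
            diff = window_mean(anomalous, start, end) - window_mean(expected, start, end)
            print(f"{name} window difference: {diff:.2f} µV")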

    Searching for audiovisual correspondence in multiple speaker scenarios

    A critical question in multisensory processing is how the constant information flow that arrives at our different senses is organized into coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that fundamental differences in the way auditory and visual information is encoded play a limiting role in crossmodal binding. Based on these unisensory limitations, we provide a unified explanation for several previous, apparently contradictory findings. This work was supported by grants PSI2010-15426 and Consolider INGENIO CSD2007-00012 (MICINN), the Generalitat de Catalunya (SGR2009-092), and the European Research Council (StG-2010 263145).
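
    The diagnostic for attentive versus capacity-free search in such tasks is the slope of response time over set size: an attentive, serial-like search produces RTs that grow with the number of monitored items, whereas a flat slope indicates independence from set size. A minimal slope estimate, on made-up numbers, is sketched below.

        def search_slope(set_sizes, mean_rts):
            """Least-squares slope of mean RT (ms) over set size (items),
            in ms per additional monitored item."""
            n = len(set_sizes)
            mx = sum(set_sizes) / n
            my = sum(mean_rts) / n
            num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
            den = sum((x - mx) ** 2 for x in set_sizes)
            return num / den

        # Hypothetical data: visual search for the face matching a voice
        # slows with the number of faces, while auditory search stays flat.
        print(search_slope([2, 4, 6], [620, 710, 795]))  # ~44 ms/item: attentive
        print(search_slope([2, 4, 6], [655, 660, 652]))  # ~-1 ms/item: set-size independent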

    Modality-switching in the Simon task: The clash of reference frames

    The representation of spatial information related to an event can influence behavior even when location is task-irrelevant, as in the case of Stimulus–Response (S-R) compatibility effects in the Simon task. However, unlike the single-modality situations often used to study the Simon effect, in real-life scenarios various sensory modalities provide spatial information coded in different coordinate systems. Here, we address the expression of S-R compatibility effects in mixed-modality contexts, where events can occur in one of several sensory modalities (i.e., vision, touch or audition). The results confirm that, in single-modality cases, Simon effects in vision are expressed in an external spatial frame of reference, while touch information is coded anatomically. Remarkably, when visual and tactile trials were mixed unpredictably, the Simon effect disappeared in vision, whereas tactile Simon effects remained expressed in their own (anatomical) frame of reference. Mixing visual and auditory stimuli did not obliterate the visual Simon effect, and S-R compatibility effects in an external reference frame were evident for both modalities. The extinction of visual Simon effects as a result of mixing visual and tactile modalities can be interpreted as a consequence of the dynamic reorganization of the weights associated with the different sources of spatial information at play. This research was supported by the Ministerio de Economía y Competitividad (PSI2016-75558-P), the AGAUR Generalitat de Catalunya (2014SGR856) and the European Research Council (StG-2010 263145) to SSF. M.R. was supported by a Juan de la Cierva postdoctoral fellowship (JCI-2012-12335) from the Ministerio de Economía y Competitividad.
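
    The Simon effect itself is just a congruency contrast, but in mixed-modality, crossed-hands designs it has to be computed twice, once per candidate frame, because "congruent" means different things externally and anatomically. A hedged sketch of that double bookkeeping, on hypothetical trials, follows.

        import statistics

        def simon_effect(trials, frame):
            """RT(incongruent) - RT(congruent), in ms, with congruency defined
            in the chosen frame: "external" compares the stimulus's location
            in space with the response side; "anatomical" compares the
            stimulated body side instead. Positive = a Simon effect."""
            key = "external_side" if frame == "external" else "anatomical_side"
            con = [t["rt"] for t in trials if t[key] == t["response_side"]]
            incon = [t["rt"] for t in trials if t[key] != t["response_side"]]
            return statistics.mean(incon) - statistics.mean(con)

        # Hypothetical tactile trials with crossed hands: the left hand
        # (anatomical side "left") lies in the right external hemifield, so
        # the two frames make opposite congruency assignments per trial.
        trials = [
            {"anatomical_side": "left", "external_side": "right", "response_side": "left",  "rt": 430},
            {"anatomical_side": "left", "external_side": "right", "response_side": "right", "rt": 465},
        ]
        print(simon_effect(trials, "anatomical"))  # -> 35.0: effect in the anatomical frame
        print(simon_effect(trials, "external"))    # -> -35.0: the same data flip sign externally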
