
    A phonologically congruent sound boosts a visual target into perceptual awareness

    Capacity limitations of attentional resources allow only a fraction of sensory inputs to enter our awareness. Most prominently, in the attentional blink the observer often fails to detect the second of two rapidly successive targets that are presented in a sequence of distractor items. To investigate how auditory inputs enable a visual target to escape the attentional blink, this study presented the visual letter targets T1 and T2 together with phonologically congruent or incongruent spoken letter names. First, a congruent relative to an incongruent sound at T2 rendered visual T2 more visible. Second, this T2 congruency effect was amplified when the sound was congruent at T1, as indicated by a T1 congruency × T2 congruency interaction. Critically, these effects were observed both when the sounds were presented in synchrony with and prior to the visual target letters, suggesting that the sounds may increase visual target identification via multiple mechanisms such as audiovisual priming or decisional interactions. Our results demonstrate that a sound around the time of T2 increases subjects' awareness of the visual target as a function of T1 and T2 congruency. Consistent with Bayesian causal inference, the brain may thus combine (1) prior congruency expectations based on T1 congruency and (2) phonological congruency cues provided by the audiovisual inputs at T2 to infer whether auditory and visual signals emanate from a common source and should hence be integrated for perceptual decisions.
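
    The causal-inference account in the last sentence can be illustrated with a simple Bayes' rule calculation: a prior on a common source (here assumed to be raised by a congruent sound at T1) is combined with the likelihood of the phonological congruency observed at T2. The sketch below only illustrates this idea, it is not the authors' model, and all probabilities are made-up placeholders.

        def posterior_common_cause(prior_common, p_cong_if_common, p_cong_if_indep, congruent_at_t2):
            """Posterior probability that the auditory and visual letters share a common
            source, given a prior on a common source and the congruency observed at T2."""
            if congruent_at_t2:
                like_common, like_indep = p_cong_if_common, p_cong_if_indep
            else:
                like_common, like_indep = 1 - p_cong_if_common, 1 - p_cong_if_indep
            evidence = like_common * prior_common + like_indep * (1 - prior_common)
            return like_common * prior_common / evidence

        # Placeholder values: a congruent T1 is assumed to raise the prior from 0.5 to 0.7,
        # which in turn raises the posterior for a congruent T2 (the T1 x T2 interaction).
        for prior in (0.5, 0.7):
            print(prior, posterior_common_cause(prior, 0.9, 0.5, congruent_at_t2=True))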

    Distinct computational principles govern multisensory integration in primary sensory and association cortices

    Human observers typically integrate sensory signals in a statistically optimal fashion into a coherent percept by weighting them in proportion to their reliabilities [1, 2, 3 and 4]. An emerging debate in neuroscience is to what extent multisensory integration already emerges in primary sensory areas or is deferred to higher-order association areas [5, 6, 7, 8 and 9]. This fMRI study used multivariate pattern decoding to characterize the computational principles that define how auditory and visual signals are integrated into spatial representations across the cortical hierarchy. Our results reveal small multisensory influences that were limited to a spatial window of integration in primary sensory areas. By contrast, parietal cortices integrated signals weighted by their sensory reliabilities and task relevance, in line with behavioral performance and principles of statistical optimality. Intriguingly, audiovisual integration in parietal cortices was attenuated for large spatial disparities, when signals were unlikely to originate from a common source. Our results demonstrate that multisensory interactions in primary and association cortices are governed by distinct computational principles. In primary visual cortices, spatial disparity controlled the influence of non-visual signals on the formation of spatial representations, whereas in parietal cortices, it determined the influence of task-irrelevant signals. Critically, only parietal cortices integrated signals weighted by their bottom-up reliabilities and top-down task relevance into multisensory spatial priority maps to guide spatial orienting.
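
    The 'statistically optimal fashion' referenced here (e.g. [1, 2, 3 and 4]) is standardly formalised as reliability-weighted averaging, in which each cue's weight is the inverse of its variance. A minimal numpy sketch of that forced-fusion rule follows; the numbers in the example are illustrative, not values from the study.

        import numpy as np

        def fuse_reliability_weighted(s_aud, sigma_aud, s_vis, sigma_vis):
            """Maximum-likelihood fusion of two location estimates: each cue is weighted
            in proportion to its reliability (inverse variance)."""
            r_aud, r_vis = 1.0 / sigma_aud**2, 1.0 / sigma_vis**2
            s_fused = (r_aud * s_aud + r_vis * s_vis) / (r_aud + r_vis)
            sigma_fused = np.sqrt(1.0 / (r_aud + r_vis))  # fused estimate beats either cue alone
            return s_fused, sigma_fused

        # Example: a noisy visual cue at 10 deg and a more reliable auditory cue at 6 deg;
        # the fused estimate lands near the auditory cue with reduced uncertainty.
        print(fuse_reliability_weighted(s_aud=6.0, sigma_aud=2.0, s_vis=10.0, sigma_vis=4.0))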

    Conditioned sounds enhance visual processing

    This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

    Attention controls multisensory perception via two distinct mechanisms at different levels of the cortical hierarchy

    To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities, as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via two distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
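
    Bayesian causal inference as invoked here is commonly implemented along the lines of Koerding et al. (2007): the observer weighs a fused (common-source) estimate against a segregated (independent-sources) estimate by the posterior probability of a common source. The sketch below illustrates that generic model with model averaging; all parameters are placeholders and this is not the study's fitted model.

        import numpy as np

        def bci_report(x_aud, x_vis, sig_aud, sig_vis, sig_prior, p_common, report="auditory"):
            """Generic Bayesian causal inference with model averaging (spatial prior centred on 0).
            Returns the reported location for the cued ('auditory' or 'visual') modality."""
            va, vv, vp = sig_aud**2, sig_vis**2, sig_prior**2
            # Location estimates under common-source (fused) and independent-source structures
            s_fused = (x_aud/va + x_vis/vv) / (1/va + 1/vv + 1/vp)
            s_aud_seg = (x_aud/va) / (1/va + 1/vp)
            s_vis_seg = (x_vis/vv) / (1/vv + 1/vp)
            # Likelihood of the sensory samples under each causal structure
            denom_c = va*vv + va*vp + vv*vp
            like_common = (np.exp(-0.5*((x_aud - x_vis)**2*vp + x_aud**2*vv + x_vis**2*va)/denom_c)
                           / (2*np.pi*np.sqrt(denom_c)))
            like_indep = (np.exp(-0.5*(x_aud**2/(va+vp) + x_vis**2/(vv+vp)))
                          / (2*np.pi*np.sqrt((va+vp)*(vv+vp))))
            post_common = like_common*p_common / (like_common*p_common + like_indep*(1-p_common))
            s_seg = s_aud_seg if report == "auditory" else s_vis_seg
            return post_common*s_fused + (1-post_common)*s_seg  # model averaging

        # Example: 10 deg disparity, vision more reliable; the auditory report is pulled toward vision.
        print(bci_report(x_aud=5.0, x_vis=-5.0, sig_aud=8.0, sig_vis=2.0, sig_prior=15.0, p_common=0.5))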

    Comparing TMS perturbations to occipital and parietal cortices in concurrent TMS-fMRI studies: Methodological considerations

    Neglect and hemianopia are two neuropsychological syndromes that are associated with reduced awareness for visual signals in patients' contralesional hemifield. They offer the unique possibility to dissociate the contributions of retino-geniculate and retino-collicular circuitries in visual perception. Yet, insights from patient fMRI studies are limited by heterogeneity in lesion location and extent, long-term functional reorganization and behavioural compensation after stroke. Transcranial magnetic stimulation (TMS) has therefore been proposed as a complementary method to investigate the effect of transient perturbations on functional brain organization. This concurrent TMS-fMRI study applied TMS perturbation to occipital and parietal cortices with the aim of 'mimicking' neglect and hemianopia. Based on the challenges and interpretational limitations of our own study, we aim to provide tutorial guidance on how future studies should compare TMS to primary sensory and association areas that are governed by distinct computational principles, neural dynamics and functional architecture.

    Distinct neural mechanisms and temporal constraints govern a cascade of audiotactile interactions

    Synchrony is a crucial cue indicating whether sensory signals are caused by a single source or by independent sources. In order to be integrated and produce multisensory behavioural benefits, signals must co-occur within a temporal integration window (TIW). Yet, the underlying neural determinants and mechanisms of integration across asynchronies remain unclear. This psychophysics and electroencephalography study investigated the temporal constraints of behavioural response facilitation and of neural interactions for evoked response potentials (ERPs), inter-trial coherence (ITC), and time-frequency (TF) power. Participants were presented with noise bursts, ‘taps to the face’, and their audiotactile (AT) combinations at seven asynchronies: 0, ±20, ±70, and ±500 ms. Behaviourally, we observed an inverted U-shaped function for AT response facilitation, which was maximal for synchronous AT stimulation and declined within a ≤70 ms TIW. For ERPs, we observed AT interactions at 110 ms for near-synchronous stimuli within a ≤20 ms TIW and at 400 ms within a ≤70 ms TIW, consistent with behavioural response facilitation. By contrast, AT interactions for theta ITC and ERPs at 200 ms post-stimulus were selective for ±70 ms asynchrony, potentially mediated via phase resetting. Finally, interactions for induced theta power and the alpha/beta power rebound emerged at 800-1100 ms across several asynchronies, including even the 500 ms auditory-leading asynchrony. In sum, we observed neural interactions that were confined to the behavioural TIW, extended beyond it, or were specific to ±70 ms asynchrony. This diversity of temporal profiles and constraints demonstrates that multisensory integration unfolds in a cascade of interactions governed by distinct neural mechanisms.
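
    Inter-trial coherence, one of the measures reported here, quantifies how consistently oscillatory phase (e.g. in the theta band) is aligned across trials at each time point. The snippet below is a generic sketch of that measure, not the study's exact pipeline; the band-pass filtering step is assumed to have been done already and all example numbers are placeholders.

        import numpy as np
        from scipy.signal import hilbert

        def inter_trial_coherence(trials):
            """Inter-trial coherence for band-passed data of shape (n_trials, n_samples):
            1 = identical phase on every trial, 0 = random phase across trials."""
            phases = np.angle(hilbert(trials, axis=1))
            return np.abs(np.mean(np.exp(1j * phases), axis=0))

        # Example: 40 trials of a 5 Hz (theta) oscillation with modest phase jitter across trials
        rng = np.random.default_rng(1)
        t = np.arange(0, 1, 1/500)
        trials = np.sin(2*np.pi*5*t + 0.5*rng.standard_normal((40, 1)))
        print(inter_trial_coherence(trials).max())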

    The influence of auditory attention on rhythmic speech tracking: Implications for studies of unresponsive patients

    Language comprehension relies on integrating words into progressively more complex structures, like phrases and sentences. This hierarchical structure-building is reflected in rhythmic neural activity across multiple timescales in E/MEG in healthy, awake participants. However, recent studies have shown evidence for this “cortical tracking” of higher-level linguistic structures also in a proportion of unresponsive patients. What does this tell us about these patients’ residual levels of cognition and consciousness? Must the listener direct their attention toward higher-level speech structures to exhibit cortical tracking, and would selective attention across levels of the hierarchy influence the expression of these rhythms? We investigated these questions in an EEG study of 72 healthy human volunteers listening to streams of monosyllabic isochronous English words that were either unrelated (scrambled condition) or composed of four-word sequences building meaningful sentences (sentential condition). Importantly, there were no physical cues between the four-word sentences; rather, boundaries were marked by syntactic structure and thematic role assignment. Participants were divided into three attention groups, from passive listening (passive group) to attending to individual words (word group) or sentences (sentence group). The passive and word groups were initially naïve to the sentential stimulus structure, while the sentence group was not. We found significant tracking at word and sentence rates across all three groups, with sentence tracking linked to left middle temporal gyrus and right superior temporal gyrus. Goal-directed attention to words did not enhance word-rate tracking, suggesting that word tracking here reflects largely automatic mechanisms, as has previously been shown for tracking at the syllable rate. Importantly, goal-directed attention to sentences relative to words significantly increased sentence-rate tracking over left inferior frontal gyrus. This attentional modulation of rhythmic EEG activity at the sentential rate highlights the role of attention in integrating individual words into complex linguistic structures. Nevertheless, given the presence of high-level cortical tracking under conditions of lower attentional effort, our findings underline the suitability of the paradigm for clinical application in patients after brain injury. The neural dissociation between passive tracking of sentences and directed attention to sentences provides a potential means to further characterise the cognitive state of each unresponsive patient.
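
    Cortical tracking in such frequency-tagging designs is typically quantified as spectral power (or coherence) at the word presentation rate and at the sentence rate, i.e. the word rate divided by four. The sketch below illustrates the idea on synthetic data; the 3.2 Hz word rate, the 250 Hz sampling rate and the single-channel analysis are placeholders, since the abstract does not report the study's actual parameters.

        import numpy as np

        def tracking_power(eeg, fs, word_rate=3.2, words_per_sentence=4):
            """Power of a single EEG channel at the word rate and at the sentence rate
            (word_rate / words_per_sentence), read from the FFT bins closest to each rate."""
            spectrum = np.abs(np.fft.rfft(eeg))**2
            freqs = np.fft.rfftfreq(len(eeg), d=1.0/fs)
            word_power = spectrum[np.argmin(np.abs(freqs - word_rate))]
            sentence_power = spectrum[np.argmin(np.abs(freqs - word_rate/words_per_sentence))]
            return word_power, sentence_power

        # Example: 60 s of noise plus a weak rhythm at the 0.8 Hz 'sentence' rate
        fs = 250
        t = np.arange(0, 60, 1/fs)
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal(t.size) + 0.3*np.sin(2*np.pi*0.8*t)
        print(tracking_power(eeg, fs))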

    Using the past to estimate sensory uncertainty

    To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference, where sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
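
    The 'exponential discounting' approximation described here amounts to estimating current sensory variance from an exponentially weighted window over recent sensory samples, so that new evidence gradually overwrites old evidence. A minimal sketch follows; the time constant and the simulated noise jump are assumptions for illustration, not the study's fitted values.

        import numpy as np

        def discounted_variance(samples, tau=5.0):
            """Exponentially discounted estimate of sensory variance: the most recent sample
            gets weight 1 and older samples are down-weighted with time constant tau (in trials)."""
            weights = np.exp(-np.arange(len(samples))[::-1] / tau)
            weights /= weights.sum()
            mean = np.sum(weights * samples)
            return np.sum(weights * (samples - mean)**2)

        # Example: the visual noise level jumps halfway through; the discounted estimate tracks the change.
        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 2, 50), rng.normal(0, 10, 50)])
        print(discounted_variance(x[:50]), discounted_variance(x))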