
    The COGs (context, object, and goals) in multisensory processing

    Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and “top-down” control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have traditionally been studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer’s goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and cognitive levels. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review, we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.

    fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues such as temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional magnetic resonance imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation.
    Results: The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency.
    Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.
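    The fMR-A logic described above can be sketched with synthetic numbers: adaptation is the drop in response from the initial to the repeated presentation, and content sensitivity appears as a larger drop for congruent than incongruent pairs. All amplitudes below are made-up illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical BOLD amplitudes (arbitrary units) for one STC cluster:
# rows = trials, columns = [initial presentation, repeated presentation].
rng = np.random.default_rng(0)
congruent = rng.normal([2.0, 1.2], 0.2, size=(40, 2))    # stronger adaptation
incongruent = rng.normal([2.0, 1.7], 0.2, size=(40, 2))  # weaker adaptation

def adaptation_index(trials):
    """Mean response decrease from initial to repeated presentation."""
    return float(np.mean(trials[:, 0] - trials[:, 1]))

ai_con = adaptation_index(congruent)
ai_inc = adaptation_index(incongruent)

# Content sensitivity: adaptation is stronger for congruent repetitions.
print(ai_con > ai_inc)
```

    A real analysis would of course estimate these amplitudes from a deconvolution or GLM fit per voxel or cluster; the comparison of adaptation indices across congruency conditions is the point of the sketch.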

    Predictive coding and multisensory integration: an attentional account of the multisensory mind

    Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation depends on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to sensory inputs alone: internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.

    The Functional Neuroanatomy of Letter-Speech Sound Integration and Its Relation to Brain Abnormalities in Developmental Dyslexia

    This mini-review provides a comparison of the brain systems associated with developmental dyslexia and the brain systems associated with letter-speech sound (LSS) integration. First, the findings on the functional neuroanatomy of LSS integration are summarized in order to obtain a comprehensive overview of the brain regions involved in this process. To this end, neurocognitive studies investigating LSS integration in both normal and abnormal reading development are taken into account. The neurobiological basis underlying LSS integration is subsequently compared with existing neurocognitive models of functional and structural brain abnormalities in developmental dyslexia, focusing on superior temporal and occipito-temporal (OT) key regions. Ultimately, the commonalities and differences between the brain systems engaged by LSS integration and the brain systems identified with abnormalities in developmental dyslexia are investigated. This comparison will add to our understanding of the relation between LSS integration and normal and abnormal reading development.

    Audiovisual Processing of Chinese Characters Elicits Suppression and Congruency Effects in MEG

    Learning to associate written letters/characters with speech sounds is crucial for reading acquisition. Most previous studies have focused on audiovisual integration in alphabetic languages. Less is known about logographic languages such as Chinese, whose characters map mostly onto syllable-based morphemes in the spoken language. Here we investigated how long-term exposure to a native language affects the underlying neural mechanisms of audiovisual integration in a logographic language using magnetoencephalography (MEG). MEG sensor and source data from 12 adult native Chinese speakers and a control group of 13 adult Finnish speakers were analyzed for audiovisual suppression (bimodal responses vs. sum of unimodal responses) and congruency (bimodal incongruent responses vs. bimodal congruent responses) effects. The suppressive integration effect was found in the left angular and supramarginal gyri (205–365 ms) and left inferior frontal and left temporal cortices (575–800 ms) in the Chinese group. The Finnish group showed a distinct suppression effect only in the right parietal and occipital cortices at a relatively early time window (285–460 ms). The congruency effect was observed only in the Chinese group, in left inferior frontal and superior temporal cortex in a late time window (about 500–800 ms), probably related to modulatory feedback from multisensory regions and semantic processing. The audiovisual integration in a logographic language showed a clear resemblance to that in alphabetic languages in the left superior temporal cortex, but with activation specific to the logographic stimuli observed in the left inferior frontal cortex. The current MEG study indicates that learning a logographic language has a large impact on the audiovisual integration of written characters, with some distinct features compared to previous results on alphabetic languages.

    Visuohaptic convergence in a corticocerebellar network

    The processing of visual and haptic inputs, occurring either separately or jointly, is crucial for everyday-life object recognition, and has been a focus of recent neuroimaging research. Previously, visuohaptic convergence has mostly been investigated with matching-task paradigms. However, much less is known about visuohaptic convergence in the absence of additional task demands. We conducted two functional magnetic resonance imaging experiments in which subjects actively touched and/or viewed unfamiliar object stimuli without any additional task demands. In addition, we performed two control experiments with audiovisual and audiohaptic stimulation to examine the specificity of the observed visuohaptic convergence effects. We found robust visuohaptic convergence in bilateral lateral occipital cortex and anterior cerebellum. In contrast, neither the anterior cerebellum nor the lateral occipital cortex showed any involvement in audiovisual or audiohaptic convergence, indicating that multisensory convergence in these regions is specifically geared to visual and haptic inputs. These data suggest that in humans the lateral occipital cortex and the anterior cerebellum play an important role in visuohaptic processing even in the absence of additional task demands.

    Integration of spoken and written words in beginning readers: A topographic ERP study

    Integrating visual and auditory language information is critical for reading. Suppression and congruency effects in audiovisual paradigms with letters and speech sounds have provided information about low-level mechanisms of grapheme-phoneme integration during reading. However, the central question of how such processes relate to reading entire words remains unexplored. Using ERPs, we investigated whether audiovisual integration already occurs for words in beginning readers, and if so, whether this integration is reflected by differences in map strength or topography (aim 1); and moreover, whether such integration is associated with reading fluency (aim 2). A 128-channel EEG was recorded while 69 monolingual (Swiss-)German-speaking first-graders performed a detection task with rare targets. Stimuli were presented in blocks either auditorily (A), visually (V) or audiovisually (matching: AVM; nonmatching: AVN). Corresponding ERPs were computed, and unimodal ERPs summated (A + V = sumAV). We applied TANOVAs to identify time windows with significant integration effects: suppression (sumAV-AVM) and congruency (AVN-AVM). These were further characterized using GFP and 3D-centroid analyses, and significant effects were correlated with reading fluency. The results suggest that audiovisual suppression effects occur for familiar German and unfamiliar English words, whereas audiovisual congruency effects can be found only for familiar German words, probably due to the lexical-semantic processes involved. Moreover, congruency effects were characterized by topographic differences, indicating that different sources are active during processing of congruent compared to incongruent audiovisual words. Furthermore, no clear associations between audiovisual integration and reading fluency were found. The degree to which such associations develop in beginning readers remains open to further investigation.
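    The difference-wave arithmetic described above (A + V = sumAV; suppression = sumAV − AVM; congruency = AVN − AVM) can be sketched as follows. The ERP arrays here are random placeholders standing in for grand-average data, and `gfp` is a minimal illustrative implementation of global field power (the spatial standard deviation across channels at each time point), not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 128, 100  # 128-channel EEG, 100 time samples (placeholder sizes)

# Placeholder grand-average ERPs (channels x time) for the four conditions.
A   = rng.normal(0, 1, (n_ch, n_t))   # auditory-only
V   = rng.normal(0, 1, (n_ch, n_t))   # visual-only
AVM = rng.normal(0, 1, (n_ch, n_t))   # audiovisual, matching
AVN = rng.normal(0, 1, (n_ch, n_t))   # audiovisual, nonmatching

sumAV = A + V                  # additive model: A + V = sumAV
suppression = sumAV - AVM      # suppression difference wave
congruency  = AVN - AVM        # congruency difference wave

def gfp(erp):
    """Global field power: spatial standard deviation at each time point."""
    return erp.std(axis=0)

print(suppression.shape, gfp(congruency).shape)
```

    In the actual study, time windows where such difference waves show significant effects are identified with TANOVAs before GFP and centroid characterization; the sketch only shows the condition arithmetic.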

    Efficient Visual Search from Synchronized Auditory Signals Requires Transient Audiovisual Events

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.
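    The modulation contrast above can be sketched minimally: a sinusoidal envelope changes gradually, whereas a square-wave envelope contains abrupt (transient) transitions. The sampling rate, modulation rate, and "transience" proxy below are arbitrary choices for illustration, not the study's stimulus parameters.

```python
import numpy as np

fs, f_mod, dur = 1000, 2.0, 1.0          # sample rate (Hz), modulation rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs

# Two amplitude/luminance envelopes in [0, 1]:
sine_mod = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))            # gradual changes
square_mod = (np.sin(2 * np.pi * f_mod * t) > 0).astype(float)  # abrupt changes

# A simple proxy for transience: the largest sample-to-sample change.
max_step_sine = np.max(np.abs(np.diff(sine_mod)))
max_step_square = np.max(np.abs(np.diff(square_mod)))

print(max_step_square > max_step_sine)
```

    The square wave's full-range jumps between samples are exactly the kind of transient event the authors argue is needed for synchrony-driven audiovisual binding, while the sine envelope never changes by more than a small step per sample.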

    Fixing fluency: Neurocognitive assessment of a dysfluent reading intervention

    The ability to read is essential to attain society’s literacy demands. Unfortunately, a significant percentage of the population experiences major difficulties in mastering reading and spelling skills. Individuals diagnosed with developmental dyslexia are at severe risk for adverse academic, economic, and psychosocial consequences, thus requiring clinical intervention. To date, there is no effective remediation for the lack of reading fluency, which remains the most persistent symptom of dyslexia. This thesis aims at identifying factors involved in the failure to develop a functional reading network, as well as factors of treatment success in addressing the notorious ‘fluency barrier’ in dyslexia. The present work combines a theoretical framework of dyslexia based on the multisensory integration deficit with recent advances in our knowledge of the brain networks specialized for reading. This thesis uses a longitudinal design including both behavioral and neurophysiological measures in dyslexic third-graders. Between measurements, we provide an intervention aimed at improving reading fluency by training the automation of letter-speech sound mappings. The studies presented in this thesis contribute to our understanding of dyslexics’ deficits and their remediation.