
    Musical Expertise Induces Audiovisual Integration of Abstract Congruency Rules

    Perception of everyday life events relies mostly on multisensory integration. Hence, studying the neural correlates of the integration of multiple senses constitutes an important tool in understanding perception within an ecologically valid framework. The present study used magnetoencephalography in human subjects to identify the neural correlates of an audiovisual incongruency response, which is generated not by incongruency of the unisensory physical characteristics of the stimulation but by the violation of an abstract congruency rule. The chosen rule ("the higher the pitch of the tone, the higher the position of the circle") was comparable to musical reading. In parallel, plasticity effects of long-term musical training on this response were investigated by comparing musicians to nonmusicians. The applied paradigm was based on an appropriate modification of the multifeatured oddball paradigm, incorporating, within one run, deviants based on a multisensory audiovisual incongruent condition and two unisensory mismatch conditions: an auditory and a visual one. Results indicated the presence of an audiovisual incongruency response, generated mainly in frontal regions, an auditory mismatch negativity, and a visual mismatch response. Moreover, results revealed that long-term musical training generates plastic changes in frontal, temporal, and occipital areas that affect this multisensory incongruency response as well as the unisensory auditory and visual mismatch responses.
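
    The rule-based oddball design described above can be pictured with a short sketch. The snippet below is a hypothetical illustration, not the study's stimulus script: the tone frequencies, circle positions, trial count, and 10% deviant rate are all assumptions, and only the audiovisual rule-violation deviant is modelled (the unisensory auditory and visual mismatch deviants are omitted for brevity).

```python
import random

# Hypothetical stimulus levels: three tone pitches paired with three
# vertical circle positions (low/mid/high). Values are illustrative only.
PITCHES_HZ = [440, 880, 1760]
POSITIONS = ["low", "mid", "high"]

def make_trial(deviant: bool) -> dict:
    """Build one audiovisual trial; standards follow the congruency rule."""
    idx = random.randrange(3)
    if deviant:
        # Violate the abstract rule: pair the pitch with a different position.
        pos = random.choice([p for i, p in enumerate(POSITIONS) if i != idx])
    else:
        pos = POSITIONS[idx]  # rule-conforming pairing
    return {"pitch_hz": PITCHES_HZ[idx], "circle_pos": pos, "deviant": deviant}

def make_sequence(n_trials=300, p_deviant=0.10):
    """Oddball sequence: mostly rule-conforming standards, rare violations."""
    return [make_trial(random.random() < p_deviant) for _ in range(n_trials)]

if __name__ == "__main__":
    seq = make_sequence()
    print(sum(t["deviant"] for t in seq), "deviants out of", len(seq), "trials")
```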

    Electrophysiological markers of predictive coding in multisensory integration and autism spectrum disorder

    Public summary: The way we perceive the world around us is based not only on the information we receive through our senses, but is also shaped by our past experiences. A recently introduced theory on the processing and integration of sensory information and prior experience, the so-called predictive coding theory, posits that our brain continuously generates an internal predictive model of the world around us based on the information we receive through our senses and the events we have experienced in the past. Being able to predict what we will see, hear, feel, smell, and taste in particular situations allows us to anticipate sensory stimuli. For this reason, we often respond faster and more accurately to predictable sensory signals. At the neural level, evidence has also been found for the existence of such an internal predictive model. Upon hearing a sound, for instance a car horn, our brain automatically generates electrical activity that can be measured with electroencephalography (EEG). When we initiate that same sound ourselves, for example by pressing the horn, we can better predict when the sound will occur and roughly what it will sound like. This increase in the predictability of the sound is reflected in a reduction of the EEG signal. When we listen to a series of predictable sounds in which a sound is unexpectedly omitted, the brain likewise generates a distinct electrical signal, a so-called prediction error, that can be measured with EEG. The strength of this signal is thought to relate to the amount of cognitive resources allocated to the unexpected violation of the prediction. The findings described in this thesis show that, in people with autism spectrum disorder (ASD), self-initiating a sound does not automatically result in a reduction of electrical brain activity. It was also found that a sudden disruption of sensory stimulation can result in increased electrical brain activity in people with ASD. These findings suggest that people with ASD appear to be less able to anticipate sensory stimuli and may have more difficulty processing unexpected disruptions in sensory stimulation. A reduced ability to anticipate sensory stimuli and to cope with unexpected disruptions in sensory stimulation can not only lead to atypical behavioural responses, including hypo- and hypersensitivity to sensory stimuli (symptoms that are common in ASD), but may also have consequences for social-cognitive skills. In social situations, being able to anticipate what someone else will say or do is of crucial importance. Understanding sarcasm, for example, requires the integration of subtle differences in auditory (pitch and prosody) and visual information (facial expressions, body language). Correctly interpreting such ambiguous social signals is often difficult for people with ASD. The findings described in this thesis suggest that the cause of this may lie in disturbances in the ability to anticipate sensory stimuli.
    Future research should determine whether people with ASD also have more difficulty anticipating sensory stimuli and processing unexpected disruptions in other sensory domains. Besides expanding scientific knowledge of sensory information processing in ASD, further research into the neural mechanisms of sensory anticipation could potentially lead to an electrophysiological marker for ASD that can be applied as a diagnostic tool. Especially for people whose behavioural characteristics are not always easy to assess, such a biomarker could potentially serve as an objective measure in clinical practice.

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously. Yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli. This was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio-only stimuli under high compared to low noise were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests a modulation of integration by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
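
    The beta-band suppression measure mentioned above is commonly computed with a filter-Hilbert approach. The sketch below is a generic illustration rather than the study's analysis pipeline; the sampling rate, baseline window, and the 200–400 ms post-stimulus window are assumptions chosen to match the time range quoted in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # Hz, assumed sampling rate

def beta_suppression(trace, fs=FS, band=(15, 25),
                     baseline=(-0.5, 0.0), window=(0.2, 0.4), t0=1.0):
    """Percent change in 15-25 Hz power in `window` relative to `baseline`.

    `trace` is a 1-D EEG signal; `t0` is the stimulus-onset time in seconds
    from the start of the trace. Negative values indicate suppression.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope_power = np.abs(hilbert(filtfilt(b, a, trace))) ** 2

    def mean_power(t_start, t_stop):
        i0, i1 = int((t0 + t_start) * fs), int((t0 + t_stop) * fs)
        return envelope_power[i0:i1].mean()

    base = mean_power(*baseline)
    post = mean_power(*window)
    return 100.0 * (post - base) / base

# Example with synthetic data: 3 s of noise, stimulus onset at t = 1 s.
rng = np.random.default_rng(0)
trial = rng.standard_normal(3 * FS)
print(f"beta power change: {beta_suppression(trial):.1f} %")
```

    In the study's logic, this suppression index would be compared between audiovisual and audio-only trials, separately for high- and low-noise and for emotional versus neutral stimuli.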

    The Effects of Musical Expertise on Sensory Processing

    The goal of this thesis was to assess sensorimotor musical experience and its impact on the way that individuals perceive and interact with real-world musical stimuli. Experiment #1 investigated multisensory integration in 14 musicians and 10 non-musicians using a two-alternative forced-choice (2AFC) discrimination task, and was designed to examine whether musical expertise augmented multisensory enhancement. Musical experience did not alter the outcomes of multisensory integration, but there may be asymmetries between musicians and non-musicians in their use of auditory cues. Experiment #2 was a neuroimaging case study investigating the influence of musical familiarity on the kinesthetic motor imagery of dance accompanied by music in expert dancers. Familiarity resulted in increased hemodynamic responses in the supplementary motor area (SMA) and decreased responses in Heschl's gyrus (HG). These findings provide new evidence regarding the influence of musical expertise on sensory processing using real-world complex stimuli. This thesis suggests that expert practice shapes the way experts perceive and interact with their environments, and emphasizes both the need for, and the challenges of, using naturalistic stimuli.

    Suppression of the auditory N1 by visual anticipatory motion is modulated by temporal and identity predictability

    The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion using an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated across three conditions in which sounds were played either in isolation or in conjunction with a video that reliably predicted the timing of the sound, the identity of the sound, or both the timing and the identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and was reduced when either the timing or the identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.
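
    A simple way to express the N1 suppression effect described above is to compare the N1 peak in each video condition against the audio-only baseline. The sketch below is an illustration on synthetic data, not the study's analysis: the 70–150 ms search window, the sampling grid, and all amplitude values are assumptions made up for the example.

```python
import numpy as np

TIMES = np.arange(-100, 400) / 1000.0   # epoch time axis, -100 to +399 ms, in seconds

def n1_peak(erp, times=TIMES, window=(0.07, 0.15)):
    """Most negative value within the assumed 70-150 ms N1 search window."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

def n1_suppression(erp_audio_only, erp_av):
    """Positive values indicate a smaller (suppressed) N1 in the AV condition."""
    return n1_peak(erp_av) - n1_peak(erp_audio_only)

def fake_erp(n1_amplitude_uv):
    """Synthetic N1-like deflection peaking at 100 ms (for illustration only)."""
    return n1_amplitude_uv * np.exp(-((TIMES - 0.1) ** 2) / (2 * 0.02 ** 2))

# Made-up peak amplitudes (microvolts) for the four conditions of the design.
conditions = {"audio only": -5.0, "timing only": -4.2,
              "identity only": -4.4, "timing + identity": -3.1}
erps = {name: fake_erp(amp) for name, amp in conditions.items()}

for name in ("timing only", "identity only", "timing + identity"):
    print(f"{name:17s} N1 suppression: "
          f"{n1_suppression(erps['audio only'], erps[name]):.2f} uV")
```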

    Multi-Level Audio-Visual Interactions in Speech and Language Perception

    That we perceive our environment as a unified scene rather than as individual streams of auditory, visual, and other sensory information has recently provided motivation to move past the long-held tradition of studying these systems separately. Although they are each unique in their transduction organs, neural pathways, and primary cortical areas, the senses are ultimately merged in a meaningful way that allows us to navigate the multisensory world. Investigating how the senses are merged has become an increasingly broad field of research in recent decades, with the introduction and increased availability of neuroimaging techniques. Areas of study range from multisensory object perception to cross-modal attention, multisensory interactions, and integration. This thesis focuses on audio-visual speech perception, with particular emphasis on facilitatory effects of visual information on auditory processing. When visual information is concordant with auditory information, it provides an advantage that is measurable in behavioral response times and evoked auditory fields (Chapter 3) and in increased entrainment to multisensory periodic stimuli, reflected by steady-state responses (Chapter 4). When the audio-visual information is incongruent, the two signals can often, but not always, combine to form a third, physically absent percept (known as the McGurk effect). This effect is investigated (Chapter 5) using real-word stimuli. McGurk percepts were not robustly elicited for the majority of stimulus types, but response patterns suggest that the physical and lexical properties of the auditory and visual stimuli may affect the likelihood of obtaining the illusion. Together, these experiments add to the growing body of knowledge suggesting that audio-visual interactions occur at multiple stages of processing.
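
    The steady-state entrainment measure referenced for Chapter 4 is typically quantified as the amplitude of the recorded spectrum at the stimulation frequency. The sketch below is a generic illustration on a synthetic signal; the 40 Hz stimulation rate, 2 s epoch length, and sampling rate are assumptions, not the thesis parameters.

```python
import numpy as np

FS = 1000          # Hz, assumed sampling rate
F_STIM = 40.0      # Hz, assumed periodic stimulation rate

def ssr_amplitude(signal, fs=FS, f_stim=F_STIM):
    """Amplitude spectrum value at the stimulation frequency (nearest rFFT bin)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_stim))]

# Synthetic 2 s epoch: a 40 Hz response buried in noise.
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1.0 / FS)
epoch = 0.5 * np.sin(2 * np.pi * F_STIM * t) + rng.standard_normal(t.size)
print(f"amplitude at {F_STIM:.0f} Hz: {ssr_amplitude(epoch):.2f} (a.u.)")
```

    Greater entrainment in the multisensory condition would then show up as a larger amplitude at the stimulation frequency for audio-visual than for unisensory epochs.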

    Audio-visual speech perception: a developmental ERP investigation

    Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of combined auditory and visual speech signals, compared to auditory input alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude for incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable across the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.

    Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    In many natural audiovisual events (e.g., a handclap), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported distinct neural correlates for the effects of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40–60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
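
    The sub-additivity criterion quoted above (AV − V < A) can be made concrete as a difference-wave comparison. The sketch below is a minimal illustration, not the paper's analysis pipeline; the 70–150 ms N1 window and the negative-peak measure are assumptions about a typical N1 quantification.

```python
import numpy as np

TIMES = np.arange(-100, 400) / 1000.0   # epoch time axis in seconds

def n1_amplitude(erp, times=TIMES, window=(0.07, 0.15)):
    """Most negative value within an assumed 70-150 ms N1 window."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()

def is_subadditive(erp_av, erp_v, erp_a):
    """AV - V < A: the audiovisual-minus-visual difference wave has a smaller
    (less negative) N1 than the auditory-only response, i.e. a suppressed N1."""
    return n1_amplitude(erp_av - erp_v) > n1_amplitude(erp_a)
```

    In practice, the same comparison would be computed per subject separately for the spatially congruent and incongruent conditions, and the size of the resulting N1 suppression contrasted between them.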

    The timecourse of multisensory speech processing in unilaterally stimulated cochlear implant users revealed by ERPs.

    A cochlear implant (CI) is an auditory prosthesis that can partially restore auditory function in patients with severe to profound hearing loss. However, this bionic device provides only limited auditory information, and CI patients may compensate for this limitation by means of a stronger interaction between the auditory and visual systems. To better understand the electrophysiological correlates of audiovisual speech perception, the present study used electroencephalography (EEG) and a redundant target paradigm. Postlingually deafened CI users and normal-hearing (NH) listeners were compared in auditory, visual and audiovisual speech conditions. The behavioural results revealed multisensory integration for both groups, as indicated by shortened response times for the audiovisual as compared to the two unisensory conditions. The analysis of the N1 and P2 event-related potentials (ERPs), including topographic and source analyses, confirmed a multisensory effect for both groups and showed a cortical auditory response that was modulated by the simultaneous processing of the visual stimulus. Nevertheless, the CI users in particular revealed a distinct pattern of N1 topography, pointing to a strong visual impact on auditory speech processing. Apart from these condition effects, the results revealed ERP differences between CI users and NH listeners, not only in N1/P2 ERP topographies, but also in the cortical source configuration. Compared to the NH listeners, the CI users showed additional activation in the visual cortex at N1 latency, which was positively correlated with CI experience, and a delayed auditory-cortex activation with a reversed, rightward functional lateralisation. In sum, our behavioural and ERP findings demonstrate a clear audiovisual benefit for both groups, and a CI-specific alteration in cortical activation at N1 latency when auditory and visual input is combined. These cortical alterations may reflect a compensatory strategy to overcome the limited CI input, allowing CI users to improve their lip-reading skills and to approximate the behavioural performance of NH listeners in audiovisual speech conditions. Our results are clinically relevant, as they highlight the importance of assessing CI outcomes not only in auditory-only, but also in audiovisual speech conditions.
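
    The behavioural redundant-target benefit described above (faster responses in the audiovisual condition than in either unisensory condition) can be summarised with a simple response-time comparison. The sketch below runs on synthetic data; the condition means and trial counts are assumptions invented for illustration, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic response times (ms) per condition; all values are made up.
rts = {
    "auditory":    rng.normal(520, 60, 100),
    "visual":      rng.normal(540, 60, 100),
    "audiovisual": rng.normal(470, 60, 100),
}

medians = {cond: float(np.median(x)) for cond, x in rts.items()}
fastest_unisensory = min(medians["auditory"], medians["visual"])
gain = fastest_unisensory - medians["audiovisual"]

for cond, m in medians.items():
    print(f"{cond:12s} median RT: {m:6.1f} ms")
print(f"redundant-target gain over fastest unisensory: {gain:.1f} ms")
```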