
    Dynamics of Vocalization-Induced Modulation of Auditory Cortical Activity at Mid-utterance

    Background: Recent research has addressed the suppression of cortical sensory responses to altered auditory feedback at the onset of speech utterances. However, there is reason to assume that the mechanisms underlying sensorimotor processing at mid-utterance differ from those involved in sensorimotor control at utterance onset. The present study examined the dynamics of event-related potentials (ERPs) to different acoustic versions of auditory feedback at mid-utterance. Methodology/Principal findings: Subjects produced a vowel sound while hearing, via headphones, their pitch-shifted voice (100 cents), a sum of their vocalization and pure tones, or a sum of their vocalization and white noise at mid-utterance. Subjects also passively listened to playback of what they heard during active vocalization. Cortical ERPs were recorded in response to the feedback changes during both active vocalization and passive listening. The results showed that, relative to passive listening, active vocalization yielded enhanced P2 responses to the 100-cent pitch shifts, whereas P2 responses were suppressed when voice auditory feedback was distorted by pure tones or white noise. Conclusion/Significance: These findings demonstrate, for the first time, a dynamic modulation of cortical activity as a function of the quality of acoustic feedback at mid-utterance, suggesting that auditory cortical responses can be enhanced or suppressed to distinguish self-produced speech from externally produced sounds.
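    For scale, the 100-cent feedback shift is one equal-tempered semitone: a shift in cents maps to a frequency ratio of 2^(cents/1200), i.e. roughly a 5.9% upward change. A minimal sketch of that conversion (illustrative only; the function name and example frequency are not from the paper):

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a frequency scaling factor.
    1200 cents = one octave (a frequency ratio of 2)."""
    return 2.0 ** (cents / 1200.0)

f0 = 220.0                          # example vocal fundamental (Hz)
shifted = f0 * cents_to_ratio(100)  # the 100-cent shift used in the study
print(f"{f0:.1f} Hz -> {shifted:.2f} Hz")  # 220.0 Hz -> 233.08 Hz
```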

    Role of the cerebellum in adaptation to delayed action effects

    Actions are typically associated with sensory consequences. For example, knocking at a door results in predictable sounds. These self-initiated sensory stimuli are known to elicit smaller cortical responses than passively presented stimuli; e.g., early auditory evoked magnetic fields known as the M100 and M200 components are attenuated. Current models implicate the cerebellum in the prediction of the sensory consequences of our actions. However, causal evidence is largely missing. In this study, we introduced a constant delay (of 100 ms) between actions and action-associated sounds, and we recorded magnetoencephalography (MEG) data as participants adapted to the delay. We found an increase in the attenuation of the M100 component over time for self-generated sounds, which indicates cortical adaptation to the introduced delay. In contrast, no change in M200 attenuation was found. Interestingly, disrupting cerebellar activity via transcranial magnetic stimulation (TMS) abolished the adaptation of M100 attenuation, while the M200 attenuation reversed to an M200 enhancement. Our results provide causal evidence for the involvement of the cerebellum in adapting to delayed action effects, and thus in the prediction of the sensory consequences of our actions.
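    The abstract does not give a formula for "attenuation", but a common convention is to express the self-generated (active) component amplitude relative to the externally generated (passive) one. A sketch with simulated numbers (the metric and all values are assumptions, not the paper's analysis):

```python
import numpy as np

def attenuation_index(active: np.ndarray, passive: np.ndarray) -> float:
    """Fractional suppression of an evoked component for self-generated
    (active) relative to externally generated (passive) stimuli."""
    return 1.0 - active.mean() / passive.mean()

rng = np.random.default_rng(0)
# Hypothetical M100 peak amplitudes (arbitrary units) for 20 participants.
passive_m100 = rng.normal(100.0, 10.0, size=20)
active_m100 = rng.normal(80.0, 10.0, size=20)  # attenuated when self-generated
print(f"M100 attenuation index: {attenuation_index(active_m100, passive_m100):.2f}")
```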

    Auditory associative learning and its neural correlates in the auditory midbrain

    Interpreting the meaning of environmental stimuli to generate optimal behavioral responses is essential for survival. Simply sensing a sound, without accessing prior knowledge in the brain, will not benefit behavior. How sensation and memory interact to form behavior is one of the fundamental questions in neuroscience. In this thesis, I have addressed this question from two perspectives: I investigated the behavioral outcome of this interaction using discrimination, and the circuit underlying this interaction using electrophysiological recordings in the behaving mouse. Behaviorally, we found that the physical difference between to-be-discriminated sounds had a constraining effect on discrimination. This effect occurred even though the physical differences were significantly larger than reported discrimination limens, thus reflecting a high overlap between the memory traces of the relevant stimuli. The results suggest a strong role of pre-wired tonotopic organization and the involvement of peripheral stations with wider tuning (Ehret and Merzenich, 1985; Taberner and Liberman, 2005). To further understand the influence of sensation on behavior, we tested the interaction between sound features using generalization. Using sounds that differed in two dimensions, we found that bi-dimensional generalization can be either biased towards a single dimension or an integration of both; which of the two occurred depended on the dimensions used. Although the inferior colliculus (IC) is the first convergence station in the auditory system (Casseday et al., 2002), its role in encoding behaviorally relevant information is not well understood. Recording from freely behaving mice, we found that task engagement modulated neural activity in the IC in a stimulus-specific manner. Our lab previously found that relevant sound exposure induced enhancement of neural activity and shifts in tonal representation in the IC (Cruces-Solís et al., 2018). As a continuation, we found that movement-sound association is essential for this plasticity. Furthermore, recordings in freely behaving mice showed that this association modulated the ongoing LFP in the IC, suggesting a new role of the IC in filtering movement-related acoustic stimuli. To conclude, our results support the view that the IC is not simply an auditory structure that relays auditory information to the cortex, but plays an important role in interpreting the meaning of sound. The new role of the IC in encoding movement-related information suggests that the filtering function of the auditory system starts already at subcortical stages of the auditory pathway.
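    Performance in such auditory discrimination tasks is commonly summarised with the sensitivity index d′, computed from hit and false-alarm rates. A generic sketch (the standard signal-detection formula with invented rates, not the thesis's specific analysis):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example rates for an easy vs. a hard sound pair (hypothetical values).
print(f"easy pair: d' = {d_prime(0.95, 0.10):.2f}")  # ~2.93
print(f"hard pair: d' = {d_prime(0.70, 0.40):.2f}")  # ~0.78
```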

    Brain Responses Track Patterns in Sound

    This thesis uses specifically structured sound sequences, together with electroencephalography (EEG) recordings and behavioural tasks, to understand how the brain forms and updates a model of the auditory world. Experimental chapters 3-7 address different effects arising from statistical predictability, stimulus repetition and surprise. Stimuli comprised tone sequences with frequencies varying in regular or random patterns. In Chapter 3, EEG data demonstrate fast recognition of predictable patterns, shown by an increase in responses to regular relative to random sequences. Behavioural experiments investigate attentional capture by stimulus structure, suggesting that regular sequences are easier to ignore. Responses to repetitive stimulation generally exhibit suppression, thought to form a building block of regularity learning. However, the patterns used in this thesis show the opposite effect: predictable patterns evoke a strongly enhanced brain response compared to frequency-matched random sequences. Chapter 4 presents a study which reconciles auditory sequence predictability and repetition in a single paradigm. Results indicate a system for automatic predictability monitoring which is distinct from, but concurrent with, repetition suppression. The brain's internal model can be investigated via the response to rule violations. Chapters 5 and 6 present behavioural and EEG experiments in which violations are inserted into the sequences. Outlier tones within regular sequences evoked a larger response than matched outliers in random sequences. However, this effect was not present when the violation comprised a silent gap. Chapter 7 concerns the ability of the brain to update an existing model. Regular patterns transitioned to a different rule, keeping the frequency content constant. Responses show a period of adjustment to the rule change, followed by a return to tracking the predictability of the sequence. These findings are consistent with the notion that the brain continually maintains a detailed representation of ongoing sensory input and that this representation shapes the processing of incoming information.
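    To illustrate the stimulus logic (parameters are invented, not the thesis's): a regular (REG) sequence cycles deterministically through a set of tone frequencies, while a matched random (RAND) sequence draws from the same set without a repeating order, so the two differ in predictability but not in long-term frequency content:

```python
import numpy as np

rng = np.random.default_rng(1)
# Candidate pip frequencies, log-spaced across the hearing range (Hz).
pool = np.logspace(np.log10(200), np.log10(2000), 20)

def make_sequence(cycle_len: int, n_pips: int, regular: bool) -> np.ndarray:
    """Return pip frequencies: a repeating cycle (REG) or random
    draws from the same frequency set (RAND)."""
    cycle = rng.choice(pool, size=cycle_len, replace=False)
    if regular:
        reps = -(-n_pips // cycle_len)  # ceiling division
        return np.tile(cycle, reps)[:n_pips]
    return rng.choice(cycle, size=n_pips, replace=True)

reg = make_sequence(cycle_len=10, n_pips=60, regular=True)
rand = make_sequence(cycle_len=10, n_pips=60, regular=False)
```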

    Expectation suppression across sensory modalities: a MEG investigation

    In the last few decades, much research has focused on understanding how the human brain generates expectations about incoming sensory input and how it deals with surprising or unpredictable input. It is evident in the predictive processing literature that the human brain suppresses neural responses to predictable/expected stimuli (termed the expectation suppression effect). This thesis provides evidence on how expectation suppression is affected by content-based expectations (what) and temporal uncertainty (when) across sensory modalities (visual and auditory), using state-of-the-art magnetoencephalography (MEG) imaging. The results show that the visual domain is more sensitive to content-based expectations (what) than to timing (when), and is sensitive to timing (when) only if the content (what) is predictable. The auditory domain, however, is equally sensitive to what and when features, showing enhanced suppression by expectation compared to the visual domain. This thesis concludes that the sensory modalities deal differently with contextual expectations and temporal predictability. This suggests that modality-specific differences should be considered when investigating predictive processing in the human brain, since the predictive mechanism at work in one domain should not be assumed to generalize to other domains.

    An electrophysiological investigation into the role of agency and contingency on sensory attenuation

    Stimuli generated by a person’s own willed actions generally elicit a smaller neurophysiological response than physically identical stimuli that have been externally generated. This phenomenon, known as sensory attenuation, has primarily been studied by comparing the N1, Tb and P2 components of the event-related potentials (ERPs) evoked by self-initiated vs. externally generated sounds. Sensory attenuation has been implicated in some psychotic disorders such as schizophrenia, where symptoms such as auditory hallucinations and delusions of control have been conceptualised as reflecting a difficulty in distinguishing between internally and externally generated stimuli. This thesis employed a novel paradigm across five experiments to investigate the role of agency and contingency in sensory attenuation. The role of agency was investigated in Chapter 2. In Experiment 1, participants watched a moving, marked tickertape while EEG was recorded. In the active condition, participants chose whether to press a button by a certain mark on the tickertape. If a button-press had not occurred by the mark, a tone was played one second later. If the button was pressed prior to the mark, the tone was not played. In the passive condition, participants passively watched the animation and were informed about whether a tone would be played on each trial. The design for Experiment 2 was identical, except that the contingencies were reversed (i.e., pressing the button prior to the mark led to a tone). The results were consistent across the two experiments: while there were no differences in N1 amplitude between the active and passive conditions, the amplitude of the Tb component was suppressed in the active condition. The amplitude of the P2 component was enhanced in the active condition in both Experiments 1 and 2. These results suggest that agency and motor actions per se have differential effects on sensory attenuation to sounds and are indexed by different ERP components. In Chapter 3, we investigated the role of contingency in sensory attenuation using a tickertape design similar to that of Chapter 2. In the Full Contingency (FC) condition, participants again chose whether to press a button by a certain mark on the tickertape. If a button-press had not occurred by the mark, a sound was played (one second later) 100% of the time (Experiment 3). If the button was pressed prior to the mark, the sound was not played. In the Half Contingency (HC) condition, participants observed the same tickertape; however, if participants did not press the button by the mark, a sound occurred 50% of the time (HC-Inaction), while if the participant did press the button, a sound also played 50% of the time (HC-Action). In Experiment 4, the design was identical, except that a button-press triggered the sound in the FC condition. The results were consistent across both experiments in Chapter 3: while there were no differences in N1 amplitude across the FC and HC conditions, the amplitude of the Tb component was smaller in the FC condition when compared to the HC-Inaction condition. The amplitude of the P2 component was also smaller in the FC condition compared to both the HC-Action and HC-Inaction conditions. These results suggest that the effect of contingency on neurophysiological indices of sensory attenuation may be indexed by the Tb and P2 components, as opposed to the more heavily studied N1 component.
    Chapter 4 also investigated contingency, but instead used a more ‘traditional’ self-stimulation paradigm in which sounds immediately followed the button-press. In Chapter 4, participants observed a fixation cross while pressing a button to generate a sound. The probability of the sound occurring after the button-press was either 100% (active 100) or 50% (active 50). In the two passive conditions (passive 100 and passive 50), sounds generated in the corresponding active conditions were recorded and played back to participants while they passively listened. In contrast with the results of Chapter 3, the results of Chapter 4 showed both the classical N1 suppression effect and an effect of contingency on the N1, where sounds with a 50% probability generated larger N1 amplitudes than sounds with 100% probability. In contrast, Tb amplitude was modulated by contingency but did not show any differences between the active and passive conditions. The results of this study suggest that both the sense of agency and sensory contingency can influence sensory attenuation, and thus both should be considered in future studies investigating this theoretically and clinically important phenomenon.
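    To make the Chapter 4 manipulation concrete: tones follow a button press with probability 1.0 (active 100) or 0.5 (active 50), and the contingency effect is a larger N1 for the less predictable tones. A sketch with simulated trials (the probabilities are from the abstract; the amplitude values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def schedule_tones(n_trials: int, p_tone: float) -> np.ndarray:
    """Boolean mask: True on trials where the button press triggers a tone."""
    return rng.random(n_trials) < p_tone

active_100 = schedule_tones(200, p_tone=1.0)  # tone on every press
active_50 = schedule_tones(200, p_tone=0.5)   # tone on ~half of presses

# Hypothetical single-trial N1 amplitudes (µV; N1 is negative-going),
# drawn larger (more negative) for the less predictable 50% tones.
n1_100 = rng.normal(-4.0, 1.0, size=active_100.sum())
n1_50 = rng.normal(-5.5, 1.0, size=active_50.sum())
print(f"mean N1 (100%): {n1_100.mean():.2f} µV")
print(f"mean N1 ( 50%): {n1_50.mean():.2f} µV")
```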

    Electrophysiological markers of predictive coding in multisensory integration and autism spectrum disorder

    Lay summary: The way we perceive the world around us is based not only on the information we receive through our senses, but is also shaped by our past experiences. A recently introduced theory about the processing and integration of sensory information and prior experience, the so-called predictive coding theory, holds that our brain continuously generates an internal predictive model of the world around us based on the information we receive through our senses and the events we have experienced in the past. Being able to predict what we will see, hear, feel, smell and taste in certain situations allows us to anticipate sensory stimuli. For this reason, we often respond faster and more accurately to predictable sensory signals. At the neural level, evidence has also been found for the presence of an internal predictive model. After hearing a sound, for example a car horn, our brain automatically generates electrical activity that can be measured with electroencephalography (EEG). When we initiate that same sound ourselves, for example by pressing the horn, we can better predict when the sound will occur and roughly how it will sound. This increase in the predictability of the sound is reflected in a reduction of the EEG signal. When we listen to a series of predictable sounds in which a sound is unexpectedly omitted, the brain also generates a clear electrical signal, a so-called prediction error, which can be measured with EEG. The strength of this signal is thought to relate to the amount of cognitive resources allocated to the unexpected violation of the prediction. The findings described in this thesis show that, in people with autism spectrum disorder (ASD), self-initiating a sound does not automatically result in a reduction of electrical brain activity. It was also found that a sudden disruption of sensory stimulation can result in increased electrical brain activity in people with ASD. These findings suggest that people with ASD appear less able to anticipate sensory stimuli, and may have more difficulty processing unexpected disruptions of sensory stimulation. A reduced ability to anticipate sensory stimuli and to deal with unexpected disruptions of sensory stimulation can lead not only to atypical behavioural responses, including hypo- and hypersensitivity to sensory stimuli (symptoms that are common in ASD), but may also have consequences for social cognitive skills. In social situations, being able to anticipate what another person says or does is of crucial importance. Understanding sarcasm, for example, requires the integration of subtle differences in auditory (pitch and prosody) and visual information (facial expressions, body language). Correctly interpreting such ambiguous social signals is often difficult for people with ASD. The findings described in this thesis suggest that the cause may lie in disruptions of the ability to anticipate sensory stimuli.
    Future research should determine whether people with ASD also have more difficulty anticipating sensory stimuli and processing unexpected disruptions in other sensory domains. Besides increasing scientific knowledge about sensory information processing in ASD, further research into the neural mechanisms of sensory anticipation could potentially yield an electrophysiological marker for ASD that can be used as a diagnostic tool. Especially for people whose behavioural characteristics are not always easy to assess, such a biomarker could serve as an objective measurement instrument in clinical practice.

    How does the brain extract acoustic patterns? A behavioural and neural study

    In complex auditory scenes, the brain exploits statistical regularities to group sound elements into streams. Previous studies using tones that transition from being randomly drawn to regularly repeating have highlighted a network of brain regions involved in this process of regularity detection, including auditory cortex (AC) and hippocampus (HPC; Barascud et al., 2016). In this thesis, I seek to understand how neurons within AC and HPC detect and maintain a representation of deterministic acoustic regularity. I trained ferrets (n = 6) on a GO/NO-GO task to detect the transition from a random sequence of tones to a repeating pattern of tones, with increasing pattern lengths (3, 5 and 7 tones). All animals performed significantly above chance, with longer reaction times and declining performance as the pattern length increased. During performance of the behavioural task, or during passive listening, I recorded from primary and secondary fields of AC with multi-electrode arrays (behaving: n = 3), or from AC and HPC using Neuropixels probes (behaving: n = 1; passive: n = 1). In the local field potential, I identified no differences in the evoked response between presentations of random and regular sequences. Instead, during regularity I observed significant increases in oscillatory power at the rate of the repeating pattern and decreases at the tone presentation rate. Neurons in AC, across the population, showed higher firing with more repetitions of the pattern and for shorter pattern lengths. Single units within AC showed higher precision in their firing when responding to their best frequency during regularity. Neurons in both AC and HPC entrained to the pattern rate during presentation of the regular sequence compared to the random sequence. Lastly, the development of an optogenetic approach to inactivate AC in the ferret paves the way for future work to probe the causal involvement of these brain regions.
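    The oscillatory-power result has a simple spectral reading: with, say, 5 tones/s and a 5-tone pattern, entrainment predicts LFP power at the 1 Hz pattern rate during regularity. A sketch of that measurement on synthetic data (the rates and the signal are illustrative; this is not the thesis's analysis pipeline):

```python
import numpy as np

fs = 500.0                              # LFP sampling rate (Hz)
tone_rate, pattern_len = 5.0, 5         # 5 pips/s, 5-tone repeating pattern
pattern_rate = tone_rate / pattern_len  # -> 1 Hz

rng = np.random.default_rng(3)
t = np.arange(0, 20, 1 / fs)
# Synthetic LFP: entrainment at the pattern rate buried in noise.
lfp = 0.5 * np.sin(2 * np.pi * pattern_rate * t) + rng.normal(0.0, 1.0, t.size)

power = np.abs(np.fft.rfft(lfp)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f_target: float) -> float:
    """Spectral power in the bin nearest the target frequency."""
    return power[np.argmin(np.abs(freqs - f_target))]

print(f"power at pattern rate ({pattern_rate:.0f} Hz): {power_at(pattern_rate):.3g}")
print(f"power at tone rate    ({tone_rate:.0f} Hz): {power_at(tone_rate):.3g}")
```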

    When temporal prediction errs: ERP responses to delayed action-feedback onset

    Sensory suppression effects observed in electroencephalography (EEG) index successful predictions of the type and timing of self-generated sensory feedback. However, it is unclear how precise the timing prediction of sensory feedback is, and how temporal delays between an action and its sensory feedback affect perception. The current study investigated how prediction errors induced by delaying tone onset affect the processing of sensory feedback in audition. Participants listened to self-generated (via button press) or externally generated tones. Self-generated tones were presented either without delay or with various delays (50, 100, or 250 ms; in 30% of trials). Comparing listening to externally generated and self-generated tones revealed action-related P50 amplitude suppression for tones presented immediately or 100 ms after the button press. Subsequent ERP responses were more sensitive to the type of delay. Whereas the comparison of actual and predicted sensory feedback (N1) tolerated temporal uncertainty up to 100 ms, P2 suppression was modulated by delay in a graded manner: suppression decreased as the sensory feedback delay increased. Self-generated tones occurring 250 ms after the button press additionally elicited an enhanced N2 response. These findings suggest functionally dissociable processes within the forward model that are affected by the timing of sensory feedback to self-action: relative tolerance of temporal delay in the P50 and N1, confirming previous results, but increased sensitivity in the P2. Further, they indicate that temporal prediction errors are treated differently by the auditory system: only delays that fall outside a temporal integration window (∼100 ms) impact the conscious detection of altered sensory feedback.
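    A sketch of the graded-suppression logic: suppression can be taken as the external-minus-self amplitude difference for a component, computed at each feedback delay (the delays are from the abstract; the amplitude values are hypothetical):

```python
# Hypothetical P2 amplitudes (µV) for externally generated tones and for
# self-generated tones at each feedback delay (ms); delays from the abstract.
external_p2 = 6.0
self_p2 = {0: 3.0, 50: 3.8, 100: 4.5, 250: 5.5}

for delay, amp in self_p2.items():
    suppression = external_p2 - amp  # smaller value = weaker suppression
    print(f"delay {delay:>3} ms: P2 suppression = {suppression:.1f} µV")
```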

    Error Signals from the Brain: 7th Mismatch Negativity Conference

    The 7th Mismatch Negativity Conference presents the state of the art in methods, theory, and application (basic and clinical research) of the MMN (and related error signals of the brain). Moreover, there will be two pre-conference workshops: one on the design of MMN studies and the analysis and interpretation of MMN data, and one on the visual MMN (with 20 presentations). There will be more than 40 presentations on hot topics of MMN grouped into thirteen symposia, and about 130 poster presentations. Keynote lectures by Kimmo Alho, Angela D. Friederici, and Israel Nelken will round off the program by covering topics related to and beyond MMN.