
    Role of the cerebellum in adaptation to delayed action effects

    Actions are typically associated with sensory consequences. For example, knocking at a door results in predictable sounds. These self-initiated sensory stimuli are known to elicit smaller cortical responses than passively presented stimuli; for example, the early auditory evoked magnetic fields known as the M100 and M200 components are attenuated. Current models implicate the cerebellum in predicting the sensory consequences of our actions, but causal evidence is largely missing. In this study, we introduced a constant delay of 100 ms between actions and action-associated sounds and recorded magnetoencephalography (MEG) data as participants adapted to the delay. We found an increase in the attenuation of the M100 component over time for self-generated sounds, indicating cortical adaptation to the introduced delay. In contrast, no change in M200 attenuation was found. Interestingly, disrupting cerebellar activity via transcranial magnetic stimulation (TMS) abolished the adaptation of M100 attenuation, while the M200 attenuation reversed to an M200 enhancement. Our results provide causal evidence for the involvement of the cerebellum in adapting to delayed action effects, and thus in the prediction of the sensory consequences of our actions.
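The attenuation measure described above amounts to comparing evoked-response amplitudes between passive and self-generated conditions. A minimal sketch, assuming invented amplitude values (the function name and numbers are illustrative, not taken from the study):

```python
import numpy as np

# Illustrative M100 peak amplitudes (arbitrary units) per trial.
# These numbers are invented for the sketch, not the study's data.
passive = np.array([52.0, 48.5, 50.2, 49.1])
self_generated = np.array([40.3, 38.9, 41.0, 39.5])

def attenuation(passive_amps, self_amps):
    """Attenuation index: mean passively evoked response minus mean
    self-generated response; larger values mean stronger suppression."""
    return passive_amps.mean() - self_amps.mean()

print(attenuation(passive, self_generated))
```

Adaptation to a delayed action effect would then show up as this index growing over successive trial blocks for the self-generated condition.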

    Role of N-methyl-D-aspartate receptors in action-based predictive coding deficits in schizophrenia

    Published in final edited form as: Biol Psychiatry. 2017 March 15; 81(6): 514–524. doi:10.1016/j.biopsych.2016.06.019. BACKGROUND: Recent theoretical models of schizophrenia posit that dysfunction of the neural mechanisms subserving predictive coding contributes to symptoms and cognitive deficits, and this dysfunction is further posited to result from N-methyl-D-aspartate glutamate receptor (NMDAR) hypofunction. Previously, by examining auditory cortical responses to self-generated speech sounds, we demonstrated that predictive coding during vocalization is disrupted in schizophrenia. To test the hypothesized contribution of NMDAR hypofunction to this disruption, we examined the effects of the NMDAR antagonist ketamine on predictive coding during vocalization in healthy volunteers and compared them with the effects of schizophrenia. METHODS: In two separate studies, the N1 component of the event-related potential elicited by speech sounds during vocalization (talk) and passive playback (listen) was compared to assess the degree of N1 suppression during vocalization, a putative measure of auditory predictive coding. In the crossover study, 31 healthy volunteers completed two randomly ordered test days, a saline day and a ketamine day. Event-related potentials during the talk/listen task were obtained before infusion and during infusion on both days, and N1 amplitudes were compared across days. In the case-control study, N1 amplitudes from 34 schizophrenia patients and 33 healthy control volunteers were compared. RESULTS: N1 suppression to self-produced vocalizations was significantly and similarly diminished by ketamine (Cohen's d = 1.14) and schizophrenia (Cohen's d = 0.85). CONCLUSIONS: Disruption of NMDARs causes dysfunction in predictive coding during vocalization in a manner similar to the dysfunction observed in schizophrenia patients, consistent with the theorized contribution of NMDAR hypofunction to predictive coding deficits in schizophrenia. This work was supported by AstraZeneca (investigator-initiated study, DHM), National Institute of Mental Health Grant Nos. R01 MH-58262 (to JMF) and T32 MH089920 (to NSK), Yale Center for Clinical Investigation Grant No. UL1RR024139 (to JHK), and US National Institute on Alcohol Abuse and Alcoholism Grant No. P50AA012879 (to JHK).
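The N1 suppression measure and the reported effect sizes can be sketched as follows. The amplitude values and function names are hypothetical, chosen only to illustrate the computation, not drawn from the study's data:

```python
import numpy as np

# Hypothetical N1 peak amplitudes (µV; N1 is negative-going) for the
# talk and listen conditions -- not the study's actual data.
listen = np.array([-6.1, -5.8, -6.4, -5.9, -6.2])
talk = np.array([-3.9, -4.2, -3.7, -4.0, -4.1])

def n1_suppression(talk_amps, listen_amps):
    """N1 suppression: listen-condition N1 minus talk-condition N1.
    A less negative talk N1 yields a negative difference, i.e. the
    self-generated response is suppressed relative to passive playback."""
    return listen_amps.mean() - talk_amps.mean()

def cohens_d(a, b):
    """Pooled-standard-deviation Cohen's d for two independent samples."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled
```

Diminished suppression under ketamine or in patients would correspond to this difference shrinking toward zero; Cohen's d then quantifies how large that group or drug effect is relative to the pooled variability.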

    Using transcranial direct-current stimulation (tDCS) to understand cognitive processing

    Noninvasive brain stimulation methods are becoming increasingly common tools in the kit of the cognitive scientist. In particular, transcranial direct-current stimulation (tDCS) is showing great promise as a tool to causally manipulate the brain and understand how information is processed. The popularity of this method of brain stimulation rests on the fact that it is safe and inexpensive, its effects are long lasting, and it can increase the likelihood that neurons will fire near one electrode while decreasing the likelihood that neurons will fire near another. However, this method of manipulating the brain to draw causal inferences is not without complications. Because tDCS methods continue to be refined and are not yet standardized, the literature contains reports with some striking inconsistencies. Primary among the complications of the technique is that tDCS uses two or more electrodes to pass current, and all of these electrodes affect the tissue underneath them. In this tutorial, we share what we have learned about using tDCS to manipulate how the brain perceives, attends, remembers, and responds to information from our environment. Our goal is to provide a starting point for new users of tDCS and to spur discussion of the standardization of methods to enhance replicability. The authors declare that they had no conflicts of interest with respect to their authorship or the publication of this article. This work was supported by grants from the National Institutes of Health (R01-EY019882, R01-EY025272, P30-EY08126, F31-MH102042, and T32-EY007135).

    Post-training load-related changes of auditory working memory: An EEG study

    Working memory (WM) refers to the temporary retention and manipulation of information, and its capacity is highly susceptible to training. Yet the neural mechanisms that allow for increased performance under demanding conditions are not fully understood. We expected that post-training efficiency in WM performance modulates neural processing during high-load tasks. We tested this hypothesis, using electroencephalography (EEG) (N = 39), by comparing source-space spectral power of healthy adults performing low- and high-load auditory WM tasks. Prior to the assessment, participants either underwent a modality-specific auditory WM training, a modality-irrelevant tactile WM training, or no training (active control). After modality-specific training, participants showed higher behavioral performance compared to the control group. EEG data analysis revealed general effects of WM load, across all training groups, in the theta-, alpha-, and beta-frequency bands. With increased load, theta-band power increased over frontal areas and decreased over parietal areas. Centro-parietal alpha-band power and central beta-band power decreased with load. Interestingly, in the high-load condition a tendency toward reduced beta-band power in the right medial temporal lobe was observed in the modality-specific WM training group compared to the modality-irrelevant and active control groups. Our finding that WM processing during the high-load condition changed after modality-specific WM training, showing reduced beta-band activity in voice-selective regions, possibly indicates a more efficient maintenance of task-relevant stimuli. The general load effects suggest that WM performance at high load demands involves complementary mechanisms, combining a strengthening of task-relevant and a suppression of task-irrelevant processing.
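The band-limited power measures underlying this kind of analysis can be illustrated with a plain periodogram. The synthetic signal and the `band_power` helper below are assumptions made for the sketch; real pipelines use source-space estimates with multitaper or Welch methods:

```python
import numpy as np

fs = 250.0                       # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)    # 2 s of data, 500 samples
rng = np.random.default_rng(1)

# Synthetic single-channel "EEG": a 6 Hz theta component plus a weaker
# 20 Hz beta component plus noise (illustrative, not real data).
signal = (np.sin(2 * np.pi * 6 * t)
          + 0.5 * np.sin(2 * np.pi * 20 * t)
          + 0.1 * rng.standard_normal(t.size))

def band_power(x, fs, lo, hi):
    """Mean power spectral density within [lo, hi] Hz from a simple
    FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

theta = band_power(signal, fs, 4, 7)    # contains the 6 Hz component
beta = band_power(signal, fs, 13, 30)   # contains the 20 Hz component
```

Load or training effects like those reported above would appear as condition-wise or group-wise differences in such band-power values, computed per source region rather than per channel.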

    Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process, as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). We here tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between AV and VA maps at each time point (500 ms window post-stimulus) and then correlated with two alternative similarity model matrices: AVmaps=VAmaps vs. AVmaps≠VAmaps. The tRSA results favored the AVmaps≠VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
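The map-similarity step of such a tRSA can be sketched as a per-timepoint spatial correlation between AV and VA scalp topographies. The random maps below stand in for real ERP data, and all names are illustrative rather than the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 64, 100

# Stand-in multisensory ERP topographies (channels x time points) for
# auditory-leading (AV) and visual-leading (VA) conditions.
av_maps = rng.standard_normal((n_channels, n_times))
va_maps = rng.standard_normal((n_channels, n_times))

def spatial_similarity(av, va):
    """Pearson correlation between AV and VA scalp maps at each time
    point, yielding a time course of topographic similarity."""
    av_c = av - av.mean(axis=0)          # center each map across channels
    va_c = va - va.mean(axis=0)
    num = (av_c * va_c).sum(axis=0)
    den = np.sqrt((av_c ** 2).sum(axis=0) * (va_c ** 2).sum(axis=0))
    return num / den

sim = spatial_similarity(av_maps, va_maps)
# The resulting similarity time course would then be compared against the
# AVmaps=VAmaps and AVmaps≠VAmaps model matrices.
```

For these independent random maps the similarity hovers near zero, which is the pattern the AVmaps≠VAmaps model predicts for real data.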

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of central mechanisms underpinning tinnitus (TI) and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for TI classification. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in TI, evidence for the precise contribution from the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.

    When expectations do not reflect reality: do event-related-potential amplitudes for self-generated sounds reflect auditory prediction errors?

    The ability to anticipate upcoming situations prevents us from being overwhelmed by the vast number of stimuli we experience in our perceptual world. Predictions are numerous and varied, involving cognitive, motor, and sensory computations. Some specific predictions, of particular interest for the current work, help us distinguish stimuli generated by ourselves from environmentally caused stimuli. Numerous studies show that differences in the amplitude of N1 and P2, two ERP components in the EEG signal, appear to reflect the distinction between these types of stimuli. Nevertheless, predictions are not always reliable; sometimes they fail to represent the upcoming situation. Updating the cognitive models from which they arise is fundamental to avoid making the same error in the future. This is possible through the formation of prediction errors; as the term itself implies, such signals bring the error back to the areas of interest to allow a revision of the cognitive model. This pilot study aims to analyze the ERP variations linked to prediction errors after self-generated actions when the expectations of the subject are unfulfilled.

    Monitoring Self & World: A Novel Network Model of Hallucinations in Schizophrenia

    Schizophrenia (Sz) is a psychotic disorder characterized by multifaceted symptoms including hallucinations (i.e., vivid perceptions that occur in the absence of external stimuli). Auditory hallucinations are the most common type of hallucination in Sz; roughly 70 percent of Sz patients report hearing voices specifically (i.e., auditory verbal hallucinations). Prior functional magnetic resonance imaging (fMRI) studies have provided initial insights into the neural mechanisms underlying hallucinations, implicating an anatomically distributed network of cortical (sensory, insular, and inferior frontal cortex) and subcortical (hippocampal, striatal) regions. Yet it remains unclear how this distributed network gives rise to hallucinations impacting different sensory modalities. The insular cortex is a central hub of a larger functional network called the salience network. By regulating default-mode network activity (associated with internally directed thought) and fronto-parietal network activity (associated with externally directed attention), the salience network is able to orient our attention to the most pressing matters (e.g., bodily pain, environmental threats). Abnormal salience monitoring is thought to underlie Sz symptoms; improper monitoring of salient internal events (e.g., auditory-verbal imagery, visual images) plausibly generates hallucinations, but no prior study has directly tested this hypothesis by exploring how sensory networks interact with the salience network in the context of hallucinations in Sz. This dissertation project combined exploratory and hypothesis-driven approaches to delineate functional neural markers of Sz symptoms. The first analysis explored the relationship between Sz symptom expression and altered functional communication between salience and default-mode networks. The second analysis explored fMRI signal fluctuations associated with modality-dependent (e.g., auditory, visual) hallucinations. The final analysis tested the hypothesis that abnormal functional communication between salience and sensory (e.g., auditory, visual) networks underlies hallucinations in Sz. The results suggest that there are three key players in the generation of auditory hallucinations in Sz: auditory cortex, hippocampus, and salience network. A novel functional network model of auditory hallucinations is proposed to account for these findings.