
    Impairment in predictive processes during auditory mismatch negativity in ScZ: evidence from event-related fields

    Patients with schizophrenia (ScZ) show pronounced dysfunctions in auditory perception, but the underlying mechanisms as well as the localization of the deficit remain unclear. To address these questions, the current study examined whether alterations in the neuromagnetic mismatch negativity (MMNm) in ScZ patients could involve an impairment in sensory predictions in local sensory and higher auditory areas. Using a whole-head MEG approach, we investigated the MMNm as well as P300m and N100m amplitudes during a hierarchical auditory novelty paradigm in 16 medicated ScZ patients and 16 controls. In addition, responses to omitted sounds were investigated, allowing for a critical test of the predictive coding hypothesis. Source localization was performed to identify the generators of the MMNm, omission responses and the P300m. Clinical symptoms were assessed with the Positive and Negative Syndrome Scale. Event-related fields (ERFs) to standard sounds were intact in ScZ patients. However, the ScZ group showed a reduction in the amplitude of the MMNm during both local (within trials) and global (across trials) conditions, as well as an absent P300m at the global level. Importantly, responses to sound omissions were reduced in ScZ patients and overlapped in both latency and generators with the MMNm sources. Thus, our data suggest that auditory dysfunctions in ScZ reflect impaired predictive processes, with deficits in both automatic and conscious detection of auditory regularities.
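    As a rough illustration of how a mismatch response of this kind is typically quantified, the sketch below computes a deviant-minus-standard difference waveform and extracts its peak. It is a minimal sketch, not the study's analysis pipeline; the array shapes, sampling rate, simulated data, and the 100-250 ms search window are assumptions for illustration.

```python
# Minimal sketch (not the study's pipeline): quantify a mismatch response as
# the deviant-minus-standard difference waveform of trial-averaged responses.
import numpy as np

sfreq = 1000.0                                   # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1.0 / sfreq)        # epoch from -100 to +500 ms

# Single-channel epochs, shape (n_trials, n_times); simulated noise here.
rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 1e-13, (200, times.size))
deviant_epochs = rng.normal(0.0, 1e-13, (50, times.size))

standard_erf = standard_epochs.mean(axis=0)      # average over trials
deviant_erf = deviant_epochs.mean(axis=0)
mismatch = deviant_erf - standard_erf            # MMN(m) difference waveform

# Peak amplitude and latency within a typical MMN window (assumed 100-250 ms).
window = (times >= 0.10) & (times <= 0.25)
peak = np.argmax(np.abs(mismatch[window]))
print(f"peak {mismatch[window][peak]:.2e} T at {times[window][peak] * 1000:.0f} ms")
```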

    Test-retest reliability of the magnetic mismatch negativity response to sound duration and omission deviants

    Mismatch negativity (MMN) is a neurophysiological measure of auditory novelty detection that could serve as a translational biomarker of psychiatric disorders such as schizophrenia. However, the replicability of its magnetoencephalographic (MEG) counterpart (MMNm) has been insufficiently addressed. In the current study, the test-retest reliability of the MMNm response to both duration and omission deviants was evaluated over two MEG sessions in 16 healthy adults. MMNm amplitudes and latencies were obtained at both the sensor and the source level using a cortically constrained minimum-norm approach. Intraclass correlations (ICC) were derived to assess the stability of MEG responses over time. In addition, signal-to-noise ratios (SNR) and within-subject statistics were obtained in order to determine MMNm detectability in individual participants. ICCs revealed robust values at both the sensor and the source level for duration and omission MMNm amplitudes (ICC = 0.81-0.90), in particular in the right hemisphere, while moderate to strong values were obtained for duration MMNm and omission MMNm peak latencies (ICC = 0.74-0.88). The duration MMNm was robustly identified in individual participants with high SNR, whereas omission MMNm responses were only observed in half of the participants. Our data indicate that MMNm responses to unexpected duration changes and omitted sounds are highly reproducible, providing support for the use of MEG parameters in basic and clinical research.
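    The abstract does not specify which ICC form was used; a common choice for two-session test-retest data is the two-way, single-measurement consistency coefficient, ICC(3,1). The sketch below shows one way such a value might be computed; the participant count and simulated amplitudes are illustrative only.

```python
# Hedged sketch (not the authors' code): ICC(3,1) for data arranged as one row
# per participant and one column per MEG session.
import numpy as np

def icc_consistency(x):
    """Two-way mixed-effects, single-measurement consistency ICC, i.e. ICC(3,1)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape                                        # subjects, sessions
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Illustrative MMNm amplitudes for 16 participants measured in two sessions.
rng = np.random.default_rng(1)
true_amplitude = rng.normal(50.0, 10.0, 16)
sessions = np.column_stack([true_amplitude + rng.normal(0.0, 4.0, 16),
                            true_amplitude + rng.normal(0.0, 4.0, 16)])
print(f"ICC(3,1) = {icc_consistency(sessions):.2f}")
```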

    “What” and “when” predictions modulate auditory processing in a mutually congruent manner

    Introduction: Extracting regularities from ongoing stimulus streams to form predictions is crucial for adaptive behavior. Such regularities exist in terms of the content of the stimuli and their timing, both of which are known to interactively modulate sensory processing. In real-world stimulus streams such as music, regularities can occur at multiple levels, both in terms of contents (e.g., predictions relating to individual notes vs. their more complex groups) and timing (e.g., pertaining to timing between intervals vs. the overall beat of a musical phrase). However, it is unknown whether the brain integrates predictions in a manner that is mutually congruent (e.g., whether “beat” timing predictions selectively interact with “what” predictions falling on pulses which define the beat), and whether integrating predictions in different timing conditions relies on dissociable neural correlates. Methods: To address these questions, our study manipulated “what” and “when” predictions at different levels – (local) interval-defining and (global) beat-defining – within the same stimulus stream, while neural activity was recorded using electroencephalography (EEG) in participants (N = 20) performing a repetition detection task. Results: Our results reveal that temporal predictions based on beat or interval timing modulated mismatch responses to violations of “what” predictions occurring at the predicted time points, and that these modulations were shared between the two types of temporal predictions in terms of the spatiotemporal distribution of EEG signals. Effective connectivity analysis using dynamic causal modeling showed that the integration of “what” and “when” predictions selectively increased connectivity at relatively late cortical processing stages, between the superior temporal gyrus and the fronto-parietal network. Discussion: Taken together, these results suggest that the brain integrates different predictions with a high degree of mutual congruence, yet within a shared and distributed cortical network. This finding contrasts with recent studies indicating separable mechanisms for beat-based and memory-based predictive processing.

    Representation of statistical sound properties in human auditory cortex

    The work carried out in this doctoral thesis investigated the representation of statistical sound properties in human auditory cortex. It addressed four key aspects in auditory neuroscience: the representation of different analysis time windows in auditory cortex; mechanisms for the analysis and segregation of auditory objects; information-theoretic constraints on pitch sequence processing; and the analysis of local and global pitch patterns. The majority of the studies employed a parametric design in which the statistical properties of a single acoustic parameter were altered along a continuum, while keeping other sound properties fixed. The thesis is divided into four parts. Part I (Chapter 1) examines principles of anatomical and functional organisation that constrain the problems addressed. Part II (Chapter 2) introduces approaches to digital stimulus design, principles of functional magnetic resonance imaging (fMRI), and the analysis of fMRI data. Part III (Chapters 3-6) reports five experimental studies. Study 1 controlled the spectrotemporal correlation in complex acoustic spectra and showed that activity in auditory association cortex increases as a function of spectrotemporal correlation. Study 2 demonstrated a functional hierarchy of the representation of auditory object boundaries and object salience. Studies 3 and 4 investigated cortical mechanisms for encoding entropy in pitch sequences and showed that the planum temporale acts as a computational hub, requiring more computational resources for sequences with high entropy than for those with high redundancy. Study 5 provided evidence for a hierarchical organisation of local and global pitch pattern processing in neurologically normal participants. Finally, Part IV (Chapter 7) concludes with a general discussion of the results and future perspectives.
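    To make the entropy manipulation of Studies 3 and 4 concrete, the snippet below computes first-order Shannon entropy over the pitch distribution of a sequence, contrasting a high-entropy with a high-redundancy example. This is only an illustration of the quantity involved; the thesis's actual stimuli and entropy estimator are not reproduced here.

```python
# Illustration only: first-order Shannon entropy (in bits) of a pitch sequence.
import numpy as np

def shannon_entropy(sequence):
    """Entropy of the empirical distribution of pitches in the sequence."""
    _, counts = np.unique(sequence, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
high_entropy = rng.integers(60, 72, 100)          # pitches drawn uniformly (MIDI numbers)
high_redundancy = np.tile([60, 64, 67, 64], 25)   # repeating four-note pattern

print(shannon_entropy(high_entropy))     # close to log2(12) ≈ 3.58 bits
print(shannon_entropy(high_redundancy))  # exactly 1.5 bits (three pitches, one repeated)
```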

    Processing of Abstract Rule Violations in Audition

    The ability to encode rules and to detect rule-violating events outside the focus of attention is vital for adaptive behavior. Our brain recordings reveal that violations of abstract auditory rules are processed even when the sounds are unattended. When subjects performed a task related to the sounds but not to the rule, rule violations impaired task performance and activated a network involving supratemporal, parietal and frontal areas, although none of the subjects acquired explicit knowledge of the rule or became aware of rule violations. When subjects tried to behaviorally detect rule violations, the brain's automatic violation detection facilitated intentional detection. This shows the brain's capacity for abstraction – an important cognitive function necessary to model the world. Our study provides the first evidence for the task-independence (i.e., automaticity) of this ability to encode abstract rules and for its immediate consequences for subsequent mental processes.

    Recursive music elucidates neural mechanisms supporting the generation and detection of melodic hierarchies

    The ability to generate complex hierarchical structures is a crucial component of human cognition, which can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not specifically target the representation of the rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: the Recursive rule, generating new hierarchical levels at each step; the Iterative rule, adding tones within a fixed hierarchical level without generating new levels; and a control (Repetition) rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules when participants imagined the fourth step after listening to the third (generation phase), and when participants listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step using the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of the hippocampus and inferior frontal gyrus may reflect the processing of unexpected melodic sequences, rather than hierarchy generation per se.
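    The contrast between the rules can be sketched informally: a recursive step embeds a new sub-level inside every existing element, whereas an iterative step only appends tones at the existing lowest level. The toy code below illustrates that structural difference; the specific intervals and seed tones are invented for the example and are not the study's stimuli.

```python
# Toy sketch of the structural contrast (not the actual stimulus rules).

def recursive_step(melody):
    # Every tone spawns a nested group (itself plus a tone a major third above),
    # so each application adds a new hierarchical level.
    return [[tone, tone + 4] if isinstance(tone, int) else recursive_step(tone)
            for tone in melody]

def iterative_step(melody):
    # Append one tone at the end of the same flat level; no new levels appear.
    return melody + [melody[-1] + 4]

seed = [60, 67]                 # two starting tones (MIDI pitches, illustrative)
recursive, iterative = seed, seed
for _ in range(3):              # three further generation steps
    recursive = recursive_step(recursive)
    iterative = iterative_step(iterative)

print(recursive)   # nested lists: the hierarchy deepens at every step
print(iterative)   # flat list: same single level, just more tones
```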

    The state of tranquility: Subjective perception is shaped by contextual modulation of auditory connectivity

    In this study, we investigated brain mechanisms for the generation of subjective experience from objective sensory inputs. Our experimental construct was subjective tranquility. Tranquility is a mental state more likely to occur in the presence of objective sensory inputs that arise from natural features in the environment. We used functional magnetic resonance imaging to examine the neural response to scenes that were visually distinct (beach images vs. freeway images) and experienced as tranquil (beach) or non-tranquil (freeway). Both sets of scenes had the same auditory component, because waves breaking on a beach and vehicles moving on a freeway can produce similar auditory spectral and temporal characteristics, perceived as a constant roar. Compared with scenes experienced as non-tranquil, we found that subjectively tranquil scenes were associated with significantly greater effective connectivity between the auditory cortex and the medial prefrontal cortex, a region implicated in the evaluation of mental states. Similarly enhanced connectivity was also observed between the auditory cortex and the posterior cingulate gyrus, temporoparietal cortex and thalamus. These findings demonstrate that visual context can modulate connectivity of the auditory cortex with regions implicated in the generation of subjective states. Importantly, this effect arises under conditions of identical auditory input. Hence, the same sound may be associated with different percepts reflecting varying connectivity between the auditory cortex and other brain regions. This suggests that subjective experience is more closely linked to the connectivity state of the auditory cortex than to its basic sensory inputs.