22 research outputs found

    Explaining flexible continuous speech comprehension from individual motor rhythms

    When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (Digit Span) and higher linguistic, model-based sentence predictability, particularly at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.

    Data-Driven Classification of Spectral Profiles Reveals Brain Region-Specific Plasticity in Blindness

    Congenital blindness has been shown to result in behavioral adaptation and neuronal reorganization, but the underlying neuronal mechanisms are largely unknown. Brain rhythms are characteristic of anatomically defined brain regions and provide a putative mechanistic link to cognitive processes. In a novel approach using magnetoencephalography resting-state data of congenitally blind and sighted humans, deprivation-related changes in spectral profiles were mapped to the cortex using clustering and classification procedures. Altered spectral profiles in visual areas suggest changes in visual alpha-gamma band inhibitory-excitatory circuits. Remarkably, spectral profiles were also altered in auditory and right frontal areas, showing increased power in theta-to-beta frequency bands in blind compared with sighted individuals, possibly related to adaptive auditory and higher cognitive processing. Moreover, occipital alpha correlated with microstructural white matter properties extending bilaterally across posterior parts of the brain. We provide evidence that visual deprivation selectively modulates spectral profiles, possibly reflecting structural and functional adaptation.
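
    The clustering step could, for illustration, look roughly like the following minimal sketch, which assumes source-level MEG time series are already available and uses scipy and scikit-learn in place of the study's actual clustering and classification procedures; all parameter choices here are illustrative.

        import numpy as np
        from scipy.signal import welch
        from sklearn.cluster import KMeans

        def cluster_spectral_profiles(region_ts, fs, n_clusters=5):
            """Cluster per-region power spectra into spectral profiles (sketch).
            region_ts: array (n_regions, n_samples) of source-level MEG data."""
            # Per-region power spectral density (Welch, 2-s windows).
            freqs, psd = welch(region_ts, fs=fs, nperseg=int(2 * fs))
            keep = (freqs >= 1) & (freqs <= 45)  # restrict to 1-45 Hz
            # Normalize to relative power so profiles reflect spectral
            # shape rather than overall amplitude.
            profiles = psd[:, keep] / psd[:, keep].sum(axis=1, keepdims=True)
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(profiles)
            return freqs[keep], profiles, labels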

    Cortical and behavioural tracking of rhythm in music: Effects of pitch predictability, enjoyment, and expertise

    Cortical tracking of stimulus features (such as the envelope) is a crucial, tractable neural mechanism that allows us to investigate how we process continuous music. Here we tested whether cortical and behavioural tracking of the beat, typically related to rhythm processing, are modulated by pitch predictability. In two experiments (n=20, n=52), participants’ ability to tap along to the beat of musical sequences was measured for tonal (high pitch predictability) and atonal (low pitch predictability) music. In Experiment 1, we additionally measured participants’ EEG and analysed cortical tracking of the acoustic envelope and of pitch surprisal (using IDyOM). In both experiments, finger-tapping performance was better in the tonal than in the atonal condition, indicating a positive effect of pitch predictability on behavioural rhythm processing. Neural data revealed that the acoustic envelope was tracked more strongly while listening to atonal than to tonal music, potentially reflecting listeners’ violated pitch expectations. Our findings show that cortical envelope tracking, beyond reflecting musical rhythm processing, is modulated by pitch predictability (as well as by musical expertise and enjoyment). Stronger cortical surprisal tracking was linked to overall worse envelope tracking and worse finger-tapping performance for atonal music. Specifically, the low pitch predictability of atonal music seems to draw on attentional resources, resulting in a reduced ability to follow the rhythm behaviourally. Overall, cortical envelope and surprisal tracking were differentially related to behaviour for tonal and atonal music, likely reflecting differential processing under conditions of high and low predictability. Taken together, our results show diverse effects of pitch predictability on musical rhythm processing.
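
    For orientation, the acoustic envelope referred to above is commonly extracted as the low-pass-filtered magnitude of the analytic signal; a minimal sketch using scipy follows, noting that the study's exact envelope definition may differ and the cutoff frequency is an assumption.

        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        def amplitude_envelope(audio, fs, cutoff_hz=30.0):
            """Broadband Hilbert envelope, low-pass filtered (sketch)."""
            env = np.abs(hilbert(audio))  # magnitude of the analytic signal
            b, a = butter(4, cutoff_hz / (fs / 2), btype='low')
            return filtfilt(b, a, env)    # zero-phase smoothing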

    Region-specific endogenous brain rhythms and their role for speech and language

    Brain rhythms at different timescales are observed ubiquitously across cortex. Despite this ubiquity, individual brain areas can be characterized by ‘spectral profiles’, which reflect distinct patterns of endogenous brain rhythms. Crucially, endogenous brain rhythms have often been explicitly or implicitly related to perceptual and cognitive functions. Regarding language, a vast body of research investigates the role of brain rhythms in speech processing. In particular, lower-level processes, such as speech segmentation and consecutive syllable encoding, and the hemispheric lateralization of such processes, have been related to auditory cortex brain rhythms in the theta and gamma range and explained by neural oscillatory models. Other brain rhythms, particularly delta and beta, have been related to prosodic processing (delta) but also to higher-level language processing, including phrasal and sentential processing. Delta and beta brain rhythms have also been related to predictions from the motor cortex, emphasizing the tight link between production and perception. More recently, neural oscillatory models have been extended to include different levels of language processing. Attempts to directly relate brain rhythms observed during task-related processing to endogenous brain rhythms are sparse. In summary, many questions remain: the functional relevance of brain rhythms for speech and language continues to be a subject of heated discussion, and research that systematically links endogenous brain rhythms to specific computations and possible algorithmic implementations is rare.

    TenseMusic: An automatic prediction model for musical tension.

    The perception of tension and release dynamics constitutes one of the essential aspects of music listening. However, modeling musical tension to predict listeners' perception has been a challenge for researchers. Seminal work demonstrated that tension is reported consistently by listeners and can be accurately predicted from a discrete set of musical features, combined into a weighted sum of slopes reflecting their joint dynamics over time. However, previous modeling approaches lack an automatic pipeline for feature extraction that would make them widely accessible to researchers in the field. Here, we present TenseMusic: an open-source automatic predictive tension model that operates with musical audio as its only input. Using state-of-the-art music information retrieval (MIR) methods, it automatically extracts a set of six features (i.e., loudness, pitch height, tonal tension, roughness, tempo, and onset frequency) to use as predictors of musical tension. The algorithm was optimized using Lasso regression to best predict behavioral tension ratings collected on 38 Western classical musical pieces. Its performance was then tested by assessing the correlation between predicted tension and unseen continuous behavioral tension ratings, yielding large correlations between ratings and predictions (mean r = .60 across all pieces). We hope that providing the research community with this well-validated open-source tool for predicting musical tension will motivate further work in music cognition and help elucidate the neural and cognitive correlates of tension dynamics across musical genres and cultures.
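
    As an illustration of such an automatic MIR front end, the following minimal sketch extracts a subset of the six predictors with librosa; the actual TenseMusic pipeline and its feature definitions may differ, and roughness and tonal tension, which require specialized models, are omitted.

        import librosa

        def extract_tension_features(audio_path):
            """Extract a subset of the six tension predictors (sketch)."""
            y, sr = librosa.load(audio_path)
            # Loudness proxy: frame-wise RMS energy.
            loudness = librosa.feature.rms(y=y)[0]
            # Pitch height: fundamental-frequency track via pYIN.
            f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                    fmax=librosa.note_to_hz('C7'), sr=sr)
            # Global tempo estimate and onset times (onset frequency can
            # be derived as onsets per unit time).
            tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
            onsets = librosa.onset.onset_detect(y=y, sr=sr, units='time')
            # Roughness and tonal tension require specialized models and
            # are omitted from this sketch.
            return {'loudness': loudness, 'pitch_height': f0,
                    'tempo': tempo, 'onset_times': onsets}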

    Dynamics of Functional Networks for Syllable and Word-Level Processing

    Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigate lexical and sublexical word-level processing and their interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s, carrying lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words). Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing; and (ii) processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior temporal, middle temporal, and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word-level and acoustic syllable-level processing was inconclusive. Decreases in syllable tracking (cerebro-acoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared with all other conditions, but not when conditions were compared separately. The data provide experimental insight into how subtle syllable-to-syllable transition information is and how sensitive word-level processing is to it.
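
    For orientation, cerebro-acoustic coherence is typically computed as the spectral coherence between a cortical time series and the speech envelope; a minimal scipy sketch follows, bearing in mind that the study's actual source-level, frequency-tagged analysis is more involved and all parameters here are illustrative.

        from scipy.signal import coherence

        def cerebro_acoustic_coherence(neural, envelope, fs, fmax=10.0):
            """Coherence between a neural signal and the speech envelope
            (sketch). Both inputs are 1-D, time-aligned, sampled at fs Hz."""
            freqs, coh = coherence(neural, envelope, fs=fs, nperseg=int(4 * fs))
            keep = freqs <= fmax  # low frequencies carry syllabic rhythm
            return freqs[keep], coh[keep]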

    Tension model generation flow.

    Displayed is a schematic representation of the model, including the feature extraction, the tension prediction involving an attentional and a memory window, and the global integration of the feature trends. A: The features are extracted automatically using music information retrieval methods in Python. B: To predict tension, the feature time series are divided into sliding attentional windows (Step 1), and the slope of every feature is extracted in each attentional window (Step 2). Each slope is then integrated with the directly preceding slope using memory windows (Step 3). If the direction of the slope in the memory window matches the direction of the slope in the attentional window, the slope is amplified by a factor of β = 5. C: Tension is predicted from the weighted and summed smoothed feature trends.
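
    The windowed slope logic of panel B can be sketched as follows; the window length, the slope estimator, and the per-feature weights are illustrative assumptions, while the amplification factor β = 5 is taken from the caption above.

        import numpy as np

        def predict_tension(features, weights, win=50, beta=5.0):
            """Sliding-window slope integration (sketch).
            features: dict of equal-length 1-D feature time series
            weights:  dict of per-feature weights (e.g., from Lasso)"""
            n = len(next(iter(features.values())))
            tension = np.zeros(n - win)
            for name, series in features.items():
                # Steps 1-2: slope of the feature in each attentional window.
                slopes = np.array([np.polyfit(np.arange(win),
                                              series[t:t + win], 1)[0]
                                   for t in range(n - win)])
                # Step 3: amplify a slope when its direction matches the
                # slope in the preceding (memory) window.
                for t in range(1, len(slopes)):
                    if np.sign(slopes[t]) == np.sign(slopes[t - 1]) != 0:
                        slopes[t] *= beta
                # Weighted sum of the smoothed feature trends (panel C).
                tension += weights[name] * slopes
            return tension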

    Comparison between the mean tension ratings and the tension predictions from the optimal model configurations.

    Displayed are the tension predictions and the mean tension ratings for three example pieces taken from our sample. The mean tension ratings are displayed in black. Predictions from the time-scale model are plotted in dark green and predictions from the weighted model in bright green. The error bands show the standard error around the mean of the tension ratings. The mean tension ratings have been shifted by 4.5 seconds to account for the delay in reporting behavioral tension and to facilitate visual evaluation of the overlap between the curves.
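
    The same report-delay compensation can be applied before correlating ratings with predictions; a minimal numpy sketch follows, where the 4.5-second shift comes from the caption and the sampling rate of the rating time series is an assumption.

        import numpy as np

        def shifted_correlation(ratings, prediction, fs, lag_s=4.5):
            """Correlate ratings with predictions after shifting the ratings
            earlier by lag_s seconds to compensate for report delay (sketch)."""
            lag = int(round(lag_s * fs))
            return np.corrcoef(ratings[lag:],
                               prediction[:len(prediction) - lag])[0, 1]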