12,029 research outputs found

    Exploring Frequency-Dependent Brain Networks from Ongoing EEG Using Spatial ICA During Music Listening

    Recently, exploring brain activity through functional networks during naturalistic stimuli, especially music and video, has become an attractive challenge because of the low signal-to-noise ratio of the collected brain data. Although most efforts to explore the listening brain have relied on functional magnetic resonance imaging (fMRI) or sensor-level electro-/magnetoencephalography (EEG/MEG), little is known about how neural rhythms are involved in brain network activity under naturalistic stimulation. This study examined cortical oscillations by analyzing ongoing EEG and musical features during free music listening. We used a data-driven method that combined music information retrieval with spatial Fourier Independent Component Analysis (spatial Fourier-ICA) to probe the interplay between the spatial profiles and the spectral patterns of the brain networks that emerge during music listening. Correlation analysis was performed between the time courses of brain networks extracted from the EEG data and musical feature time series extracted from the music stimuli, yielding the musical feature-related oscillatory patterns of the listening brain. We found that the brain networks involved in musical feature processing were frequency-dependent. Musical feature time series, especially fluctuation centroid and key, were associated with increased beta activation in the bilateral superior temporal gyrus. Increased alpha oscillation in the bilateral occipital cortex emerged during music listening, consistent with the hypothesis of alpha-mediated functional suppression of task-irrelevant regions. We also observed increased delta-beta oscillatory activity in the prefrontal cortex associated with musical feature processing. Beyond these findings, the proposed method appears valuable for characterizing the large-scale frequency-dependent brain activity engaged in musical feature processing. Peer reviewed
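    The correlation step described above — relating a network time course from the ICA decomposition to a musical feature time series sampled at the same rate — reduces to a Pearson correlation. Below is a minimal NumPy sketch; the function name and the toy signals are invented for illustration and are not taken from the study.

```python
import numpy as np

def feature_network_correlation(network_tc, feature_tc):
    """Pearson correlation between a brain-network time course and a
    musical feature time series of the same length and sampling rate."""
    x = np.asarray(network_tc, float)
    y = np.asarray(feature_tc, float)
    x = x - x.mean()
    y = y - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Toy data: a "network" time course that tracks a feature
# (standing in for, e.g., fluctuation centroid) with added noise.
rng = np.random.default_rng(0)
feature = np.sin(np.linspace(0, 8 * np.pi, 200))
network = feature + 0.3 * rng.standard_normal(200)
r = feature_network_correlation(network, feature)
```

    In practice such correlations would be computed for every network/feature pair and then thresholded for significance, but the per-pair computation is just this.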

    ECoG high gamma activity reveals distinct cortical representations of lyrics passages, harmonic and timbre-related changes in a rock song

    Listening to music moves our minds and moods, stirring interest in its neural underpinnings. A multitude of compositional features drives the appeal of natural music. How such original music, where a composer's opus is not manipulated for experimental purposes, engages a listener's brain has not been studied until recently. Here, we report an in-depth analysis of two electrocorticographic (ECoG) data sets obtained over the left hemisphere in ten patients during presentation of either a rock song or a read-out narrative. First, the time courses of five acoustic features (intensity, presence/absence of vocals with lyrics, spectral centroid, harmonic change, and pulse clarity) were extracted from the audio tracks and found to be correlated with each other to varying degrees. In a second step, we uncovered the specific impact of each musical feature on ECoG high-gamma power (70–170 Hz) by calculating partial correlations to remove the influence of the other four features. In the music condition, the onset and offset of vocal lyrics in ongoing instrumental music was consistently identified within the group as the dominant driver of ECoG high-gamma power changes over temporal auditory areas, while concurrently subject-individual activation spots were identified for sound intensity, timbral, and harmonic features. The distinct cortical activations to vocal speech-related content embedded in instrumental music directly demonstrate that song integrated into instrumental music represents a distinct dimension of complex music. In contrast, in the speech condition, the full sound envelope was reflected in the high-gamma response rather than the onset or offset of the vocal lyrics. This demonstrates how the contributions of stimulus features that modulate the brain response differ across the two examples of a full-length natural stimulus, which suggests a context-dependent feature selection in the processing of complex auditory stimuli.
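    The partial-correlation step — correlating high-gamma power with one feature while removing the influence of the other features — can be sketched by regressing the confound time series out of both signals and correlating the residuals. A minimal NumPy sketch with synthetic data; all variable names and numbers are illustrative, not from the study.

```python
import numpy as np

def partial_corr(y, x, confounds):
    """Correlate y with x after regressing the confound time series
    out of both (ordinary least squares residuals)."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    C = np.column_stack([np.ones(len(y))] + [np.asarray(c, float) for c in confounds])
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    return float(ry @ rx / np.sqrt((ry @ ry) * (rx @ rx)))

# Synthetic example: "high-gamma power" driven by a vocals regressor
# that is itself correlated with a confounding intensity feature.
rng = np.random.default_rng(1)
n = 500
intensity = rng.standard_normal(n)
vocals = 0.6 * intensity + rng.standard_normal(n)
hg_power = vocals + 0.5 * intensity + 0.4 * rng.standard_normal(n)
r_partial = partial_corr(hg_power, vocals, [intensity])
```

    Removing the shared variance first is what isolates the unique contribution of each feature when the features are mutually correlated, as the abstract notes they are.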

    Extracting human cortical responses to sound onsets and acoustic feature changes in real music, and their relation to event rate

    Evoked cortical responses (ERs) have mainly been studied in controlled experiments using simplified stimuli. However, an outstanding question is how the human cortex responds to the complex stimuli encountered in realistic situations. A few electroencephalography (EEG) studies have used Music Information Retrieval (MIR) tools to extract cortical P1/N1/P2 responses to acoustical changes in real music. However, fewer than ten events per music piece could be detected as yielding ERs, owing to limitations in the automatic detection of sound onsets. Moreover, the factors influencing a successful extraction of the ERs have not been identified, and previous studies did not localize the sources of the cortical generators. This study is based on an EEG/MEG dataset from 48 healthy, normal-hearing participants listening to three real music pieces. Acoustic features were computed from the audio signal of the music with the MIR Toolbox. To overcome the limits of automatic methods, sound onsets were also manually detected. The chance of obtaining detectable ERs from ten randomly picked onset points was less than 1:10,000. For the first time, we show that naturalistic P1/N1/P2 ERs can be reliably measured across 100 manually identified sound onsets, substantially improving the signal-to-noise level at event rates below 2.5 Hz. Furthermore, during monophonic sections of the music only P1/P2 were measurable, and during polyphonic sections only N1. Finally, MEG source analysis revealed that the naturalistic P2 is located in core areas of the auditory cortex. Peer reviewed
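    The core of the ER extraction described here amounts to cutting baseline-corrected epochs of the continuous signal around each (manually identified) sound onset and averaging them. A toy single-channel sketch with an invented "P2-like" deflection; sampling rate, epoch window, and signal shapes are illustrative placeholders.

```python
import numpy as np

def average_er(eeg, onsets, sfreq, tmin=-0.1, tmax=0.4):
    """Average baseline-corrected epochs of a single-channel signal
    around the given onset times (in seconds)."""
    pre = int(round(-tmin * sfreq))
    post = int(round(tmax * sfreq))
    epochs = []
    for t in onsets:
        s = int(round(t * sfreq))
        if s - pre >= 0 and s + post <= len(eeg):
            seg = eeg[s - pre:s + post]
            epochs.append(seg - seg[:pre].mean())  # pre-onset baseline
    return np.mean(epochs, axis=0)

# Toy signal: ~100 onsets, each followed by a brief "P2-like"
# deflection, buried in strong noise; averaging recovers it.
sfreq = 100.0
rng = np.random.default_rng(2)
onsets = np.arange(1.0, 59.0, 0.6)
signal = np.zeros(6000)
for on in onsets:
    idx = int(round(on * sfreq))
    signal[idx + 15:idx + 25] += 1.0
eeg = signal + 2.0 * rng.standard_normal(len(signal))
er = average_er(eeg, onsets, sfreq)
```

    Averaging across N onsets reduces the noise standard deviation by roughly a factor of √N, which is why ~100 onsets rescue responses that single events cannot reveal.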

    A model of time-varying music engagement

    The current paper offers a model of time-varying music engagement, defined as changes in curiosity, attention and positive valence as music unfolds over time. First, we present research (including new data) showing that listeners tend to allocate attention to music in a manner that is guided by both features of the music and listeners’ individual differences. Next, we review relevant predictive processing literature before using this body of work to inform our model. In brief, we propose that music engagement, over the course of an extended listening episode, may constitute several cycles of curiosity, attention and positive valence that are interspersed with moments of mind-wandering. Further, we suggest that refocussing on music after an episode of mind-wandering can be due either to triggers in the music or to mental action that occurs when listeners realize they are mind-wandering. Finally, we argue that the factors modulating both overall levels of engagement and its change over time include music complexity, listener background and the listening context. Our paper highlights how music can be used to provide insights into the temporal dynamics of attention and into how curiosity might emerge in everyday contexts.

    The Berlin Brain-Computer Interface: Progress Beyond Communication and Control

    The combined effect of fundamental results about neurocognitive processes and advancements in decoding mental states from ongoing brain signals has brought forth a whole range of potential neurotechnological applications. In this article, we review our developments in this area and put them into perspective. These examples cover a wide range of maturity levels with respect to their applicability. While we assume we are still a long way from integrating Brain-Computer Interface (BCI) technology into general interaction with computers, or from implementing neurotechnological measures in safety-critical workplaces, results have already been obtained using a BCI as a research tool. We further discuss the reasons why, in some of the prospective application domains, considerable effort is still required to make the systems ready to deal with the full complexity of the real world. Funding: EC/FP7/611570/EU (MindSee: Symbiotic Mind Computer Interaction for Information Seeking); EC/FP7/625991/EU (HYPERSCANNING 2.0: Analyses of Multimodal Neuroimaging Data: Concept, Methods and Applications); DFG 103586207, GRK 1589 (Processing of Sensory Information in Neural Systems).

    Temporal dynamics of musical emotions examined through intersubject synchrony of brain activity

    To study emotional reactions to music, it is important to consider the temporal dynamics of both affective responses and underlying brain activity. Here, we investigated emotions induced by music using functional magnetic resonance imaging (fMRI) with a data-driven approach based on intersubject correlations (ISC). This method allowed us to identify moments in the music that produced similar brain activity (i.e. synchrony) among listeners under relatively natural listening conditions. Continuous ratings of subjective pleasantness and arousal elicited by the music were also obtained outside the scanner. Our results reveal synchronous activations in the left amygdala, left insula and right caudate nucleus that were associated with higher arousal, whereas positive valence ratings correlated with decreases in amygdala and caudate activity. Additional analyses showed that synchronous amygdala responses were driven by energy-related features of the music such as root mean square and dissonance, while synchrony in the insula was additionally sensitive to acoustic event density. Intersubject synchrony also occurred in the left nucleus accumbens, a region critically implicated in reward processing. Our study demonstrates the feasibility and usefulness of an ISC-based approach for exploring the temporal dynamics of music perception and emotion under naturalistic conditions.
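    Leave-one-out intersubject correlation — correlating each subject's regional time course with the average of all remaining subjects — is one common way to compute the kind of ISC used here. A minimal NumPy sketch with synthetic data; the shared sinusoid stands in for a stimulus-driven response, and all values are illustrative.

```python
import numpy as np

def isc_leave_one_out(data):
    """Leave-one-out intersubject correlation.
    data: (n_subjects, n_timepoints) array of regional time courses."""
    data = np.asarray(data, float)
    out = []
    for i in range(data.shape[0]):
        others = np.delete(data, i, axis=0).mean(axis=0)
        x = data[i] - data[i].mean()
        y = others - others.mean()
        out.append(float(x @ y / np.sqrt((x @ x) * (y @ y))))
    return np.array(out)

# Synthetic data: a shared stimulus-driven component plus
# subject-specific noise for each of 12 "listeners".
rng = np.random.default_rng(3)
shared = np.sin(np.linspace(0, 6 * np.pi, 300))
subjects = shared + 0.8 * rng.standard_normal((12, 300))
isc = isc_leave_one_out(subjects)
```

    High ISC at a given moment indicates that the stimulus, rather than idiosyncratic processes, is driving the regional response — which is what licenses relating ISC peaks back to musical features and ratings.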

    On the encoding of natural music in computational models and human brains

    This article discusses recent developments and advances in the neuroscience of music to understand the nature of musical emotion. In particular, it highlights how system identification techniques and computational models of music have advanced our understanding of how the human brain processes the textures and structures of music and how the processed information evokes emotions. Musical models relate physical properties of stimuli to internal representations called features, and predictive models relate features to neural or behavioral responses and test their predictions against independent, unseen data. These new frameworks do not require orthogonalized stimuli in controlled experiments to establish reproducible knowledge, which has opened up a new wave of naturalistic neuroscience. The current review focuses on how this trend has transformed the domain of the neuroscience of music.
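    A typical predictive model of the kind described — relating stimulus features to responses, then testing on unseen data — is ridge regression evaluated by the correlation between predicted and observed held-out responses. A self-contained sketch with synthetic features and responses; the weights and noise levels are invented for illustration.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: solve (X'X + alpha*I) w = X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# Synthetic "features -> response" mapping; the model is fit on
# training data and evaluated on unseen test data.
rng = np.random.default_rng(4)
true_w = np.array([1.0, -0.5, 0.8, 0.3, -1.2])
X_train = rng.standard_normal((400, 5))
X_test = rng.standard_normal((100, 5))
y_train = X_train @ true_w + 0.5 * rng.standard_normal(400)
y_test = X_test @ true_w + 0.5 * rng.standard_normal(100)
w = ridge_fit(X_train, y_train)
pred = X_test @ w
r = float(np.corrcoef(pred, y_test)[0, 1])  # accuracy on held-out data
```

    Evaluating on data never seen during fitting is what lets these models establish reproducible feature-response mappings without orthogonalized stimuli.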

    Detecting and interpreting conscious experiences in behaviorally non-responsive patients

    Decoding the contents of consciousness from brain activity is one of the most challenging frontiers of cognitive neuroscience. The ability to interpret mental content without recourse to behavior is most relevant for understanding patients who may be demonstrably conscious but entirely unable to speak or move willfully in any way, precluding any systematic investigation of their conscious experience. The lack of consistent behavioral responsivity engenders unique challenges for decoding any conscious experiences these patients may have solely on the basis of their brain activity. For this reason, paradigms that have been successful in healthy individuals cannot serve to interpret conscious mental states in this patient group. Until recently, patient studies have used structured instructions to elicit willful modulation of brain activity on command, in order to detect the presence of willful brain-based responses in this patient group. In recent work, we have used naturalistic paradigms, such as watching a movie or listening to an audio story, to demonstrate that a common neural code supports conscious experiences in different individuals. Moreover, we have demonstrated that this code can be used to interpret the conscious experiences of a patient who had remained non-responsive for several years. This approach is easy to administer, brief, and does not require compliance with task instructions. Rather, it engages attention naturally through meaningful stimuli that are similar to the real-world sensory information in a patient's environment. Therefore, it may be particularly suited to probing consciousness and revealing residual brain function in highly impaired, acute patients in a comatose state, thus helping to improve diagnosis and prognosis for this vulnerable patient group from the critical early stages of severe brain injury.

    The “Narratives” fMRI dataset for evaluating models of naturalistic language comprehension

    The “Narratives” collection aggregates a variety of functional MRI datasets collected while human subjects listened to naturalistic spoken stories. The current release includes 345 subjects, 891 functional scans, and 27 diverse stories of varying duration totaling ~4.6 hours of unique stimuli (~43,000 words). This data collection is well suited for naturalistic neuroimaging analysis and is intended to serve as a benchmark for models of language and narrative comprehension. We provide standardized MRI data accompanied by rich metadata, preprocessed versions of the data ready for immediate use, and the spoken story stimuli with time-stamped phoneme- and word-level transcripts. All code and data are publicly available with full provenance, in keeping with current best practices in transparent and reproducible neuroimaging.
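    One routine use of such time-stamped transcripts is aligning word onsets with fMRI volumes. A minimal sketch of that bookkeeping; the TR and hemodynamic-lag values below are illustrative placeholders, not properties of the dataset.

```python
def word_to_tr(onset_s, tr_s=1.5, hemodynamic_lag_s=4.5):
    """Map a time-stamped word onset (seconds) to the index of the
    fMRI volume expected to reflect it, assuming a fixed TR and a
    rough hemodynamic delay. Values are illustrative, not dataset facts."""
    return int((onset_s + hemodynamic_lag_s) // tr_s)
```

    More careful analyses convolve a word-level regressor with a hemodynamic response function rather than applying a fixed lag, but the fixed-lag mapping is a common first pass.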