
    Music in the brain

    Music is ubiquitous across human cultures — as a source of affective and pleasurable experience, moving us both physically and emotionally — and learning to play music shapes both brain structure and brain function. Music processing in the brain — namely, the perception of melody, harmony and rhythm — has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature on music perception. We show that music perception, action, emotion and learning all rest on the human brain’s fundamental capacity for prediction — as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity, as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
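
    As a minimal illustration of the predictive-coding principle invoked here (perception as precision-weighted updating of predictions by prediction errors), consider the following toy sketch; the pitch values, precision, and learning rate are our own illustrative assumptions, not part of the reviewed model:

```python
def predictive_coding_step(prediction, observation, precision, learning_rate=0.1):
    """One precision-weighted update: the prediction moves toward the input,
    scaled by how reliable (precise) the prediction error is taken to be."""
    error = observation - prediction              # prediction error
    return prediction + learning_rate * precision * error

# Toy example: a listener tracking the expected pitch (Hz) of the next note.
expected = 440.0
for heard in [440.0, 440.0, 466.2, 466.2]:        # an unexpected semitone shift
    expected = predictive_coding_step(expected, heard, precision=0.8)
    print(f"heard {heard:6.1f} Hz -> expectation now {expected:6.1f} Hz")
```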

    Principled Explanations in Comparative Biomusicology – Toward a Comparative Cognitive Biology of the Human Capacities for Music and Language

    The current thesis tackles the question “Why is music the way it is?” within a comparative biomusicology framework, focusing on musical syntax and its relation to syntax in language. Comparative biomusicology integrates different comparative approaches, biological frameworks, and levels of analysis in cognitive science, and puts forward principled explanations, regarding cognitive systems as different instances of the same principles, as its central research strategy. The main goal is to provide a preliminary answer to this question in the form of hypotheses about neurocognitive mechanisms, i.e., cognitive and neural processes, underlying a core function of syntactic computation in language and music: mapping between hierarchical structure and temporal sequence. In particular, the relationship between language and music is discussed on the basis of a top-down approach taking syntax as combinatorial principles and a bottom-up approach taking neural structures and operations as implementational principles. On the basis of the top-down approach, the thesis identifies computational problems of musical syntax, cognitive processes and neural correlates of music-syntactic processing, and the relationship to language syntax and syntactic processing. The neural correlates of music-syntactic processing are investigated by activation likelihood estimation (ALE) meta-analyses. The bottom-up approach then studies the relationship between language and music on the basis of neural processes implemented in the cortico-basal ganglia-thalamocortical circuits. The main result of the current thesis suggests that the relationship between language and music syntactic processing can be explained in terms of the same neurocognitive mechanisms with different expressions on the motor-to-cognitive gradient. The current thesis, especially its bottom-up approach, opens up a possible path toward comparative cognitive biology, i.e., a comparative approach to cognitive systems with a greater emphasis on biology.
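
    To make the core computational problem concrete, the following toy sketch (our own illustration, not taken from the thesis) maps a hierarchical phrase structure over chord symbols onto a flat temporal sequence:

```python
# Toy illustration: a musical phrase as a nested (hierarchical) structure
# whose leaves must be realized as a flat temporal sequence.
phrase = [["I", "IV", "V"],           # antecedent
          [["I", "IV"], ["V", "I"]]]  # consequent with an embedded subphrase

def linearize(node):
    """Depth-first traversal: hierarchical structure in, temporal sequence out."""
    if isinstance(node, str):         # a leaf chord symbol
        return [node]
    return [leaf for child in node for leaf in linearize(child)]

print(linearize(phrase))  # ['I', 'IV', 'V', 'I', 'IV', 'V', 'I']
```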

    Right Neural Substrates of Language and Music Processing Left Out: Activation Likelihood Estimation (ALE) and Meta-Analytic Connectivity Modelling (MACM)

    Introduction: Language and music processing have been investigated in neuro-based research for over a century. However, consensus on the independent and shared neural substrates of the two domains remains elusive due to varying neuroimaging methodologies. Identifying functional connectivity in language and music processing via neuroimaging meta-analytic methods provides neuroscientific knowledge of higher cognitive domains, and the resulting normative models may guide treatment development in communication disorders based on principles of neural plasticity. Methods: Using BrainMap software and tools, the present coordinate-based meta-analysis analyzed 65 fMRI studies investigating language and music processing in healthy adult subjects. We conducted activation likelihood estimation (ALE) analyses of language processing, music processing, and language + music (Omnibus) processing. Omnibus ALE clusters were used to elucidate functional connectivity by means of meta-analytic connectivity modelling (MACM). Paradigm Class and Behavioral Domain analyses were completed for the ten identified nodes to aid interpretation of the functional MACM. Results: The Omnibus ALE revealed ten peak activation clusters (bilateral inferior frontal gyri, left medial frontal gyrus, right superior temporal gyrus, left transverse temporal gyrus, bilateral claustrum, left superior parietal lobule, right precentral gyrus, and right anterior culmen). MACM demonstrated an interconnected network consisting of unidirectional and bidirectional connectivity. Subsequent analyses demonstrated nodal involvement across 44 BrainMap paradigms and 32 BrainMap domains. Discussion: These findings demonstrate functional connectivity among Omnibus areas of activation in language and music processing. We analyze ALE and MACM outcomes by comparing them to previously observed roles in cognitive processing and functional network connectivity. Finally, we discuss the importance of translational neuroimaging and the need for normative models to guide intervention.
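
    As a rough sketch of the core ALE computation (our own minimal illustration, not the BrainMap implementation; the kernel width, grid, and coordinates are placeholder assumptions):

```python
import numpy as np

def gaussian_ma_map(foci, grid_shape, sigma=2.0):
    """Modeled activation (MA) map for one experiment: each reported peak
    becomes a 3D Gaussian; voxels take the maximum over the experiment's foci."""
    zz, yy, xx = np.indices(grid_shape)
    ma = np.zeros(grid_shape)
    for fz, fy, fx in foci:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(experiments, grid_shape):
    """ALE score per voxel: probability that at least one experiment
    activates it, i.e. the union 1 - prod(1 - MA_i) over experiments."""
    not_active = np.ones(grid_shape)
    for foci in experiments:
        not_active *= 1.0 - gaussian_ma_map(foci, grid_shape)
    return 1.0 - not_active

# Placeholder voxel-grid coordinates standing in for MNI peaks from two studies.
experiments = [[(10, 12, 14), (20, 20, 20)], [(11, 12, 15)]]
ale = ale_map(experiments, grid_shape=(32, 32, 32))
print(ale.max())  # highest convergence across the two toy studies
```

    The real pipeline additionally uses an empirically derived kernel whose width depends on sample size, and thresholds the ALE map against a permutation-based null distribution; both steps are omitted here for brevity.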

    The Neural Mechanisms of Musical Rhythm Processing: Cross-Cultural Differences and the Stages of Beat Perception

    Music is a universal human behaviour, is fundamentally temporal, and has unique temporal properties. This thesis presents research on the cognitive neuroscience of the temporal aspects of music: rhythm, beat, and metre. Specifically, this work investigates how cultural experience influences behavioural and neural measures of rhythm processing, and the different neural mechanisms (with particular interest in the role of the striatum) that underlie different stages of beat perception, as musical rhythms unfold. Chapter 1 presents an overview of the existing literature on the perceptual, cognitive, and neural processing of rhythm, including the entrainment of neural oscillations to rhythm and the neuroanatomical substrates of rhythm perception. Chapter 2 presents research on cross-cultural differences in the perception and production of musical rhythm and beat. Here, East African and North American participants performed three tasks (beat tapping, rhythm discrimination, and rhythm reproduction) using rhythms from East African and Western music. The results indicate an influence of culture on beat tapping and rhythm reproduction, but not rhythm discrimination. Chapter 3 presents electroencephalographic (EEG) research on cross-cultural differences in neural entrainment to rhythm and beat. The degree to which neural oscillations entrained to the different regular ‘metrical levels’ of rhythms differed between groups, suggesting an influence of culture. Moreover, across all participants, the proportion of trials in which different rates were tapped was correlated with the degree of neural entrainment to those rates. Chapter 4 presents functional magnetic resonance imaging (fMRI) research on the different neural mechanisms that underlie the different stages of beat perception (finding, continuation, and adjustment). Distinct regions of the striatum (dorsal vs. ventral putamen) were active to different extents in beat finding and adjustment, respectively. Activity in other regions (including the cerebellum, parietal cortex, supplementary motor area, and insula) also differed between stages. Additionally, when rhythms were metrically incongruent (polyrhythmic), additional activity was found in the superior temporal gyri and the insula. Chapter 5 presents a general discussion of Chapters 2–4 in the context of the existing literature, limitations, and broader interpretations of how these results relate to future directions in the field.
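
    A minimal sketch of the frequency-tagging logic commonly used to quantify neural entrainment at different metrical levels (our own illustration; the sampling rate, beat rate, and simulated signal are placeholder assumptions, not the thesis data):

```python
import numpy as np

fs = 250.0                        # assumed EEG sampling rate (Hz)
duration = 40.0                   # seconds of steady-state recording
t = np.arange(0, duration, 1 / fs)

# Placeholder 'EEG': a response at the beat rate buried in noise.
beat_hz = 2.0                     # assumed beat frequency (120 BPM)
eeg = 0.5 * np.sin(2 * np.pi * beat_hz * t) + np.random.randn(t.size)

# Amplitude spectrum (arbitrary scaling); entrainment is read off
# at candidate metrical levels of the rhythm.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for level_hz in (beat_hz / 2, beat_hz, beat_hz * 2):  # bar, beat, subdivision
    idx = np.argmin(np.abs(freqs - level_hz))
    print(f"{level_hz:4.1f} Hz amplitude: {spectrum[idx]:.3f}")
```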

    Neural dynamics underlying successful auditory short-term memory performance

    Listeners often operate in complex acoustic environments, consisting of many concurrent sounds. Accurately encoding and maintaining such auditory objects in short-term memory is crucial for communication and scene analysis. Yet, the neural underpinnings of successful auditory short-term memory (ASTM) performance are currently not well understood. To elucidate this issue, we presented a novel, challenging auditory delayed match-to-sample task while recording magnetoencephalography (MEG). Human participants listened to ‘scenes’ comprising three concurrent tone-pip streams. The task was to indicate, after a delay, whether a probe stream was present in the just-heard scene. We present three key findings: First, behavioural performance revealed faster responses in correct versus incorrect trials, as well as in ‘probe present’ versus ‘probe absent’ trials, consistent with ASTM search. Second, successful compared with unsuccessful ASTM performance was associated with a significant enhancement of event-related fields and of oscillatory activity in the theta, alpha and beta frequency ranges. This extends previous findings of an overall increase in persistent activity during short-term memory performance. Third, using distributed source modelling, we found these effects to be confined mostly to sensory areas during encoding, presumably related to ASTM contents per se. Parietal and frontal sources then became relevant during the maintenance stage, indicating that effective ASTM operation also relies on ongoing inhibitory processes suppressing task-irrelevant information. In summary, our results deliver a detailed account of the neural patterns that differentiate successful from unsuccessful ASTM performance in the context of a complex, multi-object auditory scene.
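
    A minimal sketch of how oscillatory power in the theta, alpha, and beta ranges might be quantified for a single trial (our own illustration; the sampling rate and band edges are common conventions, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

fs = 600.0                                    # assumed MEG sampling rate (Hz)
trial = np.random.randn(int(3 * fs))          # placeholder 3 s single-sensor trial

# Power spectral density via Welch's method, 1 s segments -> 1 Hz resolution.
freqs, psd = welch(trial, fs=fs, nperseg=int(fs))
df = freqs[1] - freqs[0]

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df              # integrate PSD over the band
    print(f"{name:5s} power: {power:.4f}")
```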

    Bayesian methods in music modelling

    This thesis presents several hierarchical generative Bayesian models of musical signals designed to improve the accuracy of existing multiple pitch detection systems and other musical signal processing applications whilst remaining feasible for real-time computation. At the lowest level the signal is modelled as a set of overlapping sinusoidal basis functions. The parameters of these basis functions are built into a prior framework based on principles known from musical theory and the physics of musical instruments. The model of a musical note optionally includes phenomena such as frequency and amplitude modulations, damping, volume, timbre and inharmonicity. The occurrence of note onsets in a performance of a piece of music is controlled by an underlying tempo process and the alignment of the timings to the underlying score of the music. A variety of applications are presented for these models under differing inference constraints. Where full Bayesian inference is possible, reversible-jump Markov chain Monte Carlo is employed to estimate the number of notes and partial frequency components in each frame of music. We also use approximate techniques such as model selection criteria and variational Bayes methods for inference in situations where computation time is limited or the amount of data to be processed is large. For the higher-level score parameters, greedy search and conditional modes algorithms are found to be sufficiently accurate. We emphasize the links between the models and inference algorithms developed in this thesis and those in existing and parallel work, and demonstrate the effects of making modifications to these models both theoretically and by means of experimental results.
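
    A minimal sketch of the lowest modelling level described above: a frame of audio as a sum of damped sinusoids plus Gaussian noise, with its log-likelihood (our own illustration; all parameter values are placeholders, and the exact prior structure of the thesis is not reproduced):

```python
import numpy as np

def damped_sinusoid_frame(t, amps, freqs, phases, decays):
    """Sum of damped sinusoidal basis functions over one time frame."""
    frame = np.zeros_like(t)
    for a, f, p, d in zip(amps, freqs, phases, decays):
        frame += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t + p)
    return frame

def log_likelihood(y, model, sigma):
    """Gaussian observation model: y = model + N(0, sigma^2)."""
    r = y - model
    return -0.5 * (r.size * np.log(2 * np.pi * sigma ** 2) + (r @ r) / sigma ** 2)

fs = 8000.0
t = np.arange(0, 0.05, 1 / fs)                   # one 50 ms analysis frame
# Placeholder note: A4 fundamental with two partials, slightly inharmonic.
model = damped_sinusoid_frame(t, amps=[1.0, 0.5, 0.3],
                              freqs=[440.0, 881.0, 1323.5],
                              phases=[0.0, 0.4, 1.1],
                              decays=[8.0, 12.0, 20.0])
y = model + 0.05 * np.random.randn(t.size)       # simulated observation
print(log_likelihood(y, model, sigma=0.05))
```

    In the full model, a reversible-jump sampler would additionally propose adding or removing notes and partials, trading off this likelihood against the musically informed priors.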

Brain Music: A generative system for creating symbolic music from affective neural responses

    This master’s thesis presents an innovative multimodal deep learning methodology that combines an emotion classification model with a music generator, aimed at creating music from electroencephalography (EEG) signals, thus delving into the interplay between emotions and music. The results achieve three specific objectives. First, since the performance of brain-computer interface systems varies significantly among subjects, an approach based on knowledge transfer between subjects is introduced to enhance the performance of individuals facing challenges in motor imagery-based brain-computer interface systems. This approach combines labeled EEG data with structured information, such as psychological questionnaires, through a "Kernel Matching CKA" method. We employ a deep neural network (Deep&Wide) for motor imagery classification. The results underscore its potential to enhance motor skills in brain-computer interfaces. Second, we propose an innovative technique called "Labeled Correlation Alignment" (LCA) to sonify neural responses to stimuli represented in unstructured data, such as affective music. This generates musical features based on emotion-induced brain activity. LCA addresses between-subject and within-subject variability through correlation analysis, enabling the creation of acoustic envelopes and the distinction of different sound information. This makes LCA a promising tool for interpreting neural activity and its response to auditory stimuli. Finally, in another chapter, we develop an end-to-end deep learning methodology for generating MIDI music content (symbolic data) from EEG signals induced by affectively labeled music. This methodology encompasses data preprocessing, feature extraction model training, and a feature matching process using Deep Centered Kernel Alignment, enabling music generation from EEG signals. Together, these achievements represent significant advances in understanding the relationship between emotions and music, as well as in the application of artificial intelligence to music generation from brain signals. They offer new perspectives and tools for musical creation and research in emotional neuroscience. To conduct our experiments, we used public databases such as GigaScience, Affective Music Listening, and the DEAP dataset.
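
    Since the feature-matching step rests on centered kernel alignment, here is a minimal sketch of linear CKA between two feature matrices (our own illustration of the standard formula, not the thesis code; the feature shapes are placeholders):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n x p) and Y (n x q):
    similarity of the two representations over the same n examples."""
    X = X - X.mean(axis=0)                 # center each feature column
    Y = Y - Y.mean(axis=0)
    yx = np.linalg.norm(Y.T @ X, "fro") ** 2
    xx = np.linalg.norm(X.T @ X, "fro")
    yy = np.linalg.norm(Y.T @ Y, "fro")
    return yx / (xx * yy)

# Placeholder features: e.g., EEG embeddings vs. musical embeddings
# for the same 100 trials.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))
Y = X @ rng.normal(size=(32, 16)) + 0.1 * rng.normal(size=(100, 16))
print(linear_cka(X, Y))   # high when the two feature sets are related
```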