    Computational and Psycho-Physiological Investigations of Musical Emotions

    The ability of music to stir human emotions is a well-known fact (Gabrielsson & Lindstrom, 2001). However, the manner in which music contributes to those experiences remains obscure. One of the main reasons is the large number of syndromes that characterise emotional experiences. Another is their subjective nature: musical emotions can be affected by memories, individual preferences and attitudes, among other factors (Scherer & Zentner, 2001). But can the same music induce similar affective experiences in all listeners, somehow independently of acculturation or personal bias? A considerable corpus of literature has consistently reported that listeners agree rather strongly about what type of emotion is expressed in a particular piece, or even in particular moments or sections (Juslin & Sloboda, 2001). Those studies suggest that musical features encode important characteristics of affective experiences, pointing to the influence of various structural factors of music on emotional expression. Unfortunately, the nature of these relationships is complex, and it is common to find rather vague and contradictory descriptions. This thesis presents a novel methodology to analyse the dynamics of emotional responses to music. It consists of a computational investigation, based on spatiotemporal neural networks sensitive to structural aspects of music, which "mimic" human affective responses to music and permit the prediction of new ones. The dynamics of emotional responses to music are investigated as computational representations of perceptual processes (psychoacoustic features) and self-perception of physiological activation (peripheral feedback). Modelling and experimental results provide evidence suggesting that spatiotemporal patterns of sound resonate with affective features underlying judgements of subjective feelings. A significant part of the listener's affective response is predicted from a set of six psychoacoustic features of sound - tempo, loudness, multiplicity (texture), power spectrum centroid (mean pitch), sharpness (timbre) and mean STFT flux (pitch variation) - and one physiological variable - heart rate. This work contributes new evidence and insights to the study of musical emotions, with particular relevance to the music perception and emotion research communities.
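
    As a rough illustration of the kind of model described above, the sketch below (Python, untrained, with placeholder weights) passes per-frame values of the seven variables named in the abstract through a small recurrent network to produce a frame-by-frame affective estimate. The layer sizes, the two-dimensional output and all numerical values are assumptions, not the thesis's actual architecture.

        # Minimal sketch, not the thesis's model: an Elman-style recurrent layer mapping
        # the seven inputs named in the abstract (tempo, loudness, multiplicity, spectral
        # centroid, sharpness, mean STFT flux, heart rate) to an assumed two-dimensional
        # affective judgement (e.g. arousal/valence). Weights are random placeholders.
        import numpy as np

        N_FEATURES = 7      # six psychoacoustic features + heart rate
        N_HIDDEN = 16       # assumed hidden-layer size
        N_OUTPUTS = 2       # assumed output: arousal and valence estimates

        rng = np.random.default_rng(0)
        W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
        W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
        W_out = rng.normal(scale=0.1, size=(N_OUTPUTS, N_HIDDEN))

        def predict_affect(feature_frames):
            """Run one sequence of per-frame feature vectors through the recurrent net."""
            h = np.zeros(N_HIDDEN)
            outputs = []
            for x in feature_frames:                  # one feature vector per time step
                h = np.tanh(W_in @ x + W_rec @ h)     # spatiotemporal state update
                outputs.append(W_out @ h)             # per-frame affective estimate
            return np.array(outputs)

        # Example: 100 time frames of standardised features for one musical excerpt.
        frames = rng.normal(size=(100, N_FEATURES))
        print(predict_affect(frames).shape)           # -> (100, 2)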

    Music in the brain

    Music is ubiquitous across human cultures — as a source of affective and pleasurable experience, moving us both physically and emotionally — and learning to play music shapes both brain structure and brain function. Music processing in the brain — namely, the perception of melody, harmony and rhythm — has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature on music perception. We show that music perception, action, emotion and learning all rest on the human brain’s fundamental capacity for prediction — as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity, as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.

    Enkinaesthetic polyphony: the underpinning for first-order languaging

    We contest two claims: (1) that language, understood as the processing of abstract symbolic forms, is an instrument of cognition and rational thought, and (2) that conventional notions of turn-taking, exchange structure, and move analysis are satisfactory as a basis for theorizing communication between living, feeling agents. We offer an enkinaesthetic theory describing the reciprocal affective neuro-muscular dynamical flows and tensions of co-agential dialogical sense-making relations. This “enkinaesthetic dialogue” is characterised by a preconceptual, experientially recursive temporal dynamics forming the deep extended melodies of relationships in time. An understanding of how those relationships work, when we understand and are ourselves understood, and when communication falters and conflict arises, will depend on a grasp of our enkinaesthetic intersubjectivity.

    Computational musicology: An Artificial Life approach

    Artificial Life (A-Life) and Evolutionary Algorithms (EA) provide a variety of new techniques for making and studying music. EA have been used in different musical applications, ranging from new systems for composition and performance to models for studying musical evolution in artificial societies. This paper starts with a brief introduction to three main fields of application of EA in music, namely sound design, creativity and computational musicology. It then presents our work in the field of computational musicology, broadly defined as the study of music with computational modelling and simulation. We are interested in developing A-Life-based models to study the evolution of musical cognition in an artificial society of agents. In this paper we present the main components of a model that we are developing to study the evolution of musical ontogenies, focusing on the evolution of rhythms and emotional systems. The paper concludes by suggesting that A-Life and EA provide a powerful paradigm for computational musicology.
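
    As a toy illustration of the evolutionary ingredient, the sketch below evolves 16-step binary rhythm patterns by mutation and truncation selection. The fixed target pattern used as a fitness function is a placeholder assumption, not the agent-based fitness described in the paper.

        # Toy evolutionary algorithm over 16-step binary rhythms. The fitness function
        # (similarity to a fixed target pattern) is an illustrative placeholder.
        import random

        STEPS, POP, GENERATIONS, MUT_RATE = 16, 30, 200, 0.05
        TARGET = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # assumed target rhythm

        def fitness(rhythm):
            """Number of steps matching the target pattern."""
            return sum(a == b for a, b in zip(rhythm, TARGET))

        def mutate(rhythm):
            """Flip each step with a small probability."""
            return [1 - s if random.random() < MUT_RATE else s for s in rhythm]

        population = [[random.randint(0, 1) for _ in range(STEPS)] for _ in range(POP)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            parents = population[: POP // 2]                         # truncation selection
            population = parents + [mutate(random.choice(parents)) for _ in parents]

        best = max(population, key=fitness)
        print(best, fitness(best))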

    Predictive cognition in dementia: the case of music

    The clinical complexity and pathological diversity of neurodegenerative diseases impose immense challenges for diagnosis and the design of rational interventions. To address these challenges, there is a need to identify new paradigms and biomarkers that capture shared pathophysiological processes and can be applied across a range of diseases. One core paradigm of brain function is predictive coding: the processes by which the brain establishes predictions and uses them to minimise prediction errors, represented as the difference between predictions and actual sensory inputs. The processes involved in processing unexpected events and responding appropriately are vulnerable in common dementias but difficult to characterise. In my PhD work, I have exploited key properties of music – its universality, ecological relevance and structural regularity – to model and assess predictive cognition in patients representing major syndromes of frontotemporal dementia – non-fluent variant PPA (nfvPPA), semantic-variant PPA (svPPA) and behavioural-variant FTD (bvFTD) – and Alzheimer’s disease, relative to healthy older individuals. In my first experiment, I presented patients with well-known melodies containing no deviants or one of three types of deviant: acoustic (white-noise burst), syntactic (key-violating pitch change) or semantic (key-preserving pitch change). I assessed accuracy in detecting melodic deviants, together with simultaneously recorded pupillary responses to these deviants. I used voxel-based morphometry to define neuroanatomical substrates for the behavioural and autonomic processing of these different types of deviants, and identified a posterior temporo-parietal network for detection of basic acoustic deviants and a more anterior fronto-temporo-striatal network for detection of syntactic pitch deviants. In my second experiment, I investigated the ability of patients to track the statistical structure of the same musical stimuli, using a computational model of the information dynamics of music to calculate the information content of deviants (unexpectedness) and the entropy of melodies (uncertainty). I related these information-theoretic metrics to performance in detecting deviants and to ‘evoked’ and ‘integrative’ pupil reactivity to deviants and melodies respectively, and found neuroanatomical correlates in bilateral dorsal and ventral striatum, hippocampus, superior temporal gyri, right temporal pole and left inferior frontal gyrus. Together, chapters 3 and 4 suggested new hypotheses about the way FTD and AD pathologies disrupt the integration of prediction errors with predictions: a retained ability of AD patients to detect deviants at all levels of the hierarchy, with preserved autonomic sensitivity to information-theoretic properties of musical stimuli; a generalized impairment of surprise detection and statistical tracking of musical information at both cognitive and autonomic levels in svPPA patients, reflecting a diminished precision of predictions; the exact mirror of the svPPA profile in nfvPPA patients, with an abnormally high rate of false alarms and up-regulated pupillary reactivity to deviants, interpreted as over-precise or inflexible predictions accompanied by normal cognitive and autonomic probabilistic tracking of information; and an impaired behavioural and autonomic reactivity to unexpected events with a retained reactivity to environmental uncertainty in bvFTD patients.
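
    The two information-theoretic quantities mentioned above can be illustrated with a minimal sketch: the information content (surprisal) of the note that actually occurred, and the entropy (uncertainty) of the predictive distribution over possible continuations. The distribution below is invented for illustration and is not output from the actual model of melodic expectation used in the thesis.

        # Minimal illustration of information content and entropy for a predicted note.
        import math

        def information_content(p_observed):
            """Surprisal of the note that actually occurred: -log2 p(note)."""
            return -math.log2(p_observed)

        def entropy(distribution):
            """Uncertainty of the prediction before the note is heard."""
            return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

        # Assumed predictive distribution over four candidate continuations of a melody.
        prediction = {"C4": 0.55, "D4": 0.25, "E4": 0.15, "F#4": 0.05}
        print(entropy(prediction))                     # melody uncertainty, in bits
        print(information_content(prediction["F#4"]))  # a surprising (deviant-like) note
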
    Chapters 5 and 6 assessed the status of reward prediction error processing and updating via actions in bvFTD. I created pleasant and aversive musical stimuli by manipulating chord progressions and used a classic reinforcement-learning paradigm which asked participants to choose the visual cue with the highest probability of obtaining a musical ‘reward’. bvFTD patients showed reduced sensitivity to the consequences of an action and a lower learning rate in response to aversive stimuli than to reward. These results correlated with neuroanatomical substrates in ventral and dorsal attention networks, dorsal striatum, parahippocampal gyrus and temporo-parietal junction. Deficits were governed by the level of environmental uncertainty, with normal learning dynamics in a structured and binarized environment but exacerbated deficits in noisier environments. Impaired choice accuracy in noisy environments correlated with measures of ritualistic and compulsive behavioural changes, and abnormally reduced learning dynamics correlated with behavioural changes related to empathy and theory of mind. Together, these experiments represent the most comprehensive attempt to date to define the way neurodegenerative pathologies disrupt the perceptual, behavioural and physiological encoding of unexpected events in predictive coding terms.
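
    A schematic sketch of the kind of reinforcement-learning model implied here is given below: a delta-rule update of the value of the chosen cue, with separate learning rates for rewarding and aversive musical outcomes. The two-cue task structure, probabilities and learning-rate values are illustrative assumptions, not the fitted model from these chapters.

        # Schematic delta-rule learner with asymmetric learning rates for reward vs aversion.
        import random

        ALPHA_REWARD, ALPHA_AVERSIVE = 0.30, 0.10      # assumed asymmetric learning rates
        values = {"cue_A": 0.0, "cue_B": 0.0}          # learned value of each visual cue
        p_reward = {"cue_A": 0.8, "cue_B": 0.2}        # hidden probabilities of pleasant music

        for _ in range(100):
            # Mostly exploit the higher-valued cue, occasionally explore.
            if random.random() > 0.1:
                choice = max(values, key=values.get)
            else:
                choice = random.choice(list(values))
            outcome = 1.0 if random.random() < p_reward[choice] else -1.0   # pleasant vs aversive
            delta = outcome - values[choice]                                # reward prediction error
            alpha = ALPHA_REWARD if outcome > 0 else ALPHA_AVERSIVE
            values[choice] += alpha * delta

        print(values)   # learned cue values after 100 trials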

    Music Listening as Therapy

    Music is a universal phenomenon and is a real, physical thing. It is processed in neural circuits that overlap with language circuits, and it exerts cognitive, emotional, and physiological effects on humans. Many of those effects are therapeutic, such as reduced symptoms of physical and mental ailments. Music is the result of the elements rhythm, melody, harmony, timbre, dynamics, and form. Rhythm is the focus of pop music, and melody is the focus of classical music. The mind perceives and organizes music in learned, consistent ways in order to generate predictions and extract meaning. There are perceptual laws and information-processing limitations to this process. Predictions are based on schematic and veridical approaches, which give rise to expectations. Frustrated expectations result in an affective response. Music has meaning only unto itself; the listener ascribes any extra-musical meaning, including any emotional meaning. The unfolding of a song is much like how Gestalt Therapy theory conceptualizes human experience. Mindfulness offers a clear definition of how one can frame and approach experience to support health and well-being. MinMuList (said “min-mew-list”) is an evidence-based workshop that offers a concise discussion and straightforward methods for implementing these aspects of music and psychology.

    A dynamically minimalist cognitive explanation of musical preference: is familiarity everything?

    This paper examines the idea that attraction to music is generated at a cognitive level through the formation and activation of networks of interlinked “nodes.” Although the networks involved are vast, the basic mechanism for activating the links is relatively simple. Two comprehensive cognitive-behavioral models of musical engagement are examined with the aim of identifying the underlying cognitive mechanisms and processes involved in musical experience. A “dynamical minimalism” approach (after Nowak, 2004) is applied to re-interpret musical engagement (listening, performing, composing, or imagining any of these) and to revise the latest version of the reciprocal-feedback model (RFM) of music processing. Specifically, a single cognitive mechanism of “spreading activation” through previously associated networks is proposed as the source of the pleasurable outcomes of musical engagement. This mechanism underlies the dynamic interaction of the various components of the RFM, and can thereby explain the generation of positive affect in the listener’s musical experience. This includes determinants of that experience stemming from the characteristics of the individual engaging in the musical activity (whether listener, composer, improviser, or performer), the situation and contexts (e.g., social factors), and the music (e.g., genre, structural features). The theory calls for new directions for future research, two being (1) further investigation of the components of the RFM to better understand musical experience and (2) more rigorous scrutiny of common findings about the salience of familiarity in musical experience and preference.
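
    A minimal sketch of the proposed spreading-activation mechanism is given below: activation injected at one node decays as it propagates along weighted associative links, so strongly associated nodes receive most of the activation. The example network, weights and decay factor are illustrative assumptions.

        # Toy spreading activation over a small associative network of "nodes".
        network = {
            "heard_melody": {"artist": 0.8, "past_concert": 0.6},
            "artist": {"other_songs": 0.7},
            "past_concert": {"friends": 0.5},
            "other_songs": {},
            "friends": {},
        }

        def spread(network, start, activation=1.0, decay=0.6, threshold=0.05):
            """Breadth-first spread of activation from one node through associated nodes."""
            levels = {start: activation}
            frontier = [(start, activation)]
            while frontier:
                node, act = frontier.pop(0)
                for neighbour, weight in network[node].items():
                    new_act = act * weight * decay          # activation decays along each link
                    if new_act > threshold and new_act > levels.get(neighbour, 0.0):
                        levels[neighbour] = new_act
                        frontier.append((neighbour, new_act))
            return levels

        print(spread(network, "heard_melody"))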