370 research outputs found
Music therapy for depression
Abstract not available. Full text available at https://onlinelibrary.wiley.com/doi/full/10.1002/nur.2200
From Sound to Significance: Exploring the Mechanisms Underlying Emotional Reactions to Music
A common approach to studying emotional reactions to music is to attempt to obtain direct links between musical surface features such as tempo and a listener’s responses. However, such an analysis ultimately fails to explain why emotions are aroused in the listener. In this article we explore an alternative approach, which aims to account for musical emotions in terms of a set of psychological mechanisms that are activated by different types of information in a musical event. This approach was tested in 4 experiments that manipulated 4 mechanisms (brain stem reflex, contagion, episodic memory, musical expectancy) by selecting existing musical pieces that featured information relevant for each mechanism. The excerpts were played to 60 listeners, who were asked to rate their felt emotions on 15 scales. Skin conductance levels and facial expressions were measured, and listeners reported subjective impressions of relevance to specific mechanisms. Results indicated that the target mechanism conditions evoked emotions largely as predicted by a multimechanism framework, and that mostly similar effects occurred across the experiments that included different pieces of music. We conclude that a satisfactory account of musical emotions requires consideration of how musical features and responses are mediated by a range of underlying mechanisms.
Temperament Systems Influence Emotion Induction but not Makam Recognition Performance in Turkish Makam Music
We tested how induced emotions and Turkish makam recognition are influenced by participation in an ear training class, and whether either is influenced by the temperament system employed. The ear training class was attended by 19 music students and was based on the Hicaz makam, presented as a between-subjects factor in either the unfamiliar Turkish Original Temperament (OT, pitches unequally divided into 24 intervals) or the familiar Western Equal Temperament (ET, pitches equally divided into 12 intervals). Before and after the class, participants listened to 20 music excerpts from five different Turkish makams (in both OT and ET versions). Emotion induction was assessed via the GEMS-25, and participants were also asked to identify the makam present in each excerpt. The unfamiliar original temperament was experienced as less vital and more uneasy before the ear training class, and recognition of the Hicaz makam increased after the class (independent of the temperament system employed). Results suggest that unfamiliar temperament systems are experienced as less vital and more uneasy. Furthermore, being exposed to this temperament system for just one hour does not seem to be enough to change participants’ mental representations of it or their emotional responses to it.
Mapping a beautiful voice: theoretical considerations
The prime purpose of this paper is to draw on a range of diverse literatures to clarify those elements that are perceived to constitute a ‘beautiful’ sung performance. The text rehearses key findings from existing literatures in order to determine the extent to which particular elements might appear the most salient for an individual listener and also ‘quantifiable’ (in the sense of being open to empirical study). The paper concludes with a theoretical framework for the elements that are likely to construct and shape our responses to particular sung performances.
From Motion to Emotion: Accelerometer Data Predict Subjective Experience of Music
Music is often described as emotional because it reflects expressive movements in audible form. Thus, a valid approach to measuring musical emotion could be to assess movement stimulated by music. In two experiments we evaluated the discriminative power of mobile-device-generated acceleration data, produced by free movement during music listening, for predicting ratings on the Geneva Emotional Music Scales (GEMS-9). The quality of prediction (R²₁ for the first experiment, R²₂ for the second) varied between experiments for tenderness (R²₁ = 0.50, R²₂ = 0.39), nostalgia (R²₁ = 0.42, R²₂ = 0.30), wonder (R²₁ = 0.25, R²₂ = 0.34), sadness (R²₁ = 0.24, R²₂ = 0.35), peacefulness (R²₁ = 0.20, R²₂ = 0.35), joy (R²₁ = 0.19, R²₂ = 0.33), and transcendence (R²₁ = 0.14, R²₂ = 0.00). For others, such as power (R²₁ = 0.42, R²₂ = 0.49) and tension (R²₁ = 0.28, R²₂ = 0.27), the results were almost reproduced across experiments. Furthermore, we extracted two principal components from the GEMS ratings, one representing the arousal and the other the valence of the experienced feeling. Both qualities could be predicted from the acceleration data, indicating that it provides information on both the quantity and the quality of the experience. On the one hand, these findings show how music-evoked movement patterns relate to music-evoked feelings; on the other hand, they help integrate findings from the field of embodied music cognition into music recommender systems.
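The prediction task described above can be illustrated with a minimal sketch: regress a per-trial emotion rating on summary features of accelerometer streams and report R². Everything here is assumed for illustration (synthetic data, the number of trials and features, and the choice of plain least squares); the study's actual feature extraction and model are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-ins: 60 listening trials, 6 summary features derived from
# 3-axis accelerometer streams (e.g. mean magnitude, per-axis variance).
n_trials, n_features = 60, 6
X = rng.normal(size=(n_trials, n_features))

# Simulated GEMS-style rating that partly depends on movement, plus noise.
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.5, size=n_trials)

# Ordinary least squares with an intercept column.
Xb = np.column_stack([np.ones(n_trials), X])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Coefficient of determination R^2, the statistic quoted in the abstract.
y_hat = Xb @ w
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

With a strong simulated signal the fit is good; with weaker coupling between movement and rating, R² drops toward the lower values reported for dimensions such as transcendence.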
Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity
BACKGROUND: There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are thus largely unknown.
METHODOLOGY/PRINCIPAL FINDINGS: This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.
CONCLUSIONS/SIGNIFICANCE: These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.
Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema
In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions are distinguished from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki's model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a k-nearest-neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with a linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is carried out first with respect to the classifiers' error rates and then to evaluate the information expressed by the classifiers' confusion matrices. © Springer Science+Business Media, LLC 2011
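The idea of a binary cascade is that a hard multi-class problem is broken into a tree of easier two-class decisions, so that commonly confused emotion pairs are separated at a dedicated stage. A minimal sketch, assuming synthetic 2-D features, a hypothetical four-emotion set split first by arousal, and a plain k-NN at each stage (the paper's real feature set and SVM stages are much richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 2-D features (e.g. mean pitch, mean energy) with one cluster
# per emotion; purely illustrative, not the paper's feature space.
centers = {"anger": (2, 2), "happiness": (2, -2),
           "sadness": (-2, 2), "boredom": (-2, -2)}
X, y = [], []
for label, c in centers.items():
    X.append(rng.normal(loc=c, scale=0.4, size=(20, 2)))
    y += [label] * 20
X = np.vstack(X)
y = np.array(y)

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbor majority vote (one of the classifiers tested)."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    labels, counts = np.unique(train_y[idx], return_counts=True)
    return labels[np.argmax(counts)]

def cascade_predict(x):
    """Binary cascade: decide high vs low arousal first, then the pair within it."""
    high = ["anger", "happiness"]
    arousal = knn_predict(X, np.where(np.isin(y, high), "high", "low"), x)
    group = high if arousal == "high" else ["sadness", "boredom"]
    mask = np.isin(y, group)
    return knn_predict(X[mask], y[mask], x)

print(cascade_predict(np.array([2.1, 1.9])))  # sample near the "anger" cluster
```

Each stage only ever sees a binary problem, which is why confusable pairs (here, the two high-arousal emotions) can be resolved by a classifier specialized for that pair.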
Affective brain–computer music interfacing
We aim to develop and evaluate an affective brain–computer music interface (aBCMI) for modulating the affective states of its users. Approach. An aBCMI is constructed to detect a user's current affective state and attempt to modulate it in order to achieve specific objectives (for example, making the user calmer or happier) by playing music which is generated according to a specific affective target by an algorithmic music composition system and a case-based reasoning system. The system is trained and tested in a longitudinal study on a population of eight healthy participants, with each participant returning for multiple sessions. Main results. The final online aBCMI is able to detect its users' current affective states with classification accuracies of up to 65% (3 class, p < 0.01) and modulate its users' affective states significantly above chance level (p < 0.05). Significance. Our system represents one of the first demonstrations of an online aBCMI that is able to accurately detect and respond to users' affective states. Possible applications include use in music therapy and entertainment.
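The closed loop described in this abstract (estimate affective state, then steer music toward a target) can be sketched at a very high level. Everything below is assumed for illustration: a 2-D valence/arousal state, a stubbed state estimator standing in for the EEG classifier, and a simple proportional step toward the goal standing in for the case-based reasoning system.

```python
def estimate_state(eeg_features):
    # Stub: a real aBCMI would run a trained classifier on EEG features;
    # here we just pass a (valence, arousal) pair through unchanged.
    valence, arousal = eeg_features
    return valence, arousal

def choose_music_target(state, goal=(0.8, 0.2), step=0.25):
    """Move the affective target a fraction of the way toward the goal
    (here: happier, calmer), so modulation is gradual rather than abrupt."""
    v, a = state
    gv, ga = goal
    return (v + step * (gv - v), a + step * (ga - a))

state = estimate_state((0.1, 0.9))   # low-valence, high-arousal reading
target = choose_music_target(state)  # target nudged toward calmer/happier
print(target)
```

The music generation system would then render audio matching the new target, and the loop repeats each session, which is the feedback structure the longitudinal study evaluates.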