
    Effects of Unexpected Chords and of Performer's Expression on Brain Responses and Electrodermal Activity

    BACKGROUND: There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are, thus, largely unknown. METHODOLOGY/PRINCIPAL FINDINGS: This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition. CONCLUSIONS/SIGNIFICANCE: These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.
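    As a rough illustration of the ERP comparison described in this abstract (not the authors' actual analysis pipeline), a difference wave between unexpected and expected chords can be averaged from epoched EEG data and summarized in an assumed ERAN latency window; the array shapes, sampling rate, and time window below are placeholder assumptions.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples), baseline-corrected,
# sampled at 500 Hz with epochs starting at chord onset (assumptions for illustration).
fs = 500
expected = np.random.randn(120, 64, 400)     # placeholder data for expected chords
unexpected = np.random.randn(120, 64, 400)   # placeholder data for unexpected chords

# Average across trials to obtain ERPs, then subtract conditions.
erp_expected = expected.mean(axis=0)
erp_unexpected = unexpected.mean(axis=0)
difference_wave = erp_unexpected - erp_expected   # ERAN/N5 effects would appear here

# Mean amplitude in an assumed ERAN window (150-250 ms after chord onset).
win = slice(int(0.150 * fs), int(0.250 * fs))
eran_amplitude = difference_wave[:, win].mean(axis=1)  # one value per channel
print(eran_amplitude.shape)
```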

    The role of artist and genre on music emotion recognition

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. The goal of this study is to classify a dataset of songs according to their emotion and to understand the impact that the artist and genre have on the accuracy of the classification model. This will help market players such as Spotify and Apple Music to retrieve useful songs in the right context. The analysis was performed by extracting audio and non-audio features from the DEAM dataset and classifying them. The correlation between artist, song genre and other audio features was also analyzed. Furthermore, the classification performance of different machine learning algorithms, e.g., Support Vector Machines (SVM), Decision Trees, Naive Bayes and K-Nearest Neighbors, was evaluated and compared. We found that Support Vector Machines attained the highest performance when using either only audio features or a combination of audio features and genre, with F-measures of 0.46 and 0.45, respectively. We concluded that the artist variable had no notable impact on the emotion classification of the songs. Using Support Vector Machines with the combination of audio and genre variables, we therefore analyzed the results and created a dashboard to visualize the incorrectly classified songs. This information helped to assess whether these variables are useful for improving the emotion classification model and how they relate to other audio and non-audio features.
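    A minimal sketch of the kind of experiment this abstract describes: an SVM trained on audio features plus a one-hot-encoded genre column, evaluated with a macro F-measure. The file name, feature columns, and label column are illustrative placeholders, not the study's actual DEAM feature set.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.svm import SVC

# Hypothetical table: audio features, a genre column, and an emotion label.
df = pd.read_csv("deam_features.csv")             # placeholder path
audio_cols = ["tempo", "energy", "spectral_centroid"]  # placeholder feature names
X, y = df[audio_cols + ["genre"]], df["emotion"]

# Scale audio features, one-hot encode genre, then fit an RBF-kernel SVM.
pre = ColumnTransformer([
    ("audio", StandardScaler(), audio_cols),
    ("genre", OneHotEncoder(handle_unknown="ignore"), ["genre"]),
])
model = make_pipeline(pre, SVC(kernel="rbf", C=1.0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, model.predict(X_te), average="macro"))
```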

    Personalized indexing of music by emotions

    How a person interprets music and what prompts a person to feel certain emotions are two very subjective things. This dissertation presents a method by which a system can learn and track a user’s listening habits in order to recommend songs that fit the user’s specific way of interpreting music and emotions. First, a literature review gives an overview of the current state of recommender systems and describes classifiers; then the process of collecting user data is discussed; next, the process of training and testing personalized classifiers is described; finally, a system combining the personalized classifiers with clustered data into a hierarchy of recommender systems is presented.
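    A toy sketch of the general idea of a per-user emotion classifier feeding a recommender, using k-Nearest Neighbors as one commonly used classifier family; the feature dimensions, emotion labels, and recommendation rule are illustrative assumptions, not the dissertation's actual system.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-user training data: audio feature vectors and the emotion
# labels this particular user assigned while listening (assumed setup).
user_features = np.random.rand(200, 8)                              # placeholder features
user_labels = np.random.choice(["happy", "sad", "calm", "tense"], 200)  # placeholder labels

personal_clf = KNeighborsClassifier(n_neighbors=5)
personal_clf.fit(user_features, user_labels)

def recommend(candidate_features, candidate_ids, target_emotion, k=10):
    """Return up to k candidate songs the personal classifier tags with target_emotion."""
    predicted = personal_clf.predict(candidate_features)
    matches = [sid for sid, emo in zip(candidate_ids, predicted) if emo == target_emotion]
    return matches[:k]

# Toy usage against a placeholder catalog of 1000 songs.
catalog = np.random.rand(1000, 8)
print(recommend(catalog, list(range(1000)), "happy"))
```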

    Automatic musical key detection

    In this thesis we have proposed a model for tonality estimation which is capable of handling music from various musical traditions without requiring their thorough analysis. The model relies on the assumption that most musical traditions use duration to maintain pitch salience. Proceeding from this assumption, we have proposed an algorithm for automatic key detection based on a distributional approach. The proposed method was evaluated on both symbolic and acoustic datasets.
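    A minimal sketch of a distributional key-finding step in the spirit described: build a duration-weighted pitch-class histogram and correlate it against rotated major and minor reference profiles. The Krumhansl-Kessler profile values are shown as one common choice of reference distribution; this is not necessarily the thesis's exact algorithm.

```python
import numpy as np

# Krumhansl-Kessler key profiles (a widely used reference distribution; index 0 = tonic).
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(notes):
    """notes: iterable of (midi_pitch, duration_seconds). Returns e.g. 'C major'."""
    hist = np.zeros(12)
    for pitch, dur in notes:                  # duration-weighted pitch-class histogram
        hist[pitch % 12] += dur
    best, best_r = None, -np.inf
    for tonic in range(12):
        for profile, mode in ((MAJOR, "major"), (MINOR, "minor")):
            r = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if r > best_r:
                best, best_r = f"{NAMES[tonic]} {mode}", r
    return best

# Toy usage: a few notes outlining C major.
print(estimate_key([(60, 1.0), (64, 0.5), (67, 0.5), (72, 1.0), (65, 0.25)]))
```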

    Changing musical emotion: A computational rule system for modifying score and performance

    The CMERS system architecture has been implemented in the Scheme programming language within the Impromptu music programming environment, with the objective of providing researchers with a tool for testing the relationships between musical features and emotion. A musical work represented in CMERS uses a music object hierarchy that is based on GTTM's grouping structure and is generated automatically from phrase-boundary markup and a MIDI file. CMERS's Mode rule type converts notes into those of the parallel mode, with no change in pitch height occurring in the conversion. The odds of correctness with CMERS are reported to be approximately five times greater than those of DM. The repeated-measures analysis of variance for valence shows a significant difference between systems, F(1, 17) = 45.49, p < .0005, and a significant interaction between system and quadrant, F(3, 51) = 4.23, p = .01, indicating that CMERS is considerably more effective than DM at correctly influencing valence. © 2010 Massachusetts Institute of Technology.
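    A small sketch of what a major-to-parallel-minor mode rule of the kind described might look like for MIDI pitches: scale degrees 3, 6 and 7 are lowered by a semitone, so notes are mapped into the parallel (natural) minor while the overall pitch height stays in place. The scale-degree mapping here is a standard music-theory assumption, not necessarily CMERS's exact rule table.

```python
# Major scale degrees that differ from the parallel natural minor, expressed as
# semitone offsets from the tonic: 4 (major 3rd), 9 (major 6th), 11 (major 7th).
LOWERED_DEGREES = {4, 9, 11}

def to_parallel_minor(midi_pitch, tonic_pitch_class):
    """Return the MIDI pitch adjusted from the major key to its parallel natural minor."""
    degree = (midi_pitch - tonic_pitch_class) % 12
    if degree in LOWERED_DEGREES:
        return midi_pitch - 1            # flatten by one semitone
    return midi_pitch                    # other degrees are left untouched

# Toy usage: a C-major arpeggio C-E-G-C becomes C-Eb-G-C (C minor), same register.
melody = [60, 64, 67, 72]
print([to_parallel_minor(p, 0) for p in melody])   # -> [60, 63, 67, 72]
```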

    A Review of Intelligent Music Generation Systems

    With the introduction of ChatGPT, the public's perception of AI-generated content (AIGC) has begun to shift. Artificial intelligence has significantly reduced the barrier to entry for non-professionals in creative endeavors, enhancing the efficiency of content creation. Recent advancements have brought significant improvements in the quality of symbolic music generation, enabled by modern generative algorithms that extract patterns implicit in a piece of music from rule constraints or a musical corpus. Nevertheless, existing literature reviews tend to present a conventional and conservative perspective on future development trajectories, with a notable absence of thorough benchmarking of generative models. This paper provides a survey and analysis of recent intelligent music generation techniques, outlining their respective characteristics and discussing existing methods for evaluation. Additionally, the paper compares the different characteristics of music generation techniques in the East and West, as well as analysing the field's development prospects.

    Cognitive and affective judgements of syncopated musical themes

    This study investigated cognitive and emotional effects of syncopation, a feature of musical rhythm that produces expectancy violations in the listener by emphasising weak temporal locations and de-emphasising strong locations in metric structure. Stimuli consisting of pairs of unsyncopated and syncopated musical phrases were rated by 35 musicians for perceived complexity, enjoyment, happiness, arousal, and tension. Overall, syncopated patterns were enjoyed more, and rated as happier, than unsyncopated patterns, while differences in perceived tension were unreliable. Complexity and arousal ratings were asymmetric with respect to serial order, increasing when patterns moved from unsyncopated to syncopated, but not changing significantly when the order was reversed. These results suggest that syncopation influences emotional valence (positively), and that while syncopated rhythms are objectively more complex than unsyncopated rhythms, this difference is more salient when complexity increases than when it decreases. It is proposed that composers and improvisers may exploit this asymmetry in perceived complexity by favoring formal structures that progress from rhythmically simple to complex, as can be observed in the initial sections of musical forms such as theme and variations.