25,355 research outputs found

    Detecting Emotion in Music

    Detection of emotion in music sounds is an important problem in music indexing. This paper studies the problem of identifying emotion in music by sound signal processing. The problem is cast as a multiclass classification problem, decomposed into multiple binary classification problems, and resolved with Support Vector Machines trained on the timbral textures, rhythmic contents, and pitch contents extracted from the sound data. Experiments were carried out on a data set of 499 30-second music clips spanning ambient, classical, fusion, and jazz. Classification into the ten adjective groups of Farnsworth (plus three additional groups), as well as into six supergroups formed by combining these basic groups, was attempted. Reasonably accurate performance was achieved for some groups and supergroups.
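
    The abstract describes a one-vs-rest decomposition of the multiclass problem, with one binary SVM per emotion group. A minimal sketch of that setup, assuming the timbral, rhythmic, and pitch features have already been extracted into a feature matrix (the random data below is a placeholder, not the paper's features):

```python
# Hypothetical sketch: one-vs-rest SVMs over pre-extracted audio features.
# Feature extraction (timbral texture, rhythm, pitch) is assumed to have been
# done elsewhere; X is (n_clips, n_features), y holds adjective-group labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(499, 30))          # placeholder feature matrix
y = rng.integers(0, 13, size=499)       # 13 groups (10 Farnsworth + 3 extra)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Decompose the multiclass problem into one binary SVM per emotion group.
clf = OneVsRestClassifier(make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```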

    Music Emotion Capture: Ethical issues around emotion-based music generation

    People's emotions are not always detectable, e.g. if a person has difficulty expressing emotions, or if people are geographically separated and communicating online. Brain-computer interfaces (BCI) could enhance non-verbal communication of emotion, particularly in detecting and responding to users' emotions, e.g. in music therapy or interactive software. Our pilot study Music Emotion Capture [1] detects, models and sonifies people's emotions based on their real-time emotional state, measured by mapping EEG feedback onto a valence-arousal emotional model [2] based on [3]. Though many practical applications emerge, the work raises several ethical questions which need careful consideration. This poster discusses these ethical issues. Are the work's benefits (e.g. improved user experiences; music therapy; increased emotion communication abilities; enjoyable applications) important enough to justify navigating the ethical issues that arise (e.g. privacy; control of the representation of, and reaction to, users' emotional state; consequences of detection errors; the loop of using emotion to generate music while the music in turn affects emotion, with the human in the process as an "intruder")? [1] Langroudi, G., Jordanous, A., & Li, L. (2018). Music Emotion Capture: emotion-based generation of music using EEG. Emotion Modelling and Detection in Social Media and Online Interaction symposium @ AISB 2018, Liverpool. [2] Paltoglou, G., & Thelwall, M. (2012). Seeing stars of valence and arousal in blog posts. IEEE Transactions on Affective Computing, 4(1). [3] Russell, J.A. (1980). 'A circumplex model of affect', Journal of Personality and Social Psychology, 3
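
    The abstract does not specify how EEG features are mapped to valence and arousal; below is a hypothetical sketch of the downstream step it describes, placing a (valence, arousal) estimate in a quadrant of Russell's circumplex model and choosing music-generation parameters. The labels, tempi and modes are illustrative assumptions, not the authors' mapping.

```python
# Hypothetical sketch: map a (valence, arousal) estimate onto a quadrant of
# Russell's circumplex model and pick an illustrative set of music parameters.
# How valence/arousal are derived from EEG is not shown here.

def valence_arousal_quadrant(valence: float, arousal: float) -> str:
    """Return an emotion label for a point on the valence-arousal plane."""
    if valence >= 0 and arousal >= 0:
        return "happy/excited"
    if valence < 0 and arousal >= 0:
        return "angry/tense"
    if valence < 0 and arousal < 0:
        return "sad/depressed"
    return "calm/relaxed"

def sonification_params(label: str) -> dict:
    """Illustrative (assumed) mapping from emotion label to generation parameters."""
    return {
        "happy/excited": {"tempo": 140, "mode": "major"},
        "angry/tense":   {"tempo": 150, "mode": "minor"},
        "sad/depressed": {"tempo": 70,  "mode": "minor"},
        "calm/relaxed":  {"tempo": 80,  "mode": "major"},
    }[label]

# Example: a hypothetical EEG-derived estimate in [-1, 1] on each axis.
label = valence_arousal_quadrant(valence=0.4, arousal=-0.6)
print(label, sonification_params(label))
```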

    EMOTION BASED MUSIC PLAYER

    This work describes the development of an Emotion Based Music Player, a computer application intended for all types of users, and music lovers in particular. Because selecting songs manually is tedious, most people simply play their playlist in random order, so some of the selected songs do not match the user's current emotion. Moreover, no commonly used music player can play songs based on the user's emotion. The proposed model extracts the user's facial expression and thereby detects the user's emotion; the music player then plays songs according to the category of emotion detected. The aim is to give music lovers a better listening experience. The emotions covered by the proposed model are normal, sad, surprise and happy. The system relies mainly on image processing and face detection technologies. The input to the proposed model is JPEG still images available online. The performance of the model is evaluated by loading forty still images (ten for each emotion category) and testing the accuracy of emotion detection. Based on the test results, the proposed model achieves a recognition rate of 85%.
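
    A minimal sketch of the pipeline the abstract outlines: detect a face in a still JPEG, classify its expression into one of the four categories, and pick a song from an emotion-labelled playlist. The expression classifier, playlist contents and input file name below are placeholders, since the abstract does not name the detector or classifier actually used.

```python
# Hypothetical sketch of an emotion-based song selector over still images.
import random
import cv2

EMOTIONS = ["normal", "sad", "surprise", "happy"]
PLAYLISTS = {e: [f"{e}_song_{i}.mp3" for i in range(3)] for e in EMOTIONS}

def classify_expression(face_img) -> str:
    """Placeholder: a real system would run a trained expression model here."""
    return random.choice(EMOTIONS)

def pick_song(image_path: str) -> str:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return random.choice(PLAYLISTS["normal"])   # fall back to neutral
    x, y, w, h = faces[0]
    emotion = classify_expression(gray[y:y + h, x:x + w])
    return random.choice(PLAYLISTS[emotion])

print(pick_song("user_photo.jpeg"))   # hypothetical input file
```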

    Biometric responses to music-rich segments in films: the CDVPlex

    Summarising or generating trailers for films involves finding the highlights within those films: the segments where we become most afraid, happy, sad, annoyed, excited, etc. In this paper we explore three questions related to automatic detection of film highlights by measuring the physiological responses of viewers. Firstly, whether emotional highlights can be detected through viewer biometrics; secondly, whether individuals watching a film in a group experience emotional reactions similar to others in the group; and thirdly, whether the presence of music in a film correlates with the occurrence of emotional highlights. We analyse the results of an experiment known as the CDVPlex, in which we monitored and recorded physiological reactions from people as they viewed films in a controlled cinema-like environment. A selection of films was manually annotated for the locations of their emotive content. We then studied the physiological peaks identified among participants while viewing the same film and how these correlated with emotion tags and with music. We conclude that these are highly correlated and that music-rich segments of a film do act as a catalyst in stimulating viewer response, though we do not know exactly which emotions the viewers were experiencing. The results of this work could impact the way we index movie content on PVRs, for example by giving special significance to movie segments that are most likely to be highlights.
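
    A hypothetical sketch of the kind of peak-versus-annotation comparison the abstract describes, locating peaks in a galvanic skin response trace and checking how many coincide with manually annotated music-rich segments. The signal, sample rate, threshold and segment times below are made up for illustration; they are not the CDVPlex data.

```python
# Hypothetical sketch: compare physiological peaks with annotated music segments.
import numpy as np
from scipy.signal import find_peaks

fs = 4.0                                    # GSR sample rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)               # a 10-minute viewing excerpt
gsr = np.random.default_rng(1).normal(0, 0.05, t.size)
gsr[int(120 * fs)] += 1.5                   # injected responses for illustration
gsr[int(400 * fs)] += 1.2

music_segments = [(110.0, 150.0), (300.0, 330.0)]   # (start, end) in seconds

peaks, _ = find_peaks(gsr, height=0.5)      # assumed peak-height threshold
peak_times = t[peaks]
in_music = [any(s <= p <= e for s, e in music_segments) for p in peak_times]
print(f"{sum(in_music)}/{len(peak_times)} physiological peaks fall in music-rich segments")
```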

    The CDVPlex biometric cinema: sensing physiological responses to emotional stimuli in film

    We describe a study conducted to investigate the potential correlations between human subject responses to emotional stimuli in movies and observed biometric responses. The experimental set-up and procedure are described, including details of the range of sensors used to detect and record physiological data (such as heart rate, galvanic skin response, body temperature and movement). Finally, applications and future analysis of the results of the study are discussed.

    Musical Robots For Children With ASD Using A Client-Server Architecture

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). People with Autistic Spectrum Disorders (ASD) are known to have difficulty recognizing and expressing emotions, which affects their social integration. Leveraging and integrating recent advances in interactive robots and music therapy, we have designed musical robots that can facilitate social and emotional interactions of children with ASD. The robots communicate with children with ASD while detecting their emotional states and physical activities, and then generate real-time sonification based on the interaction data. Given that we envision the use of multiple robots with children, we have adopted a client-server architecture: each robot and sensing device acts as a terminal, while the sonification server processes all the data and generates harmonized sonification. After describing our goals for the use of sonification, we detail the system architecture and ongoing research scenarios. We believe that the present paper offers a new perspective on sonification applications for assistive technologies.
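
    A minimal sketch of the client-server split the abstract describes, with a robot terminal sending one interaction reading to a central sonification server. The JSON message format, host, port and field names are assumptions for illustration, not the authors' protocol.

```python
# Hypothetical sketch: robot terminals stream interaction data to a server.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5005   # assumed local endpoint

def sonification_server() -> None:
    """Receive one interaction packet; a real server would sonify the data."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            data = json.loads(conn.recv(4096).decode())
            print("server received:", data)

def robot_terminal(robot_id: str) -> None:
    """Send one emotion/activity reading to the sonification server."""
    packet = {"robot": robot_id, "emotion": "happy", "activity": "clapping"}
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(json.dumps(packet).encode())

server = threading.Thread(target=sonification_server, daemon=True)
server.start()
time.sleep(0.2)          # give the server time to bind before connecting
robot_terminal("robot-1")
server.join(timeout=2)
```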