
    Using EEG-validated Music Emotion Recognition Techniques to Classify Multi-Genre Popular Music for Therapeutic Purposes

    Music has been observed to have significant beneficial effects on human mental health, especially for patients undergoing therapy and for older adults. Prior research on machine recognition of the emotion music induces, via classification of low-level music features, has relied on subjective annotation to label data for classification. We validate this practice using an electroencephalography (EEG)-based approach to cross-check the music emotion predictions made from low-level music feature data as well as from collected subjective annotation data. Collecting 8-channel EEG data from 10 participants listening to segments of 40 songs from 5 different genres, we obtain a subject-independent classification accuracy on EEG test data of 98.2298% using an ensemble classifier. We also classify low-level music features to cross-check the music emotion predictions from music features against the predictions from EEG data, obtaining a classification accuracy of 94.9774% with an ensemble classifier. We establish links between specific genre preference and perceived valence, validating individualized approaches towards music therapy. We then combine the classification predictions from the EEG data with the predictions from music feature data and subjective annotations, showing the similarity of the predictions made by these approaches and validating an integrated approach that uses music features and subjective annotation to classify music emotion. We use the music feature-based approach to classify 250 popular songs from 5 genres and create a musical playlist application that builds playlists based on existing psychological theory to contribute emotional benefit to individuals, validating our playlist methodology as an effective method to induce positive emotional response.
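
    A minimal sketch of the ensemble classification used above, assuming feature vectors have already been extracted per song segment; the synthetic data, feature dimensions, and choice of base learners are placeholders, not the authors' exact configuration.

    # Ensemble emotion classification over pre-extracted feature vectors.
    # The data below is synthetic; real inputs would be EEG or low-level
    # music feature vectors with emotion-class labels.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))        # placeholder: 200 segments x 20 features
    y = rng.integers(0, 4, size=200)      # placeholder: 4 emotion classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Soft-voting ensemble over three heterogeneous base learners.
    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="soft",
    )
    ensemble.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))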

    An Analysis of Facial Expression Recognition Techniques

    In the present era of technology, we need applications that are easy to use and user-friendly, so that even people with specific disabilities can use them easily. Facial Expression Recognition plays a vital role, and poses challenges, in the computer vision and pattern recognition communities, and has attracted much attention due to its potential applications in areas such as human-machine interaction, surveillance, robotics, driver safety, non-verbal communication, entertainment, health care, and the study of psychology. Facial Expression Recognition is of major importance within face recognition for image understanding and analysis. Many algorithms have been implemented under both static (uniform background, identical poses, similar illumination) and dynamic (position variation, partial occlusion, orientation, varying lighting) conditions. In general, facial expression recognition consists of three main steps: first face detection, then feature extraction, and finally classification. In this survey paper, we discuss different types of facial expression recognition techniques, the various methods they use, and their performance measures.
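
    A minimal sketch of the three-step pipeline named above (face detection, feature extraction, classification); the Haar cascade, HOG descriptor, and linear SVM are one common combination among the surveyed families of methods, chosen here purely for illustration.

    # Generic FER pipeline sketch: detect a face, describe it, classify it.
    # Detector/feature/classifier choices are illustrative only.
    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_descriptor(gray):
        """Detect the largest face in a grayscale image and return its HOG vector."""
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detection
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        return hog(crop, orientations=8, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # Training would pair descriptors with expression labels (hypothetical names):
    # X = np.stack([face_descriptor(img) for img in images])
    # clf = LinearSVC().fit(X, labels)   # labels: happy, sad, angry, ...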

    A Comprehensive Review on Audio based Musical Instrument Recognition: Human-Machine Interaction towards Industry 4.0

    Over the last two decades, the application of machine technology has shifted from industrial to residential use. Further, advances in the hardware and software sectors have brought machine technology to its foremost application, human-machine interaction, a form of multimodal communication. Multimodal communication refers to the integration of various modalities of information, such as speech, image, music, gesture, and facial expression. Music is the non-verbal type of communication that humans often use to express their minds. Thus, Music Information Retrieval (MIR) has become a booming field of research and has gained a lot of interest from the academic community, the music industry, and the vast population of multimedia users. The problem in MIR is accessing and retrieving a specific type of music, on demand, from extensive music data. The most inherent problem in MIR is music classification. The essential MIR tasks are artist identification, genre classification, mood classification, music annotation, and instrument recognition. Among these, instrument recognition is a vital sub-task in MIR for various reasons, including retrieval of music information, sound source separation, and automatic music transcription. In recent years, many researchers have reported different machine learning techniques for musical instrument recognition and shown some of them to perform well. This article provides a systematic, comprehensive review of the advanced machine learning techniques used for musical instrument recognition. We stress the different audio feature descriptors and the common choices of classifier used for musical instrument recognition. This review article emphasizes recent developments in music classification techniques and discusses a few associated future research problems.
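
    Most of the reviewed systems pair an audio feature descriptor with a classifier learner; a minimal sketch using MFCC statistics, one of the common descriptor choices, follows. The file paths, labels, and hyperparameters are assumptions.

    # Instrument-recognition front end: summarise each clip's MFCCs into a
    # fixed-size vector and feed the vectors to a classifier.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def clip_descriptor(path, n_mfcc=20):
        """Load an audio clip and return per-coefficient MFCC mean and std."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical usage with a labelled clip list:
    # X = np.stack([clip_descriptor(p) for p in clip_paths])
    # clf = RandomForestClassifier(n_estimators=300).fit(X, instrument_labels)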

    Speech emotion recognition with artificial intelligence for contact tracing in the COVID-19 pandemic

    If understanding sentiments is already a difficult task in human-human communication, it becomes extremely challenging when a human-computer interaction takes place, as for instance in chatbot conversations. In this work, a machine learning neural network-based Speech Emotion Recognition system is presented to perform emotion detection in a chatbot virtual assistant whose task was to perform contact tracing during the COVID-19 pandemic. The system was tested on a novel dataset of audio samples provided by the company Blu Pantheon, which developed virtual agents capable of autonomously performing contact tracing for individuals positive for COVID-19. The dataset provided was unlabelled with respect to the emotions associated with the conversations. Therefore, the work was structured using a transfer-learning strategy of sorts: first, the model was trained using the labelled and publicly available Italian-language dataset EMOVO Corpus; the accuracy achieved in the testing phase reached 92%. To the best of the authors' knowledge, this work represents the first example of chatbot speech emotion recognition in the context of contact tracing, shedding light on the importance of such techniques in virtual assistants and chatbot conversational contexts for the assessment of psychological human status. The code of this work was publicly released at: https://github.com/fp1acm8/SE
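
    The two-stage strategy described above, training on the labelled EMOVO corpus and then applying the model to the unlabelled chatbot audio, might be sketched as follows; the MFCC features and small network are stand-ins, since the paper's actual architecture is not reproduced here.

    # Two-stage sketch: fit an emotion classifier on a labelled corpus,
    # then label unlabelled chatbot recordings. Features and model are
    # stand-ins, not the published architecture.
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier

    def mfcc_vector(path, n_mfcc=40):
        """Summarise a recording as the time-average of its MFCCs."""
        y, sr = librosa.load(path, sr=16000, mono=True)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # Stage 1: supervised training on the labelled corpus (paths/labels assumed).
    # X_emovo = np.stack([mfcc_vector(p) for p in emovo_paths])
    # model = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
    # model.fit(X_emovo, emovo_emotion_labels)

    # Stage 2: inference on the unlabelled chatbot recordings.
    # pseudo = model.predict(np.stack([mfcc_vector(p) for p in chatbot_paths]))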

    A methodology for contextual recommendation using artificial neural networks

    “A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy”. Recommender systems are an advanced form of software application, more specifically decision-support systems, that efficiently assist users in finding items of interest. Recommender systems have been applied to many domains, from music to e-commerce, movies to software service delivery, and tourism to news, by exploiting available information to predict and provide recommendations to the end user. The suggestions generated by recommender systems narrow down the list of items that a user might otherwise overlook due to the huge variety of similar items or the user's lack of experience in the particular domain of interest. While the performance of traditional recommender systems, which rely on relatively simple information such as content and user filters, is widely accepted, their predictive capability performs poorly when the local context of the user and situated actions play a significant role in the final decision. Therefore, the acceptance and incorporation of user context as a significant feature, and the development of recommender systems built on that premise, has become an active area of research requiring further investigation of the underlying algorithms and methodology. This thesis focuses on the categorisation of contextual and non-contextual features within the domain of context-aware recommender systems and their respective evaluation. Further, the application of the Multilayer Perceptron (MLP) model for generating predictions and ratings from contextual and non-contextual features for contextual recommendation is presented, with support from relevant literature and empirical evaluation. An evaluation specifically of employing artificial neural networks (ANNs) in the proposed methodology is also presented. The work emphasizes both algorithms and methodology, with three points of consideration: contextual features and ratings of particular items/movies are exploited in several representations to improve the accuracy of the recommendation process using artificial neural networks (ANNs); context features are combined with user features to further improve the accuracy of a context-aware recommender system; and lastly, a combination of the item/movie features is investigated within the recommendation process. The proposed approach is evaluated on the LDOS-CoMoDa dataset and the results are compared with state-of-the-art approaches from relevant published literature.
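
    A minimal sketch of the core idea, concatenating user, item, and context features into a single input vector for an MLP that predicts ratings; the toy interactions and context fields below loosely echo LDOS-CoMoDa and are assumptions, not the thesis's actual feature set.

    # Contextual rating prediction: one-hot encode (user, item, context)
    # tuples and regress the rating with an MLP. Data is a toy placeholder.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import OneHotEncoder

    rows = np.array([[1, 10, "happy", "alone"],
                     [1, 11, "sad", "partner"],
                     [2, 10, "happy", "family"]], dtype=object)
    ratings = np.array([4.0, 2.0, 5.0])

    enc = OneHotEncoder(handle_unknown="ignore")
    X = enc.fit_transform(rows).toarray()   # one-hot user + item + context

    mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    mlp.fit(X, ratings)
    print(mlp.predict(enc.transform([[2, 11, "sad", "alone"]]).toarray()))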

    Affective Man-Machine Interface: Unveiling human emotions through biosignals

    As has been known for centuries, humans exhibit an electrical profile. This profile is altered by various psychological and physiological processes, which can be measured through biosignals, e.g., electromyography (EMG) and electrodermal activity (EDA). These biosignals can reveal our emotions and, as such, can serve as an advanced man-machine interface (MMI) for empathic consumer products. However, such an MMI requires the correct classification of biosignals into emotion classes. This chapter starts with an introduction to biosignals for emotion detection. Next, a state-of-the-art review of automatic emotion classification is presented. Moreover, guidelines for affective MMI are presented. Subsequently, a study is presented that explores the use of EDA and three facial EMG signals to determine neutral, positive, negative, and mixed emotions, using recordings of 21 people. A range of techniques is tested, resulting in a generic framework for automated emotion classification with up to 61.31% correct classification of the four emotion classes, without the need for personal profiles. Among various other directives for future research, the results emphasize the need for parallel processing of multiple biosignals.
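
    A minimal sketch of the classification setup the chapter explores, simple statistics per signal window from EDA plus three facial EMG channels fed to a four-class classifier; the synthetic signals and the choice of classifier are placeholders.

    # Window-level biosignal emotion classification: per-channel statistics
    # as features, four emotion classes as targets. Signals are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    windows = rng.normal(size=(84, 4, 512))   # windows x channels (EDA + 3 EMG) x samples
    labels = rng.integers(0, 4, size=84)      # neutral/positive/negative/mixed

    def window_features(w):
        """Mean, standard deviation, and peak-to-peak amplitude per channel."""
        return np.concatenate([w.mean(axis=1), w.std(axis=1),
                               w.max(axis=1) - w.min(axis=1)])

    X = np.stack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)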

    What you see is what you feel: Top-down emotional effects in face detection

    Face detection is an initial step in many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Furthermore, whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. In four studies and a computational model, we investigated how emotions affect mental representations of faces and how facial representations could be used to transmit and communicate people's emotional states. To this end, we used an adapted reverse correlation technique suggested by Gill et al. (2019), which was based on an earlier idea, the 'Superstitious Approach' (Gosselin & Schyns, 2003). In Experiment 1 we measured how naturally occurring anxiety and depression, caused by external factors, affected people's mental representations of faces. In two sessions, on separate days, participants (coders) were presented with 'colourful' visual noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments identified by the coders as a face, we reconstructed the pictorial mental representation utilised by each participant in the identification process. Across coders, we found significant correlations between changes in the size of the mental representation of faces and changes in their level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations can reflect their emotional state, we conducted a validation study (Experiment 2) with a group of naïve participants (verifiers) who were asked to classify the reconstructed mental representations of faces by emotion. Thus, we assessed whether the mental representations communicate coders' emotional states to others. The analysis showed no significant correlation between coders' emotional states, depicted in their mental representations of faces, and verifiers' evaluation scores. In Experiment 3, we investigated how different induced moods, negative and positive, affected mental representations of faces. Coders underwent two different mood induction conditions during two separate sessions. They were presented with the same 'colourful' noise stimuli used in Experiment 1 and asked to detect faces. We were able to reconstruct pictorial mental representations of faces based on the identified fragments. The analysis showed a significant negative correlation between changes in coders' mood along the dimension of arousal and changes in the size of their mental representation of faces. Similarly to Experiment 2, we conducted a validation study (Experiment 4) to investigate whether coders' mood could have been communicated to others through their mental representations of faces. As in Experiment 2, we found no correlation between coders' mood, depicted in their mental representations of faces, and verifiers' evaluation of the intensity of the transmitted emotional expression. Lastly, we tested a preliminary computational model (Experiment 5) to classify and predict coders' emotional states based on their reconstructed mental representations of faces. In spite of the small number of training examples and the high dimensionality of the input, the model performed just above chance level.
    Future studies should look at the possibility of improving the computational model by using a larger training set and testing other classifiers. Overall, the present work confirmed the presence of facial templates used during face detection. It provides an adapted version of a reverse correlation technique that can be used to access mental representations of faces with a significant reduction in the number of trials. Lastly, it provides evidence of how emotions can influence the size of mental representations of faces.
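
    The reverse-correlation reconstruction at the heart of the 'Superstitious Approach' amounts to contrasting the noise stimuli on trials where the coder reported a face against the remaining trials; a toy version, with synthetic stimuli and responses standing in for the experiments' data:

    # Toy reverse correlation: the classification image is the difference
    # between the mean noise on "face" trials and on "no face" trials.
    import numpy as np

    rng = np.random.default_rng(0)
    noise_stimuli = rng.normal(size=(5000, 64, 64))   # trials x image pixels
    face_reported = rng.random(5000) < 0.1            # coder's yes/no responses

    classification_image = (noise_stimuli[face_reported].mean(axis=0)
                            - noise_stimuli[~face_reported].mean(axis=0))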