Affective Music Information Retrieval
Much of the appeal of music lies in its power to convey emotions/moods and to
evoke them in listeners. As a consequence, the past decade has witnessed a growing
interest in modeling emotions from musical signals in the music information
retrieval (MIR) community. In this article, we present a novel generative
approach to music emotion modeling, with a specific focus on the
valence-arousal (VA) dimension model of emotion. The presented generative
model, called \emph{acoustic emotion Gaussians} (AEG), better accounts for the
subjectivity of emotion perception by the use of probability distributions.
Specifically, it learns a Gaussian mixture model in the VA space from the
emotion annotations of multiple subjects, with prior constraints on the
corresponding acoustic features of the training music pieces. Such a
computational framework is technically sound, capable of learning in an online
fashion, and thus applicable to a variety of applications, including
user-independent (general) and user-dependent (personalized) emotion
recognition and emotion-based music retrieval. We report evaluations of the
aforementioned applications of AEG on a large-scale emotion-annotated corpus,
AMG1608, to demonstrate the effectiveness of AEG and to showcase how
evaluations are conducted for research on emotion-based MIR. Directions of
future work are also discussed.
Comment: 40 pages, 18 figures, 5 tables, author version
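As a rough illustration of the modeling idea (not the authors' AEG implementation), one can fit a Gaussian mixture to a single clip's valence-arousal annotations with scikit-learn; the annotation values below are hypothetical:

```python
# Minimal sketch: modeling per-clip emotion annotations as a Gaussian
# mixture in the valence-arousal (VA) plane. Hypothetical data; the
# actual AEG model additionally ties mixture components to acoustic features.
import numpy as np
from sklearn.mixture import GaussianMixture

# VA annotations from multiple subjects for one clip, each in [-1, 1]^2
annotations = np.array([
    [0.6, 0.4], [0.7, 0.5], [0.5, 0.6],   # cluster: happy/excited
    [0.4, -0.2], [0.5, -0.1],             # cluster: calm/content
])

# Fit a 2-component mixture; each component is a 2-D Gaussian in VA space
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(annotations)

# The mixture captures subjectivity: a distribution over VA, not a point
print("means:\n", gmm.means_)
print("weights:", gmm.weights_)

# Probability density of a candidate VA point under the learned model
print("log p(VA=[0.6, 0.4]):", gmm.score_samples([[0.6, 0.4]])[0])
```

The point of the distributional output is that two listeners can legitimately disagree; a single regression target in VA space would average that disagreement away.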
Music emotion recognition: a multimodal machine learning approach
Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community, and searching for music by emotion is one of the selection methods most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, requiring substantial manual effort to manage and keep it updated. The demand for innovative and adaptable search mechanisms that can be personalized according to users' emotional state has therefore gained increasing attention in recent years. This thesis addresses the music emotion recognition problem by presenting several classification models fed with textual features as well as audio attributes extracted from the music. We build both supervised and semi-supervised classification designs across four research experiments that address the emotional role of audio features such as tempo, acousticness, and energy, as well as the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, along with an unlabeled collection of more than 2.5 million Turkish documents, to build an accurate automatic emotion classification system. The analytical models were trained with several algorithms on cross-validated data using Python. The best performance attained with audio features alone was 44.2% accuracy, whereas textual features yielded better results, with accuracy scores of 46.3% and 51.3% under the supervised and semi-supervised learning paradigms, respectively. Finally, although we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
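As a rough sketch of the multimodal feature combination described above (the lyrics, audio values, and labels below are hypothetical; this is not the thesis code):

```python
# Sketch of a multimodal feature set: TF-IDF lyric features concatenated
# with audio attributes (tempo, acousticness, energy). Hypothetical data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lyrics = ["sunny happy day we dance", "tears fall in the lonely night"]
audio = np.array([[128.0, 0.2, 0.9],    # tempo (BPM), acousticness, energy
                  [ 70.0, 0.8, 0.2]])
labels = ["happy", "sad"]

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(lyrics).toarray()

# Combine textual and audio modalities into a single feature matrix
X = np.hstack([X_text, audio])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

In practice the two modalities would be scaled to comparable ranges before concatenation; the thesis reports that this early-fusion combination did not significantly outperform the textual features alone.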
Enabling Embodied Analogies in Intelligent Music Systems
The present methodology is aimed at cross-modal machine learning and uses
multidisciplinary tools and methods drawn from a broad range of areas and
disciplines, including music, systematic musicology, dance, motion capture,
human-computer interaction, computational linguistics and audio signal
processing. Main tasks include: (1) adapting wisdom-of-the-crowd approaches to
embodiment in music and dance performance to create a dataset of music and
music lyrics that covers a variety of emotions, (2) applying
audio/language-informed machine learning techniques to that dataset to identify
automatically the emotional content of the music and the lyrics, and (3)
integrating motion-capture data, recorded with a Vicon system, of dancers performing
to that music.
Comment: 4 pages
Text-based Sentiment Analysis and Music Emotion Recognition
Nowadays, with the expansion of social media, large amounts of user-generated
texts like tweets, blog posts or product reviews are shared online. Sentiment polarity
analysis of such texts has become highly attractive and is utilized in recommender
systems, market predictions, business intelligence and more. We also witness deep
learning techniques becoming top performers on those types of tasks. There are,
however, several problems that need to be solved for the efficient use of deep neural
networks in text mining and text polarity analysis.
First of all, deep neural networks are data-hungry. They need to be fed with
datasets that are big in size, cleaned and preprocessed, as well as properly labeled.
Second, the modern natural language processing concept of word embeddings as a
dense and distributed text feature representation solves sparsity and dimensionality
problems of the traditional bag-of-words model. Still, there are various uncertainties
regarding the use of word vectors: should they be generated from the same dataset
that is used to train the model, or is it better to source them from big and popular
collections that work as generic text feature representations? Third, it is not easy for
practitioners to find a simple and highly effective deep learning setup for various
document lengths and types. Recurrent neural networks are weak on longer texts,
and optimal convolution-pooling combinations are not easily conceived. It is thus
convenient to have generic neural network architectures that are effective and can
adapt to various texts, encapsulating much of design complexity.
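For concreteness, the two embedding-sourcing options in question might look like this (a minimal sketch using the gensim library; the toy corpus and the chosen pretrained model name are illustrative, and the download is sizable):

```python
# Sketch of the two word-embedding sourcing options discussed above.
from gensim.models import Word2Vec
import gensim.downloader as api

# Option 1: train word vectors on the task's own (small) corpus
corpus = [["this", "song", "feels", "happy"],
          ["such", "a", "sad", "melody"]]
own_vectors = Word2Vec(corpus, vector_size=100, window=5,
                       min_count=1).wv

# Option 2: load generic vectors pretrained on a large collection
generic_vectors = api.load("glove-wiki-gigaword-100")

# Both map a word to a dense vector; the thesis finds that for small
# datasets the generic, large-corpus vectors usually work better.
print(own_vectors["happy"].shape, generic_vectors["happy"].shape)
```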
This thesis addresses the above problems to provide methodological and practical
insights for utilizing neural networks for sentiment analysis of texts and achieving
state-of-the-art results. Regarding the first problem, the effectiveness of various
crowdsourcing alternatives is explored and two medium-sized and emotion-labeled
song datasets are created utilizing social tags. One of the research interests of Telecom
Italia was the exploration of relations between music emotional stimulation and
driving style. Consequently, a context-aware music recommender system that aims
to enhance driving comfort and safety was also designed. To address the second
problem, a series of experiments with large text collections of various contents and
domains was conducted. Word embeddings with different parameters were tested,
and the results revealed that their quality is influenced (mostly but not only) by the
size of the texts they were created from. When working with small text datasets, it is
thus important to source word features from popular and generic word embedding
collections. Regarding the third problem, a series of experiments involving convolutional
and max-pooling neural layers was conducted. Various patterns relating
text properties and network parameters with optimal classification accuracy were
observed. Combining convolutions of words, bigrams, and trigrams with regional
max-pooling layers in a couple of stacks produced the best results. The derived
architecture achieves competitive performance on sentiment polarity analysis of
movie, business and product reviews.
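The abstract does not spell out the exact architecture, but a minimal Keras sketch of the idea it describes, parallel convolutions over unigrams, bigrams, and trigrams followed by max-pooling, might look as follows (layer sizes are illustrative, and a single global pooling step stands in for the thesis's stacked regional pooling):

```python
# Sketch: parallel n-gram convolutions (widths 1, 2, 3) with max-pooling
# for sentiment polarity classification. Sizes are illustrative only.
from tensorflow.keras import layers, models

seq_len, vocab, emb_dim = 200, 20000, 100
inp = layers.Input(shape=(seq_len,))
emb = layers.Embedding(vocab, emb_dim)(inp)

# One convolution branch per n-gram width, each max-pooled
branches = []
for width in (1, 2, 3):
    c = layers.Conv1D(64, width, activation="relu")(emb)
    branches.append(layers.GlobalMaxPooling1D()(c))

merged = layers.Concatenate()(branches)
out = layers.Dense(1, activation="sigmoid")(merged)  # positive vs. negative

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```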
Given that labeled data are becoming the bottleneck of the current deep learning
systems, a future research direction could be the exploration of various data programming
possibilities for constructing even bigger labeled datasets. Investigation
of feature-level or decision-level ensemble techniques in the context of deep neural
networks could also be fruitful. Different feature types usually represent complementary
characteristics of the data. Combining word embeddings and traditional text
features, or utilizing recurrent networks on document splits and then aggregating the
predictions, could further increase the prediction accuracy of such models.
Crowdsourcing Emotions in Music Domain
An important source of intelligence for music emotion recognition today comes from user-provided
community tags about songs or artists. Recent crowdsourcing approaches, such as harvesting social tags,
designing collaborative games and web services, or using Mechanical Turk, are becoming popular in
the literature. They provide a cheap, quick, and efficient method, in contrast to professional labeling of songs,
which is expensive and does not scale to large datasets. In this paper we discuss the viability of
various crowdsourcing instruments providing examples from research works. We also share our own
experience, illustrating the steps we followed using tags collected from Last.fm for the creation of two
music mood datasets which have been made public. While processing affect tags from Last.fm, we observed that
they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs
than negative ones.
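A minimal sketch of the tag-harvesting step, using Last.fm's public track.getTopTags method (the API key and the small mood lexicon below are placeholders, not the paper's actual vocabulary):

```python
# Sketch of harvesting affect tags from Last.fm. track.getTopTags is a
# real API method; API_KEY and MOOD_TAGS below are placeholders.
import requests

API_KEY = "YOUR_LASTFM_API_KEY"  # placeholder
MOOD_TAGS = {"happy", "sad", "angry", "relaxed"}  # illustrative lexicon

def mood_tags(artist, track):
    resp = requests.get("http://ws.audioscrobbler.com/2.0/", params={
        "method": "track.gettoptags", "artist": artist, "track": track,
        "api_key": API_KEY, "format": "json",
    })
    tags = resp.json().get("toptags", {}).get("tag", [])
    # Keep only affect-related tags; note the positive bias discussed above
    return [t["name"].lower() for t in tags
            if t["name"].lower() in MOOD_TAGS]

print(mood_tags("Pharrell Williams", "Happy"))
```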
Using EEG-validated Music Emotion Recognition Techniques to Classify Multi-Genre Popular Music for Therapeutic Purposes
Music is observed to have significant beneficial effects on human mental health, especially for patients undergoing therapy and for older adults. Prior research on machine recognition of the emotion music induces, based on classifying low-level music features, has relied on subjective annotation to label data for classification. We validate this approach by using electroencephalography (EEG) to cross-check the music emotion predictions made from low-level music feature data as well as from collected subjective annotation data. Collecting 8-channel EEG data from 10 participants listening to segments of 40 songs from 5 different genres, we obtain a subject-independent classification accuracy of 98.2298% on EEG test data using an ensemble classifier. We also classify low-level music features to cross-check the music emotion predictions from music features against the predictions from EEG data, obtaining a classification accuracy of 94.9774%, again with an ensemble classifier. We establish links between specific genre preferences and perceived valence, supporting individualized approaches to music therapy. We then combine the classification predictions from the EEG data with the predictions from music feature data and subjective annotations, showing the similarity of the predictions made by these approaches and validating an integrated approach that uses music features and subjective annotation to classify music emotion. Finally, we use the music feature-based approach to classify 250 popular songs from 5 genres and build a playlist application that creates playlists based on existing psychological theory to provide emotional benefit to individuals, validating our playlist methodology as an effective method of inducing positive emotional responses.
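The abstract does not name the ensemble method used, so the following is only a generic sketch of ensemble classification with scikit-learn, using synthetic stand-ins for the EEG or low-level music feature vectors:

```python
# Generic ensemble-classification sketch (soft-voting over three base
# learners). The features and the ensemble are stand-ins, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))    # stand-in for EEG/music feature vectors
y = rng.integers(0, 4, size=200)  # stand-in emotion classes

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft",  # average class probabilities across base learners
)
ensemble.fit(X, y)
print("train accuracy:", ensemble.score(X, y))
```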