People-Powered Music: Using User-Generated Tags and Structure in Recommendations
Music recommenders often rely on experts to classify song facets like genre and mood, but user-generated folksonomies hold some advantages over expert classifications—folksonomies can reflect the same real-world vocabularies and categorizations that end users employ. We present an approach for using crowd-sourced common sense knowledge to structure user-generated music tags into a folksonomy, and describe how to use this approach to make music recommendations. We then empirically evaluate our “people-powered” structured content recommender against a more traditional recommender. Our results show that participants slightly preferred the unstructured recommender, rating more of its recommendations as “perfect” than they did for our approach. An exploration of the reasons behind participants’ ratings revealed that users behaved differently when tagging songs than when evaluating recommendations, and we discuss the implications of our results for future tagging and recommendation approaches.
Modeling Emotion Influence from Images in Social Networks
Images have become an important and prevalent way for users to express their activities, opinions, and emotions. In a social network, individual emotions may be influenced by others, in particular by close friends. We focus on understanding how users embed emotions into the images they upload to social websites and how social influence plays a role in changing users' emotions. We first verify the existence of emotion influence in image networks, and then propose a probabilistic factor-graph-based emotion influence model to answer the question of "who influences whom". Employing a real network from Flickr as experimental data, we study the effectiveness of the factors in the proposed model through in-depth data analysis. Our experiments also show that our model, by incorporating emotion influence, can significantly improve the accuracy (+5%) of predicting emotions from images. Finally, a case study is used as anecdotal evidence to further demonstrate the effectiveness of the proposed model.
Social software for music
Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200
FMA: A Dataset For Music Analysis
We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. The community's growing interest in feature and end-to-end learning is, however, restrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks by 16,341 artists across 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length, high-quality audio and pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies. Here we describe the dataset and how it was created, propose a train/validation/test split and three subsets, discuss some suitable MIR tasks, and evaluate some baselines for genre recognition. Code, data, and usage examples are available at https://github.com/mdeff/fma
Comment: ISMIR 2017 camera-ready
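The train/validation/test split the FMA abstract proposes can be sketched generically: each track carries a split label, and consumers partition the track table by that label. The sketch below uses a toy track list; the column names (`track_id`, `genre`, `split`) are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch: partitioning a track table into the training/validation/test
# subsets a per-track split label defines. Field names here are
# illustrative assumptions, not FMA's actual metadata schema.

tracks = [
    {"track_id": 1, "genre": "Rock", "split": "training"},
    {"track_id": 2, "genre": "Jazz", "split": "validation"},
    {"track_id": 3, "genre": "Rock", "split": "test"},
    {"track_id": 4, "genre": "Folk", "split": "training"},
]

def partition(rows):
    """Group track rows by their 'split' label."""
    subsets = {"training": [], "validation": [], "test": []}
    for row in rows:
        subsets[row["split"]].append(row)
    return subsets

subsets = partition(tracks)
print({name: len(rows) for name, rows in subsets.items()})
# {'training': 2, 'validation': 1, 'test': 1}
```

Keeping the split as per-track metadata (rather than shuffling at load time) makes genre-recognition baselines reproducible across experiments, which is the point of publishing a fixed split.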