3 research outputs found

    Crowdsourcing Emotions in Music Domain

    An important source of intelligence for music emotion recognition today comes from user-provided community tags about songs or artists. Recent crowdsourcing approaches, such as harvesting social tags, designing collaborative games and web services, or using Mechanical Turk, are becoming popular in the literature. They provide a cheap, quick and efficient alternative to professional labeling of songs, which is expensive and does not scale to large datasets. In this paper we discuss the viability of various crowdsourcing instruments, providing examples from research works. We also share our own experience, illustrating the steps we followed using tags collected from Last.fm to create two music mood datasets, which are made public. While processing the affect tags of Last.fm, we observed that they tend to be biased towards positive emotions; the resulting datasets thus contain more positive songs than negative ones.
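    The positivity-bias check described in the abstract can be sketched as a simple valence count over harvested tags. The tag-to-valence lexicon below is illustrative only, not the authors' actual vocabulary, and the sample tags are invented for the example.

    ```python
    # Minimal sketch: bucket raw affect tags (e.g. harvested from Last.fm)
    # into positive/negative valence and count them. The two lexicons are
    # illustrative placeholders, not a published mood vocabulary.
    POSITIVE = {"happy", "joyful", "upbeat", "cheerful", "fun", "uplifting"}
    NEGATIVE = {"sad", "melancholy", "depressing", "gloomy", "angry"}

    def valence_counts(tags):
        """Count tags per valence bucket; tags outside both lexicons are ignored."""
        counts = {"positive": 0, "negative": 0}
        for tag in tags:
            t = tag.strip().lower()
            if t in POSITIVE:
                counts["positive"] += 1
            elif t in NEGATIVE:
                counts["negative"] += 1
        return counts

    # Affect tags on social platforms tend to skew positive, as the paper notes.
    sample = ["Happy", "upbeat", "sad", "fun", "cheerful", "electronic"]
    print(valence_counts(sample))  # {'positive': 4, 'negative': 1}
    ```

    A real pipeline would first fetch tags per track (e.g. via the Last.fm track.getTopTags API method) before applying a filter of this kind.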

    Semantic annotation of digital music

    In recent times, digital music items on the internet have been evolving into a vast information space where consumers try to find the piece of music of their choice by means of search engines. The current trend of searching for music via music consumers' keywords/tags is unable to provide satisfactory search results. It is argued that search and retrieval of music can be significantly improved provided end-users' tags are associated with semantic information in the form of acoustic metadata, the latter being easy to extract automatically from digital music items. This paper presents a lightweight ontology that enables music producers to annotate music against the MPEG-7 description (with its acoustic metadata); the generated annotations may in turn be used to deliver meaningful search results. Several potential multimedia ontologies have been explored, and a music annotation ontology, named mpeg-7Music, has been designed to serve as a backbone for annotating music items.

    A light-weight concept ontology for annotating digital music.

    In recent times, digital music items on the internet have been evolving into an enormous information space where we try to find the piece of information of our choice by means of search engines. The current trend of searching for music via music consumers' keywords/tags is unable to provide satisfactory search results; search and retrieval of music may be improved if music metadata is created from semantic information obtained by associating end-users' tags with acoustic metadata, which is easy to extract automatically from digital music items. Based on this observation, our research objective was to investigate how music producers may annotate music against the MPEG-7 description (with its acoustic metadata) to deliver meaningful search results. In addressing this question, we investigated the potential of multimedia ontologies to serve as a backbone for annotating music items, as well as prospective application scenarios of semantic technologies in the digital music industry. Our main contribution in this thesis is the first prototype of the mpeg-7Music annotation ontology, which establishes a mapping of end-user tags to MPEG-7 acoustic metadata and extends upper-level multimedia ontologies with end-user tags. Additionally, we have developed a semi-automatic annotation tool to demonstrate the potential of the mpeg-7Music ontology to serve as a lightweight concept ontology for annotating digital music by music producers. The proposed ontology has been encoded in the dominant semantic web ontology standard, OWL 1.0, and provides a standard, interoperable representation of the generated semantic metadata. Our innovations in designing the semantic annotation tool focused on supporting the music annotation vocabulary (i.e. mpeg-7Music) in an attempt to turn the music metadata information space into a knowledge base.
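    The core idea of the mapping the abstract describes (end-user tags linked to MPEG-7 acoustic metadata for one music item) can be sketched as RDF-style triples. The namespace IRIs, property names, and descriptor name below are hypothetical placeholders, not the actual mpeg-7Music vocabulary.

    ```python
    # Toy sketch of a tag-to-acoustic-metadata annotation, represented as
    # plain (subject, predicate, object) triples. All IRIs and names here
    # are illustrative assumptions, not the ontology's real terms.
    MPEG7 = "urn:mpeg:mpeg7:schema:2001#"          # MPEG-7 schema URN
    M7M = "http://example.org/mpeg-7Music#"        # hypothetical ontology IRI

    def annotate(track_iri, tag, descriptor):
        """Link a track to an end-user tag and an MPEG-7 acoustic descriptor."""
        return [
            (track_iri, M7M + "hasTag", tag),
            (track_iri, M7M + "hasDescriptor", MPEG7 + descriptor),
        ]

    triples = annotate("http://example.org/track/42", "energetic", "AudioPower")
    for s, p, o in triples:
        print(s, p, o)
    ```

    In practice the ontology would be authored in OWL and the triples serialized with an RDF toolkit rather than built as tuples; the sketch only shows the shape of the tag-metadata association.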