
    The TUM Approach to the MediaEval Music Emotion Task Using Generic Affective Audio Features

    This paper describes the TUM approach for the MediaEval Emotion in Music task, which consists of non-prototypical music retrieved from the web and annotated by crowdsourcing. We use Support Vector Machines and BLSTM recurrent neural networks for static and dynamic arousal and valence regression. A generic set of acoustic features is used that has proven effective for affect prediction across multiple domains. In the results, the best models explain 64% and 48% of the annotations' variance for arousal and valence in the static case, and an average Kendall's tau with the songs' emotion contours of .18 and .12 is achieved in the dynamic case.
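    The static regression part of such a setup can be sketched as follows: a minimal, hypothetical example using scikit-learn's SVR on pre-extracted per-song acoustic feature vectors. The feature extraction, dataset size, and hyper-parameters are placeholder assumptions for illustration, not the configuration used in the paper.

```python
# Minimal sketch of static arousal/valence regression with SVMs, assuming
# pre-extracted per-song acoustic feature vectors (e.g., openSMILE-style
# functionals) and crowdsourced labels. All data below are random placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

n_songs, n_features = 744, 260                    # hypothetical dataset size
rng = np.random.default_rng(0)
X = rng.standard_normal((n_songs, n_features))    # acoustic features per song
y_arousal = rng.standard_normal(n_songs)          # static arousal annotations
y_valence = rng.standard_normal(n_songs)          # static valence annotations

# One support vector regressor per target dimension (arousal, valence),
# with feature standardisation as is customary for SVMs.
for name, y in [("arousal", y_arousal), ("valence", y_valence)]:
    model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
    y_pred = cross_val_predict(model, X, y, cv=5)
    # R^2 corresponds to the "explained variance of the annotations" figure.
    print(f"{name}: R^2 = {r2_score(y, y_pred):.2f}")
```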