    Automatic Recognition of Emotion for Music Recommendation

    Music is widely associated with emotions. The automatic recognition of emotions from audio is very challenging because important factors such as personal experience and cultural background are not captured by the musical sounds. Currently, there are challenges associated with most steps of music emotion recognition (MER) systems, namely feature selection, the model of emotions, annotation methods, and the machine learning techniques used. This project uses different machine learning techniques to automatically associate musical features calculated from audio with annotations of emotions made by human listeners. The learned mapping between the feature space and the model of emotions can then be used to estimate the emotions associated with music the system has not previously been exposed to. Consequently, the system has the potential to recommend music to listeners based on emotional content.
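
    A minimal sketch of the pipeline the abstract describes, under assumed tooling: librosa for the audio features, scikit-learn for the learning step, and a two-dimensional valence/arousal emotion model. The file paths, feature choices, regressor, and annotation values are illustrative placeholders, not the project's actual setup.

    ```python
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestRegressor

    def extract_features(path):
        """Summarise an audio file as a fixed-length feature vector."""
        y, sr = librosa.load(path, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)
        # Mean-pool each frame-level feature over time.
        return np.concatenate([mfcc.mean(axis=1),
                               centroid.mean(axis=1),
                               chroma.mean(axis=1)])

    # Hypothetical training data: audio paths plus listener annotations
    # expressed as [valence, arousal] in [-1, 1].
    train_paths = ["happy_song.wav", "sad_song.wav"]      # placeholder files
    annotations = np.array([[0.8, 0.7], [-0.6, -0.4]])    # placeholder labels

    # Learn the mapping from the feature space to the emotion model.
    X = np.vstack([extract_features(p) for p in train_paths])
    model = RandomForestRegressor(n_estimators=100).fit(X, annotations)

    # Estimate the emotion of previously unseen music, then recommend the
    # annotated track whose emotion is closest to the prediction.
    query = extract_features("new_song.wav")              # placeholder file
    predicted = model.predict(query.reshape(1, -1))[0]
    distances = np.linalg.norm(annotations - predicted, axis=1)
    print("closest emotional match:", train_paths[int(np.argmin(distances))])
    ```

    In a real system the annotations would come from a listener study and the recommendation step would search a large catalogue of emotion estimates rather than the training set, but the structure stays the same: features, a learned feature-to-emotion map, and nearest-neighbour retrieval in emotion space.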