
    WASABI: a Two Million Song Database Project with Audio and Cultural Metadata plus WebAudio enhanced Client Applications

    This paper presents the WASABI project, started in 2017, which aims at (1) the construction of a two million song knowledge base that combines metadata collected from music databases on the Web, metadata resulting from the analysis of song lyrics, and metadata resulting from audio analysis, and (2) the development of high-added-value semantic applications that exploit this knowledge base. A preliminary version of the WASABI database is already online and will be enriched throughout the project. The main originality of the project is the collaboration between the algorithms that extract semantic metadata from the Web and from song lyrics and the algorithms that work on the audio. The following WebAudio-enhanced applications are planned as companions for each song in the database: an online mixing table, guitar amp simulations with a virtual pedal-board, audio analysis visualization tools, annotation tools, and a similarity search tool that works by uploading audio extracts or by playing a melody on a MIDI device.
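    The abstract does not describe how the similarity search over uploaded audio extracts works; as a hedged illustration of one conventional baseline, the sketch below summarises each song as a mean MFCC vector and ranks catalogued songs by cosine similarity to the uploaded extract. The feature choice, the `song_features` index, and the file paths are assumptions for illustration, not the WASABI implementation.

```python
# Hypothetical sketch: query-by-audio-extract similarity search.
# Assumes each catalogued song is summarised as a mean MFCC vector;
# this is a common baseline, NOT the WASABI project's actual method.
import numpy as np
import librosa

def mfcc_fingerprint(path: str, n_mfcc: int = 20) -> np.ndarray:
    """Load an audio file and summarise it as a mean MFCC vector."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def rank_similar(query_path: str, song_features: dict[str, np.ndarray], top_k: int = 5):
    """Rank catalogued songs by cosine similarity to an uploaded extract."""
    q = mfcc_fingerprint(query_path)
    scores = {}
    for song_id, feat in song_features.items():
        scores[song_id] = float(
            np.dot(q, feat) / (np.linalg.norm(q) * np.linalg.norm(feat) + 1e-9)
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Usage (illustrative paths):
# index = {"song_42": mfcc_fingerprint("catalog/song_42.mp3")}
# print(rank_similar("uploads/extract.wav", index))
```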

    Sounds like meritocracy to my ears: exploring the link between inequality in popular music and personal culture

    Extant research documents the impact of meritocratic narratives in news media that justify economic inequality. This paper inductively explores whether popular music is a source of cultural frames about inequality. We construct an original dataset combining user data from Spotify with lyrics from Genius and employ unsupervised computational text analysis to classify the content of the 3,660 most popular songs across 23 European countries. Drawing on Lizardo’s enculturation framework, we analyze lyrics through the lens of public culture and explore their link with individual beliefs as a reflection of personal culture. We find that, in more unequal societies, songs that frame inequalities as a structural issue (lyrics about ‘Struggle’ or omnipresent ‘Risks’) are more popular than those adopting a meritocratic frame (songs we describe as ‘Bragging Rights’ or those telling a ‘Rags to Riches’ tale). Moreover, we find that the presence of a certain frame in public culture is associated with the expression of frame-consistent individual beliefs about inequality. We conclude by reflecting on the promise of automatic text classification for the study of lyrics and on the theorized role of popular music in the study of culture, and by proposing avenues for future research. Accepted manuscript.
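    The abstract does not name the unsupervised method used to classify lyric content; as a rough illustration of the general approach, the sketch below clusters TF-IDF representations of lyrics and prints the top terms per cluster, which an analyst could then label with frames such as ‘Struggle’ or ‘Bragging Rights’. The toy corpus, cluster count, and manual labelling step are assumptions, not the authors’ pipeline.

```python
# Illustrative sketch of unsupervised lyric classification: cluster TF-IDF
# vectors and inspect top terms per cluster to assign interpretive frames.
# Generic baseline only, not the paper's actual method.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

lyrics = [                      # toy corpus; the real input would be 3,660 songs
    "worked all my life still can't pay the rent",
    "started from the bottom now we run the city",
    "diamonds on my wrist, money in the bank",
    "they closed the factory and left us in the cold",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(lyrics)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:5]
    print(f"cluster {c}:", [terms[i] for i in top])
# An analyst would read these term lists and assign frame labels
# (e.g. 'Struggle', 'Bragging Rights') by hand.
```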

    Explorative Visual Analysis of Rap Music

    Detecting references and similarities in music lyrics can be a difficult task. Crowdsourced knowledge platforms such as Genius can help in this process through user-annotated information about the artist and the song, but they fail to include visualizations that help users find similarities and structures on a higher, more abstract level. We propose a prototype to compute similarities between rap artists based on word embeddings of their lyrics crawled from Genius. Furthermore, the artists and their lyrics can be analyzed using an explorative visualization system that applies multiple visualization methods to support domain-specific tasks.
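    The abstract does not state how artist similarity is computed from the embeddings; a common approach, sketched below under that assumption, is to train word vectors on all lyrics, average them per artist, and compare artists with cosine similarity. The toy data and the averaging scheme are illustrative, not the prototype’s exact method.

```python
# Rough sketch of one way to compare rap artists via lyric word embeddings:
# train Word2Vec on all lyrics, average vectors per artist, compare with cosine.
# Toy data only; not the prototype's exact pipeline.
import numpy as np
from gensim.models import Word2Vec

artist_lyrics = {                      # tokenised toy lyrics per artist
    "artist_a": [["grind", "hustle", "block", "cold"],
                 ["hustle", "paper", "grind", "streets"]],
    "artist_b": [["love", "night", "dance", "shine"],
                 ["shine", "stars", "love", "sky"]],
}

all_sentences = [line for lines in artist_lyrics.values() for line in lines]
model = Word2Vec(all_sentences, vector_size=50, min_count=1, epochs=50, seed=1)

def artist_vector(lines):
    """Average the word vectors of every token an artist uses."""
    vecs = [model.wv[tok] for line in lines for tok in line]
    return np.mean(vecs, axis=0)

a = artist_vector(artist_lyrics["artist_a"])
b = artist_vector(artist_lyrics["artist_b"])
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"artist_a vs artist_b similarity: {cosine:.3f}")
```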

    Amplifying the Music Listening Experience through Song Comments on Music Streaming Platforms

    Music streaming services are increasingly popular among younger generations, who seek social experiences through personal expression and the sharing of subjective feelings in comments. However, such emotional aspects are often ignored by current platforms, which limits listeners' ability to find music that triggers specific personal feelings. To address this gap, this study proposes a novel approach that leverages deep learning methods to capture contextual keywords, sentiments, and induced mechanisms from song comments. The study augments a current music app with two features: the presentation of tags that best represent song comments, and a novel map metaphor that reorganizes song comments by chronological order, content, and sentiment. The effectiveness of the proposed approach is validated through a usage scenario and a user study that demonstrate its capability to improve the user experience of exploring songs and browsing comments of interest. This study contributes to the advancement of music streaming services by providing a more personalized and emotionally rich music experience for younger generations. Comment: In the Proceedings of ChinaVis 202
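    The abstract does not specify which deep learning models extract keywords and sentiment; as a generic stand-in, the sketch below obtains comment-level sentiment with an off-the-shelf pretrained pipeline and crude tag candidates from TF-IDF weights. The model choice and the toy comments are assumptions, not the study’s implementation.

```python
# Generic sketch: comment-level sentiment plus simple keyword (tag) candidates.
# Uses an off-the-shelf sentiment pipeline and TF-IDF scores as stand-ins for
# the paper's unspecified deep-learning components.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [                      # toy song comments
    "this song got me through the hardest winter of my life",
    "the chorus sounds like sunlight after rain",
    "played this on repeat after we broke up, still hurts",
]

sentiment = pipeline("sentiment-analysis")   # downloads a default English model
for c, s in zip(comments, sentiment(comments)):
    print(f"{s['label']:8s} {s['score']:.2f}  {c}")

# Crude tag candidates: highest-weighted TF-IDF terms across all comments.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(comments)
weights = X.sum(axis=0).A1
terms = vec.get_feature_names_out()
print("candidate tags:", [terms[i] for i in weights.argsort()[::-1][:5]])
```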

    Music emotion recognition: a multimodal machine learning approach

    Music emotion recognition (MER) is an emerging domain of the Music Information Retrieval (MIR) scientific community, and searching for music by emotion is one of the selection criteria most preferred by web users. As the world goes digital, the musical content in online databases such as Last.fm has expanded exponentially, requiring substantial manual effort to manage and keep up to date. Therefore, the demand for innovative and adaptable search mechanisms that can be personalized according to users’ emotional state has gained increasing attention in recent years. This thesis addresses the music emotion recognition problem by presenting several classification models fed with textual features as well as audio attributes extracted from the music. We build both supervised and semi-supervised classification designs in four research experiments that address the emotional role of audio features, such as tempo, acousticness, and energy, as well as the impact of textual features extracted by two different approaches, TF-IDF and Word2Vec. Furthermore, we propose a multimodal approach using a combined feature set consisting of features from the audio content as well as from context-aware data. For this purpose, we generated a ground-truth dataset containing over 1,500 labeled song lyrics, together with an unlabeled corpus of more than 2.5 million Turkish documents, in order to build an accurate automatic emotion classification system. The analytical models were built with several algorithms on cross-validated data using Python. The best performance attained with audio features alone was 44.2% accuracy, whereas textual features yielded better performances of 46.3% and 51.3% accuracy under the supervised and semi-supervised learning paradigms, respectively. Finally, although we created a comprehensive feature set combining audio and textual features, this approach did not yield any significant improvement in classification performance.
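    To make the multimodal idea concrete, the sketch below concatenates TF-IDF lyric features with numeric audio descriptors (tempo, acousticness, energy) and trains a simple classifier. The toy data, the logistic-regression model, and the lack of feature scaling are illustrative simplifications, not the thesis's exact experimental setup.

```python
# Hedged sketch of a combined (multimodal) feature set: TF-IDF lyric features
# concatenated with audio descriptors, fed to a simple classifier.
# Toy data and model choice are illustrative only.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lyrics = [
    "tears falling in the silent night",
    "jump up turn the party loud",
    "slow rain on an empty street",
    "we dance until the morning light",
]
audio = np.array([           # [tempo_bpm, acousticness, energy]
    [70, 0.90, 0.20],
    [128, 0.05, 0.95],
    [65, 0.85, 0.15],
    [122, 0.10, 0.90],
])
labels = ["sad", "happy", "sad", "happy"]

text_features = TfidfVectorizer().fit_transform(lyrics)
# In practice the audio columns would be standardised before concatenation.
X = hstack([text_features, csr_matrix(audio)])        # multimodal feature set

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))        # sanity check on the toy training data
```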