210 research outputs found

    Combining audio-based similarity with web-based data to accelerate automatic music playlist generation

    We present a technique for combining audio signal-based music similarity with web-based musical artist similarity to accelerate the task of automatic playlist generation. We demonstrate the applicability of our proposed method by extending a recently published interface for music players that benefits from intelligent structuring of audio collections. While the original approach involves calculating similarities between every pair of songs in a collection, we incorporate web-based data to reduce the number of necessary similarity calculations. More precisely, we exploit artist similarity determined automatically by means of web retrieval to avoid similarity calculation between tracks of dissimilar or unrelated artists. We evaluate our acceleration technique on two audio collections with different characteristics. It turns out that the proposed combination of audio- and text-based similarity not only reduces the number of necessary calculations considerably but also yields better results, in terms of musical quality, than the initial approach based on audio data only. Additionally, we conducted a small user study that further confirms the quality of the resulting playlists.
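
    The pruning idea described above can be sketched in a few lines: compute the expensive audio similarity only for track pairs whose artists are identical or judged similar by the web-based data. All names and data below are illustrative, not the paper's actual implementation.

```python
from itertools import combinations

# Hypothetical toy data: (track, artist) pairs and a precomputed
# web-based artist-similarity lookup (names are invented examples).
tracks = [("t1", "A"), ("t2", "A"), ("t3", "B"), ("t4", "C")]
artist_similar = {("A", "B"), ("B", "A")}  # artist C is unrelated

def audio_similarity(a, b):
    # Stand-in for an expensive signal-based similarity computation.
    return 1.0

pairs_computed = 0
similarities = {}
for (ta, aa), (tb, ab) in combinations(tracks, 2):
    # Skip the costly audio comparison unless the artists are the
    # same or judged similar by the web-based data.
    if aa != ab and (aa, ab) not in artist_similar:
        continue
    similarities[(ta, tb)] = audio_similarity(ta, tb)
    pairs_computed += 1

print(pairs_computed)  # -> 3 of the 6 exhaustive pairs
```

    With this toy data, only 3 of the 6 possible pairwise comparisons are actually computed; the savings grow with collection size.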

    Music Recommender Systems Challenges and Opportunities for Non-Superstar Artists

    Music Recommender Systems (MRS) are important drivers in the music industry and are widely adopted by music platforms. Unlike most MRS research, which explores MRS from a technical or a consumer perspective, this work focuses on the impact, value generation, challenges, and opportunities for those who contribute the core value: the artists. We outline the non-superstar artist's perspective on MRS and explore whether and how non-superstar artists may benefit from MRS to foster their professional advancement. To that end, we explain several techniques by which MRS generate recommendations and discuss their impact on non-superstar artists.

    AN EVALUATION OF AUDIO FEATURE EXTRACTION TOOLBOXES

    Audio feature extraction underpins a massive proportion of audio processing, music information retrieval, audio effect design, and audio synthesis. Design, analysis, synthesis, and evaluation often rely on audio features, but there is a large and diverse range of feature extraction tools available to the community. We evaluated ten existing audio feature extraction libraries and toolboxes using the Cranfield Model for the evaluation of information retrieval systems, reviewing the coverage, effort, presentation, and time lag of each system. We compare these tools and present example use cases indicating when each toolbox is most suitable. This paper allows a software engineer or researcher to quickly and easily select a suitable audio feature extraction toolbox.
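
    As a point of reference for what such toolboxes compute, here is a minimal NumPy-only sketch of one standard audio feature, the spectral centroid; real toolboxes expose many such features behind a unified API. This is an illustration, not code from any of the evaluated libraries.

```python
import numpy as np

def spectral_centroid(frame, sr):
    # Magnitude spectrum of one analysis frame.
    spectrum = np.abs(np.fft.rfft(frame))
    # Frequency (Hz) of each FFT bin.
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Centroid = magnitude-weighted mean frequency.
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

sr = 16000
t = np.arange(1024) / sr
tone = np.sin(2 * np.pi * 1000 * t)   # a pure 1 kHz sine
centroid = spectral_centroid(tone, sr)
```

    For a pure 1 kHz sine the centroid lands at roughly 1000 Hz, which makes the feature easy to sanity-check before trusting a toolbox's output.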

    CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval

    We introduce CLaMP: Contrastive Language-Music Pre-training, which learns cross-modal representations between natural language and symbolic music using a music encoder and a text encoder trained jointly with a contrastive loss. To pre-train CLaMP, we collected a large dataset of 1.4 million music-text pairs. CLaMP employs text dropout as a data augmentation technique and bar patching to efficiently represent music data, reducing sequence length to less than 10%. In addition, we developed a masked music model pre-training objective to enhance the music encoder's comprehension of musical context and structure. CLaMP integrates textual information to enable semantic search and zero-shot classification for symbolic music, surpassing the capabilities of previous models. To support the evaluation of semantic search and music classification, we publicly release WikiMusicText (WikiMT), a dataset of 1010 lead sheets in ABC notation, each accompanied by a title, artist, genre, and description. In comparison to state-of-the-art models that require fine-tuning, zero-shot CLaMP demonstrated comparable or superior performance on score-oriented datasets. Our models and code are available at https://github.com/microsoft/muzic/tree/main/clamp. Comment: 11 pages, 5 figures, 5 tables, accepted by ISMIR 202
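
    The contrastive objective mentioned above can be illustrated with a small NumPy sketch of a symmetric cross-entropy loss over paired text and music embeddings; the shapes, temperature value, and function names here are assumptions for illustration, not CLaMP's actual code.

```python
import numpy as np

def contrastive_loss(text_emb, music_emb, temperature=0.07):
    # L2-normalise both modalities so dot products are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    m = music_emb / np.linalg.norm(music_emb, axis=1, keepdims=True)
    logits = t @ m.T / temperature      # pairwise similarity matrix
    labels = np.arange(len(t))          # matching pairs lie on the diagonal

    def xent(lg):
        # Numerically stable cross-entropy against the diagonal labels.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Symmetric: text-to-music and music-to-text directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
low = contrastive_loss(emb, emb)                       # perfectly aligned pairs
high = contrastive_loss(emb, rng.normal(size=(4, 8)))  # random, unaligned pairs
```

    Aligned pairs put the largest similarity on the diagonal and drive the loss toward zero; mismatched pairs do not, so their loss is higher.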

    Scholarly Music Editions as Graph: Semantic Modelling of the Anton Webern Gesamtausgabe

    This paper presents a first draft of the ongoing research at the Anton Webern Gesamtausgabe (Basel, CH) to apply RDF-based semantic models for the purpose of a scholarly digital music edition. A brief overview of different historical positions that approach music from a graph-theoretical perspective is followed by a list of music-related and other RDF vocabularies that may support this goal, such as MusicOWL, DoReMus, CIDOC CRMinf, or the NIE-INE ontologies. Using the example of some of Webern's sketches for two drafted Goethe settings (M306 & M307), a preliminary graph-based model for philological knowledge and processes is envisioned, which incorporates existing ontologies from the context of cultural heritage and music. Finally, possible use cases and the consequences of such an approach to scholarly music editions are discussed.
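
    The triple-based modelling described here can be illustrated with a toy Python pattern matcher over RDF-style statements; the resources and predicates below are invented examples, not the edition's actual vocabulary.

```python
# Philological statements as (subject, predicate, object) triples,
# mimicking how RDF expresses a graph (identifiers are invented).
triples = [
    ("sketch:M306", "rdf:type", "edition:Sketch"),
    ("sketch:M306", "edition:setsText", "text:GoetheDraft1"),
    ("sketch:M307", "rdf:type", "edition:Sketch"),
    ("sketch:M307", "edition:setsText", "text:GoetheDraft2"),
]

def match(pattern):
    # None acts as a wildcard, like a variable in a SPARQL triple pattern.
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which resources are sketches?" as a graph query.
sketches = match((None, "rdf:type", "edition:Sketch"))
```

    A real edition would use an RDF store and SPARQL for this, but the pattern-over-triples idea is the same.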

    The Use of Mathematical Morphology in Geometric Music Retrieval (originally in Finnish: Matemaattisen morfologian käyttö geometrisessa musiikinhaussa)

    The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. So-called query-by-humming systems are a well-known instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI, or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review of traditional (e.g., string-matching-based) MIR algorithms and novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and database images to perform template matching and pattern recognition. The algorithms are essentially extensions of the classic image correlation and hit-or-miss transformation techniques widely used in template matching applications. They are intended as a future extension to the retrieval engine of C-BRAHMS, a research project of the Department of Computer Science at the University of Helsinki.
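
    The core idea, morphological template matching on a binary pitch-time image, can be sketched as follows: a query pattern matches at an offset if every "on" pixel of the pattern is also on in the score image, i.e. the "hit" half of a hit-or-miss transform. The motif and image dimensions below are invented for illustration.

```python
import numpy as np

# Score as a binary pitch-time image: 12 pitch rows x 16 time columns.
score = np.zeros((12, 16), dtype=bool)
for t, p in enumerate([0, 2, 4, 5]):    # a rising four-note motif...
    score[p, t + 3] = True              # ...placed at time offset 3

# Query pattern: the same motif in a 6 x 4 binary window.
pattern = np.zeros((6, 4), dtype=bool)
for t, p in enumerate([0, 2, 4, 5]):
    pattern[p, t] = True

# Slide the pattern over the score; record offsets where every "on"
# pixel of the pattern is also on in the score (an erosion test).
matches = []
ph, pw = pattern.shape
for r in range(score.shape[0] - ph + 1):
    for c in range(score.shape[1] - pw + 1):
        window = score[r:r + ph, c:c + pw]
        if np.all(window[pattern]):
            matches.append((r, c))

print(matches)  # -> [(0, 3)]: pitch offset 0, time offset 3
```

    A full hit-or-miss transform would additionally require specified "off" pixels to be off, which suppresses matches embedded in denser polyphony.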

    A Comparison of Deep Learning Methods for Timbre Analysis in Polyphonic Automatic Music Transcription

    Automatic music transcription (AMT) is a critical problem in the field of music information retrieval (MIR). When AMT is approached with deep neural networks, the variety of timbres across different instruments is an issue that has not yet been studied in depth. The goal of this work is to address AMT by analyzing how timbre affects monophonic transcription, in a first approach based on the CREPE neural network, and then to improve the results by performing polyphonic music transcription across different timbres with a second approach based on the Deep Salience model, which performs polyphonic transcription using the Constant-Q Transform. The results of the first method show that the timbre and envelope of the onsets have a high impact on the AMT results, and the second method shows that the developed model is less dependent on the strength of the onsets than other state-of-the-art models that address AMT on piano sounds, such as Google Magenta Onsets and Frames (OaF). Our polyphonic transcription model for non-piano instruments outperforms the state-of-the-art model; for bass instruments, for example, it achieves an F-score of 0.9516 versus 0.7102. In a final experiment we also show how adding an onset detector to our model can further improve the results reported in this work.
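
    The reported F-scores combine precision and recall over transcribed notes. A minimal sketch of such an evaluation, with invented note data rather than the paper's, looks like this:

```python
# Each note is a (pitch, onset-time) pair; sets make matching exact.
def f_score(reference, estimated):
    tp = len(reference & estimated)      # correctly transcribed notes
    precision = tp / len(estimated) if estimated else 0.0
    recall = tp / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ref = {("E1", 0.0), ("G1", 0.5), ("A1", 1.0), ("E1", 1.5)}   # ground truth
est = {("E1", 0.0), ("G1", 0.5), ("A1", 1.0), ("C2", 1.5)}   # one wrong note
score = f_score(ref, est)

print(score)  # -> 0.75 (precision 3/4, recall 3/4)
```

    Real AMT evaluation (e.g., with the mir_eval library) additionally allows small onset and pitch tolerances when matching notes, rather than requiring exact equality.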