Multimodal music information processing and retrieval: survey and future challenges
Towards improving performance in various music information processing tasks, recent studies exploit different modalities able to capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the applications it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.
Information-theoretic measures of music listening behaviour
We present an information-theoretic approach to the measurement of users’ music listening behaviour and selection of music features. Existing ethnographic studies of music use have guided the design of music retrieval systems but are typically qualitative and exploratory in nature. We introduce the SPUD dataset, comprising 10,000 hand-made playlists, with user and audio stream metadata. With this, we illustrate the use of entropy for analysing music listening behaviour, e.g. identifying when a user changed music retrieval system. We then develop an approach to identifying music features that reflect users’ criteria for playlist curation, rejecting features that are independent of user behaviour. The dataset and the code used to produce it are made available. The techniques described support a quantitative yet user-centred approach to the evaluation of music features and retrieval systems, without assuming objective ground-truth labels.
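As an illustrative sketch of the entropy measure this abstract describes, the Shannon entropy of a user's play distribution can be computed as below. The play-history representation (a list of artist identifiers) is an assumption for illustration; the paper's exact features are not specified here.

```python
from collections import Counter
from math import log2

def listening_entropy(plays):
    """Shannon entropy (bits) of a user's play distribution.

    `plays` is a list of item identifiers (e.g. artists); higher
    entropy indicates more varied listening behaviour.
    """
    counts = Counter(plays)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# A user who loops one artist has zero entropy; a uniform spread
# over four artists yields exactly 2 bits.
assert listening_entropy(["a"] * 10) == 0.0
assert listening_entropy(["a", "b", "c", "d"]) == 2.0
```

A shift in a user's entropy over time is the kind of signal the abstract mentions for detecting, e.g., a change of music retrieval system.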
Organising music for movies
Purpose - The purpose of this paper is to examine and discuss the classification of commercial popular music when large digital collections are organised for use in films.
Design/methodology/approach - A range of systems are investigated and their organization is discussed, focusing on an analysis of the metadata used by the systems and choices given to the end-user to construct a query. The indexing of the music is compared to a checklist of music facets which has been derived from recent musicological literature on semiotic analysis of popular music. These facets include aspects of communication, cultural and musical expression, codes and competences.
Findings - In addition to bibliographic detail, descriptive metadata is used to organise music in these systems. Genre, subject and mood are used widely; some musical facets also appear. The extent to which attempts are being made to reflect these facets in the organization of these systems is discussed. A number of recommendations are made which may help to improve this process.
Originality/value - This paper discusses an area of creative music search which has not previously been investigated in any depth. It makes recommendations, based on the findings and the literature, which may be used in the development of commercial systems, as well as making a contribution to the literature.
Sequential Complexity as a Descriptor for Musical Similarity
We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15,500 track excerpts of Western popular
music, for which we obtain 7,800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy. Comment: 13 pages, 9 figures, 8 tables. Accepted version.
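The core idea of this abstract (quantise an audio feature sequence, then use its track-wise compression rate as a structure descriptor) can be sketched as follows. The uniform quantiser, the bin count, and the use of zlib are illustrative assumptions, not the paper's exact settings.

```python
import zlib

def compression_rate(features, n_bins=16):
    """Quantise a 1-D feature sequence into n_bins symbols and return
    its zlib compression rate (compressed size / raw size).

    Lower rates indicate more repetitive temporal structure.
    """
    lo, hi = min(features), max(features)
    span = (hi - lo) or 1.0  # avoid division by zero for constant input
    symbols = bytes(min(n_bins - 1, int((x - lo) / span * n_bins))
                    for x in features)
    return len(zlib.compress(symbols)) / len(symbols)

# A highly repetitive sequence compresses better than a varied one.
repetitive = [0.0, 1.0] * 200
varied = [((i * 37) % 97) / 97 for i in range(400)]
assert compression_rate(repetitive) < compression_rate(varied)
```

Per the abstract, such rates would be computed at multiple temporal resolutions and quantisation granularities and combined into a descriptor vector per track.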
Pop Music Highlighter: Marking the Emotion Keypoints
The goal of music highlight extraction is to get a short consecutive segment
of a piece of music that provides an effective representation of the whole
piece. In a previous work, we introduced an attention-based convolutional
recurrent neural network that uses music emotion classification as a surrogate
task for music highlight extraction, for Pop songs. The rationale behind that
approach is that the highlight of a song is usually the most emotional part.
This paper extends our previous work in the following two aspects. First,
methodology-wise we experiment with a new architecture that does not need any
recurrent layers, making the training process faster. Moreover, we compare a
late-fusion variant and an early-fusion variant to study which one better
exploits the attention mechanism. Second, we conduct and report an extensive
set of experiments comparing the proposed attention-based methods against a
heuristic energy-based method, a structural repetition-based method, and a few
other simple feature-based methods for this task. Due to the lack of
public-domain labeled data for highlight extraction, following our previous
work we use the RWC POP 100-song data set to evaluate how the detected
highlights overlap with any chorus sections of the songs. The experiments
demonstrate the effectiveness of our methods over competing methods. For
reproducibility, we open source the code and pre-trained model at
https://github.com/remyhuang/pop-music-highlighter/. Comment: Transactions of the ISMIR vol. 1, no.
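The final selection step implied by this abstract, picking the highlight once the network has produced per-frame attention scores, can be sketched as a sliding-window maximisation. The fixed window length and the assumption that scores are already given are illustrative; the actual model's attention mechanism is described in the paper.

```python
def extract_highlight(scores, length=30):
    """Return (start, end) frame indices of the contiguous window of
    `length` frames with the highest total attention score -- the
    candidate highlight segment."""
    best_start = 0
    window = best = sum(scores[:length])
    for start in range(1, len(scores) - length + 1):
        # Slide the window one frame: add the entering score,
        # drop the leaving one.
        window += scores[start + length - 1] - scores[start - 1]
        if window > best:
            best_start, best = start, window
    return best_start, best_start + length

# A song whose emotional peak spans frames 50-80 yields that window.
scores = [0.0] * 50 + [1.0] * 30 + [0.0] * 20
assert extract_highlight(scores) == (50, 80)
```

Evaluation in the paper then measures how such windows overlap with annotated chorus sections.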
Using Generic Summarization to Improve Music Information Retrieval Tasks
In order to satisfy processing time constraints, many MIR tasks process only
a segment of the whole music signal. This practice may lead to decreasing
performance, since the most important information for the tasks may not be in
those processed segments. In this paper, we leverage generic summarization
algorithms, previously applied to text and speech summarization, to summarize
items in music datasets. These algorithms build summaries that are both concise and diverse by selecting appropriate segments from the input signal, which makes them good candidates for summarizing music as well. We evaluate the
summarization process on binary and multiclass music genre classification
tasks, by comparing the performance obtained using summarized datasets against
the performance obtained using continuous segments (the traditional method for addressing the previously mentioned time constraints) and full
songs of the same original dataset. We show that GRASSHOPPER, LexRank, LSA,
MMR, and a Support Sets-based Centrality model improve classification
performance when compared to selected 30-second baselines. We also show that
summarized datasets lead to classification performance that is not statistically significantly different from using full songs. Furthermore, we argue for the advantages of sharing summarized datasets for future MIR research. Comment: 24 pages, 10 tables; Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing.
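One of the summarizers compared in this abstract, Maximal Marginal Relevance (MMR), can be sketched as a greedy trade-off between relevance and redundancy. The scalar segment representation and the toy relevance/similarity functions below are illustrative assumptions, not the paper's feature pipeline.

```python
def mmr_summary(segments, relevance, similarity, k=3, lam=0.7):
    """Greedily select k segments via Maximal Marginal Relevance:
    lam weights relevance against redundancy with already-selected
    segments."""
    selected, candidates = [], list(range(len(segments)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity(segments[i], segments[j])
                              for j in selected), default=0.0)
            return lam * relevance(segments[i]) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        candidates.remove(best)
        selected.append(best)
    return [segments[i] for i in selected]

# Toy example: scalar "segments", relevance proportional to value,
# similarity decaying with distance.
segs = [1.0, 1.01, 5.0, 5.02, 9.0]
picked = mmr_summary(segs, relevance=lambda s: s / 10,
                     similarity=lambda a, b: 1 / (1 + abs(a - b)))
```

In the music setting described above, segments would be feature vectors extracted from the audio, and the selected set forms the summary fed to the genre classifier.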
Swamp dredge: Research into grunge
For this project I have researched grunge music and created a body of work influenced by this genre. During my extended contextual research into the genre, I looked at both the artists and producers. I wrote/co-wrote the songs, played some of the instruments and produced the recordings. These are now available for download on www.soundcloud.com/swampdredg.