Multimodal music information processing and retrieval: survey and future challenges
Towards improving the performance in various music information processing
tasks, recent studies exploit different modalities able to capture diverse
aspects of music. Such modalities include audio recordings, symbolic music
scores, mid-level representations, motion and gestural data, video recordings,
editorial or cultural tags, lyrics, and album cover art. This paper critically
reviews the various approaches adopted in Music Information Processing and
Retrieval and highlights how multimodal algorithms can help Music Computing
applications. First, we categorize the related literature based on the
application they address. Subsequently, we analyze existing information fusion
approaches, and we conclude with the set of challenges that Music Information
Retrieval and Sound and Music Computing research communities should focus in
the next years
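The fusion approaches surveyed are commonly divided into early (feature-level) and late (decision-level) fusion. A minimal sketch of the two strategies, assuming each modality has already been encoded as a fixed-length vector or a class-probability distribution (all names here are illustrative, not taken from the survey):

```python
import numpy as np

def early_fusion(audio_feat: np.ndarray, lyrics_feat: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate modality features, then train one model."""
    return np.concatenate([audio_feat, lyrics_feat])

def late_fusion(p_audio: np.ndarray, p_lyrics: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Decision-level fusion: combine the class probabilities of unimodal models."""
    return w * p_audio + (1.0 - w) * p_lyrics

# Example: two unimodal classifiers disagree; the fused decision averages them.
fused = late_fusion(np.array([0.8, 0.2]), np.array([0.4, 0.6]))
```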
FMA: A Dataset For Music Analysis
We introduce the Free Music Archive (FMA), an open and easily accessible
dataset suitable for evaluating several tasks in MIR, a field concerned with
browsing, searching, and organizing large music collections. The community's
growing interest in feature and end-to-end learning is, however, restrained by
the limited availability of large audio datasets. The FMA aims to overcome this
hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio
from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a
hierarchical taxonomy of 161 genres. It provides full-length and high-quality
audio, pre-computed features, together with track- and user-level metadata,
tags, and free-form text such as biographies. Here we describe the dataset and
how it was created, propose a train/validation/test split and three subsets,
discuss some suitable MIR tasks, and evaluate some baselines for genre
recognition. Code, data, and usage examples are available at
https://github.com/mdeff/fma
Comment: ISMIR 2017 camera-ready
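As a hedged usage sketch, the published metadata archive contains a tracks.csv with a two-row header that encodes the subsets and the proposed split; the exact column names below are assumptions to check against the usage examples in the repository:

```python
import pandas as pd

# Load FMA track metadata (tracks.csv from the fma_metadata archive).
# Column names are assumptions to verify against the repo's examples.
tracks = pd.read_csv('fma_metadata/tracks.csv', index_col=0, header=[0, 1])

# Restrict to the small subset and the proposed train/validation/test split.
small = tracks[tracks[('set', 'subset')] == 'small']
train = small[small[('set', 'split')] == 'training']
test = small[small[('set', 'split')] == 'test']

# Top-level genre labels for a genre-recognition baseline.
y_train = train[('track', 'genre_top')]
```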
Audio Event Detection using Weakly Labeled Data
Acoustic event detection is essential for content analysis and description of
multimedia recordings. The majority of current literature on the topic learns
the detectors through fully-supervised techniques employing strongly labeled
data. However, the labels available for the majority of multimedia data are
generally weak and do not provide sufficient detail for such methods to be
employed. In this paper we propose a framework for learning acoustic event
detectors using only weakly labeled data. We first show that audio event
detection using weak labels can be formulated as a Multiple Instance Learning
(MIL) problem. We then suggest two frameworks for solving it, one based on
support vector machines and the other on neural networks. The proposed methods
can remove the time-consuming and expensive process of manually annotating data
to facilitate fully supervised learning. Moreover, they not only detect events
in a recording but also provide the temporal locations of those events. This
yields a more complete description of the recording and is notable because
weakly labeled data carries no temporal information in the first place.
Comment: ACM Multimedia 2016
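To make the MIL formulation concrete: each recording is a "bag" of short segments ("instances"), and a weak label only says whether an event occurs somewhere in the bag. A minimal neural-network sketch of this idea (the architecture and all names are illustrative, not the paper's exact model):

```python
import torch
import torch.nn as nn

class MILDetector(nn.Module):
    """Score each segment, then max-pool to a recording-level prediction."""
    def __init__(self, n_features: int, n_events: int):
        super().__init__()
        self.segment_scorer = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_events), nn.Sigmoid(),
        )

    def forward(self, bag: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # bag: (n_segments, n_features) for one recording.
        segment_probs = self.segment_scorer(bag)    # per-segment event scores
        bag_prob = segment_probs.max(dim=0).values  # bag-level score via max-pooling
        return bag_prob, segment_probs

# Training uses only the weak (bag-level) labels:
model = MILDetector(n_features=64, n_events=10)
bag = torch.randn(20, 64)  # 20 segments of acoustic features
bag_prob, seg_probs = model(bag)
loss = nn.functional.binary_cross_entropy(bag_prob, torch.zeros(10))
```

The per-segment scores are what recover the temporal locations mentioned above, even though training only ever sees bag-level labels.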
Improving Music Genre Classification from Multi-Modal Properties of Music and Genre Correlations Perspective
Music genre classification has been widely studied in the past few years for
its various applications in music information retrieval. Previous works tend to
perform unsatisfactorily, since those methods either use audio content alone or
combine audio and lyrics content inefficiently. In addition, as genres
normally co-occur in a music track, it is desirable to capture and model the
genre correlations to improve the performance of multi-label music genre
classification. To solve these issues, we present a novel multi-modal method
leveraging an audio-lyrics contrastive loss and two symmetric cross-modal
attention modules to align and fuse features from audio and lyrics. Furthermore, based
on the nature of the multi-label classification, a genre correlations
extraction module is presented to capture and model potential genre
correlations. Extensive experiments demonstrate that our proposed method
significantly surpasses other multi-label music genre classification methods
and achieves state-of-the-art results on the Music4All dataset.
Comment: Accepted by ICASSP 2023
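A brief sketch of the two ingredients named above, an audio-lyrics contrastive loss and symmetric cross-modal attention; this illustrates the general technique, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def contrastive_loss(audio_emb, lyrics_emb, temperature=0.07):
    """InfoNCE-style audio-lyrics alignment: matching pairs score highest."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    lyrics_emb = F.normalize(lyrics_emb, dim=-1)
    logits = audio_emb @ lyrics_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(len(audio_emb))             # diagonal = matching pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

class SymmetricCrossModalAttention(nn.Module):
    """Each modality attends to the other; outputs are fused by concatenation."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.a2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.l2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, lyrics):
        # audio: (batch, T_audio, dim), lyrics: (batch, T_lyrics, dim)
        audio_ctx, _ = self.a2l(audio, lyrics, lyrics)  # audio queries lyrics
        lyrics_ctx, _ = self.l2a(lyrics, audio, audio)  # lyrics queries audio
        return torch.cat([audio_ctx.mean(1), lyrics_ctx.mean(1)], dim=-1)
```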
Final Research Report on Auto-Tagging of Music
Deliverable D4.7 is a research report covering the work carried out by IRCAM up to M36 on the “auto-tagging of music”. The software libraries resulting from this research have been integrated into the Fincons/HearDis! Music Library Manager or are used by TU Berlin; the final software libraries are described in D4.5.
The research work on auto-tagging has concentrated on four aspects:
1) Further improving IRCAM’s machine-learning system ircamclass. This has been done by developing the new MASSS audio features and by integrating audio augmentation and audio segmentation into ircamclass. The system has then been used to train the HearDis! “soft” features (Vocals-1, Vocals-2, Pop-Appeal, Intensity, Instrumentation, Timbre, Genre, Style). This is described in Part 3.
2) Developing two sets of “hard” features (i.e., features related to musical or musicological concepts) as specified by HearDis! (for integration into the Fincons/HearDis! Music Library Manager) and TU Berlin (as input for the prediction model of the GMBI attributes). Such features are either derived from previously estimated higher-level concepts (such as structure, key, or chord succession) or obtained by developing new signal processing algorithms (such as HPSS or main melody estimation; see the sketch after this list). This is described in Part 4.
3) Developing audio features to characterize the audio quality of a music track. The goal is to describe the quality of the audio independently of its apparent encoding. These features are then used to estimate audio degradation or the decade of a recording, and ultimately to ensure that playlists contain tracks of similar audio quality. This is described in Part 5.
4) Developing innovative algorithms to extract specific audio features to improve music mixes. So far, innovative techniques (based on various Blind Audio Source Separation algorithms and Convolutional Neural Networks) have been developed for singing voice separation, singing voice segmentation, music structure boundary estimation, and DJ cue-region estimation. This is described in Part 6.
EC/H2020/688122/EU/Artist-to-Business-to-Business-to-Consumer Audio Branding System/ABC DJ
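As referenced in item 2, HPSS (harmonic-percussive source separation) splits a signal into harmonic and percussive components that can feed separate feature extractors. A brief sketch using librosa's implementation rather than IRCAM's own library (the file path is a placeholder):

```python
import librosa

# Harmonic-percussive source separation (HPSS), one of the signal
# processing algorithms named in Part 4; librosa's implementation
# is used here, not IRCAM's. 'track.wav' is a placeholder path.
y, sr = librosa.load('track.wav', sr=None)
y_harmonic, y_percussive = librosa.effects.hpss(y)

# The two components can feed separate feature extractors, e.g.
# chroma from the harmonic part for key/chord estimation.
chroma = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)
```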