Generating Music Medleys via Playing Music Puzzle Games
Generating music medleys is about finding an optimal permutation of a given
set of music clips. Toward this goal, we propose a self-supervised learning
task, called the music puzzle game, to train neural network models to learn the
sequential patterns in music. In essence, such a game requires machines to
correctly sort a few multisecond music fragments. In the training stage, we
learn the model by sampling multiple non-overlapping fragment pairs from the
same songs and seeking to predict whether a given pair is consecutive and is in
the correct chronological order. For testing, we design a number of puzzle
games with different difficulty levels, the most difficult one being music
medley, which requires sorting fragments from different songs. On the basis of
a state-of-the-art Siamese convolutional network, we propose an improved
architecture that learns to embed frame-level similarity scores computed from
the input fragment pairs to a common space, where fragment pairs in the correct
order can be more easily identified. Our result shows that the resulting model,
dubbed the similarity embedding network (SEN), performs better than
competing models across different games, including music jigsaw puzzle, music
sequencing, and music medley. Example results can be found at our project
website, https://remyhuang.github.io/DJnet.
Comment: Accepted at AAAI 201
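The self-supervised training scheme described in this abstract — sampling non-overlapping fragment pairs from a song and labeling them by whether they are consecutive and in chronological order — can be sketched as follows. This is a minimal illustration in NumPy, not the authors' exact protocol; the function name, the 50/50 positive/negative split, and the swapped-order negatives are assumptions.

```python
import numpy as np

def sample_fragment_pairs(song, frag_len, n_pairs, rng=None):
    """Sample labeled fragment pairs from a 1-D signal for a music-puzzle task.

    Positive pairs (label 1) are consecutive fragments in chronological order;
    negatives (label 0) are the same fragments with their order swapped.
    Returns a list of ((frag_a, frag_b), label) tuples.
    """
    rng = rng or np.random.default_rng(0)
    pairs = []
    max_start = len(song) - 2 * frag_len
    for _ in range(n_pairs):
        s = rng.integers(0, max_start + 1)
        a = song[s : s + frag_len]            # first fragment
        b = song[s + frag_len : s + 2 * frag_len]  # the fragment right after it
        if rng.random() < 0.5:
            pairs.append(((a, b), 1))         # consecutive, correct order
        else:
            pairs.append(((b, a), 0))         # swapped -> wrong order
    return pairs
```

In the actual paper the two fragments are fed to twin convolutional branches and the model scores whether the pair is correctly ordered; the sketch above only covers the data-sampling side of that pipeline.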
Music Boundary Detection using Convolutional Neural Networks: A comparative analysis of combined input features
The analysis of the structure of musical pieces is a task that remains a
challenge for Artificial Intelligence, especially in the field of Deep
Learning. It requires prior identification of structural boundaries of the
music pieces. This structural boundary analysis has recently been studied with
unsupervised methods and \textit{end-to-end} techniques such as Convolutional
Neural Networks (CNN) using Mel-Scaled Log-magnitude Spectrogram features
(MLS), Self-Similarity Matrices (SSM) or Self-Similarity Lag Matrices (SSLM) as
inputs and trained with human annotations. Several studies have been published
divided into unsupervised and \textit{end-to-end} methods in which
pre-processing is done in different ways, using different distance metrics and
audio characteristics, so a generalized pre-processing method to compute model
inputs is missing. The objective of this work is to establish a general method
of pre-processing these inputs by comparing the inputs calculated from
different pooling strategies, distance metrics and audio characteristics, also
taking into account the computing time to obtain them. We also establish the
most effective combination of inputs to be delivered to the CNN in order to
establish the most efficient way to extract the limits of the structure of the
music pieces. With an adequate combination of input matrices and pooling
strategies we obtain a measurement accuracy of 0.411 that outperforms the
current state of the art obtained under the same conditions.
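The inputs this abstract compares — self-similarity matrices computed from mel-scaled log spectrograms under different pooling strategies and distance metrics — can be sketched as follows. This is a minimal NumPy illustration, assuming max-pooling along time and cosine similarity; the paper evaluates several pooling strategies and metrics, and this is not its exact pre-processing.

```python
import numpy as np

def self_similarity_matrix(mls, pool=2):
    """Build a self-similarity matrix (SSM) from a mel-scaled log spectrogram.

    mls: (n_mels, n_frames) array. Frames are max-pooled along time by a
    factor `pool` to reduce cost, then cosine similarity between every pair
    of pooled frames is computed and mapped to [0, 1].
    """
    n_mels, n_frames = mls.shape
    n = n_frames // pool
    # max-pool along the time axis (one of several possible pooling strategies)
    pooled = mls[:, : n * pool].reshape(n_mels, n, pool).max(axis=2)
    x = pooled.T                                   # (n, n_mels): one row per frame
    xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    sim = xn @ xn.T                                # cosine similarity in [-1, 1]
    return (sim + 1.0) / 2.0                       # rescale to [0, 1]
```

Structural boundaries then show up as block edges in the resulting matrix, which is what the CNN is trained to detect.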
A walk through the web's video clips
Approximately 10^5 video clips are posted every day on the Web. The popularity of Web-based video databases poses a number of challenges to machine vision scientists: how do we organize, index and search such a large wealth of data? Content-based video search and classification have been proposed in the literature and applied successfully to analyzing movies, TV broadcasts and lab-made videos. We explore the performance of some of these algorithms on a large data-set of approximately 3000 videos. We collected our data-set directly from the Web, minimizing bias for content or quality, so as to have a faithful representation of the statistics of this medium. We find that the algorithms that we have come to trust do not work well on video clips, because their quality is lower and their subject is more varied. We will make the data publicly available to encourage further research.
The Skipping Behavior of Users of Music Streaming Services and its Relation to Musical Structure
The behavior of users of music streaming services is investigated from the
point of view of the temporal dimension of individual songs; specifically, the
main object of the analysis is the point in time within a song at which users
stop listening and start streaming another song ("skip"). The main contribution
of this study is the ascertainment of a correlation between the distribution in
time of skipping events and the musical structure of songs. It is also shown
that such distribution is not only specific to the individual songs, but also
independent of the cohort of users and, under stationary conditions, date of
observation. Finally, user behavioral data is used to train a predictor of the
musical structure of a song solely from its acoustic content; it is shown that
the use of such data, available in large quantities to music streaming
services, yields significant improvements in accuracy over the customary
fashion of training this class of algorithms, in which only smaller amounts of
hand-labeled data are available.
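The core measurement in this abstract — a per-song distribution over time of skip events, compared against annotated structural boundaries — can be sketched as follows. This is a minimal NumPy illustration; the function names, the histogram binning, and the peak-to-boundary alignment score are assumptions for illustration, not the paper's actual correlation analysis.

```python
import numpy as np

def skip_profile(skip_times, song_len, n_bins=100):
    """Normalized distribution over time of skip events within one song."""
    hist, edges = np.histogram(skip_times, bins=n_bins, range=(0, song_len))
    return hist / max(hist.sum(), 1), edges

def boundary_alignment(profile, edges, boundaries, tol=2.0):
    """Fraction of skip-profile peaks lying within `tol` seconds of an
    annotated structural boundary (a hypothetical correlation measure)."""
    centers = (edges[:-1] + edges[1:]) / 2
    peak_mask = profile > profile.mean() + profile.std()   # simple peak picking
    peaks = centers[peak_mask]
    if len(peaks) == 0:
        return 0.0
    hits = sum(min(abs(p - b) for b in boundaries) <= tol for p in peaks)
    return hits / len(peaks)
```

A high alignment score for skip profiles clustered near section changes would reflect the kind of correlation the study reports between skipping behavior and musical structure.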
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 pdf figure
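The log-mel spectrum, which this review identifies as a dominant feature representation across audio domains, can be computed as follows. This is a minimal self-contained NumPy sketch (framing, Hann window, power spectrum, triangular mel filterbank, log compression); production code would typically use a library such as librosa, and the default parameters here are illustrative assumptions.

```python
import numpy as np

def log_mel_spectrogram(wave, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Minimal log-mel feature extractor (a sketch, not librosa-grade)."""
    # frame the waveform and apply a Hann window
    n_frames = 1 + (len(wave) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = wave[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2   # (n_frames, n_fft//2 + 1)

    # triangular mel filterbank
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope

    # project power spectra onto the filterbank and log-compress
    return np.log(power @ fb.T + 1e-10)               # (n_frames, n_mels)
```

The resulting (time, mel) matrix is the typical input to the convolutional and recurrent models the review surveys; the raw-waveform alternative skips this stage and lets the network learn its own front end.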
Acoustic Scene Classification
This work was supported by the Centre for Digital Music Platform (grant EP/K009559/1) and a Leadership Fellowship
(EP/G007144/1), both from the United Kingdom Engineering and Physical Sciences Research Council.