Unsupervised automatic music genre classification
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

In this study we explore automatic music genre recognition and classification of digital music.
Music has always been a reflection of cultural differences and an influence on our society.
Today's digital content development has triggered the massive use of digital music. Nowadays, digital music is manually labeled without following a universal taxonomy, so the labeling process for audio indexing is prone to errors. Human labeling will always be influenced by cultural differences, education, tastes, etc. Nonetheless, this indexing process is essential to guarantee a correct organization of huge databases that contain thousands of music titles. In this study, our interest is in music genre organization.
We propose a learning and classification methodology for automatic genre classification, able to group several music samples based on their characteristics (this is achieved by the proposed learning process) as well as to classify a new test music sample into the previously learned groups (this is achieved by the proposed classification process). The learning method groups the music samples into different clusters based only on audio features and without any previous knowledge of the genre of the samples; it therefore follows an unsupervised methodology.
In addition, a model-based approach is followed to generate the clusters, as we do not provide any information about the number of genres in the dataset. Features are related to rhythm analysis, timbre, and melody, among others. Furthermore, the Mahalanobis distance is used so that the classification method can deal with non-spherical clusters.
The proposed learning method achieves a clustering accuracy of 55% when the dataset contains 11 different music genres: Blues, Classical, Country, Disco, Fado, Hiphop, Jazz, Metal, Pop, Reggae and Rock. The clustering accuracy improves significantly when the number of genres is reduced; with 4 genres (Classical, Fado, Metal and Reggae), we obtain an accuracy of 100%. As for the classification process, 82% of the submitted music samples were correctly classified.
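As a rough illustration of the two ingredients named above (model-based clustering without a fixed number of genres, and Mahalanobis-distance classification), the following Python sketch fits Gaussian mixtures and selects the number of components by BIC; the thesis may use a different criterion, and the data and feature counts here are hypothetical placeholders for the rhythm/timbre/melody feature matrix.

```python
# Minimal sketch: model-based clustering with an unknown number of
# clusters, then Mahalanobis-distance classification of a new sample.
# Feature extraction is out of scope; `X` stands in for per-track
# audio features (rhythm, timbre, melody, ...).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_model_based_clusters(X, max_components=12):
    """Fit GMMs with 1..max_components clusters and keep the one with
    the lowest BIC, so the number of genres is not fixed a priori."""
    best_gmm, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type='full',
                              random_state=0).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    return best_gmm

def classify_mahalanobis(gmm, x):
    """Assign a new feature vector to the cluster with the smallest
    Mahalanobis distance; full covariances handle non-spherical clusters."""
    dists = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        diff = x - mean
        dists.append(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
    return int(np.argmin(dists))

# Usage with hypothetical data: 200 tracks, 8 features each.
rng = np.random.default_rng(0)
X = rng.random((200, 8))
gmm = fit_model_based_clusters(X)
print(classify_mahalanobis(gmm, X[0]))
```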
A computational framework for sound segregation in music signals
Doctoral thesis. Electrical and Computer Engineering. Faculdade de Engenharia, Universidade do Porto. 200
Creating music by listening
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (p. 127-139).

Machines have the power and potential to make expressive music on their own. This thesis aims to computationally model the process of creating music using experience from listening to examples. Our unbiased signal-based solution models the life cycle of listening, composing, and performing, turning the machine into an active musician, instead of simply an instrument. We accomplish this through an analysis-synthesis technique combining perceptual and structural modeling of the musical surface, which leads to a minimal data representation. We introduce a music cognition framework that results from the interaction of psychoacoustically grounded causal listening, a time-lag embedded feature representation, and perceptual similarity clustering. Our bottom-up analysis aims to be generic and uniform by recursively revealing metrical hierarchies and structures of pitch, rhythm, and timbre. Training is suggested for top-down unbiased supervision, and is demonstrated with the prediction of the downbeat. This musical intelligence enables a range of original manipulations including song alignment, music restoration, cross-synthesis or song morphing, and ultimately the synthesis of original pieces.

by Tristan Jehan. Ph.D.
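As a rough illustration of the "time-lag embedded feature representation" the abstract mentions, the sketch below concatenates each analysis frame with its recent predecessors so that similarity clustering sees short-term temporal context rather than isolated frames; the lag count and the MFCC-style feature dimension are hypothetical choices, not taken from the thesis.

```python
# Sketch of a time-lag embedding: each feature frame is stacked with
# its `lags` predecessors before clustering or similarity analysis.
import numpy as np

def time_lag_embed(frames, lags=8):
    """frames: (n_frames, n_features) ->
       (n_frames - lags, (lags + 1) * n_features)"""
    n, d = frames.shape
    # Column block k holds frame t-k for each embedded row t.
    return np.hstack([frames[lags - k : n - k] for k in range(lags + 1)])

features = np.random.rand(100, 13)   # e.g. 13 MFCCs per frame (hypothetical)
embedded = time_lag_embed(features)
print(embedded.shape)                # (92, 117)
```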
Automatic sound synthesizer programming: techniques and applications
The aim of this thesis is to investigate techniques for, and applications of, automatic sound synthesizer programming. An automatic sound synthesizer programmer is a system which removes from the user the requirement to explicitly specify parameter settings for a sound synthesis algorithm. Two forms of these systems are discussed in this thesis: tone matching programmers and synthesis space explorers. A tone matching programmer takes as its input a sound synthesis algorithm and a desired target sound; at its output it produces a configuration for the sound synthesis algorithm which causes it to emit a sound similar to the target. The techniques investigated for achieving this are genetic algorithms, neural networks, hill climbers and data-driven approaches. A synthesis space explorer provides a user with a representation of the space of possible sounds that a synthesizer can produce and allows them to explore this space interactively. The applications of automatic sound synthesizer programming that are investigated include studio tools, an autonomous musical agent and a self-reprogramming drum machine. The research employs several methodologies: the development of novel software frameworks and tools, the examination of existing software at the source code and performance levels, and user trials of the tools and software.

The main contributions made are: a method for visualisation of sound synthesis space and low-dimensional control of sound synthesizers; a general-purpose framework for the deployment and testing of sound synthesis and optimisation algorithms in the SuperCollider language sclang; a comparison of a variety of optimisation techniques for sound synthesizer programming; an analysis of sound synthesizer error surfaces; a general-purpose sound synthesizer programmer compatible with industry-standard tools; and an automatic improviser which passes a loose equivalent of the Turing test for jazz musicians, i.e. being half of a man-machine duet which was rated as one of the best sessions of 2009 on the BBC's 'Jazz on 3' programme.
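For a flavour of the tone matching task, here is a minimal hill climber in Python (one of the several optimisation techniques the thesis compares). The two-operator FM "synthesizer" and the magnitude-spectrum error measure are toy stand-ins, not the thesis's actual framework, which targets SuperCollider.

```python
# Minimal hill-climbing tone matcher: perturb synth parameters, keep the
# change if the output's spectrum moves closer to the target's spectrum.
import numpy as np

SR, DUR = 44100, 0.5
t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)

def fm_synth(params):
    """params = (carrier_hz, mod_ratio, mod_index); a toy FM voice."""
    carrier, ratio, index = params
    return np.sin(2*np.pi*carrier*t + index*np.sin(2*np.pi*carrier*ratio*t))

def spectral_error(a, b):
    """Distance between magnitude spectra; a stand-in for a perceptual measure."""
    return np.linalg.norm(np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b)))

def hill_climb(target, start, steps=2000, scale=(20.0, 0.05, 0.2)):
    params = np.array(start, dtype=float)
    err = spectral_error(fm_synth(params), target)
    rng = np.random.default_rng(0)
    for _ in range(steps):
        cand = params + rng.normal(0, scale)        # random local move
        cand_err = spectral_error(fm_synth(cand), target)
        if cand_err < err:                          # accept only improvements
            params, err = cand, cand_err
    return params, err

target = fm_synth((440.0, 2.0, 3.0))                # hidden "preset" to recover
found, err = hill_climb(target, start=(300.0, 1.0, 1.0))
print(found, err)
```

A genetic algorithm or neural network would replace the single-candidate loop with a population or a learned mapping from spectra to parameters; the error-surface analysis mentioned in the contributions concerns how amenable such spectra-to-parameter landscapes are to this kind of search.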
Proceedings of the 7th Sound and Music Computing Conference
Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010
Automatic annotation of musical audio for interactive applications
PhD

As machines become more and more portable, and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real-time context. Amongst these annotation techniques, we concentrate on low- and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performances of different algorithms.
The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulations and analyses of metrical structure. Evaluations of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and experimented with on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments; the estimation of higher-level descriptions is then approached. Techniques for the modelling of note objects and the localisation of beats are implemented and discussed.
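For a concrete sense of the onset detection task, the sketch below implements a half-wave-rectified spectral-flux detection function with simple peak picking, a common baseline in this family of algorithms; it is illustrative only, not the specific detectors evaluated in the thesis, and the frame, hop and threshold values are arbitrary choices.

```python
# Minimal spectral-flux onset detector: frame the signal, sum the
# positive increases in the magnitude spectrum between frames, then
# pick peaks above a crude adaptive threshold.
import numpy as np

def spectral_flux(x, frame=1024, hop=512):
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop
    mags = np.array([np.abs(np.fft.rfft(win * x[i*hop : i*hop+frame]))
                     for i in range(n_frames)])
    diff = np.diff(mags, axis=0)
    return np.maximum(diff, 0.0).sum(axis=1)     # half-wave rectified flux

def pick_onsets(flux, hop=512, sr=44100, k=1.5):
    thresh = k * np.median(flux)                 # crude adaptive threshold
    onsets = []
    for i in range(1, len(flux) - 1):
        if flux[i] > thresh and flux[i] >= flux[i-1] and flux[i] > flux[i+1]:
            onsets.append((i + 1) * hop / sr)    # frame index -> seconds
    return onsets

# Usage with a hypothetical 1 s test signal containing three tone bursts.
sr = 44100
x = np.zeros(sr)
for start in (0.1, 0.4, 0.7):
    i = int(start * sr)
    x[i:i+200] = np.sin(2*np.pi*880*np.arange(200)/sr)
print(pick_onsets(spectral_flux(x)))
```

The latency concern discussed above comes from the frame and hop sizes and from how far ahead the peak picker must look; real-time adaptations shrink exactly these quantities.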
Applications of our framework include live and interactive music installations, and more generally tools for composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems. We describe the design of our software solution, for our research purposes and in view of its integration within other systems.

EU-FP6-IST-507142 project SIMAC (Semantic Interaction with Music Audio Contents); EPSRC grants GR/R54620; GR/S75802/01
Navigating the space of your music
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008. Includes bibliographical references (p. 121-124).

Navigating increasingly large personal music libraries is commonplace. Yet most music browsers do not enable their users to explore their collections in a guided and manipulable fashion, often requiring them to have a specific target in mind. MusicBox is a new music browser that provides this interactive control by mapping a music collection into a two-dimensional space, applying principal components analysis (PCA) to a combination of contextual and content-based features of each of the musical tracks. The resulting map shows similar songs close together and dissimilar songs farther apart. MusicBox is fully interactive and highly flexible: users can add and remove features from the included feature list, with PCA recomputed on the fly to remap the data. MusicBox is also extensible; we invite other music researchers to contribute features to its PCA engine. A small user study has shown that MusicBox helps users to find music in their libraries, to discover new music, and to challenge their assumptions about relationships between types of music.

by Anita Shen Lillie. S.M.
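As a minimal illustration of the mapping MusicBox performs, the following sketch standardizes a feature matrix and projects it onto its first two principal components; each track's coordinates then serve as its position in the 2-D map. The track names and feature values are hypothetical placeholders, not MusicBox's actual feature set.

```python
# Sketch of a PCA music map: per-track features -> 2-D coordinates
# where similar tracks land near each other.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

tracks = ['track_a', 'track_b', 'track_c', 'track_d']
X = np.random.rand(len(tracks), 20)   # 20 content/context features (hypothetical)

coords = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, (px, py) in zip(tracks, coords):
    print(f'{name}: ({px:.2f}, {py:.2f})')
```

Re-running the projection after adding or removing a feature column mirrors the on-the-fly remapping the abstract describes.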