Timbre-invariant Audio Features for Style Analysis of Classical Music
Copyright: (c) 2014 Christof Weiß et al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Improving music genre classification using automatically induced harmony rules
We present a new genre classification framework using both low-level signal-based features and high-level harmony features. A state-of-the-art statistical genre classifier based on timbral features is extended using a first-order random forest containing, for each genre, rules derived from harmony or chord sequences. This random forest has been automatically induced, using the first-order logic induction algorithm TILDE, from a dataset in which the degree and chord category are identified for each chord, covering classical, jazz and pop genre classes. The audio descriptor-based genre classifier contains 206 features, covering spectral, temporal, energy, and pitch characteristics of the audio signal. The fusion of the harmony-based classifier with the extracted feature vectors is tested on three-genre subsets of the GTZAN and ISMIR04 datasets, which contain 300 and 448 recordings, respectively. Machine learning classifiers were tested using 5 × 5-fold cross-validation and feature selection. Results indicate that the proposed harmony-based rules combined with the timbral descriptor-based genre classification system lead to improved genre classification rates.
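The abstract describes fusing the outputs of a timbral classifier with a harmony-rule classifier. A minimal late-fusion sketch, assuming a simple weighted average of per-genre scores (the function name, example scores and weighting are illustrative; the paper's exact fusion scheme may differ):

```python
# Hypothetical late-fusion sketch: combine per-genre scores from a
# timbral classifier and a harmony-rule classifier by weighted average.
# All names and numbers here are assumed for illustration.
def fuse_scores(timbral, harmony, weight=0.5):
    """Return a weighted average of two per-genre score dictionaries."""
    genres = timbral.keys() | harmony.keys()
    return {g: weight * timbral.get(g, 0.0)
               + (1 - weight) * harmony.get(g, 0.0)
            for g in genres}

# Assumed example scores for the three genre classes in the paper.
timbral = {"classical": 0.6, "jazz": 0.3, "pop": 0.1}
harmony = {"classical": 0.8, "jazz": 0.1, "pop": 0.1}

fused = fuse_scores(timbral, harmony)
best = max(fused, key=fused.get)  # genre with the highest fused score
```

Here the fused classical score is 0.5 × 0.6 + 0.5 × 0.8 = 0.7, so the fused classifier would label the recording classical.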
On the Complex Network Structure of Musical Pieces: Analysis of Some Use Cases from Different Music Genres
This paper focuses on the modeling of musical melodies as networks. Notes of a melody can be treated as nodes of a network, and connections are created whenever notes are played in sequence. We analyze several tracks from different music genres, with melodies played on different musical instruments. We find that the networks considered are, in general, scale-free and exhibit the small-world property. We measure the main metrics and assess whether these networks can be considered as formed by sub-communities. The outcomes confirm that distinctive features of the tracks can be extracted with this analysis methodology. The approach can have an impact on several multimedia applications, such as music didactics, multimedia entertainment, and digital music generation.
Comment: accepted to Multimedia Tools and Applications, Springer
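The construction described in the abstract (notes as nodes, edges between consecutively played notes) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the note sequence is an assumed fragment:

```python
# Sketch of the melody-as-network idea: each distinct note becomes a
# node, and a directed edge links each note to the one played next.
from collections import defaultdict

def melody_to_network(notes):
    """Build a directed network from a note sequence.

    Returns the set of nodes and a dict mapping (from_note, to_note)
    edges to their transition counts.
    """
    edges = defaultdict(int)
    for a, b in zip(notes, notes[1:]):  # consecutive note pairs
        edges[(a, b)] += 1
    return set(notes), dict(edges)

# Hypothetical melody fragment (note names are assumptions).
melody = ["C4", "E4", "G4", "E4", "C4", "G4"]
nodes, edges = melody_to_network(melody)
```

On a full track, metrics such as degree distribution and clustering could then be computed over `nodes` and `edges` to test for the scale-free and small-world properties the paper reports.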
Logic-based Modelling of Musical Harmony for Automatic Characterisation and Classification
The copyright of this thesis rests with the author, and no quotation from it or information derived from it may be published without the prior written consent of the author.

Music, like other online media, is undergoing an information explosion. Massive online music stores such as the iTunes Store or Amazon MP3, and their counterparts, the streaming platforms such as Spotify, Rdio and Deezer, offer more than 30 million pieces of music to their customers, that is to say, anybody with a smartphone. Indeed, these ubiquitous devices offer vast storage capacities and cloud-based apps that can cater to any music request. As Paul Lamere puts it:

“we can now have a virtually endless supply of music in our pocket. The ‘bottomless iPod’ will have as big an effect on how we listen to music as the original iPod had back in 2001. But with millions of songs to choose from, we will need help finding music that we want to hear [...]. We will need new tools that help us manage our listening experience.”
Retrieval, organisation, recommendation, annotation and characterisation of musical data is precisely what the Music Information Retrieval (MIR) community has been working on for at least 15 years (Byrd and Crawford, 2002). It is clear from its historical roots in practical fields such as Information Retrieval, Information Systems, Digital Resources and Digital Libraries, but also from the publications presented at the first International Symposium on Music Information Retrieval in 2000, that MIR has been aiming to build tools to help people navigate, explore and make sense of music collections (Downie et al., 2009). That also includes analytical tools to support …