Improving music genre classification using automatically induced harmony rules
We present a new genre classification framework using both low-level signal-based features and high-level harmony features. A state-of-the-art statistical genre classifier based on timbral features is extended using a first-order random forest containing, for each genre, rules derived from harmony or chord sequences. This random forest has been automatically induced, using the first-order logic induction algorithm TILDE, from a dataset covering the classical, jazz and pop genre classes, in which the degree and chord category of each chord are identified. The audio descriptor-based genre classifier contains 206 features, covering spectral, temporal, energy, and pitch characteristics of the audio signal. The fusion of the harmony-based classifier with the extracted feature vectors is tested on three-genre subsets of the GTZAN and ISMIR04 datasets, which contain 300 and 448 recordings, respectively. Machine learning classifiers were tested using 5 × 5-fold cross-validation and feature selection. Results indicate that the proposed harmony-based rules, combined with the timbral descriptor-based genre classification system, lead to improved genre classification rates.
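The fusion step described above — combining rule-based harmony evidence with a timbral feature vector before classification — can be sketched as follows. This is a minimal illustration under assumptions of our own: the toy rules and the scoring scheme are hypothetical stand-ins for the first-order rules induced by TILDE, not the paper's actual implementation.

```python
import numpy as np

def harmony_rule_scores(chord_sequence, rules):
    """Score each genre by the fraction of its rules that fire on the
    observed chord sequence (a simple stand-in for evaluating the
    first-order rules induced by TILDE)."""
    scores = []
    for genre_rules in rules:
        fired = sum(1 for rule in genre_rules if rule(chord_sequence))
        scores.append(fired / max(len(genre_rules), 1))
    return np.array(scores)

def fuse_features(timbral_features, chord_sequence, rules):
    """Late fusion: append per-genre rule scores to the audio
    descriptor vector before feeding a standard classifier."""
    return np.concatenate(
        [timbral_features, harmony_rule_scores(chord_sequence, rules)]
    )

# Toy example: three genres, one illustrative rule each.
rules = [
    [lambda seq: ("V", "I") in zip(seq, seq[1:])],  # classical: V-I cadence
    [lambda seq: "ii7" in seq],                     # jazz: ii7 chord present
    [lambda seq: seq.count("I") >= 2],              # pop: tonic repetition
]
timbral = np.zeros(206)  # placeholder for the 206-dimensional descriptor vector
fused = fuse_features(timbral, ["I", "ii7", "V", "I"], rules)
print(fused.shape)  # (209,)
```

The fused vector could then be passed to any standard classifier for the cross-validated evaluation the abstract describes.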
Musicians and Machines: Bridging the Semantic Gap In Live Performance
This thesis explores the automatic extraction of musical information from live performances, with the intention of using that information to create novel, responsive and adaptive performance tools for musicians.
We focus specifically on two forms of musical analysis: harmonic analysis and beat tracking. We present two harmonic analysis algorithms — specifically, a novel chroma vector analysis technique which we later use as the input for a chord recognition algorithm. We also present a real-time beat tracker, based upon an extension of state-of-the-art non-causal models, that is computationally efficient and capable of strong performance compared to other models. Furthermore, through a modular study of several beat tracking algorithms we attempt to establish methods to improve beat tracking, and apply these lessons to our model.
Building upon this work, we show that these analyses can be combined to create a beat-synchronous musical representation, with harmonic information segmented at the level of the beat. We present a number of ways of calculating these representations and discuss their relative merits.
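One common way of computing such a beat-synchronous representation is to reduce all chroma frames falling between consecutive beat times to a single vector per beat. The sketch below uses frame averaging; the function name and toy data are our own, and the thesis compares several such reductions rather than prescribing this one.

```python
import numpy as np

def beat_synchronous_chroma(chroma, frame_times, beat_times):
    """Average the chroma frames between consecutive beat times,
    yielding one 12-dimensional harmony vector per beat interval.
    (Median aggregation is another common choice.)"""
    segments = []
    for start, end in zip(beat_times[:-1], beat_times[1:]):
        mask = (frame_times >= start) & (frame_times < end)
        if mask.any():
            segments.append(chroma[:, mask].mean(axis=1))
        else:
            # No frames in this beat interval: emit a silent vector.
            segments.append(np.zeros(chroma.shape[0]))
    return np.column_stack(segments)

# Toy example: 12-bin chroma over 100 frames at a 10 ms hop,
# with detected beats every 0.5 s.
chroma = np.random.default_rng(0).random((12, 100))
frame_times = np.arange(100) * 0.01
beat_times = np.array([0.0, 0.5, 1.0])
sync = beat_synchronous_chroma(chroma, frame_times, beat_times)
print(sync.shape)  # (12, 2): one chroma vector per beat interval
```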
We proceed by introducing a technique, which we call Performance Following, for recognising repeated patterns in live musical performances. By examining the real-time beat-synchronous musical representation, this technique makes predictions of future harmonic content in musical performances with no prior knowledge in the form of a score.
Finally, we present a number of potential applications for live performances that incorporate the real-time musical analysis techniques outlined previously. The applications presented include audio effects informed by beat tracking, a technique for synchronising video to a live performance, the use of harmonic information to control visual displays, and an automatic accompaniment system based upon our performance following technique.
The Audio Degradation Toolbox and its Application to Robustness Evaluation
We introduce the Audio Degradation Toolbox (ADT) for the controlled degradation of audio signals, and propose its usage as a means of evaluating and comparing the robustness of audio processing algorithms. Music recordings encountered in practical applications are subject to varied, sometimes unpredictable degradation. For example, audio is degraded by low-quality microphones, noisy recording environments, MP3 compression, dynamic compression in broadcasting, or vinyl decay. In spite of this, no standard software for the degradation of audio exists, and music processing methods are usually evaluated against clean data. The ADT fills this gap by providing Matlab scripts that emulate a wide range of degradation types. We describe 14 degradation units, and how they can be chained to create more complex, 'real-world' degradations. The ADT also provides functionality to adjust existing ground truth, correcting for temporal distortions introduced by degradation. Using four different music informatics tasks, we show that performance strongly depends on the combination of method and degradation applied. We demonstrate that specific degradations can reduce or even reverse the performance difference between two competing methods. ADT source code, sounds, impulse responses and definitions are freely available for download.
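The chaining of degradation units described above can be illustrated with a small sketch. The ADT itself is a set of Matlab scripts; the Python analogue below, with its two toy degradation units, is purely illustrative and uses hypothetical function names of our own.

```python
import numpy as np

def add_noise(signal, snr_db=20.0, rng=None):
    """Add white noise at a target signal-to-noise ratio in dB."""
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.shape)

def clip(signal, threshold=0.8):
    """Hard-clip the waveform — a crude stand-in for the dynamic
    compression encountered in broadcasting."""
    return np.clip(signal, -threshold, threshold)

def apply_chain(signal, degradations):
    """Apply a list of degradation units in sequence, mirroring how
    the ADT chains units into more complex 'real-world' degradations."""
    for degrade in degradations:
        signal = degrade(signal)
    return signal

# Toy example: a 1 kHz sine tone at 44.1 kHz, degraded by noise then clipping.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t)
degraded = apply_chain(tone, [add_noise, clip])
print(degraded.shape, float(np.abs(degraded).max()))
```

Because ground-truth annotations can be shifted by time-distorting units, any real chain would also need the kind of annotation adjustment the ADT provides.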
Harmony and Technology Enhanced Learning
New technologies offer rich opportunities to support education in harmony. In this chapter we consider theoretical perspectives and underlying principles behind technologies for learning and teaching harmony. Such perspectives help in matching existing and future technologies to educational purposes, and inspire the creative re-appropriation of technologies.
Kinect-ed Piano
We describe a gesturally-controlled improvisation system for an experimental pianist, developed over several laboratory sessions and used during a performance [1] at the 2011 Conference on New Interfaces for Musical Expression (NIME). We discuss the architecture and performative advantages and limitations of our gesturally-controlled improvisation system, and reflect on the lessons learned throughout its development. KEYWORDS: piano; improvisation; gesture recognition; machine learning
A machine learning approach to voice separation in lute tablature