
    Musicians and Machines: Bridging the Semantic Gap In Live Performance

    This thesis explores the automatic extraction of musical information from live performances, with the intention of using that information to create novel, responsive and adaptive performance tools for musicians. We focus on two forms of musical analysis: harmonic analysis and beat tracking. We present two harmonic analysis algorithms, including a novel chroma vector analysis technique which we later use as the input to a chord recognition algorithm. We also present a real-time beat tracker, based upon an extension of state-of-the-art non-causal models, that is computationally efficient and performs strongly compared to other models. Furthermore, through a modular study of several beat tracking algorithms we attempt to establish methods to improve beat tracking, and apply these lessons to our model. Building upon this work, we show that these analyses can be combined to create a beat-synchronous musical representation, with harmonic information segmented at the level of the beat. We present a number of ways of calculating these representations and discuss their relative merits. We then introduce a technique, which we call Performance Following, for recognising repeated patterns in live musical performances. By examining the real-time beat-synchronous musical representation, this technique predicts future harmonic content in musical performances with no prior knowledge in the form of a score. Finally, we present a number of potential applications for live performance that incorporate the real-time musical analysis techniques outlined above: audio effects informed by beat tracking, a technique for synchronising video to a live performance, the use of harmonic information to control visual displays, and an automatic accompaniment system based upon our performance following technique.
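
    The beat-synchronous representation described above pairs beat tracking with per-beat aggregation of harmonic (chroma) features. As a minimal illustrative sketch of that idea, using the librosa library and an offline beat tracker rather than the thesis's own real-time algorithms, one might compute:

    import numpy as np
    import librosa

    def beat_synchronous_chroma(path):
        # Load audio and compute a 12-bin chromagram (harmonic content per frame)
        y, sr = librosa.load(path)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)
        # Offline beat tracking; a stand-in for the real-time tracker in the thesis
        _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
        # Aggregate chroma frames between consecutive beats (median per interval),
        # giving one harmonic summary vector per beat
        return librosa.util.sync(chroma, beat_frames, aggregate=np.median)

    Each resulting column summarises the harmonic content of one beat interval, which is the kind of representation a chord recogniser or performance follower could consume.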

    The Audio Degradation Toolbox and its Application to Robustness Evaluation

    We introduce the Audio Degradation Toolbox (ADT) for the controlled degradation of audio signals, and propose its usage as a means of evaluating and comparing the robustness of audio processing algorithms. Music recordings encountered in practical applications are subject to varied, sometimes unpredictable degradation. For example, audio is degraded by low-quality microphones, noisy recording environments, MP3 compression, dynamic compression in broadcasting or vinyl decay. In spite of this, no standard software for the degradation of audio exists, and music processing methods are usually evaluated against clean data. The ADT fills this gap by providing Matlab scripts that emulate a wide range of degradation types. We describe 14 degradation units, and how they can be chained to create more complex, 'real-world' degradations. The ADT also provides functionality to adjust existing ground truth, correcting for temporal distortions introduced by degradation. Using four different music informatics tasks, we show that performance strongly depends on the combination of method and degradation applied. We demonstrate that specific degradations can reduce or even reverse the performance difference between two competing methods. ADT source code, sounds, impulse responses and definitions are freely available for download.
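
    The core idea of chaining degradation units can be sketched in a few lines. The functions below are hypothetical Python stand-ins for illustration only, not part of the ADT's Matlab API:

    import numpy as np

    def add_noise(y, snr_db=20.0):
        # Add white noise at a chosen signal-to-noise ratio (dB)
        signal_power = np.mean(y ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        return y + np.sqrt(noise_power) * np.random.randn(len(y))

    def hard_clip(y, threshold=0.5):
        # Clip the waveform, mimicking low-quality recording equipment
        return np.clip(y, -threshold, threshold)

    def degrade(y, units):
        # Apply a chain of degradation units in order
        for unit in units:
            y = unit(y)
        return y

    # Example chain: noisy recording environment followed by clipping
    # degraded = degrade(y, [lambda x: add_noise(x, snr_db=10.0), hard_clip])

    A robustness evaluation would then rerun a music informatics method on the degraded signal and, where the degradation shifts events in time, adjust the ground-truth annotations accordingly.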

    Harmony and Technology Enhanced Learning

    New technologies offer rich opportunities to support education in musical harmony. In this chapter we consider theoretical perspectives and underlying principles behind technologies for learning and teaching harmony. Such perspectives help in matching existing and future technologies to educational purposes, and in inspiring the creative re-appropriation of technologies.

    Kinect-ed Piano

    We describe a gesturally-controlled improvisation system for an experimental pianist, developed over several laboratory sessions and used during a performance [1] at the 2011 Conference on New Interfaces for Musical Expression (NIME). We discuss the system's architecture, its performative advantages and limitations, and reflect on the lessons learned throughout its development. KEYWORDS: piano; improvisation; gesture recognition; machine learning