
    Musical instrument mapping design with Echo State Networks

    Echo State Networks (ESNs), a form of recurrent neural network developed in the field of Reservoir Computing, show significant potential as a tool for designing mappings for digital musical instruments. They have, however, seldom been used in this area, so this paper explores their possible applications. This project contributes a new open-source library, developed to allow ESNs to run in the Pure Data dataflow environment. Several use cases were explored, focusing on current issues in mapping research. ESNs were found to work successfully in scenarios of pattern classification, multiparametric control, explorative mapping, and the design of nonlinearities and uncontrol. 'Un-trained' behaviours are proposed as augmentations to the conventional reservoir system that allow the player to introduce potentially interesting nonlinearities and uncontrol into the reservoir. Interactive-evolution-style controls are proposed as strategies to help design these behaviours, which are otherwise dependent on arbitrary values and coarse global controls. A study on sound classification showed that ESNs could reliably differentiate between two drum sounds, and also generalise to other similar input. Following evaluation of the use cases, heuristics are proposed to aid the use of ESNs in computer music scenarios.
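    To make the reservoir mechanism concrete: a minimal ESN sketch, not the paper's Pure Data library. All parameter values here (reservoir size, spectral radius, ridge factor) are illustrative assumptions. The defining trait is that the recurrent weights stay fixed and random; only a linear readout is trained, here via ridge regression.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_res = 2, 100  # input and reservoir sizes (assumed, for illustration)

# Fixed random input and recurrent weights; never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the readout: ridge regression from states to a toy target.
inputs = rng.uniform(-1, 1, (200, n_in))
targets = np.sin(np.cumsum(inputs[:, 0]))[:, None]
X = run_reservoir(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets)

predictions = X @ W_out  # readout output for the training sequence
```

    In a mapping context, the inputs would be controller sensor streams and the readout targets the desired synthesis parameters; the fixed reservoir supplies the temporal nonlinearity cheaply.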

    Evaluating the Wiimote as a musical controller

    The Nintendo Wiimote is growing in popularity with musicians as a controller. This mode of use is an adaptation from its intended use as a game controller, and requires evaluation of its functions in a musical context in order to understand its possibilities and limits. Drawing on Human Computer Interaction methodology, we assessed the core musical applications of the Wiimote and designed a usability experiment to test them. 17 participants took part, performing musical tasks in four contexts: triggering; precise and expressive continuous control; and gesture recognition. Interviews and empirical evidence were utilised to probe the device’s limitations and its creative strengths. This study should help potential users to plan the Wiimote’s employment in their projects, and should be useful as a case study in HCI evaluation of musical controllers.

    Multiparametric interfaces for fine-grained control of digital music

    Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians’ experience of using these systems. Two new musical controllers were created, based on a multiparametric paradigm where multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller, operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams. The development of these systems and the evaluation of musicians’ experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces. The results highlight important themes concerning multiparametric musical control. These include the use of metaphor and imagery, choreography and language creation, individual differences, and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools where different input devices fit different creative scenarios.

    Towards new modes of collective musical expression through audio augmented reality

    We investigate how audio augmented reality can engender new collective modes of musical expression in the context of a sound art installation, Listening Mirrors, exploring the creation of interactive sound environments for musicians and non-musicians alike. Listening Mirrors is designed to incorporate physical objects and computational systems for altering the acoustic environment, to enhance collective listening and challenge traditional musician-instrument performance. At a formative stage in exploring audio AR technology, we conducted an audience experience study investigating questions around the potential of audio AR in creating sound installation environments for collective musical expression. We collected interview evidence about the participants' experience and analysed the data using a grounded theory approach. The results demonstrated that the technology has the potential to create immersive spaces where an audience can feel safe to experiment musically, and showed how AR can intervene in sound perception to instrumentalise an environment. The results also revealed caveats about the use of audio AR, mainly centred on social inhibition and seamlessness of experience, and on finding a balance between mediated worlds to create space for interplay between the two.

    Toward a synthetic acoustic ecology: sonically situated, evolutionary agent based models of the acoustic niche hypothesis

    We introduce the idea of Synthetic Acoustic Ecology (SAC) as a vehicle for transdisciplinary investigation to develop methods and address open theoretical, applied and aesthetic questions in the scientific and artistic disciplines of acoustic ecology. Ecoacoustics is an emerging science that investigates and interprets the ecological role of sound. It draws conceptually from, and is reinvigorating, the related arts-humanities disciplines historically associated with acoustic ecology, which are concerned with sonically mediated relationships between human beings and their environments. Both study the acoustic environment, or soundscape, as the literal and conceptual site of interaction of human and non-human organisms. However, no coherent theories exist to frame the ecological role of the soundscape, or to elucidate the evolutionary processes through which it is structured. Similarly, there is a lack of appropriate computational methods to analyse the macro soundscape, which hampers application in conservation. We propose that a sonically situated flavour of Alife evolutionary agent-based model could build a productive bridge between the art, science and technologies of acoustic ecological investigations, to the benefit of all. As a first step, two simple models of the acoustic niche hypothesis are presented, which are shown to exhibit emergence of complex spectro-temporal soundscape structures and adaptation to and recovery from noise pollution events. We discuss the potential of SAC as a lingua franca between empirical and theoretical ecoacoustics, and wider transdisciplinary research in ecoacoustic ecology.
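    The acoustic niche hypothesis predicts that vocalising species partition the frequency spectrum to avoid masking one another. A toy evolutionary agent-based sketch of that dynamic, assuming a simple discrete-band spectrum and fitness inversely proportional to band crowding (these are illustrative assumptions, not the paper's models):

```python
import random

N_BANDS = 8      # discrete frequency bands in the shared soundscape
N_AGENTS = 40    # vocalising agents
GENERATIONS = 200

random.seed(1)
# Each agent's genome is simply the band it calls in.
population = [random.randrange(N_BANDS) for _ in range(N_AGENTS)]

def band_counts(pop):
    """Occupancy of each frequency band."""
    counts = [0] * N_BANDS
    for band in pop:
        counts[band] += 1
    return counts

for _ in range(GENERATIONS):
    counts = band_counts(population)
    # Fitness: fewer callers in your band means less signal masking.
    fitness = [1.0 / counts[band] for band in population]
    # Replace the most-masked agent with a mutated copy of the least-masked.
    worst = fitness.index(min(fitness))
    best = fitness.index(max(fitness))
    child = population[best]
    if random.random() < 0.2:  # mutation: drift to an adjacent band
        child = (child + random.choice([-1, 1])) % N_BANDS
    population[worst] = child

final = band_counts(population)
```

    Selection against crowded bands pushes the population toward spreading across the spectrum, a minimal analogue of the spectro-temporal niche partitioning the abstract describes.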