Comparison of input devices in an ISEE direct timbre manipulation task
The representation and manipulation of sound within multimedia systems is an important and currently under-researched area. The paper gives an overview of the authors' work on the direct manipulation of audio information, and describes a solution based upon the navigation of four-dimensional scaled timbre spaces. Three hardware input devices were experimentally evaluated for use in a timbre space navigation task: the Apple Standard Mouse, the Gravis Advanced Mousestick II joystick (in absolute and relative modes) and the Nintendo Power Glove. Results show that the usability of these devices significantly affected the efficacy of the system, and that conventional low-cost, low-dimensional devices provided better performance than the low-cost, multidimensional dataglove.
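The core interaction problem here, steering a point through a four-dimensional space with devices that mostly sense two dimensions, can be sketched as follows. This is a hypothetical illustration of one plausible mapping (toggling between two 2-D axis pairs), not the ISEE system's actual scheme; the class and method names are invented for the example.

```python
class TimbreSpaceNavigator:
    """Position in a 4-D scaled timbre space, driven by a 2-D device."""

    def __init__(self):
        self.position = [0.0, 0.0, 0.0, 0.0]  # axes (x, y, z, w), each in [0, 1]
        self.plane = 0                        # 0 -> edit (x, y); 1 -> edit (z, w)

    def toggle_plane(self):
        """A 2-D device drives only two axes at a time; switch the pair."""
        self.plane = 1 - self.plane

    def move(self, dx, dy):
        """Apply a relative 2-D delta (e.g. from a mouse) to the active pair."""
        i = self.plane * 2
        self.position[i] = min(1.0, max(0.0, self.position[i] + dx))
        self.position[i + 1] = min(1.0, max(0.0, self.position[i + 1] + dy))


nav = TimbreSpaceNavigator()
nav.move(0.3, 0.1)     # drives x and y
nav.toggle_plane()
nav.move(0.2, -0.05)   # drives z and w (w clamps at the 0.0 floor)
```

A genuinely multidimensional device such as the Power Glove could in principle drive all four axes at once without the mode switch, which is part of what the evaluation compares.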
Interaction Design for Digital Musical Instruments
The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model.
Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:
1. The accepted design conventions of the hardware in use
2. Established musical systems, acoustic or digital
3. The physical configuration of the hardware devices and the grouping of controls that such configuration suggests
This thesis proposes an alternative way to approach the design of digital musical instrument behaviour: examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware.
This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician.
Design Strategies for Adaptive Social Composition: Collaborative Sound Environments
In order to develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be effectively used for developing innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as "Tangible Interfaces", have been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised; as such they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments, using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
Dan Livingston
Electrifying Opera, Amplifying Agency: Designing a performer-controlled interactive audio system for opera singers
This artistic research project examines the artistic, technical, and pedagogical challenges of developing a performer-controlled interactive technology for real-time vocal processing of the operatic voice. As a classically trained singer-composer, I have explored ways to merge the compositional aspects of transforming electronic sound with the performative aspects of embodied singing.
I set out to design, develop, and test a prototype for an interactive vocal processing system using sampling and audio processing methods. The aim was to foreground and accommodate an unamplified operatic voice interacting with the room's acoustics and the extended disembodied voices of the same performer. The iterative prototyping explored the performer's relationship to the acoustic space, the relationship between the embodied acoustic voice and disembodied processed voice(s), and the relationship to memory and time.
One of the core challenges was to design a system that would accommodate mobility and allow interaction based on auditory and haptic cues rather than visual. In other words, a system allowing the singer to control their sonic output without standing behind a laptop. I wished to highlight and amplify the performer's agency with a system that would enable nuanced and variable vocal processing, be robust, teachable, and suitable for use in various settings: solo performances, various types and sizes of ensembles, and opera. This entailed mediating different needs, training, and working methods of both electronic music and opera practitioners.
One key finding was that even simple audio processing could achieve complex musical results. The audio processes used were primarily combinations of feedback and delay lines, yet performers could quickly obtain complex musical results through continuous gestural control and the ability to route signals to four channels. This complexity sometimes led to surprising results, eliciting improvisatory responses even from singers without prior improvisation experience.
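As a rough illustration of how little machinery a feedback-plus-delay process needs, here is a minimal sketch of a delay line with feedback. The function name, parameter values and mono, offline formulation are assumptions for the example, not the project's actual real-time, four-channel patch.

```python
def feedback_delay(signal, delay_samples, feedback, mix=0.5):
    """Mix the dry signal with a delayed copy that feeds back on itself."""
    buffer = [0.0] * delay_samples      # circular delay buffer
    out = []
    write = 0
    for x in signal:
        delayed = buffer[write]                  # oldest sample in the buffer
        buffer[write] = x + delayed * feedback   # re-inject input + feedback
        write = (write + 1) % delay_samples
        out.append((1 - mix) * x + mix * delayed)
    return out


# A single impulse produces a decaying echo train every delay_samples samples:
echoes = feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5)
```

Because the output at any moment depends on what was fed in earlier, even this one recursive structure responds to the timing of a performer's input, which is one way simple processes yield complex musical behaviour.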
The project has resulted in numerous vocal solo, chamber, and operatic performances in Norway, the Netherlands, Belgium, and the United States. The research contributes to developing emerging technologies for live electronic vocal processing in opera, developing the improvisational performance skills needed to engage with those technologies, and exploring alternatives for sound diffusion conducive to working with unamplified operatic voices.
Links:
Exposition and documentation of PhD research in Research Catalogue: Electrifying Opera, Amplifying Agency. Artistic results. Reflection and Public Presentations (PhD) (2023):
https://www.researchcatalogue.net/profile/show-exposition?exposition=2222429
Home/Reflections:
https://www.researchcatalogue.net/view/2222429/2222460
Mapping & Prototyping:
https://www.researchcatalogue.net/view/2222429/2247120
Space & Speakers:
https://www.researchcatalogue.net/view/2222429/2222430
Presentations:
https://www.researchcatalogue.net/view/2222429/2247155
Artistic Results:
https://www.researchcatalogue.net/view/2222429/222248
Intra-Actions: Experiments with Velocity and Position in Continuous Controllers
Continuous MIDI controllers commonly output their position only, with no influence from the performative energy with which they were set. In this paper, creative uses of time as a parameter in continuous controller mapping are demonstrated: the speed of movement affects the position mapping and control output. A set of SuperCollider classes is presented, developed in the author's practice in computer music, where they have been used together with commercial MIDI controllers. The creative applications employ various approaches and metaphors for scaling time, but also machine learning for recognising patterns. In these techniques, performer, controller and synthesis "intra-act", to use Karen Barad's term: because position and velocity are derived from the same data, sound output cannot be predicted without the temporal context of performance.
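The core idea, deriving velocity from the same stream of position readings so that the energy of a gesture, not just its endpoint, shapes the output, can be sketched outside SuperCollider roughly as follows. The Python class, the linear "boost" rule and the depth parameter are illustrative assumptions, not the paper's actual classes.

```python
class VelocityMapper:
    """Maps MIDI controller values (0-127) using position *and* speed."""

    def __init__(self, depth=0.5):
        self.depth = depth        # how strongly velocity bends the output
        self.last_value = None
        self.last_time = None

    def process(self, value, time_s):
        """Return a 0..1 output influenced by the speed of the gesture."""
        position = value / 127.0
        velocity = 0.0
        if self.last_value is not None:
            dt = max(time_s - self.last_time, 1e-6)
            velocity = abs(value - self.last_value) / 127.0 / dt  # range/s
        self.last_value, self.last_time = value, time_s
        # Fast gestures push the output beyond the static position mapping.
        boost = min(velocity * self.depth, 1.0)
        return min(position * (1.0 + boost), 1.0)


# Same final controller position, different outputs depending on gesture speed:
slow = VelocityMapper()
slow.process(0, 0.0)
slow_out = slow.process(64, 1.0)   # gentle one-second sweep

fast = VelocityMapper()
fast.process(0, 0.0)
fast_out = fast.process(64, 0.1)   # quick flick to the same position
```

Note the "intra-action": because velocity is computed from successive positions, the two readings are inseparable, and the same final fader position yields different sound depending on how it was reached.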
The integrated sound, space and movement environment: The uses of analogue and digital technologies to correlate topographical and gestural movement with sound
This thesis investigates correlations between auditory parameters and parameters associated with movement in a sensitised space. The research examines those aspects of sound that form correspondences with the movement, force or position of a body or bodies in a space sensitised by devices for acquiring gestural or topographical data. A wide range of digital technologies is scrutinised to establish which are most effective for obtaining detailed and accurate information about movement in a given space, along with the methods and procedures for its analysis, transposition and synthesis into sound. The thesis describes pertinent work in the field from the last 20 years, the issues raised in those works, and the issues raised by my own work in the area. It draws conclusions that point to the further development of an integrated model of a space that is sensitised to movement and responds in sound in such a way that it can be appreciated by performers and audiences. The artistic and research practices cited are principally from the areas of dance-and-technology, sound installation and alternative gestural controllers for musical applications.
Interaction and the Art of User-Centered Digital Musical Instrument Design
This thesis documents the formulation of a research-based practice in multimedia art, technology and digital musical instrument design. The primary goal of my research was to investigate the principles and methodologies involved in the structural design of new interactive digital musical instruments aimed at performance by members of the general public, and to identify ways that the design process could be optimized to increase user adoption of these new instruments. The research was performed over three years and moved between studies at the University of Maine, internships in New York, and specialized research at the Input Devices and Music Interaction Laboratory at McGill University.
My work is presented in two sections. The first covers early studies in user interaction and exploratory works in web and visual design, sound art, installation, and music performance. While not specifically tied to the research topic of user adoption of digital musical instruments, this work serves as the conceptual and technical background for the dedicated work to follow. The second section is dedicated to focused research on digital musical instrument design through two major projects carried out as a Graduate Research Trainee at McGill University. The first was the design and prototype of the Noisebox, a new digital musical instrument. The purpose of this project was to learn the various stages of instrument design through practical application. A working prototype has been presented and tested, and a second version is currently being built. The second project was a user study that surveyed musicians about digital musical instrument use. It asked questions about background, instrument choice, music styles played, and experiences with and attitudes towards new digital musical instruments.
Based on the results of the two research projects, a model of digital musical instrument design is proposed that adopts a user-centered focus, soliciting user input and feedback throughout the design process from conception to final testing. This approach aims to narrow the gap between conceptual design of new instruments and technologies and the actual musicians who would use them.