5 research outputs found
Exploring the Motivations for Building New Digital Musical Instruments
Over the past four decades, the number, diversity and complexity of digital musical instruments (DMIs) have increased rapidly. There are very few constraints on DMI design, as such systems can be easily reconfigured, offering near limitless flexibility for music-making. Given that new acoustic musical instruments have in many cases been created in response to the limitations of available technologies, what motivates the development of new DMIs? We conducted an interview study with ten designers of new DMIs, in order to explore 1) the motivations electronic musicians may have for wanting to build their own instruments; and 2) the extent to which these motivations relate to the context in which the artist works and performs (academic vs. club settings). We found that four categories of motivation were mentioned most often: M1: wanting to bring greater embodiment to the activity of performing and producing electronic music; M2: wanting to improve audience experiences of DMI performances; M3: wanting to develop new sounds; and M4: wanting to build responsive systems for improvisation. There were also some detectable trends in motivation according to the context in which the artists work and perform. Our results offer the first systematically gathered insights into the motivations for new DMI design. It appears that the challenges of controlling digital sound synthesis drive the development of new DMIs, rather than the shortcomings of any one particular design or existing technology.
Musical Gesture through the Human Computer Interface: An Investigation using Information Theory
This study applies information theory to investigate human ability to communicate using continuous control sensors, with a particular focus on informing the design of digital musical instruments. There is an active practice of building and evaluating such instruments, for instance, in the New Interfaces for Musical Expression (NIME) conference community. The fidelity of the instruments can depend on the included sensors, and although much anecdotal evidence and craft experience informs the use of these sensors, relatively little is known about the ability of humans to control them accurately. This dissertation addresses this issue and related concerns, including continuous control performance in increasing degrees-of-freedom, pursuit tracking in comparison with pointing, and the estimations of musical interface designers and researchers of human performance with continuous control sensors. The methodology used models the human-computer system as an information channel while applying concepts from information theory to performance data collected in studies of human subjects using sensing devices. These studies not only add to knowledge about human abilities, but they also inform issues in musical mappings, ergonomics, and usability.
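The channel-model approach described above can be illustrated with a small sketch: estimating the mutual information between intended targets and produced responses from observed pairs. This is one plausible way to quantify how many bits a control gesture transmits through the human-computer "channel"; it is an illustrative assumption, not the dissertation's exact method.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate mutual information (bits) between intended targets and
    produced responses, given a list of (intended, produced) observations.

    Illustrative sketch: a plug-in estimate from empirical frequencies,
    not the dissertation's actual estimator.
    """
    n = len(pairs)
    px = Counter(x for x, _ in pairs)   # marginal counts of intents
    py = Counter(y for _, y in pairs)   # marginal counts of responses
    pxy = Counter(pairs)                # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint * log2( p(x,y) / (p(x) * p(y)) )
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A noiseless "channel" transmits the full 1 bit per binary choice:
perfect = [(0, 0), (1, 1)] * 5
# A response independent of intent transmits nothing:
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
```

With the `perfect` data the estimate is 1.0 bit per observation; with `independent` it is 0.0, matching the intuition that a sensor a performer cannot control conveys no information.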
Dialogic coding: a performance practice for co-creative computer improvisation
This research project explores Dialogic Coding – a performance
practice situated within the field of live computer music which works
towards a dialogic relationship with the computer as a programmable
musical instrument.
The writing articulates a Practice-as-Research (PaR) inquiry that
places my practice within specific contextual, analytical and
philosophical frameworks. The point of departure is the assumption
that following the concept of dialogue a more reflexive way of
performing music with a computer becomes possible. This approach
may produce innovative results through transformations of musical
ideas, embodied interactions as well as the performer's self-concept
within a situation of improvised group performance. Dialogic Coding
employs the concept of nontriviality to create an independent but at
the same time programmable musical agent – the apparatus – which
thus becomes a co-creator of the improvised music.
Other dialogic forms of music making, such as free improvised music
and the dynamic performances of programming found in live coding
practice, serve as context for Dialogic Coding practice. A
dialogic approach in music performance is based on listening and the
ability to speak one's voice in response to the situation. Here,
listening is understood beyond the auditory domain on the level of
abstract thinking and physical interaction (interface affordance).
This research presents a first-hand account of a computer
performance praxis and thus contributes to academic knowledge,
making some of the implicit or tacit 'knowings' contained in the
practice accessible to an outside community through this writing.
Dialogic Coding practice was developed through
participating in free improvised music 'sessions' with other musicians
as well as composing pieces in program code with which I then
performed live (solo and group). This writing contextualizes the
developed practice in a historic lineage, discusses it within the
conceptual framework of dialogism and delineates how a dialogic
approach fosters creativity, learning, surprise and flow. As a
conclusion I summarise the ethical dimension of Dialogic Coding as a
form of human-computer interaction (HCI)
Creativity, Exploration and Control in Musical Parameter Spaces.
This thesis investigates the use of multidimensional control of synthesis parameters
in electronic music, and the impact of controller mapping techniques on creativity.
The theoretical contribution of this work, the EARS model, provides a rigorous
application of creative cognition research to this topic. EARS provides a cognitive
model of creative interaction with technology, retrodicting numerous prior findings
in musical interaction research. The model proposes four interaction modes, and
characterises them in terms of parameter-space traversal mechanisms. Recommendations
for properties of controller-synthesiser mappings that support each of the
modes are given.
This thesis proposes a generalisation of Fitts' law that enables throughput-based
evaluation of multi-dimensional control devices.
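The kind of throughput measure being generalised can be sketched in its standard one-dimensional form, using the Shannon formulation of the index of difficulty. The extension shown here, taking `distance` as a Euclidean distance in the device's parameter space, is an assumption for illustration; the thesis develops its own generalisation, which is not reproduced.

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(trials):
    """Mean throughput (bits/s) over (distance, width, movement_time) trials.

    For a multidimensional controller, `distance` could be taken as the
    Euclidean distance to the target in the N-dimensional parameter space
    (an illustrative assumption, not the thesis's generalisation).
    """
    return sum(fitts_id(d, w) / mt for d, w, mt in trials) / len(trials)

# Hypothetical trial data: (distance, target width, movement time in s)
trials = [(0.8, 0.1, 1.2), (0.4, 0.1, 0.9), (0.6, 0.05, 1.5)]
tp = throughput(trials)
```

Throughput computed this way lets devices with different gain settings and target layouts be compared on a single bits-per-second scale, which is what makes it attractive for controller evaluation.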
Three experiments were run that studied musicians performing sound design tasks
with various interfaces. Mappings suited to three of the four EARS modes were
quantitatively evaluated.
Experiment one investigated the notion of a `divergent interface'. A mapping geometry
that caters to early-stage exploratory creativity was developed, and evaluated
via a publicly available tablet application. Dimension reduction of a 10D synthesiser
parameter space to a 2D surface was achieved using Hilbert space-filling curves. Interaction
data indicated that this divergent mapping was used for early-stage creativity,
and that the traditional sliders were used for late-stage fine-tuning.
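The dimension-reduction step rests on the locality of Hilbert space-filling curves: positions close on the curve stay close on the surface. A minimal sketch of the classic 2D case, mapping a distance along the curve to grid coordinates with the well-known iterative algorithm, is given below; the thesis's actual 10D-to-2D construction is not reproduced here.

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n-by-n grid.

    n must be a power of two. Classic iterative bit-manipulation
    algorithm; shown only to illustrate the locality property the
    mapping relies on.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:          # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# d2xy(2, d) for d = 0..3 visits (0, 0), (0, 1), (1, 1), (1, 0):
# each step along the curve moves to an adjacent grid cell.
```

That adjacency property is what lets a performer make small exploratory moves on the 2D surface and hear correspondingly gradual changes in the higher-dimensional synthesis parameters.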
Experiment two established a `minimal experimental paradigm' for sound design
interface evaluation. This experiment showed that multidimensional controllers were
faster than 1D sliders for locating a target sound in two and three timbre dimensions.
The final study tested a novel embodied interaction technique: ViBEAMP. This
system utilised a hand tracker and a 3D visualisation to train users to control 6
synthesis parameters simultaneously. Throughput was recorded as triple that of
six sliders, and working memory load was significantly reduced. This experiment
revealed that musical, time-targeted interactions obey a different speed-accuracy
trade-off law from accuracy-targeted interactions.
Electronic Engineering and Computer Science at Queen Mar