6 research outputs found

    Musical control gestures in mobile handheld devices: Design guidelines informed by daily user experience

    Mobile handheld devices, such as smartphones and tablets, have become some of the most prominent ubiquitous terminals within the information and communication technology landscape. Their transformative power within the digital music domain has changed the music ecosystem from production to distribution and consumption. Of interest here is the ever-expanding number of mobile music applications. Despite their growing popularity, their design in terms of interaction perception and control is highly arbitrary; it remains poorly addressed in the related literature and lacks a clear, systematized approach. In this context, our paper aims to provide the first steps towards defining guidelines for optimal sonic interaction design practices in mobile music applications. Our design approach is informed by user data on appropriating mobile handheld devices. We conducted an experiment to learn the links between control gestures and musical parameters such as pitch, duration, and amplitude. A twofold action-reflection protocol and a tool-set for evaluating these links are also proposed. The results collected from the experiment show statistically significant trends in pitch and duration control gesture mappings. Amplitude, on the other hand, elicited a more diverse mapping approach, showing no definitive trend in this experiment.
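
    To make the gesture-to-parameter links studied above concrete, here is a minimal Python sketch of one plausible mapping from a normalized touchscreen sample to pitch, duration, and amplitude. The axis assignments and ranges are illustrative assumptions, not the guidelines the paper derives.

    # One hypothetical gesture-to-parameter mapping of the kind the
    # experiment evaluates. Axis assignments and ranges are illustrative.
    def map_touch_to_note(x, y, pressure):
        """Map a normalized touch sample (each value in 0.0-1.0) to note parameters."""
        pitch = int(48 + y * 36)      # vertical position -> MIDI pitch (C3 to C6)
        duration = 0.1 + x * 1.9      # horizontal position -> duration in seconds
        amplitude = pressure          # touch pressure -> normalized amplitude
        return pitch, duration, amplitude

    print(map_touch_to_note(0.5, 0.75, 0.8))   # -> (75, 1.05, 0.8)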

    Sensor-rich real-time adaptive gesture and affordance learning platform for electronic music control

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004. Includes bibliographical references (p. [151]-156). Acoustic musical instruments have traditionally featured static mappings from input gesture to output sound, their input affordances being tied to the physics of their sound-production mechanism. More recently, the advent of digital sound synthesizers and electronic music controllers has abolished the tight coupling between input gesture and resultant sound, making an exponentially large range of input-to-output mappings possible, as well as an infinite set of possible timbres. This revolutionary change in the way sound can be produced and controlled brings with it the burden of design: compelling and natural mappings from gesture to sound must now be created in order to make a playable electronic music instrument. The goal of this thesis is to present a device that allows flexible assignment of input gesture to output sound, acting as a laboratory to further understanding of the connection between gesture and sound. An embodied multi-degree-of-freedom gestural input device was constructed. The device was built to support six-degree-of-freedom inertial sensing, five isometric buttons, two digital buttons, two-axis bend sensing, isometric rotation sensing, and isotonic electric field sensing of position. Software was written to handle the incoming serial data and to implement a trainable interface by which a user can explore the sounds possible with the device, associate a custom inertial gesture with a sound for later playback, make custom input degree-of-freedom (DOF) to effect-modulation mappings, and play with the resulting configuration. A user study with 25 subjects was run to evaluate the system in terms of how engaging and enjoyable it was, its ability to inspire interest in future play and performance, its ease of gesturing, and its novelty. In addition to these subjective measures, implicit data was collected about the types of gesture-to-sound and input-DOF-to-effect mappings that the subjects created. Favorable and interesting results were found in the data from the study, indicating that a flexible trainable musical instrument is not only a compelling performance tool but also a useful laboratory for understanding the connection between human gesture and sound. by Jeffrey Merrill. S.M.
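
    The train-a-gesture, then-trigger-its-sound interaction pattern can be sketched in a few lines of Python: a gesture is recorded as a scalar trace, associated with a sound, and later matched by nearest-neighbour distance over resampled traces. The thesis system recognizes far richer multi-DOF inertial gestures; the names and the matching scheme here are illustrative assumptions only.

    # A minimal sketch of trainable gesture-to-sound association, using
    # nearest-neighbour matching over resampled scalar traces.
    import math

    def resample(trace, n=32):
        """Linearly index-resample a list of scalar samples to n points."""
        step = (len(trace) - 1) / (n - 1)
        return [trace[round(i * step)] for i in range(n)]

    class GestureTrainer:
        def __init__(self):
            self.templates = []   # list of (resampled trace, sound name)

        def train(self, trace, sound):
            self.templates.append((resample(trace), sound))

        def recognize(self, trace):
            probe = resample(trace)
            return min(self.templates, key=lambda t: math.dist(t[0], probe))[1]

    gt = GestureTrainer()
    gt.train([0, 1, 2, 3, 2, 1, 0], "bell")     # an up-then-down gesture
    gt.train([3, 2, 1, 0, 1, 2, 3], "drum")     # a down-then-up gesture
    print(gt.recognize([0, 1, 3, 3, 2, 0, 0]))  # -> bell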

    Multiparametric interfaces for fine-grained control of digital music

    Digital technology provides a very powerful medium for musical creativity, and the way in which we interface and interact with computers has a huge bearing on our ability to realise our artistic aims. The standard input devices available for the control of digital music tools tend to afford a low quality of embodied control; they fail to realise our innate expressiveness and dexterity of motion. This thesis looks at ways of capturing more detailed and subtle motion for the control of computer music tools; it examines how this motion can be used to control music software, and evaluates musicians' experience of using these systems. Two new musical controllers were created, based on a multiparametric paradigm where multiple, continuous, concurrent motion data streams are mapped to the control of musical parameters. The first controller, Phalanger, is a markerless video tracking system that enables the use of hand and finger motion for musical control. EchoFoam, the second system, is a malleable controller, operated through the manipulation of conductive foam. Both systems use machine learning techniques at the core of their functionality. These controllers are front ends to RECZ, a high-level mapping tool for multiparametric data streams. The development of these systems and the evaluation of musicians' experience of their use constructs a detailed picture of multiparametric musical control. This work contributes to the developing intersection between the fields of computer music and human-computer interaction. The principal contributions are the two new musical controllers, and a set of guidelines for the design and use of multiparametric interfaces for the control of digital music. This work also acts as a case study of the application of HCI user experience evaluation methodology to musical interfaces. The results highlight important themes concerning multiparametric musical control. These include the use of metaphor and imagery, choreography and language creation, individual differences and uncontrol. They highlight how this style of interface can fit into the creative process, and advocate a pluralistic approach to the control of digital music tools where different input devices fit different creative scenarios.
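
    The multiparametric paradigm, in which several concurrent continuous streams are mapped many-to-many onto musical parameters, can be sketched as a simple weight matrix in Python. The stream and parameter names below are invented for illustration; RECZ itself is a higher-level mapping tool, not this matrix.

    # Sketch of a many-to-many multiparametric mapping as a weight matrix.
    # Stream and parameter names are invented; values are illustrative.
    INPUTS = ["finger_x", "finger_y", "finger_spread"]   # concurrent motion streams
    PARAMS = ["cutoff", "pitch_bend", "reverb_mix"]      # musical parameters

    # WEIGHTS[i][j]: contribution of input stream i to parameter j
    WEIGHTS = [
        [0.8, 0.0, 0.2],
        [0.0, 1.0, 0.0],
        [0.3, 0.0, 0.7],
    ]

    def apply_mapping(inputs):
        """Map a dict of normalized input streams to normalized parameters."""
        values = [inputs[name] for name in INPUTS]
        return {
            param: sum(values[i] * WEIGHTS[i][j] for i in range(len(INPUTS)))
            for j, param in enumerate(PARAMS)
        }

    print(apply_mapping({"finger_x": 0.5, "finger_y": 0.9, "finger_spread": 0.2}))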

    Real-time Gesture Mapping in Pd Environment using Neural Networks

    In this paper, we describe an adaptive approach to gesture mapping for musical applications which serves as a mapping system for musical instrument design. A neural network approach is chosen for this goal, and all the required interfaces and abstractions are developed and demonstrated in the Pure Data environment. We focus on neural network representation and implementation in a real-time musical environment. This adaptive mapping is evaluated in different static and dynamic situations using a network of sensors sampled at a rate of 200 Hz in real time. Finally, some remarks are given on the network design and future work. Keywords: real-time gesture control, adaptive interfaces, sensor and actuator technologies for musical applications, musical mapping algorithms and intelligent controllers, Pure Data.
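
    The paper's network lives in Pure Data abstractions; as a readable stand-in, the following Python sketch shows the same shape of computation: a small feedforward network mapping one sensor frame to synthesis parameters, run once per incoming frame. Layer sizes and weights are placeholders, not values from the paper.

    # Plain-Python stand-in for a small feedforward mapping network.
    import math
    import random

    random.seed(0)

    def layer(n_in, n_out):
        """Random weight matrix for a fully connected layer (placeholder values)."""
        return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

    W1, W2 = layer(4, 6), layer(6, 2)   # 4 sensor values -> 6 hidden -> 2 parameters

    def forward(sensors):
        """One mapping step: a 4-value sensor frame in, 2 synth parameters out."""
        act = lambda W, x: [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W]
        return act(W2, act(W1, sensors))

    # At the paper's 200 Hz sensor rate, this would run once per incoming frame.
    print(forward([0.1, -0.4, 0.7, 0.0]))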

    Choreographing the extended agent: performance graphics for dance theater

    Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (v. 2, leaves 448-458). The marriage of dance and interactive image has been a persistent dream over the past decades, but reality has fallen far short of potential for both technical and conceptual reasons. This thesis proposes a new approach to the problem and lays out the theoretical, technical and aesthetic framework for the innovative art form of digitally augmented human movement. I will use as example works a series of installations, digital projections and compositions, each of which contains a choreographic component, either through collaboration with a choreographer directly or by the creation of artworks that automatically organize and understand purely virtual movement. These works lead up to two unprecedented collaborations with two of the greatest choreographers working today: new pieces that combine dance and interactive projected light using real-time motion capture live on stage. The existing field of "dance technology" is one with many problems. This is a domain with many practitioners, few techniques and almost no theory; a field that is generating "experimental" productions with every passing week, has literally hundreds of citable pieces and no canonical works; a field that is oddly disconnected from modern dance's history, pulled between the practical realities of the body and those of computer art, and has no influence on the prevailing digital art paradigms that it consumes. This thesis will seek to address each of these problems: by providing techniques and a basis for "practical theory"; by building artworks with resources and people that have never previously been brought together, in theaters and in front of audiences previously inaccessible to the field; and by proving through demonstration that a profitable and important dialogue between digital art and the pioneers of modern dance can in fact occur. The methodological perspective of this thesis is that of biologically inspired, agent-based artificial intelligence, taken to a high degree of technical depth. The representations, algorithms and techniques behind such agent architectures are extended and pushed into new territory for both interactive art and artificial intelligence. In particular, this thesis will focus on the control structures and the rendering of the extended agents' bodies, the tools for creating complex agent-based artworks in intense collaborative situations, and the creation of agent structures that can span live image and interactive sound production. Each of these parts becomes an element of what it means to "choreograph" an extended agent for live performance. Marc Downie. Ph.D.
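
    The abstract names agent-based artificial intelligence as its methodology without giving an algorithm, so the following Python sketch shows only the bare skeleton such architectures elaborate: an agent that perceives a dancer's tracked position each frame and steers its rendered body toward it. All names and the easing rule are hypothetical.

    # Bare skeleton of an agent perception-action loop driven by motion capture.
    class ExtendedAgent:
        def __init__(self):
            self.x, self.y = 0.0, 0.0   # position of the agent's rendered body

        def perceive_and_act(self, dancer_x, dancer_y, gain=0.1):
            """Steer the body a fraction of the way toward the tracked dancer."""
            self.x += gain * (dancer_x - self.x)
            self.y += gain * (dancer_y - self.y)
            return self.x, self.y

    agent = ExtendedAgent()
    for frame in [(1.0, 0.0), (1.0, 0.5), (0.5, 0.5)]:   # stand-in mocap frames
        print(agent.perceive_and_act(*frame))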

    Soma: live performance where congruent musical, visual, and proprioceptive stimuli fuse to form a combined aesthetic narrative

    Artists and scientists have long had an interest in the relationship between music and visual art. Today, many occupy themselves with correlated animation and music, called 'visual music'. Established tools and paradigms for performing live visual music, however, have several limitations: virtually no user interface exists with an expressivity comparable to live musical performance; mappings between music and visuals are typically reduced to the music's beat and amplitude being statically associated with the visuals, disallowing close audiovisual congruence, tension and release, and suspended expectation in narratives; collaborative performance, common in other live art, is mostly absent due to technical limitations; and preparing or improvising performances is complicated, often requiring software development. This thesis addresses these limitations through a transdisciplinary integration of findings from several research areas, detailing the resulting ideas and their implementation in a novel system. Musical instruments are used as the primary control data source, accurately encoding all musical gestures of each performer. The advanced embodied knowledge musicians have of their instruments allows increased expressivity, the full control data bandwidth allows high mapping complexity, and musicians' familiarity with collaborative performance may translate to visual music performance. The conduct of Mutable Mapping, gradually creating, destroying and altering mappings, may allow for a narrative in mapping during performance. The art form of Soma, in which correlated auditory, visual and proprioceptive stimuli form a combined narrative, builds on knowledge that performers and audiences are more engaged in performance requiring advanced motor knowledge, and when congruent percepts across modalities coincide. Preparing and improvising are simplified by re-adapting the Processing programming language for artists to behave as a plug-in API, thus encapsulating complexity in modules which may be dynamically layered during performance. Design research methodology is employed during development and evaluation, while introducing the additional viewpoint of ethnography during evaluation, engaging musicians, audience and visuals performers.
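
    The notion of Mutable Mapping, in which mappings are created, altered and destroyed as part of the performance itself, can be illustrated with a small Python sketch. The feature and visual-parameter names are invented, and the real system routes full instrument control data rather than single scalars.

    # Sketch of mutable mapping: routes from musical features to visual
    # parameters are created, altered and destroyed mid-performance.
    class MutableMapper:
        def __init__(self):
            self.routes = {}   # feature name -> (visual parameter, scale)

        def create(self, feature, param, scale=1.0):
            self.routes[feature] = (param, scale)

        def destroy(self, feature):
            self.routes.pop(feature, None)

        def process(self, features):
            """Turn one frame of musical features into visual parameter updates."""
            out = {}
            for name, value in features.items():
                if name in self.routes:
                    param, scale = self.routes[name]
                    out[param] = value * scale
            return out

    m = MutableMapper()
    m.create("note_onset", "flash_brightness", 2.0)
    print(m.process({"note_onset": 0.4, "pitch": 60}))  # {'flash_brightness': 0.8}
    m.destroy("note_onset")                             # the mapping narrative moves on
    print(m.process({"note_onset": 0.4}))               # {}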