9 research outputs found
Amarok Pikap: interactive percussion playing automobile
Alternative interfaces that imitate the audio-structure of authentic musical instruments are often equipped with sound generation techniques that feature physical attributes similar to those of the instruments they imitate. The Amarok Pikap project utilizes an interactive system on the surface of an automobile that has been specially modified with various electronic sensors attached to its bodywork. Surfaces that are struck to produce sounds in percussive instrument modeling are commonly distinctive surfaces such as electronic pads or keys. In this article we carry out a status analysis to examine to what extent a percussion-playing interface using FSR and piezo sensors can represent an authentic musical instrument, and how a new interactive musical interface may draw the interest of the public to a promotional automobile campaign: Amarok Pikap. The structure that forms the design is also subjected to a technical analysis.
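The abstract does not publish the implementation, but the sensing step such an interface needs can be illustrated. The sketch below is a minimal, hypothetical version (threshold, ADC range, and function names are assumptions, not from the article): it scans a stream of raw piezo readings for local peaks and converts each peak into a MIDI-style strike velocity.

```python
# Hypothetical sketch of strike detection for a piezo-based percussion
# interface: raw ADC readings in, (sample index, velocity) pairs out.
# The threshold and the 10-bit reading range are illustrative assumptions.

def detect_strikes(samples, threshold=50, max_reading=1023):
    """Return (index, velocity) pairs for local peaks above threshold."""
    strikes = []
    for i in range(1, len(samples) - 1):
        s = samples[i]
        # A strike is a local maximum that clears the noise threshold.
        if s >= threshold and s >= samples[i - 1] and s > samples[i + 1]:
            velocity = min(127, round(s / max_reading * 127))
            strikes.append((i, velocity))
    return strikes
```

A harder hit produces a larger peak and therefore a larger velocity, which is what lets a struck car panel respond with percussion-like dynamics rather than a fixed sample.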
Advancing performability in playable media: a simulation-based interface as a dynamic score
When designing playable media with a non-game orientation, alternative play scenarios must be accompanied by alternative mechanics, just as gameplay scenarios are accompanied by game mechanics. The problems of designing playable media with a non-game orientation are stated as the problems of designing a platform for creative exploration and creative expression. For such design problems, two requirements are articulated: 1) play state transitions must be dynamic in non-trivial ways in order to achieve a significant level of engagement, and 2) pathways for players' experience from exploration to expression must be provided. The transformative pathway from creative exploration to creative expression is analogous to the pathways of game players' skill acquisition in gameplay. The paper first describes the concept of a simulation-based interface, and then binds that concept with the concept of a dynamic score. The former partially accounts for the first requirement, the latter for the second. The paper describes the prototype and realization of the two concepts' binding. "Score" is here defined as a representation of cue organization through a transmodal abstraction. A simulation-based interface with swarm mechanics is presented, and its function as a dynamic score is demonstrated with an interactive musical composition and performance.
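The abstract does not detail the swarm mechanics, but the core idea of a simulation acting as a dynamic score can be sketched. The toy version below (not the author's implementation; the cohesion rate and cue threshold are invented for illustration) runs a one-dimensional swarm and reads its aggregate state off as a performance cue each step:

```python
# Illustrative sketch of a simulation-based interface as a dynamic score:
# the swarm evolves on its own, and its aggregate state is abstracted
# into a cue that a performer or synthesizer interprets. The cohesion
# rate and the "tight" threshold are assumed values, not from the thesis.

def step(positions, cohesion=0.1):
    """Move each 1-D agent a fraction of the way toward the centroid."""
    c = sum(positions) / len(positions)
    return [p + cohesion * (c - p) for p in positions]

def cue(positions, tight=1.0):
    """Abstract the swarm state into a score cue: spread -> texture."""
    spread = max(positions) - min(positions)
    return "tutti" if spread < tight else "sparse"
```

Because the simulation has its own dynamics, the sequence of cues is non-trivial even under constant player input, which is the property the first requirement above asks for.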
Interaction Design for Digital Musical Instruments
The thesis aims to elucidate the process of designing interactive systems for musical performance that combine software and hardware in an intuitive and elegant fashion. The original contribution to knowledge consists of: (1) a critical assessment of recent trends in digital musical instrument design, (2) a descriptive model of interaction design for the digital musician and (3) a highly customisable multi-touch performance system that was designed in accordance with the model.
Digital musical instruments are composed of a separate control interface and a sound generation system that exchange information. When designing the way in which a digital musical instrument responds to the actions of a performer, we are creating a layer of interactive behaviour that is abstracted from the physical controls. Often, the structure of this layer depends heavily upon:
1. The accepted design conventions of the hardware in use
2. Established musical systems, acoustic or digital
3. The physical configuration of the hardware devices and the grouping of controls that such configuration suggests
This thesis proposes an alternate way to approach the design of digital musical instrument behaviour – examining the implicit characteristics of its composite devices. When we separate the conversational ability of a particular sensor type from its hardware body, we can look in a new way at the actual communication tools at the heart of the device. We can subsequently combine these separate pieces using a series of generic interaction strategies in order to create rich interactive experiences that are not immediately obvious or directly inspired by the physical properties of the hardware.
This research ultimately aims to enhance and clarify the existing toolkit of interaction design for the digital musician
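The separation of a sensor's "conversational ability" from its hardware body can be made concrete with a small sketch. The functions below are illustrative only (the thesis does not prescribe this code): each physical device is reduced to the kind of signal it speaks, and a generic strategy then combines those signals independently of which device produced them.

```python
# Minimal sketch of device-independent interaction design: sensors are
# reduced to signal types, and generic strategies combine the signals.
# Function names and the 0..1 normalisation convention are assumptions.

def continuous(value, lo, hi):
    """Normalise any continuous sensor (fader, FSR, accelerometer axis)."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def gate(pressed):
    """Reduce any momentary sensor (pad, key, switch) to a boolean gate."""
    return bool(pressed)

def modulate(carrier, modulator, depth=0.5):
    """Generic strategy: one normalised stream scales another."""
    return carrier * (1.0 - depth + depth * modulator)
```

Under this view, pairing a touch strip with a pressure pad or with a tilt sensor is the same design decision, which is what allows interaction strategies that are not suggested by the physical grouping of controls.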
Listening-Mode-Centered Sonification Design for Data Exploration
Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013.

From the Introduction to this thesis:
Through the ever-growing amount of data and the desire to make it accessible to the user through the sense of listening, sonification, the representation of data by means of sound, has been a subject of active research in computer science and the field of HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from being informative to alarming; sonification has been used to give sensory feedback in order to close the action and perception loop; last but not least, sonifications have also been developed for exploratory data analysis, where sound is used to represent data with unknown structures for hypothesis building.
Coming from computer science and HCI, the conceptualization of sonification has been driven mostly by application areas. The sonic arts, on the other hand, which have always contributed to the auditory display community, have a genuine focus on sound. Despite this close interdisciplinary relationship between communities of sound practitioners, a rich, sound- (or listening-)centered concept of sonification is still missing as a point of departure for design guidelines that overarch applications and tasks. Complementary to the useful organization along fields of application, a conceptual framework that is proper to sound needs to abstract from applications and, to some degree, from tasks, as neither is directly related to sound. I hence propose in this thesis to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose.
At the beginning of auditory display research, a continuum between a symbolic and an analogic pole was proposed by Kramer (1994a, page 21). In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative; analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, page 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative display purposes is proposed in the light of the recently increased research interest in listening modes and intentions.
Similar to the terms symbolic and analogic, listening modes have been discussed in auditory display since its beginnings, usually in dichotomous terms that were either identified with the words listening and hearing or understood as musical listening and everyday listening, as proposed by Gaver (1993a). More than 25 years earlier, Schaeffer (1966) had introduced four direct listening modes together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. The notion of reduced listening in particular goes beyond the connotative and denotative poles of the continuum proposed by Kramer and justifies the new terms descriptive and normative. Recently, a new taxonomy of listening modes has been proposed by Tuuri and Eerola (2012), motivated by an embodied cognition approach. The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes. In the recently published sonification handbook, Hunt and Hermann (2011) discuss multimodal and interactive aspects in combination with sonification as promising options to expand and advance the field, and point out that there is a great need for a better theoretical foundation in order to systematically integrate these aspects. The main contribution of this thesis is to address this need by providing design guidelines that are alternative and complementary to existing approaches, all of which were conceived before the recent increase in research interest in listening modes. None of the existing contributions to design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data and potentially to help identify new structures therein.
In order to structure this field the following questions are addressed in this thesis:
• How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
• What is the relationship of multimodality and interaction with listening modes and display purposes?
• How can the potential of embodied cognition based listening modes be put to use for exploratory data sonification?
• How can listening modes and display purposes be connected to questions of aesthetics in the display?
• How do data complexity and parameter-mapping sonification relate to exploratory data analysis and listening modes?
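Parameter-mapping sonification, named in the last question above, is in its simplest form a linear mapping from data values onto a sound parameter such as pitch. The sketch below is a generic illustration of that baseline technique (the frequency range is an assumption, not taken from the thesis):

```python
# Baseline parameter-mapping sonification: map each data value linearly
# onto a pitch in Hz. The 220-880 Hz range (two octaves above A3) is an
# illustrative choice, not a value from the thesis.

def map_to_pitch(data, f_lo=220.0, f_hi=880.0):
    """Map data values linearly onto a frequency range in Hz."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0          # guard against constant data
    return [f_lo + (x - lo) / span * (f_hi - f_lo) for x in data]
```

The listening-mode questions above start exactly where this baseline ends: a linear map says nothing about how the listener's mode of attention turns the resulting pitch contour into an understanding of the data.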
Overviews and their effect on interaction in the auditory interface.
Auditory overviews have the potential to improve the quality of auditory interfaces. However, in order to apply overviews well, we must understand them. Specifically, what are they and what is their impact? This thesis presents six characteristics that overviews should have. They should be a structured representation of the detailed information, define the scope of the material, guide the user, show context and patterns in the data, encourage exploration of the detail and represent the current state of the data. These characteristics are guided by a systematic review of visual overview research, analysis of established visual overviews and evaluation of how these characteristics fit current auditory overviews.

The second half of the thesis evaluates how the addition of an overview impacts user interaction. While the overviews do not improve performance, they do change the navigation patterns from one of data exploration and discovery to guided and directed information seeking. With these two contributions, we gain a better understanding of how overviews work in an auditory interface and how they might be exploited more effectively.
The body in musical performance. A theoretical model based on proprioception in the performance of acoustic instruments, hyperinstruments and alternative instruments
This work studies the role played by the body in the performance of musical instruments. To this end, it proposes a theoretical model based on three cognitive theories: the embodied mind proposed by Mark Johnson, James Gibson's ecological theory of visual perception, and the theory of sensorimotor contingencies formulated by Kevin O'Regan and Alva Noë. The model examines four levels of bodily involvement: a motor-programming level, a sound-production level, a perceptual level and a storage level. The operation of the theoretical model is illustrated through three very different musical examples (an acoustic instrumentalist, a hyperinstrumentalist and an alternative instrumentalist). The applications of this study to the fields of music pedagogy, music therapy and composition are innumerable.

Departamento de Didáctica de la Expresión Musical, Plástica y Corporal
Sound and Meaning in Auditory Data Display
Hermann T, Ritter H. Sound and Meaning in Auditory Data Display. Proceedings of the IEEE (Special Issue on Engineering and Music - Supervisory Control and Auditory Communication). 2004;92(4):730-741.

Auditory data display is an interdisciplinary field linking auditory perception research, sound engineering, data mining, and human-computer interaction in order to make semantic contents of data perceptually accessible in the form of (nonverbal) audible sound. For this goal it is important to understand the different ways in which sound can encode meaning. We discuss this issue from the perspectives of language, music, functionality, listening modes, and physics, and point out some limitations of current techniques for auditory data display, in particular when targeting high-dimensional data sets. As a promising, potentially very widely applicable approach, we discuss the method of model-based sonification (MBS) introduced recently by the authors and point out how its natural semantic grounding in the physics of a sound generation process supports the design of sonifications that are accessible even to untrained, everyday listening. We then proceed to show that MBS also facilitates the design of an intuitive, active navigation through "acoustic aspects", somewhat analogous to the use of successive two-dimensional views in three-dimensional visualization. Finally, we illustrate the concept with a first prototype of a "tangible" sonification interface which allows us to "perceptually map" sonification responses into active exploratory hand motions of a user, and give an outlook on some planned extensions.
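The defining property of model-based sonification, as described above, is that data parameterise a virtual physical model and sound is produced only when the user excites that model. The sketch below is a rough illustration of this idea and not the authors' implementation (the damped-oscillator model and all constants are assumptions): each data point becomes a damped oscillator, and "striking" the model sums their responses.

```python
import math

# Rough MBS-style sketch: each data point parameterises one damped
# oscillator; exciting the model at t = 0 yields a decaying sum of
# sinusoids. The base frequency, data-to-frequency scaling, and damping
# constant are illustrative assumptions, not values from the paper.

def excite(data, t, damping=3.0):
    """Sum the responses of one damped oscillator per data point at time t."""
    out = 0.0
    for x in data:
        freq = 200.0 + 50.0 * x          # data value -> oscillator frequency
        out += math.exp(-damping * t) * math.sin(2 * math.pi * freq * t)
    return out
```

The semantic grounding the paper argues for is visible even here: silence means "not excited", decay means the model's energy dissipating, and the spectral content is determined by the data, so everyday listening habits carry over to the display.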