180 research outputs found

    Interactive sonification of curve shape and curvature data

    This paper presents a number of sonification approaches that aim to communicate geometrical data, specifically curve shape and curvature information, of virtual 3-D objects. The system described here is part of a multi-modal augmented reality environment in which users interact with virtual models through the modalities of vision, hearing, and touch. An experiment designed to assess the performance of the sonification strategies is described, and the key findings are presented and discussed.
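    The paper itself details the mapping strategies; as a rough, hypothetical illustration of one such parameter-mapping approach (not the authors' system), the sketch below computes curvature along a sampled planar curve and maps it onto the pitch of a sine probe tone. All function names and mapping constants here are assumptions.

```python
import numpy as np

def curvature(x, y):
    """Unsigned curvature of a sampled planar curve (x(t), y(t))."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def sonify_curvature(x, y, sr=44100, dur=2.0, f_lo=220.0, f_hi=880.0):
    """Map curvature along the curve onto the pitch of a sine 'probe' tone
    that sweeps the curve from start to end (a simple parameter mapping)."""
    kappa = curvature(x, y)
    k = (kappa - kappa.min()) / (np.ptp(kappa) + 1e-12)       # normalise to [0, 1]
    n = int(sr * dur)
    k_t = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(k)), k)
    freq = f_lo * (f_hi / f_lo) ** k_t                        # exponential pitch mapping
    phase = 2 * np.pi * np.cumsum(freq) / sr                  # integrate instantaneous frequency
    return 0.3 * np.sin(phase)

# Example: an ellipse; curvature (and thus pitch) peaks at the ends of the major axis.
t = np.linspace(0, 2 * np.pi, 2000)
samples = sonify_curvature(3 * np.cos(t), np.sin(t))
```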

    Listening-Mode-Centered Sonification Design for Data Exploration

    Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013. From the Introduction to this thesis: Through the ever-growing amount of data and the desire to make it accessible to the user through the sense of listening, sonification, the representation of data using sound, has been a subject of active research in computer science and HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from being informative to alarming; sonification has been used to give sensory feedback in order to close the action-perception loop; and, last but not least, sonifications have also been developed for exploratory data analysis, where sound is used to represent data with unknown structures for hypothesis building.
Coming from computer science and HCI, the conceptualization of sonification has been driven mostly by application areas. On the other hand, the sonic arts, which have always contributed to the community of auditory display, have a genuine focus on sound. Despite this close interdisciplinary relation between communities of sound practitioners, a rich, sound- (or listening-)centered concept of sonification is still missing as a point of departure for design guidelines that span applications and tasks. Complementary to the useful organization along fields of application, a conceptual framework that is proper to sound needs to abstract from applications and, to some degree, from tasks, as neither is directly related to sound. In this thesis I therefore propose to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose. At the beginning of auditory display research, a continuum between a symbolic and an analogic pole was proposed by Kramer (1994a, page 21). In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative; analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, page 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative display purposes is proposed in the light of the recently increased research interest in listening modes and intentions. Similar to the terms symbolic and analogic, listening modes have been discussed in auditory display since the beginning, usually in dichotomous terms, identified either with the words listening and hearing or understood as musical listening and everyday listening as proposed by Gaver (1993a). More than 25 years earlier, Schaeffer (1966) had introduced four direct listening modes together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. In particular, the notion of reduced listening goes beyond the connotative and denotative poles of the continuum proposed by Kramer and justifies the new terms descriptive and normative.
Recently, a new taxonomy of listening modes motivated by an embodied cognition approach has been proposed by Tuuri and Eerola (2012). The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes. In the recently published sonification handbook, multimodal and interactive aspects in combination with sonification have been discussed as promising options to expand and advance the field by Hunt and Hermann (2011), who point out that a better theoretical foundation is greatly needed in order to integrate these aspects systematically. The main contribution of this thesis is to address this need by providing design guidelines that are alternative and complementary to existing approaches, all of which were conceived before the recent increase in research interest in listening modes. None of the existing design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data, potentially helping to identify new structures therein. In order to structure this field, the following questions are addressed in this thesis:
    • How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
    • What is the relationship of multimodality and interaction with listening modes and display purposes?
    • How can the potential of embodied-cognition-based listening modes be put to use for exploratory data sonification?
    • How can listening modes and display purposes be connected to questions of aesthetics in the display?
    • How do data complexity and parameter-mapping sonification relate to exploratory data analysis and listening modes?
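    The last question above concerns parameter-mapping sonification for exploratory data analysis. As a purely illustrative, hypothetical sketch (not a method from the thesis), the following maps two data dimensions onto the pitch and loudness of a sequence of short tones, the basic parameter-mapping scheme:

```python
import numpy as np

def parameter_mapping_sonification(data, sr=44100, note_dur=0.15):
    """Scan the rows of an (n_samples, 2) array, mapping column 0 to pitch
    and column 1 to loudness (a minimal parameter-mapping sonification)."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / np.where(hi - lo == 0, 1, hi - lo)   # per-column [0, 1]
    n = int(sr * note_dur)
    t = np.arange(n) / sr
    env = np.hanning(n)                                       # avoid clicks between notes
    tones = []
    for pitch, loud in norm:
        freq = 220.0 * 2 ** (2 * pitch)                       # two octaves above 220 Hz
        amp = 0.1 + 0.4 * loud
        tones.append(amp * env * np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

# Example: a noisy sine and its rectified derivative; structure in the data
# becomes audible as a pitch contour with varying loudness.
xs = np.linspace(0, 4 * np.pi, 60)
data = np.column_stack([np.sin(xs) + 0.1 * np.random.randn(60),
                        np.abs(np.cos(xs))])
audio = parameter_mapping_sonification(data)
```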

    On the Use of Sound for Representing Geometrical Information of Virtual Objects

    Presented at the 14th International Conference on Auditory Display (ICAD2008), June 24-27, 2008, Paris, France. This study is concerned with the use of sound in a multimodal interface that is currently being developed as an aid for product design. By using this interface, the designer is able to physically interact with a virtual object. The requirements of the interface include the interactive sonification of geometrical data relating to the virtual object, which are otherwise practically undetectable. We propose a classification scheme for the sound synthesis methods relevant to this application. These methods are presented in terms of the level of abstraction between the virtual object and the sound produced as a result of the user's interaction. Finally, we present an example that demonstrates the advantages of sonification for this application.

    Does embodied training improve the recognition of mid-level expressive movement qualities sonification?

    This research is part of a broader project exploring how movement qualities can be recognized by means of the auditory channel: can we perceive an expressive full-body movement quality by means of its interactive sonification? The paper presents a sonification framework and an experiment to evaluate whether embodied sonic training (i.e., experiencing interactive sonification of your own body movements) increases the recognition of such qualities through the auditory channel only, compared to a non-embodied sonic training condition. We focus on the sonification of two mid-level movement qualities: fragility and lightness. Our sonification models, described in the first part, are based on the assumption that specific compounds of spectral features of a sound can contribute to the cross-modal perception of a specific movement quality. The experiment, described in the second part, involved 40 participants divided into two groups (embodied sonic training vs. no training). Participants were asked to report the level of lightness and fragility they perceived in 20 audio stimuli generated using the proposed sonification models. Results show that (1) both expressive qualities were correctly recognized from the audio stimuli, and (2) a positive effect of embodied sonic training was observed for fragility but not for lightness. The paper concludes with a description of the artistic performance that took place in 2017 in Genoa (Italy), in which the outcomes of the presented experiment were exploited.
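    The specific spectral-feature compounds used in the paper's sonification models are not reproduced here; the sketch below is only a hypothetical illustration of the general idea, mapping a quality level in [0, 1] to a small compound of spectral parameters (brightness, level, burst density):

```python
import numpy as np

def quality_to_tone(level, quality="lightness", sr=44100, dur=1.0):
    """Rough sketch: render a movement-quality 'level' in [0, 1] as a compound
    of spectral features. The actual models from the paper are not shown here."""
    n = int(sr * dur)
    t = np.arange(n) / sr
    if quality == "lightness":
        # lighter movement -> higher spectral centroid, softer level
        f0, n_partials, amp = 200 + 600 * level, 6, 0.4 - 0.25 * level
        out = np.zeros(n)
        for k in range(1, n_partials + 1):
            out += (amp / k) * np.sin(2 * np.pi * k * f0 * t)
    else:  # "fragility"
        # more fragile movement -> denser irregular micro-bursts (crackle-like)
        impulses = (np.random.rand(n) < 0.001 + 0.004 * level).astype(float)
        env = np.convolve(impulses, np.hanning(512), mode="same")
        out = 0.3 * np.clip(env, 0, 1) * np.sin(2 * np.pi * 400.0 * t)
    return out

tone = quality_to_tone(0.8, "fragility")
```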

    Perception and performance: an evaluation of multimodal feedback for the assessment of curve shape differences

    The EU-funded SATIN project sought to provide a multimodal interface to aid product designers in judging the quality of curved shapes. This thesis outlines a research programme designed to assist in the exploration of fundamental issues related to this project and to provide a means of evaluating the success of such interfaces more generally. Three studies were therefore undertaken with the aim of exploring the value of haptic and sound feedback in the perception of curve shape differences and, through the knowledge gained, providing an evaluative framework for the assessment of such interfaces. The first study found that visual, haptic, and visual-haptic perception was insufficient to judge discontinuities in curvature without some further augmentation. This led to a second study which explored the use of sound for conveying curve shape information; it found that sine waves or harmonic sounds were most suited to this task. The third study combined visual-haptic and auditory information. It was found that sound improved the perception of curve shape differences, although this was dependent upon the type of sonification method used. Further to this, data from studies one and three were used to identify gradient as the active mechanism of curve shape differentiation and provided a model for the prediction of these differences. Similarly, performance data (response time, accuracy, and confidence) were analysed to produce a model for the prediction of user performance at varying degrees of task difficulty. The research undertaken across these studies was used to develop a framework to evaluate multimodal interfaces for curve shape exploration. In particular, a ‘discount’ psychophysical method was proposed, along with predictive tools for the creation of perceptual and performance metrics, plus guidelines to aid development. This research has added to fundamental knowledge and provided a useful framework through which future multimodal interfaces may be evaluated.
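    The thesis identifies gradient as the active cue for curve-shape differentiation; its fitted predictive models are not reproduced here. A minimal, hypothetical version of such a predictor might simply compare the gradients of two curves sampled on a common grid:

```python
import numpy as np

def gradient_difference(curve_a, curve_b, x):
    """Candidate discriminability predictor: the largest absolute difference
    in gradient (dy/dx) between two curves on a shared x grid. This is an
    illustrative stand-in, not the model fitted in the thesis."""
    ga = np.gradient(curve_a, x)
    gb = np.gradient(curve_b, x)
    return np.max(np.abs(ga - gb))

# Example: two shallow arcs that differ slightly in shape (made-up values).
x = np.linspace(-1.0, 1.0, 500)
reference = 0.20 * (1 - x**2)
comparison = 0.22 * (1 - x**2)
print(gradient_difference(reference, comparison, x))  # larger value -> easier to tell apart
```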

    Plays of proximity and distance: Gesture-based interaction and visual music

    This thesis presents the relations between gestural interfaces and artworks that deal with real-time and simultaneous performance of dynamic imagery and sound, the so-called visual music practices. These relations extend across historical, practical, and theoretical viewpoints, all of which this study aims to cover, at least partially. They are exemplified by two artistic projects developed by the author of this thesis, which serve as a starting point for analysing the issues around the two main topics. The principles, patterns, challenges, and concepts which structured the two artworks are extracted, analysed, and discussed, providing elements for comparison and evaluation that may be useful for future research on the topic.

    An empirical study of embodied music listening, and its applications in mediation technology


    Tangible auditory interfaces : combining auditory displays and tangible interfaces

    Bovermann T. Tangible auditory interfaces: combining auditory displays and tangible interfaces. Bielefeld (Germany): Bielefeld University; 2009. This thesis investigates the capabilities of interconnecting Tangible User Interfaces and Auditory Displays as Tangible Auditory Interfaces (TAIs). TAIs utilise artificial physical objects as well as soundscapes to represent digital information. The interconnection of the two fields establishes a tight coupling between information and operation that is based on the human's familiarity with the incorporated interrelations. This work gives a formal introduction to TAIs and demonstrates their key features by means of seven proof-of-concept applications.

    Re-Sonification of Objects, Events, and Environments

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties, and the ability to simulate interactive events with such objects. To create soundscapes that re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds. Dissertation/Thesis. Ph.D. Electrical Engineering 201
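    The dissertation's banded-waveguide implementations are not reproduced here. As a loosely related, hypothetical sketch of recording-based object modeling, the following renders a struck object with a plain modal model (a sum of decaying sinusoids), which banded waveguides extend with per-band delay lines to support rubbing and bowing interactions:

```python
import numpy as np

def modal_strike(freqs, decays, amps, sr=44100, dur=1.5):
    """Minimal modal model: a struck object rendered as a sum of exponentially
    decaying sinusoids, each with its own frequency, decay time, and amplitude."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, decays, amps):
        out += a * np.exp(-t / d) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))   # normalise to full scale

# Example: three modes loosely resembling a small metal bar (made-up values).
audio = modal_strike(freqs=[440.0, 1220.0, 2410.0],
                     decays=[0.8, 0.4, 0.2],
                     amps=[1.0, 0.5, 0.3])
```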