
    Listening-Mode-Centered Sonification Design for Data Exploration

    Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013. From the Introduction to this thesis: Through the ever-growing amount of data and the desire to make them accessible to the user through the sense of listening, sonification, the representation of data using sound, has been a subject of active research in computer science and HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from informative to alarming; sonification has been used to give sensory feedback in order to close the action-perception loop; and, last but not least, sonifications have been developed for exploratory data analysis, where sound is used to represent data with unknown structures for hypothesis building.

    Coming from computer science and HCI, the conceptualization of sonification has been driven mostly by application areas. The sonic arts, on the other hand, which have always contributed to the auditory display community, have a genuine focus on sound. Despite this close interdisciplinary relation between communities of sound practitioners, a rich, sound- (or listening-)centered concept of sonification is still missing as a point of departure for design guidelines that span applications and tasks. Complementary to the useful organization along fields of application, a conceptual framework proper to sound needs to abstract from applications, and to some degree from tasks, as neither is directly related to sound. I hence propose in this thesis to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose.

    At the beginning of auditory display research, a continuum between a symbolic and an analogic pole was proposed by Kramer (1994a, page 21). In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative; analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, page 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative display purposes is proposed in light of the recently increased research interest in listening modes and intentions.

    Like the terms symbolic and analogic, listening modes have been discussed in auditory display since the beginning, usually in dichotomous terms, identified either with the words listening and hearing or understood as musical listening and everyday listening, as proposed by Gaver (1993a). More than 25 years earlier, four direct listening modes had been introduced by Schaeffer (1966), together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. The notion of reduced listening in particular goes beyond the connotative and denotative poles of the continuum proposed by Kramer and justifies the new terms descriptive and normative.
    Recently, a new taxonomy of listening modes was proposed by Tuuri and Eerola (2012), motivated by an embodied cognition approach. The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes. In the recently published sonification handbook, multimodal and interactive aspects in combination with sonification have been discussed as promising options to expand and advance the field by Hunt and Hermann (2011), who point out that a better theoretical foundation is strongly needed in order to integrate these aspects systematically. The main contribution of this thesis is to address this need by providing design guidelines that are alternative and complementary to existing approaches, all of which were conceived before the recent rise of research interest in listening modes. None of the existing contributions to design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data, potentially helping to identify new structures therein. In order to structure this field, the following questions are addressed in this thesis:
    • How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
    • What is the relationship of multimodality and interaction with listening modes and display purposes?
    • How can the potential of embodied-cognition-based listening modes be put to use for exploratory data sonification?
    • How can listening modes and display purposes be connected to questions of aesthetics in the display?
    • How do data complexity and parameter-mapping sonification relate to exploratory data analysis and listening modes?

    Agent-Based Graphic Sound Synthesis and Acousmatic Composition

    For almost a century, composers and engineers have been attempting to create systems that allow drawings and imagery to behave as intuitive and efficient musical scores. Despite the intuitive interactions that these systems afford, they are somewhat underutilised by contemporary composers. The research presented here explores the concept of agency and artificial ecosystems as a means of creating and exploring new graphic sound synthesis algorithms. These algorithms are in turn designed to investigate the creation of organic musical gesture and texture using granular synthesis. The output of this investigation consists of an original software artefact, The Agent Tool, alongside a suite of acousmatic musical works that the former was designed to facilitate. When designing new musical systems for creative exploration with vast parametric controls, careful constraints should be put in place to encourage focused development. In this instance, an evolutionary computing model is utilised as part of an iterative development cycle. Each iteration of the system's development coincides with a composition presented in this portfolio. The features developed as part of this process subsequently serve the author's compositional practice and inspiration. As the software package is designed to be flexible and open-ended, each composition represents a refinement of features and controls for the creation of musical gesture and texture. This document discusses the creative inspirations behind each composition alongside the features and agents that were created. This research is contextualised through a review of established literature on graphic sound synthesis, evolutionary musical computing, and ecosystemic approaches to sound synthesis and control.
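
    As a rough illustration of the approach this abstract describes, the sketch below (a toy model, not The Agent Tool itself) lets a small agent population drift through a unit square while each agent's position drives the pitch and amplitude of synthesised grains. Every name, ecosystem rule, and mapping here is an assumption made for illustration.

```python
# Toy agent ecosystem driving granular synthesis (illustrative assumptions
# throughout; this is not The Agent Tool's actual algorithm).
import numpy as np
from scipy.io import wavfile

SR = 44100
DUR = 4.0
rng = np.random.default_rng(1)

n_agents = 8
pos = rng.random((n_agents, 2))             # agent positions in a unit square
vel = rng.normal(0.0, 0.02, (n_agents, 2))  # slow random drift

out = np.zeros(int(SR * DUR))
grain_len = int(0.08 * SR)
t = np.arange(grain_len) / SR
env = np.hanning(grain_len)                 # smooth grain envelope

for step in range(200):
    pos += vel
    vel[(pos < 0) | (pos > 1)] *= -1        # reflect agents at the walls
    pos = np.clip(pos, 0.0, 1.0)
    onset = int(step / 200 * (len(out) - grain_len))
    for x, y in pos:
        freq = 110.0 * 2 ** (3 * x)         # x position -> grain pitch
        amp = 0.05 + 0.10 * y               # y position -> grain amplitude
        out[onset:onset + grain_len] += amp * env * np.sin(2 * np.pi * freq * t)

out = out / np.abs(out).max()
wavfile.write("agents.wav", SR, (out * 32767).astype(np.int16))
```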

    INTERACTIVE SONIFICATION STRATEGIES FOR THE MOTION AND EMOTION OF DANCE PERFORMANCES

    The Immersive Interactive SOnification Platform, or iISoP for short, is a research platform for the creation of novel multimedia art, as well as exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians contributed in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
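
    The abstract does not spell out its mappings, so the following minimal Python sketch shows one plausible shape such a mapping layer could take: motion and emotion features in, musical control parameters out. The feature names, ranges, and rules (valence to mode, arousal to tempo, speed to register and density) are assumptions for illustration, not the iISoP's actual design.

```python
# One hypothetical mapping layer from motion/emotion features to musical
# parameters; names, ranges, and rules are assumptions, not iISoP's design.
from dataclasses import dataclass

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of a major scale
MINOR = [0, 2, 3, 5, 7, 8, 10]   # natural minor

@dataclass
class Frame:
    speed: float    # mean marker speed, normalised to 0..1
    arousal: float  # estimated arousal, 0..1
    valence: float  # estimated valence, 0 (negative) to 1 (positive)

def frame_to_music(f: Frame) -> dict:
    """Translate one motion/emotion frame into musical control parameters."""
    scale = MAJOR if f.valence >= 0.5 else MINOR   # valence -> mode
    return {
        "tempo_bpm": 60 + 120 * f.arousal,         # arousal -> tempo
        "scale": scale,
        "base_midi_note": 48 + int(24 * f.speed),  # speed -> register
        "notes_per_beat": 1 + int(7 * f.speed),    # speed -> density
    }

print(frame_to_music(Frame(speed=0.8, arousal=0.9, valence=0.3)))
```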

    A method for developing an improved mapping model for data sonification

    Presented at the 17th International Conference on Auditory Display (ICAD2011), 20-23 June 2011, Budapest, Hungary. The unreliable detection of information in sonifications of multivariate data that employ parameter mapping is generally thought to result from the co-dependency of psychoacoustic dimensions. The method described here aims to discover whether the perceptual accuracy of such information can be improved by rendering the sonification of the data with a mapping model influenced by the gestural metrics of performing musicians playing notated versions of the data. Conceptually, the Gesture-Encoded Sound Model (GESM) is a means of transducing multivariate datasets to sound synthesis and control parameters in such a way as to make the information in those datasets available to general listeners in a more perceptually coherent and stable way than is currently the case. The approach renders a datastream to sound using not only observable quantities (inverse transforms of known psychoacoustic principles) but also the latent variables of a Dynamic Bayesian Network trained with gestures from the physical body movements of performing musicians and hypotheses concerning other observable quantities of their coincident acoustic spectra. If successful, such a model should significantly broaden the applicability of data sonification as a perceptualisation technique.
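
    For context, here is a hedged sketch of the conventional parameter-mapping baseline the GESM is meant to improve on: data values mapped linearly on the mel scale (a standard psychoacoustic transform) so that equal data steps give roughly equal pitch steps. The GESM's gesture-trained Dynamic Bayesian Network layer is not reproduced here, and the 200-2000 Hz range is an assumption.

```python
# Baseline parameter mapping through a psychoacoustic (mel) transform;
# the frequency range and function names are illustrative assumptions.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def map_to_pitch(data, f_lo=200.0, f_hi=2000.0):
    """Map normalised data linearly on the mel scale, then convert to Hz."""
    span = data.max() - data.min()
    d = (data - data.min()) / (span if span else 1.0)
    m_lo, m_hi = hz_to_mel(f_lo), hz_to_mel(f_hi)
    return mel_to_hz(m_lo + d * (m_hi - m_lo))

data = np.array([0.1, 0.4, 0.2, 0.9, 0.7])
print(map_to_pitch(data))   # one frequency per datum, perceptually spaced
```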

    The Effects of Design on Performance for Data-based and Task-based Sonification Designs: Evaluation of a Task-based Approach to Sonification Design for Surface Electromyography

    The goal of this work was to evaluate a task-analysis-based approach to sonification design for surface electromyography (sEMG) data. A sonification is a type of auditory display that uses sound to convey information about data to a listener. Sonifications work by mapping changes in a parameter of sound (e.g., pitch) to changes in data values, and they have been shown to be useful in biofeedback and movement analysis applications. However, research that investigates and evaluates sonifications has been difficult due to the highly interdisciplinary nature of the field. Progress has been made, but to date many sonification designs have not been empirically evaluated, and sonifications have been described as annoying, confusing, or fatiguing. Sonification design decisions have also often been based on characteristics of the data being sonified rather than on the listener’s data analysis task. The hypothesis of this thesis was that focusing on the listener’s task when designing sonifications could result in sonifications that are more readily understood and less annoying to listen to. Task analysis methods have been developed in fields like Human Factors and Human-Computer Interaction, and their purpose is to break tasks down into their most basic elements so that products and software can be developed to meet user needs. Applying this approach to sonification design, a type of task analysis focused on Goals, Operators, Methods, and Selection rules (GOMS) was used to analyze two sEMG data evaluation tasks and to identify the design criteria a sonification would need to meet to allow a listener to perform them; two sonification designs were then created to facilitate these tasks. These two Task-based sonification designs were empirically compared to two Data-based sonification designs. The Task-based designs resulted in better listener performance on both sEMG data evaluation tasks, demonstrating the effectiveness of the Task-based approach and suggesting that sonification designers may benefit from adopting it.
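
    To make the mapping idea concrete, here is a minimal sketch of a Data-based mapping in the sense used above: an sEMG amplitude envelope (rectify, then smooth) driving pitch. It is illustrative only; the synthetic signal, window size, and 200-800 Hz range are assumptions, not the thesis's actual designs.

```python
# A Data-based mapping sketch: sEMG amplitude envelope -> pitch.
# The synthetic signal, window size, and pitch range are assumptions.
import numpy as np

SR = 1000                                    # assumed sEMG sample rate, Hz
t = np.arange(0, 2.0, 1.0 / SR)
emg = np.random.default_rng(0).normal(0, 1, t.size) * (0.2 + 0.8 * (t > 1.0))

def envelope(x, win=100):
    """Rectify, then smooth with a moving average of `win` samples."""
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

env = envelope(emg)
env = env / env.max()                        # normalise to 0..1
pitch_hz = 200.0 + 600.0 * env               # amplitude -> pitch, 200-800 Hz
print(pitch_hz[::250])                       # coarse view of the trajectory
```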

    Extended Abstracts

    Presented at the 21st International Conference on Auditory Display (ICAD2015), July 6-10, 2015, Graz, Styria, Austria. The compiled collection of extended abstracts included in the ICAD 2015 Proceedings. Extended abstracts include, but are not limited to, late-breaking results, works in early stages of progress, novel methodologies, unique or controversial theoretical positions, and discussions of unsuccessful research or null findings.
    • Mark Ballora. “Two examples of sonification for viewer engagement: Hurricanes and squirrel hibernation cycles”
    • Stephen Barrass. “Diagnostic Singing Bowls”
    • Natasha Barrett, Kristian Nymoen. “Investigations in coarticulated performance gestures using interactive parameter-mapping 3D sonification”
    • Lapo Boschi, Arthur Paté, Benjamin Holtzman, Jean-Loïc le Carrou. “Can auditory display help us categorize seismic signals?”
    • Cédric Camier, François-Xavier Féron, Julien Boissinot, Catherine Guastavino. “Tracking moving sounds: Perception of spatial figures”
    • Coralie Diatkine, Stéphanie Bertet, Miguel Ortiz. “Towards the holistic spatialization of multiple sound sources in 3D, implementation using ambisonics to binaural technique”
    • S. Maryam FakhrHosseini, Paul Kirby, Myounghoon Jeon. “Regulating Drivers’ Aggressiveness by Sonifying Emotional Data”
    • Wolfgang Hauer, Katharina Vogt. “Sonification of a streaming-server logfile”
    • Thomas Hermann, Tobias Hildebrandt, Patrick Langeslag, Stefanie Rinderle-Ma. “Optimizing aesthetics and precision in sonification for peripheral process-monitoring”
    • Minna Huotilainen, Matti Gröhn, Iikka Yli-Kyyny, Jussi Virkkala, Tiina Paunio. “Sleep Enhancement by Sound Stimulation”
    • Steven Landry, Jayde Croschere, Myounghoon Jeon. “Subjective Assessment of In-Vehicle Auditory Warnings for Rail Grade Crossings”
    • Rick McIlraith, Paul Walton, Jude Brereton. “The Spatialised Sonification of Drug-Enzyme Interactions”
    • George Mihalas, Minodora Andor, Sorin Paralescu, Anca Tudor, Adrian Neagu, Lucian Popescu, Antoanela Naaji. “Adding Sound to Medical Data Representation”
    • Rainer Mittmannsgruber, Katharina Vogt. “Auditory assistance for timing presentations”
    • Joseph W. Newbold, Andy Hunt, Jude Brereton. “Chemical Spectral Analysis through Sonification”
    • S. Camille Peres, Daniel Verona, Paul Ritchey. “The Effects of Various Parameter Combinations in Parameter-Mapping Sonifications: A Pilot Study”
    • Eva Sjuve. “Metopia: Experiencing Complex Environmental Data Through Sound”
    • Benjamin Stahl, Katharina Vogt. “The Effect of Audiovisual Congruency on Short-Term Memory of Serial Spatial Stimuli: A Pilot Test”
    • David Worrall. “Realtime sonification and visualisation of network metadata (The NetSon Project)”
    • Bernhard Zeller, Katharina Vogt. “Auditory graph evolution by the example of spurious correlations”

    Exploring the utility of giving robots auditory perspective-taking abilities

    Presented at the 12th International Conference on Auditory Display (ICAD), London, UK, June 20-23, 2006. This paper reports on work in progress to develop a computational auditory perspective-taking system for a robot. Auditory perspective taking is construed as the ability to reason about inferred or posited factors that affect an addressee's perspective as a listener, for the purpose of presenting auditory information in an appropriate and effective manner. High-level aspects of this aural interaction skill are discussed, and a prototype adaptive auditory display, implemented in the context of a robotic information kiosk, is described and critiqued. Additionally, a sketch of the design and goals of a user study planned for later this year is given. A demonstration of the prototype system will accompany the presentation of this research in the poster session.
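
    As a toy illustration of what an adaptive auditory display might compute (the paper's actual rules are not reproduced here), the sketch below picks an output level from an estimated listener distance and ambient noise level so that speech reaches the addressee at a target signal-to-noise ratio. The free-field attenuation model and all numbers are assumptions.

```python
# Toy rule for an adaptive auditory display: choose a source level so the
# listener receives speech at a target SNR. All numbers are assumptions.
import math

def output_level_db(distance_m: float, noise_db: float,
                    target_snr_db: float = 15.0) -> float:
    """Level at the source, assuming ~6 dB free-field loss per doubling
    of distance beyond 1 m."""
    attenuation = 20.0 * math.log10(max(distance_m, 0.1) / 1.0)
    return noise_db + target_snr_db + attenuation

# A listener 2 m away in 55 dB ambient noise needs roughly 76 dB at source.
print(output_level_db(distance_m=2.0, noise_db=55.0))
```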

    The Sound of the Hallmarks of Cancer

    The objective of this research is to create a mixed portfolio of data-driven composition and performance interfaces, fixed Electroacoustic/Computer music compositions, and live-improvised musical and audiovisual works reflecting cancer as a disease. The main methodology for generating the raw sonic material is the sonification of high-throughput protein/RNA fold-change data derived from the biomolecular research of cancer cells. These data, and relevant insight into the field, are obtained as part of a collaboration with Barts Cancer Institute in London, UK. Furthermore, for the purpose of musical effectiveness and reaching wider audiences, a focus has been placed on balancing the use of data-driven sonic material with composer-driven musical choices by drawing upon the narrative of the Hallmarks of Cancer (Hanahan and Weinberg, 2011), a widely accepted conceptual framework in cancer research for understanding the various biomolecular processes responsible for causing cancer. This narrative is adopted in order to inspire musical form and to guide some of the syntactic and aesthetic choices within both fixed and improvised works. In addition, this research reflects upon the use of data sonification as an artistic tool and practice, while also addressing the contradictions and contention that arise from scientific aims and expectations regarding sonification, resulting in a proposed original model for framing and classifying artistic works that incorporate this approach.
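
    As an illustration of the kind of mapping such a portfolio might start from (the actual compositional mappings are the author's and are not reproduced here), this sketch converts hypothetical log2 fold-changes into pitches around a reference note, with up-regulation rising and down-regulation falling. The scaling factor and all data values are assumptions.

```python
# Sketch: log2 fold-change -> semitone offset around a reference note.
# Fold-change values and the 4-semitones-per-doubling scaling are assumptions.
import numpy as np

fold_changes = np.array([4.0, 0.5, 1.0, 8.0, 0.25])   # hypothetical data
REF_MIDI = 60                                          # middle C as reference

semitones = 4.0 * np.log2(fold_changes)    # up-regulated -> higher pitch
midi_notes = REF_MIDI + semitones
freqs = 440.0 * 2.0 ** ((midi_notes - 69) / 12.0)      # MIDI -> Hz

for fc, f in zip(fold_changes, freqs):
    print(f"fold-change {fc:5.2f} -> {f:7.1f} Hz")
```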