
    Listening-Mode-Centered Sonification Design for Data Exploration

    Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013. From the Introduction to this thesis: Through the ever-growing amount of data and the desire to make it accessible to the user through the sense of listening, sonification, the representation of data using sound, has been a subject of active research in computer science and the field of HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from informative to alarming; sonification has been used to give sensory feedback in order to close the action-perception loop; last but not least, sonifications have also been developed for exploratory data analysis, where sound is used to represent data with unknown structures for hypothesis building. Coming from computer science and HCI, the conceptualization of sonification has been driven mostly by application areas. The sonic arts, on the other hand, which have always contributed to the auditory display community, have a genuine focus on sound. Despite this close interdisciplinary relation among communities of sound practitioners, a rich, sound- (or listening-)centered concept of sonification is still missing as a point of departure for design guidelines that span applications and tasks. Complementary to the useful organization along fields of application, a conceptual framework proper to sound needs to abstract from applications, and to some degree from tasks, as neither is directly related to sound. I hence propose in this thesis to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose.
At the beginning of auditory display research, Kramer (1994a, p. 21) proposed a continuum between a symbolic and an analogic pole. In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative, while analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, p. 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative display purposes is proposed in light of the recently increased research interest in listening modes and intentions. Like the terms symbolic and analogic, listening modes have been discussed in auditory display since the beginning, usually in dichotomous terms, identified either with the words listening and hearing or understood as musical listening and everyday listening, as proposed by Gaver (1993a). More than 25 years earlier, Schaeffer (1966) had introduced four direct listening modes together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. The notion of reduced listening in particular goes beyond the connotative and denotative poles of Kramer's continuum and justifies the new terms descriptive and normative. Recently, Tuuri and Eerola (2012) proposed a new taxonomy of listening modes motivated by an embodied-cognition approach. The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes.
In the recently published sonification handbook, Hunt and Hermann (2011) discuss multimodal and interactive aspects in combination with sonification as promising options to expand and advance the field, pointing out a great need for a better theoretical foundation in order to integrate these aspects systematically. The main contribution of this thesis is to address this need by providing design guidelines that are alternative and complementary to existing approaches, all of which were conceived before the recent increase in research interest in listening modes. None of the existing contributions to design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data, potentially helping to identify new structures therein. In order to structure this field, the following questions are addressed in this thesis:

• How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
• What is the relationship of multimodality and interaction with listening modes and display purposes?
• How can the potential of embodied-cognition-based listening modes be put to use for exploratory data sonification?
• How can listening modes and display purposes be connected to questions of aesthetics in the display?
• How do data complexity and parameter-mapping sonification relate to exploratory data analysis and listening modes?
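Parameter-mapping sonification, named in the last question above, can be illustrated with a minimal sketch: each data value controls the pitch of a short tone. The normalisation, frequency range, and function names here are illustrative assumptions, not the thesis's design.

```python
import math

def pmap_sonify(values, sr=8000, lo_hz=220.0, hi_hz=880.0, note_s=0.2):
    """Parameter-mapping sonification sketch: each datum becomes a short
    tone whose pitch encodes the datum's value.

    Values are min-max normalised into [0, 1] and mapped onto an
    exponential frequency scale between lo_hz and hi_hz, since pitch is
    perceived roughly logarithmically.
    """
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    samples = []
    for v in values:
        x = (v - vmin) / span                 # normalise to [0, 1]
        f = lo_hz * (hi_hz / lo_hz) ** x      # exponential pitch mapping
        n = int(sr * note_s)
        samples.extend(math.sin(2 * math.pi * f * t / sr) for t in range(n))
    return samples

tones = pmap_sonify([1.0, 5.0, 3.0])          # low, high, middle pitch
```

Writing `tones` to a PCM file or audio device would render the data as an ascending-then-descending three-note figure.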

    Ritualistic approach to sonic interaction design: A poetic framework for participatory sonification

    Presented at the 27th International Conference on Auditory Display (ICAD 2022), 24-27 June 2022, virtual conference. While sonification is often adopted as an analytical tool for understanding data, it can also be an efficient basis for constructing an interaction model for an aesthetic sound piece. Mirroring the performative arts, a ritualistic approach to participatory sonification can take place whenever a work relies more on the outer form of the piece than on the meaning attributed to the information communicated in order to connect with an audience, losing some degree of readability and intelligibility in the process while maintaining a reliable data-to-sound relationship. We present a few examples that anticipate or expand the use of sonification as an analytical tool in order to propose aesthetic approaches, accessing more complex layers of meaning in interactive design. By proposing topological, semantic, and technical perspectives, we demonstrate the functional aspects of the multimedia artwork “The Only Object They Could Retrieve From Earth’s Lost Civilisation” (“The Only Object” from now on). Outcomes are considered under the multidisciplinary framework proposed here, concluding with the possibilities and implications of a ritualistic approach to interaction design.

    Sound and Meaning in Auditory Display

    Hermann T. Sound and Meaning in Auditory Display. In: Proceedings of the International Workshop on Supervision and Control in Engineering and Music. Kassel; 2001. This paper focuses on the connection between human listening and data mining. The goal in the research field of data mining is to find patterns and to detect hidden regularities in data. Often, high-dimensional datasets are given which are not easily understood from pure inspection of the table of numbers representing the data. There are two ways to approach the data-mining problem: one is to implement perceptual capabilities in artificial systems, which is the approach of machine learning. The other is to make use of the human brain, which is actually the most brilliant data-mining system we know. In connection with our sensory system, we are able to recognize and distinguish patterns, and this capability is usually exploited when data is presented in the form of a visualization. However, we also have extremely highly developed pattern-recognition capabilities in the auditory domain, and the field of sonification addresses this modality by rendering auditory representations of data for the joint purposes of deepening insight into given data and facilitating the monitoring of complex processes. An unanswered question is how high-dimensional data could or should sound. This paper looks at the relation between sound and meaning in the real world and transfers some findings to the sonification domain. The result is the technique of Model-Based Sonification, which allows the development of sonifications that can easily be interpreted by the listener.
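The flavour of a sound model can be conveyed with a toy sketch in the spirit of Model-Based Sonification: data points become damped oscillators, and "striking" the model sums their decaying responses. This is an illustrative simplification under assumed mappings, not Hermann's actual formulation.

```python
import math

def excite(data, sr=8000, dur_s=0.5, damping=6.0):
    """Toy sound model: each data point is a damped sine oscillator whose
    frequency depends on its value; exciting the model sums the decaying
    responses. (Illustrative only -- not Hermann's exact formulation.)
    """
    n = int(sr * dur_s)
    out = [0.0] * n
    for v in data:
        f = 200.0 + 50.0 * v                       # toy value-to-frequency mapping
        for t in range(n):
            env = math.exp(-damping * t / sr)      # exponential amplitude decay
            out[t] += env * math.sin(2 * math.pi * f * t / sr)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]                 # normalise to [-1, 1]

sig = excite([1.0, 2.0, 3.5])
```

The key property, as in Model-Based Sonification generally, is that sound only arises from interaction with the model, and the data shape the model's acoustic response rather than being mapped directly to sound parameters.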

    Understanding concurrent earcons: applying auditory scene analysis principles to concurrent earcon recognition

    Two investigations into the identification of concurrently presented, structured sounds called earcons were carried out. The first experiment investigated how varying the number of concurrently presented earcons affected their identification. Varying the number had a significant effect on the proportion of earcons identified: reducing the number of concurrently presented earcons led to a general increase in the proportion successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques influenced by auditory scene analysis, affected earcon identification. Both modifying the earcons so that each was presented with a unique timbre and altering their presentation so that there was a 300 ms onset-to-onset delay between each earcon significantly increased identification. Guidelines were drawn from this work to assist future interface designers in incorporating concurrently presented earcons.

    Investigating Perceptual Congruence Between Data and Display Dimensions in Sonification

    The relationships between sounds and their perceived meanings and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and cause confusion. To investigate this issue, we conducted a magnitude-estimation experiment to map how roughness, noise, and pitch relate to the perceived magnitude of stress, error, and danger. These parameters were chosen on the basis of previous findings suggesting perceptual congruency between these auditory sensations and conceptual variables. Results show that polarity and scaling preferences depend on the data:sound mapping. This work provides polarity and scaling values that may be used directly by sonification designers to improve auditory displays in areas such as accessible and mobile computing, process monitoring, and biofeedback.
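How a designer might apply such polarity and scaling values can be sketched as a power-law mapping from a conceptual magnitude to an acoustic parameter. The function name and the coefficients in the example are hypothetical, not the fitted values from this experiment.

```python
def map_magnitude(x, polarity=+1, exponent=1.0):
    """Map a normalised conceptual magnitude x in [0, 1] to a normalised
    acoustic parameter using an experiment-derived polarity and a
    power-law scaling exponent (magnitude-estimation style).

    polarity=+1: larger data value -> larger acoustic value.
    polarity=-1: larger data value -> smaller acoustic value.
    """
    y = x ** exponent          # power-law scaling
    return y if polarity > 0 else 1.0 - y

# Hypothetical example: map "danger" to roughness with positive polarity
# and a compressive exponent (coefficients are illustrative).
roughness = map_magnitude(0.25, polarity=+1, exponent=0.5)
```

A sonification designer would substitute the polarity and exponent reported for each data:sound pair (e.g. stress-to-roughness) in place of these placeholder values.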

    CAITLIN: A Musical Program Auralisation Tool to Assist Novice Programmers with Debugging

    Early experiments have suggested that program auralisation can convey information about program structure [5]. Languages like Pascal contain classes of constructs that are similar in nature, allowing a hierarchical classification of their features. This taxonomy can be reflected in the design of the musical signatures used within the CAITLIN program auralisation system. Experiments using these hierarchical leitmotifs should (see note in the EXPERIMENT section) indicate that their similarities can be put to good use in communicating information about program structure and state.
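The idea of hierarchical musical signatures can be sketched as a lookup in which each construct family shares a base motif and each specific construct varies it. The motifs (given here as MIDI note numbers) and transpositions are invented for illustration; they are not CAITLIN's actual themes.

```python
# Base motifs per construct family (MIDI note numbers; illustrative only).
BASE_MOTIFS = {
    "selection": [60, 64, 67],       # IF / CASE family
    "iteration": [60, 62, 64, 65],   # FOR / WHILE / REPEAT family
}

# Each construct names its family and a transposition marking the variant.
VARIANTS = {
    "IF":     ("selection", 0),
    "CASE":   ("selection", 5),
    "FOR":    ("iteration", 0),
    "WHILE":  ("iteration", 5),
    "REPEAT": ("iteration", 7),
}

def signature(construct):
    """Return the leitmotif for a construct: the family's base motif,
    transposed to identify the specific construct within the family."""
    family, transpose = VARIANTS[construct]
    return [note + transpose for note in BASE_MOTIFS[family]]
```

Because constructs in one family share the same interval pattern, a listener can recognise the family from the motif's shape and the specific construct from its transposition, mirroring the hierarchical classification described above.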

    Sonification, Musification, and Synthesis of Absolute Program Music

    Presented at the 22nd International Conference on Auditory Display (ICAD-2016). When understood as a communication system, a musical work can be interpreted as data existing within three domains. In this interpretation, an absolute domain is interposed as a communication channel between two programmatic domains that act respectively as source and receiver. As a source, a programmatic domain creates, evolves, organizes, and represents a musical work. When acting as a receiver, it reconstitutes acoustic signals into a unique auditory experience. The absolute domain transmits physical vibrations ranging from the stochastic structures of noise to the periodic waveforms of organized sound. Analysis of acoustic signals suggests that recognition as a musical work requires signal periodicity to exceed some minimum. A methodological framework that satisfies recent definitions of sonification is outlined. This framework is proposed to extend to musification through the incorporation of data features that represent more traditional elements of a musical work, such as melody, harmony, and rhythm.

    Testing spatial aspects of auditory salience

    Auditory salience describes the extent to which sounds attract the listener’s attention. So far, no published studies have tested whether the location of a sound relative to the listener influences its salience. In fact, few experiments in general test auditory attention in a fully spatialised setting, with sounds in front of and behind the listener. We modified two experimental methods from the literature so that they can be used to test spatial salience: one based on oddball detection and artificially created sounds, the other based on self-reported attention tracking in a more ecologically valid scenario. Each method has its advantages, and each presents different challenges. However, both seem to indicate that high-frequency sounds arriving from behind are slightly less salient. We believe this result could likely be explained by loudness differences.