
    Taxonomy and Definitions for Sonification and Auditory Display

    Hermann T. Taxonomy and Definitions for Sonification and Auditory Display. In: Susini P, Warusfel O, eds. Proceedings of the 14th International Conference on Auditory Display (ICAD 2008). Paris, France: IRCAM; 2008. Sonification is still a young research field, and many terms such as sonification, auditory display, auralization, and audification have been used without a precise definition. Recent developments, such as the introduction of Model-Based Sonification, the establishment of interactive sonification, and the increased interest in sonification from the arts, have raised the issue of revisiting the definitions towards a clearer terminology. This paper introduces a new definition for sonification and auditory display that emphasizes necessary and sufficient conditions for organized sound to be called sonification. It furthermore suggests a taxonomy and discusses the relation between visualization and sonification. A hierarchy of closed-loop interactions is also introduced. This paper aims at initiating vivid discussions towards establishing a deeper theory of sonification and auditory display.

    Recommendations to Develop, Distribute, and Market Sonification Apps

    Presented at the 27th International Conference on Auditory Display (ICAD 2022), 24-27 June 2022, virtual conference. After decades of research, sonification is still rarely adopted in consumer electronics, software, and user interfaces. Outside the science and arts scenes, the term sonification seems not well known to the public. As a means of science communication, and in order to make software developers, producers of consumer electronics, and end users aware of sonification, we developed, distributed, and promoted Tiltification. This smartphone app uses sonification to inform users about the tilt angle of their phone, so that they can use it as a torpedo level. In this paper we report on our app development, distribution, and promotion strategies, and reflect on their success in making the app in particular, and sonification in general, better known to the public. Finally, we give recommendations on how to develop, distribute, and market sonification apps. This article is dedicated to research institutions without commercial interests.
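    The tilt-to-sound interaction described above is essentially a parameter mapping from one angle to a small set of audio parameters. The sketch below is a hypothetical illustration of such a mapping in Python, not the Tiltification app's actual design (which the abstract does not detail): deviation from level drives both the pitch and the repetition rate of a beep, and the result is written to a WAV file.

```python
# Hypothetical tilt-angle sonification sketch (not Tiltification's actual mapping):
# a larger deviation from level produces a higher-pitched, faster-repeating beep.
import math
import wave
import numpy as np

SAMPLE_RATE = 44100

def tilt_to_beeps(tilt_deg: float, duration_s: float = 2.0) -> np.ndarray:
    """Map a tilt angle (0 degrees = level) to a train of sine beeps."""
    tilt = min(abs(tilt_deg), 45.0) / 45.0            # normalise to 0..1
    freq = 440.0 + 660.0 * tilt                       # pitch rises with tilt
    rate = 2.0 + 8.0 * tilt                           # beep rate rises with tilt
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    tone = np.sin(2 * math.pi * freq * t)
    gate = (np.sin(2 * math.pi * rate * t) > 0).astype(float)   # on/off beep pattern
    return (0.3 * tone * gate).astype(np.float32)

signal = tilt_to_beeps(tilt_deg=12.5)
with wave.open("tilt_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```

    In an actual app, the same mapping would be fed by live accelerometer readings and rendered by a real-time audio engine rather than written to a file.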

    Does embodied training improve the recognition of mid-level expressive movement qualities sonification?

    This research is part of a broader project exploring how movement qualities can be recognized by means of the auditory channel: can we perceive an expressive full-body movement quality by means of its interactive sonification? The paper presents a sonification framework and an experiment to evaluate whether embodied sonic training (i.e., experiencing interactive sonification of your own body movements) increases the recognition of such qualities through the auditory channel only, compared to a non-embodied sonic training condition. We focus on the sonification of two mid-level movement qualities: fragility and lightness. We base our sonification models, described in the first part, on the assumption that specific combinations of spectral features of a sound can contribute to the cross-modal perception of a specific movement quality. The experiment, described in the second part, involved 40 participants divided into two groups (embodied sonic training vs. no training). Participants were asked to report the level of lightness and fragility they perceived in 20 audio stimuli generated using the proposed sonification models. Results show that (1) both expressive qualities were correctly recognized from the audio stimuli, and (2) a positive effect of embodied sonic training was observed for fragility but not for lightness. The paper concludes with a description of the artistic performance that took place in 2017 in Genoa (Italy), in which the outcomes of the presented experiment were exploited.
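    The assumption stated above, that particular combinations of spectral features can evoke a movement quality, can be illustrated with a small synthesis sketch. The presets below are assumptions for illustration only, not the authors' sonification models: "fragility" is rendered as sparse noise grains and "lightness" as quiet, high-centroid partials, with one parameter of each left open so that movement data could drive it.

```python
# Illustrative presets for two movement qualities (assumed, not the paper's models):
# fragility -> sparse, windowed noise grains; lightness -> soft high-frequency partials.
import numpy as np

SR = 44100

def fragility(duration_s: float = 2.0, density: float = 8.0) -> np.ndarray:
    """Sparse noise bursts; 'density' (grains per second) could follow movement data."""
    n = int(duration_s * SR)
    out = np.zeros(n)
    rng = np.random.default_rng(0)
    for start in rng.integers(0, n - 2000, int(density * duration_s)):
        grain = rng.normal(0.0, 1.0, 2000) * np.hanning(2000)   # short windowed noise grain
        out[start:start + 2000] += 0.2 * grain
    return out

def lightness(duration_s: float = 2.0, brightness: float = 0.7) -> np.ndarray:
    """Quiet upper partials; 'brightness' shifts the spectral centroid upwards."""
    t = np.arange(int(duration_s * SR)) / SR
    base = 880.0 + 2000.0 * brightness
    partials = sum(np.sin(2 * np.pi * base * k * t) / k for k in (1, 2, 3))
    return 0.1 * partials * np.hanning(t.size)
```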

    Amplifying Actions - Towards Enactive Sound Design

    Recently, artists and designers have begun to use digital technologies in order to stimulate bodily interaction, while scientists keep revealing new findings about sensorimotor contingencies, changing the way in which we understand human knowledge. However, implicit knowledge generated in artistic projects can be difficult to transfer, and scientific research frequently remains isolated due to discipline-specific languages and methodologies. By mutually enriching holistic creative approaches and highly specific scientific ways of working, this doctoral dissertation aims to set the foundation for Enactive Sound Design. It focuses on sound that engages sensorimotor experience, an area that has been neglected within existing design practices. The premise is that such a foundation can best be developed if grounded in transdisciplinary methods that bring together scientific and design approaches. The methodology adopted to achieve this goal is practice-based and supported by theoretical research and project analysis. Three different methodologies were formulated and evaluated during this doctoral study, based on a convergence of existing methods from design, psychology, and human-computer interaction. First, a basic design approach was used to engage in a reflective creation process and to extend the existing work on interaction gestalt through hands-on activities. Second, psychophysical experiments were carried out and adapted to suit the needed shift from reception-based tests to a performance-based quantitative evaluation. Last, a set of participatory workshops was developed and conducted, within which the enactive sound exercises were iteratively tested through direct and participatory observation, questionnaires, and interviews. The foundation for Enactive Sound Design developed in this dissertation includes novel methods generated by extensive explorations into the fertile ground between basic design education, psychophysical experiments, and participatory design. Combining creative practices with traditional task analysis further developed this basic design approach. The results were a number of abstract sonic artefacts conceptualised as experimental apparatuses that can allow psychologists to study enactive sound experience. Furthermore, a collaboration between designers and scientists on a psychophysical study produced a new methodology for the evaluation of sensorimotor performance with tangible sound interfaces. These performance experiments have revealed that sonic feedback can support enactive learning. Finally, participatory workshops resulted in a number of novel methods focused on a holistic perspective fostered through the subjective experience of self-producing sound. They indicated the influence that such an approach may have on both artists and scientists in the future. The role of the designer, as a scientific collaborator within psychological research and as a facilitator of participatory workshops, has been evaluated. Thus, this dissertation recommends a number of collaborative methods and strategies that can help designers to understand and reflectively create enactive sound objects. It is hoped that the examples of successful collaborations between designers and scientists presented in this thesis will encourage further projects and connections between different disciplines, with the final goal of creating a more engaging and more aware sonic future. European Commission 6th Framework and European Science Foundation (COST Action

    Enhancing the Quality and Motivation of Physical Exercise Using Real-Time Sonification

    This research project investigated the use of real-time sonification as a way to improve the quality and motivation of biceps curl exercise among healthy young participants. A sonification system was developed featuring an electromyography (EMG) sensor and a Microsoft Kinect camera. During exercise, muscular and kinematic data were collected and sent to custom-designed sonification software developed in Max to generate real-time auditory feedback. The software provides four types of output sound in consideration of personal preference and long-term use. Three experiments were carried out. The pilot study examined the sonification system and gathered the users' comments about their experience of each type of sound in relation to its functionality and aesthetics. A 3-session between-subjects test and an 8-session within-subjects comparative test were conducted to compare exercise quality and motivation between two conditions: with and without the real-time sonification. Overall, several conclusions are drawn based on the experimental results: the sonification improved participants' pace of biceps curl significantly; no significant effect was found on vertical movement range; participants expended more effort in training with the presence of sonification; and analysis of surveys indicated a higher motivation and willingness when exercising with the sonification. The results reflect a wider potential for applications including general fitness, physiotherapy, and elite sports training.
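    As a rough illustration of the kind of mapping such a system performs (the study's actual feedback was generated by custom Max software, and its specific mappings, ranges, and sound designs are not given in the abstract), the sketch below maps one EMG/kinematics sample to synthesiser control values: elbow angle drives pitch and muscle activation drives loudness.

```python
# Rough illustration of an EMG/kinematics-to-sound mapping (assumed for illustration;
# the study's feedback was generated by custom Max software with its own mappings).
from dataclasses import dataclass

@dataclass
class SoundControl:
    frequency_hz: float   # pitch follows elbow angle
    amplitude: float      # loudness follows muscle activation

def map_sample(emg_rms: float, elbow_angle_deg: float) -> SoundControl:
    """Map one sensor sample to synthesiser control values (all ranges assumed)."""
    angle = max(0.0, min(elbow_angle_deg, 150.0)) / 150.0   # 0 = arm extended, 1 = fully flexed
    activation = max(0.0, min(emg_rms, 1.0))                # assume EMG RMS is pre-normalised
    return SoundControl(frequency_hz=220.0 + 440.0 * angle,
                        amplitude=0.1 + 0.9 * activation)

# Example: a mid-curl sample with moderate muscle activation.
print(map_sample(emg_rms=0.45, elbow_angle_deg=90.0))
```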

    INTERACTIVE SONIFICATION STRATEGIES FOR THE MOTION AND EMOTION OF DANCE PERFORMANCES

    The Immersive Interactive SOnification Platform, or iISoP for short, is a research platform for the creation of novel multimedia art, as well as exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP's dancer sonification system is to "sonify the motion and emotion" of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians provided help in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. For phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
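    To make the idea of an added emotional mapping layer concrete, the toy sketch below maps a normalised motion-speed feature together with an arousal/valence estimate to coarse musical parameters. The feature names, ranges, and mappings are assumptions for illustration; they are not the iISoP framework described in the dissertation.

```python
# Toy "emotion layer" on top of motion sonification (assumed mapping, not the
# iISoP framework): motion speed and affect estimates set tempo, mode and register.
def musical_parameters(speed: float, arousal: float, valence: float) -> dict:
    """speed and arousal in 0..1, valence in -1..1."""
    return {
        "tempo_bpm": 60 + 100 * max(speed, arousal),     # faster motion or higher arousal -> faster music
        "mode": "major" if valence >= 0 else "minor",    # positive affect -> major mode
        "register": "high" if arousal > 0.5 else "low",  # high arousal -> higher register
    }

# Example: an energetic but slightly negative passage of movement.
print(musical_parameters(speed=0.8, arousal=0.7, valence=-0.3))
```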

    The Sound of the hallmarks of cancer

    The objective of this research is to create a mixed portfolio of data-driven composition and performance interfaces, fixed Electroacoustic/Computer music compositions, and live-improvised musical and audiovisual works reflecting cancer as a disease. The main methodology in generating the raw sonic material is the sonification of high-throughput protein/RNA fold-change data derived from the biomolecular research of cancer cells. These data and relevant insight into the field were obtained as part of a collaboration with Barts Cancer Institute in London, UK. Furthermore, for the purpose of musical effectiveness and reaching wider audiences, a focus has been placed on balancing the use of data-driven sonic material with composer-driven musical choices, by drawing upon the narrative of the Hallmarks of Cancer (Hanahan and Weinberg, 2011), a widely accepted conceptual framework in the field of cancer research for understanding the various biomolecular processes responsible for causing cancer. This method is adopted in order to inspire musical form and to guide some of the syntactic and aesthetic choices within both fixed and improvised works. In addition, this research also reflects upon the use of data sonification as an artistic tool and practice, while addressing the contradictions and contention that arise from scientific aims and expectations regarding sonification, resulting in a proposed original model for framing and classifying artistic works incorporating this approach.
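    A common way to turn fold-change data into raw sonic material is a simple parameter mapping from (log) fold change to pitch. The sketch below is a generic, assumed example of that idea, not the compositional method used in the portfolio: each doubling of expression raises the pitch by a fixed number of semitones around a reference tone, and each halving lowers it.

```python
# Generic parameter-mapping sketch for fold-change data (assumed example, not the
# portfolio's method): log2 fold change is mapped to pitch around a 440 Hz reference.
import math

def fold_change_to_pitch(fold_change: float, reference_hz: float = 440.0,
                         semitones_per_doubling: float = 4.0) -> float:
    """A 2x up-regulation rises by 'semitones_per_doubling'; a 0.5x halving falls by the same amount."""
    return reference_hz * 2 ** (semitones_per_doubling * math.log2(fold_change) / 12.0)

for fc in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"fold change {fc:5.2f} -> {fold_change_to_pitch(fc):7.1f} Hz")
```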

    The effects of concurrent visual versus verbal feedback on swimming strength task execution

    Background: The aim was to compare the effects of two different types of concurrent feedback administration on biomechanical performance during a swimming-specific task. Material and methods: A counterbalanced repeated measures design was used to compare the execution of the butterfly stroke (the propulsion phase only) on a modified Smith machine. Twenty repetitions were performed in each feedback condition (visual vs. verbal). Fourteen college swimmers (age x̄ = 22.21 ±1.85 years, height x̄ = 173.71 ±8.65 cm, mass x̄ = 71.32 ±10.64 kg) were recruited. An incremental force test was administered to each participant to determine the mean propulsive velocity at which maximal power was produced. Feedback addressed the correct execution velocity of the pulling movement corresponding to the maximal power production determined in the incremental force test. Results: T-tests revealed no statistically significant differences between the verbal and visual feedback conditions. Visual feedback elicited a correct response in 76.11% of total feedback compared with 72.06% in the verbal feedback condition. Conclusions: Considering total feedback response, the visual feedback condition elicited 4.05% more correct responses than verbal feedback. However, this difference did not attain statistical significance and, therefore, the underlying hypothesis could not be confirmed. CTS-527: Actividad física y deportiva en el medio acuátic

    Listening-Mode-Centered Sonification Design for Data Exploration

    Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013. From the Introduction to this thesis: Through the ever-growing amount of data and the desire to make them accessible to the user through the sense of listening, sonification, the representation of data using sound, has been a subject of active research in computer science and the field of HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from being informative to alarming; sonification has been used to give sensory feedback in order to close the action and perception loop; last but not least, sonifications have also been developed for exploratory data analysis, where sound is used to represent data with unknown structures for hypothesis building. Coming from computer science and HCI, the conceptualization of sonification has been mostly driven by application areas. On the other hand, the sonic arts, which have always contributed to the community of auditory display, have a genuine focus on sound. Despite this close interdisciplinary relation of communities of sound practitioners, a rich and sound- (or listening-) centered concept of sonification is still missing as a point of departure for a more application- and task-overarching approach towards design guidelines. Complementary to the useful organization along fields of application, a conceptual framework that is proper to sound needs to abstract from applications and also, to some degree, from tasks, as both are not directly related to sound. I hence propose in this thesis to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose. In the beginning of auditory display research, a continuum between a symbolic and an analogic pole was proposed by Kramer (1994a, page 21). In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative; analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, page 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative purposes of displays is proposed in the light of the recently increased research interest in listening modes and intentions. Similar to the terms symbolic and analogic, listening modes have been discussed in auditory display since the beginning, usually in dichotomic terms, which were either identified with the words listening and hearing or understood as musical listening and everyday listening as proposed by Gaver (1993a). More than 25 years earlier, four direct listening modes had been introduced by Schaeffer (1966), together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. In particular, the notion of reduced listening goes beyond the connotative and denotative poles of the continuum proposed by Kramer and justifies the new terms descriptive and normative.
Recently, a new taxonomy of listening modes has been proposed by Tuuri and Eerola (2012) that is motivated through an embodied cognition approach. The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes. In the recently published Sonification Handbook, multimodal and interactive aspects in combination with sonification have been discussed as promising options to expand and advance the field by Hunt and Hermann (2011), who point out that there is a big need for a better theoretical foundation in order to systematically integrate these aspects. The main contribution of this thesis is to address this need by providing alternative and complementary design guidelines with respect to existing approaches, all of which were conceived before the recently increased research interest in listening modes. None of the existing contributions to design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data, potentially helping to identify new structures therein. In order to structure this field, the following questions are addressed in this thesis:
• How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
• What is the relationship of multimodality and interaction with listening modes and display purposes?
• How can the potential of embodied-cognition-based listening modes be put to use for exploratory data sonification?
• How can listening modes and display purposes be connected to questions of aesthetics in the display?
• How do data complexity and Parameter-mapping sonification relate to exploratory data analysis and listening modes?
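    The last question concerns Parameter-mapping sonification, in which data dimensions are mapped onto parameters of a sound synthesis process. The sketch below is a minimal, generic example of that technique (an assumption for illustration, not a design taken from the thesis): each row of a small data table becomes a short tone whose pitch, duration, and loudness are driven by three data dimensions.

```python
# Minimal, generic parameter-mapping sonification sketch (assumed example, not the
# thesis' design): each data row becomes one tone; three dimensions drive
# pitch, duration and loudness.
import numpy as np

SR = 44100

def pmson(rows: np.ndarray) -> np.ndarray:
    """rows: array of shape (n, 3), values normalised to 0..1; returns an audio signal."""
    chunks = []
    for x, y, z in rows:
        freq = 200.0 + 1800.0 * x                   # dimension 1 -> pitch
        dur = 0.05 + 0.25 * y                       # dimension 2 -> duration
        amp = 0.1 + 0.8 * z                         # dimension 3 -> loudness
        t = np.arange(int(dur * SR)) / SR
        chunks.append(amp * np.sin(2 * np.pi * freq * t) * np.hanning(t.size))
    return np.concatenate(chunks)

# Example: sonify ten random three-dimensional data points.
signal = pmson(np.random.default_rng(1).random((10, 3)))
```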