23 research outputs found

    Listening-Mode-Centered Sonification Design for Data Exploration

    Grond F. Listening-Mode-Centered Sonification Design for Data Exploration. Bielefeld: Bielefeld University; 2013. From the Introduction to this thesis:

    Driven by the ever-growing amount of data and the desire to make it accessible to the user through the sense of listening, sonification, the representation of data by sound, has been a subject of active research in computer science and HCI for the last 20 years. During this time, the field of sonification has diversified into different application areas: today, sound in auditory display informs the user about states and actions on the desktop and in mobile devices; sonification has been applied in monitoring applications, where sound can range from informative to alarming; sonification has been used to give sensory feedback in order to close the action and perception loop; and, last but not least, sonifications have been developed for exploratory data analysis, where sound represents data with unknown structures for hypothesis building.

    Coming from computer science and HCI, the conceptualization of sonification has been driven mostly by application areas. The sonic arts, on the other hand, which have always contributed to the auditory display community, have a genuine focus on sound. Despite this close interdisciplinary relation between communities of sound practitioners, a rich, sound- (or listening-)centered concept of sonification is still missing as a point of departure for design guidelines that reach across applications and tasks. Complementary to the useful organization along fields of application, a conceptual framework proper to sound needs to abstract from applications and, to some degree, from tasks, as neither is directly related to sound. I hence propose in this thesis to conceptualize sonifications along two poles, where sound serves either a normative or a descriptive purpose.

    At the beginning of auditory display research, a continuum between a symbolic and an analogic pole was proposed by Kramer (1994a, page 21). In this continuum, symbolic stands for sounds that coincide with existing schemas and are more denotative; analogic stands for sounds that are informative through their connotative aspects (compare Worrall (2009, page 315)). The notions of symbolic and analogic illustrate the struggle to find apt descriptions of how the intention of the listener subjects audible phenomena to a process of meaning making and interpretation. Complementing the analogic-symbolic continuum with descriptive and normative display purposes is proposed in the light of the recently increased research interest in listening modes and intentions.

    Like the terms symbolic and analogic, listening modes have been discussed in auditory display since the beginning, usually in dichotomous terms, identified either with the words listening and hearing or with musical listening and everyday listening as proposed by Gaver (1993a). More than 25 years earlier, Schaeffer (1966) had introduced four direct listening modes together with a fifth, synthetic mode of reduced listening, which leads to the well-known sound object. Interestingly, Schaeffer's listening modes remained largely unnoticed by the auditory display community. The notion of reduced listening in particular goes beyond the connotative and denotative poles of Kramer's continuum and justifies the new terms descriptive and normative.
    Recently, a new taxonomy of listening modes motivated by an embodied cognition approach has been proposed by Tuuri and Eerola (2012). The main contribution of their taxonomy is that it convincingly diversifies the connotative and denotative aspects of listening modes. In the recently published sonification handbook, Hunt and Hermann (2011) discuss multimodal and interactive aspects in combination with sonification as promising options to expand and advance the field, and point out a pressing need for a better theoretical foundation in order to integrate these aspects systematically. The main contribution of this thesis is to address this need by providing design guidelines that are alternative and complementary to existing approaches, all of which were conceived before the recent rise of research interest in listening modes. None of the existing design frameworks integrates multimodality and listening modes with a focus on exploratory data analysis, where sonification is conceived to support the understanding of complex data and potentially to help identify new structures therein. In order to structure this field, the following questions are addressed in this thesis:

    • How do natural listening modes and reduced listening relate to the proposed normative and descriptive display purposes?
    • What is the relationship of multimodality and interaction with listening modes and display purposes?
    • How can the potential of embodied-cognition-based listening modes be put to use for exploratory data sonification?
    • How can listening modes and display purposes be connected to questions of aesthetics in the display?
    • How do data complexity and parameter-mapping sonification relate to exploratory data analysis and listening modes?
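    The parameter-mapping sonification named in the last question is the standard technique for exploratory data sonification: data dimensions are mapped onto acoustic parameters such as pitch, loudness, or duration. The following is a minimal, illustrative sketch of the idea in Python, not code from the thesis; the mapping ranges are arbitrary assumptions.

```python
# Minimal parameter-mapping sonification: each data value controls the pitch
# of a short sine tone, so the shape of the series becomes audible as a melody.
# Illustrative sketch only; mapping ranges are assumed, not taken from the thesis.
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def sonify(data, low_hz=220.0, high_hz=880.0, tone_s=0.15, path="sonification.wav"):
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    samples = []
    for value in data:
        # The parameter mapping: data range -> frequency range, linearly.
        freq = low_hz + (value - lo) / span * (high_hz - low_hz)
        n = int(SR * tone_s)
        for i in range(n):
            env = min(i, n - i) / (n / 2)  # triangular envelope to avoid clicks
            samples.append(int(32767 * 0.5 * env * math.sin(2 * math.pi * freq * i / SR)))
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

sonify([3, 1, 4, 1, 5, 9, 2, 6])  # an arbitrary data series
```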

    INTERACTIVE SONIFICATION STRATEGIES FOR THE MOTION AND EMOTION OF DANCE PERFORMANCES

    The Immersive Interactive SOnification Platform (iISoP) is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP's dancer sonification system is to "sonify the motion and emotion" of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mapping to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians contributed in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
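    The abstract does not specify the framework's mappings; the sketch below only illustrates the general shape of a two-layer motion-and-emotion-to-music translation of the kind described, with all feature names, ranges, and musical choices being hypothetical.

```python
# Hypothetical two-layer mapping: motion features drive note-level parameters,
# while an emotion estimate (valence/arousal) sets the musical context.
# Illustrative sketch only; this is not iISoP's actual framework or API.
from dataclasses import dataclass

@dataclass
class Frame:
    speed: float    # normalized limb speed, 0..1 (assumed feature)
    height: float   # normalized hand height, 0..1 (assumed feature)
    valence: float  # estimated pleasantness, -1..1
    arousal: float  # estimated energy, 0..1

def map_frame(f: Frame) -> dict:
    # Emotion layer: valence selects a major- or minor-flavored pentatonic scale.
    scale = [0, 2, 4, 7, 9] if f.valence >= 0 else [0, 2, 3, 7, 8]
    # Motion layer: hand height picks the scale degree, speed drives loudness.
    degree = min(int(f.height * len(scale)), len(scale) - 1)
    return {
        "midi_note": 60 + scale[degree],
        "velocity": int(40 + 87 * f.speed),     # faster motion -> louder (40..127)
        "tempo_bpm": 70 + int(90 * f.arousal),  # higher arousal -> faster tempo
    }

print(map_frame(Frame(speed=0.8, height=0.6, valence=-0.4, arousal=0.9)))
```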

    Enaction and Visual Arts : Towards Dynamic Instrumental Visual Arts

    This theoretical paper presents how the concept of Enaction, centered on the action and interaction paradigm, coupled with the new properties of contemporary computer tools, can provoke deep changes in the arts. It examines how this concept accompanies historical trends in the Musical, Visual and Choreographic Arts, and enumerates the correlated fundamental questions, scientific as well as artistic, that the author identifies. It then focuses on Dynamic Visual Arts, trying to elicit the revolution brought about by these deep conceptual and technological changes. It argues that contemporary conditions shift the art of visual motion from a ''Kinema'' to a ''Dyname'', allowing artists ''to play images'' as one ''plays the violin'', and that this shift could not have appeared before our era. It illustrates these new historical possibilities with examples drawn from the scientific and artistic work of the author and her co-workers. In conclusion, it suggests that this shift could open the door to a genuine new connection between arts that were believed to cooperate but remained separate through the ages: music, dance and animation. This possible new ALLIANCE could lead society to consider a new type of art, which we propose to call ''Dynamic Instrumental Arts'', and which will be truly multisensorial: simultaneously Musical, Gestural and Visual.

    Back To The Cross-Modal Object: A Look Back At Early Audiovisual Performance Through The Lens Of Objecthood

    This paper looks at two early digital audiovisual performance works, the solo work Overbow and the group Sensors_Sonics_Sights (S.S.S), and describes the compositional and performance strategies behind each. We draw upon the concept of audiovisual objecthood proposed by Kubovy and Schutz to think about the different ways in which linkages between vision and audition can be established, and how audiovisual objects can be composed from the specific attributes of auditory and visual perception. The model is used as a means to analyze these live audiovisual works, which are performed using sensor-based instruments. The fact that gesture is not only a visual component in these performances but also the common source articulating sound and visual output extends the classical two-way audiovisual object into a three-way relationship between gesture, sound, and image, fulfilling a potential of cross-modal objects.

    Amplifying Actions - Towards Enactive Sound Design

    Recently, artists and designers have begun to use digital technologies to stimulate bodily interaction, while scientists keep revealing new findings about sensorimotor contingencies, changing the way in which we understand human knowledge. However, implicit knowledge generated in artistic projects can be difficult to transfer, and scientific research frequently remains isolated behind discipline-specific languages and methodologies. By mutually enriching holistic creative approaches and highly specific scientific ways of working, this doctoral dissertation aims to lay the foundation for Enactive Sound Design. It focuses on sound that engages sensorimotor experience, an aspect neglected within existing design practices. The premise is that such a foundation is best developed if grounded in transdisciplinary methods that bring together scientific and design approaches.

    The methodology adopted to achieve this goal is practice-based and supported by theoretical research and project analysis. Three different methodologies were formulated and evaluated during this doctoral study, based on a convergence of existing methods from design, psychology and human-computer interaction. First, a basic design approach was used to engage in a reflective creation process and to extend existing work on interaction gestalt through hands-on activities. Second, psychophysical experiments were carried out and adapted to suit the needed shift from reception-based tests to performance-based quantitative evaluation. Last, a set of participatory workshops was developed and conducted, within which enactive sound exercises were iteratively tested through direct and participatory observation, questionnaires and interviews.

    The foundation for Enactive Sound Design developed in this dissertation includes novel methods generated by extensive exploration of the fertile ground between basic design education, psychophysical experiments and participatory design. The basic design approach was further developed by combining creative practices with traditional task analysis. The results were a number of abstract sonic artefacts, conceptualized as experimental apparatuses that allow psychologists to study enactive sound experience. Furthermore, a collaboration between designers and scientists on a psychophysical study produced a new methodology for the evaluation of sensorimotor performance with tangible sound interfaces. These performance experiments revealed that sonic feedback can support enactive learning. Finally, the participatory workshops resulted in a number of novel methods focused on a holistic perspective fostered through the subjective experience of self-produced sound, and indicated the influence that such an approach may have on both artists and scientists in the future.

    The role of the designer, as a scientific collaborator within psychological research and as a facilitator of participatory workshops, has been evaluated. This dissertation thus recommends a number of collaborative methods and strategies that can help designers understand and reflectively create enactive sound objects. It is hoped that the examples of successful collaborations between designers and scientists presented in this thesis will encourage further projects and connections between disciplines, with the final goal of creating a more engaging and more aware sonic future.

    The composer as technologist : an investigation into compositional process

    This work presents an investigation into compositional process. It is undertaken through a study of musical gesture, certain areas of cognitive musicology, computer vision technologies and object-oriented programming, which together provide the basis for the composer (the author) to assume the role of a technologist and acquire the knowledge and skills to that end. In particular, it focuses on the development of a video gesture recognition heuristic and its application to the compositional problems posed. The result is the creation of an interactive musical work, with score, for violin and electronics that supports the research findings. In addition, it details the investigative approach to developing technology that solves musical problems while exploring practical compositional and aesthetic challenges.
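    The abstract does not detail the recognition heuristic itself. As a hedged illustration of how a simple video gesture detector can trigger musical events, the sketch below uses frame differencing with OpenCV; the threshold values and the event hook are assumptions, not the thesis's method.

```python
# Minimal video-gesture trigger: frame differencing over a camera stream.
# When the fraction of changed pixels crosses a threshold, a "gesture onset"
# event fires, which a composition engine could map to a musical action.
# Illustrative sketch only; not the heuristic developed in the thesis.
import cv2

MOTION_THRESHOLD = 0.02  # fraction of pixels that must change (assumed value)

def run(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    if not ok:
        return
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        motion = cv2.countNonZero(mask) / mask.size
        if motion > MOTION_THRESHOLD:
            print("gesture onset, energy =", round(motion, 3))  # hook for a musical event
        prev = gray
    cap.release()

if __name__ == "__main__":
    run()
```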

    Here We Don't Speak, Here We Whistle. Mobilizing A Cultural Reading of Cognition, Sound and Ecology in the Design of a Language Support System for the Silbo Gomero.

    This thesis presents a study of the whistled form of language known as the Silbo Gomero (island of La Gomera, Canarian Archipelago). After fifty years of near-total extinction, this form of communication has been revived, shifting from the fields where it was once used by peasant islanders into the space of the classroom. Here it is integrated into the curriculum of the island's schools, providing children with a rich cultural platform that instigates linguistic and auditory experimentation. In response to this transformation, the need to develop didactic materials presents itself as one of the main challenges encountered by the community. Taking this condition as the driver of its research, this body of work draws on phonological, bioacoustic and cognitive theories to develop a formal understanding of the Silbo Gomero that aims to complement the whistlers' own experience and mastery of the language, while also developing an ethnographic reading of this indigenous body of knowledge and its characteristic auditory perceptual ecology. The investigation has culminated in the design of a digital application, El Laberinto del Sonido, and its active use within the educational community of the island. Finally, emphasising the practice-based nature of the research, this thesis attempts to relocate the question of intangible heritage from a focus on cultural safeguarding and transmission to one of experimentation, where an indigenous body of knowledge not only provides new exploratory paradigms for the design of didactic materials but also contributes to the sustainability of culturally situated forms of apprenticeship within contemporary educational contexts.

    A Multidimensional Sketching Interface for Visual Interaction with Corpus-Based Concatenative Sound Synthesis

    The present research investigates correspondences between auditory and visual feature dimensions and uses this knowledge to inform the design of audio-visual mappings for visual control of sound synthesis. The first stage of the research involved the design and implementation of Morpheme, a novel interface for interaction with corpus-based concatenative synthesis. Morpheme uses sketching as a model for interaction between the user and the computer. The purpose of the system is to facilitate the expression of sound design ideas by describing the qualities of the sound to be synthesised in visual terms, using a set of perceptually meaningful audio-visual feature associations. The second stage of the research involved the preparation of two multidimensional mappings for the association between auditory and visual dimensions. The third stage involved the evaluation of the audio-visual (A/V) mappings and of Morpheme's user interface, comprising two controlled experiments, an online study and a user study.

    Our findings suggest that the strength of the perceived correspondence between the A/V associations prevails over the timbre characteristics of the sounds used to render the complementary polar features. Hence, the empirical evidence gathered by previous research is generalizable and applicable to different contexts, and the overall dimensionality of the sound used for rendering should not have a significant effect on the comprehensibility and usability of an A/V mapping. However, the findings also show a non-linear interaction between the harmonicity of the corpus and the perceived correspondence of the audio-visual associations. For example, strongly correlated cross-modal cues such as size-loudness or vertical position-pitch are less affected by the harmonicity of the audio corpus than more weakly correlated dimensions (e.g. texture granularity-sound dissonance). No significant differences were found as a result of musical/audio training. The third study evaluated Morpheme's user interface: participants were asked to use the system to design a sound for a given piece of video footage, and the usability of the system was found to be satisfactory.

    An interface for drawing visual queries was developed for high-level control of the retrieval and signal-processing algorithms of concatenative sound synthesis. This thesis elaborates on previous research findings and proposes two methods for empirically driven validation of audio-visual mappings for sound synthesis. These methods could be applied to a wide range of contexts to inform the design of cognitively useful multimodal interfaces and the representation and rendering of multimodal data. Moreover, this research contributes to the broader understanding of multimodal perception by gathering empirical evidence about correspondences between auditory and visual feature dimensions and by investigating which factors affect the perceived congruency between aural and visual structures.
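    To make the kind of mapping under evaluation concrete, the sketch below converts the points of a drawn stroke into target synthesis parameters using two of the cross-modal cues named above (vertical position to pitch, brush size to loudness). It is an illustrative reduction under assumed ranges, not Morpheme's implementation.

```python
# Cross-modal sketch-to-sound mapping: each point of a drawn stroke becomes a
# target for the synthesis engine. Higher on the canvas -> higher pitch;
# bigger brush -> louder. Assumed canvas size and frequency range; not
# Morpheme's code.
def stroke_to_targets(points, canvas_h=600, min_hz=110.0, max_hz=1760.0):
    """points: list of (x, y, brush_size) with y = 0 at the top of the canvas."""
    targets = []
    for x, y, size in points:
        elevation = 1.0 - y / canvas_h                   # higher on screen -> larger value
        freq = min_hz * (max_hz / min_hz) ** elevation   # exponential pitch mapping
        gain = min(size / 50.0, 1.0)                     # bigger brush -> louder, clipped at 1
        targets.append({"x": x, "freq_hz": round(freq, 1), "gain": round(gain, 2)})
    return targets

print(stroke_to_targets([(10, 500, 12), (60, 300, 25), (120, 80, 40)]))
```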
