5 research outputs found

    SoundTrAD, a method and tool for prototyping auditory displays: Can we apply it to an autonomous driving scenario?

    This paper presents SoundTrAD, a method and tool for designing auditory displays for the user interface. SoundTrAD brings together ideas from user interface design and soundtrack composition and supports novice auditory display designers in building an auditory user interface. The paper argues for the need for such a method before going on to describe the fundamental structure of the method and the construction of the supporting tools. The second half of the paper applies SoundTrAD to an autonomous driving scenario and demonstrates its use in prototyping auditory displays for a wide range of scenarios.

    The Development and Evaluation of an Approach to Auditory Display Design Based on Soundtrack Composition

    PhD thesis. This thesis presents the development and evaluation of a new approach (SoundTrAD) to designing auditory interfaces. The proposed approach combines practices and concepts from film soundtrack composition with established approaches to general user interface design. The synthesis of the two design approaches from different areas of design into a novel approach may be viewed as an example of conceptual integration (also known as conceptual blending). The process of developing and evaluating SoundTrAD broadly follows a methodology of Research through Design. The thesis presents four user studies as part of an iterative design and evaluation process. Each study involves a mixture of expert and novice end-users, which provides new information and identifies new questions and design issues for the subsequent studies. The first study explores how an idea from film composition (the cue sheet) can be used in auditory interface design to help designers place and organise sound elements, and to better understand auditory design spaces. In order to make this concept work in the new context, it is combined with the scenario concept from general interaction design to provide designers with reference linear sequences of events and actions. The second study used thematic analysis to investigate how information to be sonified can be characterised and analysed for features that can be mapped into sound. The study also explores the development of a timeline on which the sound design ideas from soundtrack composition for individual events can be placed and, in principle, moved in order to cater for multiple use-case scenarios. The third study presents an iteration of this, including further development of both the task analysis and the mapping technique. The study also explores the idea in principle of an interactive timeline that can be manipulated by the designer in order to re-arrange and audition sound events. The final study brings the studies together by obtaining feedback on the success of a final version of SoundTrAD. Funded by RCUK under the Digital Economy Doctoral Training Centre scheme.
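    The cue-sheet-plus-timeline idea described above can be pictured as a small data structure. The following is a purely illustrative sketch, not the thesis's actual tool; all event and sound names are invented:

```python
from dataclasses import dataclass

@dataclass
class Cue:
    time_s: float   # when the event occurs in the reference scenario
    event: str      # action, object, or event identified in the scenario
    sound: str      # sound design idea mapped to that event

def reorder(cues, new_times):
    """Move cues to new times (e.g. for a different use-case scenario)
    and return them in playback order, as an interactive timeline would."""
    moved = [Cue(new_times.get(c.event, c.time_s), c.event, c.sound)
             for c in cues]
    return sorted(moved, key=lambda c: c.time_s)

# A toy cue sheet for a driving scenario (events and sounds invented):
sheet = [Cue(0.0, "start engine", "soft chime"),
         Cue(5.0, "lane change", "panned whoosh")]
sheet = reorder(sheet, {"lane change": 1.5})  # audition an alternative order
```

    The point of the structure is only that cues carry both a scenario position and a sound idea, so re-arranging for a new scenario means moving cues rather than redesigning sounds.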

    A Real-time Auditory Biofeedback System for Sports Swimming

    Cesarini D, Hermann T, Ungerechts B. A Real-time Auditory Biofeedback System for Sports Swimming. In: Stockmann T, Metatla O, MacDonald D, eds. Proceedings of the 20th International Conference on Auditory Display (ICAD 2014), Workshop on Sonification for Sports and Performance. New York, NY, USA: International Community for Auditory Display (ICAD); 2014.
    This paper introduces a novel hardware and software system to measure, process, and sonify the instantaneous hydrodynamic pressure at any surface of the human body during sports swimming. In particular, we use four sensors attached to the palmar and dorsal sides of the hands to calculate the net pressure difference of the piezo probes, corresponding to the net energy transferred to the water by hand actions. This information corresponds to the ‘feel-for-water’ which is critical to improving the effectiveness of swimming. With our system the information is conveyed as audio by interactive sonifications over in-ear headphones, allowing a stereo spatialized sound representation of the interaction of both hands with the water. For the first time, we hereby demonstrate in-water experience of swimming actions using sonification. We focus on the system setup, present two parameter-mapping sonification designs that represent differently derived information, and illustrate the system performance with interaction videos.
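    The abstract does not specify the two parameter-mapping designs themselves. As a rough illustration of the general parameter-mapping technique, a minimal sketch (all ranges, names, and values here are assumptions) might map each hand's net pressure difference to a pitch:

```python
def map_pressure_to_pitch(p_palmar, p_dorsal,
                          f_min=220.0, f_max=880.0, p_max=500.0):
    """Map one hand's net hydrodynamic pressure difference (Pa) to a
    frequency (Hz). All ranges are illustrative assumptions."""
    net = p_palmar - p_dorsal                 # net pressure across the hand
    x = max(-1.0, min(1.0, net / p_max))      # clip and normalise to [-1, 1]
    # Linear parameter mapping: -p_max -> f_min, +p_max -> f_max
    return f_min + (x + 1.0) / 2.0 * (f_max - f_min)

# In a stereo spatialized rendering, the left hand's pitch would drive
# the left channel and the right hand's pitch the right channel.
left_hz = map_pressure_to_pitch(310.0, 60.0)    # left-hand probe pair
right_hz = map_pressure_to_pitch(120.0, 95.0)   # right-hand probe pair
```

    A second design could map the same input to a different dimension, such as loudness or pulse rate, which is what "differently derived information" in distinct mappings amounts to in practice.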

    Designing adaptive audio for autonomous driving: an industrial and academic-led design challenge

    Presented at the 25th International Conference on Auditory Display (ICAD 2019), 23-27 June 2019, Northumbria University, Newcastle upon Tyne, UK.
    The paper discusses a design challenge around the use of adaptive audio to support the experience and uptake of autonomous driving. The paper outlines a collaboration currently being established between researchers at Swansea University and a major OEM that is set to examine user-centred approaches to designing audio that enhances and enriches the human experience of driving. The paper outlines the potential collaboration and describes how we will address the challenge of designing adaptive audio for unsupervised/autonomous driving. The paper outlines the research questions we will address and how we will apply a tool/method that supports rapid prototyping for novice designers, alongside addressing ideas around aesthetics in the interface and the relationship between sound as a means of communication and sound as experience.

    The Development Of A Method For Designing Auditory Displays Based On Soundtrack Composition

    Presented at the 19th International Conference on Auditory Display (ICAD2013), July 6-9, 2013, Lodz, Poland.
    This paper details work toward the design of a method for creating auditory displays for the human-computer interface, based on soundtrack composition. We begin with the benefits of this approach before discussing methods for auditory display design and the need for a unification of different design techniques. We then outline our ongoing investigation into the tools and techniques employed within the working practices of sound designers and soundtrack composers. Following this, we report our observations of the main priorities that influence how composers create soundtracks and propose ways in which our method may support these. We argue that basing the first steps of the method on a ‘cue sheet’ could enable designers to identify actions, objects and events within an HCI scenario whilst taking into account the user and the context of use. This is followed by some initial observations from a preliminary study into whether a participant can successfully use this cue-sheet methodology. We conclude by identifying elements of the methodology that need to change: further investigation and subsequent design work is needed into ways participants can successfully comprehend and systematically use the cue sheet to identify seen and unseen events, actions and objects within the human-computer interface. Additionally, we need to investigate how best to categorize and map these elements to sound. We conclude our paper with our plans for future work.