
    Technical aspects of a demonstration tape for three-dimensional sound displays

    This document was developed to accompany an audio cassette demonstrating work on three-dimensional auditory displays at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material and covers the theoretical and technical issues of spatial auditory displays in greater depth than the cassette. The technical procedures used in producing the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.
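    The core operation of a spatialization device like the Convolvotron is convolving a monaural source with left- and right-ear head-related impulse responses (HRIRs) measured for a target direction. A minimal sketch in Python, using placeholder noise-shaped HRIRs rather than measured data:

    ```python
    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        """Convolve a monaural signal with per-ear HRIRs to place it in space."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=0)  # stereo output, shape (2, N)

    # Placeholder HRIRs (a real system uses measured responses per direction).
    rng = np.random.default_rng(0)
    hrir_l = rng.standard_normal(128) * np.exp(-np.arange(128) / 32)
    hrir_r = np.roll(hrir_l, 8)  # crude interaural time difference of 8 samples

    mono = np.sin(2 * np.pi * 440 * np.arange(1024) / 44100)
    stereo = spatialize(mono, hrir_l, hrir_r)
    ```

    A real-time implementation would run this as block-wise (partitioned) convolution and cross-fade between HRIR pairs as the source or listener moves.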

    From shape to sound: sonification of two dimensional curves by reenaction of biological movements

    Sonifying two-dimensional data is a common problem. In this study, we propose a method to synthesize sonic metaphors of two-dimensional curves based on the mental representation of the sound produced by the friction of a pencil when somebody is drawing or writing on paper. The relevance of such an approach is presented first. Second, the synthesis of friction sounds allows us to investigate the role of kinematics in the perception of the gesture underlying a sound. Third, a biological law linking the curvature of a shape to the velocity of the gesture that drew it is calibrated from the auditory point of view. This makes it possible to generate friction sounds from a given shape with a physically based synthesis model.
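    The curvature-velocity relationship referred to here is commonly modelled by the two-thirds power law of biological motion, under which tangential velocity falls as curvature rises: v = K * kappa^(-1/3). A minimal sketch, with an assumed gain K (the paper calibrates this perceptually):

    ```python
    import numpy as np

    def tangential_velocity(curvature, K=1.0, beta=1.0 / 3.0):
        """Two-thirds power law: v = K * curvature**(-beta), with beta = 1/3."""
        return K * np.power(curvature, -beta)

    # Curvature samples along a curve: tighter bends imply a slower gesture.
    kappa = np.array([0.5, 1.0, 2.0, 4.0])
    v = tangential_velocity(kappa)
    ```

    The resulting velocity profile can then drive a friction-sound synthesis model so that the sound slows down in tight curves, as a real drawing gesture would.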

    Novel interfaces for controlling sound effects and physical models


    Synthesis and control of everyday sounds reconstructing Russolo’s Intonarumori


    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it; one of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus, along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.

    Sound at the user interface



    Understanding concurrent earcons: applying auditory scene analysis principles to concurrent earcon recognition

    Two investigations into the identification of concurrently presented, structured sounds, called earcons, were carried out. The first experiment investigated how varying the number of concurrently presented earcons affected their identification. The number was found to have a significant effect on the proportion of earcons identified: reducing the number of concurrently presented earcons led to a general increase in the proportion successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques influenced by auditory scene analysis, affected earcon identification. Both giving each earcon a unique timbre and staggering their presentation with a 300 ms onset-to-onset delay between earcons significantly increased identification. Guidelines were drawn from this work to assist future interface designers when incorporating concurrently presented earcons.
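    The onset staggering described above amounts to a trivial scheduler: each concurrent earcon starts a fixed delay after the previous one. A minimal sketch (the 300 ms figure comes from the abstract; the function name is illustrative):

    ```python
    def earcon_onsets(n_earcons, onset_gap_ms=300):
        """Return start times in ms so that consecutive earcons begin
        onset_gap_ms apart rather than simultaneously."""
        return [i * onset_gap_ms for i in range(n_earcons)]

    onsets = earcon_onsets(4)  # [0, 300, 600, 900]
    ```

    The earcons still overlap in time; only their onsets are separated, which is what lets auditory scene analysis segregate them into distinct streams.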