
    Ars Informatica -- Ars Electronica: Improving Sonification Aesthetics

    In this paper we discuss æsthetic issues of sonifications. We posit that many sonifications have suffered from poor acoustic ecology, which makes listening more difficult and thereby results in poorer data extraction and inference on the part of the listener. Lessons are drawn from the electroacoustic music community as we argue that it is not instructive to distinguish between sonifications and music/sound art. Edgar Varèse defined music as organised sound, and sonifications organise sound to reflect some aspect of the thing being sonified. Therefore, we propose that sonification designers can improve the communicative ability of their auditory displays by paying attention to the æsthetic issues that are well known to composers, orchestrators, sound designers and artists, and recording engineers.

    Music and Speech in Auditory Interfaces: When is One Mode More Appropriate Than the Other?

    A number of experiments that have been carried out using non-speech auditory interfaces are reviewed, and the advantages and disadvantages of each are discussed. The possible advantages of using non-speech audio media such as music are also discussed: the richness of the representations possible, the aesthetic appeal, and the potential for such interfaces to handle abstraction and consistency across the interface.

    Music and speech in auditory interfaces: When is one mode more appropriate than another?

    Presented at the 11th International Conference on Auditory Display (ICAD 2005). A number of experiments that have been carried out using non-speech auditory interfaces are reviewed, and the advantages and disadvantages of each are discussed. The possible advantages of using non-speech audio media such as music are also discussed: the richness of the representations possible, the aesthetic appeal, and the potential for such interfaces to handle abstraction and consistency across the interface.

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntax highlighting or other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, thereby creating barriers for programmers who are blind but who nevertheless use IDEs. This dissertation is focused on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capability to convey visually-based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially where information is visually structured. However, little is known about the usability of such techniques in software development. Therefore, we investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The major contributions in this dissertation are split into two parts. The first part explains our prior work investigating the major challenges in software development faced by non-visual programmers, specifically code navigation difficulties. It also discusses areas of improvement where additional features could be developed to make the programming environment more accessible to non-visual programmers. The second part focuses on studies aimed at evaluating the usability and efficacy of audio-based techniques for conveying the structure of the programming codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could provide adequate support to non-visual programmers navigating through lengthy codebases. In Part II we discuss the methodological aspects of evaluating the above-mentioned techniques with the stakeholders and examine these techniques using an audio-based prototype designed to control audio timing, locations, and methods of interaction. A set of design guidelines is provided, based on the evaluations described previously, suggesting the inclusion of an auditory feedback system in the programming environment to improve code structure readability and understandability for non-visual programmers.

    Program Comprehension Through Sonification

    Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improves performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
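    The abstract describes the mapping only in outline: characteristics of and relationships among a Java program's classes, interfaces, and methods are rendered as non-speech sound via an Eclipse extension driving Csound. Purely as an illustration, and not the mapping actually used in the thesis, the following Python sketch invents its own parameter choices (entity kind selects a timbre, member count raises pitch and lengthens duration, visibility sets loudness) and stops at computing abstract sound parameters rather than driving a synthesis engine.

```python
from dataclasses import dataclass

# Hypothetical sketch of a structural sound mapping; NOT the mapping used in the thesis.
# Each program entity is reduced to a (timbre, pitch, duration, loudness) tuple that a
# synthesis engine such as Csound could then render.

TIMBRE_BY_KIND = {"class": "marimba", "interface": "flute", "method": "pluck"}  # invented choices

@dataclass
class Entity:
    name: str
    kind: str          # "class", "interface", or "method"
    member_count: int  # e.g. number of methods in a class or interface
    is_public: bool

def sound_parameters(entity: Entity) -> dict:
    """Map one program entity to abstract sound parameters."""
    return {
        "timbre": TIMBRE_BY_KIND[entity.kind],
        # larger entities sit higher: one semitone per member, capped at two octaves above 220 Hz
        "pitch_hz": 220.0 * 2 ** (min(entity.member_count, 24) / 12),
        "duration_s": 0.2 + 0.05 * entity.member_count,
        "loudness": 0.9 if entity.is_public else 0.5,
    }

for e in (Entity("Parser", "class", 12, True), Entity("Token", "interface", 3, False)):
    print(e.name, sound_parameters(e))
```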

    When Bugs Sing

    In The Songs of Insects, Pierce (1949) described the striped ground cricket, Nemobius fasciatus-fasciatus, which chirps at a rate proportional to ambient air temperature. Twenty chirps per second tell us it is 31.4 °C; sixteen chirps and it is 27 °C. This is a natural example of an auditory display, a mechanism for communicating data with sound. By applying auditory display techniques to computer programming we have attempted to give the bugs that live in software programs their own songs. We have developed the CAITLIN musical program auralisation system (Vickers and Alty, 2002b) to allow structured musical mappings to be made of the constructs in Pascal programs. Initial experimental evaluation [Interacting with Computers (2002a,b)] showed that subjects could interpret the musical motifs used to represent the various Pascal language constructs. In this paper we describe how the CAITLIN system was used to study the effects of musical program auralisation on debugging tasks performed by novice Pascal programmers. The results of the experiment indicate that a formal musical framework can act as a medium for communicating information about program behaviour, and that the information communicated could be used to assist with the task of locating bugs in faulty programs.
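    The cricket is itself a linear sonification: chirp rate encodes temperature, and the two readings quoted above lie on a straight line (roughly T ≈ 1.1·r + 9.4 °C), which is the relation a listener implicitly decodes. The Python sketch below, with invented function names and coefficients derived only from those two readings rather than taken from Pierce's data or the CAITLIN work, just fits and applies that line.

```python
# Illustrative only: a straight line through the two (chirp rate, temperature) readings
# quoted in the abstract; the coefficients are derived here, not taken from Pierce (1949).

def fit_line(p1, p2):
    """Slope and intercept of the line through two (x, y) points."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

def temperature_from_chirps(rate, slope, intercept):
    """Decode the 'display': chirps per second back to degrees Celsius."""
    return slope * rate + intercept

slope, intercept = fit_line((20, 31.4), (16, 27.0))            # the two readings above
print(round(slope, 2), round(intercept, 2))                    # -> 1.1 9.4
print(round(temperature_from_chirps(18, slope, intercept), 1)) # -> 29.2 (interpolated)
```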