
    An evaluation of earcons for use in auditory human-computer interfaces

    An evaluation of earcons was carried out to see whether they are an effective means of communicating information in sound. An initial experiment showed that earcons were better than unstructured bursts of sound and that musical timbres were more effective than simple tones. A second experiment was then carried out which improved upon some of the weaknesses shown up in Experiment 1 to give a significant improvement in recognition. From the results of these experiments some guidelines were drawn up for use in the creation of earcons. Earcons have been shown to be an effective method for communicating information in a human-computer interface.
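The abstract above treats an earcon as a structured, musical sound. As a minimal illustrative sketch (not the authors' stimuli), an earcon can be modelled as a short rhythmic sequence of pitched tones; the sample rate, note duration, and the "file open" motif here are all assumptions for the example:

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; an assumption for this sketch

def tone(freq_hz, dur_s, amp=0.5):
    """Render one sine tone (a simple stand-in for a musical timbre)."""
    t = np.arange(int(SAMPLE_RATE * dur_s)) / SAMPLE_RATE
    return amp * np.sin(2 * np.pi * freq_hz * t)

def earcon(pitches_hz, note_dur_s=0.15):
    """An earcon as a short rhythmic sequence of pitched tones."""
    return np.concatenate([tone(f, note_dur_s) for f in pitches_hz])

# A hypothetical "file open" earcon: a rising three-note motif (A4, C#5, E5).
open_motif = earcon([440.0, 554.4, 659.3])
```

The experiments above suggest that richer, musical timbres (rather than the plain sine used here) make such motifs easier to recognise.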

    Parallel earcons: reducing the length of audio messages

    This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there were no differences in the rates of recognition between the two groups. Non-musicians also performed as well as musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.
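The core idea above is that a compound earcon's two parts can be summed into one buffer instead of concatenated, halving playback time. A minimal sketch, assuming each part is a mono sample buffer (the function names and the 0.5 mixing gain are assumptions of this example):

```python
import numpy as np

def serial_compound(part_a, part_b):
    """Serial compound earcon: one part after the other.
    Playback time is the sum of both parts."""
    return np.concatenate([part_a, part_b])

def parallel_compound(part_a, part_b):
    """Parallel compound earcon: both parts mixed together, so
    playback time equals that of the longer single part."""
    n = max(len(part_a), len(part_b))
    mixed = np.zeros(n)
    mixed[:len(part_a)] += part_a
    mixed[:len(part_b)] += part_b
    return 0.5 * mixed  # scale down to avoid clipping
```

The paper's guideline extensions (drawn from auditory stream segregation) concern how to design the two parts so that listeners can still hear them as separate streams once mixed.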

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research to show where the best places are to add sound to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets, and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added. It describes in detail usability problems with the widgets and then the non-speech sounds to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.

    Understanding concurrent earcons: applying auditory scene analysis principles to concurrent earcon recognition

    Two investigations into the identification of concurrently presented, structured sounds, called earcons, were carried out. The first experiment investigated how varying the number of concurrently presented earcons affected their identification. The number was found to have a significant effect on the proportion of earcons identified: reducing the number of concurrently presented earcons led to a general increase in the proportion successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques influenced by auditory scene analysis, affected earcon identification. Both presenting each earcon with a unique timbre and introducing a 300 ms onset-to-onset delay between earcons were found to significantly increase identification. Guidelines were drawn from this work to assist future interface designers when incorporating concurrently presented earcons.
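The 300 ms onset-to-onset delay described above can be sketched as a mixer that staggers each earcon's start time before summing them into one buffer. This is an illustrative sketch, not the authors' implementation; the sample rate and normalisation are assumptions:

```python
import numpy as np

SAMPLE_RATE = 8000  # Hz; an assumption for this sketch

def mix_with_onset_delay(earcons, onset_gap_s=0.3):
    """Mix concurrently presented earcons, staggering successive
    onsets by a fixed onset-to-onset delay (300 ms by default,
    following the guideline in the abstract above)."""
    gap = int(SAMPLE_RATE * onset_gap_s)
    total = max(i * gap + len(e) for i, e in enumerate(earcons))
    out = np.zeros(total)
    for i, e in enumerate(earcons):
        start = i * gap
        out[start:start + len(e)] += e
    return out / max(len(earcons), 1)  # crude normalisation
```

Giving each earcon a unique timbre (the other effective manipulation) would correspond here to synthesising each input buffer with a different instrument or waveform.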

    Sonically-enhanced widgets: comments on Brewster and Clarke, ICAD 1997

    This paper presents a review of the research surrounding the paper “The Design and Evaluation of a Sonically Enhanced Tool Palette” by Brewster and Clarke from ICAD 1997. A historical perspective is given, followed by a discussion of how this work has fed into current developments in the area.

    The design and evaluation of a sonically enhanced tool palette

    This paper describes an experiment to investigate the effectiveness of adding sound to tool palettes. Palettes have usability problems because users need to see the information they present, but they are often outside the area of visual focus. We used nonspeech sounds called earcons to indicate the current tool and when tool changes occurred, so that users could tell which tool was active wherever they were looking. Results showed a significant reduction in the number of tasks performed with the wrong tool. Therefore, users knew what the current tool was and did not try to perform tasks with the wrong one. These improvements did not come at the expense of making the tool palettes any more annoying to use.

    Ecological IVIS design: using EID to develop a novel in-vehicle information system

    New in-vehicle information systems (IVIS) are emerging which purport to encourage more environmentally friendly or ‘green’ driving. Meanwhile, wider concerns about road safety and in-car distractions remain. The ‘Foot-LITE’ project is an effort to balance these issues, aimed at achieving safer and greener driving through real-time driving information, presented via an in-vehicle interface which facilitates the desired behaviours while avoiding negative consequences. One way of achieving this is to use ecological interface design (EID) techniques. This article presents part of the formative human-centred design process for developing the in-car display through a series of rapid prototyping studies comparing EID against conventional interface design principles. We focus primarily on the visual display, although some development of an ecological auditory display is also presented. The results of feedback from potential users as well as subject matter experts are discussed with respect to implications for future interface design in this field.

    Efficiency of Spearcon-Enhanced Navigation of One Dimensional Electronic Menus

    This study simulated and compared cell phone contact book menu navigation using combinations of both auditory (text-to-speech and spearcons) and visual cues. A total of 127 undergraduates participated in a study that required using one of five conditions of alphabetically listed menu cues to find a target name. Participants using visual cues (either alone or combined with auditory cues) outperformed those using only auditory cues. Performance was not found to differ significantly among the three auditory-only conditions. When combined with visual cues, spearcons improved navigational efficiency more than both text-to-speech cues and menus using no sound, providing evidence for the ability of sound to enhance visual menus. Research results provide evidence applicable to efficient auditory menu creation.
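Spearcons are brief speech-based cues produced by time-compressing a text-to-speech rendering of a menu item. Real spearcon generation uses pitch-preserving compression; the naive decimation below is only an illustrative sketch of shortening a speech buffer, with the function name and compression factor as assumptions of this example:

```python
import numpy as np

def compress(speech, factor=0.4):
    """Shorten a speech buffer to `factor` of its original length by
    keeping every (1/factor)-th sample. NOTE: plain decimation shifts
    pitch; actual spearcons use pitch-preserving time compression."""
    idx = np.arange(0, len(speech), 1.0 / factor).astype(int)
    return speech[idx]
```

A menu entry such as "Walker, Bruce" would thus become a cue lasting well under half a second, short enough to play for each item as a listener scrolls.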

    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it. One of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results from the experiment showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus, along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.

    Sonically enhanced interface toolkit

    This paper describes an on-going research project investigating the design of a user-interface toolkit composed of sonically enhanced widgets. The motivation for this work is the same as that behind graphical interface toolkits: to simplify construction, allowing designers who are not experts to create such interfaces; to ensure the sonically enhanced widgets are effective and improve usability; and to ensure the widgets use sound in a clear and consistent way across the interface.