122 research outputs found

    The design and evaluation of an auditory-enhanced scrollbar

    A structured method is described for the analysis of interactions to identify situations where hidden information may exist and where non-speech sound might be used to overcome the associated problems. Interactions are considered in terms of events, status and modes to find any hidden information. This is then categorised in terms of the feedback needed to present it. An auditory-enhanced scrollbar, based on the method described, was then experimentally tested. Timing and error rates were used along with subjective measures of workload. Results from the experiment show a significant reduction in time to complete one task, a decrease in the mental effort required and an overall preference for the auditory-enhanced scrollbar.

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research to show where the best places are to add sound to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added. It describes in detail usability problems with the widgets and then the non-speech sounds to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.
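The idea of attaching non-speech sound to a widget's events can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the class and event names (`Earcon`, `SonicButton`, `on_press`, `on_slip_off`) are invented for the example, and the audio device is stood in for by a list.

```python
# Hypothetical sketch of a sonically-enhanced widget: the widget's events
# are mapped to short structured sounds (earcons), so an interface
# designer gets consistent audio feedback without designing sounds.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Earcon:
    """A structured non-speech audio message: a named sequence of notes."""
    name: str
    notes: tuple  # (pitch_hz, duration_ms) pairs

@dataclass
class SonicButton:
    label: str
    press_earcon: Earcon
    slip_earcon: Earcon          # feedback when the user slips off the button
    played: list = field(default_factory=list)  # stand-in for an audio device

    def on_press(self):
        self.played.append(self.press_earcon.name)

    def on_slip_off(self):
        self.played.append(self.slip_earcon.name)

press = Earcon("press", ((523, 60),))           # short high blip
slip = Earcon("slip", ((523, 60), (262, 120)))  # falling two-note motif
ok = SonicButton("OK", press, slip)
ok.on_press()
ok.on_slip_off()
print(ok.played)  # the feedback sounds triggered, in order
```

The point of the structure is that the sound design lives in the widget, not in each application, which is how consistency across an interface is achieved.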

    Sonically enhanced interface toolkit

    This paper describes an on-going research project investigating the design of a user-interface toolkit composed of sonically enhanced widgets. The motivations for this work are the same as those behind graphical interface toolkits: to simplify construction, allowing designers who are not experts to create such interfaces; to ensure the sonically enhanced widgets are effective and improve usability; and to ensure the widgets use sound in a clear and consistent way across the interface.

    Parallel earcons: reducing the length of audio messages

    This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed that there are no differences in the rates of recognition between the two groups. Non-musicians are also shown to be equal in performance to musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rates of audio messages without compromising recognition rates.
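The timing argument above can be made concrete with a small model. This is an illustrative sketch under the assumption that an earcon part is a timed note sequence; the motif contents are invented, not taken from the paper.

```python
# Serial vs parallel compound earcons (illustrative model): a serial
# compound plays its parts one after another, so its length is the sum
# of the parts; a parallel compound overlays them, so its length is
# only that of the longest part.

def duration(part):
    """Total length of one earcon part in milliseconds."""
    return sum(d for _, d in part)

def serial_duration(parts):
    return sum(duration(p) for p in parts)

def parallel_duration(parts):
    return max(duration(p) for p in parts)

# Two hypothetical component motifs, each 800 ms long.
family = [(440, 200), (440, 200), (660, 400)]  # "family" motif
type_ = [(330, 400), (330, 400)]               # "type" motif

print(serial_duration([family, type_]))    # 1600 ms, played back to back
print(parallel_duration([family, type_]))  # 800 ms, played together
```

For equal-length parts the parallel compound halves the presentation time, which is the gain the abstract reports without loss of recognition.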

    Sonically-enhanced widgets: comments on Brewster and Clarke, ICAD 1997

    This paper presents a review of the research surrounding the paper “The Design and Evaluation of a Sonically Enhanced Tool Palette” by Brewster and Clarke from ICAD 1997. A historical perspective is given followed by a discussion of how this work has fed into current developments in the area.

    Using earcons to improve the usability of tool palettes

    This paper describes an experiment to investigate the effectiveness of adding sound to tool palettes. Palettes have usability problems because users need to see the information they present, but palettes are often outside the area of visual focus. Non-speech sounds called earcons were used to indicate the current tool and tool changes so that users could tell what tool was in use, wherever they were looking. Experimental results showed a significant reduction in the number of tasks performed with the wrong tool. Users knew what the current tool was and did not try to perform tasks with the wrong one.

    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it. One of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results from the experiment showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.

    Using non-speech sounds to provide navigation cues

    This article describes 3 experiments that investigate the possibility of using structured non-speech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs) where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs such as “does the lower quality of sound over the telephone lower recall rates?”, “can users remember earcons over a period of time?”, and “what effect does training type have on recall?” An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, a redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
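The rule-based construction of hierarchical earcons can be sketched as follows. This is a hedged illustration of the general principle (each node's earcon extends its parent's, so hearing a sound reveals the path from the root); the node names and motifs are invented and the exact construction rules differ from those defined in the article.

```python
# Sketch of hierarchical earcons: a node's earcon is built by
# accumulating one distinguishing motif per node on the path from the
# root, so a deeper node's sound begins with its ancestors' sounds
# and thereby encodes its location in the hierarchy.

def hierarchical_earcon(path, motif_for):
    """Concatenate the motifs of every node on the path, root first."""
    earcon = []
    for node in path:
        earcon.extend(motif_for[node])
    return earcon

# Hypothetical motifs as (pitch_hz, duration_ms) notes.
motif_for = {
    "root": [(262, 100)],
    "mail": [(330, 100), (330, 100)],
    "inbox": [(392, 200)],
}

parent = hierarchical_earcon(["root", "mail"], motif_for)
child = hierarchical_earcon(["root", "mail", "inbox"], motif_for)
# The child's earcon begins with the parent's, so a listener who knows
# the motifs can recover the full path from a single sound.
print(child[:len(parent)] == parent)  # True
```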