86 research outputs found

    Using non-speech sounds to provide navigation cues

    This article describes three experiments that investigate the possibility of using structured non-speech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as “does the lower quality of sound over the telephone lower recall rates?”, “can users remember earcons over a period of time?” and “what effect does training type have on recall?” An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With personal training, participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
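
    As a rough illustration of the hierarchical-earcon idea described above, the sketch below (Python) models a node's earcon as a set of sound parameters, with each level of the node's path setting one more parameter on top of those inherited from its parent. The specific attributes, values and per-level mappings are hypothetical, not the rules used in the experiment.

```python
# Illustrative sketch: a hierarchical earcon inherits its parent's sound
# parameters and changes one additional parameter per level of depth.
# Attribute names and values are hypothetical, not the experiment's rules.
from dataclasses import dataclass

@dataclass
class Earcon:
    timbre: str = "sine"      # set at level 1
    rhythm: str = "steady"    # set at level 2
    pitch: int = 60           # set at level 3 (MIDI note number)
    tempo: int = 120          # set at level 4 (beats per minute)

def hierarchical_earcon(path):
    """Build an earcon descriptor for a node given its path, e.g. (2, 1, 3)."""
    timbres = ["sine", "organ", "brass"]
    rhythms = ["steady", "dotted", "syncopated"]
    e = Earcon()
    for level, index in enumerate(path, start=1):
        if level == 1:
            e.timbre = timbres[index % len(timbres)]
        elif level == 2:
            e.rhythm = rhythms[index % len(rhythms)]
        elif level == 3:
            e.pitch = 60 + 2 * index
        elif level == 4:
            e.tempo = 120 + 20 * index
    return e

print(hierarchical_earcon((2, 1, 3)))   # earcon for a node three levels deep
```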

    Efficiency of Spearcon-Enhanced Navigation of One Dimensional Electronic Menus

    This study simulated and compared cell phone contact book menu navigation using combinations of both auditory (text-to-speech and spearcons) and visual cues. A total of 127 undergraduates participated in a study that required using one of five conditions of alphabetically listed menu cues to find a target name. Participants using visual cues (either alone or combined with auditory cues) outperformed those using only auditory cues. Performance was not found to be significantly different among the three auditory-only conditions. When combined with visual cues, spearcons improved navigational efficiency more than both text-to-speech cues and menus using no sound, and provided evidence for the ability of sound to enhance visual menus. Research results provide evidence applicable to efficient auditory menu creation.
    Gregory Corso - Committee Member/Second Reader; Bruce Walker - Faculty Mentor
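
    For readers unfamiliar with the term, a spearcon is generally produced by time-compressing the text-to-speech rendering of a menu item until it is no longer recognisable as speech, while remaining unique to that item. The sketch below is only a schematic of that pipeline; the tts() and time_compress() helpers are placeholders, and the compression factor is an assumption, not a value taken from this study.

```python
# Schematic sketch of spearcon generation: render an item with TTS, then
# time-compress the result. Both helpers are placeholders for real audio calls.

def tts(text):
    return f"<speech:{text}>"                  # stand-in for a TTS render

def time_compress(audio, factor):
    return f"<{factor}x-compressed:{audio}>"   # stand-in for a DSP speed-up

def spearcon(item, factor=2.5):
    """Hypothetical compression factor; real designs tune it per item."""
    return time_compress(tts(item), factor)

for name in ["Adams, Ann", "Baker, Bill", "Brown, Carol"]:
    print(spearcon(name))
```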

    Designing non-speech sounds to support navigation in mobile phone menus

    This paper describes a framework for integrating non-speech audio into hierarchical menu structures where visual feedback is limited. In the first part of this paper, emphasis is put on how to extract sound design principles from actual navigation problems. These design principles are then applied in the second part, through the design, implementation and evaluation of a set of sounds in a computer-based simulation of the Nokia 6110 mobile phone. The evaluation indicates that non-speech sound improves the performance of navigational tasks in terms of the number of errors made and the number of keypresses taken to complete the given tasks. This study provides both theoretical and practical insights about the design of audio cues intended to support navigation in complex menu structures.

    Using compound earcons to represent hierarchies

    Previous research on non-speech audio messages called earcons showed that they could provide powerful navigation cues in menu hierarchies. This work used hierarchical earcons. In this paper we suggest compound earcons provide a more flexible method for presenting this information. A set of sounds was created to represent the numbers 0-4 and dot. Sounds could then be created for any node in a hierarchy by concatenating these simple sounds. A hierarchy of four levels and 27 nodes was constructed. An experiment was conducted in which participants had to identify their location in the hierarchy by listening to an earcon. Results showed that participants could identify their location with over 97% accuracy, significantly better than with hierarchical earcons. Participants were also able to recognise previously unheard earcons with over 97% accuracy. These results showed that compound earcons are an effective way of representing hierarchies in sound.
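
    A minimal sketch of the concatenation scheme: given audio clips for the digits 0-4 and "dot", the cue for any node is simply the clip sequence spelling out its path. The clip file names below are placeholders, not assets from the study.

```python
# Illustrative sketch of compound earcons: a small set of audio clips
# (digits 0-4 and "dot") is concatenated to announce any node's path.
# The clip names are placeholders, not assets from the original study.

CLIPS = {str(d): f"digit_{d}.wav" for d in range(5)}
CLIPS["."] = "dot.wav"

def compound_earcon(path):
    """Return the clip sequence for a node path such as (1, 2, 3) -> 1.2.3."""
    symbols = ".".join(str(p) for p in path)     # "1.2.3"
    return [CLIPS[s] for s in symbols]           # play these back to back

print(compound_earcon((1, 2, 3)))
# ['digit_1.wav', 'dot.wav', 'digit_2.wav', 'dot.wav', 'digit_3.wav']
```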

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces. However, there is little research to show where the best places are to add sound to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added. It describes in detail usability problems with the widgets and then the non-speech sounds to overcome them. The non-speech sounds used are earcons. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.

    Navigating Telephone-Based Interfaces with Earcons

    Non-speech audio messages called earcons can provide powerful navigation cues in menu hierarchies. However, previous research on earcons has not addressed the particular problems of menus in telephone-based interfaces (TBIs), such as: does the lower quality of sound in TBIs lower recall rates, can users remember earcons over a period of time, and what effect does training type have on recall? An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With 'personal training' participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide excellent navigation cues for telephone-based interfaces.

    Navigation efficiency of two dimensional auditory menus using spearcon enhancements

    An investigation of using music to provide navigation cues.

    This paper describes an experiment that investigates new principles for representing hierarchical menus, such as telephone-based interface menus, with non-speech audio. A hierarchy of 25 nodes with a sound for each node was used. The sounds were designed to test the efficiency of using specific features of a musical language to provide navigation cues. Participants (half musicians and half non-musicians) were asked to identify the position of the sounds in the hierarchy. The overall recall rate of 86% suggests that syntactic features of a musical language of representation can be used as meaningful navigation cues. More generally, these results show that the specific meaning of musical motives can be used to provide ways to navigate in a hierarchical structure such as telephone-based interface menus.

    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it. One of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results from the experiment showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus, along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.
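
    The sketch below illustrates one way such slip-off feedback could be wired into a menu: a distinct, salient sound is played when the item released is not the item originally pressed. The event names and the play() helper are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: play a distinct sound when the pointer slips off a
# menu item during selection, so the error is noticed immediately.
# Event names and the play() helper are assumptions, not the paper's design.

def play(sound):
    print(f"[audio] {sound}")    # stand-in for a real audio playback call

class SonicMenu:
    def __init__(self, items):
        self.items = items
        self.armed = None        # the item the user pressed down on

    def on_press(self, item):
        self.armed = item
        play("item_highlight.wav")

    def on_release(self, item):
        if item == self.armed:
            play("select_confirm.wav")        # selection succeeded
        else:
            play("slip_off_warning.wav")      # salient slip-off feedback
        self.armed = None

menu = SonicMenu(["Open", "Save", "Quit"])
menu.on_press("Save")
menu.on_release("Quit")   # released on a different item -> warning sound
```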

    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased with mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking methods for navigating a one-dimensional menu. Moreover, I examined if advanced auditory cues improve user experience, not only for visuals-off situations, but also for visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform the users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study was a split-plot design with manipulated auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for TTS + spindex were lower than those of TTS alone in all input gesture types, regardless of visual mode. Also, on subjective rating scales, participants rated the TTS + spindex condition higher than the plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types) and its effects on navigation behaviors was also analyzed based on the two-stage navigation strategy model used in auditory menus. Results were discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S.; Committee Chair: Bruce N. Walker; Committee Member: Frank Durso; Committee Member: Gregory M. Corso
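
    A minimal sketch of the spindex idea, assuming each item is announced by a very short cue derived from its initial letter followed by the full text-to-speech read-out; the play_cue() and speak() helpers are placeholders, not the system used in the study.

```python
# Illustrative sketch of a spindex-enhanced auditory menu: each item is
# preceded by a brief cue derived from its initial letter, then spoken
# in full with TTS. The play_cue()/speak() helpers are placeholders.

def play_cue(letter):
    print(f"[cue] {letter}")      # e.g. a very short recording of "B"

def speak(text):
    print(f"[tts] {text}")        # stand-in for a text-to-speech call

def announce(item, spindex=True):
    if spindex:
        play_cue(item[0].upper())  # brief "speech index" cue
    speak(item)                    # full item read-out

songs = ["Across the Universe", "Blackbird", "Come Together"]
for title in songs:
    announce(title)
# While flicking quickly through a long list, mostly the short cues are
# heard, so users can skip to the right letter before slowing down.
```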