24 research outputs found

    Virtual acoustic displays

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
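    As a rough illustration of the kind of mapping an auditory cue editor performs, the sketch below binds display events to acoustic parameters and renders a short stereo cue. The event names, parameter fields, and the render_cue function are hypothetical stand-ins, not the actual ACE symbology, and simple amplitude panning stands in for the real-time spatialization the VIEW display performs over headphones.

        import numpy as np

        SAMPLE_RATE = 44100

        # Hypothetical cue table: each display event maps to acoustic parameters.
        CUE_TABLE = {
            "proximity_warning": {"freq_hz": 880.0, "dur_s": 0.25, "pan": -0.6},
            "task_complete":     {"freq_hz": 440.0, "dur_s": 0.40, "pan":  0.3},
        }

        def render_cue(event: str) -> np.ndarray:
            """Render a stereo sine-tone cue for a display event (sketch only)."""
            p = CUE_TABLE[event]
            t = np.linspace(0.0, p["dur_s"], int(SAMPLE_RATE * p["dur_s"]), endpoint=False)
            mono = 0.5 * np.sin(2 * np.pi * p["freq_hz"] * t)
            left = mono * (1.0 - p["pan"]) / 2.0   # pan in [-1, 1]: -1 = hard left
            right = mono * (1.0 + p["pan"]) / 2.0
            return np.stack([left, right], axis=1)

        cue = render_cue("proximity_warning")
        print(cue.shape)  # (samples, 2) stereo buffer ready for playback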

    Sound at the user interface

    Parallel earcons: reducing the length of audio messages

    This paper describes a method of presenting structured audio messages, earcons, in parallel so that they take less time to play and can better keep pace with interactions in a human-computer interface. The two component parts of a compound earcon are played in parallel so that the time taken is only that of a single part. An experiment was conducted to test the recall and recognition of parallel compound earcons as compared to serial compound earcons. Results showed no differences in recognition rates between the two groups, and non-musicians performed as well as musicians. Some extensions to the earcon creation guidelines of Brewster, Wright and Edwards are put forward based upon research into auditory stream segregation. Parallel earcons are shown to be an effective means of increasing the presentation rate of audio messages without compromising recognition rates.
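    To make the serial-versus-parallel distinction concrete, here is a minimal sketch, not the authors' implementation, that builds a compound earcon either by concatenating its two component motives or by mixing them so that playback takes only the duration of the longer part. The example motives are invented; in practice the guidelines mentioned above recommend designing the parts so they segregate into separate auditory streams.

        import numpy as np

        SAMPLE_RATE = 44100

        def motive(freqs, note_s=0.15):
            """A simple earcon motive: a short sequence of sine-tone notes."""
            t = np.linspace(0.0, note_s, int(SAMPLE_RATE * note_s), endpoint=False)
            return np.concatenate([0.4 * np.sin(2 * np.pi * f * t) for f in freqs])

        def serial_compound(a, b):
            """Serial compound earcon: play part A, then part B."""
            return np.concatenate([a, b])

        def parallel_compound(a, b):
            """Parallel compound earcon: mix A and B so the duration is that of the longer part."""
            out = np.zeros(max(len(a), len(b)))
            out[:len(a)] += a
            out[:len(b)] += b
            return 0.5 * out  # scale down to avoid clipping

        family = motive([523, 659, 784])   # illustrative "family" motive
        action = motive([330, 330, 392])   # illustrative "action" motive
        print(len(serial_compound(family, action)) / SAMPLE_RATE)    # ~0.9 s
        print(len(parallel_compound(family, action)) / SAMPLE_RATE)  # ~0.45 s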

    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that the addition of sound can improve the usability of human–computer interfaces; however, there is little research showing where sound is best added to improve usability. The approach described here is to integrate sound into widgets, the basic components of the human–computer interface. An overall structure for the integration of sound is presented. There are many problems with current graphical widgets, and many of these are difficult to correct by using more graphics. This paper presents many of the standard graphical widgets and describes how sound can be added to them: it details the usability problems with each widget and then the non-speech sounds, in this case earcons, used to overcome them. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces that effectively improve usability and have coherent and consistent sounds.
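    As one illustration of widget-level integration, the sketch below wraps a hypothetical button so that the widget itself, rather than each application, attaches earcons to its interaction events; the class, event names, and the slip-off example are invented for this sketch and are not taken from the paper.

        from typing import Callable

        class SonicButton:
            """A hypothetical sonically-enhanced button (sketch only)."""

            def __init__(self, label: str, on_press: Callable[[], None],
                         play_earcon: Callable[[str], None]):
                self.label = label
                self._on_press = on_press
                self._play = play_earcon

            def press(self, released_inside: bool = True) -> None:
                # The widget decides which earcon accompanies each event, so
                # designers get consistent sounds without being sound experts.
                if released_inside:
                    self._play("button_success_earcon")
                    self._on_press()
                else:
                    # e.g. a press that slips off the button before release
                    self._play("button_slip_off_earcon")

        # Usage: inject any audio backend; here we just log the earcon name.
        btn = SonicButton("Save", on_press=lambda: print("saved"),
                          play_earcon=lambda name: print("play:", name))
        btn.press(released_inside=True)
        btn.press(released_inside=False)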

    Comparing the Effects of Mental Workload Between Visual and Auditory Secondary Tasks During Laparoscopy

    The purpose of this study was to test Wickens' Multiple Resource Theory (MRT) by comparing performance and subjective workload on a visual-spatial secondary task with an auditory-spatial analog when paired with visual-spatial laparoscopic primary tasks. Two primary tasks were performed with a laparoscopic box trainer: a high-workload task that consisted of transferring rings from one peg to another and a low-workload task that consisted of grasping and placing large pencil erasers in a bowl. It was predicted that the visual-spatial secondary task would be more sensitive when paired with the laparoscopic primary task than the auditory analog. Findings from the study mostly supported this prediction. Proportion of correct detections and subjective workload scores indicated that the auditory-spatial secondary task was less demanding than the visual-spatial task in high-workload, dual-task conditions. However, no significant differences were found for response time and false alarms. Overall, these results support the modality predictions of MRT under high workload conditions. Additionally, this study provides further evidence supporting the use of the visual-spatial ball-and-tunnel task as a measure of workload during laparoscopic surgery.

    Internal representations of auditory frequency: behavioral studies of format and malleability by instructions

    Research has suggested that representational and perceptual systems draw upon some of the same processing structures, and evidence also has accumulated to suggest that representational formats are malleable by instructions. Very little research, however, has considered how nonspeech sounds are internally represented, and the use of audio in systems will often proceed under the assumption that separation of information by modality is sufficient for eliminating information processing conflicts. Three studies examined the representation of nonspeech sounds in working memory. In Experiment 1, a mental scanning paradigm suggested that nonspeech sounds can be flexibly represented in working memory, but also that a universal per-item scanning cost persisted across encoding strategies. Experiment 2 modified the sentence-picture verification task to include nonspeech sounds (i.e., a sound-sentence-picture verification task) and found evidence generally supporting three distinct formats of representation as well as a lingering effect of auditory stimuli for verification times across representational formats. Experiment 3 manipulated three formats of internal representation (verbal, visuospatial imagery, and auditory imagery) for a point estimation sonification task in the presence of three types of interference tasks (verbal, visuospatial, and auditory) in an effort to induce selective processing code (i.e., domain-specific working memory) interference. Results showed no selective interference but instead suggested a general performance decline (i.e., a general representational resource) for the sonification task in the presence of an interference task, regardless of the sonification encoding strategy or the qualitative interference task demands. Results suggested a distinct role of internal representations for nonspeech sounds with respect to cognitive theory. The predictions of the processing codes dimension of the multiple resources construct were not confirmed; possible explanations are explored. The practical implications for the use of nonspeech sounds in applications include a possible response time advantage when an external stimulus and the format of internal representation match.
    Ph.D. Committee Chair: Walker, Bruce; Committee Members: Bonebright, Terri; Catrambone, Richard; Corso, Gregory; Rogers, Wend
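    For context, a point-estimation sonification task maps a single data value onto an acoustic dimension, typically pitch, and asks the listener to estimate the value. The sketch below shows one plausible mapping and its inverse; the data range and frequency range are arbitrary illustration values, not those used in the experiments above.

        import math

        def value_to_frequency(value, vmin=0.0, vmax=100.0, f_low=220.0, f_high=880.0):
            """Map a data value to a pitch on a logarithmic frequency scale (illustrative ranges)."""
            share = (value - vmin) / (vmax - vmin)
            return f_low * (f_high / f_low) ** share

        def frequency_to_value(freq_hz, vmin=0.0, vmax=100.0, f_low=220.0, f_high=880.0):
            """Inverse mapping: the estimate a listener is asked to produce."""
            share = math.log(freq_hz / f_low) / math.log(f_high / f_low)
            return vmin + share * (vmax - vmin)

        f = value_to_frequency(37.0)
        print(round(f, 1), round(frequency_to_value(f), 1))  # pitch in Hz and recovered value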

    Avoiding overload in multiuser online applications

    One way to strengthen the bond between popular applications and their online user communities is to integrate the applications with their communities, so users are able to observe and communicate with other users. The result of this integration is a Multiuser Online Application (MOA). The problem studied in this thesis is that MOA users and systems will be overloaded with information generated by large communities and complex applications. The solution investigated was to filter the amount of information delivered to users while attempting to preserve the benefits of dwelling in a MOA environment. This strategy was evaluated according to the amount of information it could reduce and its effects as seen by MOA users. It was found that filtering could substantially reduce the information exchanged by users while still providing them with the benefits of integrating application and community.
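    A minimal sketch of the filtering idea, under assumptions not taken from the thesis: events generated by the community are scored for relevance to each user and only the top-scoring ones are delivered per update, trading completeness for reduced load. The Event fields, the scoring rule, and the delivery budget are all invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Event:
            author: str
            kind: str        # e.g. "chat", "presence", "app_action"
            payload: str

        def relevance(event: Event, friends: set, focus: str) -> float:
            """Toy relevance score: friends and the user's current activity rank higher."""
            score = 0.0
            if event.author in friends:
                score += 2.0
            if event.kind == focus:
                score += 1.0
            return score

        def filter_events(events, friends, focus, budget=3):
            """Deliver at most `budget` events per update, highest relevance first."""
            ranked = sorted(events, key=lambda e: relevance(e, friends, focus), reverse=True)
            return ranked[:budget]

        stream = [Event("alice", "chat", "hi"), Event("bob", "presence", "joined"),
                  Event("carol", "app_action", "opened doc"), Event("dave", "chat", "news"),
                  Event("erin", "presence", "idle")]
        print(filter_events(stream, friends={"alice", "carol"}, focus="chat"))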

    Multimedia Communication in e-Government Interface: A Usability and User Trust Investigation

    In the past few years, e-government has been a topic of much interest among those excited about the advent of Web technologies. Due to the growing demand for effective communication and real-time interaction between users and e-government applications, many governments are considering adding new tools to e-government portals to mitigate problems with user–interface communication. This study therefore investigates the use of multimodal metaphors, such as audio-visual avatars, in e-government interfaces to improve users' communication performance and to reduce the information overload and lack of trust common to many e-government interfaces. Only a minority of empirical studies, however, have assessed the role of audio-visual metaphors in e-government. The thesis accordingly investigates novel combinations of multimodal metaphors for presenting messaging content and evaluates their effects on users' communication performance, the usability of e-government interfaces, and users' perception of trust. The research comprised three experimental phases. An initial experiment explored and compared the usability of text for presenting messaging content versus recorded speech and text with graphic metaphors. The second experiment investigated two different styles of incorporating avatars versus the auditory channel. The third experiment examined a novel approach using speaking avatars with human-like facial expressions versus speaking avatars with full-body gestures during the presentation of messaging content, comparing usability, communication performance, and perception of trust. The results demonstrated the usefulness of the tested metaphors in enhancing e-government usability, improving communication performance, and increasing users' trust. The results also yielded a set of empirically derived guidelines for the design and use of these metaphors to produce more usable e-government interfaces.
    Saudi Arabia Embass