
    AudioGPS: Spatial audio navigation with a minimal attention interface

    In this paper we consider a prototype audio user interface for a Global Positioning System (GPS) that is designed to allow mobile computer users to carry out a location task while their eyes, hands and attention are often otherwise engaged. Audio user interfaces for GPS have typically been designed to meet the needs of visually handicapped users, and generally (though not exclusively) employ speech-audio. In this paper, we consider a prototype audio GPS user interface designed primarily for sighted mobile computer users who may have to attend simultaneously to other tasks, and who may be holding conversations at the same time. The system is considered in the context of being one component of a user interface for mobile computer users. The prototype system uses a simple form of spatial audio. Various candidate audio mappings of location and distance information are analysed. A variety of tasks, design considerations, technological opportunities and design trade-offs are considered. Preliminary findings are reported. Opportunities for improvements to the system and future empirical testing are explored.
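
    The abstract does not give the candidate mappings in detail; as a rough, hypothetical illustration of the kind of non-speech mapping such a system might analyse, the sketch below pans a tone according to the waypoint's bearing relative to the user's heading and speeds up a pulse rate as the remaining distance shrinks. The function names and parameter ranges are assumptions, not the authors' design.

```python
# Illustrative sketch only: a hypothetical bearing/distance-to-audio mapping,
# not the mappings analysed in the paper.

def bearing_to_pan(bearing_deg, heading_deg):
    """Map the waypoint bearing relative to the user's heading onto a stereo
    pan value in [-1.0, 1.0] (-1 = hard left, +1 = hard right)."""
    rel = (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    # Anything behind the listener saturates to full left or full right.
    return max(-1.0, min(1.0, rel / 90.0))

def distance_to_pulse_rate(distance_m, near_m=10.0, far_m=500.0):
    """Map remaining distance onto a pulse rate in Hz: faster pulses as the
    user closes on the target (a 'Geiger counter' style cue)."""
    d = max(near_m, min(far_m, distance_m))
    t = (far_m - d) / (far_m - near_m)
    # Linear interpolation between 0.5 Hz (far) and 8 Hz (near).
    return 0.5 + t * (8.0 - 0.5)

if __name__ == "__main__":
    print(bearing_to_pan(bearing_deg=45.0, heading_deg=0.0))  # target ahead-right
    print(distance_to_pulse_rate(120.0))                      # mid-distance pulse rate
```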

    Large-scale mobile audio environments for collaborative musical interaction.

    New application spaces and artistic forms can emerge when users are freed from constraints. In the general case of human-computer interfaces, users are often confined to a fixed location, severely limiting mobility. To overcome this constraint in the context of musical interaction, we present a system to manage large-scale collaborative mobile audio environments, driven by user movement. Multiple participants navigate through physical space while sharing overlaid virtual elements. Each user is equipped with a mobile computing device, GPS receiver, orientation sensor, microphone, headphones, or various combinations of these technologies. We investigate methods of location tracking, wireless audio streaming, and state management between mobile devices and centralized servers. The result is a system that allows mobile users, with subjective 3-D audio rendering, to share virtual scenes. The audio elements of these scenes can be organized into large-scale spatial audio interfaces, thus allowing for immersive mobile performance, locative audio installations, and many new forms of collaborative sonic activity.
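
    As a hedged sketch of the kind of geometry such subjective 3-D rendering involves (not the authors' implementation), the code below converts a listener's GPS fix and compass heading, together with a virtual source's coordinates, into a distance and head-relative azimuth, using a flat-earth approximation that is reasonable over the scale of a shared outdoor scene.

```python
# Illustrative sketch only: listener-relative geometry for a virtual sound
# source; the function name and approximation are assumptions, not the system
# described in the paper.
import math

EARTH_RADIUS_M = 6_371_000.0

def source_relative_to_listener(lat_u, lon_u, heading_deg, lat_s, lon_s):
    """Return (distance_m, azimuth_deg) of a virtual sound source relative to a
    listener's GPS position and compass heading, using an equirectangular
    (flat-earth) approximation that is adequate over tens of metres."""
    lat0 = math.radians((lat_u + lat_s) / 2.0)
    dx = math.radians(lon_s - lon_u) * math.cos(lat0) * EARTH_RADIUS_M  # metres east
    dy = math.radians(lat_s - lat_u) * EARTH_RADIUS_M                   # metres north
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0          # bearing from north
    azimuth = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # head-relative angle
    return distance, azimuth

if __name__ == "__main__":
    # Listener facing due north, source roughly 20 m to the north-east.
    d, az = source_relative_to_listener(45.5000, -73.6000, 0.0, 45.50013, -73.59982)
    print(f"{d:.1f} m at {az:.0f} degrees")
```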

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, the preliminary implementations of such systems only use visual interfaces, meaning that they are inaccessible to blind and low vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research has empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode where object descriptions were given via spatial language using clockface angles. After learning the targets through the four modes, the participants spatially updated the position of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate the hand-motion-triggered mode to be better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode, head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance with these modes and found no significant performance differences between them. These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices, without the need for expensive head-tracking hardware. Finally, a third study evaluated room layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking of the user's head or hand. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
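
    The non-perceptual mode described targets with spatial language using clockface angles; a minimal sketch of that idea (the wording and rounding are assumptions, not the thesis software) is shown below.

```python
# Illustrative sketch only: spatial-language rendering of a target using
# clockface angles; not the exact phrasing used in the experiments.

def clockface_description(azimuth_deg, distance_m):
    """Convert a head-relative azimuth (0 = straight ahead, positive clockwise)
    and a distance into a spatial-language description using clockface hours,
    e.g. 'at 2 o'clock, 3 metres away'."""
    hour = round((azimuth_deg % 360.0) / 30.0) % 12
    if hour == 0:
        hour = 12
    return f"at {hour} o'clock, {distance_m:.0f} metres away"

if __name__ == "__main__":
    print(clockface_description(60.0, 3.2))    # at 2 o'clock, 3 metres away
    print(clockface_description(-90.0, 1.5))   # at 9 o'clock, 2 metres away
```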

    Using Sound to Enhance Users’ Experiences of Mobile Applications

    The latest smartphones with GPS, electronic compass, directional audio, touch screens etc. hold potential for location-based services that are easier to use compared to traditional tools. Rather than interpreting maps, users may focus on their activities and the environment around them. Interfaces may be designed that let users search for information by simply pointing in a direction. Database queries can be created from GPS location and compass direction data. Users can get guidance to locations through pointing gestures, spatial sound and simple graphics. This article describes two studies testing prototypic applications with multimodal user interfaces built on spatial audio, graphics and text. Tests show that users appreciated the applications for their ease of use, for being fun and effective to use and for allowing users to interact directly with the environment rather than with abstractions of it. The multimodal user interfaces contributed significantly to the overall user experience.
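
    To illustrate how a query might be created from GPS location and compass direction (a hypothetical sketch; the sector width, range limit and data layout are assumptions, not the studied prototypes), the code below returns the points of interest that lie roughly in the direction the user is pointing.

```python
# Illustrative sketch only: a point-in-a-direction query over a list of
# points of interest; thresholds and data layout are assumptions.
import math

def bearing_and_distance(lat_u, lon_u, lat_p, lon_p):
    """Approximate bearing (degrees from north) and distance (metres) from the
    user to a point of interest, using a local flat-earth approximation."""
    lat0 = math.radians((lat_u + lat_p) / 2.0)
    dx = math.radians(lon_p - lon_u) * math.cos(lat0) * 6_371_000.0
    dy = math.radians(lat_p - lat_u) * 6_371_000.0
    return math.degrees(math.atan2(dx, dy)) % 360.0, math.hypot(dx, dy)

def pointing_query(user_lat, user_lon, pointing_deg, pois,
                   sector_deg=30.0, max_range_m=300.0):
    """Return the points of interest lying within +/- sector_deg/2 of the
    direction the user is pointing, ordered nearest first."""
    hits = []
    for name, lat, lon in pois:
        bearing, dist = bearing_and_distance(user_lat, user_lon, lat, lon)
        offset = abs((bearing - pointing_deg + 180.0) % 360.0 - 180.0)
        if offset <= sector_deg / 2.0 and dist <= max_range_m:
            hits.append((dist, name))
    return [name for _, name in sorted(hits)]

if __name__ == "__main__":
    pois = [("Cafe", 59.3327, 18.0650), ("Museum", 59.3330, 18.0670)]
    print(pointing_query(59.3325, 18.0649, pointing_deg=20.0, pois=pois))
```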

    Sound for enhanced experiences in mobile applications

    When visiting new places you want information about restaurants, shopping, places of historic interest etc. Smartphones are perfect tools for delivering such location-based information, but the risk is that users get absorbed by texts, maps, videos etc. on the device screen and get a second-hand experience of the environment they are visiting rather than the sought-after first-hand experience. One problem is that the users' eyes are often directed to the device screen rather than to the surrounding environment. Another problem is that interpreting more or less abstract information on maps, texts, images etc. may take up significant shares of the users' overall cognitive resources. The work presented here tried to overcome these two problems by studying design for human-computer interaction based on the users' everyday abilities such as directional hearing and point and sweep gestures. Today's smartphones know where you are and in what direction you are pointing the device, and they have systems for rendering spatial audio. These readily available technologies hold the potential to make information easier to interpret and use, demand fewer cognitive resources and free the users from having to look more or less constantly at a device screen.

    Crossmodal spatial location: initial experiments

    This paper describes an alternative form of interaction for mobile devices using crossmodal output. The aim of our work is to investigate the equivalence of audio and tactile displays so that the same messages can be presented in one form or another. Initial experiments show that spatial location can be perceived as equivalent in both the auditory and tactile modalities. Results show that participants are able to map presented 3D audio positions to tactile body positions on the waist most effectively when mobile, and that significantly more errors are made when using the ankle or wrist. This paper compares the results from both a static and a mobile experiment on crossmodal spatial location and outlines the most effective ways to use this crossmodal output in a mobile context.
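
    As an illustration of the kind of crossmodal mapping the experiments rely on, the sketch below assigns a sound-source azimuth to the nearest vibrotactile actuator on a waist belt; the tactor count and layout are assumptions for illustration, not the apparatus used in the study.

```python
# Illustrative sketch only: mapping an audio azimuth onto a waist-belt tactor;
# an 8-tactor evenly spaced belt is an assumption, not the study hardware.

def azimuth_to_tactor(azimuth_deg, n_tactors=8):
    """Map a sound-source azimuth (0 = front, positive clockwise) onto the
    index of the nearest tactor on a belt of n_tactors evenly spaced around
    the waist, with tactor 0 at the navel."""
    step = 360.0 / n_tactors
    return int(round((azimuth_deg % 360.0) / step)) % n_tactors

if __name__ == "__main__":
    print(azimuth_to_tactor(0.0))     # 0: front
    print(azimuth_to_tactor(95.0))    # 2: right hip
    print(azimuth_to_tactor(180.0))   # 4: rear
```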

    An investigation of eyes-free spatial auditory interfaces for mobile devices: supporting multitasking and location-based information

    Auditory interfaces offer a solution to the problem of effective eyes-free mobile interactions. However, a problem with audio, as opposed to visual displays, is dealing with multiple simultaneous information streams. Spatial audio can be used to differentiate between streams by locating them in separate spatial auditory streams. In this thesis, we consider which spatial audio designs might be the most effective for supporting multiple auditory streams and the impact such spatialisation might have on the users' cognitive load. An investigation is carried out to explore the extent to which 3D audio can be effectively incorporated into mobile auditory interfaces to offer users eyes-free interaction for both multitasking and accessing location-based information. Following a successful calibration of the 3D audio controls on the mobile device of choice for this work (the Nokia N95 8GB), a systematic evaluation of 3D audio techniques is reported in the experimental chapters of this thesis, which considered the effects of multitasking and multi-level displays, as well as differences between egocentric and exocentric designs. One experiment investigates the implementation and evaluation of a number of different spatial (egocentric) and non-spatial audio techniques for supporting eyes-free mobile multitasking, including spatial minimisation. The efficiency and usability of these techniques were evaluated under varying cognitive load. This evaluation showed an important interaction between cognitive load and the method used to present multiple auditory streams. The spatial minimisation technique offered an effective means of presenting and interacting with multiple auditory streams simultaneously in a selective-attention task (low cognitive load), but it was not as effective in a divided-attention task (high cognitive load), in which the interaction benefited significantly from the interruption of one of the streams. Two further experiments examine a location-based approach to supporting multiple information streams in a realistic eyes-free mobile environment. An initial case study was conducted in an outdoor mobile audio-augmented exploratory environment that allowed for the analysis and description of user behaviour in a purely exploratory environment. 3D audio was found to be an effective technique to disambiguate multiple sound sources in a mobile exploratory environment and to provide a more engaging and immersive experience, as well as encouraging exploratory behaviour. A second study extended the work of the previous case study by evaluating a number of complex multi-level spatial auditory displays that enabled interaction with multiple streams of location-based information in an indoor mobile audio-augmented exploratory environment. It was found that a consistent exocentric design across levels failed to reduce workload or increase user satisfaction, so this design was widely rejected by users. However, the rest of the spatial auditory displays tested in this study encouraged an exploratory behaviour similar to that described in the previous case study, here further characterised by increased user satisfaction and low perceived workload.
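
    One plausible reading of the spatial minimisation technique (an assumption for illustration, not the thesis implementation) is sketched below: the stream currently in focus is rendered straight ahead at full gain while the remaining streams are parked at a single off-axis position at reduced gain.

```python
# Illustrative sketch only: one possible "spatial minimisation" layout for
# simultaneous audio streams; the angles and gains are assumptions.

def spatialise_streams(stream_ids, focus_id,
                       minimised_azimuth_deg=90.0, minimised_gain=0.3):
    """Assign each audio stream an (azimuth_deg, gain) pair: the focused stream
    is rendered directly ahead at full gain, every other stream is minimised to
    a single off-axis position at reduced gain."""
    layout = {}
    for sid in stream_ids:
        if sid == focus_id:
            layout[sid] = (0.0, 1.0)
        else:
            layout[sid] = (minimised_azimuth_deg, minimised_gain)
    return layout

if __name__ == "__main__":
    print(spatialise_streams(["podcast", "navigation", "email"],
                             focus_id="navigation"))
```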

    Testing Two Tools for Multimodal Navigation

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth, hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.

    A comparison of feedback cues for enhancing pointing efficiency in interaction with spatial audio displays

    An empirical study is presented that compared six different types of feedback cue for enhancing pointing efficiency in deictic spatial audio displays. Participants were asked to select a sound using a physical pointing gesture, with the help of a loudness cue, a timbre cue and an orientation update cue, as well as with combinations of these cues. Display content was varied systematically to investigate the effect of increasing display population. Speed, accuracy and throughput ratings are provided, as well as effective target widths that allow for minimal error rates. The results showed direct pointing to be the most efficient interaction technique; however, large effective target widths reduce the applicability of this technique. Movement-coupled cues were found to significantly reduce display element size, but resulted in slower interaction and were affected by display content due to the requirement of continuous target attainment. The results show that, with appropriate design, it is possible to overcome interaction uncertainty and provide solutions that are effective in mobile human-computer interaction.
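
    As a hypothetical illustration of a movement-coupled loudness cue (the gain curve and angles are assumptions, not the cue design evaluated in the study), the sketch below raises playback gain as the angular error between the pointing direction and the target bearing decreases.

```python
# Illustrative sketch only: a possible loudness cue coupled to a pointing
# gesture; the capture angle and fade law are assumptions.

def loudness_cue_gain(pointing_deg, target_deg, capture_deg=10.0, floor=0.1):
    """Return a playback gain in [floor, 1.0] that grows as the pointing
    direction converges on the target: full gain inside the capture angle,
    fading towards the floor at 90 degrees of error or more."""
    error = abs((target_deg - pointing_deg + 180.0) % 360.0 - 180.0)
    if error <= capture_deg:
        return 1.0
    if error >= 90.0:
        return floor
    # Linear fade between the capture angle and 90 degrees of error.
    t = (90.0 - error) / (90.0 - capture_deg)
    return floor + t * (1.0 - floor)

if __name__ == "__main__":
    for err in (0.0, 30.0, 60.0, 120.0):
        print(err, round(loudness_cue_gain(err, 0.0), 2))
```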