544 research outputs found

    Interaction and presentation techniques for shake menus in tangible augmented reality

    Menus play an important role in both information presentation and system control. We explore the design space of shake menus, which are intended for use in tangible augmented reality. Shake menus are radial menus displayed centered on a physical object and activated by shaking that object. One important aspect of their design space is the coordinate system used to present menu options. We conducted a within-subjects user study to compare the speed and efficacy of several alternative methods for presenting shake menus in augmented reality (world-referenced, display-referenced, and object-referenced), along with a baseline technique (a linear menu on a clipboard). Our findings suggest trade-offs amongst speed, efficacy, and flexibility of interaction, and point towards the possible advantages of hybrid approaches that compose together transformations in different coordinate systems. We close by describing qualitative feedback from use and present several illustrative applications of the technique.
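The three presentation frames compared in this abstract differ only in which pose supplies the menu's centre and rotation. A minimal sketch of that idea (our own illustration, not code from the paper; the function name and 2-D-plus-depth simplification are assumptions):

```python
import math

def radial_menu_positions(n_items, radius, center, rotation=0.0):
    """Place n_items evenly around a circle of the given radius about center.

    center is an (x, y, z) point in whichever coordinate frame the menu is
    referenced to; rotation lets an object-referenced menu follow the shaken
    object's orientation, while a world- or display-referenced menu would
    pass a fixed value.
    """
    cx, cy, cz = center
    positions = []
    for i in range(n_items):
        angle = rotation + 2 * math.pi * i / n_items
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle),
                          cz))
    return positions

# World-referenced: centre fixed in world coordinates, rotation fixed.
world_menu = radial_menu_positions(4, 0.1, (0.5, 1.2, 0.0))

# Object-referenced: centre and rotation would track the object's pose
# each frame; here we just show a rotated placement.
object_menu = radial_menu_positions(4, 0.1, (0.5, 1.2, 0.0),
                                    rotation=math.pi / 4)
```

A display-referenced menu would use the same placement routine with the centre expressed in screen coordinates instead of world or object coordinates.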

    Improvised interfaces for real-time musical applications

    Computers offer a wealth of promises for real-time musical control. One of them is to enable musicians to change the structure of their instruments at the same time as they are playing them, allowing them to adapt their tools to their wills and needs. Few interaction styles provide enough freedom to achieve this. Improvised interfaces are tangible interfaces made out of found objects and tailored by their users. We propose to take advantage of these improvised interfaces to turn the surrounding physical environment into a dynamic musical instrument with tremendous possibilities. Methods dealing with design issues are presented and an implementation of this novel approach is described.

    Exploring Interactions with Printed Data Visualizations in Augmented Reality

    Visual Hints for Tangible Gestures in Augmented Reality

    Tangible Augmented Reality (AR) systems imbue physical objects with the ability to act and respond in new ways. In particular, physical objects and gestures made with them gain meaning that does not exist outside the tangible AR environment. The existence of this new set of possible actions and outcomes is not always apparent, making it necessary to learn new movements or gestures. Addressing this opportunity, we present visual hints, which are graphical representations in AR of potential actions and their consequences in the augmented physical world. Visual hints enable discovery, learning, and completion of gestures and manipulation in tangible AR. Here, we discuss our investigation of a variety of representations of visual hints and methods for activating them. We then describe a specific implementation that supports gestures developed for a tangible AR user interface to an electronic field guide for botanists, and present results from a pilot study.

    Designing Interactions with Multilevel Auditory Displays in Mobile Audio-Augmented Reality

    Auditory interfaces offer a solution to the problem of effective eyes-free mobile interactions. In this article, we investigate the use of multilevel auditory displays to enable eyes-free mobile interaction with indoor location-based information in non-guided audio-augmented environments. A top-level exocentric sonification layer advertises information in a gallery-like space. A secondary interactive layer is used to evaluate three different conditions that varied in the presentation (sequential versus simultaneous) and spatialisation (non-spatialised versus egocentric/exocentric spatialisation) of multiple auditory sources. Our findings show that (1) participants spent significantly more time interacting with spatialised displays; (2) using the same design for primary and interactive secondary display (simultaneous exocentric) showed a negative impact on the user experience, an increase in workload and substantially increased participant movement; and (3) the other spatial interactive secondary display designs (simultaneous egocentric, sequential egocentric, and sequential exocentric) showed an increase in time spent stationary but no negative impact on the user experience, suggesting a more exploratory experience. A follow-up qualitative and quantitative analysis of user behaviour supports these conclusions. These results provide practical guidelines for designing effective eyes-free interactions for far richer auditory soundscapes.
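The egocentric/exocentric distinction this abstract relies on can be sketched in a few lines: an exocentric source is pinned to a world position, so its bearing shifts as the listener moves or turns, while an egocentric source is pinned to the listener's own frame. This 2-D sketch is our own illustration under that reading, not code from the article:

```python
import math

def exocentric_azimuth(listener_pos, listener_heading, source_pos):
    """Bearing (radians, relative to the listener's nose) of a source
    fixed at a world position: it changes as the listener moves or turns."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    return math.atan2(dy, dx) - listener_heading

def egocentric_azimuth(fixed_offset):
    """Bearing of a source attached to the listener's own frame: it stays
    at the same offset no matter how the listener moves or turns."""
    return fixed_offset

# A source directly ahead in world space...
ahead = exocentric_azimuth((0.0, 0.0), 0.0, (1.0, 0.0))
# ...ends up to the listener's right after a 90-degree left turn.
after_turn = exocentric_azimuth((0.0, 0.0), math.pi / 2, (1.0, 0.0))
```

The resulting azimuth would then feed whatever spatial rendering (e.g. HRTF panning) the display uses; the sequential-versus-simultaneous conditions concern when sources play, not how their bearings are computed.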

    Augmenting spaces and creating interactive experiences using video camera networks

    This research addresses the problem of creating interactive experiences that encourage people to explore spaces. Besides obvious destinations such as museums or art galleries, the spaces people visit can also be, for example, a supermarket or a restaurant. As technology evolves, people become more demanding in the way they use it and expect better forms of interaction with the space that surrounds them. Interaction with the space allows information to be transmitted to visitors in a friendly way, leading them to explore it and gain knowledge. Systems that provide better experiences while exploring spaces demand hardware and software that is not within the reach of every space owner, either because of the cost or the inconvenience of an installation that can damage artefacts or the space environment. We propose a system, adaptable to different spaces, that uses a video camera network and a Wi-Fi network present at the space (or that can be installed) to support interactive experiences on the visitor's mobile device. The system is composed of an infrastructure (called vuSpot), a language grammar used to describe interactions at a space (called XploreDescription), a visual tool used to design interactive experiences (called XploreBuilder) and a tool used to create interactive experiences (called urSpace). By using XploreBuilder, a tool built on top of vuSpot, a user with little or no experience in programming can define a space and design interactive experiences. This tool generates a description of the space and of the interactions at that space (which complies with the XploreDescription grammar). These descriptions can be given to urSpace, another tool built on top of vuSpot, which creates the interactive experience application. With this system we explore new forms of interaction, using mobile devices and pico projectors to deliver additional information to users, leading to the creation of interactive experiences.
    The individual components are presented, as well as the results of the respective user tests, which were positive. Design and implementation become cheaper, faster and more flexible and, since the system does not depend on knowledge of a programming language, accessible to the general public.
    NOVA Laboratory for Computer Science and Informatics (NOVA LINCS), Multimodal Systems, Departamento de Informática (DI), Faculdade de Ciências e Tecnologia (FCT), Universidade Nova de Lisboa (UNL) and Escola Superior de Tecnologia de Setúbal (EST Setúbal), Instituto Politécnico de Setúbal (IPS)

    Investigation of dynamic three-dimensional tangible touchscreens: Usability and feasibility

    Touchscreen controls may soon be able to move from two physical dimensions to three. Though solutions exist for enhanced tactile touchscreen interaction using vibrotactile devices, no definitive commercial solution yet exists for providing real, physical shape to the virtual buttons on a touchscreen display. Of the many next steps in interface technology, this paper concentrates on the path leading to tangible, dynamic touchscreen surfaces. An experiment was performed that explores the usage differences between a flat-surface touchscreen and one augmented with raised surface controls. The results were mixed. The combination of tactile-visual modalities had a negative effect on task completion time when visual attention was focused on a single task (single target task time increased by 8% and serial target task time increased by 6%). On the other hand, the dual modality had a positive effect on error rate when visual attention was divided between two tasks (the serial target error rate decreased by 50%). In addition to the experiment, this study also investigated the feasibility of creating a dynamic, three-dimensional, tangible touchscreen. A new interface solution may be possible by inverting the traditional touchscreen architecture and integrating emerging technologies such as organic light emitting diode (OLED) displays and electrorheological fluid-based tactile pins.

    An investigation of eyes-free spatial auditory interfaces for mobile devices: supporting multitasking and location-based information

    Auditory interfaces offer a solution to the problem of effective eyes-free mobile interactions. However, a problem with audio, as opposed to visual displays, is dealing with multiple simultaneous information streams. Spatial audio can be used to differentiate between different streams by locating them in separate spatial auditory streams. In this thesis, we consider which spatial audio designs might be the most effective for supporting multiple auditory streams and the impact such spatialisation might have on the users' cognitive load. An investigation is carried out to explore the extent to which 3D audio can be effectively incorporated into mobile auditory interfaces to offer users eyes-free interaction for both multitasking and accessing location-based information. Following a successful calibration of the 3D audio controls on the mobile device of choice for this work (the Nokia N95 8GB), a systematic evaluation of 3D audio techniques is reported in the experimental chapters of this thesis, which considered the effects of multitasking and multi-level displays, as well as differences between egocentric and exocentric designs. One experiment investigates the implementation and evaluation of a number of different spatial (egocentric) and non-spatial audio techniques for supporting eyes-free mobile multitasking, including spatial minimisation. The efficiency and usability of these techniques was evaluated under varying cognitive load. This evaluation showed an important interaction between cognitive load and the method used to present multiple auditory streams. The spatial minimisation technique offered an effective means of presenting and interacting with multiple auditory streams simultaneously in a selective-attention task (low cognitive load) but it was not as effective in a divided-attention task (high cognitive load), in which the interaction benefited significantly from the interruption of one of the streams.
    Two further experiments examine a location-based approach to supporting multiple information streams in a realistic eyes-free mobile environment. An initial case study was conducted in an outdoor mobile audio-augmented exploratory environment that allowed for the analysis and description of user behaviour in a purely exploratory environment. 3D audio was found to be an effective technique to disambiguate multiple sound sources in a mobile exploratory environment and to provide a more engaging and immersive experience, as well as encouraging exploratory behaviour. A second study extended the work of the previous case study by evaluating a number of complex multi-level spatial auditory displays that enabled interaction with multiple sources of location-based information in an indoor mobile audio-augmented exploratory environment. It was found that a consistent exocentric design across levels failed to reduce workload or increase user satisfaction, so this design was widely rejected by users. However, the rest of the spatial auditory displays tested in this study encouraged exploratory behaviour similar to that described in the previous case study, here further characterised by increased user satisfaction and low perceived workload.

    Proceedings of the Second International Workshop on Physicality, Physicality 2007
