25,539 research outputs found

    Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays?

    Participants used a helmet-mounted display (HMD) and a desk-top (monitor) display to learn the layouts of two large-scale virtual environments (VEs) through repeated, direct navigational experience. Both VEs were “virtual buildings” containing more than seventy rooms. Participants using the HMD navigated the buildings significantly more quickly and developed a significantly more accurate sense of relative straight-line distance. There was no significant difference between the two types of display in terms of the distance that participants traveled or the mean accuracy of their direction estimates. Behavioral analyses showed that participants took advantage of the natural, head-tracked interface provided by the HMD in ways that included “looking around” more often while traveling through the VEs, and spending less time stationary in the VEs while choosing a direction in which to travel.

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
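    The pipeline described above (vision recognition feeding a scripting layer that drives a navigation and selection application, with audio feedback) can be illustrated with a minimal sketch. The gesture names, the mapping, and the feedback below are hypothetical placeholders for illustration, not taken from the Ambient Gestures system itself.

        # Minimal sketch of the pipeline described above: a vision layer emits gesture
        # names, a scripting layer maps them to navigation/selection commands, and audio
        # feedback confirms each action. Gesture names and commands are placeholders.

        GESTURE_SCRIPT = {
            "swipe_left": "previous_item",
            "swipe_right": "next_item",
            "push": "select_item",
        }

        def audio_feedback(message: str) -> None:
            # Stand-in for the system's non-visual confirmation (earcon or speech).
            print(f"[audio] {message}")

        def handle_gesture(gesture: str) -> None:
            command = GESTURE_SCRIPT.get(gesture)
            if command is None:
                audio_feedback("gesture not recognised")
                return
            # In a real deployment this would drive the navigation/selection application.
            audio_feedback(f"executing {command}")

        if __name__ == "__main__":
            for gesture in ("swipe_right", "push", "wave"):
                handle_gesture(gesture)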

    Virtual Skiing as an Art Installation

    The Virtual Skiing game allows the user to immerse himself in the sensation of skiing without using any obvious hardware interface. To move down the virtual slope, the skier, who stands on a pair of skis attached to the floor, performs the same movements as on real skis (in particular, carving skis): tilting the body to the left initiates a left turn, tilting to the right initiates a right turn, and lowering the body increases the speed. The skier observes his progress down the virtual slope, which is projected on the wall in front of him. The skier’s movements are recorded by a video camera placed in front of him and processed on a PC in real time to drive the projected animation of the virtual slope.
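    The control scheme described above amounts to a simple mapping from estimated body posture to steering and speed. The sketch below illustrates that mapping only; the lean/crouch inputs, gains, and base speed are assumptions made for illustration and are not taken from the installation's vision-processing code.

        # Illustrative mapping from an estimated body posture (as the camera tracking
        # might report it) to turn rate and speed, following the control scheme described
        # in the abstract. Input ranges, gains and base speed are invented for this sketch.

        def ski_control(lean_x: float, crouch: float) -> tuple[float, float]:
            """lean_x: lateral lean in [-1, 1] (negative = left); crouch: 0 (upright) to 1 (low)."""
            turn_rate_deg_s = 45.0 * lean_x      # leaning left (negative) steers left
            speed_m_s = 5.0 + 10.0 * crouch      # lowering the body increases speed
            return turn_rate_deg_s, speed_m_s

        if __name__ == "__main__":
            print(ski_control(lean_x=-0.4, crouch=0.2))  # gentle left turn at moderate speed
            print(ski_control(lean_x=0.0, crouch=0.9))   # straight ahead, near top speed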

    Using Wii technology to explore real spaces via virtual environments for people who are blind

    Purpose – Virtual environments (VEs) that represent real spaces (RSs) give people who are blind the opportunity to build a cognitive map in advance, which they can then use when arriving at the RS. Design – In this research study, Nintendo Wii-based technology was used for exploring VEs via the Wiici application. The Wiimote allows the user to interact with VEs by simulating walking and by scanning the space. Findings – Through haptic and auditory feedback, users learned to explore new spaces. We examined the participants' abilities to explore new simple and complex places, construct a cognitive map, and perform orientation tasks in the RS. Originality – To our knowledge, this work presents the first virtual environment for people who are blind that allows participants to scan the environment and thereby construct map-model spatial representations.
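    A toy illustration of the scanning interaction described above follows: a pointing direction is swept across a modelled room, and nearby objects trigger auditory (and, on real hardware, haptic) cues. The room layout, field of view, and feedback messages are invented for the sketch; it does not use the actual Wiici application or any real Wiimote API.

        import math

        # Toy model of the "scan the space" interaction: the user sweeps a pointing
        # direction and receives an audio cue (plus, on real hardware, a vibration)
        # whenever an object lies along the scan direction. Layout and values are
        # illustrative only.

        ROOM_OBJECTS = {"door": (3.0, 0.0), "table": (2.0, 2.0), "window": (0.0, 4.0)}

        def scan(user_pos, heading_deg, fov_deg=15.0):
            """Return (distance, name) for objects within fov_deg of the scan direction."""
            hits = []
            for name, (ox, oy) in ROOM_OBJECTS.items():
                dx, dy = ox - user_pos[0], oy - user_pos[1]
                bearing = math.degrees(math.atan2(dy, dx))
                # Smallest signed angle between the scan direction and the object bearing.
                diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
                if abs(diff) <= fov_deg:
                    hits.append((math.hypot(dx, dy), name))
            return sorted(hits)

        if __name__ == "__main__":
            for dist, name in scan(user_pos=(0.0, 0.0), heading_deg=45.0):
                print(f"[audio cue] {name} at about {dist:.1f} m")  # a haptic pulse would fire here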

    BCI-Based Navigation in Virtual and Real Environments

    A Brain-Computer Interface (BCI) is a system that enables people to control an external device with their brain activity, without the need for any muscular activity. Researchers in the BCI field aim to develop applications that improve the quality of life of severely disabled patients, for whom a BCI can be a useful channel for interacting with their environment. Some of these systems are intended to control a mobile device (e.g. a wheelchair). Virtual Reality is a powerful tool that can give subjects the opportunity to train and to test different applications in a safe environment. This technical review focuses on systems aimed at navigation, both in virtual and real environments. This work was partially supported by the Innovation, Science and Enterprise Council of the Junta de Andalucía (Spain), project P07-TIC-03310; the Spanish Ministry of Science and Innovation, project TEC 2011-26395; and by the European fund ERDF.
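    At its core, a BCI navigation system of the kind reviewed above maps the output of a classifier operating on brain activity to discrete motion commands. The sketch below shows only that mapping, with a placeholder classifier; the mental-imagery classes, commands, and loop are generic examples rather than any specific system covered by the review.

        import random

        # Generic illustration of a BCI navigation loop: each window of brain activity is
        # classified as one of a few mental commands, and the predicted class is mapped to
        # a motion command for a (simulated) wheelchair or virtual avatar. Real systems add
        # feature extraction, confidence thresholds and safety checks; all names here are
        # placeholders.

        COMMANDS = {
            "imagine_left_hand": "turn_left",
            "imagine_right_hand": "turn_right",
            "rest": "stop",
        }

        def placeholder_classifier(eeg_window):
            # Stand-in for a trained classifier operating on EEG features.
            return random.choice(list(COMMANDS))

        def navigation_step(eeg_window):
            mental_command = placeholder_classifier(eeg_window)
            return COMMANDS[mental_command]

        if __name__ == "__main__":
            random.seed(0)
            for step in range(5):
                print(step, navigation_step(eeg_window=None))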

    Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)

    This paper discusses the authors’ approach to designing an interface for the Gabii Project’s digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence “on the ground,” notably SfM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SfM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between those choices and the broader discourse in archaeological theory surrounding the practice of the creation and consumption of archaeological knowledge. As a case in point we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House. In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically-situated user interface design, and their implications for the publication and study of archaeological materials.

    For efficient navigational search, humans require full physical movement but not a rich visual scene

    During navigation, humans combine visual information from their surroundings with body-based information from the translational and rotational components of movement. Theories of navigation focus on the role of visual and rotational body-based information, even though experimental evidence shows these are not sufficient for complex spatial tasks. To investigate the contribution of all three sources of information, we asked participants to search a computer-generated “virtual” room for targets. Participants were provided either with visual information only, or with visual information supplemented by body-based information for all movement (walk group) or for rotational movement only (rotate group). The walk group performed the task with near-perfect efficiency, irrespective of whether a rich or an impoverished visual scene was provided. The visual-only and rotate groups were significantly less efficient, and frequently searched parts of the room at least twice. This suggests that full physical movement plays a critical role in navigational search, but that only moderate visual detail is required.