
    The Effects of Finger-Walking in Place (FWIP) on Spatial Knowledge Acquisition in Virtual Environments

    Spatial knowledge, necessary for efficient navigation, comprises route knowledge (memory of landmarks along a route) and survey knowledge (an overall, map-like representation). Virtual environments (VEs) have been suggested as a powerful tool for understanding some issues associated with human navigation, such as spatial knowledge acquisition. The Finger-Walking-in-Place (FWIP) interaction technique is a locomotion technique for navigation tasks in immersive virtual environments (IVEs). FWIP was designed to map a human's embodied ability, overlearned through natural walking, to a finger-based interaction technique for navigation. Its implementation on Lemur and iPhone/iPod Touch devices was evaluated in our previous studies. In this paper, we present a comparative study of the joystick's flying technique versus FWIP. Our experimental results show that FWIP yields better performance than the joystick's flying technique for route knowledge acquisition in our maze navigation tasks.
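    The core idea of a walking-in-place mapping like FWIP can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class and parameter names (`FingerWalker`, `step_scale`) are assumptions, and the paper's actual gesture recognition and scaling are not specified in the abstract.

```python
import math

# Hypothetical sketch: finger displacement on a touch surface is translated
# into viewpoint translation on the maze floor, so finger "strides" stand in
# for leg strides. All names and constants here are illustrative assumptions.
class FingerWalker:
    def __init__(self, step_scale=0.01):
        self.step_scale = step_scale   # metres of virtual travel per pixel of finger travel
        self.position = [0.0, 0.0]     # viewpoint position on the maze floor (x, z)

    def on_finger_step(self, dx_px, dy_px, heading_rad):
        """Advance the viewpoint along the current heading, with the virtual
        stride length proportional to the finger's stride on the surface."""
        stride = math.hypot(dx_px, dy_px) * self.step_scale
        self.position[0] += stride * math.sin(heading_rad)
        self.position[1] += stride * math.cos(heading_rad)
        return tuple(self.position)
```

    A joystick flying technique, by contrast, would integrate a held deflection over time rather than discrete stride events, which is one plausible reason the two techniques engage spatial memory differently.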

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without relying on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition application, a set of gestures processed by a scripting application, and a navigation and selection application controlled by those gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
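    The pipeline described above (vision recogniser → scripting layer → controlled application, with audio rather than GUI feedback) can be sketched as a simple dispatch table. This is a hedged illustration only; the gesture names, cue names, and API below are invented, not the authors'.

```python
# Illustrative sketch of the scripting/dispatch stage: recognised gesture
# names are routed to scripted actions, and an audio cue name is returned
# in place of any on-screen feedback. All names here are assumptions.
AUDIO_CUES = {"next": "blip", "previous": "blip", "select": "chime"}

def make_dispatcher(actions):
    """Return a callback for the vision recogniser: look up the gesture,
    run its scripted action, and report which audio cue to play."""
    def dispatch(gesture_name):
        action = actions.get(gesture_name)
        if action is None:
            return None  # unrecognised gesture: do nothing, stay silent
        action()
        return AUDIO_CUES.get(gesture_name, "click")
    return dispatch
```

    Keeping the mapping in a scripting layer, as the abstract describes, means new gestures or target applications can be wired in without touching the recogniser.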

    Wearable and mobile devices

    Information and Communication Technologies, known as ICT, have undergone dramatic changes in the last 25 years. The 1980s was the decade of the Personal Computer (PC), which brought computing into the home and, in an educational setting, into the classroom. The 1990s gave us the World Wide Web (the Web), building on the infrastructure of the Internet, which has revolutionized the availability and delivery of information. In the midst of this information revolution, we are now confronted with a third wave of novel technologies (i.e., mobile and wearable computing), where computing devices are already becoming small enough that we can carry them around at all times and, in addition, can interact with devices embedded in the environment. The development of wearable technology is perhaps a logical product of the convergence between the miniaturization of microchips (nanotechnology) and an increasing interest in pervasive computing, where mobility is the main objective. The miniaturization of computers is largely due to the decreasing size of semiconductors and switches; molecular manufacturing will allow for “not only molecular-scale switches but also nanoscale motors, pumps, pipes, machinery that could mimic skin” (Page, 2003, p. 2). This shift in the size of computers has obvious implications for human-computer interaction, introducing the next generation of interfaces. Neil Gershenfeld, the director of the Media Lab’s Physics and Media Group, argues, “The world is becoming the interface. Computers as distinguishable devices will disappear as the objects themselves become the means we use to interact with both the physical and the virtual worlds” (Page, 2003, p. 3). Ultimately, this will lead to a move away from desktop user interfaces and toward mobile interfaces and pervasive computing.

    Empirical Comparisons of Virtual Environment Displays

    There are many different visual display devices used in virtual environment (VE) systems. These displays vary along many dimensions, such as resolution, field of view, level of immersion, quality of stereo, and so on. In general, no guidelines exist for choosing an appropriate display for a particular VE application. Our goal in this work is to develop such guidelines on the basis of empirical results. We present two initial experiments comparing head-mounted displays with a workbench display and a four-sided spatially immersive display. The results indicate that the physical characteristics of the displays, users' prior experiences, and even the order in which the displays are presented can have significant effects on performance.

    Human factors consideration in the interaction process with virtual environment

    New requirements are placed by industry on computer-aided design (CAD) data. Techniques of CAD data management, together with current computing capabilities, enable the extraction of a virtual mock-up for interactive use. CAD data may also be distributed and shared by different designers in various parts of the world (within the same company and with subcontractors). The use of the digital mock-up is not limited to the mechanical design of the product but extends to as many trades in industry as possible. One of the main issues is enabling the evaluation of the product without any physical representation, based solely on its virtual representation. To that end, most of the main industrial actors use virtual reality technologies. These technologies consist essentially of enabling the designer to perceive the product during the design process. This perception has to be rendered so as to guarantee that the evaluation process is carried out as in real conditions. The perception is the fruit of an alchemy between the user and the VR technologies. Thus, in the experiment design, the whole human-VR-technology system has to be considered.

    MetaSpace II: Object and full-body tracking for interaction and navigation in social VR

    MetaSpace II (MS2) is a social Virtual Reality (VR) system where multiple users can not only see and hear but also interact with each other, grasp and manipulate objects, walk around in space, and get tactile feedback. MS2 allows walking in physical space by tracking each user's skeleton in real time, and lets users feel objects by employing passive haptics, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. To enable these elements in VR, MS2 creates a correspondence in spatial layout and object placement by building the virtual world on top of a 3D scan of the real world. Through this association between the real and virtual worlds, users are able to walk freely while wearing a head-mounted device, avoid obstacles like walls and furniture, and interact with people and objects. Most current VR environments are designed for a single-user experience in which interactions with virtual objects are mediated by hand-held input devices or hand gestures, and users are shown only a representation of their hands floating in front of the camera, as seen from a first-person perspective. We believe that representing each user as a full-body avatar controlled by the natural movements of the person in the real world (see Figure 1d) can greatly enhance believability and a user's sense of immersion in VR.
    Comment: 10 pages, 9 figures. Video: http://living.media.mit.edu/projects/metaspace-ii
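    The passive-haptics approach above depends on one property: because the virtual world is built on a 3D scan of the room, a single rigid transform carries any tracked point from room coordinates into the virtual scene, so virtual and physical surfaces coincide. A minimal sketch of that correspondence, with invented function names and transform values:

```python
import math

# Hedged sketch, not the MS2 implementation: a rigid transform (rotation
# about the vertical axis plus a translation) taking a tracked (x, y, z)
# skeleton point in room space into virtual-world space. If the scan is
# registered correctly, a physical tabletop and its virtual twin coincide.
def make_room_to_virtual(yaw_rad, offset):
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    def transform(p):
        x, y, z = p
        return (c * x + s * z + offset[0],   # rotate in the horizontal plane,
                y + offset[1],               # keep height,
                -s * x + c * z + offset[2])  # then translate into scene coords
    return transform
```

    Applying the same transform to every tracked joint is what lets a full-body avatar occupy the correct place in the shared virtual scene.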

    Control of virtual environments for young people with learning difficulties

    Purpose: The objective of this research is to identify the requirements for the selection or development of usable virtual environment (VE) interface devices for young people with learning disabilities. Method: A user-centred design methodology was employed to produce a design specification for usable VE interface devices. Details of the users' cognitive, physical and perceptual abilities were obtained through observation and normative assessment tests. Conclusions: A review of computer interface technology, including virtual reality and assistive devices, was conducted. As no devices were identified that met all the requirements of the design specification, it was concluded that there is a need for the design and development of new concepts. Future research will involve concept and prototype development and user-based evaluation of the prototypes.