23,388 research outputs found

    The display of electronic commerce within virtual environments

    In today’s competitive business environment, most companies are expected to be represented on the Internet in the form of an electronic commerce site. In the effort to keep up with current business trends, aspects of interface design such as those related to navigation and perception may be overlooked: the manner in which a visitor perceives the information displayed, or the ease with which they navigate through the site, may not be taken into consideration. This paper reports on an evaluation of the electronic commerce sites of three different companies, focusing specifically on human factors issues such as perception and navigation. Heuristic evaluation, the most popular method for investigating user interface design, is the technique employed to assess each of these sites. In light of the results from the analysis of the evaluation data, virtual environments are suggested as a way of easing the display constraints on navigation and perception.
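
    The paper applies heuristic evaluation rather than any algorithm, but the bookkeeping behind such an evaluation is easy to sketch. The snippet below is a minimal illustration, assuming a made-up set of sites, heuristic labels and 0-4 severity ratings (none of which come from the paper), of how evaluators' findings might be tallied per site and heuristic.

        # Minimal sketch of aggregating heuristic-evaluation findings.
        # Sites, heuristic labels and severity ratings are illustrative only.
        from collections import defaultdict

        findings = [
            # (site, heuristic, severity 0-4)
            ("site_a", "navigation", 3),
            ("site_a", "perception", 2),
            ("site_b", "navigation", 4),
            ("site_c", "perception", 1),
        ]

        def summarise(findings):
            """Count problems and average severity per (site, heuristic) pair."""
            grouped = defaultdict(list)
            for site, heuristic, severity in findings:
                grouped[(site, heuristic)].append(severity)
            return {key: (len(sev), sum(sev) / len(sev)) for key, sev in grouped.items()}

        for (site, heuristic), (count, mean_sev) in sorted(summarise(findings).items()):
            print(f"{site:7s} {heuristic:12s} problems={count} mean severity={mean_sev:.1f}")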

    Evaluating distributed cognitive resources for wayfinding in a desktop virtual environment.

    As 3D interfaces, and in particular virtual environments, become increasingly realistic, there is a need to investigate the location and configuration of information resources, as distributed in the human-computer system, to support any required activities. It is important for the designer of 3D interfaces to be aware of information resource availability and distribution when considering issues such as the cognitive load on the user. This paper explores how a model of distributed resources can support the design of alternative aids to virtual environment wayfinding with varying levels of cognitive load. The wayfinding aids have been implemented and evaluated in a desktop virtual environment.
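
    The abstract does not specify the resource model's notation, so the following is only a rough illustration of the underlying idea: describing each wayfinding aid by whether its information resources sit with the user or with the interface, and using that split as a crude proxy for cognitive load. The aid names and resource labels are hypothetical.

        # Illustrative sketch: wayfinding aids described by how their information
        # resources are distributed between the user and the interface.
        from dataclasses import dataclass

        @dataclass
        class WayfindingAid:
            name: str
            internal_resources: list   # held in the user's memory
            external_resources: list   # displayed by the interface

            def estimated_load(self) -> float:
                """Crude proxy: share of resources the user must hold internally."""
                total = len(self.internal_resources) + len(self.external_resources)
                return len(self.internal_resources) / total if total else 0.0

        aids = [
            WayfindingAid("memorised route", ["route plan", "landmark order"], []),
            WayfindingAid("map with position marker", ["current goal"], ["map", "position"]),
            WayfindingAid("guided trail", [], ["trail markers", "direction arrows"]),
        ]

        for aid in sorted(aids, key=WayfindingAid.estimated_load):
            print(f"{aid.name:26s} estimated internal load: {aid.estimated_load():.2f}")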

    A Content-Analysis Approach for Exploring Usability Problems in a Collaborative Virtual Environment

    As Virtual Reality (VR) products become more widely available in the consumer market, improving the usability of these devices and environments is crucial. In this paper we introduce a framework for the usability evaluation of collaborative 3D virtual environments, based on a large-scale usability study of a mixed-modality collaborative VR system. We first review previous literature on important usability issues related to collaborative 3D virtual environments, supplemented with our own research, in which we conducted 122 interviews after participants solved a collaborative virtual reality task. Then, building on the literature review and our results, we extend previous usability frameworks. We identified twelve distinct usability problems and, based on their causes, grouped them into three main categories: VR environment-, device interaction-, and task-specific problems. The framework can be used to guide the usability evaluation of collaborative VR environments.
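
    As a concrete (and entirely hypothetical) illustration of the framework's shape, coded usability problems from interview transcripts can be mapped onto the three top-level categories named in the abstract and counted. The problem labels below are placeholders, not the twelve problems identified in the study.

        # Sketch of content-analysis bookkeeping: mapping coded usability problems
        # to the three categories from the abstract and counting their frequency.
        from collections import Counter

        CATEGORY_OF_PROBLEM = {
            "locomotion discomfort": "VR environment",
            "avatar occlusion": "VR environment",
            "controller mapping confusion": "device interaction",
            "headset fit issues": "device interaction",
            "unclear task goal": "task-specific",
            "unequal workload between partners": "task-specific",
        }

        def categorise(coded_problems):
            """Count how often each top-level category appears in the coded data."""
            return Counter(CATEGORY_OF_PROBLEM.get(p, "uncategorised") for p in coded_problems)

        # Problems coded from a handful of (invented) interview excerpts.
        observed = ["controller mapping confusion", "unclear task goal",
                    "locomotion discomfort", "controller mapping confusion"]
        print(categorise(observed))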

    Creating and testing urban virtual reality models for engineering applications

    Virtual Reality interaction methods can provide a better understanding of 3D virtual models of urban environments. These methods can be used to accommodate different civil engineering applications, such as facilities management and construction progress monitoring. However, the users of these applications may have severe problems in exploring and navigating the large virtual environments to accomplish specific tasks. Properly designed navigation methods are critical for using these applications efficiently. In this research, a literature survey is first conducted of existing navigation and visualization methods in order to identify those most suitable and practical for engineering applications. Based on this survey, a taxonomy of navigation methods and support tools for engineering applications in urban virtual environments is developed. In addition, a framework for virtual reality applications in civil engineering is proposed. The framework includes a practical method for creating virtual urban models and several interaction and navigation methods. Furthermore, a new method is proposed for usability testing of the navigation supports in these models. The proposed approach is demonstrated through three case studies for desktop, indoor and outdoor applications. The results of our usability study show that navigation supports allow users to navigate the virtual environment more efficiently.
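
    The abstract does not detail the usability metrics, so the snippet below is only a guess at the simplest comparison such a test implies: task completion times with and without the navigation supports. All timings are invented.

        # Hypothetical comparison of task completion times with and without
        # navigation supports; numbers are placeholders, not the study's data.
        from statistics import mean

        completion_times = {
            "with navigation supports": [41.2, 38.5, 44.0, 36.8],     # seconds
            "without navigation supports": [63.1, 58.7, 71.4, 66.0],  # seconds
        }

        for condition, times in completion_times.items():
            print(f"{condition:28s} mean completion time: {mean(times):.1f} s")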

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants to perform a navigational search task (finding targets hidden inside boxes in a room-sized space). When participants physically walked around the VE while viewing it on a head-mounted display (HMD), they performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed less than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and that participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.
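
    The headline measure is the proportion of navigational-search trials performed perfectly in each interface condition. The sketch below recomputes that kind of percentage from invented per-trial outcomes; it is not the study's data, and the condition labels are illustrative.

        # Percentage of trials completed perfectly per interface condition,
        # from hypothetical per-trial success flags.
        trial_outcomes = {
            "walking + HMD": [True, True, True, True, True, True, True, True, True, False],
            "tethered HMD": [True, False, True, False, False, True, False, False, True, False],
            "desktop": [False, True, False, False, True, False, False, False, True, False],
        }

        def percent_perfect(outcomes):
            """Share of trials in which the search was completed without error."""
            return 100.0 * sum(outcomes) / len(outcomes)

        for condition, outcomes in trial_outcomes.items():
            print(f"{condition:14s} {percent_perfect(outcomes):.0f}% of trials perfect")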

    Are tiled display walls needed for astronomy?

    Clustering commodity displays into a Tiled Display Wall (TDW) provides a cost-effective way to create an extremely high resolution display, capable of approaching the image sizes now generated by modern astronomical instruments. Astronomers face the challenge of inspecting single large images, many similar images simultaneously, and heterogeneous but related content. Many research institutions have constructed TDWs on the basis that they will improve the scientific outcomes of astronomical imagery. We test this concept by presenting sample images to astronomers and non-astronomers using a standard desktop display (SDD) and a TDW. These samples include standard English words, wide-field galaxy surveys and nebulae mosaics from the Hubble telescope. The experiments show that TDWs provide a better environment than SDDs for searching for small targets in large images. They also show that astronomers tend to be better at searching images for targets than non-astronomers, that both groups are generally better when employing physical navigation as opposed to virtual navigation, and that the combination of two non-astronomers using a TDW rivals the experience of a single astronomer. However, there is also a large distribution in aptitude amongst the participants, and the nature of the content plays a significant role in success. (19 pages, 15 figures; accepted for publication in Publications of the Astronomical Society of Australia.)
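
    The quantity behind these comparisons is how long each group takes to locate small targets under each display and navigation condition. The snippet below shows that comparison in miniature with made-up timings; the conditions and values are illustrative, not the paper's results.

        # Hypothetical mean target-search times per display/navigation condition.
        from statistics import mean

        search_times = {
            ("TDW", "physical navigation"): [21.0, 18.4, 25.2],  # seconds
            ("TDW", "virtual navigation"): [27.5, 30.1, 26.8],
            ("SDD", "virtual navigation"): [38.9, 41.7, 35.2],
        }

        for (display, navigation), times in search_times.items():
            print(f"{display} / {navigation:19s} mean search time: {mean(times):.1f} s")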

    The Effect of Environmental Features, Self-Avatar, and Immersion on Object Location Memory in Virtual Environments

    One potential application for virtual environments (VEs) is the training of spatial knowledge. A critical question is which features a VE should have in order to facilitate this training. Previous research has shown that people rely on environmental features, such as sockets and wall decorations, when learning object locations. The aim of this study is to explore the effect of varied environmental feature fidelity of VEs, the use of self-avatars, and the level of immersion on object location learning and recall. Following a between-subjects experimental design, participants were asked to learn the location of three identical objects by navigating one of three environments: a physical laboratory or low- and high-detail VE replicas of this laboratory. Participants who experienced the VEs used either a head-mounted display (HMD) or a desktop computer. Half of the participants learning in the HMD and desktop systems were assigned a virtual body. Participants were then asked to place physical versions of the three objects in the physical laboratory in the same configuration. We tracked participant movement, measured object placement, and administered a questionnaire related to aspects of the experience. HMD learning resulted in statistically significantly higher performance than desktop learning. Results indicate that, when learning in low-detail VEs, there is no difference in performance between participants using HMD and desktop systems. Overall, providing the participant with a virtual body had a negative impact on performance. Preliminary inspection of navigation data indicates that spatial learning strategies differ across systems with varying levels of immersion.
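
    The placement measurement in a study like this reduces to the distance between where each object was learned and where it was later placed. A minimal sketch, with invented 2D room coordinates rather than the study's tracked data:

        # Euclidean placement error between learned and recalled object positions.
        import math

        learned = {"object_1": (1.0, 0.5), "object_2": (3.2, 2.1), "object_3": (0.4, 3.0)}
        placed = {"object_1": (1.3, 0.4), "object_2": (2.8, 2.6), "object_3": (0.9, 2.4)}

        errors = {name: math.dist(learned[name], placed[name]) for name in learned}
        for name, err in errors.items():
            print(f"{name}: placement error {err:.2f} m")
        print(f"mean placement error: {sum(errors.values()) / len(errors):.2f} m")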

    Modelling virtual urban environments

    In this paper, we explore the way in which virtual reality (VR) systems are being broadened to encompass a wide array of virtual worlds, many of which have immediate applicability to understanding urban issues through geocomputation. We sketch distinctions between immersive, semi-immersive and remote environments in which single and multiple users interact in a variety of ways. We show how such environments might be modelled in terms of ways of navigating within them, processes of decision-making which link users to one another, analytic functions that users have to make sense of the environment, and functions through which users can manipulate, change, or design their world. We illustrate these ideas using four exemplars that we have under construction: a multi-user internet GIS for London with extensive links to 3-d, video, text and related media; an exploration of optimal retail location using a semi-immersive visualisation in which experts can explore such problems; a virtual urban world in which remote users as avatars can manipulate urban designs; and an approach to simulating such virtual worlds through morphological modelling based on the digital record of the entire decision-making process through which such worlds are built.
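
    The four modelling dimensions named above lend themselves to a simple record per environment. The sketch below is only a schematic reading of the abstract; the example entries are invented, not features of the authors' systems.

        # Schematic record of a virtual urban environment along the four
        # dimensions the abstract names; entries are illustrative placeholders.
        from dataclasses import dataclass, field

        @dataclass
        class VirtualUrbanEnvironment:
            name: str
            navigation: list = field(default_factory=list)        # ways of moving within
            decision_links: list = field(default_factory=list)    # how users are linked
            analytic_functions: list = field(default_factory=list)
            manipulation_functions: list = field(default_factory=list)

        london_gis = VirtualUrbanEnvironment(
            name="multi-user internet GIS for London",
            navigation=["pan and zoom", "linked 3-d views"],
            decision_links=["shared annotations"],
            analytic_functions=["query map layers"],
            manipulation_functions=["attach media to locations"],
        )
        print(london_gis)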

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without relying on a graphical user interface and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision-recognition application, a scripting application that processes a set of gestures, and a navigation and selection application that is controlled by those gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
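
    The three-part architecture described above (vision recogniser, scripting layer, controlled application) maps naturally onto a small dispatch loop. The sketch below is an assumption-laden illustration, not the Ambient Gestures implementation: the gesture names, actions and audio cues are all hypothetical.

        # Illustrative gesture-to-action dispatch with audio feedback, mirroring
        # the recogniser -> scripting -> application pipeline in the abstract.
        from typing import Optional

        GESTURE_ACTIONS = {
            "swipe_left": "previous item",
            "swipe_right": "next item",
            "push_forward": "select item",
            "open_palm": "pause",
        }

        def audio_feedback(message: str) -> None:
            # Stand-in for the system's audio cue; here it is simply printed.
            print(f"[audio] {message}")

        def handle_gesture(gesture: str) -> Optional[str]:
            """Scripting layer: translate a recognised gesture into an action."""
            action = GESTURE_ACTIONS.get(gesture)
            if action is None:
                audio_feedback("gesture not recognised")
                return None
            audio_feedback(action)
            return action

        # Simulated stream of gestures from the vision-recognition component.
        for gesture in ["swipe_right", "push_forward", "wave"]:
            handle_gesture(gesture)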