7 research outputs found

    Three levels of metric for evaluating wayfinding

    Three levels of virtual environment (VE) metric are proposed, based on: (1) users’ task performance (time taken, distance traveled and number of errors made), (2) physical behavior (locomotion, looking around, and time and error classification), and (3) decision making (i.e., cognitive) rationale (think aloud, interview and questionnaire). Examples of the use of these metrics are drawn from a detailed review of research into VE wayfinding. A case study from research into the fidelity that is required for efficient VE wayfinding is presented, showing the unsuitability in some circumstances of common metrics of task performance such as time and distance, and the benefits to be gained by making fine-grained analyses of users’ behavior. Taken as a whole, the article highlights the range of techniques that have been successfully used to evaluate wayfinding and explains in detail how some of these techniques may be applied.
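The level-1 (task-performance) metrics named above are simple to derive from a logged trial. A minimal sketch in Python; the function name and log format are assumptions for illustration, not the article's code:

```python
import math

def task_performance(path, timestamps, errors):
    """Level-1 wayfinding metrics from one logged trial.

    path       -- list of (x, y) viewpoint positions, in order
    timestamps -- matching list of times in seconds
    errors     -- number of wayfinding errors counted by the experimenter
    """
    time_taken = timestamps[-1] - timestamps[0]
    # Distance travelled is the sum of segment lengths along the path.
    distance = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return {"time": time_taken, "distance": distance, "errors": errors}
```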

    Designing virtual environments for brain injury rehabilitation

    Virtual Reality (VR) has shown great potential in various training applications. In the field of cognitive rehabilitation it has been shown that VR technology can become a useful complement to conventional rehabilitation techniques (e.g. Rizzo et al. (2002), Brown et al. (2002) and Kizony et al. (2002)). An important part of a brain injury patient’s rehabilitation process is practicing instrumental activities of daily living (IADL), such as preparing meals, cleaning, shopping and using a telephone. A pilot study by Lindén et al. (2000) came to the conclusion that activities like these can be practiced using desktop VR. The question addressed in this thesis is how a Virtual Environment (VE) should be designed to be a usable tool in brain injury rehabilitation. The thesis consists of three papers that describe three different studies that have been performed in order to further explore this area of research. Paper I describes the design of a practical VE application in the shape of a cash dispenser. A paper prototype was constructed which was first used to generate ideas from three occupational therapists. The prototype was then tested on six people with little or moderate computer knowledge and no experience of 3D computer simulations. The results from the evaluation were then used to implement a computer prototype with the VR development tool World Up. The computer prototype had automatic navigation, which meant that the position and orientation of the viewpoint, the user’s view into the VE, were controlled by the computer. The point-and-click method, which allows the user to move and manipulate objects with single mouse clicks, was used for interaction with objects. The computer prototype was then tested on five brain injury patients. The results of this evaluation are not included in paper I but are described in the thesis summary. Overall, all five subjects learned to handle the computer prototype sufficiently well. 
However, the interaction with objects posed some problems for them. For example, they initially tried to move the bankcard with drag-and-drop instead of point-and-click. Three subjects also pointed out that some parts of the VE, for example the display and the keypad, were unclear. All five subjects showed a positive attitude to the virtual cash dispenser. The aim of paper II was to find a usable navigation input device for people with no experience of 3D computer graphics. After an initial discussion about various input devices it was decided that a Microsoft Sidewinder joystick and an IntelliKeys keyboard, both programmed with two and three degrees of freedom (DOF), should be compared in an experiment. Sixty able-bodied people with no experience of 3D computer graphics were divided into four groups. Each group was to perform a navigation task in a VE consisting of a kitchen and a corridor using one of the four input devices. The navigation task was designed to evaluate both fine adjustments of the viewpoint (maneuvering task) and transportation of the viewpoint from one location to another (search task). Each subject performed the task five times in a row and then answered a questionnaire consisting of five questions. Data logging and video recording were used to collect data. The study showed that both keyboard and joystick have their advantages and disadvantages. The keyboard seemed to be easier to control than the joystick for the maneuvering task. The keyboard was also slightly easier to control for the search task but was much slower than the joystick, which might make it an inconvenient input device for VEs that only involve search navigation. No significant difference could be found between two and three DOFs for the maneuvering task, but the 3rd DOF (sideways movement) seemed to facilitate the subjects’ navigation in some situations. Two DOFs were found to be slightly easier to control than three DOFs for the search task. 
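The two- versus three-DOF distinction in paper II can be made concrete with a small sketch: forward/backward motion and rotation are the two basic DOFs, and sideways (strafing) movement is the optional third. The function name, step sizes, and input convention below are assumptions for illustration, not the thesis's code:

```python
import math

def move_viewpoint(x, y, heading, forward, rotate, sideways=0.0,
                   step=0.1, turn=0.05):
    """One update of a desktop-VE viewpoint.

    forward and rotate are the two basic DOFs; sideways is the optional
    third DOF compared in paper II. All inputs are in [-1, 1].
    """
    heading += rotate * turn
    # Translate in the viewpoint's own frame: forward along the heading,
    # sideways perpendicular to it.
    x += step * (forward * math.cos(heading) - sideways * math.sin(heading))
    y += step * (forward * math.sin(heading) + sideways * math.cos(heading))
    return x, y, heading
```

With `sideways` fixed at 0 this is exactly the two-DOF mapping; passing a nonzero value gives the three-DOF variant.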
The study described in paper III aimed at 1) evaluating a method for interaction with objects in VEs on people with no 3D computer graphics experience, and 2) finding a sufficiently usable input device for this purpose. After an initial discussion of possible methods for interaction with objects and various input devices, an experiment was conducted with 20 able-bodied people with no experience of 3D computer graphics. Our experiences of point-and-click from paper I and the pilot study (Lindén et al., 2000) made us think that people may have a more inherent understanding of drag-and-drop. Also, we had discussed using a virtual hand for carrying objects to simplify object movement. We therefore wanted to evaluate the following method for interaction with objects: 1) a virtual hand was used for carrying objects, 2) drag-and-drop was used for moving objects, 3) a single click was used for activating objects, and 4) objects were given a proper orientation automatically. Ten subjects used a regular desktop mouse and the other ten a touch screen to perform four interaction tasks in a kitchen VE five times in a row. Video recording was used to document the trial and the interview that was conducted afterwards. Broadly, the method for interaction with objects worked well. The majority of the subjects used the virtual hand for carrying objects. However, the fact that some subjects needed information before they started to use it indicates that its visibility and affordance need to be improved. Opening and closing cupboard doors caused some problems, especially for the subjects in the touch screen group who tried to open them with drag-and-drop in a manner that resembled reality. Apart from the problem with the cupboard doors, no large difference in performance could be seen between the mouse group and the touch screen group. 
The three studies described in this thesis are a step towards understanding how a VE should be designed in order to be a usable tool for people with brain injury. In particular, knowledge of how to make it as easy as possible for the user to navigate the viewpoint and interact with objects has been gained. The work has also provided a deeper understanding of the effects the choice of input device has on the usability of a VE.

    In Search of the ‘Magic Carpet’: Design and Experimentation of a Bimanual 3D Navigation Interface

    Hardware and software advances are making real time 3D graphics part of all mainstream computers. World-Wide Web sites encoded in Virtual Reality Modeling Language or other formats allow users across the Internet to share virtual 3D “worlds”. As the supporting software and hardware become increasingly powerful, the usability of the current 3D navigation interfaces becomes the limiting factor to the widespread application of 3D technologies. In this paper, we analyze the human factors issues in designing a usable navigation interface, such as interface metaphor, integration and separation of multiple degrees of freedom, mode switching, isotonic versus isometric control, seamless merger of the 3D navigation devices with the GUI pointing and scrolling devices, and two-handed input. We propose a dual joystick navigation interface design based on a real world metaphor (bulldozer), and present an experimental evaluation. Results showed that the proposed bulldozer interface outperformed the status quo mouse-mapping interface in maze-travelling and free-flying tasks by 25% to 50%. Limitations of and possible future improvements to the bulldozer interface are also presented.
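The bulldozer metaphor maps each joystick to one virtual tread: equal deflection of both sticks drives the viewpoint forward, and opposite deflection turns it in place. A minimal sketch of that mapping, where the function name and gain constants are assumptions rather than the paper's implementation:

```python
import math

def bulldozer_step(x, y, heading, left, right, speed=1.0, turn_rate=0.5):
    """Advance a viewpoint one step from two stick deflections in [-1, 1].

    left/right are the forward-axis values of the two joysticks (treads).
    """
    forward = (left + right) / 2.0     # common mode -> translation
    turn = (right - left) * turn_rate  # differential mode -> rotation
    heading += turn
    x += speed * forward * math.cos(heading)
    y += speed * forward * math.sin(heading)
    return x, y, heading
```

Pushing both sticks forward translates without rotating; pushing them in opposite directions rotates without translating, just as with real bulldozer treads.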

    Quantitative analysis of computer interaction movements


    A Symmetric Interaction Model for Bimanual Input

    People use both their hands together cooperatively in many everyday activities. The modern computer interface fails to take advantage of this basic human ability, with the exception of the keyboard. However, the keyboard is limited in that it does not afford continuous spatial input. The computer mouse is perfectly suited for the point-and-click tasks that are the major method of manipulation within graphical user interfaces, but standard computers have a single mouse. A single mouse does not afford spatial coordination between the two hands within the graphical user interface. Although the advent of the Universal Serial Bus has made it possible to easily plug in many peripheral devices, including a second mouse, modern operating systems work on the assumption of a single spatial input stream. Thus, if a second mouse is plugged into a Macintosh computer, a Windows computer or a UNIX computer, the two mice control the same cursor. Previous work in two-handed or bimanual interaction techniques has often followed the asymmetric interaction guidelines set out by Yves Guiard's Kinematic Chain Model. In asymmetric interaction, the hands are assigned different tasks, based on hand dominance. I show that there is an interesting class of desktop user interface tasks which can be classified as symmetric. A symmetric task is one in which the two hands contribute equally to the completion of a unified task. I show that dual-mouse symmetric interaction techniques outperform traditional single-mouse techniques as well as dual-mouse asymmetric techniques for these symmetric tasks. I also show that users prefer the symmetric interaction techniques for these naturally symmetric tasks.
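A symmetric task of the kind described, for example defining a selection rectangle with one cursor in each hand, can be sketched as follows; the function and cursor representation are assumptions for illustration, not the thesis's code:

```python
def rectangle_from_cursors(cursor_left, cursor_right):
    """Build an axis-aligned rectangle pinned by two independent cursors.

    Each hand's cursor holds one corner; both contribute equally, which is
    what makes the task symmetric. Returns (left, top, width, height).
    """
    (x1, y1), (x2, y2) = cursor_left, cursor_right
    left, right = min(x1, x2), max(x1, x2)
    top, bottom = min(y1, y2), max(y1, y2)
    return left, top, right - left, bottom - top
```

With a single mouse the same rectangle requires sequential corner placement; with two cursors both corners can be repositioned simultaneously.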