
    Usability Analysis of 3D Rotation Techniques

    We report results from a formal user study of interactive 3D rotation using the mouse-driven Virtual Sphere and Arcball techniques, as well as multidimensional input techniques based on magnetic orientation sensors. Multidimensional input is often assumed to allow users to work quickly, but at the cost of precision, due to the instability of the hand moving in the open air. We show that, at least for the orientation matching task used in this experiment, users can take advantage of the integrated degrees of freedom provided by multidimensional input without necessarily sacrificing precision: using multidimensional input, users completed the experimental task up to 36% faster without any statistically detectable loss of accuracy. We also report detailed observations of common usability problems when first encountering the techniques. Our observations suggest some design issues for 3D input devices. For example, the physical form-factors of the 3D input device significantly influenced us..
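    The abstract names the Arcball technique but does not include its formulation. As an illustrative aside (not taken from the paper), a minimal sketch of the standard Arcball mapping from a mouse drag to a rotation might look like the following; the [-1, 1] screen normalization and the function names are assumptions.

```python
import numpy as np

def to_sphere(x, y):
    """Map a mouse position, normalized to [-1, 1]^2, onto the Arcball sphere."""
    d2 = x * x + y * y
    if d2 <= 1.0:
        return np.array([x, y, np.sqrt(1.0 - d2)])    # point on the unit sphere
    return np.array([x, y, 0.0]) / np.sqrt(d2)        # clamp to the silhouette circle

def arcball_quaternion(p0, p1):
    """Quaternion (w, x, y, z) for a drag from p0 to p1 on the sphere.
    Using the raw dot/cross product rotates by twice the arc angle,
    which is the defining property of Arcball."""
    q = np.concatenate(([np.dot(p0, p1)], np.cross(p0, p1)))
    return q / np.linalg.norm(q)

# Drag from the center of the view toward the upper right
q = arcball_quaternion(to_sphere(0.0, 0.0), to_sphere(0.3, 0.1))
```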

    A survey of design issues in spatial input

    We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces. For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies. Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research. An extended and annotated version of the references list for this paper is available on-line through mosaic at address …
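    To make one of the listed strategies concrete, here is a hedged sketch (not anything drawn from the survey itself) of a software analogue of a physical constraint combined with a spatial reference: free-space tracker input snaps onto a reference plane when the hand comes close to it. The snap distance, units, and function names are assumptions.

```python
import numpy as np

def snap_to_reference_plane(p, plane_point, plane_normal, snap_dist=0.02):
    """Project a tracked 3D position onto a reference plane when it is within
    snap_dist (meters) of the plane; otherwise leave free-space input untouched."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)        # signed distance from the plane
    if abs(d) < snap_dist:
        return p - d * n                  # constrained: slide along the plane
    return p                              # unconstrained free-space motion

# A tabletop at z = 0 serving as the spatial reference for free-space input
p = snap_to_reference_plane(np.array([0.10, 0.25, 0.012]),
                            np.array([0.0, 0.0, 0.0]),
                            np.array([0.0, 0.0, 1.0]))
```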

    Passive Real-World Interface Props for Neurosurgical Visualization

    We claim that physical manipulation of familiar real-world objects in the user's real environment is an important technique for the design of three-dimensional user interfaces. These real-world passive interface props are manipulated by the user to specify spatial relationships between interface objects. By unobtrusively embedding free-space position and orientation trackers within the props, we enable the computer to passively observe a natural user dialog in the real world, rather than forcing the user to engage in a contrived dialog in the computer-generated world. We present neurosurgical planning as a driving application and demonstrate the utility of a head viewing prop, a cutting-plane selection prop, and a trajectory selection prop in this domain. Using passive props in this interface exploits the surgeon's existing skills, provides direct action-task correspondence, eliminates explicit modes for separate tools, facilitates natural two-handed interaction, and provides tactile …
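    The abstract describes the props only at a conceptual level. As a rough sketch of the underlying idea (not the authors' implementation), the cutting-plane prop's tracked pose can be expressed in the head prop's coordinate frame so that the selected slice follows the relative motion of the two hands; the matrix conventions and the choice of the plate's local +Z axis as the plane normal are assumptions.

```python
import numpy as np

def plane_in_head_frame(T_world_head, T_world_plate):
    """Express the plane defined by the cutting-plane prop in the head prop's
    frame.  Each argument is a 4x4 rigid pose (prop frame -> world) reported
    by the embedded position/orientation tracker."""
    T_head_plate = np.linalg.inv(T_world_head) @ T_world_plate
    origin = T_head_plate[:3, 3]      # a point on the cutting plane
    normal = T_head_plate[:3, 2]      # the plate's local +Z axis as the normal
    return origin, normal

# Example: head prop at the origin, plate prop offset 5 cm along x
T_head = np.eye(4)
T_plate = np.eye(4)
T_plate[:3, 3] = [0.05, 0.0, 0.0]
origin, normal = plane_in_head_frame(T_head, T_plate)
```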

    POSITION STATEMENT

    In the effort to develop interaction techniques for virtual environments that are extremely flexible and versatile, manipulation in virtual reality has focused heavily on visual feedback techniques (such as highlighting objects when the selection cursor passes through them) and generic input devices (such as the glove). Such virtual manipulations lack many qualities of physical manipulation of objects in the real world which users might expect or unconsciously depend upon. For example, in the case of selecting a virtual object using a glove, the user must visually attend to the object (watch for it to become highlighted) before selecting it. But what if the user’s attention is needed elsewhere? What if the user is monitoring an animation and is just trying to pick up a tool? We believe that designers of virtual environments can take better advantage of human motor, proprioceptive, and haptic capabilities without necessarily giving up flexibility and versatility. In support of this statement, we present our experiences with two systems: the two-handed props interface for neurosurgical visualization [2] and the Worlds in Miniature (WIM) metaphor [6].
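    As a hedged illustration of the WIM metaphor mentioned above (a sketch under assumed conventions, not code from either system), a pose set by manipulating an object's miniature copy can be mapped back to full scale by reusing the rotation and rescaling the translation relative to the WIM's origin. The 1:20 scale, the axis-aligned WIM, and the function name are assumptions.

```python
import numpy as np

def full_scale_pose(R_mini, t_mini, wim_origin, wim_scale):
    """Worlds-in-Miniature style mapping: keep the rotation chosen on the
    miniature, rescale its translation (relative to the WIM origin) back up
    to world scale.  wim_scale is miniature units per world unit."""
    t_world = (t_mini - wim_origin) / wim_scale
    return R_mini, t_world

# A 1:20 WIM floating 0.4 m in front of the user; the miniature chair is
# dropped 10 cm to the right of the WIM origin, i.e. 2 m at full scale.
R, t = full_scale_pose(np.eye(3),
                       np.array([0.10, 1.20, -0.40]),
                       np.array([0.00, 1.20, -0.40]),
                       1.0 / 20.0)
```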