
    Two-handed polygonal surface design


    Head-tracked stereo viewing with two-handed 3D interaction for animated character construction

    In this paper, we demonstrate a new interactive 3D desktop metaphor based on two-handed 3D direct manipulation registered with head-tracked stereo viewing. In our configuration, a six-degree-of-freedom head tracker and CrystalEyes shutter glasses are used to produce stereo images that dynamically follow the user's head motion. 3D virtual objects can be made to appear at a fixed location in physical space, which the user may view from different angles by moving his head. The user interacts with the simulated 3D environment using both hands simultaneously: the left hand, controlling a Spaceball, is used for 3D navigation and object movement, while the right hand, holding a 3D mouse, is used to manipulate, through a virtual tool metaphor, the objects appearing in front of the screen because of negative parallax. In this way, the system provides both incremental and absolute interactive input techniques. Hand-eye coordination is made possible by registration between virtual and physical space, allowing a variety of complex 3D tasks to be performed more easily and more rapidly than is possible with traditional interactive techniques. The system has been tested using both Polhemus Fastrak and Logitech ultrasonic input devices for tracking the head and 3D mouse.
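    The abstract distinguishes incremental input (the left-hand Spaceball) from absolute, physically registered input (the right-hand 3D mouse). A minimal sketch of that split, assuming hypothetical device-access functions read_spaceball and read_3d_mouse and a placeholder scene object rather than the paper's actual code, could look like this:

    import numpy as np

    def rotation_matrix(rvec):
        # Rodrigues' formula: rotate about axis rvec by angle |rvec|.
        theta = np.linalg.norm(rvec)
        if theta < 1e-9:
            return np.eye(3)
        k = rvec / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def update_frame(scene, read_spaceball, read_3d_mouse, dt):
        # Left hand (Spaceball): incremental deltas integrated over time,
        # used here for navigation of the viewpoint.
        d_trans, d_rot = read_spaceball()                    # two 3-vectors
        scene.camera_position += d_trans * dt
        scene.camera_orientation = scene.camera_orientation @ rotation_matrix(d_rot * dt)

        # Right hand (3D mouse): absolute 6-DOF pose, registered with physical
        # space, drives the virtual tool shown in front of the screen.
        tool_position, tool_orientation = read_3d_mouse()
        scene.tool.set_pose(tool_position, tool_orientation)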

    An evaluation of asymmetric interfaces for bimanual virtual assembly with haptics

    Immersive computing technology provides a human–computer interface to support natural human interaction with digital data and models. One application for this technology is product assembly methods planning and validation. This paper presents the results of a user study which explores the effectiveness of various bimanual interaction device configurations for virtual assembly tasks. Participants completed two assembly tasks with two device configurations in five randomized bimanual treatment conditions (within subjects). A Phantom Omni® with and without haptics enabled and a 5DT Data Glove were used. Participant performance, as measured by time to assemble, was the evaluation metric. The results revealed that there was no significant difference in performance between the five treatment conditions. However, half of the participants chose the 5DT Data Glove and the haptic-enabled Phantom Omni® as their preferred device configuration. In addition, qualitative comments support both a preference for haptics during the assembly process and Guiard's kinematic chain model.

    Un systÚme interactif pour le prototypage virtuel coopératif

    We present in this thesis the study and implementation of an interactive system for the cooperative prototyping of virtual models. This work draws on technologies from several scientific fields; virtual reality lies at the crossroads of many disciplines. Our goal is not to replace a CAD system right now with a system such as the one we propose in this thesis. Indeed, current machine power does not yet allow virtual objects to be managed with an accuracy comparable to that of CAD tools. Our system is intuitive and interactive, but it does not have enough computing power to compete with such tools in precision, and this precision is necessary for industry. This evolution will certainly come, but for the moment it is more reasonable to see virtual reality as a complement to CAD.

    3-D Interfaces for Spatial Construction

    It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer. This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural. Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated. A crucial issue with this interface approach is the tension between specific and generic input devices. Generic devices are the tradition in computing -- versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend -- customized, culturally rich, but to date these systems have been tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain's data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Outside of 3d interfaces, outside of spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios. In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple platonic solids, erase portions of objects, and draw double helices in space. The contributions of this work lie both in a theoretical model of spatial interaction and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interfaces to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to physically connect with digital form.
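    To make the functional-class idea concrete, a minimal sketch of binding one physical tool to an action class while quickly remapping its digital behaviour within that class might read as follows; the names (PhysicalTool, cut_shape, place_solid, draw_helix) are hypothetical illustrations, not the thesis's actual implementation.

    from enum import Enum, auto

    class ActionClass(Enum):
        GRAB = auto()
        POINT = auto()
        HOLD = auto()
        RUB = auto()

    class PhysicalTool:
        def __init__(self, name, action_class, behaviour):
            self.name = name
            self.action_class = action_class
            self.behaviour = behaviour            # current digital function

        def remap(self, new_behaviour):
            # Quick remapping: the same held object takes on a new digital role.
            self.behaviour = new_behaviour

        def apply(self, *args):
            return self.behaviour(*args)

    # Example behaviours for the "hold" class (the handle):
    def cut_shape(target): print(f"cutting {target}")
    def place_solid(position): print(f"placing solid at {position}")
    def draw_helix(start, end): print(f"double helix from {start} to {end}")

    handle = PhysicalTool("handle", ActionClass.HOLD, cut_shape)
    handle.apply("torus")                 # lightsaber-style cutting
    handle.remap(place_solid)             # same device, new digital task
    handle.apply((0.0, 1.0, 0.5))
    handle.remap(draw_helix)
    handle.apply((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))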

    Stereoscopic bimanual interaction for 3D visualization

    Virtual Environments (VEs) have been widely used in various research fields, such as 3D visualization, education, training, and games, for several decades. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies and monocular and binocular depth cues for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation focuses on evaluating the effects of stereoscopic and head-coupled displays on depth judgment of volumetric datasets. It also evaluates a two-handed view manipulation technique which supports simultaneous 7 degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, this dissertation evaluates techniques for auto-adjusting stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, this dissertation provides a guideline for research design for evaluating UIs and interaction techniques.
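    The simultaneous 7-DOF view manipulation mentioned here can be sketched in a generic, simplified form (positions only; hand orientations would be needed for rotation about the inter-hand axis). This is an illustration rather than the dissertation's implementation:

    import numpy as np

    def two_handed_update(prev_left, prev_right, cur_left, cur_right):
        """Derive one frame's translation, rotation, and uniform scale
        from the motion of two tracked hand positions (3-vectors)."""
        prev_mid = 0.5 * (prev_left + prev_right)
        cur_mid = 0.5 * (cur_left + cur_right)
        translation = cur_mid - prev_mid                     # x, y, z

        prev_axis = prev_right - prev_left
        cur_axis = cur_right - cur_left
        scale = np.linalg.norm(cur_axis) / max(np.linalg.norm(prev_axis), 1e-9)

        # Rotation carrying the previous inter-hand direction onto the current one.
        a = prev_axis / max(np.linalg.norm(prev_axis), 1e-9)
        b = cur_axis / max(np.linalg.norm(cur_axis), 1e-9)
        v = np.cross(a, b)
        c = float(np.dot(a, b))
        if np.linalg.norm(v) < 1e-9:
            rotation = np.eye(3)          # axis unchanged (or exactly flipped): skip
        else:
            K = np.array([[0.0, -v[2], v[1]],
                          [v[2], 0.0, -v[0]],
                          [-v[1], v[0], 0.0]])
            rotation = np.eye(3) + K + (K @ K) * (1.0 / (1.0 + c))
        return translation, rotation, scale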

    Predictive text-entry in immersive environments

    Virtual Reality (VR) has progressed significantly since its conception, enabling previously impossible applications such as virtual prototyping, telepresence, and augmented reality. However, text entry remains a difficult problem for immersive environments (Bowman et al., 2001b; Mine et al., 1997). Wearing a head-mounted display (HMD) and datagloves affords a wealth of new interaction techniques, but users no longer have access to traditional input devices such as a keyboard. Although VR allows for more natural interfaces, there is still a need for simple, yet effective, data-entry techniques. Examples include communicating in a collaborative environment, accessing system commands, or leaving an annotation for a designer in an architectural walkthrough (Bowman et al., 2001b). This thesis presents the design, implementation, and evaluation of a predictive text-entry technique for immersive environments which combines 5DT datagloves, a graphically represented keyboard, and a predictive spelling paradigm. It evaluates the fundamental factors affecting the use of such a technique, including keyboard layout, prediction accuracy, gesture recognition, and interaction techniques. Finally, it details the results of user experiments and provides a set of recommendations for the future use of such a technique in immersive environments.
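    As an illustration of the predictive spelling component only (not the thesis's actual prediction engine), a minimal prefix-based predictor over a frequency-ranked word list could look like this:

    from collections import Counter

    class PrefixPredictor:
        def __init__(self, corpus_words):
            self.freq = Counter(corpus_words)

        def suggest(self, prefix, k=3):
            # Rank dictionary words starting with the typed prefix by frequency.
            candidates = [w for w in self.freq if w.startswith(prefix)]
            return sorted(candidates, key=lambda w: -self.freq[w])[:k]

    # After each glove-selected letter, offer the top completions.
    predictor = PrefixPredictor(["the", "the", "there", "this", "three", "that", "that"])
    print(predictor.suggest("th"))        # e.g. ['the', 'that', 'there']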

    NA

    This thesis investigates the application of Human Ability Requirements (HARs) to the problem of two-handed, whole-hand interaction. The methodology is derived from the use of HARs in the field of human performance evaluation. This research is based on the need to understand how humans perform tasks in order to guide the understanding of the requirements of advanced interface technology development. The thesis presents the background for these two areas of research, taxonomies and whole-hand interaction, and goes on to develop a taxonomy and classification of two-handed, whole-hand interaction for the real world and virtual environments. This taxonomy is used to analyze a large number of real-world tasks, to further the development of a series of tests to externally validate the classification, and to analyze the tasks of the 91B Field Medic. The thesis further presents recommendations for how this methodology can be used to develop taxonomies for other areas of human interaction, for how this taxonomy can be used by researchers and practitioners, and for areas of further research related to both topics. http://archive.org/details/twohandwholehand1094532746 Naval Postgraduate School author (civilian). Approved for public release; distribution is unlimited.

    Conceptual free-form styling in virtual environments

    This dissertation introduces tools for designing complete models from scratch directly in a head-tracked, table-like virtual work environment. The models consist of free-form surfaces and are constructed by drawing a network of curves directly in space using a tracked, pen-like input device. Interactive deformation tools for curves and surfaces are proposed, based on variational methods. By aligning the model with the left hand, editing is made possible with the right hand, corresponding to a natural distribution of tasks between both hands. Furthermore, in the emerging field of 3D interaction in virtual environments, particularly with regard to system control, this work uses novel methods to integrate system control tasks, such as tool selection, into the workflow of shape design. The aim of this work is to propose more suitable user interfaces for computer-supported conceptual shape design, a field that lacks adequate support from standard desktop systems.
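    The variational deformation of curves mentioned above can be illustrated in a generic least-squares form; this is only a sketch under an assumed discretization, not the dissertation's actual formulation. It minimizes a discrete bending energy while pinning the points the user has grabbed:

    import numpy as np

    def deform_curve(points, pinned, weight=1000.0):
        """points: (n, 3) polyline; pinned: dict {index: target 3D position}."""
        n = len(points)
        rows, rhs = [], []

        # Fairness term: keep second differences (discrete curvature) small.
        for i in range(1, n - 1):
            r = np.zeros(n)
            r[i - 1], r[i], r[i + 1] = 1.0, -2.0, 1.0
            rows.append(r)
            rhs.append(np.zeros(3))

        # Positional constraints: strongly pull grabbed points to their targets.
        for i, target in pinned.items():
            r = np.zeros(n)
            r[i] = weight
            rows.append(r)
            rhs.append(weight * np.asarray(target, dtype=float))

        A = np.vstack(rows)
        B = np.vstack(rhs)
        solution, *_ = np.linalg.lstsq(A, B, rcond=None)   # solve all 3 coordinates at once
        return solution

    # Example: pin the ends of a 5-point curve and pull its middle point upward.
    curve = np.array([[float(i), 0.0, 0.0] for i in range(5)])
    print(deform_curve(curve, {0: curve[0], 4: curve[4], 2: [2.0, 1.0, 0.0]}))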