
    3D User Interfaces for General-Purpose 3D Animation

    Modern 3D animation systems let a growing number of people generate increasingly sophisticated animated movies, frequently for tutorials or multimedia documents. However, although these tasks are inherently three-dimensional, these systems' user interfaces are still predominantly two-dimensional. This makes it difficult to interactively input complex animated 3D movements. We have developed Virtual Studio, an inexpensive and easy-to-use 3D animation environment in which animators can perform all interaction directly in three dimensions. Animators can use 3D devices to specify complex 3D motions. Virtual tools are visible mediators that provide interaction metaphors to control application objects. An underlying constraint solver lets animators tightly couple application and interface objects. Users define animation by recording the effect of their manipulations on models. Virtual Studio applies data-reduction techniques to generate an editable representation of each animated element that is manipulated.
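    The abstract does not name the data-reduction technique; a common way to turn a densely sampled recorded motion into an editable set of keyframes is Ramer-Douglas-Peucker curve simplification. A minimal sketch in Python with NumPy (the tolerance value is illustrative, not Virtual Studio's implementation):

        import numpy as np

        def rdp(points, epsilon):
            # Ramer-Douglas-Peucker: recursively keep only the samples that
            # deviate more than epsilon from the chord between the endpoints.
            start, end = points[0], points[-1]
            chord = end - start
            norm = np.linalg.norm(chord)
            if norm == 0.0:
                dists = np.linalg.norm(points - start, axis=1)
            else:
                # Perpendicular distance of every sample to the start-end chord.
                dists = np.linalg.norm(np.cross(points - start, chord), axis=1) / norm
            idx = int(np.argmax(dists))
            if dists[idx] > epsilon:
                left = rdp(points[:idx + 1], epsilon)
                right = rdp(points[idx:], epsilon)
                return np.vstack([left[:-1], right])  # drop the duplicated split point
            return np.vstack([start, end])

        # Densely sampled 3D motion standing in for a recorded manipulation.
        t = np.linspace(0.0, 2.0 * np.pi, 500)
        trajectory = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
        keyframes = rdp(trajectory, epsilon=0.01)
        print(len(trajectory), "samples reduced to", len(keyframes), "keyframes")

    The surviving samples act as editable keyframes while the dense recording can be discarded.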

    Configurable Input Devices for 3D Interaction using Optical Tracking

    Three-dimensional interaction with virtual objects is one of the aspects that needs to be addressed in order to increase the usability and usefulness of virtual reality. Human beings have difficulties understanding 3D spatial relationships and manipulating 3D user interfaces, which require the control of multiple degrees of freedom simultaneously. Conventional interaction paradigms known from the desktop computer, such as the use of interaction devices like the mouse and keyboard, may be insufficient or even inappropriate for 3D spatial interaction tasks. The aim of the research in this thesis is to develop the technology required to improve 3D user interaction. This can be accomplished by allowing interaction devices to be constructed such that their use is apparent from their structure, and by enabling efficient development of new input devices for 3D interaction. The driving vision in this thesis is that for effective and natural direct 3D interaction the structure of an interaction device should be specifically tuned to the interaction task. Two aspects play an important role in this vision. First, interaction devices should be structured such that interaction techniques are as direct and transparent as possible. Interaction techniques define the mapping between interaction task parameters and the degrees of freedom of interaction devices. Second, the underlying technology should enable developers to rapidly construct and evaluate new interaction devices.

    The thesis is organized as follows. In Chapter 2, a review of the optical tracking field is given. The tracking pipeline is discussed, existing methods are reviewed, and improvement opportunities are identified. In Chapters 3 and 4 the focus is on the development of optical tracking techniques for rigid objects. The goal of the tracking method presented in Chapter 3 is to reduce the occlusion problem. The method exploits projection-invariant properties of line pencil markers, and the fact that line features only need to be partially visible. In Chapter 4, the aim is to develop a tracking system that supports devices of arbitrary shapes, and allows for rapid development of new interaction devices. The method is based on subgraph isomorphism to identify point clouds. To support the development of new devices in the virtual environment, an automatic model estimation method is used. Chapter 5 provides an analysis of three optical tracking systems based on different principles. The first system is based on an optimization procedure that matches the 3D device model points to the 2D data points detected in the camera images. The other systems are the tracking methods discussed in Chapters 3 and 4. In Chapter 6 an analysis of various filtering and prediction methods is given. These techniques can be used to make the tracking system more robust against noise, and to reduce the latency problem. Chapter 7 focuses on optical tracking of composite input devices, i.e., input devices that consist of multiple rigid parts that can have combinations of rotational and translational degrees of freedom with respect to each other. Techniques are developed to automatically generate a 3D model of a segmented input device from motion data, and to use this model to track the device. In Chapter 8, the presented techniques are combined to create a configurable input device, which supports direct and natural co-located interaction. In this chapter, the goal of the thesis is realized. The device can be configured such that its structure reflects the parameters of the interaction task. In Chapter 9, the configurable interaction device is used to study the influence of spatial device structure with respect to the interaction task at hand. The driving vision of this thesis, that the spatial structure of an interaction device should match that of the task, is analyzed and evaluated by performing a user study.

    The concepts and techniques developed in this thesis allow researchers to rapidly construct and apply new interaction devices for 3D interaction in virtual environments. Devices can be constructed such that their spatial structure reflects the 3D parameters of the interaction task at hand. The interaction technique then becomes a transparent one-to-one mapping that directly mediates the functions of the device to the task. The developed configurable interaction devices can be used to construct intuitive spatial interfaces, and allow researchers to rapidly evaluate new device configurations and to efficiently perform studies on the relation between the spatial structure of devices and the interaction task.
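    To picture the subgraph-isomorphism step described for Chapter 4, consider matching a device's known marker constellation against the 3D point cloud detected in one frame: inter-marker distances label the edges, and a subgraph match recovers which detected points belong to the device. A hedged Python sketch using networkx (the tolerance, helper names, and test data are illustrative, not the thesis' implementation, which must also cope with noise and occlusion):

        import itertools
        import networkx as nx
        import numpy as np

        TOL = 2e-3  # distance-match tolerance in metres (assumed)

        def distance_graph(points, max_edge=0.25):
            # Graph over 3D points with the inter-point distance as edge label.
            g = nx.Graph()
            for i, p in enumerate(points):
                g.add_node(i, pos=p)
            for i, j in itertools.combinations(range(len(points)), 2):
                d = float(np.linalg.norm(points[i] - points[j]))
                if d <= max_edge:
                    g.add_edge(i, j, dist=d)
            return g

        def find_device(model_points, scene_points):
            # Locate the device's marker constellation inside the detected cloud.
            model = distance_graph(model_points)
            scene = distance_graph(scene_points)
            gm = nx.algorithms.isomorphism.GraphMatcher(
                scene, model,
                edge_match=lambda a, b: abs(a["dist"] - b["dist"]) < TOL)
            for mapping in gm.subgraph_isomorphisms_iter():
                return {m: s for s, m in mapping.items()}  # model node -> scene node
            return None

        # Three markers of a hypothetical rigid tool, detected in a cloud that
        # also contains one distractor point.
        tool = np.array([[0.0, 0.0, 0.0], [0.10, 0.0, 0.0], [0.0, 0.05, 0.0]])
        cloud = np.vstack([tool + 0.001, [[0.5, 0.5, 0.5]]])
        print(find_device(tool, cloud))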

    Motion-Capture-Enabled Software for Gestural Control of 3D Models

    Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for a natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task, and eliminates the need to create an expensive, special-purpose controller.
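    The abstract leaves the gesture mapping unspecified; one natural two-handed mapping drives object position from the hands' midpoint, scale from their separation, and orientation from the inter-hand axis. A sketch under those assumptions (Python with NumPy; the function name and rest-span constant are illustrative):

        import numpy as np

        def two_hand_pose(left, right, base_scale=1.0, rest_span=0.4):
            # Map two tracked hand positions (3-vectors, metres) to object pose:
            # position = midpoint, scale = separation relative to a rest span,
            # orientation = yaw/pitch of the left->right axis (roll left free).
            left = np.asarray(left, dtype=float)
            right = np.asarray(right, dtype=float)
            position = (left + right) / 2.0
            axis = right - left
            span = float(np.linalg.norm(axis))
            scale = base_scale * span / rest_span
            yaw = float(np.arctan2(axis[1], axis[0]))
            pitch = float(np.arcsin(np.clip(axis[2] / max(span, 1e-9), -1.0, 1.0)))
            return position, scale, (yaw, pitch)

        pos, scale, (yaw, pitch) = two_hand_pose([0.10, 0.00, 1.00], [0.50, 0.10, 1.10])
        print(pos, scale, yaw, pitch)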

    Learning 3D Navigation Protocols on Touch Interfaces with Cooperative Multi-Agent Reinforcement Learning

    Using touch devices to navigate in virtual 3D environments such as computer-assisted design (CAD) models or geographical information systems (GIS) is inherently difficult for humans, as the 3D operations have to be performed by the user on a 2D touch surface. This ill-posed problem is classically solved with a fixed and handcrafted interaction protocol, which must be learned by the user. We propose to automatically learn a new interaction protocol that maps 2D user input to 3D actions in virtual environments using reinforcement learning (RL). A fundamental problem of RL methods is the vast amount of interactions often required, which are difficult to come by when humans are involved. To overcome this limitation, we make use of two collaborative agents. The first agent models the human by learning to perform the 2D finger trajectories. The second agent acts as the interaction protocol, interpreting the 2D finger trajectories from the first agent and translating them into 3D operations. We restrict the learned 2D trajectories to be similar to a training set of collected human gestures by first performing state representation learning, prior to reinforcement learning. This state representation learning is addressed by projecting the gestures into a latent space learned by a variational autoencoder (VAE). Comment: 17 pages, 8 figures. Accepted at The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2019 (ECML PKDD 2019).
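    The VAE used for state representation learning is straightforward to sketch: gestures are flattened 2D trajectories, and the human-model agent then acts in the learned latent space so that its generated trajectories stay close to the collected human gestures. A minimal PyTorch sketch (the trajectory length T and layer sizes are assumptions, not the paper's architecture):

        import torch
        import torch.nn as nn

        class GestureVAE(nn.Module):
            # VAE over flattened 2D finger trajectories: T samples of (x, y).
            def __init__(self, T=50, latent=8):
                super().__init__()
                d = T * 2
                self.enc = nn.Sequential(nn.Linear(d, 128), nn.ReLU())
                self.mu = nn.Linear(128, latent)
                self.logvar = nn.Linear(128, latent)
                self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                         nn.Linear(128, d))

            def forward(self, x):
                h = self.enc(x)
                mu, logvar = self.mu(h), self.logvar(h)
                # Reparameterisation trick: sample z while staying differentiable.
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
                return self.dec(z), mu, logvar

        def vae_loss(recon, x, mu, logvar):
            # Reconstruction error plus KL divergence to the unit Gaussian prior.
            rec = nn.functional.mse_loss(recon, x, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return rec + kld

        x = torch.randn(16, 100)  # batch of 16 flattened trajectories (T=50)
        model = GestureVAE(T=50, latent=8)
        recon, mu, logvar = model(x)
        loss = vae_loss(recon, x, mu, logvar)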

    Immersive design of DNA molecules with a tangible interface

    This work presents an experimental immersive interface for designing DNA components for application in nanotechnology. While much research has been done on immersive visualization, this is one of the first systems to apply advanced interface techniques to a scientific design problem. The system uses tangible 3D input devices (tongs, a raygun, and a multipurpose handle tool) to create and edit a purely digital representation of DNA. The tangible controllers are associated with functions (not data), while a virtual display is used to render the model. This interface was built in collaboration with a research group investigating the design of DNA tiles. A user study shows that scientists find the immersive interface more satisfying than a 2D interface due to the enhanced understanding gained by directly interacting with molecules in 3D space.
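    The design point that the tangible controllers carry functions rather than data can be pictured as a dispatch table from physical tools to edit operations on the shared digital model. A schematic Python sketch (the tool names match the abstract, but the operations and model fields are hypothetical):

        from typing import Callable, Dict

        def grab_strand(model: dict, target: str) -> None:
            model["held"] = target                  # tongs pick up a strand

        def select_strand(model: dict, target: str) -> None:
            model["selected"] = target              # raygun points and selects

        def extrude_strand(model: dict, target: str) -> None:
            model.setdefault("strands", []).append(target)  # handle tool creates

        # The tools carry behaviour, not state: the DNA model stays purely digital.
        TOOL_FUNCTIONS: Dict[str, Callable[[dict, str], None]] = {
            "tongs": grab_strand,
            "raygun": select_strand,
            "handle": extrude_strand,
        }

        def on_tool_event(model: dict, tool: str, target: str) -> None:
            TOOL_FUNCTIONS[tool](model, target)

        model: dict = {}
        on_tool_event(model, "raygun", "strand_3")
        print(model)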

    A convertor and user interface to import CAD files into WorldToolKit virtual reality systems

    Virtual Reality (VR) is a rapidly developing human-computer interface technology. VR can be considered a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, a Polhemus Isotrak sensor, two Polhemus Fastrak sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not give users the flexibility to add new sensors, nor a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, the AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
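    The convention in the last sentence (one file for the stationary universe, one file per dynamic object, so WTK can move the dynamic objects at run time) can be sketched as follows. The real convertor targets WTK's C API and its supported CAD formats; the Python below only illustrates the file-splitting step, and the writer, object names, and JSON placeholder format are hypothetical:

        import json
        import os

        def write_scene_file(path, objects):
            # Placeholder writer: dumps geometry as JSON. A real convertor
            # would emit a WTK-readable format such as Sense8 NFF or DXF.
            with open(path, "w") as f:
                json.dump(objects, f, indent=2)

        def split_scene(objects, dynamic_names, out_dir="converted"):
            # One file for the stationary universe, one file per dynamic object.
            os.makedirs(out_dir, exist_ok=True)
            static = [o for o in objects if o["name"] not in dynamic_names]
            write_scene_file(os.path.join(out_dir, "universe.json"), static)
            for o in objects:
                if o["name"] in dynamic_names:
                    write_scene_file(os.path.join(out_dir, o["name"] + ".json"), [o])

        scene = [{"name": "room", "mesh": []}, {"name": "robot_arm", "mesh": []}]
        split_scene(scene, dynamic_names={"robot_arm"})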