14 research outputs found

    A Virtual Environment System for the Comparison of Dome and HMD Systems

    Get PDF
    For effective astronaut training applications, choosing the right display devices to present images is crucial. To assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of Dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

    The Comparison Of Dome And HMD Delivery Systems: A Case Study

    Get PDF
    For effective astronaut training applications, choosing the right display devices to present images is crucial. To assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a Virtual Environment Testbed (VET), for the comparison of Dome and Head-Mounted Display (HMD) systems on an SGI Onyx workstation. By writing codelets, we allow a variety of virtual scenarios and subjects' information to be loaded without programming or changing the code. This is part of an ongoing research project conducted by NASA/JSC.

    Social Loafing Impact on Collaboration in 3D Virtual Worlds: An Empirical Study

    Get PDF
    Collaboration is increasingly distributed and influenced by the technologies involved in the workspace. 3D Virtual Worlds (VWs) are rich and promising collaboration tools that provide highly interactive environments. Several researchers and practitioners are particularly interested in the potential of these new media to support collaborative practices. However, the literature does not yet provide a satisfactory and accurate answer to companies about the impacts of using these technologies for professional collaboration. The present research attempts to address this gap and examines this effect more closely. This research in progress presents the research model and methodology used. The model hypothesizes social loafing as a substantial factor that determines team members' involvement in knowledge sharing and application processes. The planned empirical study will quantitatively assess the impact of 3D virtual world use in the workspace on knowledge sharing and knowledge application.

    Collaborative Workspaces within Distributed Virtual Environments

    Get PDF
    In warfare, be it a training simulation or actual combat, a commander's time is one of the most valuable and fleeting resources of a military unit. Thus, it is natural for a unit to have a plethora of personnel to analyze and filter information to the decision-maker. This dynamic exchange of ideas between analyst and commander is currently not available within the distributed interactive simulation (DIS) community. This lack of exchange limits the usefulness of the DIS experience to the commander and his troops. This thesis addresses the commander's isolation problem through the integration of a collaborative workspace within AFIT's Synthetic BattleBridge (SBB) as a technique to improve situational awareness. The SBB's Collaborative Workspace enhances battlespace awareness through CSCW (computer-supported cooperative work) communication technologies. It allows the user to interact with other SBB users through the transmission and reception of public bulletins, private email, real-time chat sessions, shared viewpoints, shared video, and shared annotations to the virtual environment. Collaborative communication between SBBs occurs through the use of standard and experimental DIS-compliant protocol data units. The SBB's Collaborative Workspace gives the battlespace commander the widest range of communication options available within a DIS virtual environment today.

    Automatic Speed Control For Navigation in 3D Virtual Environment

    Get PDF
    As technology progresses, the scale and complexity of 3D virtual environments increase as well. This leads to multiscale virtual environments: environments that contain groups of objects with extremely unequal levels of scale. Ideally, the user should be able to navigate such environments efficiently and robustly. Yet most previous methods for automatically controlling navigation speed do not generalize well to environments with widely varying scales. I present an improved method to automatically control the user's navigation speed in 3D virtual environments. The main benefit of my approach is that it automatically adapts the navigation speed in multiscale environments in a manner that enables efficient navigation with maximum freedom, while still avoiding collisions. The results of a usability test show a significant reduction in the completion time for a multiscale navigation task.
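
One common way to realize such scale-adaptive speed control is to tie flying speed to the distance from the viewpoint to the nearest surface, clamped to sane bounds. This is a minimal sketch of that idea; the function name, gain, and limits are illustrative assumptions, not the thesis's exact formulation:

```python
def navigation_speed(distance_to_nearest, gain=1.0,
                     min_speed=0.01, max_speed=100.0):
    """Scale flying speed with the distance to the nearest surface:
    fast in open space, slow near geometry, so the same control works
    across widely different scales and collisions are avoided."""
    return max(min_speed, min(max_speed, gain * distance_to_nearest))
```

In open space (large distances) the user covers ground quickly; approaching small-scale detail, the speed falls proportionally, which is what makes a single technique usable across scales.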

    Three-dimensional user interfaces for scientific visualization

    Get PDF
    The focus of this grant was to experiment with novel user interfaces for scientific visualization applications using both desktop and virtual reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past three years and subsumes all prior reports.

    Stereoscopic bimanual interaction for 3D visualization

    Get PDF
    Virtual Environments (VEs) have been widely used for several decades in research fields such as 3D visualization, education, training, and games. VEs have the potential to enhance visualization and to act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation first evaluates the effects of stereoscopic and head-coupled displays on depth judgments of volumetric datasets. It then evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, it evaluates techniques that auto-adjust stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface that combines traditional tracking devices with computer-vision-based "natural" 3D inputs for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, the dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
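
One ingredient of such two-handed 7-DOF manipulation can be sketched simply: deriving the scale component from the change in hand separation, handlebar-style. The function and parameter names are assumptions for illustration, not the dissertation's actual technique:

```python
import math

def two_hand_scale(left0, right0, left1, right1):
    """Handlebar-style uniform scale: the view scale changes by the ratio
    of the current hand separation to the separation when the gesture
    began. Positions are (x, y, z) tuples in tracker coordinates."""
    d0 = math.dist(left0, right0)  # separation at gesture start
    d1 = math.dist(left1, right1)  # separation now
    if d0 == 0:
        raise ValueError("initial hand positions coincide")
    return d1 / d0
```

Pulling the hands apart to twice their starting separation yields a scale factor of 2; the yaw/pitch/roll components would come analogously from the rotation of the inter-hand axis, and translation from the motion of its midpoint.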

    A new taxonomy for locomotion in virtual environments

    Get PDF
    The concept of virtual reality, although evolving due to technological advances, has always been fundamentally defined as a revolutionary way for humans to interact with computers. The revolution comes from the concept of immersion, which is the essence of virtual reality. Users are no longer passive observers of information, but active participants who have leaped through the computer screen and are now part of the information. This has tremendous implications for how users interact with computer information in the virtual world.

    Perhaps the most common form of interaction in a virtual environment is locomotion. The term locomotion indicates a user's control of movement through the virtual environment. There are many ways for a user to change his viewpoint in the virtual world. Because virtual reality is a relatively young field, no standard interfaces exist for interaction, particularly locomotion, in a virtual world. There have been few attempts to formally classify the ways in which virtual locomotion can occur. Existing classification schemes do not take into account the interaction devices, such as joysticks and vehicle mock-ups, that are used to perform the locomotion. Nor do they account for differences in display devices, such as head-mounted displays, monitors, or projected walls.

    This work creates a new classification system for virtual locomotion methods. The classification gives designers of new VR applications guidelines on which types of locomotion are best suited to an application's requirements. Unlike previous taxonomies, this work incorporates display devices, interaction devices, and travel tasks, along with identifying two major components of travel, translation and rotation, and their important sub-components.

    In addition, we have experimentally validated the importance of display device and rotation method in this new classification system. This was accomplished through a large-scale user experiment in which users performed an architectural walkthrough of a virtual building. Both objective and subjective measures indicate that the choice of display device is extremely important to the task of locomotion, and that for each display device, the choice of rotation method is also important.
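
The taxonomy's axes can be pictured as a small record type. The field names and example values below are illustrative guesses at the classification dimensions named in the abstract, not the authors' own schema:

```python
from dataclasses import dataclass

@dataclass
class LocomotionMethod:
    """One cell of the taxonomy: a locomotion technique classified along
    the axes the work identifies (illustrative field names)."""
    display: str        # e.g. "HMD", "monitor", "projected wall"
    input_device: str   # e.g. "joystick", "vehicle mock-up"
    travel_task: str    # e.g. "exploration", "search", "walkthrough"
    translation: str    # how the viewpoint's position changes
    rotation: str       # how the viewpoint's orientation changes

# The experiment's walkthrough condition, expressed in this scheme:
walkthrough = LocomotionMethod("HMD", "joystick", "architectural walkthrough",
                               "steering", "physical head turn")
```

Enumerating such records makes it easy to see which display/rotation combinations a study has covered, which is exactly the comparison the experiment above performs.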

    Implementation of Flying, Scaling, and Grabbing in Virtual Worlds

    No full text
    In a virtual world viewed with a head-mounted display, the user may wish to perform certain actions under the control of a manual input device. The most important of these actions are flying through the world, scaling the world, and grabbing objects. This paper shows how these actions can be precisely specified with frame-to-frame invariants, and how the code to implement the actions can be derived from the invariants by algebraic manipulation. INTRODUCTION Wearing a Head-Mounted Display (HMD) gives a human user the sensation of being inside a three-dimensional, computer-simulated world. Because the HMD replaces the sights and sounds of the real world with a computer-generated virtual world, this synthesized world is called virtual reality. The virtual world surrounding the user is defined by a graphics database called a model, which gives the colors and coordinates for each of the polygons making up the virtual world. The polygons making up the virtual world are normally grouped into e..
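
The frame-to-frame-invariant idea can be illustrated for the simplest case, grabbing with translation only: requiring that the grabbed object's offset from the hand stay constant between frames and solving for the new object position yields the update below. This is a sketch of the derivation style, not the paper's full treatment, which handles complete transforms algebraically:

```python
def grab_update(obj_pos, hand_prev, hand_now):
    """Maintain the invariant  obj' - hand' = obj - hand  across frames.
    Solving for obj' gives: the object moves by exactly the hand's
    frame-to-frame displacement (translation-only case)."""
    return tuple(o + (hn - hp)
                 for o, hp, hn in zip(obj_pos, hand_prev, hand_now))
```

Flying and scaling follow the same pattern: state what must not change between frames (e.g. the hand's position in world coordinates while flying, or a pinned point while scaling), then solve the invariant for the unknown next-frame transform.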