    The virtual environment display system

    Virtual environment technology is a display and control technology that can surround a person in an interactive, computer-generated or computer-mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has recently been recognized by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.

    A tracker alignment framework for augmented reality

    To achieve accurate registration, the transformations that locate the tracking system components with respect to the environment must be known. These transformations relate the base of the tracking system to the virtual world and the tracking system's sensor to the graphics display. In this paper we present a unified, general calibration method for calculating these transformations. A user is asked to align the display with objects in the real world. Using this method, the sensor-to-display and tracker-base-to-world transformations can be determined with as few as three measurements.
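
    The registration problem described above amounts to recovering two unknown links in a chain of rigid-body transforms. As a rough illustration of that chain (not the paper's calibration method), here is a minimal Python sketch; the frame names and numeric values are assumptions for demonstration only:

    import numpy as np

    def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    # The two unknowns the calibration must recover (placeholder values):
    #   world <- tracker base: where the tracker sits in the virtual world
    #   sensor <- display: how the head sensor relates to the graphics display
    T_world_base = make_transform(np.eye(3), np.array([0.0, 1.5, 0.0]))
    T_sensor_display = make_transform(np.eye(3), np.array([0.0, 0.1, -0.05]))

    # The tracker itself reports base <- sensor at each measurement.
    T_base_sensor = make_transform(np.eye(3), np.array([0.2, 0.0, 0.3]))

    # Registration: locate the display in world coordinates via the full chain.
    T_world_display = T_world_base @ T_base_sensor @ T_sensor_display
    print(T_world_display)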

    The Comparison Of Dome And HMD Delivery Systems: A Case Study

    For effective astronaut training applications, choosing the right display devices to present images is crucial. To assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a virtual environment testbed (VET), for the comparison of Dome and head-mounted display (HMD) systems on an SGI Onyx workstation. By writing codelets, users can load a variety of virtual scenarios and subject information without programming or changing the code. This is part of an ongoing research project conducted by NASA/JSC.
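
    The abstract does not specify the codelet format; as a hedged sketch of the underlying idea (loading scenarios and subject information from data rather than code), here is a minimal Python example in which the file layout and field names are purely hypothetical:

    import json

    # Hypothetical scenario description; the actual codelet format is not
    # given in the abstract.
    SCENARIO_TEXT = '''
    {
      "display": "dome",
      "models": ["iss_module.obj", "shuttle_bay.obj"],
      "subject": {"id": "S01", "ipd_mm": 63.0}
    }
    '''

    def load_scenario(text: str) -> dict:
        """Parse a scenario so new trials need no recompilation or code edits."""
        return json.loads(text)

    scenario = load_scenario(SCENARIO_TEXT)
    print(scenario["display"], "display with", len(scenario["models"]), "models")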

    Virtual acoustic displays

    The real-time acoustic display capabilities developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames are described. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
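
    To make the idea of an auditory symbology concrete, here is a minimal Python sketch of discrete events bound to localized cues, with one continuously varying parameter; the cue fields and the mapping are illustrative assumptions, not the ACE design:

    from dataclasses import dataclass

    @dataclass
    class AcousticCue:
        sample: str               # waveform to play
        azimuth_deg: float        # where the cue is localized around the listener
        elevation_deg: float
        pitch_scale: float = 1.0  # a continuously varying parameter

    # Discrete events bound to cues, as an editor like ACE might record them.
    symbology = {
        "proximity_warning": AcousticCue("beep.wav", azimuth_deg=0.0, elevation_deg=0.0),
        "object_grabbed": AcousticCue("click.wav", azimuth_deg=-30.0, elevation_deg=-10.0),
    }

    def trigger(event: str, distance_m: float) -> AcousticCue:
        """Return the cue for an event, scaling pitch with a continuous input."""
        cue = symbology[event]
        cue.pitch_scale = max(0.5, 2.0 - distance_m / 10.0)
        return cue

    print(trigger("proximity_warning", distance_m=5.0))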

    System For Treating Patients With Anxiety Disorders

    A virtual reality system provides effective exposure treatment for psychiatric patients suffering from a particular anxiety disorder. The system is characterized by a video screen disposed in front of the patient to display an image of a specific graphical environment that is intended to trigger anxiety within the patient as a result of the particular patient phobia. A headset is worn by the patient, and has sensors disposed to detect movement and positioning of the patient's head. A computer program controls the operation of the system, and is designed to control the display of the graphical environment on the video screen, monitor the headset sensors and determine the position of the patient's head, and controllably manipulate the graphical environment displayed on the video screen to reflect the movement and position of the patient's head. In a preferred embodiment, a sensor is provided to automatically detect a level of patient anxiety, and the computer program is designed to monitor this sensor and controllably manipulate the graphical environment displayed on the video screen in response thereto. In other embodiments, sound and tactile feedback are provided to further enhance the graphic emulation.
    Emory University and Georgia Tech Research Corporation
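
    The abstract describes a closed loop: head pose drives the displayed view while a physiological signal modulates exposure. A minimal Python sketch of that loop follows; the signal choice (heart rate), thresholds, and update rules are illustrative assumptions, not the patented design:

    from dataclasses import dataclass

    @dataclass
    class HeadPose:
        yaw: float
        pitch: float

    def render_view(pose: HeadPose, intensity: float) -> str:
        """Stand-in for updating the displayed graphical environment."""
        return f"view(yaw={pose.yaw:.1f}, pitch={pose.pitch:.1f}, intensity={intensity:.2f})"

    def adjust_intensity(intensity: float, heart_rate_bpm: float) -> float:
        """Back off exposure when the anxiety proxy runs high, else ramp up slowly."""
        if heart_rate_bpm > 110:
            return max(0.0, intensity - 0.1)
        return min(1.0, intensity + 0.05)

    # One simulated update tick of the loop.
    intensity = 0.5
    pose = HeadPose(yaw=12.0, pitch=-3.0)
    intensity = adjust_intensity(intensity, heart_rate_bpm=118)
    print(render_view(pose, intensity))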

    Volumetric Framework for Third-Party Content Placement in Virtual 3D Environments

    A content distribution system facilitates the placement of third-party content in virtual three-dimensional (3D) environments. The third-party content may be shown on virtual 3D objects. The system defines a technical and financial framework by which developers of 3D environments can monetize their environments by allotting space for the display of third-party virtual objects. In some aspects, the content distribution system determines the amount to charge a third-party content provider for placing content on a virtual object in a 3D environment based on the volume of that virtual object.
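
    The volume-based pricing idea is easy to sketch. In the minimal Python example below, the bounding-box pricing model and the rate are illustrative assumptions rather than the system's actual billing scheme:

    from dataclasses import dataclass

    @dataclass
    class Box:
        width_m: float
        height_m: float
        depth_m: float

        def volume(self) -> float:
            return self.width_m * self.height_m * self.depth_m

    def placement_charge(obj: Box, rate_per_m3: float) -> float:
        """Charge proportional to the volume the third-party object occupies."""
        return obj.volume() * rate_per_m3

    billboard = Box(width_m=4.0, height_m=3.0, depth_m=0.2)
    print(f"charge: ${placement_charge(billboard, rate_per_m3=1.50):.2f}")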

    A Virtual Environment System for the Comparison of Dome and HMD Systems

    For effective astronaut training applications, choosing the right display devices to present images is crucial. To assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system for the comparison of Dome and head-mounted display (HMD) systems. In particular, we address interaction techniques and playback environments.

    A synthetic environment for visualization and planning of orbital maneuvers

    An interactive proximity operations planning system, which allows on-site planning of fuel-efficient, multi-burn maneuvers in a potential multi-spacecraft environment, has been developed and tested. This display system most directly assists planning by providing visual feedback in a synthetic virtual space that aids visualization of trajectories and their constraints. Its most significant features include (1) an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator and (2) a stack-oriented action editor that reduces the order of control and creates, through a 'geometric spreadsheet,' the illusion of an inertially stable environment. This synthetic environment gives the user control of relevant static and dynamic properties of waypoints during small orbital changes, allowing independent solutions to otherwise coupled problems of orbital maneuvering.
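
    For background, relative motion between nearby spacecraft in proximity operations is commonly modeled with the Clohessy-Wiltshire equations. The Python sketch below propagates that standard model; it is context for the planning problem, not the paper's inverse-dynamics algorithm, and the orbit rate, initial state, and step size are example values:

    import math

    def cw_step(state, n, dt):
        """Advance (x, y, z, vx, vy, vz) one Euler step in the Hill frame.
        x: radial, y: along-track, z: cross-track; n: orbital rate (rad/s)."""
        x, y, z, vx, vy, vz = state
        ax = 3.0 * n * n * x + 2.0 * n * vy
        ay = -2.0 * n * vx
        az = -n * n * z
        return (x + vx * dt, y + vy * dt, z + vz * dt,
                vx + ax * dt, vy + ay * dt, vz + az * dt)

    n = 2.0 * math.pi / 5400.0                 # ~90-minute low Earth orbit
    state = (100.0, 0.0, 0.0, 0.0, -0.2, 0.0)  # 100 m radial offset
    for _ in range(600):                       # propagate 10 minutes at 1 s steps
        state = cw_step(state, n, dt=1.0)
    print(f"position after 10 min: x={state[0]:.1f} m, y={state[1]:.1f} m")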

    Urban Air Mobility System Testbed Using CAVE Virtual Reality Environment

    Urban Air Mobility (UAM) refers to a system of air passenger and small-cargo transportation within an urban area. The UAM framework also includes other urban Unmanned Aerial Systems (UAS) services that will be supported by a mix of onboard, ground, piloted, and autonomous operations. Over the past few years, UAM research has gained wide interest from companies and federal agencies as an on-demand, innovative transportation option that can help reduce traffic congestion and pollution as well as increase mobility in metropolitan areas. The concepts of UAM/UAS operation in the National Airspace System (NAS) remain an active area of research to ensure safe and efficient operations. With new developments in smart vehicle design and infrastructure for air traffic management, there is a need for methods to integrate and test various components of the UAM framework. In this work, we report on the development of a virtual reality (VR) testbed using Cave Automatic Virtual Environment (CAVE) technology for human-automation teaming and airspace operation research in UAM. Using a four-wall projection system with motion capture, the CAVE provides an immersive virtual environment with real-time full-body tracking capability. We created a virtual environment consisting of the city of San Francisco and a vertical take-off-and-landing passenger aircraft that can fly between a downtown location and San Francisco International Airport. The aircraft can be operated autonomously or manually by a single pilot who maneuvers the aircraft using a flight control joystick. The interior of the aircraft includes a virtual cockpit display with vehicle heading, location, and speed information. The system can record simulation events and flight data for post-processing. The system parameters are customizable for different flight scenarios; hence, the CAVE VR testbed provides a flexible method for the development and evaluation of the UAM framework.
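
    As one way to picture the customizable scenario parameters and event logging the abstract mentions, here is a minimal Python sketch; every field name and value is an illustrative assumption rather than the testbed's actual configuration schema:

    from dataclasses import dataclass

    @dataclass
    class FlightScenario:
        origin: str = "downtown_vertiport"  # hypothetical site names
        destination: str = "SFO"
        pilot_mode: str = "manual"          # or "autonomous"
        cruise_speed_mps: float = 50.0
        log_path: str = "flight_log.csv"    # flight data kept for post-processing

    @dataclass
    class SimulationEvent:
        t_sec: float
        kind: str
        detail: str = ""

    scenario = FlightScenario(pilot_mode="autonomous")
    events = [SimulationEvent(0.0, "takeoff"), SimulationEvent(310.0, "landing")]
    print(scenario.pilot_mode, "flight with", len(events), "logged events")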

    Full Body Acting Rehearsal in a Networked Virtual Environment: A Case Study

    In order to rehearse for a play or a scene from a movie, it is generally required that the actors be physically present at the same time in the same place. In this paper we present an example and experience of a full-body-motion shared virtual environment (SVE) for rehearsal. The system allows actors and directors to meet in an SVE in order to rehearse scenes for a play or a movie, that is, to perform dialogue and blocking (positions, movements, and displacements of actors in the scene) rehearsal through a full-body interactive virtual reality (VR) system. The system combines immersive VR rendering techniques and network capabilities with full-body tracking. Two actors and a director rehearsed from separate locations. One actor and the director were in London (located in separate rooms) while the second actor was in Barcelona. The Barcelona actor used a wide field-of-view, head-tracked head-mounted display, and wore a body suit for real-time motion capture and display. The London actor was in a Cave system, with head and partial body tracking. Each actor was presented to the other as an avatar in the shared virtual environment, and the director could see the whole scenario on a desktop display and intervene by voice commands. The director was also represented by a video stream shown in a window within the virtual environment. The London participant was a professional actor, who afterward commented on the utility of the system for acting rehearsal. It was concluded that full-body tracking and corresponding real-time display of all the actors' movements would be a critical requirement, and that blocking was possible down to the level of detail of gestures. Details of the implementation and of the actors' and director's experiences are provided.
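
    A networked rehearsal system like this has to stream each performer's pose to the remote sites every frame. Here is a minimal Python sketch of such an update message; the layout, joint set, and encoding are illustrative assumptions, not the paper's protocol:

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class JointPose:
        name: str
        position: tuple  # (x, y, z) in meters
        rotation: tuple  # quaternion (w, x, y, z)

    def encode_update(actor_id: str, t_sec: float, joints: list) -> bytes:
        """Serialize one tracking frame for transmission to the remote sites."""
        msg = {"actor": actor_id, "t": t_sec,
               "joints": [asdict(j) for j in joints]}
        return json.dumps(msg).encode("utf-8")

    frame = [
        JointPose("head", (0.0, 1.7, 0.0), (1.0, 0.0, 0.0, 0.0)),
        JointPose("hand_r", (0.3, 1.2, 0.2), (1.0, 0.0, 0.0, 0.0)),
    ]
    packet = encode_update("actor_barcelona", t_sec=12.34, joints=frame)
    print(len(packet), "bytes per frame")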