
    Immersive cyberspace system

    An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon and places the subject in a position which merges the neutral body position, the position a body naturally assumes in zero gravity, with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translational and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.
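    The split of vibrational duties between the two actuator stages (motion base for roughly 0-30 Hz, vibration transducers for roughly 20-150 Hz) suggests a simple frequency-based routing rule. Below is a minimal sketch of such a rule; the function names, sample rate, and the choice of preferring the motion base in the overlapping 20-30 Hz band are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of routing a vibration cue to one of the two actuator
# stages described above. All names and parameters are hypothetical.
import numpy as np

SAMPLE_RATE = 1000  # Hz, assumed control-loop rate

def vibration_cue(freq_hz: float, duration_s: float, amplitude: float = 1.0):
    """Generate a sinusoidal vibration cue and pick the actuator stage."""
    if not 0.0 < freq_hz <= 150.0:
        raise ValueError("cue outside the system's 0-150 Hz range")
    t = np.arange(0, duration_s, 1.0 / SAMPLE_RATE)
    signal = amplitude * np.sin(2 * np.pi * freq_hz * t)
    # The overlapping band (20-30 Hz) could go to either stage; this
    # sketch prefers the motion base for the strongest low-end effect.
    stage = "motion_base" if freq_hz <= 30.0 else "transducers"
    return stage, signal

stage, signal = vibration_cue(freq_hz=12.0, duration_s=0.5)
print(stage, signal[:5])
```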

    Simplifying collaboration in co-located virtual environments using the active-passive approach

    The design and implementation of co-located immersive virtual environments with equal interaction possibilities for all participants is a complex topic. The main problem, on a fundamental technical level, is the difficulty of providing perspective-correct images for each participant. There is consensus that the lack of a correct perspective view will negatively affect interaction fidelity and therefore also collaboration. Several research approaches focus on providing a correct perspective view to all participants to enable co-located work. However, these approaches are usually based either on custom hardware solutions that limit the number of users with a correct perspective view, or on software solutions striving to eliminate or mitigate restrictions with custom image-generation approaches. In this paper, we investigate an often overlooked approach to enabling collaboration for multiple users in an immersive virtual environment designed for a single user. The approach provides one (active) user with a perspective-correct view while other (passive) users receive visual cues that are not perspective-correct. We used this active-passive approach to investigate the limitations posed by assigning the viewpoint to only one user. The findings of our study, though inconclusive, revealed two curiosities. First, our results suggest that the location of target geometry is an important factor to consider when designing interaction, expanding on prior work that has studied only the relation between user positions. Second, there seems to be only a low cost involved in accepting the limitation of providing perspective-correct images to a single user, when compared with a baseline, during a coordinated work approach. These findings advance our understanding of collaboration in co-located virtual environments and suggest an approach to simplify co-located collaboration.
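    The perspective-correct view at the heart of the active-passive approach is typically produced with an off-axis projection computed from the tracked user's eye position relative to the physical screen. Below is a minimal sketch following Kooima's well-known generalized perspective projection; the screen-corner coordinates and eye position are illustrative, and this is the standard construction rather than the paper's own implementation.

```python
# Off-axis projection for a tracked viewer in front of a fixed screen.
# In the active-passive approach, this is computed for the active user
# only; passive users simply see the resulting image.
import numpy as np

def off_axis_projection(pa, pb, pc, pe, near, far):
    """pa, pb, pc: screen lower-left, lower-right, upper-left; pe: eye."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)          # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal

    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents at the
    r = np.dot(vr, vb) * near / d                     # near plane
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    proj = np.array([                                 # standard glFrustum matrix
        [2*near/(r-l), 0,            (r+l)/(r-l),            0],
        [0,            2*near/(t-b), (t+b)/(t-b),            0],
        [0,            0,            -(far+near)/(far-near), -2*far*near/(far-near)],
        [0,            0,            -1,                     0]])
    rot = np.eye(4); rot[0, :3], rot[1, :3], rot[2, :3] = vr, vu, vn
    trans = np.eye(4); trans[:3, 3] = -pe
    return proj @ rot @ trans

# Example: a 4 m x 2.5 m wall display, eye tracked at an off-center position.
pa, pb, pc = np.array([-2., 0, 0]), np.array([2., 0, 0]), np.array([-2., 2.5, 0])
print(off_axis_projection(pa, pb, pc, pe=np.array([0.3, 1.6, 2.0]), near=0.1, far=100.0))
```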

    The screen as boundary object in the realm of imagination

    As an object at the boundary between virtual and physical reality, the screen exists both as a displayer and as a thing displayed, thus functioning as a mediator. The screen's virtual imagery produces a sense of immersion in its viewer, yet at the same time the materiality of the screen produces a sense of rejection that blocks the viewer's complete involvement in the virtual world. The experience of the screen is thus an oscillation between these two states of immersion and rejection. Nowadays, as interactivity becomes a central component of the relationship between viewers and many artworks, the viewer experience of the screen is changing. Unlike the screen experience in non-interactive artworks, such as the traditional static screen of painting or the moving screen of video art in the 1970s, interactive media screen experiences can provide viewers with a more immersive, immediate, and therefore more intense experience. For example, many digital media artworks provide an interactive experience for viewers by capturing their face or body through real-time computer vision techniques. In this situation, as the camera and the monitor in the artwork encapsulate the interactor's body in an instant feedback loop, the interactor becomes a part of the interface mechanism and responds to the artwork as the system leads or even provokes them. This thesis claims that this kind of direct mirroring in interactive screen-based media artworks does not allow the viewer the critical distance or time needed for self-reflection. The thesis examines the previous aesthetics of spatial and temporal perception, such as presentness and instantaneousness, and notions of passage and of psychological perception such as reflection, reflexiveness, and auratic experience, looking at how these aesthetics can be integrated into new media screen experiences. Based on this theoretical research, the thesis claims that interactive screen spaces can act as a site for expression and representation, both through a doubling effect between the physical and virtual worlds and through manifold spatial and temporal mappings within the screen experience. These claims are further supported through exploration of screen-based media installations created by the author since 2003.
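    The "instant feedback loop" described above is easy to see in code form. Below is a minimal sketch, using OpenCV's stock face detector and a webcam, of the capture-and-echo mirroring the thesis critiques; it is an illustrative stand-in, not the author's installation software.

```python
# Real-time mirror loop: the camera detects the viewer's face and the
# display immediately echoes it back, making the interactor part of the
# interface mechanism. Requires a webcam; press 'q' to quit.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("mirror", cv2.flip(frame, 1))   # instant visual echo
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```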

    Contributions to virtual reality

    The thesis contributes to three Virtual Reality areas:
    - Visual perception: a calibration algorithm is proposed to estimate stereo projection parameters in head-mounted displays, so that correct shapes and distances can be perceived (the sketch after this list illustrates why such calibration matters), and calibration and control procedures are proposed to obtain desired accommodation stimuli at different virtual distances.
    - Immersive scenarios: the thesis analyzes several use cases demanding varying degrees of immersion, and special, innovative visualization solutions are proposed to fulfil their requirements. Contributions focus on machinery simulators, weather radar volumetric visualization, and manual arc welding simulation.
    - Ubiquitous visualization: contributions are presented for scenarios where users access interactive 3D applications remotely. The thesis follows the evolution of Web3D standards and technologies to propose original visualization solutions for volume rendering of weather radar data, e-learning on energy efficiency, virtual e-commerce, and visual product configurators.
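    A minimal sketch of the geometry behind the first contribution: if the renderer's stereo camera separation does not match the viewer's interpupillary distance (IPD), fused depth is distorted, which is what stereo projection calibration corrects. The formulas follow from similar triangles for a display plane at distance D; the numbers are illustrative and this is not the thesis's algorithm.

```python
# Perceived-depth distortion from a stereo separation mismatch.

def screen_disparity(d, D, sep_render):
    """On-screen disparity of a point at distance d (cameras sep_render apart)."""
    return sep_render * (d - D) / d

def perceived_distance(p, D, sep_viewer):
    """Distance at which eyes sep_viewer apart fuse an on-screen disparity p."""
    return sep_viewer * D / (sep_viewer - p)

D = 2.0           # virtual display plane distance (m)
d_true = 5.0      # intended distance of the virtual point (m)
p = screen_disparity(d_true, D, sep_render=0.070)   # rendered with 70 mm
for ipd in (0.060, 0.070):                          # viewer IPDs: 60 mm, 70 mm
    print(f"IPD {ipd*1000:.0f} mm -> perceives {perceived_distance(p, D, ipd):.2f} m")
# A 60 mm viewer sees the 5 m point at ~6.67 m: shapes and distances warp.
```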

    Immersive Insights: A Hybrid Analytics System for Collaborative Exploratory Data Analysis

    In the past few years, augmented reality (AR) and virtual reality (VR) technologies have seen substantial improvements in both accessibility and hardware capabilities, encouraging the application of these devices across various domains. While researchers have demonstrated the possible advantages of AR and VR for certain data science tasks, it is still unclear how these technologies would perform in the context of exploratory data analysis (EDA) at large. In particular, we believe it is important to better understand which level of immersion EDA would concretely benefit from, and to quantify the contribution of AR and VR with respect to standard analysis workflows. In this work, we leverage a Dataspace reconfigurable hybrid reality environment to study how data scientists might perform EDA in a co-located, collaborative context. Specifically, we propose the design and implementation of Immersive Insights, a hybrid analytics system combining high-resolution displays, table projections, and AR visualizations of the data. We conducted a two-part user study with twelve data scientists, in which we evaluated how different levels of data immersion affect the EDA process and compared the performance of Immersive Insights with a state-of-the-art, non-immersive data analysis system.
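    A hybrid system spanning a wall display, a table projection, and AR headsets has to keep every surface showing the same analysis state. Below is a minimal sketch of one way to coordinate that, via a tiny publish-subscribe hub; it illustrates the coordination problem only and is not Immersive Insights' architecture.

```python
# Tiny publish-subscribe hub: every display surface redraws from one
# shared analysis state whenever any surface changes it.
from typing import Callable, Dict, List

class StateHub:
    def __init__(self):
        self.state: Dict[str, object] = {}
        self.subscribers: List[Callable[[str, object], None]] = []

    def subscribe(self, fn: Callable[[str, object], None]) -> None:
        self.subscribers.append(fn)

    def publish(self, key: str, value: object) -> None:
        self.state[key] = value
        for fn in self.subscribers:   # broadcast to all surfaces
            fn(key, value)

hub = StateHub()
hub.subscribe(lambda k, v: print(f"wall display: update {k} -> {v}"))
hub.subscribe(lambda k, v: print(f"table projection: update {k} -> {v}"))
hub.subscribe(lambda k, v: print(f"AR headset: update {k} -> {v}"))
hub.publish("selected_cluster", 3)
```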

    The Comparison Of Dome And HMD Delivery Systems: A Case Study

    For effective astronaut training applications, choosing the right display devices to present images is crucial. In order to assess which devices are appropriate, it is important to design a successful virtual environment for a comparison study of the display devices. We present a comprehensive system, a Virtual Environment Testbed (VET), for the comparison of Dome and Head-Mounted Display (HMD) systems on an SGI Onyx workstation. By writing codelets, we allow a variety of virtual scenarios and subjects' information to be loaded without programming or changing the code. This is part of an ongoing research project conducted by NASA/JSC.
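    A minimal sketch of the data-driven loading that "codelets" enable: scenario and subject descriptions live in external files and are dispatched to registered constructors, so new trials require no code changes. The file schema, names, and registry below are assumptions; the paper does not publish its format.

```python
# Scenario elements are built by handlers looked up from a registry,
# driven entirely by an external description file.
import json

REGISTRY = {}

def codelet(name):
    """Register a handler that instantiates one kind of scenario element."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@codelet("model")
def load_model(spec):
    return f"model<{spec['file']}> at {spec.get('position', [0, 0, 0])}"

@codelet("subject")
def load_subject(spec):
    return f"subject<{spec['id']}> display={spec['display']}"

def load_scenario(text):
    return [REGISTRY[item["type"]](item) for item in json.loads(text)]

scenario = '''[
  {"type": "model",   "file": "iss_module.iv", "position": [0, 1, -3]},
  {"type": "subject", "id": "S01", "display": "dome"}
]'''
print(load_scenario(scenario))
```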

    Web trauma and haunting images: experimentations on materiality, installation, and operation of screens

    This thesis seeks to compose dynamics among screens, images, and space for viewers to confront what we easily ignore: the haunting ghosts of mistreated humanity in this age of web trauma.

    VizLab: The Design and Implementation of An Immersive Virtual Environment System Using Game Engine Technology and Open Source Software

    Virtual Reality (VR) is a term used to describe computer-simulated environments that can immerse users in a real or unreal world. Immersive systems are an essential component of experiencing virtual environments. Developing VR applications is time-consuming and resource-intensive: the separate components require integration, and the challenges of building on public-domain open source software make development complex. The VizLab Virtual Reality System was created to meet these challenges and provide an integrated suite of tools for VR system development. VizLab supports the development of VR applications by using game engine and CAVE system technology. The system consists of software modules that provide rendering, texturing, collision, physics, window/viewport management, cluster synchronization, input management, multi-processing, stereoscopic 3D, and networking. VizLab combines the main functional aspects of a game engine and a CAVE system for an improved approach to developing VR applications, virtual environments, and immersive environments.
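    The module list above implies a common subsystem interface stepped once per frame. Below is a minimal sketch of that structure; the interface and module names are illustrative stand-ins, not VizLab's actual API.

```python
# Engine composed of independent subsystem modules behind one interface,
# stepped in a fixed order each frame: simulate, synchronize, draw.
from abc import ABC, abstractmethod

class Module(ABC):
    @abstractmethod
    def update(self, dt: float) -> None: ...

class PhysicsModule(Module):
    def update(self, dt):
        print(f"physics step {dt:.3f}s")

class ClusterSyncModule(Module):
    def update(self, dt):
        print("synchronize frame state across CAVE render nodes")

class RenderModule(Module):
    def update(self, dt):
        print("render stereoscopic frame (left/right eye)")

class Engine:
    def __init__(self, modules):
        self.modules = modules

    def frame(self, dt=1 / 60):
        for m in self.modules:
            m.update(dt)

Engine([PhysicsModule(), ClusterSyncModule(), RenderModule()]).frame()
```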

    A movable image-based rendering system and its application to multiview audio-visual conferencing

    Image-based rendering (IBR) is an emerging technology for rendering photo-realistic views of scenes from a collection of densely sampled images or videos. It provides a framework for developing revolutionary virtual reality and immersive viewing systems. This paper studies the design of a movable image-based rendering system based on a class of dynamic representations called plenoptic videos. It is constructed by mounting a linear array of 8 video cameras on an electrically controllable wheelchair, with its motion controllable manually or remotely through wireless LAN by means of additional hardware circuitry. We also developed a real-time object tracking algorithm and utilize the computed motion information to continuously adjust the azimuth, or rotation angle, of the movable IBR system in order to follow a given moving object. Because the motion of the wheelchair can make the captured videos appear shaky, a video stabilization technique is proposed to overcome this problem. The system can be used in multiview audio-visual conferencing via a multiview TV display. Through this pilot study, we hope to develop a framework for designing movable IBR systems with improved viewing freedom and the ability to cope with moving objects in large environments. ©2010 IEEE.
    The 10th International Symposium on Communications and Information Technologies (ISCIT 2010), Tokyo, Japan, 26-29 October 2010. In Proceedings of 10th ISCIT, 2010, p. 1142-114
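    The tracking-to-azimuth coupling described above reduces to a small control loop: convert the tracked object's pixel offset from the image center into an angular error and rotate the platform to cancel it. Below is a minimal sketch; the resolution, field of view, gain, and drive interface are assumptions, not the paper's hardware parameters.

```python
# Proportional azimuth controller that keeps a tracked object centred
# in the camera array's view.

IMAGE_WIDTH = 640      # px, assumed camera resolution
HFOV_DEG = 60.0        # assumed horizontal field of view
KP = 0.5               # proportional gain per control step

def azimuth_correction(centroid_x: float) -> float:
    """Rotation (degrees) that re-centres the tracked object's centroid."""
    offset_px = centroid_x - IMAGE_WIDTH / 2
    offset_deg = offset_px / IMAGE_WIDTH * HFOV_DEG
    return KP * offset_deg

# Example: the tracker reports the object drifting toward the right edge.
for cx in (320.0, 400.0, 560.0):
    print(f"centroid {cx:4.0f}px -> rotate {azimuth_correction(cx):+.1f} deg")
```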