5 research outputs found

    Effects of Handling Real Objects and Self-Avatar Fidelity on Cognitive Task Performance and Sense of Presence in Virtual Environments

    Immersive virtual environments (VEs) provide participants with computer-generated environments filled with virtual objects to assist in learning, training, and practicing dangerous and/or expensive tasks. But does having every object be virtual inhibit interactivity and effectiveness for certain tasks? Further, does the visual fidelity of the virtual objects affect performance? If participants spend most of their time and cognitive load on learning and adapting to interacting with a purely virtual system, the overall effectiveness of the VE could be reduced. We conducted a study that investigated how handling real objects and self-avatar visual fidelity affect performance on a spatial cognitive manual task. We compared participants' performance of a block arrangement task in a real-space environment and in several virtual and hybrid environments. The results showed that manipulating real objects in a VE brings task performance closer to that of real space than manipulating virtual objects does.

    Accelerated volumetric reconstruction from uncalibrated camera views

    While both work with images, computer graphics and computer vision are inverse problems of one another. Computer graphics traditionally starts with input geometric models and produces image sequences; computer vision starts with input image sequences and produces geometric models. In the last few years, research has converged to bridge the gap between the two fields. This convergence has produced a new field called Image-Based Modeling and Rendering (IBMR). IBMR represents the effort of using geometric information recovered from real images to generate new images, with the hope that the synthesized ones appear photorealistic, while also reducing the time spent on model creation. In this dissertation, the capture, geometric, and photometric aspects of an IBMR system are studied. A versatile framework was developed that enables the reconstruction of scenes from images acquired with a handheld digital camera. The proposed system targets applications in areas such as computer gaming and virtual reality from a low-cost perspective. In the spirit of IBMR, the human operator provides the high-level information, while underlying algorithms perform the low-level computational work. Conforming to the latest architecture trends, we propose a streaming voxel carving method, allowing fast GPU-based processing on commodity hardware.
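The voxel carving idea the abstract mentions can be illustrated with a minimal CPU sketch (the dissertation's streaming GPU method is not reproduced here; function names and the photo-consistency test are illustrative assumptions): a voxel is kept only if the image colors it projects to agree across all calibrated views.

```python
import numpy as np

def photo_consistent(voxel, cameras, images, tol=10.0):
    """voxel: (3,) world point; cameras: list of 3x4 projection matrices;
    images: list of HxW grayscale arrays. True if projected colors agree."""
    X = np.append(voxel, 1.0)                 # homogeneous coordinates
    samples = []
    for P, img in zip(cameras, images):
        u, v, w = P @ X
        u, v = int(round(u / w)), int(round(v / w))
        h, wid = img.shape
        if 0 <= u < wid and 0 <= v < h:
            samples.append(img[v, u])         # sample the projected pixel
    # carve voxels whose color samples disagree (std-dev above tolerance)
    return bool(len(samples) > 0 and np.std(samples) <= tol)
```

Carving then amounts to sweeping this test over a voxel grid and discarding inconsistent voxels; the streaming formulation processes voxels in depth order so the GPU can do the projection and comparison per fragment.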

    Rendering and display for multi-viewer tele-immersion

    Video teleconferencing systems are widely deployed for business, education, and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues, such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity by rendering the scene from novel viewpoints that need not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth-dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user-position restrictions than existing autostereoscopic displays. The second is a random-hole barrier autostereoscopic display that eliminates the viewing zones and user-position requirements of conventional autostereoscopic displays, and provides unique 3D views for multiple users in arbitrary locations.
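The plane sweeping algorithm mentioned above can be sketched in its simplest form. In the rectified two-view case, sweeping fronto-parallel depth planes reduces to testing integer disparities: for each candidate plane, the second view is warped toward the reference view and a per-pixel color difference scores the hypothesis. This is a hedged simplification, not the dissertation's multi-camera implementation; names are illustrative.

```python
import numpy as np

def plane_sweep_disparity(ref, other, max_disp):
    """ref, other: 1D grayscale scanlines (float arrays).
    Returns the per-pixel disparity minimizing absolute color difference."""
    n = len(ref)
    cost = np.full((max_disp + 1, n), np.inf)
    for d in range(max_disp + 1):
        # warp the other view onto the plane at disparity d
        shifted = np.roll(other, d)
        valid = np.arange(n) >= d             # pixels with a real match
        cost[d, valid] = np.abs(ref - shifted)[valid]
    return cost.argmin(axis=0)                # winner-take-all over planes
```

The full algorithm sweeps true 3D planes through the scene, projects every camera image onto each plane via its projection matrix, and scores consistency per pixel, which maps naturally onto texture projection in graphics hardware.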

    Virtual Environments. Seminar - Sommersemester 2003

    This report compiles the results of the seminar Virtual Environments (VE). An important goal of VE is immersion: the integration of the user as an active participant in a computer-generated world. A prerequisite for this are techniques for simulating lively virtual worlds, that is, for simulating 3D scenes with realistic behavior. The topics range from collision detection algorithms, haptic rendering, navigation and interaction techniques, programmable graphics hardware, and distributed virtual worlds to the modeling and simulation of virtual humans. Virtual reality has meanwhile established itself in a variety of application areas and is also used within SFB 588 "Humanoide Roboter - Lernende und kooperierende multimodale Roboter" for simulating the humanoid robot and evaluating the human-robot interface.

    Online model reconstruction for interactive virtual environments

    We present a system for generating real-time 3D reconstructions of the user and other real objects in an immersive virtual environment (IVE) for visualization and interaction. For example, when parts of the user's body are in his field of view, our system allows him to see a visually faithful graphical representation of himself, an avatar. In addition, the user can grab real objects, and then see and interact with those objects in the IVE. Our system bypasses an explicit 3D modeling stage and does not use additional tracking sensors or prior object knowledge, nor do we generate dense 3D representations of objects using computer vision techniques. We use a set of outside-looking-in cameras and a novel visual hull technique that leverages the tremendous recent advances in graphics hardware performance and capabilities. We accelerate the visual hull computation by using projected textures to rapidly determine which volume samples lie within the visual hull. The samples are combined to form the object reconstruction from any given viewpoint. Our system produces results at interactive rates, and because it harnesses ever-improving graphics hardware, the rates and quality should continue to improve. We further examine real-time generated models as active participants in simulations (with lighting) in IVEs, and give results using synthetic and real data.
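The silhouette test at the heart of the visual hull computation can be sketched on the CPU (the system above performs it on the GPU with projected textures; the function name and orthographic-style test camera below are illustrative assumptions): a volume sample belongs to the visual hull if and only if every camera sees it inside the foreground silhouette.

```python
import numpy as np

def in_visual_hull(point, cameras, silhouettes):
    """point: (3,) world coordinates; cameras: list of 3x4 projection
    matrices; silhouettes: list of binary HxW foreground masks.
    True iff the point projects inside every silhouette."""
    X = np.append(point, 1.0)                 # homogeneous coordinates
    for P, sil in zip(cameras, silhouettes):
        u, v, w = P @ X
        u, v = int(round(u / w)), int(round(v / w))
        h, wid = sil.shape
        if not (0 <= u < wid and 0 <= v < h and sil[v, u]):
            return False                      # outside one silhouette: carve
    return True
```

Projecting the silhouette images as textures lets graphics hardware evaluate this test for whole planes of volume samples per rendering pass, which is what makes the reconstruction interactive.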