
    The ASPECTA toolkit: affordable Full Coverage Displays

    Full Coverage Displays (FCDs) cover the interior surface of an entire room with pixels. FCDs make possible many new kinds of immersive display experiences - but current technology for building FCDs is expensive and complex, and software support for developing full-coverage applications is limited. To address these problems, we introduce ASPECTA, a hardware configuration and software toolkit that provide a low-cost and easy-to-use solution for creating full coverage systems. We outline ASPECTA's (minimal) hardware requirements and describe the toolkit's architecture, development API, server implementation, and configuration tool; we also provide a full example of how the toolkit can be used. We performed two evaluations of the toolkit: a case study of a research system built with ASPECTA, and a laboratory study that tested the effectiveness of the API. Our evaluations, as well as multiple examples of ASPECTA in use, show how ASPECTA can simplify configuration and development while dramatically reducing the cost for creators of applications that take advantage of full-coverage displays.
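
    The abstract describes ASPECTA's development API only at a high level, so the snippet below is a hypothetical sketch of how a full-coverage display toolkit might expose room surfaces to applications. The Surface class, the surface names, and to_pixels() are illustrative assumptions, not ASPECTA's actual API.

```python
# Hypothetical sketch (not the ASPECTA API): one way a full-coverage display
# toolkit might let applications address room surfaces in normalized
# coordinates and resolve them to projector pixels.

from dataclasses import dataclass


@dataclass
class Surface:
    name: str        # e.g. "north_wall" (assumed naming scheme)
    width_px: int    # projected resolution on this surface
    height_px: int


class FullCoverageDisplay:
    """Maps normalized surface coordinates (0..1) to projector pixels."""

    def __init__(self):
        self.surfaces = {}

    def add_surface(self, surface: Surface) -> None:
        self.surfaces[surface.name] = surface

    def to_pixels(self, surface_name: str, u: float, v: float):
        s = self.surfaces[surface_name]
        return int(u * (s.width_px - 1)), int(v * (s.height_px - 1))


room = FullCoverageDisplay()
room.add_surface(Surface("north_wall", 1920, 1080))
room.add_surface(Surface("floor", 1280, 800))
print(room.to_pixels("north_wall", 0.5, 0.25))  # a point near the top centre of the wall
```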

    3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes

    Three-dimensional TV is expected to be the next revolution in the history of television. We implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes. We developed a distributed, scalable architecture to manage the high computation and bandwidth demands. Our system consists of an array of cameras, clusters of network-connected PCs, and a multi-projector 3D display. Multiple video streams are individually encoded and sent over a broadband network to the display. The 3D display shows high-resolution (1024 × 768) stereoscopic color images for multiple viewpoints without special glasses. We implemented systems with rear-projection and front-projection lenticular screens. In this paper, we provide a detailed overview of our 3D TV system, including an examination of design choices and tradeoffs. We present the calibration and image alignment procedures that are necessary to achieve good image quality. We present qualitative results and some early user feedback. We believe this is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
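
    A quick back-of-the-envelope calculation illustrates why a distributed architecture with per-stream encoding is needed. The camera count, frame rate, and compression ratio below are assumptions chosen only for illustration; the 1024 × 768 per-view resolution is the figure quoted in the abstract.

```python
# Rough bandwidth estimate for a multi-camera 3D TV pipeline.
# NUM_CAMERAS, FPS, and COMPRESSION_RATIO are assumed values, not
# numbers taken from the paper.

NUM_CAMERAS = 16           # assumed size of the camera array
WIDTH, HEIGHT = 1024, 768  # per-view resolution quoted in the abstract
FPS = 30                   # assumed frame rate
BYTES_PER_PIXEL = 3        # 24-bit RGB
COMPRESSION_RATIO = 50     # assumed per-stream video codec ratio

raw_per_stream = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS  # bytes per second
raw_total = raw_per_stream * NUM_CAMERAS
encoded_total = raw_total / COMPRESSION_RATIO

print(f"raw per stream: {raw_per_stream / 1e6:7.1f} MB/s")
print(f"raw total:      {raw_total / 1e6:7.1f} MB/s")
print(f"encoded total:  {encoded_total / 1e6:7.1f} MB/s")

# Encoding each stream on its own PC keeps the per-node load roughly constant
# as cameras are added, which is the point of a distributed, scalable design.
```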

    Immersive Visualization for Enhanced Computational Fluid Dynamics Analysis

    Modern biomedical computer simulations produce spatiotemporal results that are often viewed at a single point in time on standard 2D displays. An immersive visualization environment (IVE) with 3D stereoscopic capability can mitigate some shortcomings of 2D displays via improved depth cues and active movement to further appreciate the spatial localization of imaging data with temporal computational fluid dynamics (CFD) results. We present a semi-automatic workflow for the import, processing, rendering, and stereoscopic visualization of high-resolution, patient-specific imaging data and CFD results in an IVE. The versatility of the workflow is highlighted using current clinical sequelae known to be influenced by adverse hemodynamics, illustrating its potential clinical utility.
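
    The sketch below mirrors the staged structure of the workflow (import, processing, rendering, stereoscopic display) in a generic pipeline abstraction. The Pipeline class and the stage names are illustrative assumptions rather than the authors' implementation, which the abstract does not detail.

```python
# Minimal sketch of a staged visualization workflow. Stage names and the
# Pipeline class are assumptions made for illustration only.

from typing import Any, Callable, List, Tuple


class Pipeline:
    """Runs named stages in order, threading the data through each one."""

    def __init__(self) -> None:
        self.stages: List[Tuple[str, Callable[[Any], Any]]] = []

    def add(self, name: str, fn: Callable[[Any], Any]) -> "Pipeline":
        self.stages.append((name, fn))
        return self

    def run(self, data: Any) -> Any:
        for name, fn in self.stages:
            print(f"[stage] {name}")
            data = fn(data)
        return data


# Stub stages; a real workflow would wrap imaging I/O, mesh processing,
# and a stereo renderer at these points.
pipeline = (
    Pipeline()
    .add("import patient imaging", lambda d: d)
    .add("align CFD results to imaging", lambda d: d)
    .add("render surfaces and flow fields", lambda d: d)
    .add("present stereo pair in the IVE", lambda d: d)
)
pipeline.run({"case_id": "example"})
```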

    VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera

    We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low-quality commodity RGB cameras.
    Comment: Accepted to SIGGRAPH 2017
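
    As a rough illustration of why fitting a kinematic skeleton stabilizes per-frame CNN output, the toy sketch below snaps noisy 3D joint predictions to fixed bone lengths and applies exponential temporal smoothing. The joint hierarchy, bone lengths, and smoothing factor are assumptions, and this is not the paper's actual fitting procedure.

```python
# Toy sketch: stabilize noisy per-frame 3D joint estimates (a stand-in for
# CNN regressor output) by enforcing fixed bone lengths and smoothing over
# time. Joint hierarchy and bone lengths below are assumed, not from VNect.

import numpy as np

# Hypothetical 4-joint chain: pelvis -> spine -> neck -> head
PARENT = {1: 0, 2: 1, 3: 2}
BONE_LENGTH = {1: 0.45, 2: 0.25, 3: 0.12}  # metres, assumed skeleton


def fit_skeleton(joints: np.ndarray) -> np.ndarray:
    """Snap each child joint to its fixed bone length from its parent."""
    fitted = joints.copy()
    for child, parent in PARENT.items():
        direction = fitted[child] - fitted[parent]
        norm = np.linalg.norm(direction)
        if norm > 1e-8:
            fitted[child] = fitted[parent] + direction / norm * BONE_LENGTH[child]
    return fitted


def smooth(prev: np.ndarray, current: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Exponential smoothing for temporal stability across frames."""
    return alpha * current + (1 - alpha) * prev


rng = np.random.default_rng(0)
pose = np.array([[0, 0, 0], [0, 0.44, 0], [0, 0.70, 0], [0, 0.81, 0]], dtype=float)
previous_frame = pose                                    # last frame's stabilized pose
noisy = pose + rng.normal(scale=0.02, size=pose.shape)   # stand-in for CNN output
stable = smooth(previous_frame, fit_skeleton(noisy))
print(np.round(stable, 3))
```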