3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes
Three-dimensional TV is expected to be the next revolution in the history of television. We implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes. We developed a distributed, scalable architecture to manage the high computation and bandwidth demands. Our system consists of an array of cameras, clusters of network-connected PCs, and a multi-projector 3D display. Multiple video streams are individually encoded and sent over a broadband network to the display. The 3D display shows high-resolution (1024 × 768) stereoscopic color images for multiple viewpoints without special glasses. We implemented systems with rear-projection and front-projection lenticular screens. In this paper, we provide a detailed overview of our 3D TV system, including an examination of design choices and tradeoffs. We present the calibration and image alignment procedures that are necessary to achieve good image quality. We present qualitative results and some early user feedback. We believe this is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
Subjective evaluation of an active crosstalk reduction system for mobile autostereoscopic displays
The Quality of Experience (QoE) provided by autostereoscopic 3D displays strongly depends on the user position. For optimal image quality, the observer should be located at one of the relevant positions, called sweet spots, where artifacts that reduce the QoE, such as crosstalk, are minimal. In this paper, we propose and evaluate a complete active crosstalk reduction system running on an HTC EVO 3D smartphone. To determine the crosstalk level at each position, a full display characterization was performed. Based on the user position and the crosstalk profile, the system first helps the user find the sweet spot using visual feedback. If the user moves away from the sweet spot, active crosstalk compensation is performed and the reverse-stereo phenomenon is corrected. User preference among the standard 2D and 3D modes and the proposed system was evaluated through a subjective quality assessment. Results show that in terms of depth perception, the proposed system clearly outperforms the 3D and 2D modes. In terms of image quality, the 2D mode was found to be best, but the proposed system outperforms the 3D mode.
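The compensation step can be pictured with a simple linear leakage model. The sketch below is an illustrative assumption, not the paper's implementation: it treats crosstalk as a symmetric per-pixel mixing with a hypothetical ratio `c` and inverts that mixing before display.

```python
# Minimal sketch of linear crosstalk precompensation, assuming a symmetric
# leakage model (NOT the paper's measured, position-dependent profile):
#   perceived_L = shown_L + c * shown_R
#   perceived_R = shown_R + c * shown_L
# We solve for the images to show so the perceived pair matches the intended one.
import numpy as np

def precompensate(intended_L, intended_R, c):
    """Invert the 2x2 leakage mixing per pixel; c is the crosstalk ratio (0..1)."""
    det = 1.0 - c * c
    shown_L = (intended_L - c * intended_R) / det
    shown_R = (intended_R - c * intended_L) / det
    # Displays cannot emit negative light; clipping is where compensation
    # fails for dark pixels, which motivates position-aware approaches.
    return np.clip(shown_L, 0.0, 1.0), np.clip(shown_R, 0.0, 1.0)
```

When no clipping occurs, substituting the returned pair back into the mixing model reproduces the intended images exactly.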
Methods for reducing visual discomfort in stereoscopic 3D: A review
This work was supported by the EPSRC Grant EP/M01469X/1, “Geometric Evaluation of Stereoscopic Video”
Future Directions in Astronomy Visualisation
Despite the large budgets spent annually on astronomical research equipment
such as telescopes, instruments and supercomputers, the general trend is to
analyse and view the resulting datasets using small, two-dimensional displays.
We report here on alternative advanced image displays, with an emphasis on
displays that we have constructed, including stereoscopic projection, multiple
projector tiled displays and a digital dome. These displays can provide
astronomers with new ways of exploring the terabyte and petabyte datasets that
are now regularly being produced from all-sky surveys, high-resolution computer
simulations, and Virtual Observatory projects. We also present a summary of the
Advanced Image Displays for Astronomy (AIDA) survey which we conducted from
March-May 2005, in order to raise some issues pertinent to the current and
future level of use of advanced image displays.
Comment: 13 pages, 2 figures, accepted for publication in PAS
Rendering and display for multi-viewer tele-immersion
Video teleconferencing systems are widely deployed for business, education and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity, by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user position restrictions than existing autostereoscopic displays.
The second is a random hole barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays, and provides unique 3D views for multiple users in arbitrary locations.
Holoscopic 3D imaging and display technology: Camera/ processing/ display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Holoscopic 3D imaging, or “integral imaging”, was first proposed by Lippmann in 1908. It has become an attractive technique for creating a full-colour 3D scene that exists in space. It uses a single camera aperture for recording the spatial information of a real scene, together with a regularly spaced microlens array that simulates the principle of the fly's-eye technique, creating physical duplicates of the light field (a “true 3D imaging” technique).
While stereoscopic and multiview 3D imaging systems which simulate human eye technique are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate spatial resolution of holoscopic 3D imaging and display technology, which includes holoscopic 3D camera, processing and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM enhances 3D pixels per inch from 44 3D-PPI to 176 3D-PPI horizontally and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
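The horizontal-for-vertical trade can be illustrated with a toy remapping. The function below is a hypothetical sketch, not the thesis's 4D-DSPM: it regroups vertically adjacent 3D-pixel samples into horizontal runs, so a 1536-row, 341-column grid becomes a 384 × 1364 one (the exact 1365 × 384 figure above presumably reflects additional boundary handling in the real method).

```python
import numpy as np

def remap_trade(grid, factor):
    """Toy pixel remapping (illustrative only): move `factor` vertically
    adjacent 3D-pixel samples into one horizontal run, trading vertical
    for horizontal resolution."""
    h, w = grid.shape
    assert h % factor == 0, "height must divide evenly by the trade factor"
    return (grid.reshape(h // factor, factor, w)
                .transpose(0, 2, 1)          # interleave the vertical samples
                .reshape(h // factor, w * factor))
```

On a 1536 × 341 array with `factor=4` this yields shape (384, 1364): a fourfold horizontal gain paid for by a fourfold vertical loss, the same kind of trade the thesis describes.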
Augmented Reality for Restoration/Reconstruction of Artefacts with Artistic or Historical Value
The artistic or historical value of a structure, such as a monument, a mosaic, a painting or, generally speaking, an artefact, arises from the novelty and the development it represents in a certain field and at a certain time of human activity. The more faithfully the structure preserves its original status, the greater its artistic and historical value. For this reason it is fundamental to preserve its original condition, keeping it as genuine as possible over time.
Nevertheless, preservation of a structure is not always possible (traumatic events such as wars can occur), nor has it always been attempted, whether through negligence, incompetence, or even willful neglect. So, unfortunately, the present status of a significant number of such structures ranges from bad to catastrophic.
In this context, current technology provides fundamental help for reconstruction/restoration, bringing a structure back to its original historical value and condition. Among modern facilities, new possibilities arise from Augmented Reality (AR) tools, which combine virtual reality (VR) settings with real physical materials and instruments.
The idea is to carry out a virtual reconstruction/restoration before materially acting on the structure itself. This yields several advantages: manpower and machine power are used only in the final phase of the reconstruction; potential damage or abrasion to parts of the structure is avoided during the cataloguing phase; the forms and dimensions of any missing pieces can be precisely defined; etc.
The virtual reconstruction/restoration can be further improved by taking advantage of AR, which furnishes many additional informative parameters that can be fundamental under specific circumstances. Here we detail the application of AR to the restoration and reconstruction of structures with artistic and/or historical value.
OCULAR VERGENCE RESPONSE OVER ANAGLYPHIC STEREOSCOPIC VIDEOS
The effect of anaglyphic stereographic stimuli on ocular vergence response is examined. An experiment is performed comparing the ocular vergence response induced by anaglyphic stereographic display versus standard monoscopic display. Two visualization tools, synchronized three-dimensional scanpath playback and real-time dynamic heatmap generation, are developed and used to subjectively support the quantitative analysis of ocular disparity. The results of a one-way ANOVA indicate that there is a highly significant effect of anaglyphic stereoscopic display on ocular vergence for a majority of subjects, although the consistency of the vergence response is difficult to predict.
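The one-way ANOVA used in this kind of analysis reduces to comparing between-group and within-group variance. A minimal self-contained version (illustrative, not the study's code) is:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D samples:
    ratio of between-group to within-group mean squares."""
    data = np.concatenate(groups)
    grand = data.mean()
    k, n = len(groups), data.size
    # Variation of group means around the grand mean, weighted by group size.
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    # Variation of samples around their own group mean.
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With per-subject vergence measures grouped by display condition (stereoscopic vs. monoscopic), a large F relative to the F(k−1, n−k) critical value corresponds to the "highly significant effect" reported above.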