
    Adopting multiview pixel mapping for enhancing quality of holoscopic 3D scene in parallax barriers based holoscopic 3D displays

    Autostereoscopic multiview 3D displays are well developed and widely available on the commercial market. Pixel mapping techniques have brought substantial improvements, achieving acceptable 3D resolution with a balanced pixel aspect ratio in lens-array technology. This paper proposes adopting multiview pixel mapping to enhance the quality of the reconstructed holoscopic 3D scene on parallax-barrier-based holoscopic 3D displays, with strong results. Holoscopic imaging mimics the compound-eye imaging of insects such as the fly, using a single camera equipped with a large number of micro-lenses to capture a scene, offering rich parallax information and an enhanced 3D effect without the need for special eyewear. In addition, pixel mapping and holoscopic 3D rendering tools were developed, including a custom-built holoscopic 3D display, to test the proposed method and carry out a like-for-like comparison. This work has been supported by the European Commission under Grant FP7-ICT-2009-4 (3DVIVANT). The authors wish to express their gratitude and thanks for the support given throughout the project.
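    At the heart of any multiview pixel mapping scheme is the interleaving step that decides which captured view each display sub-pixel shows, based on its position relative to the parallax barrier or lens array. The sketch below illustrates that idea in Python; the slanted assignment formula, view count, and function name are illustrative assumptions, not the mapping proposed in the paper.

        import numpy as np

        def interleave_views(views, num_views, slant=1.0 / 3.0):
            """Compose one display frame for a parallax-barrier multiview display.

            views: array of shape (num_views, H, W, 3), one image per viewpoint.
            Each RGB sub-pixel is assigned to a view according to its horizontal
            sub-pixel index and row; the slanted assignment is a common way to
            spread the resolution loss over both axes (an assumption here, not
            the paper's mapping).
            """
            _, height, width, _ = views.shape
            frame = np.zeros((height, width, 3), dtype=views.dtype)
            for y in range(height):
                for x in range(width):
                    for c in range(3):                     # R, G, B sub-pixels
                        subpixel_x = 3 * x + c             # horizontal sub-pixel index
                        view = int(subpixel_x + y * 3 * slant) % num_views
                        frame[y, x, c] = views[view, y, x, c]
            return frame

        # Usage: interleave eight synthetic views into one display frame.
        views = np.random.randint(0, 255, (8, 270, 480, 3), dtype=np.uint8)
        frame = interleave_views(views, num_views=8)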

    Depth Image-Based Rendering for Full Parallax Displays: Rendering, Compression, and Interpolation of Content for Autostereoscopic Poster and Video Displays

    Advancements in production and display techniques have enabled novel displays that project a high-resolution light field for static poster content as well as video content. These displays offer full parallax, so an audience can perceive a stereoscopic view of the scene, adjusted to each observer's position, without special glasses. Such displays are intended for public places, where the audience does not wear special glasses and is not restricted in movement. Rendering, storing, and transferring the large amount of data these displays require is a challenge: the image data for a static poster display is about 200 GB, and the data rate for video displays is expected to be two to four orders of magnitude higher than that of HDTV. In this work the challenges are met by utilising depth image-based rendering (DIBR) to reduce the amount of data at the very beginning, during rendering: only a fraction of the colour and depth images are rendered and then used to interpolate the full data set. Rendering with state-of-the-art ray tracers is described, and a novel method to render image data for full parallax displays using OpenGL is contributed that addresses some shortcomings of previous approaches. For static poster displays, a scene-based representation for image interpolation is introduced that efficiently utilises the multi-core processors and graphics hardware found on modern workstations for parallelisation. The introduced approach implements lossy compression of the input data and handles arbitrary scenes using a novel BNV selection algorithm. For video displays, the real-time constraint does not allow costly interpolation or scene analysis; hence, a novel approach is presented that uses a basic, computationally inexpensive interpolation and combines the interpolation results of different image representations without introducing prominent artefacts.
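    The data reduction described here hinges on depth image-based rendering: only some colour-plus-depth views are rendered, and the remaining views are synthesised by warping. The following is a minimal Python sketch of horizontal forward warping under a shift-sensor camera model; the disparity formula, parameter names, and hole handling are simplifying assumptions, not the thesis's implementation.

        import numpy as np

        def forward_warp(color, depth, baseline, focal, min_depth=0.1):
            """Warp a reference view to a horizontally shifted target view (DIBR sketch).

            color: (H, W, 3) image; depth: (H, W) metric depth of the same view.
            Disparity is approximated as focal * baseline / depth (shift-sensor
            model, an assumption); pixels are splatted with a simple z-buffer to
            resolve occlusions, leaving holes where no source pixel maps.
            """
            h, w, _ = color.shape
            target = np.zeros_like(color)
            zbuf = np.full((h, w), np.inf)
            for y in range(h):
                for x in range(w):
                    d = max(depth[y, x], min_depth)
                    disparity = focal * baseline / d
                    tx = int(round(x + disparity))
                    if 0 <= tx < w and d < zbuf[y, tx]:   # keep the nearest surface
                        zbuf[y, tx] = d
                        target[y, tx] = color[y, x]
            # Holes (all-zero pixels) would be filled by the interpolation stage.
            return target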

    Rendering and display for multi-viewer tele-immersion

    Video teleconferencing systems are widely deployed for business, education and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth-dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer restrictions on user position than existing autostereoscopic displays. The second is a random hole barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays and provides unique 3D views for multiple users in arbitrary locations.
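    One of the rendering techniques named above, plane sweeping, scores a set of candidate depth planes by photo-consistency across the input cameras in order to synthesise a virtual view. The sketch below shows the general idea in Python for a rectified, horizontally aligned camera array; the cost function, integer shifts, and wrap-around warping are simplifying assumptions rather than the dissertation's implementation.

        import numpy as np

        def plane_sweep(images, cam_offsets, disparities):
            """Synthesise a virtual view by plane sweeping (rectified, 1D baseline sketch).

            images: list of (H, W, 3) camera views; cam_offsets: horizontal offset of
            each camera from the virtual viewpoint, in pixels of shift per unit of
            disparity (hypothetical parametrisation). For each candidate disparity
            plane, every input image is shifted into the virtual view; per pixel, the
            plane whose colours agree best (lowest variance across cameras) wins.
            """
            h, w, _ = images[0].shape
            best_cost = np.full((h, w), np.inf)
            result = np.zeros((h, w, 3))
            for d in disparities:
                warped = []
                for img, off in zip(images, cam_offsets):
                    shift = int(round(off * d))
                    warped.append(np.roll(img.astype(np.float64), shift, axis=1))
                stack = np.stack(warped)                   # (n_cams, H, W, 3)
                cost = stack.var(axis=0).sum(axis=2)       # photo-consistency cost
                better = cost < best_cost
                best_cost[better] = cost[better]
                result[better] = stack.mean(axis=0)[better]
            return result.astype(np.uint8)

        # Usage: three synthetic rectified views, virtual view in the middle.
        cams = [np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8) for _ in range(3)]
        virtual = plane_sweep(cams, cam_offsets=[-1.0, 0.0, 1.0], disparities=range(16))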

    Widening Viewing Angles of Automultiscopic Displays using Refractive Inserts


    Multiperspective Modeling and Rendering Using General Linear Cameras
