10 research outputs found

    A large scale interactive holographic display

    Conference held in Alexandria, VA, USA, March 26, 2006 (CD-ROM proceedings). Our work focuses on the development of interactive multi-user holographic displays that allow freely moving, naked-eye participants to share a three-dimensional scene with fully continuous, observer-independent parallax. Our approach is based on a scalable design that exploits a specially arranged array of projectors and a holographic screen. The feasibility of such an approach has already been demonstrated with a working hardware and software 7.4-megapixel prototype driven at 10-15 Hz by two DVI streams. In this short contribution, we illustrate our progress, presenting a 50-megapixel display prototype driven by a dedicated cluster hosting multiple consumer-level graphics cards.
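    As a back-of-the-envelope illustration of why a 50-megapixel prototype calls for a cluster of multi-output graphics cards (my own arithmetic sketch: only the pixel counts come from the abstract, the per-output DVI resolution is an assumed value):

        # Hypothetical arithmetic sketch; the assumed DVI output resolution
        # (1920x1200) is illustrative, only the pixel counts come from the text.
        pixels_total   = 50e6            # optical pixels quoted for the new prototype
        pixels_per_out = 1920 * 1200     # assumed pixels carried by one DVI output
        outputs_needed = pixels_total / pixels_per_out   # ~22 DVI streams
        cards_needed   = outputs_needed / 2              # ~11 dual-output consumer cards
        print(round(outputs_needed), round(cards_needed))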

    Self-Supervised Light Field Reconstruction Using Shearlet Transform and Cycle Consistency

    The image-based rendering approach using the Shearlet Transform (ST) is one of the state-of-the-art Densely-Sampled Light Field (DSLF) reconstruction methods. It reconstructs Epipolar-Plane Images (EPIs) in the image domain via an iterative regularization algorithm restoring their coefficients in the shearlet domain. Consequently, the ST method tends to be slow because of the time spent on domain transformations over dozens of iterations. To overcome this limitation, this letter proposes a novel self-supervised DSLF reconstruction method, CycleST, which applies ST and cycle consistency to DSLF reconstruction. Specifically, CycleST is composed of an encoder-decoder network and a residual learning strategy that restore the shearlet coefficients of densely-sampled EPIs using EPI reconstruction and cycle consistency losses. Moreover, CycleST is a self-supervised approach that can be trained solely on Sparsely-Sampled Light Fields (SSLFs) with small disparity ranges (≤ 8 pixels). Experimental results of DSLF reconstruction on SSLFs with large disparity ranges (16 - 32 pixels) from two challenging real-world light field datasets demonstrate the effectiveness and efficiency of the proposed CycleST method. Furthermore, CycleST achieves at least a ~9x speedup over ST.
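    As a rough sketch of the training objective described above (not the authors' code: the real CycleST operates on shearlet coefficients of EPIs), the snippet below combines an EPI reconstruction loss with a cycle-consistency loss around a hypothetical residual encoder-decoder; the row masking is a crude stand-in for angular subsampling of a small-disparity SSLF.

        # Hypothetical PyTorch sketch; RestorationNet and the masking scheme are
        # illustrative stand-ins, not the CycleST architecture.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class RestorationNet(nn.Module):
            """Toy encoder-decoder that predicts a residual correction to an EPI."""
            def __init__(self, ch=1):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, ch, 3, padding=1),
                )

            def forward(self, epi):
                return epi + self.body(epi)          # residual learning

        def training_loss(net, target_epi, cycle_weight=0.1):
            """target_epi: (batch, 1, angular, spatial) EPI from a small-disparity
            SSLF; its angular rows are masked to simulate a sparse input."""
            mask = torch.ones_like(target_epi)
            mask[:, :, 1::2, :] = 0.0                # drop every other angular row
            restored = net(target_epi * mask)
            rec = F.l1_loss(restored, target_epi)    # EPI reconstruction loss
            # Cycle consistency: re-sparsifying the restored EPI and restoring it
            # again should reproduce the first restoration.
            cycled = net(restored * mask)
            cyc = F.l1_loss(cycled, restored.detach())
            return rec + cycle_weight * cyc

        # Usage sketch: net = RestorationNet(); loss = training_loss(net, epi_batch)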

    A Virtual Holographic Display Case for Museum Installations

    Today, it is important for society to make artworks accessible to mass audiences and to widen participation in culture. In this context, virtual reality is one of the areas of greatest interest: new devices and techniques are affordable for many users, and virtual and real worlds are often mixed together. In this paper, we propose a "virtual holographic" display, i.e. a stereoscopic virtual reality system that is able to replicate the behavior of a real showcase for exhibitions. It works in a completely virtual manner and can lead to a new generation of "holographic" entertainment installations. We evaluate the system through an experimental session with 20 users. In particular, we compare the proposed system, based on a stereoscopic technique (TD3D), with a standard motion parallax technique in terms of the users' perceptual experience.

    A Survey of Signal Processing Problems and Tools in Holographic Three-Dimensional Television

    Diffraction and holography are fertile areas for the application of signal theory and processing. Recent work on 3DTV displays has posed particularly challenging signal processing problems. Various procedures to compute Rayleigh-Sommerfeld, Fresnel and Fraunhofer diffraction exist in the literature. Diffraction between parallel planes and tilted planes can be efficiently computed. Discretization and quantization of diffraction fields yield interesting theoretical and practical results, and allow efficient schemes compared to commonly used Nyquist sampling. The literature on computer-generated holography provides a good resource for holographic 3DTV related issues. Fast algorithms to compute Fourier, Walsh-Hadamard, fractional Fourier, linear canonical, Fresnel, and wavelet transforms, as well as optimization-based techniques such as best orthogonal basis, matching pursuit, and basis pursuit, are especially relevant signal processing techniques for wave propagation, diffraction, holography, and related problems. Atomic decompositions, multiresolution techniques, Gabor functions, and Wigner distributions are among the signal processing techniques which have been or may be applied to problems in optics. Research aimed at solving such problems at the intersection of wave optics and signal processing promises not only to facilitate the development of 3DTV systems, but also to contribute to fundamental advances in optics and signal processing theory. © 2007 IEEE.
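    Several of the diffraction computations the survey refers to (e.g. propagation between parallel planes) are commonly implemented with FFTs. The following is a generic angular-spectrum propagation sketch, not code from the paper; grid size, pixel pitch, wavelength and propagation distance are illustrative assumptions.

        # Generic angular-spectrum (Rayleigh-Sommerfeld) propagation sketch.
        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate complex field u0 (n x n, sample pitch dx) a distance z."""
            n = u0.shape[0]
            fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            arg = np.clip(arg, 0.0, None)                # drop evanescent components
            h = np.exp(2j * np.pi / wavelength * z * np.sqrt(arg))
            return np.fft.ifft2(np.fft.fft2(u0) * h)

        # Example: a small square aperture illuminated by a plane wave.
        n, dx, lam = 512, 8e-6, 633e-9                   # 8 um pitch, HeNe wavelength
        u0 = np.zeros((n, n), dtype=complex)
        u0[n//2-32:n//2+32, n//2-32:n//2+32] = 1.0
        u1 = angular_spectrum_propagate(u0, lam, dx, z=0.05)
        intensity = np.abs(u1) ** 2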

    Rendering and display for multi-viewer tele-immersion

    Video teleconferencing systems are widely deployed for business, education and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity, by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth-dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user position restrictions than existing autostereoscopic displays. The second is a random hole barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays, and provides unique 3D views for multiple users in arbitrary locations.
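    As a simplified illustration of the plane-sweeping idea mentioned in this abstract (a hedged reconstruction, not the dissertation's implementation), the sketch below warps source camera images onto candidate fronto-parallel depth planes via plane-induced homographies and keeps, per pixel, the colour from the most photo-consistent plane; pinhole camera models are assumed and visibility/out-of-bounds handling is omitted.

        # Simplified plane-sweep view synthesis sketch (illustrative only).
        import numpy as np
        import cv2

        def plane_homography(K_src, K_virt, R, t, depth):
            """Map a virtual-view pixel to a source-view pixel through the plane
            Z = depth in the virtual camera frame. R, t are assumed to satisfy
            X_src = R @ X_virt + t."""
            n = np.array([0.0, 0.0, 1.0])
            H = K_src @ (R + np.outer(t, n) / depth) @ np.linalg.inv(K_virt)
            return H / H[2, 2]

        def plane_sweep(images, K_srcs, Rs, ts, K_virt, depths, out_shape):
            """Per pixel, keep the depth plane with the lowest colour variance
            across the warped source images (photo-consistency)."""
            h, w = out_shape
            best_cost = np.full((h, w), np.inf)
            output = np.zeros((h, w, 3), np.uint8)
            for d in depths:
                warped = [
                    cv2.warpPerspective(img, plane_homography(K, K_virt, R, t, d),
                                        (w, h),
                                        flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
                    for img, K, R, t in zip(images, K_srcs, Rs, ts)
                ]
                stack = np.stack(warped).astype(np.float32)
                cost = stack.var(axis=0).mean(axis=-1)     # colour variance score
                better = cost < best_cost
                best_cost[better] = cost[better]
                output[better] = stack.mean(axis=0).astype(np.uint8)[better]
            return output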

    Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding

    Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation, and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, with real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects popping far out from the screen plane without glasses impresses even those viewers who have experienced other 3D displays before.

    Content for these displays can be either synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to the complexity of rendering techniques for 2D displays. While rendering can be used in many use cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in every living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye contact.

    In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°) while currently reproducing up to ~80 megapixels. Building gigapixel light field displays is realistic in the next few years. Likewise, capturing live light fields involves using many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies. Two major challenges in this process are addressed in this dissertation.

    The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, which describes the smallest angle whose color the display can control individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially localized and angularly directed light rays in the so-called ray space. Plenoptic Sampling Theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal processing channel and analyses it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them.

    While the geometrical topology of optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurements. Measurement methods that involve the display showing specific test patterns, which are then captured by a single static or moving camera, are proposed in the thesis. Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images, as they are reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and the angular dimension. Furthermore, the spatial resolution measurements are validated by subjective tests confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

    The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. This clearly requires effective and efficient compression techniques to fit into the available bandwidth, as an uncompressed representation of such a super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capturing, transmission and rendering using a brute-force approach, limitations became apparent. Still, due to the best possible image quality achievable with dense multi-camera light field capturing and light ray interpolation, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompressing all camera images in all rendering nodes, however, is prohibitively time-consuming and does not scale. After analyzing the light field interpolation process and the data-access patterns typical in a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes has been proposed. This approach, on the other hand, requires only rectangular parts (typically vertical bars in the case of a Horizontal Parallax Only light field display) of the captured images to be available in the rendering nodes, which might be exploited to reduce the time spent on decompressing video streams. However, partial decoding is not readily supported by common image/video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG and JPEG2000, and the results are compared.

    The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs for achieving a compelling visual experience.
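    As a rough, hedged illustration of how such a display profile drives the capture setup and the raw data rate (the specific numbers below are assumptions chosen for illustration; only the ~80-megapixel and ~20 Gbit/s orders of magnitude come from the abstract):

        # Illustrative arithmetic only; field of view, angular step, screen
        # resolution and camera parameters are assumed values.
        fov_deg         = 180.0          # horizontal field of view of the display
        angular_res_deg = 0.8            # assumed smallest addressable angle
        screen_res      = (720, 480)     # assumed equivalent 2D screen resolution

        # A horizontal-parallax-only display needs roughly one view per
        # angular-resolution step across its field of view.
        num_views = round(fov_deg / angular_res_deg)                        # 225 views
        display_mpixels = screen_res[0] * screen_res[1] * num_views / 1e6   # ~78 MPixel

        # Raw rate of a hypothetical capture rig feeding such a display.
        cameras, cam_w, cam_h, fps, bpp = 27, 1920, 1080, 30, 12            # YUV 4:2:0
        gbit_per_s = cameras * cam_w * cam_h * bpp * fps / 1e9              # ~20 Gbit/s

        print(num_views, round(display_mpixels), round(gbit_per_s, 1))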

    EL VIDEOHOLOGRAMA COMO PRÁCTICA ARTÍSTICA: PROPUESTA EXPERIMENTAL EN LA VISUALIZACIÓN 3D. / VIDEO HOLOGRAM AS AN ARTISTIC PRACTICE: AN EXPERIMENTAL PROPOSAL IN 3D VISUALIZATION

    This research forms part of a study of the holographic image and its optical principle, examining its aesthetic characteristics and the different applications of this type of image in artistic language and in audiovisual and cinematographic media, in order to find an experimental way of using it in interactive and performative art. Since the start of the 21st century, many forms of pseudo-holographic video have been developed, using different techniques and devices. This study sets out to investigate the latest audiovisual techniques and the latest three-dimensional visualization devices, used by many mass media from advertising to fashion, in order to develop an artistic proposal that we define as the video hologram, understood as an audiovisual hybrid that uses the medium of video and the aesthetics of the holographic image. The practice of the video hologram allows us to play with the viewer's visual perception and, to a certain extent, achieve a kind of immersion in another reality without the need for external devices that amplify the senses. This type of experiment leads us to study broader fields of perception, up to spatial and synesthetic perception. The main objective of the research is therefore to build a theoretical and practical basis that explains and systematizes this use of the video hologram in its different applications, covering all the aspects, characteristics and elements that compose it, and to examine current technologies and their effectiveness as tools for artistic creation, both at the level of representation and in terms of activating interactive practices in hybrid spaces. Motivated by the interest in showing the evolution from earlier techniques that have made this new one (the video hologram) possible, we start from the premise that pseudo-holographic techniques create a distinct form of their own. Mereu, F. (2012). EL VIDEOHOLOGRAMA COMO PRÁCTICA ARTÍSTICA: PROPUESTA EXPERIMENTAL EN LA VISUALIZACIÓN 3D. / VIDEO HOLOGRAM AS AN ARTISTIC PRACTICE: AN EXPERIMENTAL PROPOSAL IN 3D VISUALIZATION [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15186
