
    Real-time refocusing using an FPGA-based standard plenoptic camera

    Plenoptic cameras are receiving increased attention in scientific and commercial applications because they capture the entire structure of light in a scene, enabling optical transforms (such as focusing) to be applied computationally after the fact, rather than once and for all at the time a picture is taken. In many settings, real-time interactive performance is also desired, which in turn requires significant computational power due to the large amount of data required to represent a plenoptic image. Although GPUs have been shown to provide acceptable performance for real-time plenoptic rendering, their cost and power requirements make them prohibitive for embedded uses (such as in-camera). On the other hand, the computation to accomplish plenoptic rendering is well structured, suggesting the use of specialized hardware. Accordingly, this paper presents an array of switch-driven finite impulse response filters, implemented on an FPGA to accomplish high-throughput spatial-domain rendering. The proposed architecture provides a power-efficient rendering hardware design suitable for full-video applications as required in broadcasting or cinematography. A benchmark assessment of the proposed hardware implementation shows that real-time performance can readily be achieved, with a one-order-of-magnitude performance improvement over a GPU implementation and a three-orders-of-magnitude improvement over a general-purpose CPU implementation.
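    As a concrete illustration of the spatial-domain rendering this abstract refers to, the sketch below performs plenoptic refocusing by shift-and-sum of sub-aperture views in Python with NumPy. The array layout, the parameter alpha, and the integer shifts with wrap-around are simplifying assumptions of this sketch; the paper realizes the equivalent accumulation with switch-driven FIR filters on an FPGA.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-sum refocusing of a standard plenoptic capture.

    lightfield : (U, V, S, T) array of sub-aperture views.
    alpha      : synthetic focal-plane parameter (alpha = 1 keeps
                 the captured focus).
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each view is shifted in proportion to its angular offset
            # from the array centre, then accumulated and averaged.
            du = int(round((u - U // 2) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - V // 2) * (1.0 - 1.0 / alpha)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```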

    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images. This broad interest is evidenced by widespread international research and activities on 3D technologies. There is a large number of journal and conference papers on 3D systems, as well as research and development efforts in government, industry, and academia on this topic for broad applications including entertainment, manufacturing, security and defense, and biomedical applications. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light for scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptics or light-field systems, are applicable in many fields, and they have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to the students and researchers in different disciplines who are interested in learning about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin with a review of the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptic systems, the methods for implementing the 3D capture procedures and monitors, their properties, resolution, field of view, performance, and the metrics to assess them. We have illustrated the principles of integral imaging capture and display systems with simple laboratory setups and experiments. We have also discussed 3D biomedical applications, such as integral microscopy.

    FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples

    In this work, the Fourier integral microscope (FIMic), an ultimate design for 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhancement of the depth of field and a √2-times better spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens.

    Toward 3D integral-imaging broadcast with increased viewing angle and parallax

    We propose a new method for improving the observer experience when using an integral monitor. Our method permits increasing the viewing angle of the integral monitor, as well as the maximum parallax that can be displayed. Additionally, it is possible to decide which parts of the 3D scene are displayed in front of or behind the monitor. Our method is based, first, on the direct capture, with a significant excess of parallax, of elemental images of 3D real scenes. From them, a collection of microimages adapted to the observer's lateral and depth position is calculated. Finally, an eye-tracking system permits determining the 3D position of the observer, and therefore displaying the adequate set of microimages. Summarizing, we report here, for the first time we believe, the application of eye-tracking technology to the display of integral images of 3D real scenes with a bright background. Although we report only a proof-of-concept experiment, this result could have direct application in the near future to the broadcasting of 3D videos recorded in a professional studio, to videoconferences, or to online professional meetings.
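    A minimal sketch of the display loop this abstract outlines: the tracked 3D observer position selects which precomputed microimage set is shown. The tracker object, its get_eye_position() method, and the 50 mm viewing-cell size are hypothetical stand-ins, not the authors' implementation.

```python
def observer_cell(x, y, z, cell_mm=50.0):
    # Quantize the tracked eye position (mm) into a discrete viewing
    # cell, so each cell can be paired with a precomputed set of
    # microimages adapted to that lateral and depth position.
    return (round(x / cell_mm), round(y / cell_mm), round(z / cell_mm))

def next_frame(microimage_sets, tracker):
    # microimage_sets: dict mapping a viewing cell to the microimage
    # tiling calculated from the elemental images for that position.
    x, y, z = tracker.get_eye_position()
    return microimage_sets[observer_cell(x, y, z)]
```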

    Fusion of computed point clouds and integral-imaging concepts for full-parallax 3D display

    During the last century, various technologies for 3D image capture and visualization have been in the spotlight, due both to their pioneering nature and to the aspiration to extend the applications of conventional 2D imaging technology to 3D scenes. Besides, thanks to advances in opto-electronic imaging technologies, the possibilities of capturing and transmitting 2D images in real time have progressed significantly, boosting the growth of 3D image capture, processing, and transmission, as well as display techniques. Among the latter, integral-imaging technology has been considered one of the most promising for restoring real 3D scenes, through the use of a multi-view visualization system that provides observers with a sense of immersive depth. Many research groups and companies have researched this technique with different approaches and for various complementary applications. In this work, we followed this trend, but proceeded through our own novel strategies and algorithms; thus, we may say that our approach is innovative when compared to conventional proposals. The main objective of our research is to develop techniques that allow recording and simulating a natural 3D scene by using several cameras of different types and characteristics. Then, we compose a dense 3D scene from the computed 3D data by using various methods and techniques. Finally, we provide a volumetric scene, restored with great similarity to the original shape, through a comprehensive 3D monitor and/or display system. Our proposed integral-imaging monitor provides an immersive experience to multiple observers. In this thesis we address the challenges of integral-image production techniques based on computed 3D information, and we focus in particular on the implementation of a full-parallax 3D display system. We have also made progress in overcoming the limitations of the conventional integral-imaging technique. In addition, we have developed different refinement methodologies and restoration strategies for the composed depth information. Finally, we have applied an adequate solution that significantly reduces the computation time associated with the repetitive calculation phase in the generation of an integral image. All these results are presented with the corresponding images and the proposed display experiments.
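    The generation of an integral image from computed 3D data can be pictured as a central projection of the point cloud through a virtual pinhole array, sketched below. All names, the geometry parameters, and the simple z-buffer are illustrative assumptions; the thesis's own algorithms, including the accelerated calculation phase, are more elaborate.

```python
import numpy as np

def point_cloud_to_integral_image(points, colors, n_lens=100, mi_px=10,
                                  pitch=1.0, gap=3.0):
    """Central projection of a colored point cloud through a virtual
    pinhole array; returns the tiling of microimages to display.

    points : (N, 3) in monitor coordinates (mm), z > 0 in front of
             the pinhole plane; colors : (N, 3) RGB in [0, 1].
    """
    side = n_lens * mi_px
    img = np.zeros((side, side, 3))
    depth = np.full((side, side), np.inf)   # z-buffer for occlusions
    centers = (np.arange(n_lens) - n_lens / 2 + 0.5) * pitch
    for (x, y, z), c in zip(points, colors):
        if z <= 0:
            continue
        for i, cx in enumerate(centers):
            for j, cy in enumerate(centers):
                # Ray from the point through pinhole (cx, cy) meets
                # the sensor plane at distance `gap` behind it.
                u = cx + (cx - x) * gap / z
                v = cy + (cy - y) * gap / z
                lu, lv = u - cx, v - cy     # local microimage coords
                if abs(lu) >= pitch / 2 or abs(lv) >= pitch / 2:
                    continue                # outside this microimage
                q = i * mi_px + int((lu / pitch + 0.5) * mi_px)
                r = j * mi_px + int((lv / pitch + 0.5) * mi_px)
                if z < depth[r, q]:         # nearest point wins
                    depth[r, q] = z
                    img[r, q] = c
    return img
```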

    GPU-accelerated integral imaging and full-parallax 3D display using stereo-plenoptic camera system

    In this paper, we propose a novel approach to produce integral images ready to be displayed on an integral-imaging monitor. Our main contribution is the use of a commercial plenoptic camera arranged in a stereo configuration. Our proposed setup is able to record the spatial and angular radiance information simultaneously at each stereo position. We illustrate our contribution by composing a point cloud from a pair of captured plenoptic images and generating an integral image from the properly registered 3D information. We have exploited graphics processing unit (GPU) acceleration in order to enhance the speed and efficiency of the integral-image computation. We present our approach with imaging experiments that demonstrate the improved quality of the integral images. After the projection of such an integral image onto the proposed monitor, 3D scenes are displayed with full parallax.
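    One way to picture the GPU acceleration (the abstract does not name its framework, so CuPy is assumed here as a NumPy-compatible stand-in) is to vectorize the projection of all registered points into one microimage at a time, instead of looping over them on the CPU. The geometry follows the pinhole-projection sketch above.

```python
import cupy as cp  # NumPy-compatible arrays evaluated on the GPU

def microimage_gpu(points, colors, cx, cy, mi_px, pitch, gap):
    """One microimage for the virtual pinhole at (cx, cy).

    points, colors : (N, 3) cupy arrays, so the projection of every
    point runs on the GPU in a single vectorized pass.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = cx + (cx - x) * gap / z            # sensor-plane intersection
    v = cy + (cy - y) * gap / z
    lu, lv = u - cx, v - cy                # local microimage coordinates
    ok = (z > 0) & (cp.abs(lu) < pitch / 2) & (cp.abs(lv) < pitch / 2)
    q = ((lu[ok] / pitch + 0.5) * mi_px).astype(cp.int32)
    r = ((lv[ok] / pitch + 0.5) * mi_px).astype(cp.int32)
    mi = cp.zeros((mi_px, mi_px, 3))
    # Scatter colors into the microimage; this sketch has no z-buffer,
    # so overlapping points resolve arbitrarily.
    mi[r, q] = colors[ok]
    return mi
```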

    Recent advances in the capture and display of macroscopic and microscopic 3-D scenes by integral imaging

    The capture and display of images of 3-D scenes under incoherent and polychromatic illumination is currently a hot topic of research, due to its broad applications in bioimaging, industrial procedures, military and surveillance uses, and even the entertainment industry. In this context, Integral Imaging (InI) is a very competitive technology due to its capacity for recording, with a single exposure, the spatial-angular information of the light rays emitted by the 3-D scene. From this information, it is possible to calculate and display a collection of horizontal and vertical perspectives with high depth of field. It is also possible to calculate the irradiance of the original scene at different depths, even when these planes are partially occluded or immersed in a scattering medium. In this paper, we describe the fundamentals of InI and the main contributions to its development. We also focus our attention on the recent advances of the InI technique. Specifically, the application of the InI concept to microscopy is analyzed and the achievements in resolution and depth of field are explained. In a different context, we also present the recent advances in the capture of large scenes. The progress in the algorithms for the calculation of displayable 3-D images and in the implementation of setups for 3-D displays is reviewed.

    Integral imaging with Fourier-plane recording

    Integral Imaging is well known for its capability of recording both the spatial and the angular information of three-dimensional (3D) scenes. Based on this idea, the plenoptic concept has been developed over the past two decades, and a new camera has been designed with the capacity to capture the spatial-angular information with a single sensor and after a single shot. However, the classical plenoptic design presents two drawbacks: one is the oblique recording made by the external microlenses; the other is the loss of information due to diffraction effects. In this contribution we report a change of paradigm and propose the combination of a telecentric architecture and Fourier-plane recording. This new capture geometry permits substantial improvements in resolution, depth of field, and computation time.

    Full-parallax 3D display from stereo-hybrid 3D camera system

    In this paper, we propose an innovative approach for the production of microimages ready to be displayed on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system for picking up a pair of 3D data sets and composing a denser point cloud. However, there is an intrinsic difficulty in the fact that hybrid sensors have dissimilarities and therefore should be equalized. The processed data then facilitate generating an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.
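    The fusion step this abstract mentions can be pictured as bringing one sensor's point cloud into the other's frame before the pinhole-array projection (see the sketch after the thesis abstract above). The rigid transform (R, t) is assumed to come from an upstream extrinsic calibration; the authors' actual equalization of the dissimilar sensors involves more than this.

```python
import numpy as np

def fuse_clouds(cloud_a, cloud_b, R, t):
    """Rigidly map sensor B's points into sensor A's coordinate frame
    and concatenate, yielding the denser cloud used for display.

    cloud_a, cloud_b : (N, 3) and (M, 3) arrays; R : (3, 3) rotation;
    t : (3,) translation, both from extrinsic calibration (assumed).
    """
    return np.vstack([cloud_a, cloud_b @ R.T + t])
```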

    Plenoptic rendering with interactive performance using GPUs
