
    Real-time refocusing using an FPGA-based standard plenoptic camera

    Plenoptic cameras are receiving increased attention in scientific and commercial applications because they capture the entire structure of light in a scene, enabling optical transforms (such as focusing) to be applied computationally after the fact, rather than once and for all at the time a picture is taken. In many settings, real-time interactive performance is also desired, which in turn requires significant computational power due to the large amount of data needed to represent a plenoptic image. Although GPUs have been shown to provide acceptable performance for real-time plenoptic rendering, their cost and power requirements make them prohibitive for embedded uses (such as in-camera). On the other hand, the computation required for plenoptic rendering is well structured, suggesting the use of specialized hardware. Accordingly, this paper presents an array of switch-driven finite impulse response (FIR) filters, implemented on an FPGA, to accomplish high-throughput spatial-domain rendering. The proposed architecture provides a power-efficient rendering hardware design suitable for full-video applications, as required in broadcasting or cinematography. A benchmark assessment of the proposed hardware implementation shows that real-time performance can readily be achieved, with a one-order-of-magnitude performance improvement over a GPU implementation and a three-orders-of-magnitude improvement over a general-purpose CPU implementation.
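The spatial-domain rendering described above boils down to a weighted sum over shifted sub-aperture views, which is what the FIR filter array computes in hardware. A minimal software sketch of such shift-and-add refocusing (the array shapes and the refocus parameter `alpha` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing over sub-aperture views.

    views: (U, V, H, W) array of sub-aperture images.
    alpha: refocus parameter (pixel shift per unit angular offset).
    """
    U, V, H, W = views.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset
            # from the central view, then accumulate.
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A constant light field refocuses to the same constant at any alpha.
lf = np.ones((3, 3, 8, 8))
img = refocus(lf, alpha=1.0)
```

The hardware design replaces this per-pixel loop with a bank of convolution filters whose taps are switched according to the refocus setting, so the sum is produced at video rate.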

    Exploring plenoptic properties of correlation imaging with chaotic light

    In a setup illuminated by chaotic light, we consider different schemes that make it possible to perform imaging by measuring second-order intensity correlations. The most relevant feature of the proposed protocols is the ability to perform plenoptic imaging, namely, to reconstruct the geometrical path of light propagating in the system by imaging both the object and the focusing element. This property allows one to encode, in a single data acquisition, both multi-perspective images of the scene and the light distribution in different planes between the scene and the focusing element. We unveil the plenoptic property of three different setups, explore their refocusing potential and discuss their practical applications.
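A minimal numerical sketch of the measurement underlying these schemes: estimating second-order intensity correlations, G2(a, b) = ⟨ΔI_a ΔI_b⟩, from many frames of chaotic light. The frame generator below is a toy stand-in for the optical setup, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pix = 5000, 16

# Toy chaotic source: a common fluctuating envelope multiplies
# independent per-pixel speckle, so all pixels share positive
# intensity correlations (purely illustrative statistics).
envelope = rng.exponential(1.0, size=(n_frames, 1))
frames = envelope * rng.exponential(1.0, size=(n_frames, n_pix))

# Correlate intensity fluctuations Delta I = I - <I> across frames:
mean_i = frames.mean(axis=0)
g2 = (frames - mean_i).T @ (frames - mean_i) / n_frames

# Off-diagonal entries of g2 are positive for correlated chaotic light.
```

In an actual correlation-imaging setup, the two indices would range over pixels of two separate sensor planes (object arm and reference arm) rather than one toy array.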

    Correlation Plenoptic Imaging With Entangled Photons

    Plenoptic imaging is a novel optical technique for three-dimensional imaging in a single shot. It is enabled by the simultaneous measurement of both the location and the propagation direction of light in a given scene. In the standard approach, the maximum spatial and angular resolutions are inversely proportional, and so are the resolution and the maximum achievable depth of focus of the 3D image. We have recently proposed a method to overcome these fundamental limits by combining plenoptic imaging with an intriguing correlation remote-imaging technique: ghost imaging. Here, we theoretically demonstrate that correlation plenoptic imaging can be effectively achieved by exploiting the position-momentum entanglement characterizing spontaneous parametric down-conversion (SPDC) photon pairs. As a proof-of-principle demonstration, we show that correlation plenoptic imaging with entangled photons may enable the refocusing of an out-of-focus image at the same depth of focus as a standard plenoptic device, but without sacrificing diffraction-limited image resolution.
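The inverse proportionality mentioned above follows from the sensor budget of a standard plenoptic camera: pixels behind each microlens encode direction, leaving fewer samples for position. A back-of-the-envelope illustration (the sensor size and microlens-image sizes are assumed example values):

```python
# Standard plenoptic trade-off: the sensor's pixels are partitioned
# between spatial samples (one per microlens) and angular samples
# (pixels behind each microlens). Example numbers are assumptions.

sensor_px = 4000 * 3000          # 12 MP sensor
for k in (2, 4, 8):              # pixels per microlens, per axis
    angular = k * k              # number of perspective views
    spatial = sensor_px // angular
    print(f"{k}x{k} microlens image: {angular} views, "
          f"{spatial / 1e6:.2f} MP per refocused view")
```

Correlation plenoptic imaging sidesteps this budget because direction is recovered from correlations rather than from a dedicated share of the sensor pixels.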

    Towards quantum 3d imaging devices

    We review the advancement of research toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum–position entanglement and photon–number correlations to provide the typical refocusing and ultra-fast, scanning-free 3D imaging capability of plenoptic devices, along with dramatically enhanced performance unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and achieve the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end-users, two main challenges need to be tackled. First, due to the large number of frames required for correlation measurements to provide an acceptable signal-to-noise ratio, quantum plenoptic imaging (QPI) would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, processing this large amount of data, in order to retrieve 3D images or refocused 2D images, requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution single-photon avalanche diode (SPAD) arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the processing time by two orders of magnitude. Routes toward exploitation of QPI devices are also discussed.
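The frame-count bottleneck discussed above is a consequence of averaging statistics: a correlation estimated from N independent frames carries a statistical error scaling as 1/√N, so the SNR grows only as √N, and quadrupling the frame count roughly doubles the SNR. A toy simulation of this scaling (the signal model is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def snr(n_frames, trials=2000):
    """Empirical SNR of a mean estimated from n_frames noisy samples.

    Each trial averages n_frames draws of a unit-mean, unit-variance
    signal; the SNR is the mean of those averages over their spread.
    """
    samples = rng.normal(1.0, 1.0, size=(trials, n_frames)).mean(axis=1)
    return samples.mean() / samples.std()

# snr(400) is roughly twice snr(100): SNR grows as sqrt(n_frames).
ratio = snr(400) / snr(100)
```

This is why the review targets faster sensors and compressive acquisition rather than simply collecting more frames: the √N return diminishes quickly.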

    Implementation of a Depth from Light Field Algorithm on FPGA

    A light field is a four-dimensional function that captures the intensity of the light rays traversing free space at each point. The light field can be captured using devices designed specifically for this purpose, and it allows one to extract depth information about the scene. Most light-field algorithms require a huge amount of processing power. Fortunately, in recent years, parallel hardware has evolved to the point where such volumes of data can be processed; field-programmable gate arrays (FPGAs) are one such option. In this paper, we propose two hardware designs that share a common building block to compute a disparity map from light-field data. The first design employs serial data input into the hardware, while the second employs view-parallel input. These designs focus on performing calculations during data read-in and producing results only a few clock cycles after read-in. Several experiments were conducted. First, the influence of using fixed-point arithmetic on accuracy was tested using synthetic light-field data; tests on actual light-field data were also performed. The performance was compared to that of a CPU as well as an embedded processor: our designs showed performance similar to the former and outperformed the latter. For further comparison, we also discuss the performance difference between our designs and other designs described in the literature.
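At its core, a depth-from-light-field computation scores candidate disparities between neighbouring views and keeps the best match. A toy version of that building block (the paper's designs are far more elaborate; the data, cost function, and search range here are assumptions):

```python
import numpy as np

def disparity(left, right, max_d=4):
    """Brute-force disparity between two horizontally adjacent views.

    For each candidate disparity d, shift the right view by d pixels
    and score the sum of absolute differences; return the d that
    minimizes the matching cost.
    """
    costs = []
    for d in range(max_d + 1):
        shifted = np.roll(right, d, axis=1)
        costs.append(np.abs(left - shifted).sum())
    return int(np.argmin(costs))

# Two views of the same pattern, displaced by 2 pixels:
base = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32)), (8, 1))
d = disparity(np.roll(base, 2, axis=1), base)   # recovers d == 2
```

The paper's fixed-point experiments probe how quantizing such cost computations (here done in floating point) affects the recovered disparity map.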

    Correlation Plenoptic Imaging between Arbitrary Planes

    We propose a novel method to perform plenoptic imaging at the diffraction limit by measuring second-order correlations of light between two reference planes, arbitrarily chosen within the three-dimensional scene of interest. We show that, for both chaotic light and entangled-photon illumination, the protocol enables the focused planes to be changed in post-processing and achieves an unprecedented combination of image resolution and depth of field. In particular, the depth of field is larger by a factor of 3 than in previous correlation plenoptic imaging protocols, and by an order of magnitude compared with standard imaging, while the resolution is kept at the diffraction limit. The results lead the way toward the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled-photon illumination, thus helping to make correlation plenoptic imaging effectively competitive with commercial plenoptic devices.

    Correlated-photon imaging at 10 volumetric images per second

    The correlation properties of light provide an outstanding tool to overcome the limitations of traditional imaging techniques. A relevant case is correlation plenoptic imaging (CPI), a quantum-inspired volumetric imaging protocol employing spatio-temporally correlated photons from either entangled or chaotic sources to address the main limitations of conventional light-field imaging, namely, the poor spatial resolution and the reduced change of perspective for 3D imaging. However, the application potential of high-resolution imaging modalities relying on photon correlations is limited, in practice, by the need to collect a large number of frames. This creates a gap, unacceptable for many relevant tasks, between the time performance of correlated-light imaging and that of traditional imaging methods. In this article, we address this issue by exploiting the photon-number correlations intrinsic to chaotic light, combined with a cutting-edge ultrafast sensor made of a large array of single-photon avalanche diodes (SPADs). This combination of source and sensor is embedded in a novel single-lens CPI scheme that acquires 10 volumetric images per second. Our results place correlated-photon imaging at a competitive edge and prove its potential in practical applications.

    The standard plenoptic camera: applications of a geometrical light field model

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
    The plenoptic camera is an emerging technology in computer vision able to capture a light-field image from a single exposure, which allows the perspective view, as well as the optical focus, to be changed computationally after capture; the latter is known as refocusing. Until now there has been no general method to pinpoint the object planes that are brought to focus, or the stereo baselines of the perspective views, produced by a plenoptic camera. Previous research presented simplified ray models to prove the concept of refocusing and to enhance image and depth-map quality, but lacked reliable distance estimates and an efficient refocusing hardware implementation. In this thesis, a pair of light rays is treated as a system of linear functions whose solution yields ray intersections indicating distances to refocused object planes or positions of the virtual cameras that project perspective views. A refocusing image synthesis is derived from the proposed ray model and further developed into an array of switch-controlled semi-systolic FIR convolution filters. Their real-time performance is verified through simulation and through implementation on an FPGA using VHDL. A series of experiments is carried out with different lenses and focus settings, where prediction results are compared with those of a real-ray simulation tool and with processed light-field photographs for which a blur metric has been considered. Predictions accurately match measurements in light-field photographs and deviate by less than 0.35 % from the real-ray simulation. A benchmark assessment of the proposed refocusing hardware implementation suggests a computation-time speed-up of 99.91 % in comparison with a state-of-the-art technique.
It is expected that this research will support the prototyping stage of plenoptic cameras and microscopes, as it helps specify depth sampling planes, and thus localise objects, and provides a power-efficient refocusing hardware design for full-video applications such as broadcasting or motion picture arts.
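The central idea of the thesis's ray model, two rays written as linear functions and intersected by solving a small linear system, can be sketched as follows (the slopes and intercepts are made-up example values, not figures from the thesis):

```python
import numpy as np

def ray_intersection(m1, c1, m2, c2):
    """Intersect two rays in the (z, x) plane, each given as x = m*z + c.

    Rewriting both as m*z - x = -c gives a 2x2 linear system whose
    solution (z, x) is the intersection point: the depth z locates a
    refocused object plane or a virtual-camera position.
    """
    a = np.array([[m1, -1.0],
                  [m2, -1.0]])
    b = np.array([-c1, -c2])
    z, x = np.linalg.solve(a, b)
    return z, x

# Example: x = 0.5*z + 1 and x = -0.25*z + 4 intersect at z = 4, x = 3.
z, x = ray_intersection(0.5, 1.0, -0.25, 4.0)
```

In the thesis, the ray parameters come from the camera's geometrical light-field model (microlens pitch, focal lengths, sensor offsets), so the solved depth z maps directly to a physical object distance.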

    Sensors and Technologies in Spain: State-of-the-Art

    The aim of this special issue was to provide a comprehensive view of the state of the art in sensor technology in Spain. Real-world problems drive the appearance and development of new sensor technologies and, vice versa, the emergence of new sensors facilitates the solution of existing problems. [...]