
    Light Field compression and manipulation via residual convolutional neural network

    Light field (LF) imaging has gained significant attention due to its recent success in microscopy, three-dimensional (3D) display and rendering, and augmented and virtual reality. Post-processing of an LF enables us to extract more information from a scene than traditional cameras allow. However, the use of LFs is still a research novelty because of the current limitations in capturing high-resolution LFs in all four dimensions. While researchers are actively improving methods of capturing high-resolution LFs, simulation already makes it possible to explore the properties of a high-quality captured LF. The immediate concerns following LF capture are storage and processing time. A rich LF occupies a large chunk of memory, on the order of multiple gigabytes per LF. Also, most feature-extraction techniques associated with LF post-processing involve multi-dimensional integration that requires access to the whole LF and is usually time-consuming. Recent advancements in computer processing units have made it possible to simulate realistic images using physically based rendering software. In this work, a transformation function is first proposed for building a camera array (CA) that captures the same portion of the LF of a scene that a standard plenoptic camera (SPC) can acquire. Using this transformation, simulating an LF with the same properties as a plenoptic camera's becomes trivial in any rendering software. Artificial intelligence (AI) and machine learning (ML) algorithms, when deployed on the new generation of GPUs, are faster than ever, and it is possible to build and train large networks with millions of trainable parameters to learn very complex features. Here, residual convolutional neural network (RCNN) structures are employed to build complex networks for compression and feature extraction from an LF. By combining state-of-the-art image compression with an RCNN, I have created a compression pipeline. The proposed pipeline's bit-per-pixel (bpp) ratio is 0.0047 on average. I show that, at a 1% compression-time cost and with an 18x decompression speedup, our method's reconstructed LFs have a better structural similarity index metric (SSIM) and a comparable peak signal-to-noise ratio (PSNR) compared to the state-of-the-art video compression techniques used to compress LFs. Finally, using an RCNN, I created a network called RefNet for extracting a group of 16 refocused images from a raw LF. The refocusing parameters of the 16 images are set to α = 0.125, 0.250, 0.375, ..., 2.000 for training. I show that RefNet is 134x faster than the state-of-the-art refocusing technique, and that it is superior in color prediction compared to the state-of-the-art Fourier slice and shift-and-sum methods.
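    For context, the shift-and-sum baseline that RefNet is benchmarked against can be sketched in a few lines of NumPy. This is an illustrative sketch of the generic algorithm, not the thesis code; the (U, V, S, T) array layout, grayscale input, and bilinear interpolation are assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def refocus_shift_and_sum(lf, alpha):
    """Refocus a 4D light field by shift-and-sum.

    lf    : float array of shape (U, V, S, T); sub-aperture images
            indexed by angular coordinates (u, v), grayscale here
            for simplicity.
    alpha : relative depth of the synthetic focal plane
            (alpha = 1 keeps the captured focus).
    """
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Translate each view in proportion to its angular offset
            # from the central view, then average all views.
            du = (1.0 - 1.0 / alpha) * (u - (U - 1) / 2.0)
            dv = (1.0 - 1.0 / alpha) * (v - (V - 1) / 2.0)
            out += shift(lf[u, v], (du, dv), order=1, mode='nearest')
    return out / (U * V)

# Sweeping alpha over 0.125, 0.250, ..., 2.000 would produce the same
# kind of 16-image focal stack that RefNet is trained to predict.
```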

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
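    As a concrete illustration of the two-plane light field representation L(u, v, s, t) surveyed in the paper, the following sketch shows how sub-aperture views, epipolar-plane images, and the conventional-photograph projection fall out as simple slices and reductions of a 4D array; the shapes and axis ordering are assumptions for illustration.

```python
import numpy as np

# A light field under the two-plane parameterization L(u, v, s, t):
# (u, v) index the angular domain, (s, t) the spatial domain.
lf = np.random.rand(9, 9, 512, 512)   # (U, V, S, T), placeholder data

# A sub-aperture image: fix the angular coordinates -> one 2D view.
center_view = lf[4, 4]                # shape (512, 512)

# An epipolar-plane image (EPI): fix one angular and one spatial
# coordinate; scene depth appears as the slope of lines in this slice.
epi = lf[4, :, 256, :]                # shape (V, T) = (9, 512)

# Integrating over the angular domain reproduces what a conventional
# camera would record (a 2D projection of the light field).
conventional = lf.mean(axis=(0, 1))   # shape (512, 512)
```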

    What about computational super-resolution in fluorescence Fourier light field microscopy?

    Recently, Fourier light field microscopy was proposed to overcome the limitations of conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of at the image plane. In this way, a collection of orthographic views from different perspectives is captured directly. When inspecting fluorescent samples, the sensitivity and noise of the sensors are a major concern, and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns in Fourier light field microscopy to understand to what extent computational super-resolution can be triggered during deconvolution in order to improve the resolution of the 3D reconstruction of the imaged data.

    The standard plenoptic camera: applications of a geometrical light field model

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. The plenoptic camera is an emerging technology in computer vision, able to capture a light field image from a single exposure, which allows a computational change of the perspective view as well as of the optical focus, known as refocusing. Until now, there has been no general method to pinpoint the object planes that have been brought to focus, or the stereo baselines of the perspective views, posed by a plenoptic camera. Previous research has presented simplified ray models to prove the concept of refocusing and to enhance image and depth map quality, but has lacked reliable distance estimates and an efficient refocusing hardware implementation. In this thesis, a pair of light rays is treated as a system of linear functions whose solution yields ray intersections indicating distances to refocused object planes or positions of virtual cameras that project perspective views. A refocusing image synthesis is derived from the proposed ray model and further developed into an array of switch-controlled semi-systolic FIR convolution filters. Their real-time performance is verified through simulation and through implementation on an FPGA using VHDL. A series of experiments is carried out with different lenses and focus settings, where prediction results are compared with those of a real ray simulation tool and with processed light field photographs for which a blur metric has been considered. Predictions accurately match measurements in light field photographs and deviate by less than 0.35% in real ray simulation. A benchmark assessment of the proposed refocusing hardware implementation suggests a computation-time speed-up of 99.91% in comparison with a state-of-the-art technique. It is expected that this research will support the prototyping stage of plenoptic cameras and microscopes, as it helps specify depth sampling planes, thus localising objects, and provides a power-efficient refocusing hardware design for full-video applications such as broadcasting and motion picture arts.
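    The core geometric idea, treating a ray pair as a system of linear functions and solving for their intersection, can be sketched as follows. The first-order ray model x(z) = x0 + m·z and the sample numbers are illustrative assumptions, not the thesis's calibrated parameters.

```python
import numpy as np

def ray_intersection(x0_a, m_a, x0_b, m_b):
    """Intersect two rays modeled as linear functions x(z) = x0 + m*z.

    Solving x0_a + m_a*z = x0_b + m_b*z for z gives the depth at which
    the rays cross, e.g. a refocused object plane or a virtual camera
    position. Parallel rays (m_a == m_b) never intersect.
    """
    if np.isclose(m_a, m_b):
        raise ValueError("parallel rays do not intersect")
    z = (x0_b - x0_a) / (m_a - m_b)
    x = x0_a + m_a * z
    return z, x

# Two chief rays leaving the micro-lens plane with different slopes
# (illustrative numbers only):
z, x = ray_intersection(x0_a=0.0, m_a=0.02, x0_b=1.0e-3, m_b=0.01)
print(f"rays intersect at depth z = {z:.3f} (same units as x0)")
```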

    Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems

    There has been great interest in researching and implementing effective technologies for the capture, processing, and display of 3D images. This broad interest is evidenced by widespread international research and activities on 3D technologies. There is a large number of journal and conference papers on 3D systems, as well as research and development efforts in government, industry, and academia on this topic, for broad applications including entertainment, manufacturing, security and defense, and biomedical applications. Among these technologies, integral imaging is a promising approach for its ability to work with polychromatic scenes and under incoherent or ambient light, for scenarios from macroscales to microscales. Integral imaging systems and their variations, also known as plenoptic or light-field systems, are applicable in many fields and have been reported in many applications, such as entertainment (TV, video, movies), industrial inspection, security and defense, and biomedical imaging and displays. This tutorial is addressed to students and researchers in different disciplines who are interested in learning about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide readers with a tutorial that teaches fundamental principles as well as the more advanced concepts needed to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial begins by reviewing the fundamentals of imaging and then progresses to more advanced topics in 3D imaging and displays. More specifically, it first covers the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. We then use these tools to describe integral imaging, light-field, or plenoptic systems; the methods for implementing the 3D capture procedures and monitors; their properties, resolution, field of view, and performance; and metrics to assess them. We illustrate the principles of integral imaging capture and display systems with simple laboratory setups and experiments. We also discuss 3D biomedical applications, such as integral microscopy.

    Correlation Plenoptic Imaging between Arbitrary Planes

    We propose a novel method to perform plenoptic imaging at the diffraction limit by measuring second-order correlations of light between two arbitrarily chosen reference planes within the three-dimensional scene of interest. We show that, for both chaotic light and entangled-photon illumination, the protocol makes it possible to change the focused planes in post-processing and to achieve an unprecedented combination of image resolution and depth of field. In particular, the depth of field is larger by a factor of 3 with respect to previous correlation plenoptic imaging protocols, and by an order of magnitude with respect to standard imaging, while the resolution is kept at the diffraction limit. These results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled-photon illumination, thus helping to make correlation plenoptic imaging effectively competitive with commercial plenoptic devices.
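    A minimal sketch of how such a second-order correlation function might be estimated from repeated frames is given below; the function name, array shapes, and the brute-force estimator are assumptions for illustration, not the authors' measurement protocol.

```python
import numpy as np

def second_order_correlation(frames_a, frames_b):
    """Estimate Gamma(rho_a, rho_b) = <I_a I_b> - <I_a><I_b> from
    synchronized frame stacks recorded at the two reference planes.

    frames_a, frames_b : arrays of shape (N_frames, H, W).
    Returns the full (H*W, H*W) pixel-pair correlation matrix, so this
    brute-force version is only practical for small regions of interest.
    """
    n = frames_a.shape[0]
    a = frames_a.reshape(n, -1).astype(float)
    b = frames_b.reshape(n, -1).astype(float)
    mean_ab = a.T @ b / n   # <I_a I_b> for every pixel pair
    return mean_ab - np.outer(a.mean(axis=0), b.mean(axis=0))
```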

    Evaluation and Quantification of Diffractive Plenoptic Camera Algorithm Performance

    A diffractive plenoptic camera is a novel variant of the traditional plenoptic camera that replaces the main optic with a Fresnel zone plate, making the camera sensitive to wavelength instead of range. However, algorithms are necessary to reconstruct the image produced by plenoptic cameras, and while many algorithms exist for traditional plenoptic cameras, their ability to create spectral images in a diffractive plenoptic camera is unknown. This paper evaluates digital refocusing, super-resolution, and 3D deconvolution through a Richardson-Lucy algorithm, as well as a new Gaussian smoothing algorithm. All of the algorithms worked well near the Fresnel zone plate design wavelength, but Gaussian smoothing provided better-looking images at the cost of high computation time. For wavelengths off the design wavelength, 3D deconvolution produced the best images but also required more computation time. 3D deconvolution also had the best spectral resolution, which increased away from the design wavelength. These results, along with consideration of mission constraints and the spectral content of the scene, can guide algorithm selection for future sensor designs.
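    For reference, the multiplicative update at the heart of Richardson-Lucy deconvolution, one of the evaluated algorithm families, can be sketched as follows. This is a generic 2D version under assumed variable names, not the paper's 3D implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution (2D sketch)."""
    image = image.astype(float)
    psf = psf / psf.sum()          # normalize so flux is preserved
    psf_mirror = psf[::-1, ::-1]   # adjoint of convolution with psf
    estimate = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```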

    Light-field ghost imaging

    Techniques based on classical and quantum correlations in light beams, such as ghost imaging, allow us to overcome many limitations of conventional imaging and sensing protocols. Despite their advantages, applications of such techniques are often limited in practical scenarios where the position and the longitudinal extension of the target object are unknown. In this work, we propose and experimentally demonstrate an imaging technique, named light-field ghost imaging, that exploits light correlations and light-field imaging principles to go beyond the limitations of ghost imaging in a wide range of applications. Notably, our technique removes the requirement of prior knowledge of the object distance, allowing refocusing in post-processing as well as three-dimensional imaging, while retaining all the benefits of ghost imaging protocols.
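    The underlying ghost-imaging reconstruction, recovering the image as the covariance between a bucket signal and reference patterns, can be sketched as below. The function name and array shapes are assumptions; the paper's light-field variant additionally resolves the angular domain to enable the refocusing described above.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Classical ghost-imaging reconstruction by intensity correlation.

    patterns : (N, H, W) reference intensity patterns I_r from one arm.
    bucket   : (N,) total intensity B collected from the object arm.
    The image is recovered as <B * I_r> - <B><I_r>, i.e. the covariance
    between the bucket signal and the reference patterns.
    """
    n = patterns.shape[0]
    db = bucket - bucket.mean()   # centering B implements the covariance
    return np.tensordot(db, patterns, axes=(0, 0)) / n
```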

    3D deconvolution in Fourier integral microscopy

    Fourier integral microscopy (FiMic), also referred to as Fourier light field microscopy (FLFM) in the literature, was recently proposed as an alternative to conventional light field microscopy (LFM). FiMic is designed to overcome the non-uniform lateral resolution limitation specific to LFM. By inserting a micro-lens array at the aperture stop of the microscope objective, the Fourier integral microscope directly captures, in a single shot, a series of orthographic views of the scene from different viewpoints. We propose an algorithm for the deconvolution of FiMic data that combines the well-known maximum likelihood expectation maximization (MLEM) method with total variation (TV) regularization to cope with the noise amplification of conventional Richardson-Lucy deconvolution.
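    A minimal 2D sketch of an MLEM update with a common multiplicative total-variation correction is shown below, assuming a shift-invariant PSF and illustrative parameter values; the actual FiMic reconstruction is volumetric, with a view-dependent forward model.

```python
import numpy as np
from scipy.signal import fftconvolve

def mlem_tv(y, psf, n_iter=50, lam=0.002, eps=1e-8):
    """MLEM (Richardson-Lucy) deconvolution with a multiplicative
    total-variation correction (2D sketch; lam is an assumed value)."""
    psf = psf / psf.sum()
    psf_m = psf[::-1, ::-1]
    x = np.full(y.shape, y.mean(), dtype=float)
    for _ in range(n_iter):
        # TV term: divergence of the normalized gradient field of x.
        gx, gy = np.gradient(x)
        norm = np.sqrt(gx**2 + gy**2) + eps
        div = np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1)
        # Multiplicative MLEM update with the TV term in the denominator.
        blurred = fftconvolve(x, psf, mode='same')
        x *= fftconvolve(y / np.maximum(blurred, eps), psf_m, mode='same')
        x /= np.maximum(1.0 - lam * div, eps)
    return x
```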

    PlenoptiCam v1.0: A light-field imaging framework

    This is an accepted manuscript of an article published by IEEE in IEEE Transactions on Image Processing on 19/07/2021, available online: https://doi.org/10.1109/TIP.2021.3095671. The accepted version of the publication may differ from the final published version.
    Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications. The key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, re-align, and rearrange four-dimensional image data. Several attempts have been made to enhance the overall image quality by tailoring pipelines to particular plenoptic cameras and improving the color consistency across viewpoints, at the expense of high computational loads. The framework presented herein advances prior work thanks to its cost-effective color equalization, based on parallax-invariant probability distribution transfers, and a novel micro-image scale-space analysis for generic camera calibration independent of the lens specifications. Our framework compensates for artifacts from the sensor and micro-lens grid in an innovative way to enable superior quality in sub-aperture image extraction, computational refocusing, and Scheimpflug rendering with sub-sampling capabilities. Benchmark comparisons using established image metrics suggest that the proposed pipeline outperforms state-of-the-art tool chains in the majority of cases. The algorithms described in this paper are released under an open-source license, offer cross-platform compatibility with few dependencies, and include a graphical user interface, making the reproduction of results and experimentation with plenoptic camera technology convenient for peer researchers, developers, photographers, data scientists, and others working in this field.
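    As a rough illustration of the probability-distribution-transfer family of techniques behind the color equalization step, a plain CDF-matching sketch is shown below; this is not PlenoptiCam's implementation (see the linked DOI and the open-source release for that), and the function name is hypothetical.

```python
import numpy as np

def cdf_transfer(source, reference):
    """Match the intensity distribution of one channel to a reference
    channel by mapping through their cumulative distribution functions."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt).astype(float) / source.size
    r_cdf = np.cumsum(r_cnt).astype(float) / reference.size
    # Map each source intensity to the reference intensity whose
    # cumulative probability matches.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```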