
    Absolute depth using low-cost light field cameras

    Digital cameras are increasingly used for measurement tasks within engineering scenarios, often as part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. For some applications, however, these 2D results are not sufficient, specifically applications that require Z-dimensional (depth) data alongside the X and Y dimensional data. New camera system designs have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been made, and many research groups around the world are currently working on camera technology from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras use an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow depth information to be encoded. A small number of companies have explored the potential of lightfield cameras, but most of these efforts have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world-scene metrology applications, specifically to record absolute coordinate data.
Specific attention has been paid to a range of low-cost lightfield cameras to: understand the functional/behavioural characteristics of the optics; identify potential needs for optical and/or algorithm development; define sensitivity, repeatability, and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel outputs of this work are: an analysis of lightfield camera system sensitivity leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); the development of bespoke calibration algorithms that remove radial/tangential distortion from the data captured using any MLA-based camera; and a lightfield-camera-independent algorithm that delivers 3D coordinate data in absolute units within a well-defined measurable range from a given camera.
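As an illustration of the radial/tangential distortion removal the abstract describes, the sketch below inverts a standard Brown-Conrady distortion model by fixed-point iteration. This is a generic stand-in: the coefficients are made up, and the thesis' bespoke MLA calibration algorithm is not public.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    # Forward Brown-Conrady model: ideal normalized coords -> distorted coords
    x, y = xy
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([xd, yd])

def undistort(xy_d, k1, k2, p1, p2, iters=20):
    # Invert the model by fixed-point iteration: x <- x_d - (distort(x) - x)
    xy = np.array(xy_d, dtype=float)
    for _ in range(iters):
        delta = distort(xy, k1, k2, p1, p2) - xy
        xy = np.array(xy_d) - delta
    return xy

pt = np.array([0.30, -0.20])                        # a normalized image coordinate
pt_d = distort(pt, 0.08, 0.01, 0.001, 0.002)        # hypothetical coefficients
pt_u = undistort(pt_d, 0.08, 0.01, 0.001, 0.002)    # recovers pt to high precision
```

For mild distortion the iteration contracts quickly, so a few dozen iterations recover the ideal coordinate to well below pixel precision.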

    Determining the Phase and Amplitude Distortion of a Wavefront using a Plenoptic Sensor

    We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π, π) in phase reconstruction by interferometers, the phase reconstructed by the plenoptic sensor can be continuous up to a multiple of 2π. Compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to occur and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its objective lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper we show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms. (Comment: this article has been accepted by JOSA.)
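The focal-plane cascade and numerical-aperture matching described above amount to equating the f-numbers of the objective lens and the microlens array. A minimal numeric sketch (all dimensions are illustrative, not taken from the paper):

```python
# Hypothetical optical parameters for a Keplerian-cascade plenoptic sensor
f_obj = 100.0   # objective focal length, mm
D_obj = 25.0    # objective clear aperture, mm
d_mla = 0.5     # microlens pitch, mm

# Matching numerical apertures is equivalent to matching f-numbers:
#   N_obj = f_obj / D_obj  must equal  N_mla = f_mla / d_mla
f_mla = d_mla * f_obj / D_obj

# Keplerian cascade: the objective's back focal plane coincides with the
# MLA's front focal plane, so the two components are separated by f_obj + f_mla.
separation = f_obj + f_mla
```

With these example numbers the microlens focal length comes out to 2 mm and the lens-to-MLA separation to 102 mm; mismatching the f-numbers would either leave sensor pixels unused or let neighbouring micro-images overlap.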

    Simultaneous measurement of flame temperature and absorption coefficient through LMBC-NNLS and plenoptic imaging techniques

    It is important to identify boundary constraints in the inverse algorithm for the reconstruction of flame temperature, because a negative temperature can be reconstructed under improper boundary constraints. In this study, a hybrid algorithm combining Levenberg-Marquardt with boundary constraint (LMBC) and non-negative least squares (NNLS) was proposed to reconstruct the flame temperature and absorption coefficient simultaneously by sampling the multi-wavelength flame radiation with a colored plenoptic camera. To validate the proposed algorithm, numerical simulations were carried out for both symmetric and asymmetric distributions of the flame temperature and absorption coefficient. The plenoptic flame images were modeled to investigate the characteristics of flame radiation sampling. Different levels of Gaussian noise were added to the radiation samplings to investigate the noise effects on reconstruction accuracy. Simulation results showed that the relative errors of the reconstructed temperature and absorption coefficient are less than 10%, indicating that accurate and reliable reconstruction can be obtained by the proposed algorithm. The algorithm was further verified by experimental studies, where the reconstructed results were compared with thermocouple measurements. The simulation and experimental results demonstrated that the proposed algorithm is effective for the simultaneous reconstruction of the flame temperature and absorption coefficient.
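The non-negativity constraint at the heart of this reconstruction can be sketched with a simple projected-gradient solver for min ||Ax − b||ÂČ subject to x ≄ 0, on a toy linear forward model. This is a numpy stand-in for NNLS; the paper's LMBC-NNLS hybrid and its radiative forward model are considerably more elaborate.

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    # Projected gradient descent for min ||Ax - b||^2  s.t.  x >= 0
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # gradient step + projection onto x >= 0
    return x

rng = np.random.default_rng(0)
# Toy forward model: radiation samples b = A @ x with non-negative unknowns x
A = rng.random((40, 10))
x_true = rng.random(10)
b = A @ x_true + rng.normal(0, 1e-3, 40)   # small Gaussian noise, as in the paper's tests

x_rec = nnls_pg(A, b)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The projection step is what prevents the unphysical negative temperatures the abstract warns about; an unconstrained least-squares solve on noisy data has no such guarantee.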

    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    This manuscript focuses on processing images from microlens-array-based plenoptic cameras. These cameras enable the capture of the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Secondly, the disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. Therefore, the work in this thesis is devoted to analysing and proposing methodologies to deal with plenoptic images. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combination, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. A robust depth estimation approach has also been developed for light field microscopy and images of biological samples.
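In its simplest form, disparity estimation between neighbouring micro-lens images reduces to finding the integer shift that best aligns them. A minimal sum-of-squared-differences matching sketch on synthetic data (an illustration of the principle only, not the thesis' actual framework):

```python
import numpy as np

def disparity_ssd(ref, tgt, max_d):
    # Integer disparity between two neighbouring micro-images (horizontal shift),
    # chosen by minimizing the mean squared difference over the overlap region.
    best_d, best_cost = 0, np.inf
    for d in range(max_d + 1):
        a = ref[:, d:]                     # reference shifted by candidate disparity d
        b = tgt[:, :tgt.shape[1] - d]      # matching overlap in the neighbour
        cost = np.mean((a - b) ** 2)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the neighbouring micro-image sees the scene shifted by 3 pixels
rng = np.random.default_rng(1)
img = rng.random((16, 43))
ref = img[:, :40]
tgt = img[:, 3:]
est = disparity_ssd(ref, tgt, max_d=6)
```

Real plenoptic pipelines refine this with sub-pixel interpolation, multi-baseline matching across many micro-images, and blur-aware cost functions, but the core search is the same.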

    The suitability of lightfield camera depth maps for coordinate measurement applications

    Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metres). The novel results show depth accuracy and repeatability of +10.0 mm to −20.0 mm and 0.5 mm respectively. For the lateral X and Y coordinates, the accuracy was +1.56 ÎŒm to −2.59 ÎŒm and the repeatability was 0.25 ÎŒm.
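The calibration step that maps a camera's relative depth values to SI units can be sketched as a least-squares fit against targets placed at known distances. The value pairs below are hypothetical, and the actual Lytro calibration in the paper is more involved; this only illustrates the idea of converting relative depth maps to metres.

```python
import numpy as np

# Hypothetical calibration pairs: relative depth-map value -> known target distance (m)
rel = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
z_m = np.array([0.30, 0.45, 0.61, 0.74, 0.90, 1.05])

# Least-squares linear fit z = a * rel + b over the camera's working range
a, b = np.polyfit(rel, z_m, 1)

def to_metres(depth_map):
    # Apply the fitted mapping to any relative depth map from the same camera
    return a * depth_map + b

residual = np.max(np.abs(to_metres(rel) - z_m))   # worst-case fit error, metres
```

A fit like this is only valid inside the calibrated working range; outside it (the thesis' "Inactive Zones" for the related absolute-depth work) the relationship becomes non-linear and the mapping breaks down.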

    From Calibration to Large-Scale Structure from Motion with Light Fields

    Classic pinhole cameras project the multi-dimensional information of the light flowing through a scene onto a single 2D snapshot. This projection limits the information that can be reconstructed from the 2D acquisition. Plenoptic (or light field) cameras, on the other hand, capture a 4D slice of the plenoptic function, termed the “light field”. These cameras provide both spatial and angular information on the light flowing through a scene; multiple views are captured in a single photographic exposure, facilitating various applications. This thesis is concerned with the modelling of light field (or plenoptic) cameras and the development of structure from motion pipelines using such cameras. Specifically, we develop a geometric model for a multi-focus plenoptic camera, followed by a complete pipeline for the calibration of the suggested model. Given a calibrated light field camera, we then remap the captured light field to a grid of pinhole images. We use these images to obtain metric 3D reconstruction through a novel framework for structure from motion with light fields. Finally, we suggest a linear and efficient approach for absolute pose estimation for light fields.
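A linear approach to pose and projection estimation of the kind mentioned above can be illustrated with the classic Direct Linear Transform (DLT), which recovers a 3×4 projection matrix from 3D-2D correspondences by solving a homogeneous least-squares problem. This is a textbook sketch for a single pinhole view, not the thesis' light-field-specific algorithm.

```python
import numpy as np

def dlt_projection(X, x):
    # Direct Linear Transform: recover a 3x4 projection matrix P (up to scale)
    # from n >= 6 correspondences between 3D points X (n,3) and pixels x (n,2).
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Ph = [Xw, Yw, Zw, 1.0]                       # homogeneous 3D point
        rows.append([*Ph, 0, 0, 0, 0, *(-u * np.array(Ph))])
        rows.append([0, 0, 0, 0, *Ph, *(-v * np.array(Ph))])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)                      # null-space vector -> P

# Synthetic check: project points with a known P, then recover it
rng = np.random.default_rng(2)
P_true = rng.random((3, 4))
X = rng.random((8, 3)) * 2
Xh = np.hstack([X, np.ones((8, 1))])
proj = Xh @ P_true.T
x = proj[:, :2] / proj[:, 2:3]                       # pixel coordinates

P_est = dlt_projection(X, x)
reproj = Xh @ P_est.T
x_est = reproj[:, :2] / reproj[:, 2:3]
err = np.max(np.abs(x_est - x))                      # reprojection error
```

With noiseless correspondences the recovery is exact up to scale; in practice the linear solution is used to initialize a non-linear refinement.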

    Multi-Spectral Imaging of Vegetation with a Diffractive Plenoptic Camera

    Snapshot multi-spectral sensors allow for the detection of objects based on their spectra in remote sensing applications in air or space. Making these types of sensors more compact and lightweight allows drones to dwell longer on targets and reduces transport costs for satellites. To address this need, I designed and built a diffractive plenoptic camera (DPC) which utilized a Fresnel zone plate (FZP) and a light field camera in order to detect vegetation via the normalized difference vegetation index (NDVI). This thesis derives design equations relating DPC system parameters to its expected performance and evaluates its multi-spectral performance. The experimental results showed good agreement with the design equations for spectral range and FOV, but the spectral resolution was worse than the expected 6.06 nm. In testing the spectral resolution of the DPC, it was found that near the design wavelength the DPC had a spectral resolution of 25 nm. As the algorithm refocused further from the design wavelength, the spectral resolution broadened to 30 nm. To test multi-spectral performance, three scenes containing leaves in various states of health were captured by the DPC and an NDVI was calculated for each one. The DPC was able to identify vegetation in all scenes, but at reduced NDVI values in comparison to the data measured by a spectrometer. Additionally, background noise contributed by the zeroth order of diffraction and by multiple wavelengths arriving from the same spatial location was found to reduce the vegetation signal. Optical aberrations were also found to create artifacts near the edges of the final refocused image. Future work includes using a different diffractive optic design to achieve higher first-order efficiency, deriving an aberrated sampling pattern, and using an intermediate-image diffractive plenoptic camera to reduce the zeroth-order effects of the FZP.
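The NDVI used here for vegetation detection is a simple per-pixel ratio of near-infrared and red reflectance. A minimal sketch (the reflectance values are illustrative, not measurements from the thesis):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    # Normalized Difference Vegetation Index, computed per pixel:
    # NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# so it scores high; bare soil scores near zero.
healthy = ndvi(0.50, 0.08)
soil = ndvi(0.30, 0.25)
```

The DPC's reduced NDVI values relative to a spectrometer make sense in this formulation: zeroth-order background light adds roughly equally to both bands, inflating the denominator and pulling the index toward zero.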

    Flame temperature reconstruction through multi-plenoptic camera technique

    Due to the variety of burner structures and fuel mixing conditions, the flame temperature distribution is not only irregular but also complex. It is therefore necessary to develop an advanced temperature measurement technique that can provide adequate flame radiative information and reconstruct complex flame temperatures accurately. In this paper, a novel multi-plenoptic camera imaging technique is proposed which not only provides adequate flame radiative information from two different directions but also reconstructs the complex flame temperature distribution accurately. An inverse algorithm, non-negative least squares (NNLS), is used to reconstruct the flame temperature. A bimodal asymmetric temperature distribution is considered to verify the feasibility of the proposed system. Numerical simulations and experiments were carried out to evaluate the performance of the proposed technique. Simulation results demonstrate that the proposed system is able to provide higher reconstruction accuracy, although the accuracy decreases as noise levels increase. Meanwhile, compared with single-plenoptic and conventional multi-camera techniques, the proposed method has the advantages of lower relative error and better reconstruction quality even at higher noise levels. The proposed technique is further verified by experimental studies. The experimental results also demonstrate that the proposed technique is effective and feasible for the reconstruction of flame temperature. Therefore, the proposed multi-plenoptic camera imaging technique is capable of reconstructing complex flame temperature fields more precisely.
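The benefit of a second viewing direction can be sketched as stacking the two views' measurement matrices: one view alone leaves the discretized temperature field underdetermined, while the stacked system becomes full rank and solvable. Toy numbers throughout, and ordinary least squares stands in for the paper's NNLS inversion.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12                                   # unknowns in the discretized temperature field
A1 = rng.random((8, n))                  # view 1: fewer radiation samples than unknowns
A2 = rng.random((8, n))                  # view 2, from a different direction
x_true = rng.random(n) + 1.0             # strictly positive "temperature" values

# Two views stacked give an overdetermined, full-rank system
A = np.vstack([A1, A2])
b = A @ x_true + rng.normal(0, 1e-3, 16)  # samples with small Gaussian noise
x_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The rank comparison below makes the point directly: the single-view matrix cannot determine all unknowns, while the two-view system recovers the field to within the noise level, which mirrors the abstract's finding that two directions outperform a single plenoptic camera.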
    • 
