
    Multi-Spectral Imaging of Vegetation with a Diffractive Plenoptic Camera

    Snapshot multi-spectral sensors allow objects to be detected by their spectra in airborne or space-based remote sensing applications. Making these sensors more compact and lightweight lets drones dwell longer on targets and reduces launch costs for satellites. To address this need, I designed and built a diffractive plenoptic camera (DPC) that combines a Fresnel zone plate with a light field camera in order to detect vegetation via the normalized difference vegetation index (NDVI). This thesis derives design equations relating DPC system parameters to expected performance and evaluates the camera's multi-spectral performance. The experimental results agreed well with the design equations for spectral range and FOV, but the spectral resolution was worse than the expected 6.06 nm: near the design wavelength the DPC achieved a spectral resolution of 25 nm, and as the algorithm refocused further from the design wavelength the resolution broadened to 30 nm. To test multi-spectral performance, three scenes containing leaves in various states of health were captured by the DPC and an NDVI was calculated for each. The DPC identified vegetation in all scenes, but at reduced NDVI values compared with the data measured by a spectrometer. Background noise contributed by the zeroth diffraction order, together with multiple wavelengths arriving from the same spatial location, was found to reduce the vegetation signal, and optical aberrations created artifacts near the edges of the final refocused image. Future work includes using a different diffractive optic design with higher first-order efficiency, deriving an aberrated sampling pattern, and using an intermediate-image diffractive plenoptic camera to reduce the zeroth-order effects of the FZP.
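
    As an illustration of the NDVI computation described above, the sketch below applies the standard index to two refocused spectral slices; the band wavelengths, array sizes, and threshold are hypothetical stand-ins, not values taken from the thesis.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-8):
        """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + eps)  # eps guards against divide-by-zero

    # Hypothetical refocused spectral slices from a plenoptic reconstruction
    nir_band = np.random.rand(64, 64)   # e.g. a slice refocused near 800 nm
    red_band = np.random.rand(64, 64)   # e.g. a slice refocused near 670 nm
    vegetation_mask = ndvi(nir_band, red_band) > 0.3  # illustrative threshold
    ```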

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important factor of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring network through standard gradient-descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art and demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
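
    A minimal sketch of the end-to-end idea, joint gradient descent over a trainable optical element and a deblurring CNN, is given below. It uses PyTorch with a placeholder image-formation function and synthetic data; the paper's actual differentiable wave-optics model, network architecture, and training set are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    # Trainable DOE phase profile (a stand-in for the paper's parameterized optic)
    doe_phase = nn.Parameter(torch.zeros(1, 1, 64, 64))

    def optical_forward(scene, phase):
        # Placeholder for wave-optics image formation through lens + DOE:
        # here we only apply a phase-dependent attenuation so gradients flow.
        return scene * torch.sigmoid(phase)

    deblur_net = nn.Sequential(          # toy deblurring CNN
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

    # Jointly optimize the optic and the network with one optimizer
    opt = torch.optim.Adam([doe_phase, *deblur_net.parameters()], lr=1e-3)

    for step in range(100):
        scene = torch.rand(4, 1, 64, 64)             # synthetic sharp scenes
        sensor = optical_forward(scene, doe_phase)   # simulated coded capture
        recon = deblur_net(sensor)                   # computational deblurring
        loss = nn.functional.mse_loss(recon, scene)  # end-to-end objective
        opt.zero_grad(); loss.backward(); opt.step()
    ```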

    The Fresnel Zone Light Field Spectral Imager

    This thesis provides a computational model and the first experimental demonstration of a Fresnel zone light field spectral imaging (FZLFSI) system. This type of system couples an axial-dispersion binary diffractive optic with light field (plenoptic) camera designs, providing a snapshot spectral imaging capability. A computational model of the system was developed based on wave-optics methods using Fresnel propagation; it was validated experimentally and gives an excellent account of system capabilities. The experimentally demonstrated system was able to synthetically refocus monochromatic images across greater than a 100 nm bandwidth. Furthermore, the demonstrated system was modeled to have a full range of approximately 400 to 800 nm with close to a 15 nm spectral sampling interval. While images of multiple diffraction orders were observed in the measured light fields, they did not degrade the system's performance. Experimental demonstration also showed the capability to resolve and process two different spectral signatures from a single snapshot. For future FZLFSI designs, the study notes a fundamental design trade-off: improved spectral and spatial resolution reduces the spectral range of the system.
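
    The wave-optics modelling mentioned above can be illustrated with a standard Fresnel transfer-function propagator; the grid size, pixel pitch, wavelength, and zone-plate focal length below are illustrative assumptions, not the parameters of the demonstrated system.

    ```python
    import numpy as np

    def fresnel_propagate(field, wavelength, dx, z):
        """Propagate a complex field a distance z via the Fresnel transfer function."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        H = np.exp(1j * 2 * np.pi * z / wavelength) * \
            np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Example: propagate light through a binary Fresnel zone plate (FZP)
    n, dx, wavelength = 512, 5e-6, 550e-9        # grid, pixel pitch (m), wavelength (m)
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    f_design = 0.1                                # illustrative design focal length (m)
    fzp = (np.cos(np.pi * (X**2 + Y**2) / (wavelength * f_design)) > 0).astype(float)
    field_at_focus = fresnel_propagate(fzp, wavelength, dx, f_design)
    ```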

    Spectral near field data of LED systems for optical simulations

    This book presents, validates, and applies a fast, accurate, and general measurement and modeling technique to obtain spectral near field data of LED systems for optical simulations, addressing the steadily increasing requirements of modern high-quality LED systems. It requires only a minimum of goniophotometric near field measurements and no time-consuming angularly resolved spectral measurements. The obtained results can be used directly in state-of-the-art ray tracers.

    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time-series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
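
    For context, the conventional single-depth shift-and-sum refocus, which a filter focusing over a range of depths extends, can be sketched as follows; the light field dimensions and depth slope below are hypothetical, and this is not the thesis's frequency-domain filter itself.

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(light_field, slope):
        """Shift-and-sum refocusing of a 4D light field L[u, v, s, t].

        Each sub-aperture image is translated in proportion to its (u, v)
        position and the chosen depth slope, then averaged.
        """
        U, V, S, T = light_field.shape
        out = np.zeros((S, T))
        for u in range(U):
            for v in range(V):
                du = slope * (u - (U - 1) / 2)
                dv = slope * (v - (V - 1) / 2)
                out += nd_shift(light_field[u, v], (du, dv), order=1)
        return out / (U * V)

    # Example on a synthetic light field (hypothetical 9x9 views of 64x64 pixels)
    lf = np.random.rand(9, 9, 64, 64)
    image_at_depth = refocus(lf, slope=0.5)
    ```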

    Electron microscopy of electromagnetic waveforms

    Quickly oscillating electric and magnetic fields are the foundation of any information processing device or light-matter interaction. An electron microscope exceeds the diffraction limit of optical microscopes and is therefore a valuable instrument for investigating condensed-matter structures and nanoscale objects. While the electron microscope easily provides structural information, other methods are usually necessary to reveal electromagnetic phenomena. Moreover, for ultrafast devices, in which charge-carrier dynamics occur on femtosecond to picosecond time scales, the temporal resolution must reach those scales in order to access the sample's electromagnetic response. Here, we introduce and demonstrate a concept for electron microscopy of electromagnetic waveforms. We achieve sub-optical-cycle and sub-wavelength resolutions in time and space. The technique can be applied to a transmission electron microscope, expanding its capabilities to the regime of electromagnetic phenomena, and may thus give researchers access to additional important information on the object under investigation. We let a short electron pulse pass through a sample that is excited by an electromagnetic pulse, and record the time-dependent deflection. If the electron pulse, the key element of the technique, has a sub-cycle duration with respect to the excitation radiation, the electrons are deflected by a time-frozen Lorentz force in a quasi-classical way and therefore directly reveal the sample's dynamics. By using an all-optical terahertz compression approach, we succeeded in shortening a single-electron pulse from 930 fs down to 75 fs, which is 15 times shorter than the period of the dynamics excited in the sample. To characterize such a short electron pulse, streaking with THz fields in a sub-wavelength structure was applied, providing sub-20-femtosecond resolution. The reconstruction of electromagnetic fields from the electron deflection is not a trivial problem. We solve it by recording the electron density evolution after the interaction with the sample in a pump-probe experiment and employing the Gauss-Newton algorithm for an iterative fitting analysis. As a result, we acquire a time-delay sequence containing two-dimensional spatial distributions of the field-vector dynamics with sub-cycle resolution in time. Further analysis of the evaluated data can provide frequency and material-response information together with mode structures and their temporal dynamics. If the new technique is combined with a transmission electron microscope, it will be possible to study the fastest and smallest electrodynamic processes in light-matter interactions and devices.
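
    As a generic illustration of the Gauss-Newton fitting step mentioned above (not the authors' reconstruction code), the sketch below fits a simple sinusoidal deflection model using a finite-difference Jacobian; the model, excitation frequency, and data are made up for the example.

    ```python
    import numpy as np

    def gauss_newton(residual_fn, p0, n_iter=20):
        """Minimize ||residual_fn(p)||^2 by Gauss-Newton with a numerical Jacobian."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = residual_fn(p)
            # Finite-difference Jacobian of the residuals w.r.t. the parameters
            J = np.empty((r.size, p.size))
            for j in range(p.size):
                dp = np.zeros_like(p)
                dp[j] = 1e-6
                J[:, j] = (residual_fn(p + dp) - r) / 1e-6
            # Gauss-Newton update: p <- p - (J^T J)^{-1} J^T r
            p = p - np.linalg.lstsq(J, r, rcond=None)[0]
        return p

    # Example: fit amplitude and phase of a deflection trace d(t) = A * sin(w t + phi)
    t = np.linspace(0, 1, 200)
    w = 2 * np.pi * 3.0                              # assumed known excitation frequency
    measured = 1.7 * np.sin(w * t + 0.4)             # synthetic "measured" deflection
    model = lambda p: p[0] * np.sin(w * t + p[1])
    params = gauss_newton(lambda p: model(p) - measured, p0=[1.0, 0.0])
    ```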

    Context Imaging Raman Spectrometer

    Methods and systems for Raman spectroscopy and context imaging are disclosed. One or two lasers can be used to excite Raman scattering in a sample, while a plurality of LEDs can illuminate the sample at a different wavelength. The LED light is collected by a lenslet array in order to enable a high depth of field. Focusing of the image can be carried out at specific points of the image by processing the light collected by the lenslet array.

    Pioneering the use of a plenoptic Adaptive Optics system for Free Space Optical Communications

    Doctoral thesis in Astrophysics, defended 30 September 2019. In this thesis, an Adaptive Optics proposal is presented and experimentally verified both in the laboratory and at the telescope, with the objective of compensating atmospheric aberrations in the uplink beam and thereby improving the performance of Free Space Optical Communication links and the generation of Laser Guide Stars for conventional AO systems. The research focuses on the active correction of ground-to-space laser beams (optical links and artificial stars): downlink communications resemble conventional astronomical observations when applying Adaptive Optics techniques, since the light originates in space and travels down through the atmosphere to the receiver (where the AO system would be placed), whereas the uplink must be corrected before exiting the launching telescope by measuring the atmospheric wavefront with an a-priori unknown reference source. The uplink pre-compensation therefore entails a scientific and technological challenge. The uplink correction problem was studied in depth by formulating all possible solutions, which were modelled and simulated with an existing Adaptive Optics Matlab toolbox into which new functionalities were coded and integrated (upward Fresnel propagation, a new-concept wavefront sensor, etc.). Based on the simulation outcome, the corresponding requirements were formulated for the design of an uplink corrector AO system, from the optical elements through to the control strategy. After the hardware acquisition (both COTS elements and custom-built components), the uplink corrector laboratory-scale prototype was built and integrated at the IAC laboratory facilities. Finally, from January 2019 to May 2019, the Uplink Wavefront Corrector System was integrated at the Optical Ground Station telescope at Teide Observatory, successfully demonstrating uplink pre-compensation of the laser beam.
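
    The core pre-compensation idea, applying the phase conjugate of the sensed wavefront to the uplink beam before launch, can be sketched numerically as below; the array size and aberration are arbitrary stand-ins, and the real system acts on a physical beam via a deformable mirror rather than on an array.

    ```python
    import numpy as np

    def precompensate(uplink_field, measured_wavefront_rad):
        """Apply the phase conjugate of the sensed wavefront to the uplink beam."""
        return uplink_field * np.exp(-1j * measured_wavefront_rad)

    # Toy example: a flat uplink beam and a random aberration estimate (radians)
    n = 128
    aberration = np.random.randn(n, n) * 0.5       # illustrative sensed phase screen
    beam = np.ones((n, n), dtype=complex)          # collimated uplink beam
    corrected = precompensate(beam, aberration)
    # After traversing the same aberration, the residual phase is ideally zero:
    residual = corrected * np.exp(1j * aberration)
    ```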