
    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information that is lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings new challenges in terms of data capture, data compression, content editing, and display. Taken together, these two elements have made research in light field image processing increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
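    The post-capture refocusing mentioned in this abstract is commonly implemented by the classic shift-and-sum method over the 4D light field. The sketch below illustrates that general idea only; the `refocus` function, the (u, v, y, x) array layout, and the `slope` parameter are illustrative assumptions, not code from the paper.

    ```python
    import numpy as np

    def refocus(light_field, slope):
        """Synthetic refocusing by shift-and-sum (a minimal sketch).

        light_field: 4D array (u, v, y, x) of sub-aperture views.
        slope: disparity in pixels per unit aperture offset; it selects
        the depth plane that will appear in focus.
        """
        U, V, H, W = light_field.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                dy = int(round((u - cu) * slope))
                dx = int(round((v - cv) * slope))
                # Shift each view so rays from the chosen depth align,
                # then average: aligned points stay sharp, others blur.
                out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)
    ```

    Sweeping `slope` produces a focal stack, which is also a common starting point for the depth-estimation methods the survey covers.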

    Stereoscopic Depth Perception Through Foliage

    Both humans and computational methods struggle to discriminate the depths of objects hidden beneath foliage. However, such discrimination becomes feasible when computational optical synthetic aperture sensing is combined with the human ability to fuse stereoscopic images. For object identification tasks, as required in search and rescue, wildlife observation, surveillance, and early wildfire detection, depth helps differentiate true from false findings: for example, people, animals, or vehicles vs. sun-heated patches on the ground or in the tree crowns, or ground fires vs. tree trunks. We used video captured by a drone above dense woodland to test users' ability to discriminate depth. We found that this is impossible when viewing monoscopic video and relying on motion parallax. The same was true with stereoscopic video, because of the occlusions caused by foliage. However, when synthetic aperture sensing was used to reduce occlusions and disparity-scaled stereoscopic video was presented, human observers successfully discriminated depth where computational (stereo matching) methods failed. This shows the potential of systems that exploit the synergy between computational methods and human vision to perform tasks that neither can perform alone.
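    The occlusion reduction described here can be illustrated by the core of synthetic aperture sensing: register the drone frames to a common ground plane and integrate them, so thin occluders such as leaves, which lie off that plane, land at different image positions in each frame and are averaged away. A minimal sketch under the assumption of pre-computed per-frame registration shifts; the function name and inputs are hypothetical, not the authors' pipeline.

    ```python
    import numpy as np

    def synthetic_aperture_integral(frames, shifts):
        """Integral image over a synthetic aperture (a sketch).

        frames: list of 2D grayscale images taken along the flight path.
        shifts: per-frame (dy, dx) pixel shifts that align the ground plane.
        Points on the ground plane align across frames; off-plane occluders
        fall at different positions and are suppressed by the average.
        """
        acc = np.zeros_like(frames[0], dtype=float)
        for frame, (dy, dx) in zip(frames, shifts):
            acc += np.roll(frame, (dy, dx), axis=(0, 1))
        return acc / len(frames)
    ```

    The integral image can then be disparity-scaled and presented stereoscopically, which is the step the study evaluates with human observers.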

    Electromagnetic Scattering Characteristics of Composite Targets and Software Development Based on PO Algorithm

    The physical optics (PO) algorithm is a high-frequency electromagnetic (EM) method widely used to solve EM scattering problems for electrically large composite targets. Because the PO algorithm considers only the induced current in the bright region directly irradiated by the EM wave, its memory and time consumption are lower than those of other high-frequency algorithms, while its accuracy remains good. Building on the PO algorithm, this thesis focuses on the occlusion judgement step of PO and its application to composite targets. The main contents of this thesis are as follows: 1. An occlusion judgement software system for the PO algorithm is developed. Its main function is to determine the bright region of the target under EM-wave irradiation. The software implements two judgement methods: a CPU-based ray tracing method and a Z-buffer method running on both CPU and GPU. Because of the compromise between patch size and patch number, both methods produce errors at the boundary between the bright and shadow regions; this thesis analyzes that error and reduces it. 2. Based on the PO algorithm, the EM scattering characteristics of targets covered by a plasma sheath are discussed. We simulate the plasma-sheath flow field of a hypersonic vehicle with the FASTRAN software, and compare and analyze the plasma-sheath electron number density at different flight altitudes and speeds. On this basis, the bistatic RCS of the hypersonic vehicle under head-on irradiation at different flight altitudes and speeds is calculated using the layered-medium PO algorithm. 3. SAR image simulation of a tree-ground composite target is carried out based on the PO algorithm and the non-uniform fast Fourier transform (NUFFT) method. First, we introduce the geometric and EM-parameter modeling of the tree-ground composite target, and obtain its scattering characteristics using the PO algorithm. Finally, the scattered field of the target is processed by the NUFFT method to obtain simulated SAR images of multi-tree scenes.
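    The Z-buffer style bright-region judgement described in part 1 can be sketched as follows: project facet centroids onto a screen perpendicular to the incidence direction, keep only the nearest facet in each screen cell, and mark it bright if it faces the incoming wave. This is an illustrative approximation, not the thesis software; the coarse screen grid also hints at why errors appear at bright/shadow boundaries when patch size and cell size are mismatched.

    ```python
    import numpy as np

    def bright_facets(centers, normals, k_inc, grid=64):
        """Z-buffer bright-region judgement for PO (a sketch).

        centers: (N, 3) facet centroids; normals: (N, 3) outward unit normals;
        k_inc: propagation direction of the incident plane wave.
        Returns a boolean mask of facets treated as illuminated.
        """
        k = np.asarray(k_inc, float)
        k = k / np.linalg.norm(k)
        # Build a screen basis orthogonal to the incidence direction.
        tmp = np.array([1.0, 0.0, 0.0])
        if abs(tmp @ k) > 0.9:
            tmp = np.array([0.0, 1.0, 0.0])
        e1 = np.cross(k, tmp); e1 /= np.linalg.norm(e1)
        e2 = np.cross(k, e1)
        x, y = centers @ e1, centers @ e2
        depth = centers @ k            # distance along propagation; smaller = hit first
        facing = (normals @ k) < 0.0   # facet turned toward the incoming wave

        # Quantize to screen cells; only the nearest facet per cell can be lit.
        ix = np.floor((x - x.min()) / (np.ptp(x) + 1e-12) * (grid - 1)).astype(int)
        iy = np.floor((y - y.min()) / (np.ptp(y) + 1e-12) * (grid - 1)).astype(int)
        cell = ix * grid + iy
        nearest = {}
        for i in np.argsort(depth):    # visit facets front to back
            nearest.setdefault(cell[i], i)
        bright = np.zeros(len(centers), bool)
        for i in nearest.values():
            bright[i] = facing[i]
        return bright
    ```

    In a full PO code, the induced current would then be evaluated only on the facets this mask marks as bright.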

    Multidimensional Optical Sensing and Imaging Systems (MOSIS): From Macro to Micro Scales

    Multidimensional optical imaging systems for information processing and visualization technologies have numerous applications in fields such as manufacturing, medical sciences, entertainment, robotics, surveillance, and defense. Among the different three-dimensional (3-D) imaging methods, integral imaging is a promising multiperspective sensing and display technique. Compared with other 3-D imaging techniques, integral imaging can capture a scene using an incoherent light source and generate real 3-D images for observation without any special viewing devices. This review paper describes passive multidimensional imaging systems combined with different integral imaging configurations. One example is the integral-imaging-based multidimensional optical sensing and imaging systems (MOSIS), which can be used for 3-D visualization, seeing through obscurations, material inspection, and object recognition from micro scales to long-range imaging. This system utilizes many degrees of freedom, such as time and space multiplexing, depth information, and polarimetric, temporal, photon-flux, and multispectral information, based on integral imaging to record and reconstruct the multidimensionally integrated scene. Image fusion may be used to integrate the multidimensional images obtained by polarimetric sensors, multispectral cameras, and various multiplexing techniques. The multidimensional images contain substantially more information than two-dimensional (2-D) images or conventional 3-D images. In addition, we present recent progress and applications of 3-D integral imaging, including human gesture recognition in the time domain, depth estimation, mid-wave-infrared photon counting, 3-D polarimetric imaging for object shape and material identification, dynamic integral imaging implemented with liquid-crystal devices, and 3-D endoscopy for healthcare applications.

    B. Javidi wishes to acknowledge support by the National Science Foundation (NSF) under Grant NSF/IIS-1422179, and by DARPA and the US Army under contract number W911NF-13-1-0485. The work of P. Latorre Carmona, A. Martínez-Uso, J. M. Sotoca and F. Pla was supported by the Spanish Ministry of Economy under project ESP2013-48458-C4-3-P, by MICINN under project MTM2013-48371-C2-2-PDGI, by the Generalitat Valenciana under project PROMETEO-II/2014/062, and by Universitat Jaume I through project P11B2014-09. The work of M. Martínez-Corral and G. Saavedra was supported by the Spanish Ministry of Economy and Competitiveness under grant DPI2015-66458-C2-1R, and by the Generalitat Valenciana, Spain, under project PROMETEOII/2014/072.
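    Computational reconstruction in integral imaging, used above for 3-D visualization and seeing through obscurations, can be illustrated by back-projecting the elemental images to a chosen depth plane: each image is shifted by that plane's disparity and the stack is averaged. A minimal sketch under an assumed pinhole camera-array geometry; `reconstruct_plane` and its parameters are illustrative, not the MOSIS implementation.

    ```python
    import numpy as np

    def reconstruct_plane(elemental, pitch, focal, z):
        """Integral-imaging reconstruction at depth z (a sketch).

        elemental: 4D array (r, c, y, x) of elemental images from a camera array.
        pitch: camera spacing (same length units as z); focal: focal length in pixels.
        Each elemental image is shifted by the disparity of the plane at depth z
        (shift = focal * pitch / z, in pixels) and the stack is averaged, so
        points on that plane align while points at other depths blur.
        """
        R, C, H, W = elemental.shape
        shift = focal * pitch / z
        out = np.zeros((H, W))
        for r in range(R):
            for c in range(C):
                dy = int(round((r - (R - 1) / 2) * shift))
                dx = int(round((c - (C - 1) / 2) * shift))
                out += np.roll(elemental[r, c], (dy, dx), axis=(0, 1))
        return out / (R * C)
    ```

    Reconstructing a range of z values yields a depth slice stack, the basis for the depth-estimation and see-through-obscuration applications listed in the abstract.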

    Convolutional Neural Networks - Generalizability and Interpretations
