1,089 research outputs found

    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective way to capture light fields. A major drawback of MLA-based light field cameras is their low spatial resolution, because a single image sensor is shared between spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested on real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
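    The spatial enhancement step of such an approach can be pictured as a small single-image CNN applied to each sub-aperture view. The sketch below is a minimal SRCNN-style model in PyTorch, not the authors' network; the layer sizes, the bicubic pre-upsampling, and the 9x9 Lytro-like view grid are illustrative assumptions.

        # Minimal SRCNN-style sketch for spatially super-resolving one sub-aperture
        # view of a light field (illustrative only; not the authors' architecture).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SubApertureSR(nn.Module):
            def __init__(self):
                super().__init__()
                self.feat = nn.Conv2d(1, 64, kernel_size=9, padding=4)  # feature extraction
                self.map = nn.Conv2d(64, 32, kernel_size=1)             # nonlinear mapping
                self.rec = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # reconstruction

            def forward(self, lr, scale=2):
                # Bicubically upsample first, then let the CNN restore high-frequency detail.
                x = F.interpolate(lr, scale_factor=scale, mode='bicubic', align_corners=False)
                x = F.relu(self.feat(x))
                x = F.relu(self.map(x))
                return self.rec(x)

        # Example: enhance a hypothetical 9x9 grid of 64x64 sub-aperture views one by one.
        views = torch.rand(81, 1, 64, 64)
        model = SubApertureSR()
        sr_views = torch.stack([model(v.unsqueeze(0)).squeeze(0) for v in views])
        print(sr_views.shape)  # -> torch.Size([81, 1, 128, 128])

    Angular enhancement (synthesizing new views between the captured ones) would be handled by a second network; the sketch only illustrates the per-view spatial stage.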

    Superresolution Enhancement of Hyperspectral CHRIS/Proba Images With a Thin-Plate Spline Nonrigid Transform Model

    Given the hyperspectral-oriented waveband configuration of multiangular CHRIS/Proba imagery, the scope of its application could widen if the present 18-m resolution were improved. The multiangular images of CHRIS can be used as input for superresolution (SR) image reconstruction. A critical procedure in SR is an accurate registration of the low-resolution (LR) images. Conventional methods based on an affine transformation may not be effective given the local geometric distortion in high off-nadir angular images. This paper examines the use of a nonrigid transform to improve the result of a nonuniform interpolation and deconvolution SR method. A scale-invariant feature transform (SIFT) is used to collect control points (CPs). To ensure the quality of the CPs, a rigorous screening procedure is designed: 1) an ambiguity test; 2) the M-estimator sample consensus (MSAC) method; and 3) an iterative method using the statistical characteristics of the distribution of random errors. A thin-plate spline (TPS) nonrigid transform is then used for the registration. The proposed registration method is examined with a Delaunay triangulation-based nonuniform interpolation and reconstruction SR method. Our results show that the TPS nonrigid transform allows accurate registration of the angular images. SR results obtained from simulated LR images are evaluated using three quantitative measures, namely, relative mean-square error, structural similarity, and edge stability. Compared to SR methods that use an affine transform, the proposed method performs better on all three evaluation measures. With a higher level of spatial detail, SR-enhanced CHRIS images may be more effective than the original data in various applications.
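    The registration stage described above can be sketched in a few lines: SIFT control points, a robust screening step, and a TPS warp fitted to the surviving matches. In the sketch below, OpenCV's RANSAC stands in for the paper's MSAC-plus-statistical screening, and SciPy's thin-plate-spline RBF interpolator plays the role of the TPS transform; function and variable names are illustrative.

        # Sketch: SIFT control points + inlier screening + thin-plate-spline (TPS)
        # registration of one off-nadir band to a reference band.
        # Inputs are 8-bit grayscale numpy arrays.
        import cv2
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.ndimage import map_coordinates

        def tps_register(moving, reference):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(moving, None)
            k2, d2 = sift.detectAndCompute(reference, None)

            # Ratio-test matching to discard ambiguous control points.
            matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.7 * n.distance]
            src = np.float32([k1[m.queryIdx].pt for m in good])
            dst = np.float32([k2[m.trainIdx].pt for m in good])

            # Robust screening of control points (RANSAC here; the paper uses MSAC
            # plus additional statistical tests).
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            src, dst = src[mask.ravel() == 1], dst[mask.ravel() == 1]

            # Fit a TPS mapping reference coordinates -> moving-image coordinates,
            # then resample the moving image on the reference grid (backward warping).
            tps = RBFInterpolator(dst, src, kernel='thin_plate_spline', smoothing=0.0)
            h, w = reference.shape
            yy, xx = np.mgrid[0:h, 0:w]
            coords = tps(np.column_stack([xx.ravel(), yy.ravel()]))  # (x, y) in moving image
            warped = map_coordinates(moving, [coords[:, 1], coords[:, 0]], order=1)
            return warped.reshape(h, w)

    The registered angular images would then feed the Delaunay triangulation-based nonuniform interpolation and deconvolution steps of the SR reconstruction.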

    Far-Field Tunable Nano-focusing Based on Metallic Slits Surrounded with Nonlinear-Variant Widths and Linear-Variant Depths of Circular Dielectric Grating

    In this work, we design a new tunable nanofocusing lens based on a circular grating with linearly varying depths and nonlinearly varying widths for practical far-field applications. The constructive interference of cylindrical surface plasmons launched by the subwavelength metallic structure can form a subdiffraction-limited focus, and the focal length of the structure can be adjusted when the depth and width of each groove of the circular grating follow a tailored profile. According to numerical calculations, the range over which the focal point can be shifted is much larger than that of other plasmonic lenses, and the relative phase of the emitted light scattered by the surface-plasmon-coupled circular grating can be modulated by the nonlinearly varying widths and linearly varying depths. The simulation results indicate that different relative phases of the emitted light lead to different focal lengths. We show, for the first time, that positive and negative changes in the groove depths and widths result in different variation trends between the relative phases and the focal lengths. These results pave the way for applying the plasmonic lens to high-density optical storage, nanolithography, superresolution optical microscopy, optical trapping, and sensing. Comment: 14 pages, 9 figures
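    A common way to reason about such a design is the constructive-interference condition for a planar plasmonic lens: light launched from each groove should arrive at the desired focus in phase with the others. The relation below is this generic condition, with the groove-dependent scattering phase written as an assumed function of the local width and depth; it is not taken from the paper itself.

        \[
            \varphi(w_n, d_n) + \frac{2\pi}{\lambda}\sqrt{r_n^{2} + f^{2}}
              = \varphi(w_1, d_1) + \frac{2\pi}{\lambda}\sqrt{r_1^{2} + f^{2}} + 2\pi m,
            \qquad m \in \mathbb{Z},
        \]

    where $r_n$ is the radius of the $n$-th groove, $f$ the focal length, $\lambda$ the free-space wavelength, and $\varphi(w_n, d_n)$ the phase imparted by a groove of width $w_n$ and depth $d_n$. Changing the width and depth profiles changes $\varphi$, and hence the focal length $f$ that satisfies the condition.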

    Superresolution imaging: A survey of current techniques

    Cristóbal, G., Gil, E., Šroubek, F., Flusser, J., Miravet, C., Rodríguez, F. B., "Superresolution imaging: A survey of current techniques", Proceedings of SPIE - The International Society for Optical Engineering, 7074, 2008. Copyright 2008 Society of Photo-Optical Instrumentation Engineers.
    Imaging plays a key role in many diverse areas of application, such as astronomy, remote sensing, microscopy, and tomography. Owing to imperfections of measuring devices (e.g., optical degradations, limited sensor size) and instability of the observed scene (e.g., object motion, media turbulence), acquired images can be indistinct, noisy, and may exhibit insufficient spatial and temporal resolution. In particular, several external effects blur images. Techniques for recovering the original image include blind deconvolution (to remove blur) and superresolution (SR). The stability of these methods depends on having more than one image of the same scene; differences between the images are necessary to provide new information, but they can be almost imperceptible. State-of-the-art SR techniques achieve remarkable results in resolution enhancement by estimating the subpixel shifts between images, but they lack any apparatus for calculating the blurs. In this paper, after a review of current SR techniques, we describe two SR methods recently developed by the authors. First, we introduce a variational method that minimizes a regularized energy function with respect to the high-resolution image and the blurs, establishing a unified way to estimate both simultaneously. By estimating the blurs we automatically estimate the shifts with subpixel accuracy, which is essential for good SR performance. Second, an innovative learning-based algorithm using a neural architecture for SR is described. Comparative experiments on real data illustrate the robustness and utility of both methods.
    This research has been partially supported by the following grants: TEC2007-67025/TCM, TEC2006-28009-E, BFI-2003-07276, and TIN-2004-04363-C03-03 from the Spanish Ministry of Science and Innovation, and by PROFIT projects FIT-070000-2003-475 and FIT-330100-2004-91. This work has also been partially supported by the Czech Ministry of Education under project No. 1M0572 (Research Center DAR), by the Czech Science Foundation under project No. GACR 102/08/1593, and by the CSIC-CAS bilateral project 2006CZ002.
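    The variational method outlined above can be summarized by a generic multiframe blind-SR energy; the exact functional used in the paper may differ, so the symbols and regularizers below are illustrative.

        \[
            E\bigl(u, \{h_k\}\bigr)
              = \sum_{k=1}^{K} \bigl\| D\,(h_k \ast u) - g_k \bigr\|_2^{2}
                + \lambda\, R(u) + \gamma\, Q\bigl(\{h_k\}\bigr),
        \]

    where $g_k$ are the observed low-resolution frames, $u$ is the high-resolution image, $h_k$ are the per-frame blurs (which also absorb the subpixel shifts), $D$ is the decimation operator, and $R$ and $Q$ are regularization terms on the image and the blurs. Alternating minimization over $u$ and $\{h_k\}$ then estimates the blurs, shifts, and high-resolution image jointly.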