
    Spectro-Perfectionism: An Algorithmic Framework for Photon Noise-Limited Extraction of Optical Fiber Spectroscopy

    We describe a new algorithm for the "perfect" extraction of one-dimensional spectra from two-dimensional (2D) digital images of optical fiber spectrographs, based on accurate 2D forward modeling of the raw pixel data. The algorithm is correct for arbitrarily complicated 2D point-spread functions (PSFs), as compared to the traditional optimal extraction algorithm, which is only correct for a limited class of separable PSFs. The algorithm results in statistically independent extracted samples in the 1D spectrum, and preserves the full native resolution of the 2D spectrograph without degradation. Both the statistical errors and the 1D resolution of the extracted spectrum are accurately determined, allowing a correct chi-squared comparison of any model spectrum with the data. Using a model PSF similar to that found in the red channel of the Sloan Digital Sky Survey spectrograph, we compare the performance of our algorithm to that of cross-section-based optimal extraction, and also demonstrate that our method allows coaddition and foreground estimation to be carried out as an integral part of the extraction step. This work demonstrates the feasibility of current- and next-generation multi-fiber spectrographs for faint galaxy surveys even in the presence of strong night-sky foregrounds. We describe the handling of subtleties arising from fiber-to-fiber crosstalk, discuss some of the likely challenges in deploying our method to the analysis of a full-scale survey, and note that our algorithm could be generalized into an optimal method for the rectification and combination of astronomical imaging data.
    Comment: 9 pages, 4 figures, emulateapj; minor corrections and clarifications; to be published in the PASP
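    To make the forward-modeling idea concrete, below is a minimal numpy sketch under simplifying assumptions: a toy one-dimensional detector, Gaussian per-sample PSFs, and Poisson photon noise stand in for real 2D pixel data, and the decorrelation step uses a symmetric square root of the inverse covariance in the spirit of the paper's resolution matrix. It illustrates the technique and is not the authors' pipeline.

```python
import numpy as np

# Toy forward-model extraction sketch. The Gaussian PSF, sizes, and
# variable names are illustrative assumptions.

def build_design_matrix(n_pix, n_spec, sigma=1.2):
    """Column j holds the (here 1D, normally flattened 2D) PSF of sample j."""
    x = np.arange(n_pix)
    centers = np.linspace(2, n_pix - 3, n_spec)
    A = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    return A / A.sum(axis=0)

rng = np.random.default_rng(0)
n_pix, n_spec = 100, 40
A = build_design_matrix(n_pix, n_spec)
f_true = 50 + 30 * rng.random(n_spec)           # underlying 1D spectrum
pixels = rng.poisson(A @ f_true).astype(float)  # photon-noise-limited data

# Weighted least squares: fhat = (A^T N^-1 A)^-1 A^T N^-1 p,
# approximating the Poisson variance by the observed counts.
Ninv = 1.0 / np.maximum(pixels, 1.0)
C = A.T @ (Ninv[:, None] * A)                   # inverse covariance of fhat
fhat = np.linalg.solve(C, A.T @ (Ninv * pixels))

# Decorrelate via the symmetric square root of C, normalized so each
# row sums to one; r then has statistically independent samples.
w, V = np.linalg.eigh(C)
sqrtC = V @ np.diag(np.sqrt(w)) @ V.T
R = sqrtC / sqrtC.sum(axis=1, keepdims=True)
r = R @ fhat                                    # reconvolved, decorrelated spectrum
```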

    Image-based calibration of a deformable mirror in wide-field microscopy

    Optical aberrations limit resolution in biological tissues, and their influence is particularly large for promising techniques like light-sheet microscopy. In principle, image quality might be improved by adaptive optics (AO), in which aberrations are corrected using a deformable mirror (DM). To implement AO in microscopy, one requires a method to measure wavefront aberrations, but the most commonly used methods have limitations for samples lacking point-source emitters. Here we implement an image-based wavefront-sensing technique, a variant of generalized phase-diverse imaging called multi-frame blind deconvolution, and exploit it to calibrate a DM in a light-sheet microscope. We describe two methods of parameterizing the influence of the DM on aberrations: a traditional Zernike expansion requiring 1,040 parameters, and a direct physical model of the DM requiring just 8 or 110 parameters. By randomizing voltages on all actuators, we show that the Zernike expansion successfully predicts wavefronts to an accuracy of approximately 30 nm (rms) even for large aberrations. We thus show that image-based wavefront sensing, which requires no additional optical equipment, allows for a simple but powerful method to calibrate a deformable optical element in a microscope setting.
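    As a hedged illustration of the calibration step, the sketch below fits a linear influence model from randomized actuator voltages, analogous to the random-poke measurements described above; note that 20 Zernike modes times 52 actuators gives the 1,040 coefficients quoted in the abstract. The noise level and the assumption of a purely linear voltage-to-wavefront response are illustrative choices, not the paper's actual parameterization.

```python
import numpy as np

# Fit a linear DM influence matrix M (Zernike coefficients per unit
# voltage) from randomized pokes. Shapes and noise are assumptions.
rng = np.random.default_rng(1)
n_modes, n_act, n_trials = 20, 52, 200   # 20 * 52 = 1,040 coefficients

M_true = rng.normal(size=(n_modes, n_act))           # unknown influence matrix
V = rng.uniform(-1, 1, size=(n_act, n_trials))       # randomized voltages
W = M_true @ V + 0.02 * rng.normal(size=(n_modes, n_trials))  # sensed wavefronts

# Least-squares fit of the influence matrix: W ~ M V
M_fit, *_ = np.linalg.lstsq(V.T, W.T, rcond=None)
M_fit = M_fit.T

rms_err = np.sqrt(np.mean((M_fit - M_true) ** 2))
print(f"influence-matrix fit rms error: {rms_err:.3f}")
```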

    Dense Point-Cloud Representation of a Scene using Monocular Vision

    We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on the 3-D reconstruction of a scene using only a single moving camera. Utilizing video frames captured at different points in time allows us to determine the depths of a scene, so the system can construct a point-cloud model of its unknown surroundings. We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique. Our reconstruction framework first generates a primitive point cloud computed from feature matching and depth triangulation. To densify the reconstruction, we use optical-flow features to create an extremely dense representation model. As a third algorithmic modification, we add a preprocessing step of nonlinear single-image super-resolution, which significantly increases the depth accuracy of the point cloud, since that accuracy relies on precise disparity measurement. Our final contribution is a postprocessing step that filters noise points and mismatched features, completing the dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and compare it with two state-of-the-art techniques.
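    The core of the primitive reconstruction (dense optical flow between two frames of a moving camera, followed by depth triangulation) can be sketched with standard OpenCV calls as below. The intrinsics K and the two 3x4 camera poses P1, P2 are assumed known (e.g. from odometry), and the function name is our own; the full DPR method additionally applies super-resolution preprocessing and noise filtering.

```python
import numpy as np
import cv2

def dense_point_cloud(img1, img2, K, P1, P2, stride=4):
    """Hedged sketch: flow-based matches between two frames, triangulated
    into a point cloud. K, P1, P2 are assumed known; P1, P2 are 3x4 poses."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray1, gray2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray1.shape
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    pts1 = np.stack([xs.ravel(), ys.ravel()]).astype(np.float32)
    pts2 = pts1 + flow[ys.ravel(), xs.ravel()].T   # flow-displaced matches

    # Triangulate homogeneous 3D points from the two projection matrices.
    X = cv2.triangulatePoints(K @ P1, K @ P2, pts1, pts2)
    X = (X[:3] / X[3]).T                            # Nx3 point cloud
    return X[np.isfinite(X).all(axis=1)]
```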

    A Framework for SAR-Optical Stereogrammetry over Urban Areas

    Currently, numerous remote sensing satellites provide a huge volume of diverse earth observation data. As these data show different features regarding resolution, accuracy, coverage, and spectral imaging ability, fusion techniques are required to integrate the different properties of each sensor and produce useful information. For example, synthetic aperture radar (SAR) data can be fused with optical imagery to produce 3D information using stereogrammetric methods. The main focus of this study is to investigate the possibility of applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical image pairs. For this purpose, the applicability of semi-global matching is investigated in this unconventional multi-sensor setting. To support the image matching by reducing the search space and accelerating the identification of correct, reliable matches, the possibility of establishing an epipolarity constraint for VHR SAR-optical image pairs is investigated as well. In addition, it is shown that the absolute geolocation accuracy of VHR optical imagery with respect to VHR SAR imagery such as provided by TerraSAR-X can be improved by a multi-sensor block adjustment formulation based on rational polynomial coefficients. Finally, the feasibility of generating point clouds with a median accuracy of about 2 m is demonstrated and confirms the potential of 3D reconstruction from SAR-optical image pairs over urban areas.
    Comment: This is the pre-acceptance version; to read the final version, please go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect
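    For readers unfamiliar with semi-global matching, the following is a hedged sketch of its per-scanline cost aggregation, the component whose applicability to SAR-optical pairs the study investigates. The penalties P1 and P2 and the random costs are placeholders; real SGM aggregates over 8 to 16 path directions and uses a radiometrically robust matching cost, which matters especially across SAR and optical modalities.

```python
import numpy as np

def aggregate_scanline(cost, P1=10.0, P2=120.0):
    """SGM 1D aggregation: cost is (width, n_disp); returns aggregated L.
    P1 penalizes one-pixel disparity changes, P2 penalizes larger jumps."""
    w, d = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        same = prev                                    # disparity unchanged
        plus = np.roll(prev, 1);   plus[0] = np.inf    # disparity - 1 neighbor
        minus = np.roll(prev, -1); minus[-1] = np.inf  # disparity + 1 neighbor
        jump = prev.min() + P2                         # any larger jump
        L[x] = cost[x] + np.minimum.reduce(
            [same, plus + P1, minus + P1, np.full(d, jump)]) - prev.min()
    return L

# Usage on placeholder random costs: best disparity per pixel along the line.
cost = np.random.default_rng(2).random((64, 32))
disp = aggregate_scanline(cost).argmin(axis=1)
```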

    Hallucinating dense optical flow from sparse lidar for autonomous vehicles

    In this paper we propose a novel approach to estimate dense optical flow from sparse lidar data acquired on an autonomous vehicle. This is intended to be used as a drop-in replacement for any image-based optical flow system when images are not reliable, e.g. in adverse weather conditions or at night. In order to infer high-resolution 2D flows from discrete range data, we devise a three-block architecture of multiscale filters that combines multiple intermediate objectives in both the lidar and image domains. To train this network we introduce a dataset of approximately 20K lidar samples from the KITTI dataset, which we have augmented with pseudo ground-truth image-based optical flow computed using FlowNet2. We demonstrate the effectiveness of our approach on KITTI, and show that despite using the low-resolution and sparse measurements of the lidar, we can regress dense optical flow maps on par with those estimated by image-based methods.
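    Below is a toy PyTorch sketch of the idea, under loud assumptions: a small encoder-decoder stands in for the paper's three-block multiscale architecture, taking a sparse depth map plus a validity mask and regressing a dense two-channel flow field, trained against the FlowNet2 pseudo ground truth with an average-endpoint-error loss. Layer sizes and names are invented for illustration.

```python
import torch
import torch.nn as nn

class LidarToFlow(nn.Module):
    """Toy stand-in for the lidar-to-flow network; not the paper's model.
    Input: sparse depth and validity mask, each (B, 1, H, W) with H, W
    divisible by 8. Output: dense (u, v) flow, (B, 2, H, W)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # (u, v) flow
        )

    def forward(self, sparse_depth, valid_mask):
        x = torch.cat([sparse_depth, valid_mask], dim=1)
        return self.decoder(self.encoder(x))

def epe_loss(pred, teacher, teacher_valid):
    """Average endpoint error over pixels where the teacher flow is valid."""
    err = torch.norm(pred - teacher, dim=1)        # per-pixel endpoint error
    return (err * teacher_valid).sum() / teacher_valid.sum().clamp(min=1)
```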