
    In-Flight CCD Distortion Calibration for Pushbroom Satellites Based on Subpixel Correlation

    We describe a method that allows for accurate in-flight calibration of the interior orientation of any pushbroom camera and that, in particular, solves the problem of modeling the distortions induced by charge-coupled device (CCD) misalignments. The distortion induced on the ground by each CCD is measured using subpixel correlation between the orthorectified image to be calibrated and an orthorectified reference image that is assumed distortion free. Distortions are modeled as camera defects, which are assumed constant over time. Our results show that in-flight interior orientation calibration reduces internal camera biases by one order of magnitude. In particular, we fully characterize and model the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and we conjecture that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging. The derived calibration models have been integrated into the software package Coregistration of Optically Sensed Images and Correlation (COSI-Corr), freely available from the Caltech Tectonics Observatory website. Such calibration models are particularly useful in reducing biases in digital elevation models (DEMs) generated from stereo matching and in improving the accuracy of change detection algorithms.
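
    The subpixel correlation step can be illustrated with a minimal phase-correlation sketch (the function names are hypothetical; COSI-Corr's actual correlator is more elaborate and operates on orthorectified image patches):

```python
import numpy as np

def _parabolic(cm, c0, cp):
    # three-point quadratic interpolation of the correlation peak
    denom = cm - 2.0 * c0 + cp
    return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

def subpixel_shift(ref, img):
    """Estimate the (row, col) translation of img relative to ref by phase
    correlation, refined to sub-pixel precision at the peak."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12          # normalised cross-power spectrum
    corr = np.fft.fftshift(np.real(np.fft.ifft2(cross)))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    # assumes the peak does not sit on the array border
    dy = py + _parabolic(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = px + _parabolic(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    return dy - ref.shape[0] // 2, dx - ref.shape[1] // 2
```

    Measuring such offsets at many positions along the CCD array yields the per-CCD ground-distortion profiles that the calibration then models as constant camera defects.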

    Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground-based mobile and robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera's 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera's pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data, and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.
    Comment: Published in MDPI Sensors, 30 October 201
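
    A toy sketch of the likelihood-plus-MCMC idea (the `project` function, noise level, and step size below are illustrative stand-ins, not the paper's actual line-camera model):

```python
import numpy as np

def reprojection_loglik(params, project, points_3d, observed, sigma=1.0):
    """Gaussian log-likelihood of reprojection residuals for a candidate
    camera offset; `project` maps (params, points) -> predicted coordinates."""
    r = project(params, points_3d) - observed
    return -0.5 * np.sum(r**2) / sigma**2

def metropolis(loglik, x0, steps=2000, scale=0.05, seed=0):
    """Random-walk Metropolis sampler, used here to explore pose uncertainty."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lx = loglik(x)
    samples = []
    for _ in range(steps):
        prop = x + scale * rng.standard_normal(x.shape)
        lp = loglik(prop)
        if np.log(rng.random()) < lp - lx:  # Metropolis acceptance rule
            x, lx = prop, lp
        samples.append(x.copy())
    return np.array(samples)
```

    Maximising the same likelihood gives the pose estimate; the spread of the Metropolis samples gives the uncertainty of the offset.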

    Exploration of a Polarized Surface Bidirectional Reflectance Model Using the Ground-Based Multiangle Spectropolarimetric Imager

    Accurate characterization of surface reflection is essential for retrieval of aerosols using downward-looking remote sensors. In this paper, observations from the Ground-based Multiangle SpectroPolarimetric Imager (GroundMSPI) are used to evaluate a surface polarized bidirectional reflectance distribution function (PBRDF) model. GroundMSPI is an eight-band spectropolarimetric camera mounted on a rotating gimbal to acquire pushbroom imagery of outdoor landscapes. The camera uses a very accurate photoelastic-modulator-based polarimetric imaging technique to acquire Stokes vector measurements in three of the instrument's bands (470, 660, and 865 nm). A description of the instrument is presented, and observations of selected targets within a scene acquired on 6 January 2010 are analyzed. Data collected during the course of the day as the Sun moved across the sky provided a range of illumination geometries that facilitated evaluation of the surface model, which comprises a volumetric reflection term represented by the modified Rahman-Pinty-Verstraete function plus a specular reflection term generated by a randomly oriented array of Fresnel-reflecting microfacets. While the model is fairly successful in predicting the polarized reflection from two grass targets in the scene, it does a poorer job for two man-made targets (a parking lot and a truck roof), possibly due to their greater degree of geometric organization. Several empirical adjustments to the model are explored and lead to improved fits to the data. For all targets, the data support the notion of spectral invariance in the angular shape of the unpolarized and polarized surface reflection. As noted by others, this behavior provides valuable constraints on the aerosol retrieval problem, and highlights the importance of multiangle observations.
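
    The specular term rests on Fresnel reflection from microfacets. As a minimal illustration, the s- and p-polarised Fresnel intensity reflectances, whose difference drives the polarized signal, can be computed as follows (the refractive index n = 1.5 is an assumed value, not one taken from the paper):

```python
import numpy as np

def fresnel_pol(theta_i, n=1.5):
    """Fresnel intensity reflectances for s- and p-polarisation at incidence
    angle theta_i (radians) on a dielectric of refractive index n."""
    ci = np.cos(theta_i)
    st = np.sin(theta_i) / n            # Snell's law: sin(theta_t)
    ct = np.sqrt(1.0 - st**2)
    rs = ((ci - n * ct) / (ci + n * ct))**2   # s-polarised reflectance
    rp = ((n * ci - ct) / (n * ci + ct))**2   # p-polarised reflectance
    return rs, rp
```

    Near the Brewster angle arctan(n), rp vanishes and the reflected light is fully polarised; the full PBRDF model integrates this behaviour over a distribution of randomly oriented facets and adds the volumetric (modified Rahman-Pinty-Verstraete) term.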

    Application of imaging system geometric models to a synthetic image generation system

    A generalized imaging system geometric model has been incorporated into the Center for Imaging Science Digital Imaging and Remote Sensing Image Generation (DIRSIG) software system. The camera model is capable of simulating the geometric characteristics of frame cameras, line scanners, and pushbroom scanners. The user of the model can define the sensor internal orientation as well as provide time-varying external orientation parameters. The model has been successfully validated through the use of diagnostic simulated scenes as well as quantitative comparisons between actual imagery and simulated imagery.
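
    For the frame-camera case, the core of such a geometric model is the collinearity projection. A minimal sketch (hypothetical function, ignoring lens distortion and principal-point offsets) is:

```python
import numpy as np

def project_pinhole(X, C, R, f):
    """Collinearity projection of world points X (N x 3) onto a frame camera
    with centre C, world-to-camera rotation R, and focal length f (pixels)."""
    Xc = (np.atleast_2d(X) - C) @ R.T       # points in the camera frame
    return f * Xc[:, :2] / Xc[:, 2:3]       # perspective divide -> (x, y)
```

    A pushbroom scanner applies the same relation one scan line at a time, with C and R evaluated from the time-varying external orientation at each line's acquisition instant.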

    Modeling the MTF and noise characteristics of an image chain for a synthetic image generation system

    This is an approach for modeling sensor degradation effects using an image chain applied to a synthetic radiance image. The sensor effects are applied in the frequency domain by cascading modulation transfer functions (MTFs) and phase transfer functions (PTFs) from the different stages in the acquisition portion of the image chain. The sensor simulation is intended not only to degrade an image to make it look real, but to do so in a manner that conserves the image's radiometry. Some common transfer function stages include: effects from the atmosphere, optical diffraction, detector size, and scanning motion. The chain is modeled in a modular format that allows for simplified use. AVS was chosen as the operating platform because of its drag-and-click user interface. The sensor model includes the addition of noise from various stages and allows the user to include any noise type. The frequency representations of the images are calculated using the fast Fourier transform (FFT), and the optical transfer function (OTF) for the exit pupil is calculated by an autocorrelation of a digital representation of the exit pupil. Analysis of the simulated image quality is conducted by comparing the empirical MTFs of a truth image and a simulated image. A visual comparison between image features is also made for further validation.
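
    The MTF cascade amounts to multiplying OTFs onto the image spectrum. A minimal sketch (the detector MTF shown is a textbook separable sinc, not DIRSIG's implementation; function names are hypothetical):

```python
import numpy as np

def apply_otf_chain(radiance, otfs):
    """Degrade a radiance image by cascading OTFs in the frequency domain.
    Mean radiance is conserved when every OTF equals 1 at zero frequency."""
    spec = np.fft.fft2(radiance)
    for otf in otfs:
        spec = spec * otf                   # cascade = pointwise product
    return np.real(np.fft.ifft2(spec))

def detector_mtf(shape, width):
    """Separable sinc MTF of a square detector footprint `width` pixels wide."""
    fy = np.fft.fftfreq(shape[0])[:, None]  # cycles per pixel
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.sinc(width * fy) * np.sinc(width * fx)
```

    Because each stage's OTF is unity at DC, the cascade blurs the image while leaving its mean radiance unchanged, which is the radiometry-conserving behaviour the abstract describes.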

    Technology needs of advanced Earth observation spacecraft

    Remote sensing missions were synthesized which could contribute significantly to the understanding of global environmental parameters. Instruments capable of sensing important land and sea parameters are combined with a large antenna designed to passively quantify surface-emitted radiation at several wavelengths. A conceptual design for this large deployable antenna was developed, and all subsystems required to make the antenna an autonomous spacecraft were conceptually designed. The entire package, including necessary orbit transfer propulsion, is folded to fit within the Space Transportation System (STS) cargo bay. After separation, the antenna, its integral feed mast, radiometer receivers, power system, and other instruments are automatically deployed and transferred to the operational orbit. The design resulted in an antenna with a major dimension of 120 meters, weighing 7650 kilograms, and operating at an altitude of 700 kilometers.

    Detection of leaf structures in close-range hyperspectral images using morphological fusion

    Close-range hyperspectral images are a promising source of information in plant biology, in particular for in vivo study of physiological changes. In this study, we investigate how data fusion can improve the detection of leaf elements by combining pixel reflectance and morphological information. The detection of image regions associated with the leaf structures is the first step toward quantitative analysis of the physical effects that genetic manipulation, disease infections, and environmental conditions have on plants. We tested our fusion approach on Musa acuminata (banana) leaf images and compared its discriminant capability to similar techniques used in remote sensing. Experimental results demonstrate the efficiency of our fusion approach, with significant improvements over some conventional methods.
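
    One common way to extract the morphological side of such a fusion is a morphological profile per band, stacked alongside the reflectance values. A minimal numpy-only sketch (window sizes are illustrative; the paper's actual fusion operator may differ):

```python
import numpy as np

def _local(op, a, k):
    # local min/max over a k x k window (k odd) via padded shifts
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    win = [ap[i:i + a.shape[0], j:j + a.shape[1]]
           for i in range(k) for j in range(k)]
    return op(np.stack(win), axis=0)

def morphological_profile(band, sizes=(3, 5, 7)):
    """Stack grey-scale openings and closings of one reflectance band at
    increasing window sizes -- morphological features to fuse with spectra."""
    feats = [band]
    for k in sizes:
        ero = _local(np.min, band, k)          # erosion
        dil = _local(np.max, band, k)          # dilation
        feats.append(_local(np.max, ero, k))   # opening = dilate(erode)
        feats.append(_local(np.min, dil, k))   # closing = erode(dilate)
    return np.stack(feats, axis=-1)
```

    The resulting per-pixel feature vector (reflectance plus openings/closings at several scales) is what a classifier can then use to separate leaf structures.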

    mHealth hyperspectral learning for instantaneous spatiospectral imaging of hemodynamics

    Hyperspectral imaging acquires data in both the spatial and frequency domains to offer abundant physical or biological information. However, conventional hyperspectral imaging has intrinsic limitations of bulky instruments, slow data acquisition rates, and a spatiospectral tradeoff. Here we introduce hyperspectral learning for snapshot hyperspectral imaging, in which sampled hyperspectral data in a small subarea are incorporated into a learning algorithm to recover the hypercube. Hyperspectral learning exploits the idea that a photograph is more than merely a picture and contains detailed spectral information. A small sampling of hyperspectral data enables spectrally informed learning to recover a hypercube from an RGB image. Hyperspectral learning is capable of recovering full spectroscopic resolution in the hypercube, comparable to the high spectral resolutions of scientific spectrometers. Hyperspectral learning also enables ultrafast dynamic imaging, leveraging ultraslow video recording in an off-the-shelf smartphone, given that a video comprises a time series of multiple RGB images. To demonstrate its versatility, an experimental model of vascular development is used to extract hemodynamic parameters via statistical and deep-learning approaches. Subsequently, the hemodynamics of peripheral microcirculation is assessed at an ultrafast temporal resolution of up to a millisecond, using a conventional smartphone camera. This spectrally informed learning method is analogous to compressed sensing; however, it further allows for reliable hypercube recovery and key feature extraction with a transparent learning algorithm. This learning-powered snapshot hyperspectral imaging method yields high spectral and temporal resolutions and eliminates the spatiospectral tradeoff, offering simple hardware requirements and potential applications of various machine-learning techniques.
    Comment: This paper will appear in PNAS Nexus
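
    The spectrally informed learning step can be caricatured as a linear (ridge-regression) RGB-to-spectrum mapping trained on the sampled subarea; the paper uses richer statistical and deep-learning models, and all names here are hypothetical:

```python
import numpy as np

def fit_rgb_to_spectrum(rgb_samples, spectra_samples, lam=1e-3):
    """Ridge regression from RGB triplets to full spectra, trained on the
    small subarea where co-registered hyperspectral pixels are available."""
    X = np.hstack([rgb_samples, np.ones((len(rgb_samples), 1))])  # bias column
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ spectra_samples)

def predict_spectra(rgb, W):
    """Recover per-pixel spectra for the rest of the RGB image."""
    X = np.hstack([rgb, np.ones((len(rgb), 1))])
    return X @ W
```

    Applying the fitted mapping to every RGB pixel (or every frame of a video) yields a hypercube per snapshot, which is what makes the ultrafast dynamic imaging possible.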