
    Estimation of leaf area index and its sunlit portion from DSCOVR EPIC data: theoretical basis

    This paper presents the theoretical basis of the algorithm designed for the generation of leaf area index (LAI) and the diurnal course of its sunlit portion from NASA's Earth Polychromatic Imaging Camera (EPIC) onboard NOAA's Deep Space Climate Observatory (DSCOVR). The Look-up-Table (LUT) approach implemented in the MODIS operational LAI/FPAR algorithm is adopted. The LUT, which is the heart of the approach, has been significantly modified. First, its parameterization incorporates the canopy hot spot phenomenon and recent advances in the theory of canopy spectral invariants. This allows more accurate decoupling of the structural and radiometric components of the measured Bidirectional Reflectance Factor (BRF), improves the scaling properties of the LUT and consequently simplifies adjustments of the algorithm for data spatial resolution and spectral band composition. Second, the stochastic radiative transfer equations are used to generate the LUT for all biome types. The equations naturally account for radiative effects of the three-dimensional canopy structure on the BRF and allow accurate discrimination between sunlit and shaded leaf areas. Third, the LUT entries are measurable, i.e., they can be independently derived from both below-canopy measurements of the transmitted and above-canopy measurements of the reflected radiation fields. This feature makes direct validation of the LUT possible and facilitates identification of its deficiencies and development of refinements. Analyses of field data on canopy structure and leaf optics collected at 18 sites in the Hyytiälä forest in the southern boreal zone of Finland and hyperspectral images acquired by the EO-1 Hyperion sensor support the theoretical basis.
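
    A minimal sketch of the LUT comparison step that MODIS-heritage LAI retrievals such as this one rely on, assuming a pre-computed table of simulated BRFs per biome; the array names and the acceptance threshold are illustrative, not taken from the paper:

```python
import numpy as np

def retrieve_lai(brf_obs, brf_unc, lut_brf, lut_lai):
    """LUT-based LAI retrieval sketch in the spirit of the MODIS/EPIC approach.

    brf_obs : (n_bands,) measured BRF for one pixel
    brf_unc : (n_bands,) combined measurement and model uncertainty per band
    lut_brf : (n_entries, n_bands) simulated BRFs for one biome type
    lut_lai : (n_entries,) LAI associated with each LUT entry
    """
    # An entry is "acceptable" if its simulated BRF agrees with the
    # measurement within the prescribed uncertainty across the bands.
    chi2 = np.sum(((lut_brf - brf_obs) / brf_unc) ** 2, axis=1)
    acceptable = chi2 <= lut_brf.shape[1]   # illustrative threshold ~ n_bands
    if not np.any(acceptable):
        return np.nan                       # no solution: fall back to a backup method
    # Retrieved LAI is the mean over all acceptable entries; the spread of
    # those entries would give a per-pixel retrieval uncertainty.
    return lut_lai[acceptable].mean()
```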

    Hyperspectral Imaging of a Turbine Engine Exhaust Plume to Determine Radiance, Temperature, and Concentration Spatial Distributions

    The usefulness of imaging Fourier transform spectroscopy (IFTS) when looking at a rapidly varying turbine engine exhaust scene was explored by characterizing the scene change artifacts (SCAs) present in the plume and the effect they have on the calibrated spectra, using the Telops, Inc.-manufactured Field-portable Imaging Radiometric Spectrometer Technology, Midwave Extended (FIRST-MWE). It was determined that IFTS technology can be applied to the problem of a rapidly varying turbine engine exhaust plume, due to the zero-mean, stochastic nature of the SCAs, through the use of temporal averaging. The FIRST-MWE produced radiometrically calibrated hyperspectral datacubes, with a calibration uncertainty of 35% in the 1800–2500 cm⁻¹ (4–5.5 µm) spectral region for pixels with signal-to-noise ratio (SNR) greater than 1.5; the large uncertainty was due to the presence of SCAs. Spatial distributions of temperature and chemical species concentration pathlengths for CO₂, CO, and H₂O were extracted from the radiometrically calibrated hyperspectral datacubes using a simple radiative transfer model for diesel and kerosene fuels, each with fuel flow rates of 300 cm³/min and 225 cm³/min. The temperatures were found to be, on average, within 212 K of in situ measurements, the difference attributed to the simplicity of the model. Although no in situ concentration measurements were made, the concentrations of CO₂ and CO were found to be within expected limits set by the ambient atmospheric parameters and the calculated products of the turbine engine, on the order of 10¹⁵ and 10¹⁷ molecules/cm³, respectively.
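
    The temporal-averaging idea can be illustrated with a short sketch: because the SCAs are zero-mean, averaging many calibrated datacubes suppresses them, and low-SNR pixels are masked before further analysis. The SNR threshold of 1.5 follows the abstract; all array names and masking details are illustrative:

```python
import numpy as np

def average_datacubes(cubes, noise_sigma, snr_min=1.5):
    """Temporal averaging of calibrated hyperspectral datacubes (sketch).

    cubes       : (n_frames, rows, cols, n_bands) calibrated spectral radiance
    noise_sigma : (n_bands,) noise-equivalent spectral radiance per band
    snr_min     : pixels below this band-averaged SNR are masked out
    """
    # Zero-mean scene-change artifacts average toward zero over many frames.
    mean_cube = cubes.mean(axis=0)
    # Band-averaged SNR per pixel; mask pixels dominated by noise or residual SCAs.
    snr = np.abs(mean_cube) / noise_sigma
    mask = snr.mean(axis=-1) >= snr_min
    return mean_cube, mask
```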

    An end-to-end hyperspectral scene simulator with alternate adjacency effect models and its comparison with cameoSim

    In this research, we developed a new rendering-based end-to-end hyperspectral scene simulator, CHIMES (Cranfield Hyperspectral Image Modelling and Evaluation System), which generates nadir images of passively illuminated 3-D outdoor scenes in the Visible, Near Infrared (NIR) and Short-Wave Infrared (SWIR) regions, ranging from 360 nm to 2520 nm. MODTRAN (MODerate resolution TRANsmission) is used to generate the sky-dome environment map, which includes sun and sky radiance along with the polarisation effect of the sky due to Rayleigh scattering. Moreover, we perform path tracing and implement ray interaction with the medium and volumetric backscattering at rendering time to model the adjacency effect. We propose two variants of adjacency models. The first incorporates a single spectral albedo as the averaged background of the scene; this model is called the Background One-Spectra Adjacency Effect Model (BOAEM), a CameoSim-like model created for performance comparison. The second model calculates background albedo from a pixel's neighbourhood, whose size depends on the air volume between sensor and target and on the differential air density up to the sensor altitude. The average background reflectance of all neighbourhood pixels is computed at rendering time to estimate the total upwelled scattered radiance by volumetric scattering. This model is termed the Texture-Spectra Incorporated Adjacency Effect Model (TIAEM). Moreover, to estimate the underlying atmospheric condition, MODTRAN is run with varying aerosol optical thickness and its total ground reflected radiance (TGRR) is compared with the TGRR of a known in-scene material. The goodness of fit is evaluated in each iteration, and the MODTRAN output with the best fit is selected. We perform a tri-modal validation of the simulators on a real hyperspectral scene by varying the atmospheric condition, the terrain surface model and the proposed variants of adjacency models. We compared the results of our model with Lockheed Martin's well-established scene simulator CameoSim and with Ground Truth (GT) acquired by Hyspex cameras. In clear-sky conditions, both models of CHIMES and CameoSim are in close agreement; however, in searched overcast conditions, CHIMES BOAEM is shown to perform better than CameoSim in terms of the ℓ1-norm error of the whole scene with respect to GT. TIAEM produces better radiance shape and background covariance statistics with respect to GT, which is key to good target detection performance. We also report that the results of CameoSim have a many-fold higher error for the same scene when the flat-surface terrain is replaced with a Digital Elevation Model (DEM) based rugged one.
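
    The atmospheric search described above (iterating MODTRAN over aerosol optical thickness and keeping the best TGRR fit to a known in-scene material) can be sketched as follows; the MODTRAN wrapper, the search grid and the goodness-of-fit measure are illustrative assumptions:

```python
import numpy as np

def search_aerosol_optical_thickness(tgrr_measured, run_modtran_tgrr,
                                     aot_grid=np.linspace(0.05, 1.0, 20)):
    """Pick the aerosol optical thickness whose modelled TGRR best matches
    the TGRR of a known in-scene material (illustrative interface only).

    tgrr_measured    : (n_bands,) TGRR derived from the known material
    run_modtran_tgrr : callable(aot) -> (n_bands,) modelled TGRR; in practice
                       this wraps a MODTRAN run, which is not shown here
    """
    best_aot, best_err = None, np.inf
    for aot in aot_grid:
        tgrr_model = run_modtran_tgrr(aot)
        # Goodness of fit: sum of squared residuals between modelled and
        # measured total ground reflected radiance.
        err = np.sum((tgrr_model - tgrr_measured) ** 2)
        if err < best_err:
            best_aot, best_err = aot, err
    return best_aot
```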

    Land surface temperature and emissivity retrieval from thermal infrared hyperspectral imaging

    A new algorithm, optimized land surface temperature and emissivity retrieval (OLSTER), is presented to compensate for atmospheric effects and retrieve land surface temperature (LST) and emissivity from airborne thermal infrared hyperspectral data. The OLSTER algorithm is designed to retrieve properties of both natural and man-made materials. Multi-directional or multi-temporal observations are not required, and the scenes do not have to be dominated by blackbody features. The OLSTER algorithm consists of a preprocessing step, an iterative search for near-blackbody pixels, and an iterative constrained optimization loop. The preprocessing step provides initial estimates of LST per pixel and of the atmospheric parameters of transmittance and upwelling radiance for the entire image. Pixels that are under- or overcompensated by the estimated atmospheric parameters are classified as near-blackbody and lower-emissivity pixels, respectively. A constrained optimization of the atmospheric parameters using generalized reduced gradients on the near-blackbody pixels ensures physical results. The downwelling radiance is estimated from the upwelling radiance by applying a look-up table of coefficients based on a polynomial regression of radiative transfer model runs for the same sensor altitude. The LST and emissivity per pixel are retrieved simultaneously using the well-established ISSTES algorithm. The OLSTER algorithm retrieves land surface temperatures within about ±1.0 K and emissivities within about ±0.01, based on numerical simulation and on validation work comparing results from sensor data with ground truth measurements. The OLSTER algorithm is currently one of only a few algorithms available that have been documented to retrieve accurate land surface temperatures and absolute land surface spectral emissivities from passive airborne hyperspectral LWIR sensor imagery.
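
    The final simultaneous temperature/emissivity step relies on the ISSTES smoothness criterion. A minimal sketch of that idea, assuming the transmittance and upwelling terms have already been removed as described above; the candidate-temperature grid and the specific roughness measure are illustrative, not the exact ISSTES formulation:

```python
import numpy as np

C1 = 1.191042e8   # 2*h*c^2 in W um^4 m^-2 sr^-1
C2 = 1.4387752e4  # h*c/k in um K

def planck(wl_um, T):
    """Blackbody spectral radiance in W m^-2 sr^-1 um^-1, wavelength in um."""
    return C1 / (wl_um**5 * (np.exp(C2 / (wl_um * T)) - 1.0))

def smoothness_tes(wl_um, L_surf, L_down, T_grid=np.arange(270.0, 340.0, 0.1)):
    """ISSTES-like temperature/emissivity separation by spectral smoothness.

    wl_um  : (n_bands,) band-centre wavelengths in micrometres
    L_surf : (n_bands,) surface-leaving radiance (atmosphere already removed)
    L_down : (n_bands,) downwelling sky radiance at the surface
    """
    best_T, best_rough, best_eps = None, np.inf, None
    for T in T_grid:
        # From L_surf = eps*B(T) + (1 - eps)*L_down:
        eps = (L_surf - L_down) / (planck(wl_um, T) - L_down)
        # Roughness: deviation of each band from the mean of its neighbours
        # (an illustrative stand-in for the ISSTES smoothness metric).
        rough = np.sum((eps[1:-1] - 0.5 * (eps[:-2] + eps[2:])) ** 2)
        if rough < best_rough:
            best_T, best_rough, best_eps = T, rough, eps
    return best_T, best_eps
```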

    Spectral misregistration correction and simulation for hyperspectral imagery

    Radiometrically calibrated radiance hyperspectral images can be converted into reflectance images using atmospheric correction in order to extract useful ground information. The converted reflectance images contain artifacts caused by spectral misregistration of the sensor and by atmospheric model error. These artifacts appear as coherent saw-tooth effects in the spectra of the reflectance imagery; they degrade the performance of classification and target detection algorithms and make the spectra difficult to compare with ground target spectra. Three spectral misregistration compensation methods were developed to compensate for these consistent noise effects. If a ground truth spectrum exists for a test image, it can be divided by the corresponding spectrum derived from the reflectance image. This yields a coefficient describing the difference between the ground truth spectrum and the noisy spectrum in the reflectance image; multiplying this coefficient spectrum by the reflectance image spectra corrects the saw-tooth effects. The other methods use cubic spline smoothing, a non-local smoothing and fitting technique. Cubic spline smoothing can smooth out the saw-tooth noise in the spectra, and the correction coefficient can then be calculated as described above. It is important to find relatively pure, unmixed pixels for the correction coefficient. Two methods for identifying relatively pure pixels were used in this research. The first is the Uniform Region method, which identifies pixels with small standard deviation values among neighboring pixels. The second is the Least Ratio method, which calculates ratios (the standard deviation between the smoothed and non-smoothed spectra divided by the average reflectance of the spectrum) and then calculates the correction coefficient using pixels with small ratios. Spectral misregistration was also simulated using a MODTRAN look-up table and a DIRSIG (Digital Imaging and Remote Sensing Image Generation) synthetic image to understand and characterize the effect of spectral misregistration. The spectral misregistration compensation algorithms were tested and verified by measuring the performance of classification and target detection algorithms on real and synthetic test images.
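
    The spline-based variant of the compensation can be sketched as follows: smooth the saw-tooth spectrum of a relatively pure pixel, take the ratio of smoothed to raw reflectance as a per-band correction coefficient, and apply it to every pixel. SciPy's smoothing spline stands in here for the cubic spline smoother; the function names and smoothing factor are illustrative:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def sawtooth_correction(wavelengths, pure_spectrum, smoothing=1e-3):
    """Per-band correction coefficient derived from a relatively pure pixel.

    The cubic smoothing spline removes the coherent saw-tooth artifact;
    the ratio of smoothed to raw reflectance is the correction factor.
    """
    spline = UnivariateSpline(wavelengths, pure_spectrum, k=3, s=smoothing)
    smoothed = spline(wavelengths)
    return smoothed / pure_spectrum        # (n_bands,) correction coefficient

def apply_correction(reflectance_cube, coeff):
    """Multiply every pixel spectrum by the correction coefficient."""
    return reflectance_cube * coeff        # broadcasts over (rows, cols, n_bands)
```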

    High spatial resolution imaging of methane and other trace gases with the airborne Hyperspectral Thermal Emission Spectrometer (HyTES)

    Large uncertainties are currently associated with the attribution and quantification of fugitive emissions of criteria pollutants and greenhouse gases, such as methane, across large regions and key economic sectors. In this study, data from the airborne Hyperspectral Thermal Emission Spectrometer (HyTES) have been used to develop robust and reliable techniques for the detection and wide-area mapping of emission plumes of methane and other atmospheric trace gas species over challenging and diverse environmental conditions, with high spatial resolution that permits direct attribution to sources. HyTES is a pushbroom imaging spectrometer with high spectral resolution (256 bands from 7.5 to 12 µm), wide swath (1–2 km), and high spatial resolution (∼2 m at 1 km altitude) that incorporates new thermal infrared (TIR) remote sensing technologies. In this study we introduce a hybrid clutter matched filter (CMF) and plume dilation algorithm applied to HyTES observations to efficiently detect and characterize the spatial structures of individual plumes of CH₄, H₂S, NH₃, NO₂, and SO₂ emitters. The sensitivity and field of regard of HyTES allow rapid and frequent airborne surveys of large areas, including facilities not readily accessible from the surface. The HyTES CMF algorithm produces plume intensity images of methane and other gases from strong emission sources. The combination of high spatial resolution and multi-species imaging capability provides source attribution in complex environments. The CMF-based detection of strong emission sources over large areas is a fast and powerful tool for focusing more computationally intensive retrieval algorithms that quantify emissions with error estimates, and it is useful for expediting mitigation efforts and addressing critical science questions.
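
    The clutter matched filter at the core of the detection step has a standard closed form; a minimal per-pixel sketch, assuming a known spectral signature for the target gas and estimating the background mean and covariance from the scene (the plume-dilation step and HyTES-specific details are not shown, and names are illustrative):

```python
import numpy as np

def clutter_matched_filter(cube, target_sig):
    """Classic clutter matched filter (CMF) applied per pixel.

    cube       : (rows, cols, n_bands) calibrated radiance or brightness temperature
    target_sig : (n_bands,) spectral signature of the target gas absorption/emission
    """
    rows, cols, n_bands = cube.shape
    X = cube.reshape(-1, n_bands)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)              # pseudo-inverse for numerical stability
    w = cov_inv @ target_sig
    norm = np.sqrt(target_sig @ w)
    scores = (X - mu) @ w / norm               # whitened correlation with the signature
    return scores.reshape(rows, cols)
```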

    Design and Model Verification of an Infrared Chromotomographic Imaging System

    A prism chromotomographic hyperspectral imaging sensor is being developed to aid in the study of bomb phenomenology. Reliable chromotomographic reconstruction depends on accurate knowledge of the sensor-specific point spread function over all wavelengths of interest. The purpose of this research is to generate the required point spread functions using wave optics techniques and a phase screen model of system aberrations. Phase screens are generated using the Richardson-Lucy algorithm for extracting point spread functions and the Gerchberg-Saxton algorithm for phase retrieval. These phase screens are verified by comparing the modeled results for a blackbody source with measurements made using a chromotomographic sensor, which was constructed as part of this research. Comparison between the measured and simulated results is based upon the noise statistics of the measured image. Four comparisons between measured and modeled data, each made at a different prism rotation angle, provide the basis for the conclusions of this research. Based on these results, the phase screen technique appears to be valid so long as constraints are placed on the field of view and the spectral region over which the screens are applied.
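
    The Gerchberg-Saxton phase retrieval step alternates between pupil-plane and focal-plane amplitude constraints; a minimal single-wavelength sketch, assuming a known pupil amplitude and a measured focal-plane PSF (array names and iteration count are illustrative):

```python
import numpy as np

def gerchberg_saxton(pupil_amp, psf_meas, n_iter=200, seed=0):
    """Gerchberg-Saxton phase retrieval for a single wavelength.

    pupil_amp : (N, N) known pupil-plane amplitude (aperture transmission)
    psf_meas  : (N, N) measured focal-plane PSF (intensity)
    Returns the estimated pupil-plane phase screen in radians.
    """
    rng = np.random.default_rng(seed)
    focal_amp = np.sqrt(psf_meas)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
    for _ in range(n_iter):
        pupil_field = pupil_amp * np.exp(1j * phase)
        focal_field = np.fft.fftshift(np.fft.fft2(pupil_field))
        # Enforce the measured focal-plane amplitude, keep the propagated phase.
        focal_field = focal_amp * np.exp(1j * np.angle(focal_field))
        back = np.fft.ifft2(np.fft.ifftshift(focal_field))
        # Enforce the known pupil amplitude by keeping only the phase estimate.
        phase = np.angle(back)
    return phase
```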

    Optimal Exploitation of the Sentinel-2 Spectral Capabilities for Crop Leaf Area Index Mapping

    The continuously increasing demand for accurate, quantitative, high-quality information on land surface properties will be met by a new generation of environmental Earth observation (EO) missions. One current example with high potential to contribute to those demands is the multi-spectral ESA Sentinel-2 (S2) system. The present study focuses on the evaluation of the spectral information content needed for crop leaf area index (LAI) mapping in view of the future sensors. Data from a field campaign were used to determine the optimal spectral sampling from the available S2 bands by applying inversion of a radiative transfer model (PROSAIL) with look-up table (LUT) and artificial neural network (ANN) approaches. The overall LAI estimation performance of the proposed LUT approach (LUTN₅₀) was comparable in terms of retrieval performance with a tested and approved ANN method. Employing seven- and eight-band combinations, the LUTN₅₀ approach obtained an LAI RMSE of 0.53 and a normalized LAI RMSE of 0.12, comparable to the results of the ANN. However, the LUTN₅₀ method showed higher robustness and insensitivity to different band settings. The most frequently selected wavebands were located in the near-infrared and red-edge spectral regions. In conclusion, our results emphasize the potential benefits of the Sentinel-2 mission for agricultural applications.
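
    The LUTN₅₀ inversion can be sketched as follows, assuming the retrieved LAI is the mean over the 50 best-matching PROSAIL LUT entries under an RMSE cost; the cost function, the value of n_best and the variable names are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def lut_n50_lai(rho_obs, lut_rho, lut_lai, n_best=50):
    """LUT inversion sketch in the spirit of the LUTN50 approach.

    rho_obs : (n_bands,) observed canopy reflectance in the selected S2 bands
    lut_rho : (n_entries, n_bands) PROSAIL-simulated reflectance
    lut_lai : (n_entries,) LAI used to generate each LUT entry
    """
    # Cost: RMSE between observed and simulated reflectance per LUT entry.
    rmse = np.sqrt(np.mean((lut_rho - rho_obs) ** 2, axis=1))
    best = np.argsort(rmse)[:n_best]
    # Retrieved LAI: mean over the n_best best-matching LUT entries.
    return lut_lai[best].mean()
```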

    Illumination Invariant Outdoor Perception

    This thesis proposes the use of a multi-modal sensor approach to achieve illumination invariance in images taken in outdoor environments. The approach is automatic in that it does not require user input for initialisation and does not rely on atmospheric radiative transfer models. While it is common to use pixel colour and intensity as features in high-level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material depends on the incident illumination, which can vary due to spatial and temporal factors. This variability causes identical materials to appear different depending on their location. Illumination invariant representations of the scene can potentially improve the performance of high-level vision algorithms, as they allow discrimination between pixels to occur based on the underlying material characteristics. The proposed approach to obtaining illumination invariance utilises fused image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors. This has the effect of relighting the entire scene using a single illuminant that is common in terms of colour and intensity for all pixels. The approach is extended to radiometric normalisation and the multi-image scenario, so that the resultant dataset is both spatially and temporally illumination invariant. The proposed illumination invariance approach is evaluated on several datasets and shows that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, meaning that expert knowledge is not required for its operation. This has potential implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
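
    The per-pixel relighting idea can be sketched as follows: given an estimate of the incident illumination at each pixel (derived from the fused geometry and an illumination approximation), each pixel is rescaled so the whole scene appears lit by a single reference illuminant. All names and the shape conventions are illustrative, not the thesis implementation:

```python
import numpy as np

def relight_scene(image, illum_est, reference_illum):
    """Per-pixel illumination normalisation (sketch).

    image           : (rows, cols, n_bands) observed radiance
    illum_est       : (rows, cols, n_bands) estimated incident illumination per pixel
    reference_illum : (n_bands,) common illuminant the scene is re-lit with
    """
    eps = 1e-12                                   # avoid division by zero in deep shadow
    scale = reference_illum / (illum_est + eps)   # per-pixel, per-band scaling factor
    return image * scale
```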