    Evaluation of Sentinel-2 Red-Edge Bands for Empirical Estimation of Green LAI and Chlorophyll Content

    ESA’s upcoming satellite Sentinel-2 will provide Earth images of high spatial, spectral and temporal resolution and aims to ensure continuity for Landsat and SPOT observations. In comparison to the latter sensors, Sentinel-2 incorporates three new spectral bands in the red-edge region, centered at 705, 740 and 783 nm. This study addresses the importance of these new bands for the retrieval and monitoring of two important biophysical parameters: green leaf area index (LAI) and chlorophyll content (Ch). With data from several ESA field campaigns over agricultural sites (SPARC, AgriSAR, CEFLES2), we evaluated the efficacy of two empirical methods that specifically make use of the new Sentinel-2 bands. First, it was shown that LAI can be derived from a generic normalized difference index (NDI) using hyperspectral data, with 674 nm and 712 nm as the best performing band combination. These bands are positioned closely to the Sentinel-2 B4 (665 nm) and the new red-edge B5 (705 nm) bands. The method was applied to simulated Sentinel-2 data. The resulting green LAI map was validated against field data of various crop types, spanning LAI values between 0 and 6, and yielded an RMSE of 0.6. Second, the recently developed “Normalized Area Over reflectance Curve” (NAOC), an index that derives Ch from hyperspectral data, was assessed for its compatibility with simulated Sentinel-2 data. This index integrates the reflectance curve between 643 and 795 nm, thereby including the new Sentinel-2 bands in the red-edge region. We found that these new bands significantly improve the accuracy of Ch estimation. Both methods emphasize the importance of the red-edge bands for the operational estimation of biophysical parameters from Sentinel-2.
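    As a minimal sketch of the first method: the snippet below computes the red-edge NDI from Sentinel-2 B4 (665 nm) and B5 (705 nm) reflectances and maps it to green LAI through a linear calibration whose coefficients are hypothetical placeholders; the actual relationship is fitted empirically against the field campaign data.

```python
import numpy as np

def red_edge_ndi(b4, b5):
    """Normalized difference index from Sentinel-2 B4 (665 nm) and
    red-edge B5 (705 nm) reflectance: NDI = (R705 - R665) / (R705 + R665)."""
    b4 = np.asarray(b4, dtype=float)
    b5 = np.asarray(b5, dtype=float)
    return (b5 - b4) / (b5 + b4)

# Hypothetical linear calibration LAI = a * NDI + b; the coefficients
# below are placeholders, not the values fitted in the study.
A_CAL, B_CAL = 8.0, -0.5

def lai_from_ndi(ndi, a=A_CAL, b=B_CAL):
    return np.clip(a * ndi + b, 0.0, None)  # green LAI cannot be negative

if __name__ == "__main__":
    b4 = np.array([0.05, 0.08, 0.12])  # example red reflectances
    b5 = np.array([0.15, 0.20, 0.22])  # example red-edge reflectances
    print(lai_from_ndi(red_edge_ndi(b4, b5)))
```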

    Merging the Minnaert-k parameter with spectral unmixing to map forest heterogeneity with CHRIS/PROBA data

    The Compact High Resolution Imaging Spectrometer (CHRIS) mounted onboard the Project for Onboard Autonomy (PROBA) spacecraft is capable of sampling reflected radiation at five viewing angles over the visible and near-infrared regions of the solar spectrum with high spatial resolution. We combined the spectral domain with the angular domain of CHRIS data in order to map the surface heterogeneity of an Alpine coniferous forest during winter. In the spectral domain, linear spectral unmixing of the nadir image resulted in a canopy cover map. In the angular domain, pixelwise inversion of the Rahman-Pinty-Verstraete (RPV) model at a single wavelength in the red edge (722 nm) yielded a map of the Minnaert-k parameter, which provided information on surface heterogeneity at a subpixel scale. However, the interpretation of the Minnaert-k parameter is not always straightforward, because fully vegetated targets typically produce the same type of reflectance anisotropy as non-vegetated targets. Merging both maps resulted in a forest cover heterogeneity map, which contains more detailed information on canopy heterogeneity at the CHRIS subpixel scale than can be obtained from a single-source optical data set.
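    For illustration, the per-pixel retrieval of the Minnaert-k parameter can be sketched as a forward RPV model plus a non-linear least-squares fit to the five CHRIS angular samples. The sketch assumes the standard RPV formulation of Rahman et al. (1993); the starting values, bounds and demo geometry are illustrative choices, not those of the study.

```python
import numpy as np
from scipy.optimize import least_squares

def rpv(theta_s, theta_v, phi, rho0, k, theta_hg):
    """Forward Rahman-Pinty-Verstraete (RPV) BRF model.
    Angles in radians; phi is the relative azimuth. k is the Minnaert
    parameter shaping the anisotropy: k < 1 gives a bowl shape
    (brightening at oblique views), k > 1 a bell shape."""
    mu_s, mu_v = np.cos(theta_s), np.cos(theta_v)
    minnaert = (mu_s * mu_v * (mu_s + mu_v)) ** (k - 1.0)
    # Henyey-Greenstein phase function of the phase angle g
    cos_g = mu_s * mu_v + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi)
    f_hg = (1 - theta_hg**2) / (1 + 2 * theta_hg * cos_g + theta_hg**2) ** 1.5
    # Hot-spot factor
    G = np.sqrt(np.tan(theta_s) ** 2 + np.tan(theta_v) ** 2
                - 2 * np.tan(theta_s) * np.tan(theta_v) * np.cos(phi))
    return rho0 * minnaert * f_hg * (1 + (1 - rho0) / (1 + G))

def invert_minnaert_k(theta_s, theta_v, phi, brf_obs):
    """Fit (rho0, k, theta_hg) to one pixel's angular samples."""
    fit = least_squares(lambda p: rpv(theta_s, theta_v, phi, *p) - brf_obs,
                        x0=[0.1, 0.7, -0.1],
                        bounds=([1e-3, 0.0, -1.0], [1.0, 2.0, 1.0]))
    return fit.x  # fit.x[1] is the Minnaert-k parameter

if __name__ == "__main__":
    # Five CHRIS-like view angles in the principal plane, sun at 35 deg
    theta_v = np.radians([0.0, 36.0, 55.0, 36.0, 55.0])
    phi = np.radians([0.0, 0.0, 0.0, 180.0, 180.0])
    theta_s = np.radians(35.0)
    brf = rpv(theta_s, theta_v, phi, 0.08, 0.6, -0.05)  # synthetic pixel
    print(invert_minnaert_k(theta_s, theta_v, phi, brf))  # recovers k ~ 0.6
```

    A given k value does not by itself separate a structured forest canopy from other targets with similar anisotropy, which is why the study merges the k map with the unmixing-based cover map.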

    Using the Minnaert-k parameter derived from CHRIS/PROBA data for forest heterogeneity mapping

    CHRIS/PROBA is capable of sampling reflected radiation at five viewing angles over the visible and near-infrared regions of the solar spectrum with a relatively high spatial resolution (~17 m). We exploited both the spectral and angular domains of CHRIS data in order to map the surface heterogeneity of an Alpine coniferous forest during winter. In the spectral domain, linear spectral unmixing of the nadir image resulted in a canopy cover map. In the angular domain, pixelwise inversion of the Rahman–Pinty–Verstraete (RPV) model at a single wavelength in the red edge (722 nm) yielded a map of the Minnaert-k parameter, which provided information on surface heterogeneity at a subpixel scale. Merging both maps resulted in a forest cover heterogeneity map, which contains more detailed information on canopy heterogeneity at the CHRIS subpixel scale than can be obtained from a single-source data set.
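    The spectral-domain step can be sketched similarly. The snippet below shows fully constrained linear unmixing for one pixel, assuming it is a linear mixture of a few endmember spectra (for a winter Alpine scene, e.g. canopy, snow and shadow); the random endmembers are stand-ins, and enforcing sum-to-one through a heavily weighted extra row is one common implementation choice, not necessarily the study's.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers, delta=1e3):
    """Fully constrained linear spectral unmixing: solve
    pixel = E @ a with a >= 0 and sum(a) = 1. The sum-to-one
    constraint is enforced softly by appending a heavily weighted
    row of ones to the endmember matrix E (n_bands, n_endmembers)."""
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    y = np.append(pixel_spectrum, delta)
    abundances, _ = nnls(E, y)
    return abundances  # the canopy abundance gives the cover fraction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.uniform(0.0, 0.6, size=(62, 3))       # 62 CHRIS bands, 3 endmembers
    truth = np.array([0.6, 0.3, 0.1])             # true fractions
    pixel = E @ truth + rng.normal(0, 0.005, 62)  # noisy mixed pixel
    print(unmix(pixel, E))                        # ~ [0.6, 0.3, 0.1]
```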

    Fusing optical and SAR time series for LAI gap filling with multioutput Gaussian processes

    The availability of satellite optical information is often hampered by the natural presence of clouds, which can be problematic for many applications. Persistent clouds over agricultural fields can mask key stages of crop growth, leading to unreliable yield predictions. Synthetic Aperture Radar (SAR) provides all-weather imagery which can potentially overcome this limitation, but given its high and distinct sensitivity to different surface properties, the fusion of SAR and optical data still remains an open challenge. In this work, we propose the use of Multi-Output Gaussian Process (MOGP) regression, a machine learning technique that automatically learns the statistical relationships among multisensor time series, to detect vegetated areas over which the synergy between SAR and optical imagery is profitable. For this purpose, we use Sentinel-1 Radar Vegetation Index (RVI) and Sentinel-2 Leaf Area Index (LAI) time series over a study area in the northwest of the Iberian Peninsula. Through a physical interpretation of the trained MOGP models, we show their ability to provide estimates of LAI even over cloudy periods using the information shared with RVI, which guarantees that the solution always remains tied to real measurements. Results demonstrate the advantage of MOGP especially for long data gaps, where optical-based methods notoriously fail. A leave-one-image-out assessment over the whole vegetation cover shows that MOGP predictions improve on standard GP estimates over short gaps (R² of 74% vs. 68%, RMSE of 0.40 vs. 0.44 m² m⁻²) and especially over long gaps (R² of 33% vs. 12%, RMSE of 0.50 vs. 1.09 m² m⁻²).
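    To make the fusion idea concrete, here is a minimal from-scratch sketch of a multi-output GP with an intrinsic coregionalization (ICM) kernel, gap-filling an LAI-like series from a dense RVI-like series. The synthetic series, the coregionalization matrix B and the hyperparameters are illustrative only; the study learns these from real Sentinel-1/Sentinel-2 time series.

```python
import numpy as np

def rbf(t1, t2, ell):
    """Squared-exponential kernel over acquisition dates (in days)."""
    return np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / ell) ** 2)

def icm(t1, i1, t2, i2, B, ell):
    """Intrinsic coregionalization: K((t,i),(t',j)) = B[i,j] * k(t,t')."""
    return B[np.ix_(i1, i2)] * rbf(t1, t2, ell)

def mogp_mean(t_tr, i_tr, y_tr, t_te, i_te, B, ell=30.0, noise=1e-3):
    """Posterior mean of a zero-mean multi-output GP at (t_te, i_te)."""
    K = icm(t_tr, i_tr, t_tr, i_tr, B, ell) + noise * np.eye(len(t_tr))
    Ks = icm(t_te, i_te, t_tr, i_tr, B, ell)
    return Ks @ np.linalg.solve(K, y_tr)

if __name__ == "__main__":
    # Output 0: dense RVI-like series (SAR sees through clouds).
    # Output 1: LAI-like series with a "cloudy" gap between days 60 and 120.
    f = lambda t: 3.0 * np.exp(-0.5 * ((t - 90.0) / 40.0) ** 2)  # crop cycle
    t0 = np.arange(0.0, 181.0, 6.0)
    t1 = np.r_[np.arange(0.0, 60.0, 10.0), np.arange(120.0, 181.0, 10.0)]
    y0, y1 = 0.2 * f(t0) + 0.1, f(t1)            # synthetic observations
    t_tr = np.r_[t0, t1]
    i_tr = np.r_[np.zeros(len(t0), int), np.ones(len(t1), int)]
    means = np.array([y0.mean(), y1.mean()])
    y_tr = np.r_[y0, y1] - means[i_tr]           # centre each output
    # Rank-1 coregionalization (outputs are scaled copies of one latent
    # function) -- a hypothetical choice; the study learns this from data.
    w = np.array([0.2, 1.0])
    B = np.outer(w, w)
    t_gap = np.arange(60.0, 121.0, 10.0)
    lai_gap = mogp_mean(t_tr, i_tr, y_tr, t_gap,
                        np.ones(len(t_gap), int), B) + means[1]
    print(np.round(lai_gap, 2))  # should follow the true curve f(t_gap)
```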

    An Emulator Toolbox to Approximate Radiative Transfer Models with Statistical Learning

    Physically-based radiative transfer models (RTMs) help in understanding the processes occurring on the Earth’s surface and their interactions with vegetation and atmosphere. When it comes to studying vegetation properties, RTMs allow us to study light interception by plant canopies and are used in the retrieval of biophysical variables through model inversion. However, advanced RTMs can take a long computational time, which makes them unfeasible in many real applications. To overcome this problem, it has been proposed to substitute RTMs with so-called emulators: statistical models that approximate the functioning of RTMs. Emulators are advantageous in practice because of their computational efficiency and their accuracy and flexibility for extrapolation. We hereby present an “Emulator toolbox” that enables analysing multi-output machine learning regression algorithms (MO-MLRAs) in terms of their ability to approximate an RTM. The toolbox is included in the free-access ARTMO MATLAB suite for parameter retrieval and model inversion and currently contains both linear and non-linear MO-MLRAs, namely partial least squares regression (PLSR), kernel ridge regression (KRR) and neural networks (NN). These MO-MLRAs were evaluated on their precision and speed in approximating the soil-vegetation-atmosphere transfer model SCOPE (Soil Canopy Observation, Photochemistry and Energy balance). SCOPE generates, amongst other outputs, sun-induced chlorophyll fluorescence spectra. KRR and NN proved capable of reconstructing fluorescence spectra with high precision: relative errors fell below 0.5% when trained with 500 or more samples, using cross-validation and principal component analysis to alleviate the underdetermination problem. Moreover, NN reconstructed fluorescence spectra about 50 times faster, and KRR about 800 times faster, than SCOPE. The Emulator toolbox is foreseen to open new opportunities for the use of advanced RTMs, in which consistent physical assumptions and data-driven machine learning algorithms live together.
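    As a concrete illustration of the emulation workflow (not of the toolbox code itself), the sketch below trains a KRR emulator on PCA-compressed output spectra, with a toy analytic function standing in for SCOPE, which is not available here; the component count and kernel settings are arbitrary choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for an RTM: maps 4 input parameters to a 211-band
# "fluorescence-like" spectrum (640-850 nm at 1 nm steps).
WL = np.arange(640, 851)
def toy_rtm(p):
    peak1 = p[0] * np.exp(-0.5 * ((WL - 685) / (8 + 5 * p[1])) ** 2)
    peak2 = p[2] * np.exp(-0.5 * ((WL - 740) / (15 + 10 * p[3])) ** 2)
    return peak1 + peak2

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))         # 500 training parameter sets
Y = np.array([toy_rtm(p) for p in X])        # expensive RTM runs (here: toy)

# Compress the spectra with PCA (alleviating the underdetermination
# problem), then learn inputs -> PCA scores with kernel ridge regression.
pca = PCA(n_components=10).fit(Y)
krr = make_pipeline(StandardScaler(),
                    KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.5))
krr.fit(X, pca.transform(Y))

# Emulate: one cheap prediction instead of a full RTM run.
x_new = rng.uniform(0, 1, size=(1, 4))
spec_emulated = pca.inverse_transform(krr.predict(x_new))
print(f"max abs error: {np.abs(spec_emulated - toy_rtm(x_new[0])).max():.4f}")
```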