
    Monitoring vegetation dynamics using MERIS fused images

    The MEdium Resolution Imaging Spectrometer (MERIS) can be used to monitor vegetation dynamics at regional to global scales. However, the spatial resolutions provided by this sensor (300 or 1200 m) might not be appropriate to monitor fragmented landscapes. This is why the synergistic use of MERIS full resolution (300 m) and Landsat TM (25 m) data is studied in this paper. An unmixing-based data fusion approach was used to produce images that have the spectral and temporal resolutions provided by MERIS and the spatial resolution of Landsat TM. The central part of The Netherlands was selected to illustrate this approach. Seven MERIS full resolution and one Landsat TM image were available over this area. The radiometric characteristics of the fused images were evaluated at 25 and at 300 m. After this quantitative quality assessment, the best fused images were used to compute NDVI, MTCI and MGVI profiles for the main land cover types present in the study area
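The unmixing-based fusion idea described above can be sketched in a few lines: the class fractions inside each coarse pixel (derived from the fine-resolution data) are used to solve for per-class reflectances by least squares, and those reflectances are then assigned back to the fine pixels. A minimal synthetic illustration of that principle, not the paper's actual implementation (the fraction matrix and reflectance values below are invented):

```python
import numpy as np

def unmix_coarse_pixels(fractions, coarse_vals):
    """Solve the linear unmixing system F @ s = c for per-class signatures s.

    fractions   : (n_coarse, n_classes) class fractions inside each coarse pixel
    coarse_vals : (n_coarse,) observed coarse-pixel reflectance (one band)
    returns     : (n_classes,) estimated per-class reflectances
    """
    sig, *_ = np.linalg.lstsq(fractions, coarse_vals, rcond=None)
    return sig

# Toy example: 4 coarse pixels, 2 land-cover classes.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5],
              [0.25, 0.75]])
true_sig = np.array([0.1, 0.4])   # "true" class reflectances
c = F @ true_sig                  # simulated coarse observations
est = unmix_coarse_pixels(F, c)

# Assign each unmixed class reflectance to fine pixels via their class labels,
# giving an image with coarse-sensor radiometry at the fine spatial resolution.
fine_labels = np.array([0, 1, 1, 0])
fused = est[fine_labels]
```

In practice each coarse pixel is unmixed within a moving window so that class signatures can vary across the scene, but the least-squares core is the same.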

Monitoring vegetation dynamics using MERIS fused images

    The MEdium Resolution Imaging Spectrometer (MERIS) can be used to monitor vegetation dynamics at regional to global scales. However, the spatial resolutions provided by this sensor (300 or 1200 m) might not be appropriate to monitor fragmented landscapes. This is why the synergistic use of MERIS full resolution (300 m) and a high spatial resolution land use/land cover database (25 m) is studied in this paper. An unmixing-based data fusion approach was used to produce images that have the spectral and temporal resolutions provided by MERIS and a Landsat-like spatial resolution. The central part of The Netherlands was selected to illustrate this approach. Seven MERIS full resolution and one Landsat TM image were available over this area too. The radiometric characteristics of the fused images were evaluated at 25 and at 300 m. After this quantitative quality assessment, the best fused images were used to compute MTCI and MGVI profiles for the main land cover types present in the study area

Combining hyperspectral UAV and multispectral FORMOSAT-2 imagery for precision agriculture applications

    Precision agriculture requires detailed information regarding the crop status variability within a field. Remote sensing provides an efficient way to obtain such information through observing biophysical parameters, such as canopy nitrogen content, leaf coverage, and plant biomass. However, individual remote sensing sensors often fail to provide information which meets the spatial and temporal resolution required by precision agriculture. The purpose of this study is to investigate methods which can be used to combine imagery from various sensors in order to create a new dataset which comes closer to meeting these requirements. More specifically, this study combined multispectral satellite imagery (Formosat-2) and hyperspectral Unmanned Aerial Vehicle (UAV) imagery of a potato field in the Netherlands. The imagery from both platforms was combined in two ways. Firstly, data fusion methods brought the spatial resolution of the Formosat-2 imagery (8 m) down to the spatial resolution of the UAV imagery (1 m). Two data fusion methods were applied: an unmixing-based algorithm and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The unmixing-based method produced vegetation indices which were highly correlated to the measured LAI (rs = 0.866) and canopy chlorophyll values (rs = 0.884), whereas STARFM obtained lower correlations. Secondly, a Spectral-Temporal Reflectance Surface (STRS) was constructed to interpolate daily 101-band reflectance spectra using both sources of imagery. A novel STRS method was presented, which utilizes Bayesian theory to obtain realistic spectra and account for sensor uncertainties. The resulting surface obtained a high correlation to LAI (rs = 0.858) and canopy chlorophyll (rs = 0.788) measurements at field level. The multi-sensor datasets were able to characterize significant differences in crop status due to differing nitrogen fertilization regimes from June to August.
Meanwhile, the yield prediction models based purely on the vegetation indices extracted from the unmixing-based fusion dataset explained 52.7% of the yield variation, whereas the STRS dataset was able to explain 72.9% of the yield variability. The results of the current study indicate that the limitations of each individual sensor can be largely surpassed by combining multiple sources of imagery, which is beneficial for agricultural management. Further research could focus on the integration of data fusion and STRS techniques, and the inclusion of imagery from additional sensors.

Samenvatting (translated from Dutch): In a world where future food security is under threat, precision agriculture offers a way to maximize yield while limiting the economic and environmental costs of food production. Doing so requires detailed information on crop status. Remote sensing is one way to obtain biophysical information, including nitrogen content and biomass. However, the information from a single sensor is often insufficient to meet the demanding spatial and temporal resolution requirements. This study therefore combined information from different sensors, namely multispectral satellite imagery (Formosat-2) and hyperspectral Unmanned Aerial Vehicle (UAV) imagery of a potato field, in an attempt to meet the high information demands of precision agriculture. First, data fusion was used to combine the eight Formosat-2 images at 8 m resolution with the four UAV images at 1 m resolution, yielding a dataset of eight images at 1 m resolution. Two methods were applied: the STARFM method and an unmixing-based method. The unmixing-based method produced images with a high correlation to the Leaf Area Index (LAI) (rs = 0.866) and chlorophyll content (rs = 0.884) measured at field level. The STARFM method performed worse, with correlations of rs = 0.477 and rs = 0.431, respectively. Second, Spectral-Temporal Reflectance Surfaces (STRSs) were developed that represent a daily spectrum with 101 spectral bands. To this end, a new STRS method based on Bayesian theory was developed, which produces realistic spectra with an associated uncertainty. These STRSs showed high correlations with the LAI (rs = 0.858) and chlorophyll content (rs = 0.788) measured at field level. The usefulness of both kinds of dataset was analysed by computing a number of vegetation indices. The results show that the multi-sensor datasets are capable of detecting significant differences in crop growth during the growing season itself. In addition, regression models were applied to assess the usefulness of the datasets for yield prediction. The unmixing-based data fusion explained 52.7% of the yield variation, while the STRS explained 72.9% of the variability. The results of the present study show that the limitations of an individual sensor can largely be overcome by using multiple sensors. Combining different sensors, whether Formosat-2 and UAV imagery or other spatial information sources, can meet the high information demands of precision agriculture.

In the context of threatened global food security, precision agriculture is one strategy to maximize yield to meet the increased demand for food, while minimizing both the economic and environmental costs of food production. This is done by applying variable management strategies, meaning the fertilizer or irrigation rates within a field are adjusted according to the crop needs in that specific part of the field. This implies that accurate crop status information must be available regularly for many different points in the field.
Remote sensing can provide this information, but it is difficult to meet the information requirements when using only one sensor. For example, satellites collect imagery regularly and over large areas, but may be blocked by clouds. Unmanned Aerial Vehicles (UAVs), commonly known as drones, are more flexible but have higher operational costs. The purpose of this study was to use fusion methods to combine satellite (Formosat-2) with UAV imagery of a potato field in the Netherlands. Firstly, data fusion was applied. The eight Formosat-2 images with 8 m x 8 m pixels were combined with four UAV images with 1 m x 1 m pixels to obtain a new dataset of eight images with 1 m x 1 m pixels. Unmixing-based data fusion produced images which had a high correlation to field measurements obtained from the potato field during the growing season. The results of a second data fusion method, STARFM, were less reliable in this study. The UAV images were hyperspectral, meaning they contained very detailed information spanning a large part of the electromagnetic spectrum. Much of this information was lost in the data fusion methods because the Formosat-2 images were multispectral, representing a more limited portion of the spectrum. Therefore, a second analysis investigated the use of Spectral-Temporal Reflectance Surfaces (STRS), which allow information from different portions of the electromagnetic spectrum to be combined. These STRS provided daily hyperspectral observations, which were also verified as accurate by comparing them to reference data. Finally, this study demonstrated the ability of both data fusion and STRS to identify which parts of the potato field had lower photosynthetic production during the growing season. Data fusion was capable of explaining 52.7% of the yield variation through regression models, whereas the STRS explained 72.9%. 
To conclude, this study indicates how to combine crop status information from different sensors to support precision agriculture management decisions
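The index-versus-field-measurement comparisons above are reported as Spearman rank correlations (rs). A minimal sketch of that evaluation step, computing NDVI from fused imagery and correlating it with field LAI; all reflectance and LAI numbers below are invented toy values, not the study's data:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red)

def spearman(x, y):
    """Spearman rank correlation (no-ties case): Pearson correlation of ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical per-plot reflectances and field-measured LAI.
nir = np.array([0.50, 0.60, 0.70, 0.80])
red = np.array([0.20, 0.15, 0.10, 0.05])
lai = np.array([1.2, 2.0, 3.1, 4.0])

vi = ndvi(nir, red)
rs = spearman(vi, lai)   # monotone toy data, so rs = 1.0 here
```

Rank correlation is a reasonable choice for this kind of validation because it does not assume a linear index-to-parameter relationship, only a monotone one.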

    Unmixing-Based Fusion of Hyperspatial and Hyperspectral Airborne Imagery for Early Detection of Vegetation Stress

    Many applications require a timely acquisition of high spatial and spectral resolution remote sensing data. This is often not achievable since spaceborne remote sensing instruments face a tradeoff between spatial and spectral resolution, while airborne sensors mounted on a manned aircraft are too expensive to acquire a high temporal resolution. This gap between information needs and data availability inspires research on using Remotely Piloted Aircraft Systems (RPAS) to capture the desired high spectral and spatial information, furthermore providing temporal flexibility. Present hyperspectral imagers on board lightweight RPAS are still rare, due to the operational complexity, sensor weight, and instability. This paper looks into the use of a hyperspectral-hyperspatial fusion technique for an improved biophysical parameter retrieval and physiological assessment in agricultural crops. First, a biophysical parameter extraction study is performed on a simulated citrus orchard. Subsequently, the unmixing-based fusion is applied on a real test case in commercial citrus orchards with discontinuous canopies, in which a more efficient and accurate estimation of water stress is achieved by fusing thermal hyperspatial and hyperspectral (APEX) imagery.
Narrowband reflectance indices that have proven their effectiveness as previsual indicators of water stress, such as the Photochemical Reflectance Index (PRI), show a significant increase in tree water-stress detection when applied to the fused dataset compared to the original hyperspectral APEX dataset (R² = 0.62, p < 0.1). Maximal R² values of 0.93 and 0.86 are obtained by a linear relationship between the vegetation index and the water and chlorophyll content maps, respectively.

This work was supported in part by the Belgian Science Policy Office in the frame of the Stereo II program (Hypermix project SR/00/141), in part by the project Chameleon of the Flemish Agency for Innovation by Science and Technology (IWT), and in part by the Spanish Ministry of Science and Education (MEC) for the projects AGL2012-40053-C03-01 and CONSOLIDER RIDECO (CSD2006-67). The European Facility for Airborne Research EUFAR (www.eufar.net) funded the flight campaign (Transnational Access Project 'Hyper-Stress'). The work of D. S. Intrigliolo was supported by the Spanish Ministry of Economy and Competitiveness program "Ramón y Cajal."

Delalieux, S.; Zarco-Tejada, P. J.; Tits, L.; Jiménez Bello, M. Á.; Intrigliolo Molina, D. S.; Somers, B. (2014). Unmixing-Based Fusion of Hyperspatial and Hyperspectral Airborne Imagery for Early Detection of Vegetation Stress. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 7(6):2571-2582. https://doi.org/10.1109/JSTARS.2014.2330352

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to solve the problem of the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationship between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) was analyzed. Our results showed that (1) the FSDAF model was the most robust regardless of variations in LHI and TVI at both scene level and local level, while it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computing efficiency; it was accurate in predicting reflectance but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in the prediction of large-area land cover change with the capability of preserving image structures, but it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, while it was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select the appropriate STIF method for their own applications
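All five models compared above share one core idea: propagate the temporal change observed at coarse resolution onto a known fine-resolution image. The sketch below shows only that shared skeleton with a nearest-neighbour upsampling of the coarse change; it is deliberately minimal and is not any one of the five models (STARFM, Fit-FC, FSDAF, etc. refine it with neighbour weighting, regression, or unmixing):

```python
import numpy as np

def naive_stif(fine_t1, coarse_t1, coarse_t2, ratio=2):
    """Predict the fine image at t2 by adding the coarse-scale temporal
    change (coarse_t2 - coarse_t1) to the known fine image at t1."""
    change = coarse_t2 - coarse_t1
    # Nearest-neighbour upsample of the coarse change to the fine grid.
    change_fine = np.repeat(np.repeat(change, ratio, axis=0), ratio, axis=1)
    return fine_t1 + change_fine

# Toy scene: a 4x4 fine image and 2x2 coarse images (ratio 2).
fine_t1 = np.arange(16, dtype=float).reshape(4, 4) / 20
coarse_t1 = np.array([[0.2, 0.3], [0.4, 0.5]])
coarse_t2 = coarse_t1 + 0.1          # uniform greening between the two dates
pred = naive_stif(fine_t1, coarse_t1, coarse_t2)
```

The weaknesses the comparison reports (blurring, failure on abrupt land cover change) arise exactly where this additive assumption breaks down within a coarse pixel.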

    SFSDAF: an enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion

    Spatio-temporal image fusion methods have become a popular means to produce remotely sensed data sets that have both fine spatial and temporal resolution. Accurate prediction of reflectance change is difficult, especially when the change is caused by both phenological change and land cover class changes. Although several spatio-temporal fusion methods such as the Flexible Spatiotemporal DAta Fusion (FSDAF) directly derive land cover phenological change information (such as endmember change) at different dates, the direct derivation of land cover class change information is challenging. In this paper, an enhanced FSDAF that incorporates sub-pixel class fraction change information (SFSDAF) is proposed. By directly deriving the sub-pixel land cover class fraction change information the proposed method allows accurate prediction even for heterogeneous regions that undergo a land cover class change. In particular, SFSDAF directly derives fine spatial resolution endmember change and class fraction change at the date of the observed image pair and the date of prediction, which can help identify image reflectance change resulting from different sources. SFSDAF predicts a fine resolution image at the time of acquisition of coarse resolution images using only one prior coarse and fine resolution image pair, and accommodates variations in reflectance due to both natural fluctuations in class spectral response (e.g. due to phenology) and land cover class change. The method is illustrated using degraded and real images and compared against three established spatio-temporal methods. The results show that the SFSDAF produced the least blurred images and the most accurate predictions of fine resolution reflectance values, especially for regions of heterogeneous landscape and regions that undergo some land cover class change. Consequently, the SFSDAF has considerable potential in monitoring Earth surface dynamics
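The separation of reflectance change into the two sources SFSDAF models can be written out directly for one coarse band: an endmember (phenological) term and a class-fraction (land-cover change) term. The sketch below is a hypothetical illustration of that bookkeeping, not the paper's full algorithm (all fraction and endmember values are invented):

```python
import numpy as np

def reflectance_change(F1, e1, F2, e2):
    """Decompose predicted coarse-pixel reflectance change into:
       - phenological change: same cover fractions, new spectral response;
       - land cover change: fraction change valued at the new spectra.
    F1, F2 : (n_pix, n_class) class fractions at the two dates
    e1, e2 : (n_class,) endmember reflectances at the two dates
    """
    pheno = F1 @ (e2 - e1)      # endmember (phenology) contribution
    cover = (F2 - F1) @ e2      # class-fraction (land cover) contribution
    return pheno + cover        # algebraically equals F2 @ e2 - F1 @ e1

# Toy case: pixel 0 loses 20% of class 0 to class 1; pixel 1 is stable.
F1 = np.array([[1.0, 0.0], [0.5, 0.5]])
F2 = np.array([[0.8, 0.2], [0.5, 0.5]])
e1 = np.array([0.10, 0.40])
e2 = np.array([0.15, 0.45])     # both classes green up between dates
delta = reflectance_change(F1, e1, F2, e2)
```

The identity F1(e2 - e1) + (F2 - F1)e2 = F2 e2 - F1 e1 is what lets a method attribute observed change to phenology versus land cover once the fraction change is estimated.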

    Mapping annual forest cover by fusing PALSAR/PALSAR-2 and MODIS NDVI during 2007–2016

    Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) HH and HV polarization data were used previously to produce annual, global 25 m forest maps between 2007 and 2010, and the latest global forest maps of 2015 and 2016 were produced by using the ALOS-2 PALSAR-2 data. However, annual 25 m spatial resolution forest maps during 2011–2014 are missing because of the gap in operation between ALOS and ALOS-2, preventing the construction of a continuous, fine resolution time-series dataset on the world's forests. In contrast, MODerate Resolution Imaging Spectroradiometer (MODIS) NDVI images have been available globally since 2000. This research developed a novel method to produce annual 25 m forest maps during 2007–2016 by fusing the fine spatial resolution but asynchronous PALSAR/PALSAR-2 data with the coarse spatial resolution but synchronous MODIS NDVI data, thus filling the four-year gap in the ALOS and ALOS-2 time-series, as well as enhancing the existing mapping activity. The method was developed concentrating on two key objectives: 1) producing more accurate 25 m forest maps by integrating PALSAR/PALSAR-2 and MODIS NDVI data during 2007–2010 and 2015–2016; 2) reconstructing annual 25 m forest maps from time-series MODIS NDVI images during 2011–2014. Specifically, a decision tree classification was developed for forest mapping based on both the PALSAR/PALSAR-2 and MODIS NDVI data, and a new spatial-temporal super-resolution mapping method was proposed to reconstruct the 25 m forest maps from time-series MODIS NDVI images. Three study sites including Paraguay, the USA and Russia were chosen, as they represent the world's three main forest types: tropical forest, temperate broadleaf and mixed forest, and boreal conifer forest, respectively. Compared with traditional methods, the proposed approach produced the most accurate continuous time-series of fine spatial resolution forest maps both visually and quantitatively.
For the forest maps during 2007–2010 and 2015–2016, the results had greater overall accuracy values (>98%) than those of the original JAXA forest product. For the reconstructed 25 m forest maps during 2011–2014, the increases in classification accuracy relative to three benchmark methods were statistically significant, and the overall accuracy values of the three study sites were almost universally >92%. The proposed approach, therefore, has great potential to support the production of annual 25 m forest maps by fusing PALSAR/PALSAR-2 and MODIS NDVI during 2007–2016
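A decision tree over radar backscatter and optical phenology, as used above, can be pictured as a small set of threshold rules: L-band HV backscatter responds to canopy volume scattering, and peak-season NDVI confirms photosynthetically active cover. The toy sketch below uses hypothetical thresholds for illustration only; the paper's calibrated, site-specific rules are not reproduced here:

```python
import numpy as np

def classify_forest(hv_db, ndvi, hv_thresh=-12.0, ndvi_thresh=0.6):
    """Toy two-node decision tree: label a 25 m pixel as forest when both
    the HV backscatter (in dB) and the peak-season NDVI exceed thresholds.
    Thresholds are illustrative assumptions, not calibrated values."""
    return (hv_db > hv_thresh) & (ndvi > ndvi_thresh)

# Four hypothetical pixels: forest, bare/low-NDVI radar-dark, shrub, water.
hv = np.array([-10.0, -15.0, -11.0, -18.0])
nd = np.array([0.8, 0.7, 0.4, 0.3])
mask = classify_forest(hv, nd)   # only the first pixel satisfies both rules
```

Combining the two data sources this way is what lets the radar gap years be bridged: once the tree is trained on years with both inputs, the NDVI branch carries information into 2011–2014.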

    Enhancing Spatio-Temporal Fusion of MODIS and Landsat Data by Incorporating 250 m MODIS Data

    Spatio-temporal fusion of MODIS and Landsat data aims to produce new data that have simultaneously the Landsat spatial resolution and MODIS temporal resolution. It is an ill-posed problem involving large uncertainty, especially for reproduction of abrupt changes and heterogeneous landscapes. In this paper, we proposed to incorporate the freely available 250 m MODIS images into spatio-temporal fusion to increase prediction accuracy. The 250 m MODIS bands 1 and 2 are fused with 500 m MODIS bands 3-7 using the advanced area-to-point regression kriging approach. Based on a standard spatio-temporal fusion approach, the interim 250 m fused MODIS data are then downscaled to 30 m with the aid of the available 30 m Landsat data on temporally close days. The 250 m data can provide more information for the abrupt changes and heterogeneous landscapes than the original 500 m MODIS data, thus increasing the accuracy of spatio-temporal fusion predictions. The effectiveness of the proposed scheme was demonstrated using two datasets
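The regression step of area-to-point regression kriging (the approach named above for fusing 250 m and 500 m MODIS bands) can be sketched simply: aggregate the fine band to the coarse support, fit a linear model against the coarse band, and apply the model at the fine resolution. This is a minimal sketch of that step only; the kriging of the coarse residuals that ATPRK adds on top (and which guarantees coherence with the coarse observations) is omitted, and all pixel values are synthetic:

```python
import numpy as np

def regression_downscale(coarse_500, fine_250):
    """Regression part of an ATPRK-style downscaling (residual kriging omitted).

    coarse_500 : (h/2, w/2) coarse band, e.g. a 500 m MODIS band
    fine_250   : (h, w) fine covariate band, e.g. 250 m MODIS band 1 or 2
    """
    h, w = fine_250.shape
    # Aggregate the fine band to the coarse support by 2x2 block averaging.
    agg = fine_250.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # Least-squares fit at the coarse support: coarse_500 ≈ a * agg + b.
    a, b = np.polyfit(agg.ravel(), coarse_500.ravel(), 1)
    # Apply the coarse-scale model at the fine resolution.
    return a * fine_250 + b

# Synthetic check: the coarse band is exactly 2 * (aggregated fine) + 1.
fine_250 = np.arange(16, dtype=float).reshape(4, 4)
coarse_500 = np.array([[6.0, 10.0], [22.0, 26.0]])
sharpened = regression_downscale(coarse_500, fine_250)
```

On this exact-fit toy input the recovered 30 m-style detail is simply the linear model applied per fine pixel; real data leave residuals, which is precisely what the kriging step handles.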