216 research outputs found

    Combining hyperspectral UAV and multispectral FORMOSAT-2 imagery for precision agriculture applications

    Precision agriculture requires detailed information regarding the crop status variability within a field. Remote sensing provides an efficient way to obtain such information through observing biophysical parameters, such as canopy nitrogen content, leaf coverage, and plant biomass. However, individual remote sensing sensors often fail to provide information which meets the spatial and temporal resolution required by precision agriculture. The purpose of this study is to investigate methods which can be used to combine imagery from various sensors in order to create a new dataset which comes closer to meeting these requirements. More specifically, this study combined multispectral satellite imagery (Formosat-2) and hyperspectral Unmanned Aerial Vehicle (UAV) imagery of a potato field in the Netherlands. The imagery from both platforms was combined in two ways. Firstly, data fusion methods brought the spatial resolution of the Formosat-2 imagery (8 m) down to the spatial resolution of the UAV imagery (1 m). Two data fusion methods were applied: an unmixing-based algorithm and the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM). The unmixing-based method produced vegetation indices which were highly correlated to the measured LAI (rs = 0.866) and canopy chlorophyll values (rs = 0.884), whereas STARFM obtained lower correlations. Secondly, a Spectral-Temporal Reflectance Surface (STRS) was constructed to interpolate daily 101-band reflectance spectra using both sources of imagery. A novel STRS method was presented, which utilizes Bayesian theory to obtain realistic spectra and accounts for sensor uncertainties. The resulting surface obtained a high correlation to LAI (rs = 0.858) and canopy chlorophyll (rs = 0.788) measurements at field level. The multi-sensor datasets were able to characterize significant differences in crop status due to differing nitrogen fertilization regimes from June to August. 
Meanwhile, the yield prediction models based purely on the vegetation indices extracted from the unmixing-based fusion dataset explained 52.7% of the yield variation, whereas the STRS dataset was able to explain 72.9% of the yield variability. The results of the current study indicate that the limitations of each individual sensor can be largely surpassed by combining multiple sources of imagery, which is beneficial for agricultural management. Further research could focus on the integration of data fusion and STRS techniques, and the inclusion of imagery from additional sensors.

Samenvatting (summary, translated from Dutch): In a world where future food security is under threat, precision agriculture offers a solution that can maximize yield while limiting the economic and ecological costs of food production. Detailed information on crop status is needed to achieve this. Remote sensing is one way to obtain biophysical information, including nitrogen content and biomass. However, the information from an individual sensor is often insufficient to meet the high demands on spatial and temporal resolution. This study therefore combines information from different sensors, namely multispectral satellite imagery (Formosat-2) and hyperspectral Unmanned Aerial Vehicle (UAV) imagery of a potato field, in an attempt to meet the high information demands of precision agriculture. First, data fusion was used to combine the eight Formosat-2 images at 8 m resolution with the four UAV images at 1 m resolution; the resulting dataset consists of eight images at 1 m resolution. Two methods were applied, the so-called STARFM method and an unmixing-based method. The unmixing-based method produced images with a high correlation to the Leaf Area Index (LAI) (rs = 0.866) and chlorophyll content (rs = 0.884) measured at field level. 
The STARFM method performed worse, with correlations of rs = 0.477 and rs = 0.431, respectively. Second, Spectral-Temporal Reflectance Surfaces (STRSs) were developed, which represent a daily spectrum with 101 spectral bands. To this end, a new STRS method based on Bayesian theory was developed, which produces realistic spectra with an accompanying uncertainty. These STRSs showed high correlations with the LAI (rs = 0.858) and chlorophyll content (rs = 0.788) measured at field level. The usefulness of both types of dataset was analyzed by calculating a number of vegetation indices. The results show that the multi-sensor datasets are capable of detecting significant differences in crop growth during the growing season itself. In addition, regression models were applied to assess the usefulness of the datasets for yield prediction. The unmixing-based data fusion explained 52.7% of the yield variation, while the STRS explained 72.9% of the variability. The results of the current study show that the limitations of an individual sensor can largely be overcome by using multiple sensors. Combining different sensors, whether Formosat-2 and UAV imagery or other sources of spatial information, can meet the high information demands of precision agriculture.

In the context of threatened global food security, precision agriculture is one strategy to maximize yield to meet the increased demand for food, while minimizing both the economic and environmental costs of food production. This is done by applying variable management strategies, which means the fertilizer or irrigation rates within a field are adjusted according to the crop needs in that specific part of the field. This implies that accurate crop status information must be available regularly for many different points in the field. 
Remote sensing can provide this information, but it is difficult to meet the information requirements when using only one sensor. For example, satellites collect imagery regularly and over large areas, but may be blocked by clouds. Unmanned Aerial Vehicles (UAVs), commonly known as drones, are more flexible but have higher operational costs. The purpose of this study was to use fusion methods to combine satellite (Formosat-2) with UAV imagery of a potato field in the Netherlands. Firstly, data fusion was applied. The eight Formosat-2 images with 8 m x 8 m pixels were combined with four UAV images with 1 m x 1 m pixels to obtain a new dataset of eight images with 1 m x 1 m pixels. Unmixing-based data fusion produced images which had a high correlation to field measurements obtained from the potato field during the growing season. The results of a second data fusion method, STARFM, were less reliable in this study. The UAV images were hyperspectral, meaning they contained very detailed information spanning a large part of the electromagnetic spectrum. Much of this information was lost in the data fusion methods because the Formosat-2 images were multispectral, representing a more limited portion of the spectrum. Therefore, a second analysis investigated the use of Spectral-Temporal Reflectance Surfaces (STRS), which allow information from different portions of the electromagnetic spectrum to be combined. These STRS provided daily hyperspectral observations, which were also verified as accurate by comparing them to reference data. Finally, this study demonstrated the ability of both data fusion and STRS to identify which parts of the potato field had lower photosynthetic production during the growing season. Data fusion was capable of explaining 52.7% of the yield variation through regression models, whereas the STRS explained 72.9%. 
To conclude, this study indicates how to combine crop status information from different sensors to support precision agriculture management decisions.
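The accuracy figures quoted above are Spearman rank correlations (rs) between vegetation indices from the fused imagery and field measurements such as LAI. A minimal sketch of that validation step, assuming nothing from the thesis itself: the NDVI and LAI numbers below are invented for illustration.

```python
# Sketch: validating a fused vegetation index against field measurements
# via Spearman rank correlation (rs), as reported in the abstract above.
# All data values here are made up for illustration.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rs = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-plot NDVI from fused imagery vs. measured LAI
ndvi = [0.41, 0.55, 0.62, 0.48, 0.71, 0.66]
lai = [1.2, 2.1, 2.9, 1.8, 3.4, 2.6]
print(round(spearman(ndvi, lai), 3))  # prints 0.943
```

A rank correlation is the natural choice here because index-to-parameter relationships are monotonic but rarely linear across a growing season.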

    Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

    In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to solve the problem of the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various degrees of spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and the Fit-FC model from the weight function-based category, an unmixing-based data fusion (UBDF) method from the unmixing-based category, the one-pair learning method from the learning-based category, and the Flexible Spatiotemporal DAta Fusion (FSDAF) method from the hybrid category. The relationships between the performance of the STIF methods and the scene-level and local-level landscape heterogeneity index (LHI) and temporal variation index (TVI) were analyzed. Our results showed that (1) the FSDAF model was the most robust regardless of variations in LHI and TVI at both the scene level and the local level, although it was less computationally efficient than the other models except for one-pair learning; (2) Fit-FC had the highest computational efficiency. It was accurate in predicting reflectance but less accurate than FSDAF and one-pair learning in capturing image structures; (3) one-pair learning had advantages in the prediction of large-area land cover change, with the capability of preserving image structures. However, it was the least computationally efficient model; (4) STARFM was good at predicting phenological change, but was not suitable for applications involving land cover type change; (5) UBDF is not recommended for cases with strong temporal changes or abrupt changes. These findings could provide guidelines for users to select an appropriate STIF method for their own applications.
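The comparison above conditions each method's performance on landscape heterogeneity and temporal variation. The paper's exact LHI and TVI definitions are not reproduced here; the sketch below uses simple stand-ins (mean local standard deviation for heterogeneity, mean absolute reflectance change between two dates for temporal variation) purely to show the kind of scene statistics involved.

```python
# Stand-in scene statistics, assuming images as nested lists of reflectance.
# These are NOT the paper's LHI/TVI formulas, just analogous quantities.

def local_std(img, r=1):
    """Mean of per-pixel standard deviations over (2r+1)^2 windows."""
    h, w = len(img), len(img[0])
    stds = []
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            m = sum(vals) / len(vals)
            stds.append((sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5)
    return sum(stds) / len(stds)

def temporal_variation(img_t1, img_t2):
    """Mean absolute reflectance change between two dates."""
    pairs = [(a, b) for ra, rb in zip(img_t1, img_t2) for a, b in zip(ra, rb)]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

homogeneous = [[0.30] * 4 for _ in range(4)]                      # uniform field
checker = [[0.1 if (x + y) % 2 else 0.9 for x in range(4)] for y in range(4)]
print(local_std(homogeneous) < local_std(checker))  # heterogeneous scene scores higher
```

On statistics like these, the study's finding is that unmixing-based and weight function-based methods degrade as heterogeneity and temporal change grow, while FSDAF stays comparatively robust.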

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp and recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of utilizing multimodal datasets in a joint manner to further improve the performance of the processing approaches with respect to the application at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the integration of the temporal information with the spatial and/or spectral/backscattering information of the remotely sensed data is possible and helps to move from a representation of 2D/3D data to 4D data structures, where the time variable adds new information as well as challenges for the information extraction algorithms. There are a huge number of research works dedicated to multisource and multitemporal data fusion, but the methods for the fusion of different modalities have expanded along different paths according to each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches with respect to different research communities and provides a thorough and discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) willing to conduct novel investigations on this challenging topic, by supplying sufficient detail and references.

    Unmixing-Based Fusion of Hyperspatial and Hyperspectral Airborne Imagery for Early Detection of Vegetation Stress

    Many applications require a timely acquisition of high spatial and spectral resolution remote sensing data. This is often not achievable, since spaceborne remote sensing instruments face a tradeoff between spatial and spectral resolution, while airborne sensors mounted on a manned aircraft are too expensive to acquire a high temporal resolution. This gap between information needs and data availability inspires research on using Remotely Piloted Aircraft Systems (RPAS) to capture the desired high spectral and spatial information, while furthermore providing temporal flexibility. At present, hyperspectral imagers on board lightweight RPAS are still rare, due to operational complexity, sensor weight, and instability. This paper looks into the use of a hyperspectral-hyperspatial fusion technique for improved biophysical parameter retrieval and physiological assessment in agricultural crops. First, a biophysical parameter extraction study is performed on a simulated citrus orchard. Subsequently, the unmixing-based fusion is applied to a real test case in commercial citrus orchards with discontinuous canopies, in which a more efficient and accurate estimation of water stress is achieved by fusing thermal hyperspatial and hyperspectral (APEX) imagery. 
Narrowband reflectance indices that have proven their effectiveness as previsual indicators of water stress, such as the Photochemical Reflectance Index (PRI), show a significant increase in tree water-stress detection when applied to the fused dataset compared to the original hyperspectral APEX dataset (R² = 0.62, p 0.1). Maximal R² values of 0.93 and 0.86 are obtained by a linear relationship between the vegetation index and the water and chlorophyll content maps, respectively.

This work was supported in part by the Belgian Science Policy Office in the frame of the Stereo II program (Hypermix project, SR/00/141), in part by the project Chameleon of the Flemish Agency for Innovation by Science and Technology (IWT), and in part by the Spanish Ministry of Science and Education (MEC) through the projects AGL2012-40053-C03-01 and CONSOLIDER RIDECO (CSD2006-67). The European Facility for Airborne Research EUFAR (www.eufar.net) funded the flight campaign (Transnational Access Project 'Hyper-Stress'). The work of D. S. Intrigliolo was supported by the Spanish Ministry of Economy and Competitiveness program "Ramon y Cajal."

Delalieux, S.; Zarco-Tejada, P. J.; Tits, L.; Jiménez Bello, M. Á.; Intrigliolo Molina, D. S.; Somers, B. (2014). Unmixing-Based Fusion of Hyperspatial and Hyperspectral Airborne Imagery for Early Detection of Vegetation Stress. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(6): 2571-2582. https://doi.org/10.1109/JSTARS.2014.2330352
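The unmixing-based fusion recurring in these abstracts rests on the linear mixture model: each coarse pixel's reflectance is a fraction-weighted sum of class ("endmember") reflectances, and a least-squares solve recovers the per-class values, which are then assigned back to the fine-resolution class map. A toy two-class version of that solve, with fractions and reflectances invented for illustration:

```python
# Toy linear spectral unmixing step for unmixing-based fusion:
# solve F @ e = s for the endmember reflectances e, given known class
# fractions F inside each coarse pixel and coarse reflectances s.
# Two classes only, solved via 2x2 normal equations; values are synthetic.

def solve_endmembers(fractions, coarse_refl):
    """Least-squares solve of F @ e = s for two classes."""
    a = b = c = 0.0   # entries of F^T F
    f1 = f2 = 0.0     # entries of F^T s
    for (u, v), s in zip(fractions, coarse_refl):
        a += u * u; b += u * v; c += v * v
        f1 += u * s; f2 += v * s
    det = a * c - b * b
    return ((c * f1 - b * f2) / det, (a * f2 - b * f1) / det)

# Three coarse pixels with known crop/soil fractions; their mixed
# reflectances are generated from hypothetical "true" endmember values.
fractions = [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]
true_crop, true_soil = 0.45, 0.15
coarse = [u * true_crop + v * true_soil for u, v in fractions]
crop, soil = solve_endmembers(fractions, coarse)
print(round(crop, 2), round(soil, 2))  # prints 0.45 0.15
```

The recovered per-class reflectances would then be written into the fine-resolution pixels of each class, which is what lets the fused product inherit the fine sensor's spatial detail.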

    Bidirectional recurrent imputation and abundance estimation of LULC classes with MODIS multispectral time series and geo-topographic and climatic data

    Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC) types. Spectral unmixing (SU) is a key technique that disentangles mixed pixels into constituent LULC types and their abundance fractions. While existing studies on Deep Learning (DL) for SU typically focus on single time-step hyperspectral (HS) or multispectral (MS) data, our work pioneers SU using MODIS MS time series, addressing missing data with end-to-end DL models. Our approach enhances a Long Short-Term Memory (LSTM)-based model by incorporating geographic, topographic (geo-topographic), and climatic ancillary information. Notably, our method eliminates the need for explicit endmember extraction, instead learning the input-output relationship between mixed spectra and LULC abundances through supervised learning. Experimental results demonstrate that integrating spectral-temporal input data with geo-topographic and climatic information significantly improves the estimation of LULC abundances in mixed pixels. To facilitate this study, we curated a novel labeled dataset for Andalusia (Spain) with monthly MODIS multispectral time series at 460 m resolution for 2013. Named Andalusia MultiSpectral MultiTemporal Unmixing (Andalusia-MSMTU), this dataset provides pixel-level annotations of LULC abundances along with ancillary information. The dataset (https://zenodo.org/records/7752348) and code (https://github.com/jrodriguezortega/MSMTU) are available to the public.
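The key idea above, learning the mapping from mixed spectra to abundance fractions by supervised regression rather than by explicit endmember extraction, can be illustrated with a deliberately tiny stand-in: a linear model fitted by gradient descent takes the place of the paper's LSTM, and the labeled pixels are synthetic two-band spectra.

```python
# Hedged sketch of supervised abundance estimation: learn spectra -> LULC
# fractions directly from labeled mixed pixels, with no endmember step.
# A linear model stands in for the paper's LSTM; all data are synthetic.

def train(spectra, fractions, lr=0.5, epochs=2000):
    """Per-sample gradient descent on a linear map: bands -> abundances."""
    nb, nc = len(spectra[0]), len(fractions[0])
    w = [[0.0] * nb for _ in range(nc)]
    for _ in range(epochs):
        for x, y in zip(spectra, fractions):
            pred = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
            for c in range(nc):
                err = pred[c] - y[c]
                for b in range(nb):
                    w[c][b] -= lr * err * x[b]  # squared-error gradient step
    return w

# Two pure "endmember" pixels and one 50/50 mixture (2 bands, 2 classes)
spectra = [[0.6, 0.2], [0.1, 0.5], [0.35, 0.35]]
fractions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
w = train(spectra, fractions)

mixed = [0.35, 0.35]  # an unseen-by-name mixed pixel
pred = [sum(wi * xi for wi, xi in zip(row, mixed)) for row in w]
print([round(p, 2) for p in pred])  # close to [0.5, 0.5]
```

The paper's contribution is exactly this input-output framing, scaled up: a recurrent network over monthly MODIS spectra plus geo-topographic and climatic covariates, trained against pixel-level abundance labels.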

    Enhancing Spatio-Temporal Fusion of MODIS and Landsat Data by Incorporating 250 m MODIS Data

    Spatio-temporal fusion of MODIS and Landsat data aims to produce new data that have simultaneously the Landsat spatial resolution and the MODIS temporal resolution. It is an ill-posed problem involving large uncertainty, especially for the reproduction of abrupt changes and heterogeneous landscapes. In this paper, we proposed to incorporate the freely available 250 m MODIS images into spatio-temporal fusion to increase prediction accuracy. The 250 m MODIS bands 1 and 2 are fused with the 500 m MODIS bands 3-7 using the advanced area-to-point regression kriging approach. Based on a standard spatio-temporal fusion approach, the interim 250 m fused MODIS data are then downscaled to 30 m with the aid of the available 30 m Landsat data on temporally close days. The 250 m data can provide more information on abrupt changes and heterogeneous landscapes than the original 500 m MODIS data, thus increasing the accuracy of spatio-temporal fusion predictions. The effectiveness of the proposed scheme was demonstrated using two datasets.

    Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps

    Studies of land cover dynamics would benefit greatly from the generation of land cover maps at both fine spatial and temporal resolutions. Fine spatial resolution images are usually acquired relatively infrequently, whereas coarse spatial resolution images may be acquired with a high repetition rate but may not capture the spatial detail of the land cover mosaic of the region of interest. Traditional image spatial–temporal fusion methods focus on the blending of pixel spectra reflectance values and do not directly provide land cover maps or information on land cover dynamics. In this research, a novel Spatial–Temporal remotely sensed Images and land cover Maps Fusion Model (STIMFM) is proposed to produce land cover maps at both fine spatial and temporal resolutions using a series of coarse spatial resolution images together with a few fine spatial resolution land cover maps that pre- and post-date the series of coarse spatial resolution images. STIMFM integrates both the spatial and temporal dependences of fine spatial resolution pixels and outputs a series of fine spatial–temporal resolution land cover maps instead of reflectance images, which can be used directly for studies of land cover dynamics. Here, three experiments based on simulated and real remotely sensed images were undertaken to evaluate the STIMFM for studies of land cover change. 
These experiments included a comparative assessment of methods based on a single-date image, such as the super-resolution approaches (e.g., pixel swapping-based super-resolution mapping), and of the state-of-the-art spatial–temporal fusion approaches that used the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Flexible Spatiotemporal DAta Fusion model (FSDAF) to predict the fine-resolution images, to which the maximum likelihood classifier and the automated land cover updating approach based on an integrated change detection and classification method were then applied to generate the fine-resolution land cover maps. Results show that the methods based on a single-date image failed to predict the pixels of changed and unchanged land cover with high accuracy. The land cover maps obtained by classification of the reflectance images output by ESTARFM and FSDAF contained substantial misclassification, and the classification accuracy was lower for pixels of changed land cover than for pixels of unchanged land cover. In addition, STIMFM predicted fine spatial–temporal resolution land cover maps from a series of Landsat images and a few Google Earth images, to which ESTARFM and FSDAF, which require correlation between the reflectance bands of the coarse and fine images, cannot be applied. Notably, STIMFM generated higher accuracy for pixels of both changed and unchanged land cover in comparison with the other methods.

    Unmixing-based Spatiotemporal Image Fusion Based on the Self-trained Random Forest Regression and Residual Compensation

    Spatiotemporal satellite image fusion (STIF) has been widely applied in land surface monitoring to generate high spatial and high temporal resolution reflectance images from satellite sensors. This paper proposed a new unmixing-based spatiotemporal fusion method composed of a self-trained random forest machine learning regression (R), low-resolution (LR) endmember estimation (E), high-resolution (HR) surface reflectance image reconstruction (R), and residual compensation (C), that is, RERC. RERC uses a self-trained random forest to train and predict the relationship between spectra and the corresponding class fractions. This process is flexible, requires no ancillary training dataset, and does not share the limitation of linear spectral unmixing, which requires the number of endmembers to be no more than the number of spectral bands. The running time of the random forest regression is about 1% of the running time of the linear mixture model. In addition, RERC adopts a spectral reflectance residual compensation approach to refine the fused image and make full use of the information from the LR image. RERC was assessed in the fusion of a prediction-time MODIS image with a Landsat image using two benchmark datasets, and in fusing images with different numbers of spectral bands by fusing a known-time Landsat image (seven bands used) with a known-time very-high-resolution PlanetScope image (four spectral bands). RERC was also assessed in the fusion of MODIS-Landsat imagery over large areas at the national scale for the Republic of Ireland and France. The code is available at https://www.researchgate.net/profile/Xiao_Li52.
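The residual compensation step (the C in RERC) can be sketched in isolation: after the fine-resolution reflectance has been reconstructed, the difference between the observed coarse value and the mean of its fine pixels is added back, so the fused result stays consistent with the coarse observation. The version below is a minimal sketch with invented reflectance values; redistributing the residual uniformly is a simplification of the paper's spectral approach.

```python
# Minimal residual compensation sketch: force the mean of the fine pixels
# inside one coarse pixel to match the observed coarse reflectance.
# Uniform redistribution is an assumed simplification; values are synthetic.

def compensate(fine_block, coarse_value):
    """Add the coarse-minus-mean residual uniformly to a block of fine pixels."""
    mean = sum(fine_block) / len(fine_block)
    residual = coarse_value - mean
    return [v + residual for v in fine_block]

block = [0.20, 0.24, 0.26, 0.30]    # predicted fine pixels in one coarse pixel
adjusted = compensate(block, 0.27)  # observed coarse reflectance
print(round(sum(adjusted) / len(adjusted), 2))  # prints 0.27
```

This consistency constraint is what prevents the regression stage's errors from drifting the fused image away from the reflectance the coarse sensor actually measured.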