
    Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept

    The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE's aim is to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability, starting in Europe and then expanding to regions in Africa. The objective of EBONE is to deliver: 1. A sound scientific basis for the production of statistical estimates of stock and change of key indicators; 2. The development of a system for estimating past changes and for forecasting and testing policy options and management strategies for threatened ecosystems and species; 3. A proposal for a cost-effective biodiversity monitoring system. There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, EO promises the type of spatial and temporal coverage that is beyond reach with in-situ efforts alone. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and of the options for integrating in-situ observations with EO within the context of the EBONE concept (cf. EBONE-ID1.4). The assessment is based mainly on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme. The issues we faced were many: 1. Integration can be interpreted in different ways: as the combined use of independent data sets to deliver a different but improved data set, or as the use of one data set to complement another. 2. The targeted improvement varies with stakeholder group: some seek more efficiency, others more reliable estimates (accuracy and/or precision), others more detail in space and/or time, or more of everything. 3. Integration requires a link between the datasets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and biodiversity observed in-situ is a function of many variables, for example the spatial scale and timing of the observations, the adopted classification nomenclature, the complexity of the landscape in terms of composition, spatial structure and the physical environment, and the habitat and land cover types under consideration. 4. The type of EO data available varies (as a function of, e.g., budget, size and location of the region, cloudiness, and national and/or international investment in airborne campaigns or space technology), which determines its capability to deliver the required output. EO and in-situ data can be combined in different ways, depending on the type of integration and the targeted improvement. We aimed for an improvement in accuracy (i.e. a reduction in the error of our indicator estimate, calculated per environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
In its initial development, EBONE focused on three main indicators: (i) the extent and change of habitats of European interest in the context of a general habitat assessment; (ii) the abundance and distribution of selected species (birds, butterflies and plants); and (iii) the fragmentation of natural and semi-natural areas. For habitat extent, we decided that it did not matter how the in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and that precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options, in which EO and in-situ data play different roles: using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics by post-stratification with correlated EO data (a minimal sketch of this option follows below); and using in-situ samples to train the classification of EO data into habitat types where the EO data deliver full coverage or a larger number of samples. For some of these cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved. Restricted access to Europe-wide species data prevented work on the indicator 'abundance and distribution of species'. With respect to the indicator 'fragmentation', we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
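    As a concrete illustration of the post-stratification option above, the sketch below post-stratifies a hypothetical in-situ indicator sample by EO-derived strata. All numbers are invented; the stratum weights stand in for area shares that would in practice come from an EO land cover map.

```python
import numpy as np

# Post-stratification sketch: improve the precision of an in-situ
# habitat-indicator estimate using strata from an EO-derived map.
# All figures below are hypothetical illustrations.

# EO-derived stratum weights (share of the region in each stratum)
stratum_weights = np.array([0.5, 0.3, 0.2])

# In-situ observations of the indicator (habitat presence as 0/1),
# grouped by the EO stratum each sample falls into.
samples = [
    np.array([1, 1, 0, 1, 1]),  # stratum A: mostly habitat
    np.array([0, 1, 0, 0]),     # stratum B: mixed
    np.array([0, 0, 0]),        # stratum C: mostly non-habitat
]

# Simple (unstratified) estimate and its variance
pooled = np.concatenate(samples)
simple_mean = pooled.mean()
simple_var = pooled.var(ddof=1) / pooled.size

# Post-stratified estimate: stratum means weighted by stratum areas
strat_means = np.array([s.mean() for s in samples])
strat_vars = np.array([s.var(ddof=1) / s.size for s in samples])
ps_mean = np.sum(stratum_weights * strat_means)
ps_var = np.sum(stratum_weights**2 * strat_vars)

print(f"simple:          {simple_mean:.3f} +/- {np.sqrt(simple_var):.3f}")
print(f"post-stratified: {ps_mean:.3f} +/- {np.sqrt(ps_var):.3f}")
```

    Because the EO strata correlate with the indicator, within-stratum variances are smaller than the pooled variance, so the post-stratified estimate is typically more precise without extra field effort.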

    Multisource and Multitemporal Data Fusion in Remote Sensing

    The sharp recent increase in the availability of data captured by different sensors, combined with their considerably heterogeneous natures, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary datasets, however, opens up the possibility of using multimodal datasets jointly to further improve the performance of the processing approaches for the application at hand. Multisource data fusion has therefore received enormous attention from researchers worldwide across a wide variety of applications. Moreover, thanks to the revisit capability of several spaceborne sensors, the temporal information can be integrated with the spatial and/or spectral/backscattering information of the remotely sensed data, moving from 2D/3D representations to 4D data structures in which the time variable adds new information as well as new challenges for information extraction algorithms. A huge number of research works are dedicated to multisource and multitemporal data fusion, but the methods for fusing different modalities have developed along separate paths within each research community. This paper brings together the advances of multisource and multitemporal data fusion approaches across research communities and provides a thorough, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to conduct novel investigations of this challenging topic, supplying sufficient detail and references.
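    To make the 2D/3D-to-4D point concrete, the following minimal sketch (with invented shapes, not tied to any particular sensor) holds co-registered multitemporal, multiband imagery as a single 4D array:

```python
import numpy as np

# Sketch: a multitemporal, multiband image stack as a 4D array
# (time, band, row, col). Shapes are illustrative only.
n_times, n_bands, height, width = 12, 6, 256, 256

# Stand-in for co-registered acquisitions; real data would be read
# from files (e.g. with rasterio) and resampled to a common grid.
cube = np.random.rand(n_times, n_bands, height, width).astype(np.float32)

# One pixel's time series in one band: shape (n_times,)
ts = cube[:, 0, 100, 100]

# Temporal mean composite per band: shape (n_bands, height, width)
composite = cube.mean(axis=0)

# A fusion-ready view: flatten space, keep time and band axes
pixels = cube.reshape(n_times, n_bands, -1)  # (time, band, n_pixels)
print(ts.shape, composite.shape, pixels.shape)
```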

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.
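    As a hedged illustration of one restoration task the book covers, the sketch below simulates a blurred, noisy image and restores it with Wiener deconvolution via scikit-image; the PSF, noise level, and balance parameter are arbitrary choices, not values from the book.

```python
import numpy as np
from scipy import ndimage
from skimage import restoration

# Deblurring sketch: degrade a synthetic scene with a known PSF plus
# noise, then invert the blur with Wiener deconvolution.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[24:40, 24:40] = 1.0                  # synthetic bright patch

psf = np.ones((5, 5)) / 25.0               # uniform 5x5 blur kernel
blurred = ndimage.convolve(truth, psf)
noisy = blurred + 0.01 * rng.standard_normal(truth.shape)

# 'balance' trades noise suppression against sharpness; tune per image
restored = restoration.wiener(noisy, psf, balance=0.1)

print("MAE before:", float(np.abs(noisy - truth).mean()))
print("MAE after: ", float(np.abs(restored - truth).mean()))
```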

    Mapping of multitemporal rice (Oryza sativa L.) growth stages using remote sensing with multi-sensor and machine learning : a thesis dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Earth Science at Massey University, Manawatū, New Zealand

    Figure 2.1 is adapted and re-used under a Creative Commons Attribution 4.0 International (CC BY 4.0) license. Rice (Oryza sativa) plays a pivotal role in food security for Asian countries, especially Indonesia. Due to the increasing pressure of environmental changes, such as land use and climate, rice cultivation areas need to be monitored regularly and spatially to ensure sustainable rice production. Moreover, timely information on rice growth stages (RGS) can lead to more efficient distribution of inputs such as water, seed, fertilizer, and pesticide. One efficient solution for regularly mapping the rice crop is to use Earth observation satellites, and the increasing availability of open-access satellite images such as Landsat-8, Sentinel-1, and Sentinel-2 provides ample opportunities to map continuous, high-resolution rice growth stages with greater accuracy. The majority of the literature has focused on mapping rice area and cropping patterns, relying mainly on the phenology of vegetation. However, the accuracy of existing RGS mapping processes was difficult to assess, the processes were time-consuming, and they depended on a single sensor. In this work, we discuss the use of machine learning algorithms (MLA) for mapping paddy RGS with multiple remote sensing datasets in near-real-time. The study area was Java Island, the primary rice producer in Indonesia. This study investigated: (1) the mapping of RGS using Landsat-8 imagery and different MLAs, whose performance was rigorously evaluated through a multitemporal analysis; (2) the temporal consistency of predicting RGS using Sentinel-2, MOD13Q1, and Sentinel-1 data; and (3) the correlation between local statistics data and paddy RGS mapped with Sentinel-2, PROBA-V, and Sentinel-1 using MLAs. The ground truth datasets were collected from multi-year web camera data (2014-2016) and a three-month field campaign in different regions of Java (2018). The analysis considered the RGS classes vegetative, reproductive, ripening, bare land, and flooding, and used MLAs such as support vector machines (SVM), random forest (RF), and artificial neural networks (ANN). A temporal consistency matrix was used to compare the classification maps across three sensor datasets (Landsat-8 OLI; Sentinel-2; and the combined Sentinel-2, MOD13Q1, and Sentinel-1) and across four periods (5, 10, 15, and 16 days). Moreover, the resulting RGS maps were also compared with monthly data from local statistics within each sub-district using cross-correlation analysis. The results show that SVM with a radial basis function outperformed RF and ANN and proved to be a robust method for small datasets (< 1,000 points). Compared to Sentinel-2, Landsat-8 OLI gives lower accuracy due to the lack of a red-edge band and its larger pixel size (30 x 30 m). Integration of Sentinel-2, MOD13Q1, and Sentinel-1 improved the classification performance and increased the temporal availability of cloud-free maps. The integration of PROBA-V and Sentinel-1 improved the classification accuracy over the Landsat-8 result and was consistent with the monthly rice planting area statistics at the sub-district level. The western area of Java had the highest accuracy and consistency, since the cropping pattern there relies on rice cultivation alone. In contrast, accuracy was lower in the eastern area because of upland rice cultivation, due to limited irrigation facilities and mixed cropping.
In addition, the cultivation of shallots north of Nganjuk Regency interferes with the model predictions, because shallot cultivation resembles the vegetative phase due to the water banks. One future research idea is the auto-detection of the cropping index in complex landscapes, making it possible to map RGS at a global scale. Detecting rice area and RGS using Google Earth Engine (GEE) could be an action plan to disseminate the information quickly at a planetary scale. Our results show that multitemporal Sentinel-1 combined with RF can detect rice areas with high accuracy (>91%). Similarly, accurate RGS maps can be produced by integrating multiple remote sensing datasets (Sentinel-2, Landsat-8 OLI, and MOD13Q1) with acceptable accuracy (76.4%), high temporal frequency and lower cloud interference (every 16 days). Overall, this study shows that remote sensing combined with machine learning can deliver information on RGS in a timely fashion that is easy to scale up, consistent in both time and space, and in agreement with local statistics. This thesis is also in line with existing rice monitoring projects such as Crop Monitor, Crop Watch, AMIS, and Sen4Agri in supporting the dissemination of information over large areas. To sum up, the proposed workflow and detailed maps provide a more accurate method and near-real-time information for stakeholders, such as governmental agencies, compared with the existing mapping method. This method can be introduced to provide rice farmers with accurate and timely information, and with sufficient inputs such as irrigation, seeds, and fertilisers, to help ensure national food security under planting times shifted by climate change.
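    The classification step described above can be sketched as follows. The features and labels are synthetic stand-ins (real inputs would be per-pixel spectral indices and SAR backscatter), and the hyperparameters are illustrative, not the thesis's tuned values.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# RBF-kernel SVM on a small per-pixel dataset (< 1,000 points), the
# setting in which SVM outperformed RF and ANN above. Data are random
# stand-ins, so accuracy here will hover near chance.
rng = np.random.default_rng(42)
n_samples, n_features = 800, 10
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 5, size=n_samples)  # vegetative, reproductive,
                                        # ripening, bare land, flooding

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```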

    Crop monitoring and yield estimation using polarimetric SAR and optical satellite data in southwestern Ontario

    Optical satellite data have proven to be an efficient source for extracting crop information and monitoring crop growth conditions over large areas. In local- to subfield-scale crop monitoring studies, both high spatial resolution and high temporal resolution of the image data are important. However, the acquisition of optical data is limited by persistent cloud contamination in cloudy areas. This thesis explores the potential of polarimetric Synthetic Aperture Radar (SAR) satellite data and a spatio-temporal data fusion approach for crop monitoring and yield estimation in southwestern Ontario. Firstly, the sensitivity of 16 parameters derived from C-band Radarsat-2 polarimetric SAR data to crop height and fractional vegetation cover (FVC) was investigated. The results show that SAR backscatter is affected by many factors unrelated to the crop canopy, such as the incidence angle and the soil background, and that the degree of sensitivity varies with crop type, growth stage, and the polarimetric SAR parameter. Secondly, the Minimum Noise Fraction (MNF) transformation was, for the first time, applied to multitemporal Radarsat-2 polarimetric SAR data for cropland mapping with a random forest classifier. An overall classification accuracy of 95.89% was achieved using the MNF transformation of the multitemporal coherency matrix acquired from July to November. Then, a spatio-temporal data fusion method was developed to generate Normalized Difference Vegetation Index (NDVI) time series with both high spatial and high temporal resolution in heterogeneous regions using Landsat and MODIS imagery; the proposed method outperforms two other widely used methods (a generic sketch of the idea behind such fusion follows below). Finally, an improved crop phenology detection method was proposed, and the phenology information was forced into the Simple Algorithm for Yield Estimation (SAFY) model to estimate crop biomass and yield. Compared with the SAFY model without the remotely sensed phenology and with a simple light use efficiency (LUE) model, the SAFY model incorporating the remotely sensed phenology improves the accuracy of biomass estimation by about 4% in relative Root Mean Square Error (RRMSE). The studies in this thesis improve the ability to monitor crop growth status and production at the subfield scale.
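    The thesis develops its own fusion method; as a generic, much-simplified sketch of the idea behind such spatio-temporal fusion (not the author's algorithm), a fine-resolution NDVI image at the prediction date can be formed from an earlier fine image plus the coarse-resolution change observed between the two dates:

```python
import numpy as np

# Simplified, STARFM-like fusion sketch: predict fine-resolution NDVI
# at t2 from a fine image at t1 plus the coarse change from t1 to t2.
# Grids and values are synthetic; this is not the thesis's method.

def upsample(coarse, factor):
    """Nearest-neighbour upsampling of a coarse grid to the fine grid."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

factor = 8                           # e.g. ~30 m Landsat vs ~240 m MODIS
fine_t1 = np.random.rand(80, 80)     # Landsat-like NDVI at t1
coarse_t1 = fine_t1.reshape(10, factor, 10, factor).mean(axis=(1, 3))
coarse_t2 = coarse_t1 + 0.05         # MODIS-like NDVI observed at t2

# Fine prediction at t2: fine image at t1 plus upsampled coarse change
fine_t2 = fine_t1 + upsample(coarse_t2 - coarse_t1, factor)
print(fine_t2.shape)                 # (80, 80)
```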

    Large Area Land Cover Mapping Using Deep Neural Networks and Landsat Time-Series Observations

    This dissertation focuses on the analysis and implementation of deep learning methodologies in remote sensing to enhance land cover classification accuracy, which has important applications in many areas of environmental planning and natural resources management. The first manuscript conducted a land cover analysis on 26 Landsat scenes in the United States considering six classifier variants. An extensive grid search was conducted to optimize classifier parameters using only the spectral components of each pixel. Results showed no gain from deep networks over conventional classifiers when only spectral components were used, possibly due to the small reference sample size and richness of features. The effects of changing training data size, class distribution, and scene heterogeneity were also studied, and all were found to have a significant effect on classifier accuracy. The second manuscript reviewed 103 research papers on the application of deep learning methodologies in remote sensing, with emphasis on per-pixel classification of mono-temporal data utilizing the spectral and spatial data dimensions. A meta-analysis quantified the improvement of deep network architectures over selected conventional classifiers. The effects of network size, learning methodology, input data dimensionality and training data size were also studied, with deep models providing enhanced performance over conventional ones when using spectral and spatial data. The analysis found that input datasets were a major limitation and that available datasets have already been utilized to their maximum capacity. The third manuscript described the steps to build the full environment for dataset generation from Landsat time-series data using the spectral, spatial, and temporal information available for each pixel. A large dataset containing one sample block from each of 84 ecoregions in the conterminous United States (CONUS) was created and then processed by a hybrid convolutional + recurrent deep network whose structure was optimized with thousands of simulations. The developed model achieved an overall accuracy of 98% on the test dataset. The model was also evaluated for its overall and per-class performance under different conditions, including individual blocks, individual or combined Landsat sensors, and different sequence lengths. The analysis found that although the deep model's performance on each block is superior to the other candidates, per-block performance still varies considerably from block to block, suggesting that the work be extended by fine-tuning the model for local areas. The analysis also found that including more time stamps, or combining observations from different Landsat sensors in the model input, significantly enhances model performance.
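    A minimal sketch of a hybrid convolutional + recurrent per-pixel classifier in the spirit described above is given below, in PyTorch; the layer sizes, band count and class count are illustrative, not the dissertation's tuned network.

```python
import torch
import torch.nn as nn

class ConvRecurrentClassifier(nn.Module):
    """Per-pixel Landsat time-series classifier: Conv1d over the
    temporal axis, then an LSTM, then a linear class head."""

    def __init__(self, n_bands=7, n_classes=10, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, bands)
        z = self.conv(x.transpose(1, 2))      # -> (batch, 32, time)
        out, _ = self.rnn(z.transpose(1, 2))  # -> (batch, time, hidden)
        return self.head(out[:, -1])          # logits from last step

model = ConvRecurrentClassifier()
dummy = torch.randn(4, 24, 7)  # 4 pixels, 24 acquisition dates, 7 bands
print(model(dummy).shape)      # torch.Size([4, 10])
```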

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application of remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping methods has become further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet they were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units for landcover mapping tasks, and moving to challenges and solutions concerning three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.
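    For the first of the three aspects, sparsity and imbalance of data, one standard mitigation is inverse-frequency class weighting; the sketch below (with synthetic labels) computes such weights with scikit-learn, which most classifiers can consume through their loss functions.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Inverse-frequency ("balanced") class weights for an imbalanced
# landcover training set. Labels are synthetic placeholders.
y = np.array([0] * 900 + [1] * 80 + [2] * 20)

classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced",
                               classes=classes, y=y)
print(dict(zip(classes.tolist(), np.round(weights, 2).tolist())))
# Rare classes receive proportionally larger weights, counteracting a
# classifier's tendency to ignore them during training.
```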
