10 research outputs found

    Exploiting hyperspectral and multispectral images in the detection of tree species: A review

    Get PDF
    Classification of tree species provides important data for forest monitoring, sustainable forest management and planning. Recent developments in multispectral (MS) and hyperspectral (HS) imaging sensors in remote sensing have made the detection of tree species easier and more accurate. This systematic review aims to clarify the contribution of MS and HS imaging data to the detection of tree species, while highlighting recent advances in the field and emphasizing important directions and new possibilities for future inquiry. The review informs researchers and decision makers on two subjects: the first is the processing steps involved in exploiting MS and HS images, and the second is the advantages of exploiting MS and HS images in the application area of detecting tree species. In this way, the exploitation of satellite data will be facilitated, which will also provide an economic gain when using commercial MS and HS imaging data. Moreover, it should be kept in mind that, as the spectral signatures obtained from each tree type differ, both the processing method and the classification method will change accordingly. In this review, studies were grouped according to the data exploited (HS images only, MS images only, and their combinations), the type of tree monitored and the processing method used. The contribution of the image data used in each study was then evaluated according to the classification accuracy, the suitable type of tree and the classification method.
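As a rough illustration of the pixel-wise spectral classification the reviewed studies perform, the sketch below separates two synthetic tree-species signatures with a nearest-centroid rule; the band count, the signatures and the classifier choice are all illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 50  # hyperspectral-like band count (assumed)

# Synthetic mean reflectance signatures for two hypothetical species.
spruce = np.linspace(0.05, 0.45, n_bands)
beech = np.linspace(0.10, 0.60, n_bands)

# Training pixels: noisy samples around each signature.
train = np.vstack([spruce + rng.normal(0, 0.02, (20, n_bands)),
                   beech + rng.normal(0, 0.02, (20, n_bands))])
labels = np.array([0] * 20 + [1] * 20)

# Nearest-centroid classifier: a simple stand-in for the more powerful
# classifiers (e.g. RF, SVM) common in the reviewed literature.
centroids = np.vstack([train[labels == c].mean(axis=0) for c in (0, 1)])

def classify(pixels):
    # Assign each pixel to the class with the closest mean spectrum.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

test_pixels = np.vstack([spruce, beech]) + rng.normal(0, 0.02, (2, n_bands))
pred = classify(test_pixels)
print(pred)
```

Real pipelines replace the synthetic signatures with atmospherically corrected image spectra and labelled field plots, but the structure (training spectra, per-class model, per-pixel assignment) is the same.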

    Reviews and syntheses: Remotely sensed optical time series for monitoring vegetation productivity

    Get PDF
    Vegetation productivity is a critical indicator of global ecosystem health and is impacted by human activities and climate change. A wide range of optical sensing platforms, from ground-based to airborne and satellite, provide spatially continuous information on terrestrial vegetation status and functioning. As optical Earth observation (EO) data are usually routinely acquired, vegetation can be monitored repeatedly over time, reflecting seasonal vegetation patterns and trends in vegetation productivity metrics. Such metrics include, e.g., gross primary productivity, net primary productivity, biomass or yield. To summarize current knowledge, in this paper we systematically reviewed time series (TS) literature for assessing state-of-the-art vegetation productivity monitoring approaches for different ecosystems based on optical remote sensing (RS) data. As the integration of solar-induced fluorescence (SIF) data in vegetation productivity processing chains has emerged as a promising source, we also include this relatively recent sensor modality. We define three methodological categories to derive productivity metrics from remotely sensed TS of vegetation indices or quantitative traits: (i) trend analysis and anomaly detection, (ii) land surface phenology, and (iii) integration and assimilation of TS-derived metrics into statistical and process-based dynamic vegetation models (DVMs). Although the majority of used TS data streams originate from data acquired from satellite platforms, TS data from aircraft and unoccupied aerial vehicles have found their way into productivity monitoring studies. To facilitate processing, we provide a list of common toolboxes for inferring productivity metrics and information from TS data.
We further discuss validation strategies for the RS-data-derived productivity metrics: (1) using in situ measured data, such as yield; (2) sensor networks of distinct sensors, including spectroradiometers, flux towers, or phenological cameras; and (3) inter-comparison of different productivity products or modelled estimates. Finally, we address current challenges and propose a conceptual framework for productivity metrics derivation, including fully integrated DVMs and radiative transfer models, here labelled as "Digital Twin". This novel framework meets the requirements of multiple ecosystems and enables both an improved understanding of vegetation temporal dynamics in response to climate and environmental drivers and an enhanced accuracy of vegetation productivity monitoring.
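The first methodological category, trend analysis and anomaly detection, can be sketched on a synthetic annual NDVI series; the values, the linear trend model and the z-score threshold below are invented for illustration only.

```python
import numpy as np

years = np.arange(2000, 2020)
rng = np.random.default_rng(1)
# Synthetic annual NDVI: slight greening trend plus observation noise.
ndvi = 0.55 + 0.004 * (years - 2000) + rng.normal(0, 0.01, years.size)
ndvi[15] -= 0.12  # inject a drought-like productivity drop in 2015

# Trend: ordinary least-squares slope of annual NDVI against time.
slope, intercept = np.polyfit(years, ndvi, 1)

# Anomalies: standardized residuals from the fitted trend line.
residuals = ndvi - (slope * years + intercept)
z = (residuals - residuals.mean()) / residuals.std()
anomaly_years = years[np.abs(z) > 2.5]

print(round(slope, 4), anomaly_years)
```

Operational approaches work the same way in principle, but on dense satellite TS with seasonality removed (e.g. per-pixel harmonic fits) before trends and anomalies are estimated.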
    Multi-sensor spectral synergies for crop stress detection and monitoring in the optical domain: A review

    Get PDF
    Remote detection and monitoring of vegetation responses to stress have become relevant for sustainable agriculture. Ongoing developments in optical remote sensing technologies have provided tools to increase our understanding of stress-related physiological processes. Therefore, this study aimed to provide an overview of the main spectral technologies and retrieval approaches for detecting crop stress in agriculture. Firstly, we present integrated views on: i) biotic and abiotic stress factors, the phases of stress, and the respective plant responses, and ii) the affected traits, appropriate spectral domains and corresponding methods for measuring traits remotely. Secondly, representative results of a systematic literature analysis are highlighted, identifying the current status and possible future trends in stress detection and monitoring. Distinct plant responses occurring under short-term, medium-term or severe chronic stress exposure can be captured with remote sensing due to specific light interaction processes, such as absorption and scattering manifested in the reflected radiance, i.e. visible (VIS), near-infrared (NIR) and shortwave infrared, and in the emitted radiance, i.e. solar-induced fluorescence and thermal infrared (TIR). From the analysis of 96 research papers, the following trends can be observed: increasing usage of satellite and unmanned aerial vehicle data, in parallel with a shift in methods from simpler parametric approaches towards more advanced physically based and hybrid models. Most study designs were largely driven by sensor availability and practical economic reasons, leading to the common usage of VIS-NIR-TIR sensor combinations. The majority of reviewed studies compared stress proxies calculated from single-source sensor domains rather than using data in a synergistic way.
We identified new ways forward as guidance for improved synergistic usage of spectral domains for stress detection: (1) combined acquisition of data from multiple sensors for analysing multiple stress responses simultaneously (holistic view); (2) simultaneous retrieval of plant traits combining multi-domain radiative transfer models and machine learning methods; (3) assimilation of estimated plant traits from distinct spectral domains into integrated crop growth models. As a future outlook, we recommend combining multiple remote sensing data streams into crop model assimilation schemes to build up Digital Twins of agroecosystems, which may provide the most efficient way to detect the diversity of environmental and biotic stresses and thus enable the respective management decisions.
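Point (1), combining reflected and emitted observations, can be illustrated with a toy multi-domain stress flag; the reflectances, temperatures and thresholds below are assumed values, and the two indices are generic textbook quantities rather than a method from the review.

```python
import numpy as np

red = np.array([0.08, 0.10, 0.20])       # red reflectance per pixel (assumed)
nir = np.array([0.45, 0.40, 0.25])       # near-infrared reflectance (assumed)
t_canopy = np.array([24.0, 26.5, 31.0])  # canopy temperature, deg C (assumed)
t_air = 25.0                             # air temperature, deg C (assumed)

# Structural/greenness proxy from the reflected (VIS-NIR) domain.
ndvi = (nir - red) / (nir + red)

# Water-stress proxy from the emitted (TIR) domain: warmer canopies
# transpire less, so a positive canopy-air difference suggests stress.
dt = t_canopy - t_air

# Flag pixels where both domains agree (thresholds purely illustrative).
stressed = (ndvi < 0.4) & (dt > 2.0)
print(ndvi.round(2), stressed)
```

The point of the synergy is visible even in this toy case: neither a low NDVI nor a warm canopy alone is conclusive, but their coincidence is a much stronger stress indicator.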

    Improved classification of remote sensing imagery using image fusion techniques

    No full text
    Remote sensing is a quick and inexpensive way of gathering information about the Earth. It makes it possible to obtain constantly updated information from satellite images for real-time local and global mapping of environmental changes. Current classification methods used for extracting relevant knowledge from this huge information pool are not very efficient because of the limited training samples and the high dimensionality of the images. Information fusion is often used to improve classification accuracy before or after classification is performed. However, these techniques cannot always overcome the aforementioned issues. Therefore, in this thesis, new methods are introduced to increase the classification accuracy of remotely sensed data by means of information fusion techniques. The thesis is structured in three parts. In the first part, a novel pixel-based image fusion technique is introduced to fuse optical and SAR image data in order to increase classification accuracy. Fused images obtained via conventional fusion methods may not contain enough information for subsequent processing such as classification or feature extraction. The proposed method aims to keep the maximum contextual and spatial information from the source data by exploiting the relationship between spatial-domain cumulants and wavelet-domain cumulants. The novelty of the method consists in integrating the relationship between spatial- and wavelet-domain cumulants of the source images into an image fusion process, as well as in employing these wavelet cumulants to optimise the weights in a Cauchy-convolution-based image fusion scheme. In the second part, a novel feature-based image fusion method is proposed in order to increase the classification accuracy of hyperspectral images.
An application of Empirical Mode Decomposition (EMD) to wavelet-based dimensionality reduction is presented, with the aim of generating the smallest set of features that leads to better classification accuracy compared to single techniques. Useful spectral information for hyperspectral image classification can be obtained by applying the Wavelet Transform (WT) to each hyperspectral signature. As EMD has the ability to describe short-term spatial changes in frequencies, it helps to achieve a better understanding of the spatial information of the signal. In order to take advantage of both spectral and spatial information, a novel dimensionality reduction method is introduced which relies on using the wavelet transform of EMD features. This leads to better class separability and hence to better classification.
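A minimal sketch of the wavelet side of this feature extraction, assuming a single-level orthonormal Haar transform applied to a synthetic reflectance signature (the thesis does not commit to a particular wavelet here, and the EMD step is omitted):

```python
import numpy as np

signature = np.linspace(0.1, 0.6, 64)  # synthetic hyperspectral signature

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass: smoothed spectrum
    detail = (even - odd) / np.sqrt(2)   # high-pass: local band-to-band variation
    return approx, detail

# Keeping only the approximation coefficients halves the feature dimension
# while preserving the broad shape of the spectrum.
approx, detail = haar_dwt(signature)
print(approx.size)
```

Because the transform is orthonormal, no energy is lost in the split: the approximation and detail coefficients together carry exactly the energy of the original signature, which is what makes discarding near-zero detail coefficients a principled reduction.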

    Land use/land cover mapping from airborne hyperspectral images with machine learning algorithms and contextual information

    No full text
    Land use and land cover (LULC) mapping is one of the most important application areas of remote sensing, and it requires both high spectral and high spatial resolution in order to decrease the spectral ambiguity of different land cover types. Airborne hyperspectral images are among the data that suit such applications well because of their high number of spectral bands and their ability to resolve small details in the field. As this technology is newly developed, most image processing methods were designed for medium-resolution sensors and are not capable of dealing with high-resolution images. Therefore, in this study a new framework is proposed to improve the classification accuracy of land use/cover mapping applications and to achieve greater reliability in producing land use maps from high-resolution hyperspectral image data. To achieve this, spatial information is incorporated together with spectral information by applying feature extraction methods such as the Grey Level Co-occurrence Matrix (GLCM), Gabor filters and Morphological Attribute Profiles (MAP) to the dimensionally reduced image with the highest accuracy. Then, machine learning algorithms such as Random Forest (RF) and Support Vector Machine (SVM) are used to investigate the contribution of texture information to the classification of high-resolution hyperspectral images. In addition, further analysis is conducted with object-based RF classification to investigate the contribution of contextual information. Finally, overall accuracy, producer's/user's accuracy, quantity- and allocation-based disagreements and location- and quantity-based kappa agreements are calculated, together with McNemar tests, for the accuracy assessment. According to our results, the proposed framework, which incorporates Gabor texture information and a Discrete Wavelet Transform (DWT) based dimensionality reduction method, increases the overall classification accuracy by up to 9%.
Among the individual classes, Gabor features boosted the producer's accuracies of the classes (soil, road, vegetation, building and shadow) by 7%, 6%, 6%, 8%, 9%, and 24% respectively. Besides, increases of 17% and 10% in user's accuracy were obtained with the MAP (area) feature when classifying the road and shadow classes respectively. Moreover, when object-based classification is conducted, the overall accuracy of the pixel-based classification is increased further by 1.07%. An increase of between 2% and 4% in producer's accuracy is achieved for the soil, vegetation and building classes, and an increase of between 1% and 3% in user's accuracy for the soil, road, vegetation and shadow classes. In the end, an accurate LULC map is produced by object-based RF classification of the Gabor-feature-augmented airborne hyperspectral image, dimensionally reduced with the DWT method.
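A toy version of the GLCM texture features used in the framework can be computed directly; the 4x4 image, the single horizontal offset and the contrast statistic below are illustrative choices, not the study's actual window sizes, offsets or feature set.

```python
import numpy as np

# Tiny grey-level image quantised to 4 levels (values are indices 0-3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
levels = 4

# Grey-level co-occurrence counts for horizontally adjacent pixel pairs.
glcm = np.zeros((levels, levels))
for row in img:
    for a, b in zip(row[:-1], row[1:]):
        glcm[a, b] += 1
glcm /= glcm.sum()  # normalise counts to joint probabilities

# Contrast: weights each co-occurrence by its squared grey-level difference,
# so homogeneous regions score low and busy textures score high.
i, j = np.indices((levels, levels))
contrast = (glcm * (i - j) ** 2).sum()
print(round(float(contrast), 3))
```

In practice such statistics are computed in a sliding window per pixel and appended to the spectral bands as extra features before classification, which is the role GLCM texture plays in the framework above.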
    Investigating persistent scatterer InSAR (PSInSAR) technique efficiency for landslides mapping: a case study in Artvin dam area, in Turkey

    No full text
    Monitoring and determining landslides in dam reservoirs is crucial, as they are one of the main causes of dam failures in the world. The Coruh river basin is one of the most important river basins in the northeastern part of Turkey and hosts five big dams. Although the persistent scatterer InSAR (PSInSAR) method is a powerful remote sensing technique that can measure and monitor displacements of the Earth's surface over time, its validation is a challenging issue because of the heterogeneous PS data. In this study, the efficiency of PSInSAR is investigated by proposing two different validation methods in order to assess the consistency of the mean deformation velocities obtained from a series of Sentinel-1A SAR images. In the first method, 3D coordinates of reference points are projected to 1D displacement values in the line-of-sight direction and then compared with the radar displacements of PS points. In the second method, new displacement values of PS points around the reference points are derived from an interpolation map and compared with the original displacements of the reference points. In the end, it is shown that the displacements found by the PSInSAR method are consistent with the reference-point displacements measured in the study area. Finally, this work's specific objectives are to present solutions to the challenging validation problem, to show the effectiveness of the PSInSAR method and to describe the remaining challenges in PS analysis of landslide applications in dam areas.
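The first validation method, projecting a reference point's 3D displacement onto the radar line of sight (LOS), can be sketched as below; the incidence angle, heading and displacement vector are invented, and the sign convention of the LOS unit vector is one common choice among several (it varies with look direction and orbit pass).

```python
import numpy as np

inc = np.deg2rad(39.0)       # radar incidence angle (assumed)
heading = np.deg2rad(-10.0)  # satellite heading from north (assumed)

# Unit LOS vector in (east, north, up) coordinates for a right-looking SAR;
# this particular sign convention is an illustrative assumption.
los = np.array([-np.sin(inc) * np.cos(heading),
                np.sin(inc) * np.sin(heading),
                np.cos(inc)])

# 3D displacement of a reference point in metres (east, north, up), assumed.
d_enu = np.array([0.010, -0.004, -0.030])

# 1D LOS displacement, directly comparable with the PS radar measurement.
d_los = d_enu @ los
print(round(d_los * 1000, 2), "mm")
```

The comparison in the paper then amounts to differencing such projected values against the PSInSAR-derived displacements of nearby PS points.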

    A novel decision fusion approach to improving classification accuracy of hyperspectral images

    No full text
