
    Temporal optimisation of image acquisition for land cover classification with random forest and MODIS time-series

    The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve classification accuracy, but constraints such as financial restrictions or atmospheric conditions may impede its application. Optimising the timing and frequency of image acquisition can increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and a Grassland-specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland, based on a 9-year time series of MODIS Terra 16-day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, December and January showed the highest, and July and August the lowest, separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI in a mono-temporal analysis. With the addition of the next-best image periods to the data input, the classification accuracies converged quickly to their limit at around 8–10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency with a higher intra-annual but lower inter-annual variation. Nonetheless, anomalous weather conditions, such as the cold winter of 2009/2010, can alter the temporal separability pattern significantly. Due to the extensive use of the NDVI for land cover discrimination, the findings of this study should be transferable to data from other optical sensors with a higher spatial resolution. However, the strong impact of outliers from the general climatic pattern highlights the limitation of spatial transferability to locations with different climatic and land cover conditions. The use of high-temporal, moderate-resolution data such as MODIS in conjunction with machine learning techniques proved to be a good basis for predicting image acquisition timing for optimal land cover classification results.
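
    A minimal sketch of the acquisition-period ranking described above, using scikit-learn's Random Forest Feature Importance; the array shapes, class count, and random placeholder data are illustrative assumptions, not the study's actual MOD13Q1 inputs.

        # Rank 16-day acquisition periods by Random Forest Feature Importance.
        # X holds one NDVI (or EVI) column per composite period; y holds the
        # land cover labels. Placeholder data stands in for the MODIS samples.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_pixels, n_periods = 1000, 23            # 23 MOD13Q1 composites per year
        X = rng.random((n_pixels, n_periods))     # placeholder NDVI time series
        y = rng.integers(0, 5, n_pixels)          # 5 general land cover classes

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        rf.fit(X, y)

        # Higher importance suggests better class separability for that period.
        ranked = np.argsort(rf.feature_importances_)[::-1]
        print("Acquisition periods ranked by Feature Importance:", ranked[:10])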

    Investigation of Coastal Vegetation Dynamics and Persistence in Response to Hydrologic and Climatic Events Using Remote Sensing

    Coastal Wetlands (CW) provide numerous vital functions and an economic base for human societies. It is therefore imperative to track and quantify both short- and long-term changes in these systems. In this dissertation, CW dynamics related to hydro-meteorological signals were investigated using a series of Landsat-derived normalized difference vegetation index (NDVI) data and hydro-meteorological time-series data in Apalachicola Bay, Florida, from 1984 to 2015. NDVI in forested wetlands exhibited more persistence than in scrub and emergent wetlands. NDVI fluctuations generally lagged temperature by approximately three months and water level by approximately two months. This analysis provided insight into long-term CW dynamics in the Northern Gulf of Mexico. Long-term studies like this depend on optical remote sensing data such as Landsat, which are frequently partially obscured by clouds; this makes the time series sparse and unusable during meteorologically active seasons. Therefore, a multi-sensor, virtual constellation method is proposed and demonstrated to recover the information lost to cloud cover. This method, named Tri-Sensor Fusion (TSF), produces a simulated constellation for NDVI by integrating data from three compatible satellite sensors. The visible and near-infrared (VNIR) bands of Landsat-8 (L8), Sentinel-2, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were utilized to map NDVI and to compensate for each satellite sensor's shortcomings in visible coverage area. The quantitative comparison showed a Root Mean Squared Error (RMSE) and Coefficient of Determination (R2) of 0.0020 sr-1 and 0.88, respectively, between true observed and fused L8 NDVI. Statistical test results and qualitative performance evaluation suggest that TSF was able to synthesize the missing pixels accurately in terms of the absolute magnitude of NDVI. The fusion improved the spatial coverage of CWs reasonably well and ultimately increased the continuity of NDVI data for long-term studies.
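
    A minimal sketch of the NDVI computation and the RMSE/R2 comparison between observed and fused NDVI reported above; the synthetic reflectance arrays and function names are illustrative assumptions, not the TSF implementation.

        # Compute NDVI from red/NIR reflectance and score a fused product
        # against observed Landsat-8 NDVI with RMSE and R^2.
        import numpy as np

        def ndvi(nir, red):
            total = nir + red
            return (nir - red) / np.where(total == 0, np.nan, total)

        def rmse_r2(observed, fused):
            mask = ~np.isnan(observed) & ~np.isnan(fused)
            o, f = observed[mask], fused[mask]
            rmse = np.sqrt(np.mean((o - f) ** 2))
            r2 = 1.0 - np.sum((o - f) ** 2) / np.sum((o - o.mean()) ** 2)
            return rmse, r2

        rng = np.random.default_rng(1)
        red, nir = rng.random((2, 100, 100))                     # stand-in reflectance grids
        observed = ndvi(nir, red)                                # "true" L8 NDVI
        fused = observed + rng.normal(0, 0.01, observed.shape)   # stand-in fused NDVI
        print(rmse_r2(observed, fused))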

    A manifold learning approach to target detection in high-resolution hyperspectral imagery

    Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content of a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying targets such as vehicles, buildings, or boats. Sensors that collect hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, a graph theory based approach to anomaly detection. This led to a focus on target detection and to the development of a specific graph-based model of the data, with subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Target detection is then performed in the manifold space.
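
    A minimal sketch of the d-to-m reduction that the adaptive LLE builds on, using scikit-learn's standard locally linear embedding; the adaptive graph construction and artificial target manifold described above are not reproduced, and the band count and neighbour settings are illustrative assumptions.

        # Embed d-dimensional hyperspectral pixels into m manifold coordinates
        # with standard LLE; detection would then operate in the embedded space.
        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        rng = np.random.default_rng(0)
        n_pixels, d, m = 2000, 200, 10          # e.g. 200 bands reduced to 10 dims
        spectra = rng.random((n_pixels, d))     # placeholder hyperspectral pixels

        lle = LocallyLinearEmbedding(n_neighbors=15, n_components=m)
        coords = lle.fit_transform(spectra)     # lower-dimensional coordinates
        print(coords.shape)                     # (2000, 10)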

    Thermal infrared work at ITC: a personal, historic perspective of transitions


    Deep Learning for Remote Sensing Image Processing

    Remote sensing images have many applications, such as ground object detection, environmental change monitoring, urban growth monitoring, and natural disaster damage assessment. As of 2019, there were roughly 700 satellites listing “earth observation” as their primary application. Both the spatial and temporal resolutions of satellite images have improved consistently in recent years, providing opportunities to resolve fine details on the Earth's surface. In the past decade, deep learning techniques have revolutionized many applications in the field of computer vision but have not been fully explored in remote sensing image processing. In this dissertation, several state-of-the-art deep learning models are investigated and customized for satellite image processing in the applications of land cover classification and ground object detection. First, a simple and effective Convolutional Neural Network (CNN) model is developed to detect fresh soil from tunnel digging activities near the U.S.-Mexico border using pansharpened synthetic hyperspectral images. These tunnels' exits are usually hidden under warehouses and are used for illegal activities, for example by drug dealers; detecting fresh soil nearby is an indirect way to search for the tunnels. While multispectral images have been used widely and regularly in remote sensing since the 1970s, hyperspectral imagery is becoming popular with the fast advances in hyperspectral sensors. A combination of 80 synthetic hyperspectral channels with the original eight multispectral channels collected by the WorldView-2 satellite is used by the CNN to detect fresh soil. Experimental results show that detection performance is significantly improved by combining the synthetic hyperspectral images with the original multispectral channels. Second, an end-to-end, pixel-level Fully Convolutional Network (FCN) model is implemented to estimate the number of refugee tents in the Rukban area near the Syrian-Jordanian border using high-resolution multispectral satellite images collected by WorldView-2. Rukban is a desert area straddling the border between Syria and Jordan, and thousands of Syrian refugees have fled into this area since 2014 during the Syrian civil war. In the past few years, the number of refugee shelters for the forcibly displaced Syrian refugees in this area has increased rapidly. Estimating the location and number of refugee tents has become a key factor in maintaining the sustainability of the refugee shelter camps. Manually counting the shelters is labor-intensive and sometimes prohibitive given the large quantities. In addition, these shelters/tents are usually small in size, irregular in shape, and sparsely distributed over a very large area, so they can easily be missed by traditional image-analysis techniques, making image-based approaches challenging as well. The FCN model is boosted by transfer learning using the knowledge in the pre-trained VGG-16 model. Experimental results show that the FCN model is very accurate, with an error of less than 2%. Last, Generative Adversarial Networks (GAN) are investigated to augment the training data and improve the training of the FCN model for refugee tent detection. Segmentation-based methods like FCN require a large amount of finely labeled images for training; in practice, this is labor-intensive, time-consuming, and tedious, and the data-hungry problem is currently a big hurdle for this application. Experimental results show that the GAN model is a better tool for data augmentation than traditional methods. Overall, our research makes a significant contribution to remote sensing image processing.
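
    A minimal sketch of an FCN-style segmentation head on a pre-trained VGG-16 backbone in PyTorch, illustrating the transfer-learning setup described above; the layer sizes, two-class output, and module names are assumptions, not the dissertation's actual model.

        # Per-pixel classifier: VGG-16 features (ImageNet weights) followed by a
        # small convolutional head, upsampled back to the input resolution.
        import torch
        import torch.nn as nn
        from torchvision.models import vgg16

        class TentFCN(nn.Module):
            def __init__(self, n_classes=2):
                super().__init__()
                # Downloads ImageNet weights; this is the transfer-learning step.
                self.backbone = vgg16(weights="IMAGENET1K_V1").features
                self.head = nn.Sequential(
                    nn.Conv2d(512, 256, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(256, n_classes, kernel_size=1),   # per-pixel scores
                )

            def forward(self, x):
                h = self.head(self.backbone(x))                 # ~1/32 resolution
                return nn.functional.interpolate(
                    h, size=x.shape[2:], mode="bilinear", align_corners=False)

        model = TentFCN()
        scores = model(torch.rand(1, 3, 256, 256))              # (1, 2, 256, 256)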